Dataset Viewer
Auto-converted to Parquet
title
stringlengths
15
188
abstract
stringlengths
400
1.8k
introduction
stringlengths
9
10.5k
content
stringlengths
778
41.9k
abstract_len
int64
400
1.8k
intro_len
int64
9
10.5k
abs_len
int64
400
1.8k
Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection
An important task for designing QA systems is answer sentence selection (AS2): selecting the sentence containing (or constituting) the answer to a question from a set of retrieved relevant documents. In this paper, we propose three novel sentence-level transformer pre-training objectives that incorporate paragraph-level semantics within and across documents, to improve the performance of transformers for AS2, and mitigate the requirement of large labeled datasets. Specifically, the model is tasked to predict whether: (i) two sentences are extracted from the same paragraph, (ii) a given sentence is extracted from a given paragraph, and (iii) two paragraphs are extracted from the same document. Our experiments on three public and one industrial AS2 datasets demonstrate the empirical superiority of our pre-trained transformers over baseline models such as RoBERTa and ELECTRA for AS2.
Question Answering (QA) finds itself at the core of several commercial applications, for e.g., virtual assistants such as Google Home, Alexa and Siri. Answer Sentence Selection (AS2) is an important task for QA Systems operating on unstructured text such as web documents. When presented with a set of relevant documents for a question (retrieved from a web index), AS2 aims to find the best answer sentence for the question. The recent popularity of pre-trained transformers AS2 is a knowledge-intensive complex reasoning task, where the answer candidates for a question can stem from multiple documents, possibly on different topics linked to concepts in the question. While there have been recent works Furthermore, obtaining high quality human labeled examples for AS2 is expensive and time consuming, due to the large number of answer candidates to be annotated for each question. Domainspecific AS2 datasets such as WikiQA Towards improving the downstream performance of pre-trained transformers for AS2 and mitigating the requirement of large scale labeled data for fine-tuning, we propose three novel sentencelevel transformer pre-training objectives, which can incorporate paragraph-level semantics across multiple documents. Analogous to the sentence-pair nature of AS2, we design our pre-training objectives to operate over a pair of input text sequences. The model is tasked with predicting: (i) whether the sequences are two sentences extracted from the same paragraph, (ii) whether the first sequence is a sentence that is extracted from the second sequence (paragraph), and (iii) whether the sequences are two paragraphs belonging to the same document. We evaluate our paragraph-aware pre-trained transformers for AS2 on three popular public datasets: ASNQ, WikiQA and TREC-QA; and one industrial QA benchmark 1 . Results show that our pre-training can improve the performance of finetuning baseline transformers such as RoBERTa and ELECTRA on AS2 by ∼3-4% points without requiring any additional data (labeled/unlabeled).
Answer Sentence Selection (AS2) Earlier approaches for AS2 used CNNs Paragraph/Document-level Semantics Transformers for Long Inputs Longformer In this section we formally define the task of AS2. Given a question q and a set of answer candidates A={a 1 , . . ., a n }, the objective is to select the candidate ā ∈ A that best answers q. AS2 can be modeled as a ranking task over A to learn a scoring function f : Q×A → R that predicts the probability f (q, a) of an answer candidate a being correct. The best answer ā corresponds to argmax n i=1 f (q, a i ). Pre-trained transformers are used as QA pair encoders for AS2 to approximate the function f . Spans in Same Paragraph (SSP) Given two sequences (A, B) as input to the transformer, the objective is to predict if A and B belong to the same paragraph in a document. To create positive pairs (A, B), given a document D, we extract two small, contiguous and disjoint subsets of sentences to be used as A and B from a single paragraph P i ∈ D. To create negative pairs, we sample spans of sentences B ′ from different paragraphs P j , j ̸ = i in the same document D (hard negatives) and also from different documents (easy negatives). The negative pairs correspond to (A, B ′ ). Posing the above pre-training objective in terms of spans (instead of sentences) allows for modifying the lengths of the inputs A, B (by changing number of sentences ∈A, B). When fine-tuning transformers for AS2, typically the question is provided as the first input and a longer answer candidate/paragraph is provided as the second input. For our experiments (Sec 5), we use a longer span for input B than A. Span in Paragraph (SP) Given two sequences (A, B) as input to the transformer, the objective is to predict if A is a span of text extracted from a paragraph B in a document. To create positive pairs (A, B), given a paragraph P i in a document D, we extract a small contiguous span of sentences A from it and create the input pair as (A, P i \ A). To create negative pairs, we select other paragraphs P j , j ̸ = i in the same document D and remove a randomly chosen span A ′ from each of them. The negative pairs correspond to (A, P j \ A ′ ). This is necessary to ensure that the model does not simply recognize whether the second input is a complete paragraph or a clipped version. To create easy negatives, we use the above approach for paragraphs P j sampled from documents other than D. Paragraphs in Same Document (PSD) Given two sequences (A, B) as input to the transformer, the objective is to predict if A and B are paragraphs belonging to the same document. To create positive pairs (A, B), given a document D k , we randomly select paragraphs P i , P j ∈ D k and obtain a pair (P i , P j ). To create negative pairs, we randomly select P ′ j / ∈ D k , and obtain a pair (P i , P ′ j ). Pre-training To eliminate any improvements stemming from the usage of more data, we perform pre-training on the same corpora as RoBERTa: English Wikipedia, the BookCorpus, OpenWeb-Text and CC-News. We perform continuous pretraining starting from RoBERTa AS2 Fine-tuning We consider three public and one industrial AS2 benchmark as fine-tuning datasets for AS2 (statistics presented in Appendix A). We use standard evaluation metrics for AS2: Mean Average Precision (MAP), Mean Reciprocal Recall (MRR) and Precision@1 (P@1). • ASNQ is a large-scale AS2 dataset (Garg et al., 2020) with questions from Google search engine queries, and answer candidates extracted from a Wikipedia page. ASNQ is a modified version of the Natural Questions (NQ) • WikiQA is a popular AS2 dataset We remove both the (all-) and (all+) questions for our experiments (standard "clean" setting). • TREC-QA is a popular AS2 dataset We present results of our pre-trained models on the AS2 datasets in Table For questions in ASNQ and WikiQA, all candidate answers are extracted from a single Wikipedia document, while for TREC-QA and WQA, candidate answers come from multiple documents extracted from heterogeneous web sources. By design of our objectives SSP, SP and PSD, they perform differently when fine-tuning on different datasets. For example, SSP aligns well with ASNQ and Wik-iQA as they contain many negative candidates, per question, extracted from the same document as the positive (i.e, 'hard' negatives). As per our design of the SSP objective, for every positive sequence pair, we sample 2 'hard' negatives coming from the same document as the positive pair. The presence of hard negatives is of particular importance for WikiQA and ASNQ, as it forces the models to learn and contrast more subtle differences between answer candidates, which might likely be more related as they come from the same document. On the other hand, PSD is designed so as to see paragraphs from same or different documents (with no analogous concept of 'hard' negatives of SSP and SP). For this reason, PSD is better aligned for fine-tuning on datasets where candidates are extracted from multiple documents, such as WQA and TREC-QA. Comparison with TANDA For RoBERTa, our pre-trained models can surprisingly improve/achieve comparable performance to TANDA. Note that our models achieve this performance without using the latter's additional ∼20M labeled ASNQ QA pairs. This lends support to our pretraining objectives mitigating the requirement of large scale labeled data for AS2 fine-tuning. For ELECTRA, we only observe comparable performance to TANDA for WQA and TREC-QA. Ablation: MLM-only Pre-training To mitigate any improvements stemming from the specific data sampling techniques used by our objectives, we pretrain 3 models (starting from RoBERTa-Base) with the same data sampling as each of the SSP, SP and PSD models, but only using the MLM objective. We report results in Table Ablation: Pre-training Task 'Difficulty' We evaluate the pre-trained models (after convergence) on their specific tasks over the validation split of Wikipedia (to enable evaluating baselines such as BERT and ALBERT). Table The results show that our objectives are generally harder than NSP (Next Sentence Prediction by On the other hand, our pre-training objectives are "more challenging" than these previously proposed objectives due to the requirement of reasoning over multiple paragraphs and multiple documents, addressing same or different topics at the same time. In fact, Table In this paper we have presented three sentencelevel pre-training objectives for transformers to incorporate paragraph and document-level semantics. Our objectives predict whether (i) two sequences are sentences extracted from the same paragraph, (ii) first sequence is a sentence extracted from the second, and (iii) two sequences are paragraphs belonging to the same document. We evaluate our pre-trained models for the task of AS2 on four datasets. Our results show that our pre-trained models outperform the baseline transformers such as RoBERTa and ELECTRA. We only consider English language datasets for our experiments in this paper. However we hypothesize that our pre-training objectives should provide similar performance improvements when extended to other languages with limited morphology, like English. The pre-training objectives proposed in our work are designed considering Answer Sentence Selection (AS2) as the target task, and can be extended for other tasks like Natural Language Inference, Question-Question Similarity, etc. in future work. The pre-training experiments in our paper require large amounts of GPU and compute resources (multiple NVIDIA A100 GPUs running for several days) to finish the model pre-training. This makes re-training models using our pre-training approaches computationally expensive using newer data. To mitigate this, we are releasing our code and pre-trained model checkpoints at Here we present statistics and links for downloading the AS2 datasets used: ASNQ We experiment with the base architecture, which uses an hidden size of 768, 12 transformer layers, 12 attention heads and feed-forward size of 3072. We perform continued pre-training starting from the publicly released checkpoints of RoBERTa-Base The evaluation of the models is performed on four different datasets for Answer Sentence Selection. We maintain the same hyperparameters used in pre-training apart from the learning rate, the number of warmup steps and the batch size. We do early stopping on the development set if the number of non-improving validations (patience) is higher than 5. For ASNQ, we found that using a very large batch size is beneficial, providing a higher accuracy. We use a batch size of 2048 examples on ASNQ for RoBERTa models and 1024 for ELECTRA models. The peak learning rate is set to 1 * 10 -5 for all models, and the number of warmup steps to 1000. For WikiQA, TREC-QA and WQA, we select the best batch size out of {16, 32, 64} and learning rate out of {2 * 10 -6 , 5 * 10 -5 , 1 * 10 -5 , 2 * 10 -5 } using crossvalidation. We train the model for 6 epochs on ASNQ, and up to 40 epochs on WikiQA, TREC-QA, and WQA. The performance of practical AS2 systems is typically measured using Precision-at-1 P@1 (Garg and Moschitti, 2021). In addition to P@1, we also use Mean Average Precision (MAP) and Mean Reciprocal Recall (MRR) to evaluate the ranking of the set of candidates produced by the model. We used metrics from Torchmetrics Table We present some qualitative examples from the three public AS2 datasets. We highlight cases in which the baseline RoBERTa-Base model is unable to rank the correct answer in the top position, but where our model pretrained with SP is successful. The examples are provided in Table
893
2,037
893
Style Transfer Through Back-Translation
Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties. Then adversarial generation techniques are used to make the output match the desired style. We evaluate this technique on three different style transformations: sentiment, gender and political slant. Compared to two state-of-the-art style transfer modeling techniques we show improvements both in automatic evaluation of style transfer and in manual evaluation of meaning preservation and fluency.
Intelligent, situation-aware applications must produce naturalistic outputs, lexicalizing the same meaning differently, depending upon the environment. This is particularly relevant for language generation tasks such as machine translation This paper introduces a novel approach to transferring style of a sentence while better preserving its meaning. We hypothesize-relying on the study of We focus on transferring author attributes: (1) gender and (2) political slant, and (3) on sentiment modification. The second task is novel: given a sentence by an author with a particular political leaning, rephrase the sentence to preserve its meaning but to confound classifiers of political slant ( §3). The task of sentiment modification enables us to compare our approach with state-of-Figure the-art models Style transfer is evaluated using style classifiers trained on held-out data. Our back-translation style transfer model outperforms the state-of-theart baselines The main contribution of this work is a new approach to style transfer that outperforms stateof-the-art baselines in both the quality of inputoutput correspondence (meaning preservation and fluency), and the accuracy of style transfer. The secondary contribution is a new task that we propose to evaluate style transfer: transferring political slant.
Given two datasets 2 } which represent two different styles s 1 and s 2 , respectively, our task is to generate sentences of the desired style while preserving the meaning of the input sentence. Specifically, we generate samples of dataset X 1 such that they belong to style s 2 and samples of X 2 such that they belong to style s 1 . We denote the output of dataset X 1 transfered to style s 2 as X1 = {x (1) 2 , . . . , x(n) 2 } and the output of dataset X 2 transferred to style s 1 as X2 = {x Figure In this section we describe how we learn the latent content variable z using back-translation. The e → f machine translation and f → e backtranslation models are trained using a sequence-tosequence framework Formally, let θ E represent the parameters of the encoder of f → e translation system. Then z is given by: where, x f is the sentence x in language f . Specifically, x f is the output of e → f translation system when x e is given as input. Since z is derived from a non-style specific process, this Encoder is not style specific. Figure We train a convolutional neural network (CNN) classifier to accurately predict the given style. We also use it to evaluate the error in the generated samples for the desired style. We train the classifier in a supervised manner. The classifier accepts either discrete or continuous tokens as inputs. This is done such that the generator output can be used as input to the classifier. We need labeled examples to train the classifier such that each instance in the dataset X should have a label in the set s = {s 1 , s 2 }. Let θ C denote the parameters of the classifier. The objective to train the classifier is given by: To improve the accuracy of the classifier, we augment classifier's inputs with style-specific lexicons. We concatenate binary style indicators to each input word embedding in the classifier. The indicators are set to 1 if the input word is present in a style-specific lexicon; otherwise they are set to 0. Style lexicons are extracted using the log-odds ratio informative Dirichlet prior We use a bidirectional LSTM to build our decoders which generate the sequence of tokens The sequence x is conditioned on the latent code z (in our case, on the machine translation model). In this work we use a corpus translated to French by the machine translation system as the input to the encoder of the backtranslation model. The same encoder is used to encode sentences of both styles. The representation created by this encoder is given by Eq 1. Samples are generated as follows: where, x<t are the tokens generated before xt . Tokens are discrete and non-differentiable. This makes it difficult to use a classifier, as the generation process samples discrete tokens from the multinomial distribution parametrized using softmax function at each time step t. This nondifferentiability, in turn, breaks down gradient propagation from the discriminators to the generator. Instead, following where, o t is the output of the generator and τ is the temperature which decreases as the training proceeds. Let θ G denote the parameters of the generators. Then the reconstruction loss is calculated using the cross entropy function, given by: Here, the back-translation encoder E creates the latent code z by: The generative loss L gen is then given by: where L recon is given by Eq. ( We also use global attention of where h t is the current target state and hs are all source states. While generating sentences, we use the attention vector to replace unknown characters (UNK) using the copy mechanism in Much work in computational social science has shown that people's personal and demographic characteristics-either publicly observable (e.g., age, gender) or private (e.g., religion, political affiliation)-are revealed in their linguistic choices Moreover, prior work has shown that the quality of language identification and POS tagging degrades significantly on African American Vernacular English We thus focus on two tasks that have practical and social-good applications, and also accurate style classifiers. To position our method with respect to prior work, we employ a third task of sentiment transfer, which was used in two stateof-the-art approaches to style transfer Gender. In sociolinguistics, gender is known to be one of the most important social categories driving language choice We used Reddy and Knight's (2016) dataset of reviews from Yelp annotated for two genders corresponding to markers of sex. Sentiment. To compare our work with the stateof-the-art approaches of style transfer for nonparallel corpus we perform sentiment transfer, replicating the models and experimental setups of Dataset statistics. We summarize below corpora statistics for the three tasks: transferring gender, political slant, and sentiment. The dataset for sentiment modification task was used as described in In what follows, we describe our experimental settings, including baselines used, hyperparameter settings, datasets, and evaluation setups. Baseline. We compare our model against the "cross-aligned" auto-encoder Translation data. We trained an English-French neural machine translation system and a French-English back-translation system. We used data from Workshop in Statistical Machine Translation 2015 (WMT15) Hyperparameter settings. In all the experiments, the generator and the encoders are a twolayer bidirectional LSTM with an input size of 300 and the hidden dimension of 500. The generator samples a sentence of maximum length 50. All the generators use global attention vectors of size 500. The CNN classifier is trained with 100 filters of size 5, with max-pooling. The input to CNN is of size 302: the 300-dimensional word embedding plus two bits for membership of the word in our style lexicons, as described in §2.2.1. Balancing parameter λ c is set to 15. For sentiment task, we have used settings provided in We evaluate our approach along three dimensions. (1) Style transfer accuracy, measuring the proportion of our models' outputs that generate sentences of the desired style. The style transfer accuracy is performed using classifiers trained on held-out train data that were not used in training the style transfer models. (2) Preservation of meaning. (3) Fluency, measuring the readability and the naturalness of the generated sentences. We conducted human evaluations for the latter two. In what follows, we first present the quality of our neural machine translation systems, then we present the evaluation setups, and then present the results of our experiments. Translation quality. The BLEU scores achieved for English-French MT system is 32.52 and for French-English MT system is 31.11; these are strong translation systems. We deliberately chose a European language close to English for which massive amounts of parallel data are available and translation quality is high, to concentrate on the style generation, rather than improving a translation system. We measure the accuracy of style transfer for the generated sentences using a pre-trained style classifier ( §2.2.1). The classifier is trained on data that is not used for training our style transfer generative models (as described in §3). The classifier has an accuracy of 82% for the gender-annotated corpus, 92% accuracy for the political slant dataset and 93.23% accuracy for the sentiment dataset. We transfer the style of test sentences and then test the classification accuracy of the generated sentences for the opposite label. For example, if we want to transfer the style of male Yelp reviews to female, then we use the fixed common encoder of the back-translation model to encode the test male sentences and then we use the female generative model to generate the female-styled reviews. We then test these generated sentences for the female label using the gender classifier. In Table On two out of three tasks our model substantially outperforms the baseline, by up to 12% in political slant transfer, by up to 7% in sentiment modification. Although we attempted to use automatics measures to evaluate how well meaning is preserved in our transformations; measures such as BLEU Meaning preservation in style transfer is not trivial to define as literal meaning is likely to change when style transfer occurs. For example "My girlfriend loved the desserts" vs "My partner liked the desserts". Thus we must relax the condition of literal meaning to intent or affect of the utterance within the context of the discourse. Thus if the intent is to criticize a restaurant's service in a review, changing "salad" to "chicken" could still have the same effect but if the intent is to order food that substitution would not be acceptable. downstream task and ensure that the task has the same outcome even after style transfer. This is a hard evaluation and hence we resort to a simpler evaluation of the "meaning" of the sentence. We set up a manual pairwise comparison following We then count the preferences of the eleven participants, measuring the relative acceptance of the generated sentences. 7 A third option "=" was given to participants to mark no preference for either of the generated sentence. The "no preference" option includes choices both are equally bad and both are equally good. We conducted three tests one for each type of experiment -gender, political slant and sentiment. We also divided our annotation set into short (#tokens ≤ 15) and long (15 < #tokens ≤ 30) sentences for the gender and the political slant experiment. In each set we had 20 random samples for each type of style transfer. In total we had 100 sentences to be annotated. Note that we did not ask about appropriateness of the style transfer in this test, or fluency of outputs, only about meaning preservation. The results of human evaluation are presented in Table Although a no-preference option was chosen often-showing that state-ofthe-art systems are still not on par with hu-7 None of the human judges are authors of this paper man expectations-the BST models outperform the baselines in the gender and the political slant transfer tasks. Crucially, the BST models significantly outperform the CAE models when transferring style in longer and harder sentences. Annotators preferred the CAE model only for 12.5% of the long sentences, compared to 47.27% preference for the BST model. Finally, we evaluate the fluency of the generated sentences. Fluency was rated from 1 (unreadable) to 4 (perfect) as is described in The results shown in BST outperforms the baseline overall. It is interesting to note that BST generates significantly more fluent longer sentences than the baseline model. Since the average length of sentences was higher for the gender experiment, BST notably outperformed the baseline in this task, relatively to the sentiment task where the sentences are shorter. Examples of the original and style-transfered sentences generated by the baseline and our model are shown in the Supplementary Material. The loss function of the generators given in Eq. 5 includes two competing terms, one to improve meaning preservation and the other to improve the style transfer accuracy. In the task of sentiment modification, the BST model preserved meaning worse than the baseline, on the expense of being better at style transfer. We note, however, that the sentiment modification task is not particularly well-suited for evaluating style transfer: it is particularly hard (if not impossible) to disentangle the sentiment of a sentence from its proposi-tional content, and to modify sentiment while preserving meaning or intent. On the other hand, the style-transfer accuracy for gender is lower for BST model but the preservation of meaning is much better for the BST model, compared to CAE model and to "No preference" option. This means that the BST model does better job at closely representing the input sentence while taking a mild hit in the style transfer accuracy. Style transfer with non-parallel text corpus has become an active research area due to the recent advances in text generation tasks. Our work is also closely-related to a problem of paraphrase generation We propose a novel approach to the task of style transfer with non-parallel text. In the future work, we will also explore whether an enhanced back-translation by pivoting through several languages will learn better grounded latent meaning representations. In particular, it would be interesting to back-translate through multiple target languages with a single source language Measuring the separation of style from content is hard, even for humans. It depends on the task and the context of the utterance within its discourse. Ultimately we must evaluate our style transfer within some down-stream task where our style transfer has its intended use but we achieve the same task completion criteria.
815
1,317
815
Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training
Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019a) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.
Open-ended tasks such as dialogue reveal a number of issues with current neural text generation methods. In more strongly grounded tasks such as machine translation and image captioning, current encoder-decoder architectures provide strong performance, where mostly word-level decisions are often taken correctly by the model. However, critical failings are exposed in less constrained generation: reliance on repetitive copying and overuse of frequent words, and an inability to maintain logical coherence. The former shows the learning objective is faulty in that it cannot match simple statistics of the training data, while the latter touches more to the heart of artificial intelligence: Work done while at Facebook AI Research (FAIR). these models do not understand what they are saying. For example, Figure In this work, we show how the recently introduced unlikelihood objective We first generalize unlikelihood to a different domain: dialogue, where we measure statistics of the training distribution in terms of contextual copies, within-utterance repeats, and vocabulary usage. We then develop loss functions that control these statistics, providing improved metrics on several tasks. Secondly, we show how the same tools can be used to address deeper semantic issues in such models. By leveraging existing natural language inference (NLI) data Code and pre-trained models will be made available. †
Dialogue Generation Dialogue generation consists in predicting an utterance y = (y 1 , . . . , y |y| ) given a context x = {s 1 , . . . , s k , u 1 , . . . , u t } that consists of initial context sentences s 1:k (e.g., scenario, knowledge, personas, etc.) followed by dialogue history utterances u 1:t from speakers who take consecutive turns. Likelihood Training Given a dataset D = {(x (i) , y (i) )} derived from a collection of humanhuman interactions, the standard approach to generative training for dialogue tasks is maximum likelihood estimation (MLE), that minimizes: where x (i) is a gold context (dialogue history and initial context sentences) and y (i) is a gold nextutterance, and y (i) t is the t-th token of y (i) . Likelihood-based (greedy or beam) decoding applied after training a model with this objective yields sequences with statistics that do not match the original human training sequence distribution. To control for such distribution mismatches, we employ the unlikelihood loss The general form of the unlikelihood loss penalizes a set of tokens C t at each time-step, where C t ⊆ V is a subset of the vocabulary, and β(y c ) is a candidate-dependent scale that controls how much the candidate token should be penalized. The overall objective in unlikelihood training then consists of mixing the likelihood and unlikelihood losses, where α ∈ R is the mixing hyper-parameter. Likelihood tries to model the overall sequence probability distribution, while unlikelihood corrects for known biases. It does this via the set of negative candidates C t calculated at each step t, where we are free to select candidate generation functions depending on the biases to be mitigated. Likelihood pushes up the probability of a gold token y (i) t while unlikelihood pushes down the probability of negative candidate tokens y c ∈ C t . In In this paper, we demonstrate how unlikelihood can be used as a general framework by applying it to the dialogue domain. We show how varying the contexts x, targets y, candidates C and scaling β can be used to improve the coherence and language modeling quality of dialogue models. To do this, we now consider the different biases we wish to mitigate, and construct a specific unlikelihood loss for each in turn. We use the ConvAI2 persona-based dialogue To measure label repetition in a sequence y, we use the portion of duplicate n-grams: and report the metric averaged over the examples. Context repetition increases when the model 'copies' n-grams from the context. To quantify language modeling quality, we use standard perplexity and F1 metrics. We use the pre-trained model fine-tuned with MLE as the baseline, and compare it against the pre-trained model fine-tuned with copy and repetition unlikelihood ( §2.1). We evaluate the ability of vocabulary unlikelihood ( §2.2) to reduce the mismatch between model and human token distributions. We use the ConvAI2 dataset, where our baseline is again trained using maximum likelihood. Starting with the baseline model, we then fine-tune several models using vocab unlikelihood at logarithmically interpolated values of α ∈ [1, 1000]. We partition the vocabulary into 'frequent', 'medium', 'rare', and 'rarest' using the human unigram distribution computed with the ConvAI2 training set, corresponding to the sorted token sets whose cumulative mass accounts for the top 40%, the next 30%, the next 20% and the final 10% of usage, respectively. We evaluate a model by generating utterances given contexts from the Con-vAI2 validation set, and compute the fraction of tokens within each class. Results Figure Table Human Evaluation Finally, we perform a human evaluation using the ACUTE-EVAL framework We use the dialogue natural language inference (NLI) task of Two Utterance Generation Task We adapt the initial dialogue NLI dataset by using entailing and neutral training sentence pairs as plausible positive utterances, and contradicting pairs as negatives. That is, if a pair (s 1 , s 2 ) from Dialogue NLI has label E or N, the example We consider two types of entailment: entailing sentence pairs that appear together in a dialogue in the original Persona-Chat dataset and are therefore natural ('entailment'), and those that only entail via their triple relations ('triple-entailment'). The latter are more challenging, noisier targets. Evaluation is performed by measuring the test set perplexity over the four target label types, where contradictions should have relatively higher perplexity. We additionally evaluate a selection accuracy task, where for each test example there are two candidate responses: a positive and a negative (contradicting) statement. The candidate response with the lowest perplexity is considered to be the model's selection, and we measure the selection success rate. Evaluation is broken down by positive type (entailment, triple-entailment, neutral). Dataset statistics are given in Table Full Dialogue Task To evaluate in a more realistic setup that involves full dialogue rather than a single utterance, we take full Persona-Chat dialogues Our work provides new applications of unlikelihood training In terms of dialogue coherence, In all of our experiments we employ a large pre-trained seq2seq Transformer Evaluation results from all evaluated matchups are shown in Figure Generating consistent and coherent human-like dialogue is a core goal of natural language research. We studied several aspects that contribute to that goal, defined metrics to measure them, and proposed algorithms that improve them, mitigating some of the failings of maximum likelihood training, the current dominant approach. Our method defines objective functions under the umbrella of unlikelihood: during training, we wish to make inconsistent dialogue unlikely by lowering the probability of such events occurring. This makes generative models repeat themselves less, copy the context less, and use more rare words from the vocabulary -closer to matching human statistics. Further, utilizing supervised datasets with labeled coherent and incoherent utterances and applying unlikelihood yields measurably improved levels of coherence with respect to the aspect measured, in this case contradiction. Future work could apply this same technique with other supervised data, e.g. correcting causal or commonsense reasoning errors The experiments on repetition and copying in the main paper were carried out with greedy decoding for simplicity. In this section we show that similar results hold with beam decoding as well. Using a beam size of 5, we take the same 4 models from Table Description of ConvAI2 vocabulary setup We follow We first collected 252 model-human conversations with each of the models (MLE baseline, and weights for α of Unlikelihood, examples in 8). We then set up a pairwise-comparison using the software of Description of ELI5 repetition setup We follow
964
1,409
964
Automatic Metric Validation for Grammatical Error Correction
Correction (GEC) is currently done by observing the correlation between human and metric-induced rankings. However, such correlation studies are costly, methodologically troublesome, and suffer from low inter-rater agreement. We propose MAEGE, an automatic methodology for GEC metric validation, that overcomes many of the difficulties with existing practices. Experiments with MAEGE shed a new light on metric quality, showing for example that the standard M 2 metric fares poorly on corpus-level ranking. Moreover, we use MAEGE to perform a detailed analysis of metric behavior, showing that correcting some types of errors is consistently penalized by existing metrics.
Much recent effort has been devoted to automatic evaluation, both within GEC Human rankings are often considered as ground truth in text-to-text generation, but using them reliably can be challenging. Other than the costs of compiling a sizable validation set, human rank-ings are known to yield poor inter-rater agreement in MT The main contribution of this paper is an automatic methodology for metric validation in GEC called MAEGE (Methodology for Automatic Evaluation of GEC Evaluation), which addresses these difficulties. MAEGE requires no human rankings, and instead uses a corpus with gold standard GEC annotation to generate lattices of corrections with similar meanings but varying degrees of grammaticality. For each such lattice, MAEGE generates a partial order of correction quality, a quality score for each correction, and the number and types of edits required to fully correct each. It then computes the correlation of the induced partial order with the metric-induced rankings. MAEGE addresses many of the problems with existing methodology: • Human rankings yield low inter-rater and intra-rater agreement ( §3). Indeed, • CHR uses system outputs to obtain human rankings, which may be misleading, as systems may share similar biases, thus neglecting to evaluate some types of valid corrections ( §7). MAEGE addresses this issue by systematically traversing an inclusive space of corrections. • The difficulty in handling ties is addressed by only evaluating correction pairs where one contains a sub-set of the errors of the other, and is therefore clearly better. • MAEGE uses established statistical tests for determining the significance of its results, thereby avoiding ad-hoc methodologies used in CHR to tackle potential biases in human rankings ( §5, §6). In experiments on the standard NUCLE test set In addition to measuring metric reliability, MAEGE can also be used to analyze the sensitivities of the metrics to corrections of different types, which to our knowledge is a novel contribution of this work. Specifically, we find that not only are valid edits of some error types better rewarded than others, but that correcting certain error types is consistently penalized by existing metrics (Section 7). The importance of interpretability and detail in evaluation practices (as opposed to just providing bottom-line figures), has also been stressed in MT evaluation (e.g.,
We turn to presenting the metrics we experiment with. The standard practice in GEC evaluation is to define differences between the source and a correction (or a reference) as a set of edits BLEU. BLEU GLEU. GLEU iBLEU. iBLEU We set α = 0.8 as suggested by Sun and Zhou. F -Score computes the overlap of edits to the source in the reference, and in the output. As system edits can be constructed in multiple ways, the standard M 2 scorer Levenshtein Distance. We use the Levenshtein distance Correlation with human rankings (CHR) is the standard methodology for assessing the validity of GEC metrics. While informative, human rankings are costly to produce, present low inter-rater agreement (shown for MT evaluation in There are two existing sets of human rankings for GEC that were compiled concurrently: GJG15 by Another source of inconsistency in CHR is that the rankings are relative and sampled, so datasets rank different sets of outputs We conclude by proposing a practice for reporting CHR in future work. First, we combine both sets of human judgments to arrive at the statistically most powerful test. Second, we compute the metrics' corpus-level rankings according to the same subset of sentences used for human rankings. The current practice of allowing metrics to rank systems based on their output on the entire CoNLL test set (while human rankings are only collected for a sub-set thereof), may bias the results due to potential non-uniform system performance on the test set. We report CHR according to the proposed protocol in Table In the following sections we present MAEGE an alternative methodology to CHR, which uses human corrections to induce more reliable and scalable rankings to compare metrics against. We begin our presentation by detailing the method MAEGE 2 The difference between our results and previously reported ones is probably due to a recent update in GLEU to better tackles multiple references (1) 1 The Ois are the original sentences, directed edges represent an application of an edit and R (i) j is the j-th perfect correction of Oi (i.e., the perfect correction that result from applying all the edits of the j-th annotation of Oi). uses to generate source-correction pairs and a partial order between them. MAEGE operates by using a corpus with gold annotation, given as edits, to generate lattices of corrections, each defined by a sub-set of the edits. Within the lattice, every pair of sentences can be regarded as a potential source and a potential output. We create sentence chains, in an increasing order of quality, taking a source sentence and applying edits in some order one after the other (see Figure Formally, for each sentence s in the corpus and each annotation a, we have a set of typed edits edits(s, a) = {e s,a } of size n s,a . We call 2 edits(s,a) the corrections lattice, and denote it with E s,a . We call, s, the correction corresponding to ∅ the original. We define a partial order relation between x, y ∈ E s,a such that x < y if x ⊂ y. This order relation is assumed to be the gold standard ranking between the corrections. For our experiments, we use the NUCLE test data Social media makes our life patten so fast and leave us less time to think about our life. Social media make our life patten so fast and leave us less time to think about our life. Social media make our pace of life so fast and leave us less time to think about our life. left leave makes make life patten pace of life references, produced by Sentences which require no correction according to at least one of the two annotations are discarded. In 26 cases where two edit spans intersect in the same annotation (out of a total of about 40K edits), the edits are manually merged or split. We conduct a corpus-level analysis, namely testing the ability of metrics to determine which corpus of corrections is of better quality. In practice, this procedure is used to rank systems based on their outputs on the test corpus. In order to compile corpora corresponding to systems of different quality levels, we define sev-eral corpus models, each applying a different expected number of edits to the original. Models are denoted with the expected number of edits they apply to the original which is a positive number M ∈ R + . Given a corpus model M , we generate a corpus of corrections by traversing the original sentences, and for each sentence s uniformly sample an annotation a (i.e., a set of edits that results in a perfect correction), and the number of edits applied n edits , which is sampled from a clipped binomial probability with mean M and variance 0.9. Given n edits , we uniformly sample from the lattice E s,a a sub-set of edits of size n edits , and apply this set of edits to s. The corpus of M = 0 is the set of originals. The corpus of source sentences, against which all other corpora are compared, is sampled by traversing the original sentences, and for each sentence s, uniformly sample an annotation a, and given s, a, uniformly sample a sentence from E s,a . Given a metric m ∈ METRICS, we compute its score for each sampled corpus. Where corpuslevel scores are not defined by the metrics themselves, we use the average sentence score instead. We compare the rankings induced by the scores of m and the ranking of systems according to their corpus model (i.e., systems that have a higher M should be ranked higher), and report the correlation between these rankings. Setup. We sample chains using the same sampling method as in §6, and uniformly sample a source from each chain. For each edit type t, we detect all pairs of corrections in the sampled chains that only differ in an edit of type t, and use them to compute ∆ m,t . We use the set of 27 edit types given in the NUCLE corpus. Results. Table In general, the tendency of reference-based metrics (the vast majority of GEC metrics) to penalize edits of various types suggests that many edit types are under-represented in available reference sets. Automatic evaluation of systems that perform these edit types may, therefore, be unreliable. Moreover, not addressing these biases in the metrics may hinder progress in GEC. Indeed, M 2 and GLEU, two of the most commonly used metrics, only award a small sub-set of edit types, thus offering no incentive for systems to improve performance on such types. 6 We proceed by presenting a method for assessing the correlation between metric-induced scores of corrections of the same sentence, and the scores given to these corrections by MAEGE. Given a sentence s and an annotation a, we sample a random permutation over the edits in edits(s, a). We denote the permutation with σ ∈ S ns,a , where S ns,a is the permutation group over {1, • • • , n s,a }. Given σ, we define a monotonic chain in E i,j as: s,a } < . . . < edits(s, a) For each chain, we uniformly sample one of its elements, mark it as the source, and denote it with src. In order to generate a set of chains, MAEGE traverses the original sentences and annotations, and for each sentence-annotation pair, uniformly samples n ch chains without repetition. It then uniformly samples a source sentence from each chain. If the number of chains in E s,a is smaller than n ch , MAEGE selects all the chains. Given a metric m ∈ METRICS, we compute its score for every correction in each sampled chain against the sampled source and available references. We compute the sentence-level correlation of the rankings induced by the scores of m and the rankings induced by <. For computing rank correlation (such as Spearman ρ or Kendall τ ), such a relative ranking is sufficient. We report Kendall τ , which is only sensitive to the relative ranking of correction pairs within the same chain. Kendall is minimalistic in its assumptions, as it does not require numerical scores, but only assuming that < is well-motivated, i.e., that applying a set of valid edits is better in quality than applying only a subset of it. As < is a partial order, and as Kendall τ is standardly defined over total orders, some modification is required. τ is a function of the number of compared pairs and of discongruent pairs (ordered differently in the compared rankings): To compute these quantities, we extract all unique pairs of corrections that can be compared with < (i.e., one applies a sub-set of the edits of the other), and count the number of discongruent ones between the metric's ranking and <. Significance is modified accordingly. 5 Spearman ρ is less applicable in this setting, as it compares total orders whereas here we compare partial orders. To compute linear correlation with Pearson r, we make the simplifying assumption that all edits contribute equally to the overall quality. Specifically, we assume that a perfect correction (i.e., the top of a chain) receives a score of 1. Each original sentence s (the bottom of a chain), for which there exists annotations a 1 , . . . , a n , receives a score of The scores of partial (non-perfect) corrections in each chain are linearly spaced between the score of the perfect correction and that of the original. This scoring system is well-defined, as a partial correction receives the same score according to all chains it is in, as all paths between a partial correction and the original have the same length. We revisit the argument that using system outputs to perform metric validation poses a methodological difficulty. Indeed, as GEC systems are developed, trained and tested using available metrics, and as metrics tend to reward some correction types and penalize others ( §7), it is possible that GEC development adjusts to the metrics, and neglects some error types. Resulting tendencies in GEC systems would then yield biased sets of outputs for human rankings, which in turn would result in biases in the validation process. To make this concrete, GEC systems are often precision-oriented: trained to prefer not to correct than to invalidly correct. Indeed, Choshen and 6 LDS→O tends to award valid corrections of almost all types. As source sentences are randomized across chains, this indicates that on average, corrections with more applied edits tend to be more similar to comparable corrections on the lattice. This is also reflected by the slightly positive sentencelevel correlation of LDS→O ( §6). We use MAEGE to mimic a setting of ranking against precision-oriented outputs. To do so, we perform corpus-level and sentence-level analyses, but instead of randomly sampling a source, we invariably take the original sentence as the source. We thereby create a setting where all edits applied are valid (but not all valid edits are applied). Comparing the results to the regular MAEGE correlation (Table Drawbacks. Like any methodology MAEGE has its simplifying assumptions and drawbacks; we wish to make them explicit. First, any biases introduced in the generation of the test corpus are inherited by MAEGE (e.g., that edits are contiguous and independent of each other). Second, MAEGE does not include errors that a human will not perform but machines might, e.g., significantly altering the meaning of the source. This partially explains why LT, which measures grammaticality but not meaning preservation, excels in our experiments. Third, MAEGE's scoring system ( §6) assumes that all errors damage the score equally. While this assumption is made by GEC metrics, we believe it should be refined in future work by collecting user information. In this paper, we show how to leverage existing annotation in GEC for performing validation reliably. We propose a new automatic methodology, MAEGE, which overcomes many of the shortcomings of the existing methodology. Experiments with MAEGE reveal a different picture of metric quality than previously reported. Our analysis suggests that differences in observed metric quality are partly due to system outputs sharing consistent tendencies, notably their tendency to under-predict corrections. As existing methodology ranks system outputs, these shared tendencies bias the validation process. The difficulties in basing validation on system outputs may be applicable to other text-to-text generation tasks, a question we will explore in future work.
672
2,406
672
Robust Hate Speech Detection via Mitigating Spurious Correlations
We develop a novel robust hate speech detection model that can defend against both wordand character-level adversarial attacks. We identify the essential factor that vanilla detection models are vulnerable to adversarial attacks is the spurious correlation between certain target words in the text and the prediction label. To mitigate such spurious correlation, we describe the process of hate speech detection by a causal graph. Then, we employ the causal strength to quantify the spurious correlation and formulate a regularized entropy loss function. We show that our method generalizes the backdoor adjustment technique in causal inference. Finally, the empirical evaluation shows the efficacy of our method. 1 ,
Online social media bring people together and encourage people to share their thoughts freely. However, it also allows some users to misuse the platforms to promote the hateful language. As a result, hate speech, which "expresses hate or encourages violence towards a person or group based on characteristics such as race, religion, sex, or sexual orientation" Research on defending against adversarial attacks in the text domain has been received significant attention in recent years In this paper, we develop a novel robust hate speech detection model. We target the situation where a group of target words could be replaced with any words even with entire different semantic meanings. We identify the essential factor to defend such attacks as to capture the causation between the semantic meaning of input text and the label and remove the spurious correlation between them. To this end, we use causal graphs
A hate speech detection model can be defined as a functional mapping from T to Y , where t ∈ T is a set of input texts and y ∈ Y is the target label set. In general, the output of the detection model is the softmax probability of predicting each class k, i.e., f k (t; θ) = P (Y = y k |t), where θ is the parameters of the model. We presume a given group of target words (usually hateful or sentiment words) denoted by H, and use X to indicate the remaining text excluding the words in H, i.e., T = ⟨X, H⟩. Adversarial examples are inputs to detection models with perturbations on H that purposely cause the model make mistakes. Causal graphs are widely used for representing causal relationships among variables We propose a causal graph for modeling the hate speech detection shown in Fig. Based on the causal graph, we identify one major reason that vanilla detection models are not robust to adversarial attacks: the detection models make predictions based on both the semantic meanings of texts and the spurious correlation between X and Y via H (i.e., X ← I → H → Y ) that significantly relates to the occurrence of the target words. When the target works, like the f-word, are strongly correlated with the hate label in the training dataset, the model trained on such data may easily make predictions based on the occurrence of the target words without considering the meanings of entire texts. Therefore, once the adversarial attacks that remove such correlations are conducted, the detection model is easy to be fooled. In order to make the detection model robust to any perturbation, one needs to prevent the model from learning the spurious correlation. To this end, we propose to penalize the causal influence of H on Y during the training so that the spurious correlation can be blocked. Inferring causal influences of input on predictions is a challenging task in machine learning. In this paper, we advocate the use of the causal strength proposed in where the second equality is due to factorization. Since the causal strength measures the influence of the word substitution, our problem becomes to penalize the causal strength in the training. In order to integrate the causal strength into the objective function, we rewrite Eq. ( C H→Y = x,h,y P (x, h, y) log P (y|x, h) x,h,y P (x, h, y) log For the first term of Eq. ( where N is the number of text in the data, j indicates the j-th text, and k is the class index. We similarly reformulate the second term of Eq. ( Finally, by adding the causal strength as a regularization term into the cross-entropy loss, we obtain the regularized cross-entropy loss as follows. where λ ∈ [0, 1] is the coefficient for balancing the model utility and the model robustness. We further analyze the meaning of the term L I in Eq. ( (5) Comparing Eqs. ( In Eq. ( We consider five baselines in the experiments: the base BERT and HateBERT To evaluate the robustness of all models, we use three different versions of the test dataset: the clean version, the word-level attack version where each word from the texts present in the list L is randomly replaced by one of the words in L, and the character-level attack version where each word in L is replaced by a misspelled version. Our model uses the pre-trained BERT as the base model which is then fine-tuned by minimizing Eq. (4) on our training data. By default λ = 0.5. The prior probability P (h ′ ) for a target word h ′ is calculated by dividing the total occurrence of h ′ in the training data by the total occurrence of all the words in L in the training data. We refer to our Robust Hate Speech Detection. We first evaluate the performance of all models on three test datasets in terms of accuracy, precision, recall and F1 scores of the positive (i.e., hate) class as well as the Macro F1. The mean and standard deviation of five runs are shown in Table We developed a robust hate speech detection model by leveraging the causal inference to mitigate spurious correlations. The experiment results show that our model can achieve better performance under both word-and character-level attacks compared with other baselines.
717
913
717
Named Entity Recognition with Character-Level Models
We discuss two named-entity recognition models which use characters and character ¤ -grams either exclusively or as an important part of their data representation. The first model is a character-level HMM with minimal context information, and the second model is a maximum-entropy conditional markov model with substantially richer context features. Our best model achieves an overall F¥ of 86.07% on the English test data (92.31% on the development data). This number represents a 25% error reduction over the same model without word-internal (substring) features.
For most sequence-modeling tasks with word-level evaluation, including named-entity recognition and part-ofspeech tagging, it has seemed natural to use entire words as the basic input features. For example, the classic HMM view of these two tasks is one in which the observations are words and the hidden states encode class labels. However, because of data sparsity, sophisticated unknown word models are generally required for good performance. A common approach is to extract word-internal features from unknown words, for example suffix, capitalization, or punctuation features Here, we examine the utility of taking character sequences as a primary representation. We present two models in which the basic units are characters and character ¤ -grams, instead of words and word phrases. Ear- lier papers have taken a character-level approach to named entity recognition (NER), notably
Figure When using character-level models for word-evaluated tasks, one would not want multiple characters inside a single word to receive different labels. This can be avoided in two ways: by explicitly locking state transitions inside words, or by careful choice of transition topology. In our current implementation, we do the latter. Each state is a pair where is an entity type (such as PERSON, and including an other type) and indicates the length of time the system has been in state . There- fore, a state like (PERSON, 2) indicates the second letter inside a person phrase. The final letter of a phrase is a following space (we insert one if there is none) and the state is a special final state like (PERSON, F). Additionally, once reaches our ¤ -gram history order, it stays there. We then use empirical, unsmoothed estimates for state- state transitions. This annotation and estimation enforces consistent labellings in practice. For example, (PERSON, 2) can only transition to the next state (PERSON, 3) or the final state (PERSON, F). Final states can only transition to beginning states, like (other, 1). For emissions, we must estimate a quantity of the form 10 32 54 76 98 3@ BA . Given this model, we can do Viterbi decoding in the standard way. To be clear on what this model does and does not capture, we consider a few examples ( indicates a space). First, we might be asked for ¢ GF ¥ H I$ P F RQ S UT V8 7W YX . In this case, we know both that we are in the middle of a location that begins with Denv and also that the preceding context was to. In essence, encoding into the state lets us distinguish the begin- nings of phrases, which lets us model trends like named entities (all the classes besides other) generally starting with capital letters in English. Second, we may be asked for quantities like ¢ ¥ a`H ( b Rc UT V8 7W Yd , which allows us to model the ends of phrases. Here we have a slight complexity: by the notation, one would expect such emissions to have probability 1, since nothing else can be emitted from a final state. In practice, we have a special stop symbol in our n-gram counts, and the probability of emitting a space from a final state is the probability of the n-gram having chosen the stop character. We did also try to incorporate gazetteer information by adding ¤ -gram counts from gazetteer entries to the train- ing counts that back the above character emission model. However, this reduced performance (by 2.0% with context on). The supplied gazetteers appear to have been built from the training data and so do not increase coverage, and provide only a flat distribution of name phrases whose empirical distributions are very spiked. Given the amount of improvement from using a model backed by character ¤ -grams instead of word ¤ -grams, the immediate question is whether this benefit is complementary to the benefit from features which have traditionally been of use in word level systems, such as syntactic context features, topic features, and so on. To test this, we constructed a maxent classifier which locally classifies single words, without modeling the entity type sequences ¡ . In order to include state sequence features, which allow the classifications at various positions to interact, we have to abandon classifying each position independently. Sequence-sensitive features can be included by chaining our local classifiers together and performing joint inference, i.e., by building a conditional markov model (CMM), also known as a maximum entropy markov model The remaining improvements involved a number of other features which directly targetted observed error types. These features included letter type pattern features (for example 20-month would become d-x for digitlowercase and Italy would become Xx for mixed case). This improved performance substantially, for example allowing the system to detect ALL CAPS regions. Table The primary argument of this paper is that character substrings are a valuable, and, we believe, underexploited source of model features. In an HMM with an admittedly very local sequence model, switching from a word model to a character model gave an error reduction of about 30%. In the final, much richer chained maxent setting, the reduction from the best model minus ¤ -gram features to the reported best model was about 25% -smaller, but still substantial. This paper also again demonstrates how the ease of incorporating features into a discriminative maxent model allows for productive feature engineering.
565
888
565
A Formal Hierarchy of RNN Architectures
We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN's memory, and rational recurrence, defined as whether the recurrent update can be described by a weighted finite-state machine. We place several RNN variants within this hierarchy. For example, we prove the LSTM is not rational, which formally separates it from the related QRNN (Bradbury et al., 2016). We also show how these models' expressive capacity is expanded by stacking multiple layers or composing them with different pooling functions. Our results build on the theory of "saturated" RNNs
While neural networks are central to the performance of today's strongest NLP systems, theoretical understanding of the formal properties of different kinds of networks is still limited. It is established, for example, that the Elman (1990) RNN is Turing-complete, given infinite precision and computation time Recently, In a separate line of work, We compare the expressive power of rational and non-rational RNNs, distinguishing between state expressiveness (what kind and amount of information the RNN states can capture) and language expressiveness (what languages can be recognized when the state is passed to a classifier). To do this, we build on the theory of saturated RNNs.
We introduce a unified hierarchy (Figure We provide the first formal proof that LSTMs can encode functions that rational recurrences cannot. On the other hand, we show that the saturated Elman RNN and GRU are rational recurrences with constant space complexity, whereas the QRNN has unbounded space complexity. We also show that an unrestricted WFA has rich expressive power beyond any saturated RNN we consider-including the LSTM. This difference potentially opens the door to more expressive RNNs incorporating the computational efficiency of rational recurrences. Language expressiveness When applied to classification tasks like language recognition, RNNs are typically combined with a "decoder": additional layer(s) that map their hidden states to a prediction. Thus, despite differences in state expressiveness, rational RNNs might be able to achieve comparable empirical performance to non-rational RNNs on NLP tasks. In this work, we consider the setup in which the decoders only view the final hidden state of the RNN. Experiments Finally, we conduct experiments on formal languages, justifying that our theorems correctly predict which languages unsaturated recognizers trained by gradient descent can learn. Thus, we view our hierarchy as a useful formal tool for understanding the relative capabilities of different RNN architectures. Roadmap We present the formal devices for our analysis of RNNs in Section 2. In Section 3 we develop our hierarchy of state expressiveness for single-layer RNNs. In Section 4, we shift to study RNNs as language recognizers. Finally, in Section 5, we provide empirical results evaluating the relevance of our predictions for unsaturated RNNs. In this work, we analyze RNNs using formal models from automata theory-in particular, WFAs and counter automata. In this section, we first define the basic notion of an encoder studied in this paper, and then introduce more specialized formal concepts: WFAs, counter machines (CMs), space complexity, and, finally, various RNN architectures. We view both RNNs and automata as encoders: machines that can be parameterized to compute a set of functions f : Σ * → Q k , where Σ is an input alphabet and Q is the set of rational reals. Given an encoder M and parameters θ, we use M θ to represent the specific function that the parameterized encoder computes. For each encoder, we refer to the set of functions that it can compute as its state expressiveness. For example, a deterministic finite state acceptor (DFA) is an encoder whose parameters are its transition graph. Its state expressiveness is the indicator functions for the regular languages. Formally, a WFA is a non-deterministic finite automaton where each starting state, transition, and final state is weighted. Let Q denote the set of states, Σ the alphabet, and Q the rational reals. 1. Initial state weights λ Final state weights ρ : Q → Q The weights are used to encode any string x ∈ Σ * : Definition 1 (Path score). Let π be a path of the form q The score of π is given by By Π(x), denote the set of paths producing x. Definition 2 (String encoding). The encoding computed by a WFA A on string x is Hankel matrix Given a function f : Σ * → Q and two enumerations α, ω of the strings in Σ * , we define the Hankel matrix of f as the infinite matrix where or refer to a sub-block of a Hankel matrix, row-and columnindexed by prefixes and suffixes P, S ⊆ Σ * . The following result relates the Hankel matrix to WFAs: Theorem 1 For any f : Σ * → Q, there exists a WFA that computes f if and only if H f has finite rank. Rational series We now turn to introducing a different type of encoder: the real-time counter machine (CM; Definition 3 (General CM; A CM processes input tokens {x t } n t=1 sequentially. Denoting q t , c t ∈ Q × Z k a CM's configuration at time t, define its next configuration: (2) where 1 =0 is a broadcasted "zero-check" operation, i.e., 1 =0 (v) i 1 =0 (v i ). In ( 1. A CM is Σ-restricted iff u and δ depend only on the current input σ ∈ Σ. 2. A CM is (Σ × Q)-restricted iff u and δ depend only on the current input σ ∈ Σ and the current state q ∈ Q. restricted, and the states Q are windows over the last w input tokens, e.g., Q = Σ ≤w . The memory introduced by the stack data structure pushes the encoder into Θ(n) space. We formalize this by showing that, like a WFA, the stack RNN can encode binary strings to their value. Lemma 5. The saturated stack RNN can compute the converging binary encoding function, i.e., 101 → 1 • 1 + 0.5 • 0 + 0.25 • 1 = 1.25. A saturated neural network is a discrete approximation of neural network considered by where N θ denotes the parameters θ multiplied by a scalar N . This transforms each "squashing" function (sigmoid, tanh, etc.) to its extreme values (0, ±1). In line with prior work A recurrent neural network (RNN) is a parameterized update function g θ : The recurrent update function g can take several forms. The original and most simple form is that of the Elman RNN. Since then, more elaborate forms using gating mechanisms have become popular, among them the LSTM, GRU, and QRNN. Elman RNNs (Elman, 1990) Let x t be a vector embedding of x t . For brevity, we suppress the bias terms in this (and the following) affine operations. (5) We refer to the saturated Elman RNN as the s-RNN. The s-RNN has Θ(1) space LSTMs The LSTM can use its memory vector c t as a register of counters GRUs (15) where z t , f t , o t are respectively rows of Z, F, O. A QRNN Q can be seen as an LSTM in which all uses of the state vector h t have been replaced with a computation over the last w input tokens-in this way it is similar to a CNN. The s-QRNN has Θ(log n) space, as the analysis of We now turn to presenting our results. In this section, we develop a hierarchy of single-layer RNNs based on their state expressiveness. A set-theoretic view of the hierarchy is shown in Figure Let R be the set of rational series. The hierarchy relates Θ(log n) space to the following sets: • RR As in • RR-hard An encoder is RR-hard iff its state expressiveness contains R. A Turing machine is RR-hard, as it can simulate any WFA. • RR-complete Finally, an encoder is RRcomplete iff its state expressiveness is equivalent to R. A trivial example of an RRcomplete encoder is a vector of k WFAs. The different RNNs are divided between the intersections of these classes. In Subsection 3.1, we prove that the s-LSTM, already established to have Θ(log n) space, is not RR. In Subsection 3.2, we demonstrate that encoders with restricted counting ability (e.g., QRNNs) are RR, and in Subsection 3.3, we show the same for all encoders with finite state (CNNs, s-RNNs, and s-GRUs). In Subsection 3.4, we demonstrate that none of these RNNs are RR-hard. In Appendix F, we extend this analysis from RNNs to self attention. We find that encoders like the s-LSTM-which, as discussed in Subsection 2.3, is "aware" of its current counter values-are not RR. To do this, we construct f 0 : {a, b} * → N that requires counter awareness to compute on strings of the form a * b * , making it not rational. We then construct an s-LSTM computing f 0 over a * b * . Let # a-b (x) denote the number of as in string x minus the number of bs. Definition 5 (Rectified counting). Therefore rank(A n ) = n-1. Thus, for all n, there is a sub-block of H f with rank n -1, and so rank(H f ) is unbounded. It follows from Theorem 1 that there is no WFA computing f . Theorem 2. The s-LSTM is not RR. Let σ/±m denote a transition that consumes σ and updates the counter by ±m. We write σ, =0/±m (or =) for a transition that requires the counter is 0. Proof. Assume the input has the form a i b j for some i, j. Consider the following LSTM 8 : For a string a i b j , the update in ( While the counter awareness of a general CM enables it to compute non-rational functions, CMs that cannot view their counters are RR. Theorem 3. Any Σ-restricted CM is RR. Proof. We show that any function that a Σrestricted CM can compute can also be computed by a collection of WFAs. The CM update operations (-1, +0, +1, or ×0) can all be reexpressed in terms of functions r(x), u(x) : Σ * → Z k to get: A WFA computing [c t ] i is shown in Figure The WFA in Figure In many rational RNNs, the updates at different time steps are independent of each other outside of a window of w tokens. Theorem 4 tells us this independence is not an essential property of rational encoders. Rather, any CM where the update is conditioned by finite state (as opposed to being conditioned by a local window) is in fact RR. Furthermore, since (Σ w )-restricted CMs are a special case of (Σ × Q)-restricted CMs, Theorem 4 can be directly applied to show that the s-QRNN is RR. See Appendix A for further discussion of this. Theorem 4 motivates us to also think about finitespace encoders: i.e., encoders with no counters" where the output at each prefix is fully determined by a finite amount of memory. The following lemma implies that any finite-space encoder is RR: Proof. Since f is computable in Θ(1) space, there exists a DFA A f whose accepting states are isomorphic to the range of f . We convert A f to a WFA by labelling each accepting state by the value of f that it corresponds to. We set the starting weight of the initial state to 1, and 0 for every other state. We assign each transition weight 1. Since the CNN, s-RNN, and s-GRU have finite state, we obtain the following result: Theorem 5. The CNN, s-RNN, and s-GRU are RR. While While "rational recurrence" is often used to indicate the simplicity of an RNN architecture, we find in this section that WFAs are surprisingly computationally powerful. Figure Figure Figure Theorem 6. Both the saturated and unsaturated RNN, GRU, QRNN, and LSTM 9 are not RR-hard. Proof. Consider the function f b mapping binary strings to their value, e.g. 101 → 5. The WFA in Figure In contrast, memory networks can have Θ(n) space. Appendix G explores this for stack RNNs. Appendix F presents preliminary results extending saturation analysis to self attention. We show saturated self attention is not RR and consider its space complexity. We hope further work will more completely characterize saturated self attention. Having explored the set of functions expressible internally by different saturated RNN encoders, we turn to the languages recognizable when using them with a decoder. We consider the following setup: 1. An s-RNN encodes x to a vector h t ∈ Q k . 2. A decoder function maps the last state h t to an accept/reject decision, respectively: {1, 0}. 9 As well as CMs. We say that a language L is decided by an encoder-decoder pair e, d if d(e(x)) = 1 for every sequence x ∈ L and otherwise d(e(x)) = 0. We explore which languages can be decided by different encoder-decoder pairings. Some related results can be found in Let d 1 be the single-layer linear decoder parameterized by w and b. For an encoder architecture E, we denote by D 1 (E) the set of languages decidable by E with d 1 . We use D 2 (E) analogously for a 2-layer decoder with 1 >0 activations, where the first layer has arbitrary width. We refer to sets of strings using regular expressions, e.g. a * = {a i | i ∈ N}. To illustrate the purpose of the decoder, consider the following language: The Hankel sub-block of the indicator function for L ≤ over P = a * , S = b * is lower triangular. Therefore, no RR encoder can compute it. However, adding the D 1 decoder allows us to compute this indicator function with an s-QRNN, which is RR. We set the s-QRNN layer to compute the simple series c t = # a-b (x) (by increasing on a and decreasing on b). The D 1 layer then checks c t ≤ 0. So, while the indicator function for L ≤ is not itself rational, it can be easily recovered from a rational representation. Thus, L ≤ ∈ D 1 (s-QRNN). We compare the language expressiveness of several rational and non-rational RNNs on the following: a n b n is more interesting than L ≤ because the D 1 decoder cannot decide it simply by asking the encoder to track # a-b (x), as that would require it to compute the non-linearly separable =0 function. Thus, it appears at first that deciding a n b n with D 1 might require a non-rational RNN encoder. However, we show below that this is not the case. Let • denote stacking two layers. We will go on to discuss the following results: , and show that H f has finite rank. It follows that there exists a WFA that can decide a n b n with the D 1 decoder. Counterintuitively, a n b n can be recognized using rational encoders. QRNNs (Appendix C) Although a n b n ∈ D 1 (WFA), it does not follow that every rationally recurrent model can also decide a n b n with the help of D 1 . Indeed, in Theorem 9, we prove that a n b n / ∈ D 1 (s-QRNN), whereas a n b n ∈ D 1 (s-LSTM) (Theorem 13). It is important to note that, with a more complex decoder, the QRNN could recognize a n b n . For example, the s-QRNN can encode c 1 = # a-b (x) and set c 2 to check whether x contains ba, from which a D 2 decoder can recognize a n b n (Theorem 10). This does not mean the hierarchy dissolves as the decoder is strengthened. We show that a n b n Σ *which seems like a trivial extension of a n b n -is not recognizable by the s-QRNN with any decoder. This result may appear counterintuitive, but in fact highlights the s-QRNN's lack of counter awareness: it can only passively encode the information needed by the decoder to recognize a n b n . Failing to recognize that a valid prefix has been matched, it cannot act to preserve that information after additional input tokens are seen. We present a proof in Theorem 11. In contrast, in Theorem 14 we show that the s-LSTM can directly encode an indicator for a n b n Σ * in its internal state. Proof sketch: a n b n Σ * / ∈ D(s-QRNN). A sequence s 1 ∈ a n b n Σ * is shuffled to create s 2 / ∈ a n b n Σ * with an identical multi-set of counter up-dates. We refer to this technique as the suffix attack, and note that it can be used to prove for multiple other languages L ∈ D 2 (s-QRNN) that L•Σ * is not in D(s-QRNN) for any decoder D. 2-layer QRNNs Adding another layer overcomes the weakness of the 1-layer s-QRNN, at least for deciding a n b n . This follows from the fact that a n b n ∈ D 2 (s-QRNN): the second QRNN layer can be used as a linear layer. Similarly, we show in Theorem 10 that a 2-layer s-QRNN can recognize a n b n Σ * ∪ { }. This suggests that adding a second s-QRNN layer compensates for some of the weakness of the 1-layer s-QRNN, which, by the same argument for a n b n Σ * cannot recognize a n b n Σ * ∪ { } with any decoder. Finally, we study the theoretical case where the decoder is an arbitrary recursively enumerable (RE) function. We view this as a loose upper bound of stacking many layers after a rational encoder. What information is inherently lost by using a rational encoder? WFAs can uniquely encode each input, making them Turing-complete under this setup; however, this does not hold for rational s-RNNs. Assuming an RR-complete encoder, a WFA like Figure Bounded space However, the Θ(log n) space bound of saturated rational RNNs like the s-QRNN means these models cannot fully encode the input. In other words, some information about the prefix x :t must be lost in c t . Thus, rational s-RNNs are not Turing-complete with an RE decoder. In Subsection 4.3, we showed that different saturated RNNs vary in their ability to recognize a n b n and a n b n Σ * . We now test empirically whether these predictions carry over to the learnable capacity of unsaturated RNNs. 11 We compare the QRNN and LSTM when coupled with a linear decoder D 1 . We also train a 2-layer QRNN ("QRNN2") and a 1-layer QRNN with a D 2 decoder ("QRNN+"). We train on strings of length 64, and evaluate generalization on longer strings. We also compare to a baseline that always predicts the majority class. The results are shown in Figure Experiment 1 We use the following language, which has similar formal properties to a n b n , but with a more balanced label distribution: In line with (34), the LSTM decides L 5 perfectly for n ≤ 64, and generalizes fairly well to longer strings. As predicted in (35), the QRNN cannot fully learn L 5 even for n = 64. Finally, as predicted in ( We develop a hierarchy of saturated RNN encoders, considering two angles: space complexity and rational recurrence. Based on the hierarchy, we formally distinguish the state expressiveness of the non-rational s-LSTM and its rational counterpart, the s-QRNN. We show further distinctions in state expressiveness based on encoder space complexity. Moreover, the hierarchy translates to differences in language recognition capabilities. Strengthening the decoder alleviates some, but not all, of these differences. We present two languages, both recognizable by an LSTM. We show that one can be recognized by an s-QRNN only with the help of a decoder, and that the other cannot be recognized by an s-QRNN with the help of any decoder. While this means existing rational RNNs are fundamentally limited compared to LSTMs, we find that it is not necessarily being rationally recurrent that limits them: in fact, we prove that a WFA can perfectly encode its input-something no saturated RNN can do. We conclude with an analysis that shows that an RNN architecture's strength must also take into account its space complexity. These results further our understanding of the inner working of NLP systems. We hope they will guide the development of more expressive rational RNNs. We extend the result in Theorem 3 as follows. Theorem 7. Any (Σ × Q)-restricted CM is rationally recurrent. Proof. We present an algorithm to construct a WFA computing an arbitrary counter in a (Σ × Q)restricted CM. First, we create two independent copies of the transition graph for the restricted CM. We refer to one copy of the CM graph as the add graph, and the other as the multiply graph. The initial state in the add graph receives a starting weight of 1, and every other state receives a starting weight of 0. Each state in the add graph receives an accepting weight of 0, and each state in the multiply graph receives an accepting weight of 1. In the add graph, each transition receives a weight of 1. In the multiply graph, each transition receives a weight of 0 if it represents ×0, and 1 otherwise. Finally, for each non-multiplicative update σ/+m Each counter update creates one path ending in the multiply graph. The path score is set to 0 if that counter update is "erased" by a ×0 operation. Thus, the sum of all the path scores in the WFA equals the value of the counter. This construction can be extended to accommodate =m counter updates from q i to q j by adding an additional transition from the initial state to q j in the multiplication graph with weight m. This allows us to apply it directly to s-QRNNs, whose update operations include =1 and =-1. We show that while WFAs cannot directly encode an indicator for the language a n b n = {a n b n | | n ∈ N}, they can encode a function that can be thresholded to recognize a n b n , i.e.: We prove this by showing a function whose Hankel matrix has finite rank that, when combined with the identity transformation (i.e., w = 1, b = 0) followed by thresholding, is an indicator for a n b n . Using the shorthand σ(x) = # σ (x), the function is: a n b n . To prove that its Hankel matrix, H f , has finite rank, we will create 3 infinite matrices of ranks 3, 3 and 1, which sum to H f . The majority of the proof will focus on the rank of the rank 3 matrices, which have similar compositions. We now show 3 series r, s, t and a set of series they can be combined to create. These series will be used to create the base vectors for the rank 3 matrices. where for every j ≤ 2, Lemma 3. Let c i = 1 -2i 2 and {c (k) } k∈N be the set of series defined c Proof. For i ∈ {0, 1, 2}, r i , s i and t i collapse to a 'select' operation, giving the true statement c Substituting the series definitions in the right side of the equation gives which can be expanded to Proof. An ifo s-QRNN can be expressed as a Σ krestricted CM with the additional update operations {:= -1, := 1}, where k is the window size of the QRNN. So it is sufficient to show that such a machine, when coupled with the decoder D 1 (linear translation followed by thresholding), cannot recognize a n b n . Let A be some such CM, with window size k and h counters. Take n = k + 10 and for every m ∈ N denote w m = a n b m and the counter values of A after w m as c m ∈ Q h . Denote by u t the vector of counter update operations made by this machine on input sequence w m at time t ≤ n + m. As A is dependent only on the last k counters, necessarily all u k+i are identical for every i ≥ 1. It follows that for all counters in the machine that go through an assignment (i.e., :=) operation in u k+1 , their values in c k+i are identical for every i ≥ 1, and for every other counter j, c k+i j -c k j = i • δ for some δ ∈ Z. Formally: for every i ≥ 1 there are two sets We now consider the linear thresholder, defined by weights and bias w, b. In order to recognise a n b n , the thresholder must satisfy: Opening these equations gives: However, this does not mean that the s-QRNN is entirely incapable of recognising a n b n . Increasing the decoder power allows it to recognise a n b n quite simply: Theorem 10. For the two-layer decoder D 2 , a n b n ∈ D 2 (s-QRNN). Proof. Let # ba (x) denote the number of ba 2grams in x. We use s-QRNN with window size 2 to maintain two counters: [c t ] 2 can be computed provided the QRNN window size is ≥ 2. A two-layer decoder can then check Theorem 11 (Suffix attack). No s-QRNN and decoder can recognize the language a n b n Σ * = a n b n (a|b) * , n > 0, i.e., a n b n Σ * / ∈ L(s-QRNN) for any decoder L. The proof will rely on the s-QRNN's inability to "freeze" a computed value, protecting it from manipulation by future input. Proof. As in the proof for Theorem 9, it is sufficient to show that no Σ k -restricted CM with the additional operations {:=-1, :=1} can recognize a n b n Σ * for any decoder L. Let A be some such CM, with window size k and h counters. For every w ∈ Σ n denote by c(w) ∈ Q h the counter values of A after processing w. Denote by u t the vector of counter update operations made by this machine on an input sequence w at time t ≤ |w|. Recall that A is Σ k restricted, meaning that u i depends exactly on the window of the last k tokens for every i. We now denote j = k + 10 and consider the sequences w 1 = a j b j a j b j a j b j , w 2 = a j b j-1 a j b j+1 a j b j . w 2 is obtained from w 1 by removing the 2j-th token of w 1 and reinserting it at position 4j. As all of w 1 is composed of blocks of ≥ k identical tokens, the windows preceding all of the other tokens in w 1 are unaffected by the removal of the 2j-th token. Similarly, being added onto the end of a substring b k , its insertion does not affect the windows of the tokens after it, nor is its own window different from before. This means that overall, the set of all operations u i performed on the counters is identical in w 1 and in w 2 . The only difference is in their ordering. w 1 and w 2 begin with a shared prefix a k , and so necessarily the counters are identical after processing it. We now consider the updates to the counters after these first k tokens, these are determined by the windows of k tokens preceding each update. First, consider all the counters that undergo some assignment (:=) operation during these sequences, and denote by {w} the multiset of windows in w ∈ Σ k for which they are reset. w 1 and w 2 only contain k-windows of types a x b k-x or b x a k-x , and so these must all re-appear in the shared suffix b j a j b j of w 1 and w 2 , at which point they will be synchronised. It follows that these counters all finish with identical value in c(w 1 ) and c(w 2 ). All the other counters are only updated using addition of -1, 1 and 0, and so the order of the updates is inconsequential. It follows that they too are identical in c(w 1 ) and c(w 2 ), and therefore necessarily that c(w 1 ) = c(w 2 ). From this we have w 1 , w 2 satisfying w 1 ∈ a n b n Σ * , w 2 / ∈ a n b n Σ * but also c(w 1 ) = c(w 2 ). Therefore, it is not possible to distinguish between w 1 and w 2 with the help of any decoder, despite the fact that w 1 ∈ a n b n Σ * and w 2 / ∈ a n b n Σ * . It follows that the CM and s-QRNN cannot recognize a n b n Σ * with any decoder. For the opposite extension Σ * a n b n , in which the language is augmented by a prefix, we cannot use such a "suffix attack". In fact, Σ * a n b n can be recognized by an s-QRNN with window length w ≥ 2 and a linear threshold decoder as follows: a counter counts # a-b (x) and is reset to 1 on appearances of ba, and the decoder compares it to 0. Note that we define decoders as functions from the final state to the output. Thus, adding an additional QRNN layer does not count as a "decoder" (as it reads multiple states). In fact, we show that having two QRNN layers allows recognizing a n b n Σ * . Theorem 12. Let be the empty string. Then, Proof. We construct a two-layer s-QRNN from which a n b n Σ * can be recognized. Let $ denote the left edge of the string. The first layer computes two quantities d t and e t as follows: Note that e t can be interpreted as a binary value checking whether the first token was b. The second layer computes c t as a function of d t , e t , and x t (which can be passed through the first layer). We will demonstrate a construction for c t by creating linearly separable functions for the gate terms f t and z t that update c t . Now, the update function u t to c t can be expressed -1 otherwise. (71) Finally, the decoder accepts iff c t ≤ 0. To justify this, we consider two cases: either x starts with b or a. If x starts with b, then e t = 0, so we increment c t by 1 and never decrement it. Since 0 < c t for any t, we will reject x. If x starts with a, then we accept iff there exists a sequence of bs following the prefix of as such that both sequences have the same length. In contrast to the s-QRNN, we show that the s-LSTM paired with a simple linear and thresholding decoder can recognize both a n b n and a n b n Σ * . Theorem 13. Proof. Assuming a string a i b i , we set two units of the LSTM state to compute the following functions using the CM in Figure We also add a third unit [c t ] 3 that tracks whether the 2-gram ba has been encountered, which is equivalent to verifying that the string has the form a i b i . Allowing h t = tanh(c t ), we set the linear threshold layer to check Theorem 14. Proof. We use the same construction as Theorem 13, augmenting it with We decide x according to the (still linearly separable) equation Models were trained on strings up to length 64, and, at each index t, were asked to classify whether or not the prefix up to t was a valid string in the language. Models were then tested on independent datasets of lengths 64, 128, 256, 512, 1024, and 2048. The training dataset contained 100000 strings, and the validation and test datasets contained 10000. We discuss task-specific schemes for sampling strings in the next paragraph. All models were trained for a maximum of 100 epochs, with early stopping after 10 epochs based on the validation cross entropy loss. We used default hyperparameters provided by the open-source AllenNLP framework Sampling strings For the language L 5 , each token was sampled uniformly at random from Σ = {a, b}. For a n b n Σ * , half the strings were sampled in this way, and for the other half, we sampled n uniformly between 0 and 32, fixing the first 2n characters of the string to a n b n and sampling the suffix uniformly at random. Experiments were run for 20 GPU hours on Quadro RTX 8000. Architecture We place saturated self attention 1. At time t, compute queries q t , keys k t , and values v t from the input embedding x t using a linear transformation. 2. Compute attention head h t by attending over the keys and values up to time t (K :t and V :t ) with query q t . 3. Let • L denote a layer normalization operation This simplified architecture has only one attention head, and does not incorporate residual connections. It is also masked (i.e., at time t, can only see the prefix X :t ), which enables direct comparison with unidirectional RNNs. For simplicity, we do not add positional information to the input embeddings. Theorem 15. Saturated masked self attention is not RR. Proof. Let # σ (x) denote the number of occurences of σ ∈ Σ in string x. We construct a self attention layer to compute the following function over {a, b} * : Since the Hankel sub-block over P = a * , S = b * has infinite rank, f ∈ R. Fix v t = x t . As shown by For all t, set the key and query k t , q t = 1. Thus, all the key-query similarities are 1, and we obtain: Applying layer norm to this quantity preserves equality of the first and second elements. Thus, we set the layer in (77) to independently check 0 < [h 0 t ] 1 -[h 0 t ] 2 and [h 0 t ] 1 -[h 0 t ] 2 < 0 using ReLU. The final layer c t sums these two quantities, returning 0 if neither condition is met, and 1 otherwise. Since saturated self attention can represent f / ∈ R, it is not RR. Space Complexity We show that self attention falls into the same space complexity class as the LSTM and QRNN. Our method here extends Merrill (2019)'s analysis of attention. Theorem 16. Saturated single-layer self attention has Θ(log n) space. Proof. The construction from Theorem 15 can reach a linear (in sequence length) number of different outputs, implying a linear number of different configurations, and so that the space complexity of saturated self attention is Ω(log n). We now show the upper bound O(log n). A sufficient representation for the internal state (configuration) of a self-attention layer is the unordered group of key-value pairs over the prefixes of the input sequence. Since f k : x t → k t and f v : x t → v t have finite domain (Σ), their images K = image(f k ), V = image(f v ) are finite. Note that this construction does not apply if the "vocabulary" we are attending over is not finite. Thus, using unbounded positional embeddings, stacking multiple self attention layers, or applying attention over other encodings with unbounded state might reach Θ(n). While it eludes our current focus, we hope future work will extend the saturated analysis to self attention more completely. We direct the reader to All of the standard RNN architectures considered in Section 3 have O(log n) space in their saturated form. In this section, we consider a stack RNN encoder similar to the one proposed by Classically, a stack is a dynamic list of objects to which elements v ∈ V can be added and removed in a LIFO manner (using push and pop operations). The stack RNN proposed in Differentiable Stack In a differentiable stack, the update operation takes an element s t to push and a distribution π t over the update operations push, pop, and no-op, and returns the weighted average of the result of applying each to the current stack. The averaging is done elementwise along the stacks, beginning from the top entry. To facilitate this, differentiable stacks are padded with infinite 'null entries'. Their elements must also have a weighted average operation defined. Definition 6 (Geometric k-stack RNN encoder). Initialize the stack S to an infinite list of null entries, and denote by S t the stack value at time t. Using 1-indexing for the stack and denoting [S t-1 ] 0 s t , the geometric k-stack RNN recurrent update is: In this work we will consider the case where the null entries are 0 and the encoding c t is produced as a geometric-weighted sum of the stack contents, This encoding gives preference to the latest values in the stack, giving initial stack encoding c 0 = 0.
667
683
667
A Simple and Effective Approach to Coverage-Aware Neural Machine Translation
We offer a simple and effective method to seek a better balance between model confidence and length preference for Neural Machine Translation (NMT). Unlike the popular length normalization and coverage models, our model does not require training nor reranking the limited n-best outputs. Moreover, it is robust to large beam sizes, which is not well studied in previous work. On the Chinese-English and English-German translation tasks, our approach yields +0.4 ∼ 1.5 BLEU improvements over the state-of-the-art baselines.
In the past few years, Neural Machine Translation (NMT) has achieved state-of-the-art performance in many translation tasks. It models the translation problem using neural networks with no assumption of the hidden structures between two languages, and learns the model parameters from bilingual texts in an end-to-end fashion where x and y are the source and target sentences, and P(y j |y <j , x) is the probability of generating the j-th word y j given the previously-generated words y <j and the source sentence x. However, the straightforward implementation of this model suffers from many problems, the most obvious one being the bias that the system tends to choose shorter translations because the log-probability is added over time steps. The situation is worse when we use beam search where the shorter translations have more chances to beat the longer ones. It is in general to normalize the model score by translation length (say length normalization) to eliminate this system bias Though widely used, length normalization is not a perfect solution. NMT systems still have under-translation and over-translation problem even with a normalized model. It is due to the lack of the coverage model that indicates the degree a source word is translated. As an extreme case, a source word might be translated for several times, which results in many duplicated target words. Several research groups have proposed solutions to this bad case In this paper we present a simple and effective approach by introducing a coverage-based feature into NMT. Unlike previous studies, we do not resort to developing extra models nor reranking the limited n-best translations. Instead, we develop a coverage score and apply it to each decoding step. Our approach has several benefits, • Our approach does not require to train a huge neural network and is easy to implement. Figure We test our approach on the NIST Chinese-English and WMT English-German translation tasks, and it outperforms several state-of-the-art baselines by 0.4∼1.5 BLEU points.
Given a word sequence, a coverage vector indicates whether the word of each position is translated. This is trivial for statistical machine translation However, it is not the case for NMT where the coverage is modeled in a soft way. In NMT, no explicit translation units or rules are used. The attention mechanism is used instead to model the correspondence between a source position and a target position Here, we present a coverage score (CS) to describe to what extent the source words are translated. In principle, the coverage score should be high if the translation covers most words in source sentence, and low if it covers only a few of them. Given a source position i, we define its coverage as the sum of the past attention probabilities c i = |y| j a ij where β is a parameter that can be tuned on a development set. This model has two properties: • Non-linearity Eq. ( • Truncation At the early stage of decoding, the coverage of the most source words is close to 0. This may result in a negative infinity value after the logarithm function, and discard hypotheses with sharp attention distributions, which is not necessarily bad. The truncation with the lowest value β can ensure that the coverage score has a reasonable value. Here β is similar to model warm-up, which makes the model easy to run in the first few decoding steps. Note that our way of truncation is different from For decoding, we incorporate the coverage score into beam search via linear combination with the NMT model score as below, where y is a partial translation generated during decoding, log P(y|x) is the model score, and α is the coefficient for linear interpolation. In standard implementation of NMT systems, once a hypothesis is finished, it is removed from the beam and the beam shrinks accordingly. Here we choose a different decoding strategy. We keep the finished hypotheses in the beam until the decoding completes, which means that we compare the finished hypotheses with partial translations at each step. This method helps because it can dynamically determine whether a finished hypothesis is kept in beam through the entire decoding process, and thus reduce search errors. It enables the decoder to throw away finished hypotheses if they have very low coverage but are of high likelihood values. We evaluated our approach on Chinese-English and German-English translation tasks. We used 1.8M sentence Chinese-English bitext provided within NIST12 OpenMT 2 and 4.5M sentence German-English bitext provided within WMT16. For Chinese-English translation, we chose the evaluation data of NIST MT06 as the development set, and MT08 as the test set. All Chinese sentences were word segmented using the tool provided within NiuTrans Our baseline systems were based on the opensource implementation of the NMT model presented in For comparison, we re-implemented the length normalization (LN) and coverage penalty (CP) methods Table We also compared CP with our method by ap- Table Then, Figure Another interesting question is whether the N-MT systems can generate translations with appropriate lengths. To seek its answer, we studied the length difference between the MT output and the shortest reference. Table Sensitivity analysis on α and β in Table The length preference and coverage problems have been discussed for years since the rise of statistical machine translation Perhaps the most related work to this paper is To address this issue, we remove the probability constraint and make the coverage score interpretable for different cases. Another difference lies in that our coverage model is applied to every beam search step, while Previous work have pointed out that BLEU scores of NMT systems drop as beam size increases We have described a coverage score and integrated it into a state-of-the-art NMT system. Our method is easy to implement and does not need training for additional models. Also, it performs well in searching with large beam sizes. On Chinese-English and English-German translation tasks, it outperforms several baselines significantly.
522
2,040
522
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes? Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; Choi et al., 2016), the design of sculptures (Lehman et al., 2016), and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016). In this paper, we focus on a creative textual task: automatic poetry composition. A distinguishing feature of poetry is its aesthetic forms, e.g. rhyme and rhythm/meter. Shall I compare thee to a summer's day? Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses. Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g. see Figure Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017); our results suggest that future research should look beyond meter and focus on improving readability. In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.
Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013). The earliest attempt at using statistical modelling for poetry generation was Greene et al. (2010), based on a language model paired with a stress model. Neural networks have dominated recent research. Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al. (2016) later simplified by incorporating an attention mechanism and training at the character level. For English poetry, Ghazvininejad et al. (2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation. Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry. A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically. The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.: S -S + S -S + S -S + S -S + S -S + Shall I compare thee to a summer's day? where S -and S + denote unstressed and stressed syllables, respectively. A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG. There are a number of variants, however, mostly seen in the quatrains; e.g. AABB or ABBA are also common. We build our sonnet dataset from the latest image of Project Gutenberg. Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the "BACKGROUND" dataset), leaving the sonnet corpus ("SONNET"). We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words. Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns. We use standard perplexity for evaluating the language model. In terms of model variants, we have: Each number is an average across 10 runs. • LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014); 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models. Perplexity on the test partition is detailed in Table 2. Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * . The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks. The full model LM * * +PM+RM, which learns stress To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary. To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g. 1st time step = S -) to the word if any of its characters receives an attention 0.20. For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017). We present stress accuracy in Table x-axis the characters of the sonnet line (punctuation removed). The attention network appears to perform very well, without any noticeable errors. The only minor exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y. Additional heatmaps for the full sonnet are provided in the supplementary material. We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score. Word pairs that are not included in the dictionary are discarded. Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match. We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry. During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step. Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; We next describe how to incorporate the pentameter model for generation. Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter. We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ). We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1. To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary. Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme. We resample the second word of a rhyming pair (e.g. when generating the second A in AABB) until it produces a cosine similarity 0.9. We also resample the second word of a nonrhyming pair (e.g. when generating the first B in AABB) by requiring a cosine similarity 0.7. We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert. A sample of machine-generated sonnets are included in the supplementary material. We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material). Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training. For optimisers, we use Adagrad (Duchi et al., 2011) for the language model, and Adam (Kingma and Ba, 2014) for the pentameter and rhyme models. We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens. Following Hopkins and Kiela (2017), we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem. Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model). We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch. An equal number of human-written quatrains was sampled from the training partition. A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT. Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- cally, and they were restricted to do a maximum of 3 HITs. To dissuade workers from using search engines to identify real poems, we presented the quatrains as images. Accuracy is presented in Table To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e. amount of emotion the poem evokes). All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best). In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human). The expert was blind to the source of each poem. The mean and standard deviation of the ratings are presented in Table We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets. This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997). Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability. In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models. Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model. Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry. We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets. We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert. Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well. Machine-generated generated poems, however, still underperform in terms of readability and emotion.
649
1,971
649
Evolutionary Data Measures: Understanding the Difficulty of Text Classification Tasks
Classification tasks are usually analysed and improved through new model architectures or hyperparameter optimisation but the underlying properties of datasets are discovered on an ad-hoc basis as errors occur. However, understanding the properties of the data is crucial in perfecting models. In this paper we analyse exactly which characteristics of a dataset best determine how difficult that dataset is for the task of text classification. We then propose an intuitive measure of difficulty for text classification datasets which is simple and fast to calculate. We show that this measure generalises to unseen data by comparing it to stateof-the-art datasets and results. This measure can be used to analyse the precise source of errors in a dataset and allows fast estimation of how difficult a dataset is to learn. We searched for this measure by training 12 classical and neural network based models on 78 real-world datasets, then use a genetic algorithm to discover the best measure of difficulty. Our difficulty-calculating code 1 and datasets 2 are publicly available.
If a machine learning (ML) model is trained on a dataset then the same machine learning model on the same dataset but with more granular labels will frequently have lower performance scores than the original model (see results in Such a difficulty measure would be useful as an analysis tool and as a performance estimator. As an analysis tool, it would highlight precisely what is causing difficulty in a dataset, reducing the time practitioners need spend analysing their data. As a performance estimator, when practitioners approach new datasets they would be able to use this measure to predict how well models are likely to perform on the dataset. The complexity of datasets for ML has been previously examined
One source of difficulty in a dataset is mislabelled items of data (noise). Class Diversity. Class diversity provides information about the composition of a dataset by measuring the relative abundances of different classes Class Balance. Unbalanced classes are a known problem in machine learning Data Complexity. Humans find some pieces of text more difficult to comprehend than others. How difficult a piece of text is to read can be calculated automatically using measures such as those proposed by Mc Laughlin (1969); We used 78 text classification datasets and trained 12 different ML algorithms on each of the datasets for a total of 936 models trained. The highest achieved macro F1 score We wanted the discovered difficulty measure to be useful as an analysis tool, so we enforced a restriction that the difficulty measure should be composed only by summation, without weighting the constituent statistics. This meant that each difficulty measure could be used as an analysis tool by examining its components and comparing them to the mean across all datasets. Each difficulty measure was represented as a binary vector of length 48 -one bit for each statistic -each bit being 1 if that statistic was used in the difficulty measure. We therefore had 2 48 possible different difficulty measures that may have correlated with model score and needed to search this space efficiently. Genetic algorithms are biologically inspired search algorithms and are good at searching large spaces efficiently We gathered 27 real-world text classification datasets from public sources, summarised in Table We created 51 more datasets by taking two or more of the original 27 datasets and combining all of the data points from each into one dataset. The label for each data item was the name of the dataset which the text originally came from. We combined similar datasets in this way, for example two different datasets of tweets, so that the classes would not be trivially distinguishable -there is no dataset to classify text as either a tweet or Shakespeare for example as this would be too easy for models. The full list of combined datasets is in Appendix A.2. Our datasets focus on short text classification by limiting each data item to 100 words. We demonstrate that the difficulty measure we discover with this setup generalises to longer text classification in Section 3.1. All datasets were lowercase with no punctuation. For datasets with no validation set, 15% of the training set was randomly sampled as a validation set at runtime. We calculated 12 distinct statistics with different n-gram sizes to produce 48 statistics of each dataset. These statistics are designed to increase in value as difficulty increases. The 12 statistics are described here and a listing of the full 48 is in Appendix B in Table We recorded the Shannon Diversity Index and its normalised variant the Shannon Equitability (Shannon, 2001) using the count-based probability distribution of classes described above. We propose a simple measure of class imbalance: C is the total number of classes, n c is the count of items in class c and T DAT A is the total number of data points. This statistic is 0 if there are an equal number of data points in every class and the upper bound is 2 1 -1 C and is achieved when one class has all the data points -a proof is given in Appendix B.2. Per-class probability distributions were calculated by splitting the dataset into subsets based on the class of each data point and then computing countbased probability distributions as described above for each subset. Hellinger Similarity One minus both the average and minimum Hellinger Distance Top N-Gram Interference Average Jaccard similarity Mutual Information Average mutual information Distinct n-grams : Total n-grams Count of distinct n-grams in a dataset divided by the total number of n-grams. Score of 1 indicates that each ngram occurs once in the dataset. The Flesch Reading Ease (FRE) formula grades text from 100 to 0, 100 indicating most readable and 0 indicating difficult to read N-Gram and Character Diversity Using the Shannon Index and Equitability described by Shannon (2001) we calculate the diversity and equitability of n-grams and characters. Probability distributions are count-based as described at the start of this section. To ensure that any discovered measures did not depend on which model was used (i.e. that they were model agnostic), we trained 12 models on every dataset. The models are summarised in Table Word Embeddings Our neural network models excluding the Convolutional Neural Network (CNN) used 128-dimensional FastText Term Frequency Inverse Document Frequency (tf-idf) Our classical machine learning models represented each data item as a tf-idf vector Characters Our CNN, inspired by The genetic algorithm maintains a population of candidate difficulty measures, each being a binary vector of length 48 (see start of Method section). At each time step, it will evaluate each member of the population using a fitness function. It will then select pairs of parents based on their fitness, and perform crossover and mutation on each pair to produce a new child difficulty measure, which is added to the next population. This process is iterated until the fitness in the population no longer improves. Population The genetic algorithm is nonrandomly initialised with the 48 statistics described in Section 2.2 -each one is a difficulty measure composed of a single statistic. 400 pairs of parents are sampled with replacement from each population, so populations after this first time step will consist of 200 candidate measures. The probability of a measure being selected as a parent is proportional to its fitness. The fitness function of each difficulty measure is based on the Pearson correlation To produce a new difficulty measure from two parents, the constituent statistics of each parent are randomly intermingled, allowing each parent to pass on information about the search space. This is done in the following way: for each of the 48 statistics, one of the two parents is randomly selected and if the parent uses that statistic, the child also does. This produces a child which has features of both parents. To introduce more stochasticity to the process and ensure that the algorithm does not get trapped in a local minima of fitness, the child is mutated. Mutation is performed by randomly adding or taking away each of the 48 statistics with probability 0.01. After this process, the child difficulty measure is added to the new population. Training The process of calculating fitness, selecting parents and creating child difficulty measures is iterated until there has been no improvement in fitness for 15 generations. Due to the stochasticity in the process, we run the whole evolution 50 times. We run 11 different variants of this evolution, leaving out different statistics of the dataset each time to test which are most important in finding a good difficulty measure, in total running 550 evolutions. Training time is fast, averaging 79 seconds per evolution with a standard deviation of 25 seconds, determined over 50 runs of the algorithm on a single CPU. The four hypothesized areas of difficulty This measure is the shortest measure which achieves a higher correlation than the mean, at -0.8814. This measure is plotted against model F1 scores in Figure A difficulty measure is useful as an analysis and performance estimation tool if it is model agnostic and provides an accurate difficulty estimate on unseen datasets. When running the evolution, the F1 scores of our character-level CNN were not observed by the genetic algorithm. If the discovered difficulty measure still correlated with the CNN's scores despite never having seen them during evolution, it is more likely to be model agnostic. The CNN has a different model architecture to the other models and has a different input type which encodes no prior knowledge (as word embeddings do) or contextual information about the dataset (as tf-idf does). D1 has a correlation of -0.9010 with the CNN and D2 has a correlation of -0.8974 which suggests that both of our presented measures do not depend on what model was used. One of the limitations of our method was that our models never saw text that was longer than 100 words and were never trained on any very large datasets (i.e. >1 million data points). We also performed no hyperparameter optimisation and did not use state-of-the-art models. To test whether our measure generalises to large datasets with text longer than 100 words, we compared it to some recent state-of-the-art results in text classification using the eight datasets described by The Difficulty Measure Generalises to Very Large Datasets and Long Data Items. The smallest of the eight datasets described by The Difficulty Measure is Model and Input Type Agnostic. The state-of-the-art models presented in Table The Difficulty Measure Lacks Precision. The average score achieved on the Yahoo Answers dataset is 69.9% and its difficulty is 4.51. The average score achieved on Yelp Full is 56.8%, 13.1% less than Yahoo Answers and its difficulty is 4.42. In ML terms, a difference of 13% is significant yet our difficulty measure assigns a higher difficulty to the easier dataset. However, Yahoo Answers, Yelp Full and Amazon Full, the only three of Stanford Sentiment Treebank Binary Classification (SST 2) Figure An alternate solution would be to split reviews like this into two separate ones: one with the positive component and one with the negative. Furthermore, Figure To show that our analysis with this difficulty measure was accurately observing the difficulty in SST, we randomly sampled and analysed 100 misclassified data points from SST's test set out of 150 total misclassified. Of these 100, 48 were reviews with both strong positive and negative features and would be difficult for a human to classify, 22 were sarcastic and 8 were mislabelled. The remaining 22 could be easily classified by a human and are misclassified due to errors in the model rather than the data items themselves being difficult to interpret. These findings show that our difficulty measure correctly determined the source of difficulty in SST because 78% of the errors are implied by our difficulty measure and the remaining 22% are due to errors in the model itself, not difficulty in the dataset. We hypothesized that the difficulty of a dataset would be determined by four areas not including noise: Class Diversity, Class Balance, Class Interference and Text Complexity. We performed multiple runs of the genetic algorithm, leaving statistics out each time to test which were most important in finding a good difficulty measure which resulted in the following findings: No Single Characteristic Describes Difficulty When the Class Diversity statistic was left out of evolution, the highest achieved correlation was -0.806, 9% lower than D1 and D2. However, on its own Class Diversity had a correlation of -0.644 with model performance. Clearly, Class Diversity is necessary but not sufficient to estimate dataset difficulty. Furthermore, when all measures of Class Diversity and Balance were excluded, the highest achieved correlation was -0.733 and when all measures of Class Interference were excluded the best correlation was -0.727. These three expected areas of difficulty -Class Diversity, Balance and Interference -must all be measured to get an accurate estimate of difficulty because excluding any of them significantly damages the correlation that can be found. Correlations for each individual statistic are in Table Data Complexity Has Little Affect on Difficulty Excluding all measures of Data Complexity from evolution yielded an average correlation of -0.869, only 1% lower than the average when all statistics were included. Furthermore, the only measure of Data Complexity present in D1 and D2 is Distinct Words : Total Words which has a mean value of 0.067 and therefore contributes very little to the difficulty measure. This shows that while Data Complexity is necessary to achieve top correlation, its significance is minimal in comparison to the other areas of difficulty. When a dataset has a large number of balanced classes, then Class Diversity dominates the measure. This means that the difficulty measure is not a useful performance estimator for such datasets. To illustrate this, we created several fake datasets with 1000, 100, 50 and 25 classes. Each dataset had 1000 copies of the same randomly generated string in each class. It was easy for mod-els to overfit and score a 100% F1 score on these fake datasets. For the 1000-class fake data, Class Diversity is 6.91, which by our difficulty measure would indicate that the dataset is extremely difficult. However, all models easily achieve a 100% F1 score. By testing on these fake datasets, we found that the limit for the number of classes before Class Diversity dominates the difficulty measure and renders it inaccurate is approximately 25. Any datasets with more than 25 classes with an approximately equal number of items per class will be predicted as difficult regardless of whether they actually are because of this diversity measure. Datasets with more than 25 unbalanced classes are still measured accurately. For example, the ATIS dataset One of our datasets of New Year's Resolution Tweets has 115 classes but only 3507 data points Our genetic algorithm, based on an unweighted, linear sum, cannot take statistics like data size into account currently because they do not have a convenient range of values; the number of data points in a dataset can vary from several hundred to several million. However, the information is still useful to practitioners in diagnosing the difficulty of a dataset. Given that the difficulty measure lacks precision and may be better suited to classification than regression as discussed in Section 3.1, cannot take account of statistics without a convenient range of values and that the difficulty measure must be interpretable, we suggest that future work could look at combining statistics with a white-box, nonlinear algorithm like a decision tree. As opposed to summation, such a combination could take account of statistics with different value ranges and perform either classification or regression while remaining interpretable. Here we present some general guidelines on how the four areas of difficulty can be reduced. Class Diversity can only be sensibly reduced by lowering the number of classes, for example by grouping classes under superclasses. In academic settings where this is not possible, hierarchical learning allows grouping of classes but will produce granular labels at the lowest level Class Interference is influenced by the amount of noise in the data and linguistic phenomena like sarcasm. It can also be affected by the way the data is labelled, for example as shown in Section 3.2 where SST has data points with both positive and negative features but only a single label. Filtering noise, restructuring or relabelling ambiguous data points and detecting phenomena like sarcasm will help to reduce class interference. Easily confused classes can also be grouped under one superclass if practitioners are willing to sacrifice granularity to gain performance. Class Imbalance can be addressed with data augmentation such as thesaurus based methods Data Complexity can be managed with large amounts of data. This need not necessarily be labelled -unsupervised pre-training can help models understand the form of complex data before attempting to use it Model Selection Once the difficulty of a dataset has been calculated, a practitioner can use this to decide whether they need a complex or simple model to learn the data. Performance Checking and Prediction Practitioners will be able to compare the results their models get to the scores of other models on datasets of an equivalent difficulty. If their models achieve lower results than what is expected ac-cording to the difficulty measure, then this could indicate a problem with the model. When their models do not achieve good results, ML practitioners could potentially calculate thousands of statistics to see what aspects of their datasets are stopping their models from learning. Given this, how do practitioners tell which statistics are the most useful to calculate? Which ones will tell them the most? What changes could they make which will produce the biggest increase in model performance? In this work, we have presented two measures of text classification dataset difficulty which can be used as analysis tools and performance estimators. We have shown that these measures generalise to unseen datasets. Our recommended measure can be calculated simply by counting the words and labels of a dataset and is formed by adding five different, unweighted statistics together. As the difficulty measure is an unweighted sum, its components can be examined individually to analyse the sources of difficulty in a dataset. There are two main benefits to this difficulty measure. Firstly, it will reduce the time that practitioners need to spend analysing their data in order to improve model scores. As we have demonstrated which statistics are most indicative of dataset difficulty, practitioners need only calculate these to discover the sources of difficulty in their data. Secondly, the difficulty measure can be used as a performance estimator. When practitioners approach new tasks they need only calculate these simple statistics in order to estimate how well models are likely to perform. Furthermore, this work has shown that for text classification the areas of Class Diversity, Balance and Interference are essential to measure in order to understand difficulty. Data Complexity is also important, but to a lesser extent. Future work should firstly experiment with nonlinear but interpretable methods of combining statistics into a difficulty measure such as decision trees. Furthermore, it should apply this difficulty measure to other NLP tasks that may require deeper linguistic knowledge than text classification, such as named entity recognition and parsing. Such tasks may require more advanced features than simple word counts as were used in this work.
1,080
715
1,080
Neural Readability Pairwise Ranking for Sentences in Italian Administrative Language
Automatic Readability Assessment aims at assigning a complexity level to a given text, which could help improve the accessibility to information in specific domains, such as the administrative one. In this paper, we investigate the behavior of a Neural Pairwise Ranking Model (NPRM) for sentence-level readability assessment of Italian administrative texts. To deal with data scarcity, we experiment with cross-lingual, cross-and in-domain approaches, and test our models on Admin-It, a new parallel corpus in the Italian administrative language, containing sentences simplified using three different rewriting strategies. We show that NPRMs are effective in zero-shot scenarios (∼0.78 ranking accuracy), especially with ranking pairs containing simplifications produced by overall rewriting at the sentence-level, and that the best results are obtained by adding indomain data (achieving perfect performance for such sentence pairs). Finally, we investigate where NPRMs failed, showing that the characteristics of the data used for fine-tuning, rather than its size, have a bigger effect on a model's performance.
Due to its complexity, the style of Italian administrative texts has been defined as "artificial" and "obscure" One way to tackle this problem is with technologies for Automatic Readability Assessment (ARA) that predict the complexity of texts In this paper, we tackle the data scarcity issue in two ways. First, we introduce Admin-It (Sec. 3), a parallel corpus in the Italian administrative language with sentences that were simplified following three different styles of rewriting. Then, we repurpose We evaluate the performance of NPRMs on Admin-It in zero-shot settings (Sec. 5), fine-tuning models with data from different languages (i.e., Italian, English and Spanish) and domains (i.e., administrative, educational, and news). We show that, overcoming the limitations of traditional ARA system in cross-domain set-ups Finally, we conduct a qualitative analysis on the errors made by NPRMs (Sec. 7), and observe how models deal with various kinds of simplification, such as overall rewriting versus the application of single operations of simplification (e.g., lexical substitution, splitting or deleting). To sum up, our main contributions are: • We create Admin-It, a parallel corpus of sentences for the Italian administrative language containing different simplification styles; 1 • We prove that the Neural Pairwise Ranking Model is also effective for automatic readability assessment of sentences; • We experiment with NPRMs in cross-domain and cross-lingual set-ups, analyzing their performances when fine-tuned with data of different languages and domains, and show that they reach good results in zero-shot scenarios; • We analyze the models' errors according to the styles of simplification applied in different subsections of Admin-It. While ARA is normally a document-level task, we tackle it at the sentence level due to the characteristics of the datasets available in Italian
Early ARA techniques consisted in the so-called "readability formulae". Such formulae were created for educational purposes and mainly considered shallow text features, like word and sentence length or lists of common words However, longer words and sentences are not necessarily complex, and these formulae have been proved to be unreliable NLP and Machine Learning fostered the emergence of "AI readability" systems Recently, Given the paucity of data in the Italian administrative language for sentence readability and simplification, we decided to build Admin-It, a parallel corpus of Italian administrative language. The corpus comprises 736 sentence pairs corresponding to two readability levels: original and simplified. We organized the corpus in three subsets according to the different nature of the applied simplification: Operations (Admin-It OP ): 588 pairs of sen-tences from the subsection of the Simpitiki corpus Rewritten Sents (Admin-It RS ): New 100 pairs of original-simplified sentences. The original sentences were selected from websites of Italian municipalities, Rewritten Docs (Admin-It RD ): 48 pairs of sentences selected from administrative texts, which were collected and simplified by In order to make Admin-It publicly available, we masked potentially sensitive data mentioned in the sentences, such as bank account numbers, addresses, licence numbers, phones and emails. Table 1 reports some quantitative information about the corpus. Admin-It RS has the highest average length of all subsets since, by design, it contains simplifications for very long sentences. Furthermore, both Admin-It RS and Admin-It RD register high Levenshtein distances since these two subsets were simplified through overall rewriting, whereas in Admin-It OP , one single simplification operation per sentence was applied. Examples of sentence pairs can be found in Appendix A (Table In this section, we briefly describe the Neural Pairwise Ranking Model (NPRM) of NPRM for Sentences. In our setting, the input text is sentences instead of documents. Even though the NPRM can rank an arbitrary number of texts in each list of tuples, due to the characteristics of our data, we rank sentences in only two readability levels: complex and simple. Therefore, the input is now a list of two tuples with the vector representations of the original (s o ) and simplified (s s ) versions of the same sentence, and their readabilities. That is No further changes were made to the original model. To validate our adaptation of the model, we examined the performance of the NPRM for ranking sentences in a monolingual setting for English. We fine-tuned it on the OSE corpus (see Sec. 5.1) via 5-Fold cross validation with bert-base-uncased. The resulted ranking accuracy was quite high (0.96) and close to the one obtained by We adapted the released code of We fine-tuned our models using data in three languages (English, Spanish and Italian) and three domains (news, administrative and educational). As a pre-processing step, for all datasets, we filtered out instances where the original and simplified sentences were identical. Simpitiki/Wikipedia (Simpitiki W ): Introduced in Tonelli et al. ( SimPA: This is an English sentence-level simplification corpus in the administrative domain Similarly to For what concerns Baseline L , we decided to focus on sentence length to mimic the behaviour of traditional readability formulae, and because it is a raw text feature that we could easily extract and compare between corpora of different languages. In addition, such baseline assigns a ranking even in cases of ties (see how we handled this in the evalu-ation step in Sec. 5.3). Finally, Baseline L models were trained following different combinations of data, similar to our NPRMs. With regards to Baseline E , the sentence embeddings are obtained from an Italian BERT model that we call BertIta Our models are evaluated in terms of Ranking Accuracy (RA), that is the percentage of pairs ranked correctly. We used the implementation provided by To assess if differences in scores between pairs of models are statistically significant, we used a nonparametric statistical hypothesis test, McNemar's Test We first fine-tuned our models with only Italian data, but not from the administrative domain. Our models were fine-tuned on Simpitiki W , with the NPRM exploiting BertIta. As shown in Table Replacing BertIta with mBERT, We now experiment with adding in-domain data to the previous setting, even if it is in another language. That is, models are now fine-tuned on OSE, NewsEn, NewsEs and SimPA. As shown in Table We proceed to fine-tune our models using out-ofdomain data (i.e., news) in other languages (i.e., English and Spanish). In particular, models are fine-tuned on OSE, NewsEn and NewsEs. Results are reported in Table Despite OSE being smaller than NewsEn and NewsEs, the NPRM fine-tuned on it reached better overall results than when fine-tuned on the other datasets. In particular, even if the differences are not significant, that NPRM achieved a higher RA in Admin-It OP and comparable scores in Admin-It RS . On the other hand, the NPRM fine-tuned on NewsEs obtained a sensible improvement in RA for Admin-It RD , even surpassing Baseline L , although not significantly. The best result for this subset (and on Admin-It overall) is obtained by combining OSE and NewsEs. Adding NewsEs could have helped because Spanish is more similar to Italian than English, both belonging to the same family of Romance languages and therefore sharing similar morphosyntactic structures Finally, combining all three datasets allowed an NPRM to obtain the best results in Admin-It OP and Admin-It RS in this setting. On both subsets, there are significant differences with both the baselines and the NPRMs fine-tuned only on Simpitiki W (p<0.001). When compared to SimPA and to the combination of SimPA and Simpitiki W , the significance is reached only on Admin-It OP (p<0.01). We also experimented with pairwise combinations of the three datasets without substantial improvements (see Appendix C for more scores of these experiments). We analyze where the NPRMs failed when ranking sentence pairs from Admin-It RD and Admin-It OP . We focus on these two subsets of Admin-It given the high results already obtained on Admin-It RS . NPRMs reached the highest RAs in this subset (0.896) when fine-tuned on OSE+NewsEs, OSE+NewsEs+Simpitiki W , or OSE+NewsEn+Simpitiki W . We analyze the errors made by the first model since it also achieved the highest RA (0.785) on the overall dataset among those models. This NPRM failed to rank five out of 48 sentence pairs in Admin-It RD . In some cases, given the same semantic content, punctuation could have affected the scoring because commas split the sentences in various parenthetical expressions (see the first example in Table 4). However, when a sentence contains terms, structures, or formulaic expressions typical of the Italian administrative language, the model ranks the pair correctly regardless of the punctuation, and even in the presence of a higher number of parenthetical expressions in the simplified sentence. In another case, a sentence was classified as complex when information was added to clarify some implicit information. As shown in the second example in Table [Please also inform this Office of the processing of your file by means of the enclosed form or by telephone (0001112), so that it is not held in abeyance.] Simplified: Per poter archiviare la pratica, chiediamo cortesemente di restituirci il modulo allegato, anche via fax, o di inviarci un messaggio di posta elettronica. [In order to be able to file the papers, we kindly ask you to return the attached form to us, also by fax, or send us an e-mail.] Original: L'Ufficio Anagrafe del Comune provvederà d'ufficio alle conseguenti variazioni nel registro della popolazione residente; alla messa in opera delle nuove targhe sull'edificio provvederanno direttamente gli Uffici comunali competenti. Si comunica inoltre che la suddetta variazione viene segnalata direttamente da questo ufficio ai seguenti enti: ENEL, SIT s.p.a. e Servizio Postale. [The Registry Office of the Municipality will provide ex officio for the consequent variations in the register of the resident population; the installation of the new plates on the building will be carried out directly by the competent municipal offices. Please also note that the above-mentioned variation will be notified directly by this office to the following entities: ENEL, SIT s.p.a. and Postal Service.] Simplified: Il Comune aggiornerà d'ufficio quanto di sua competenza (anagrafe, autorizzazioni, tributi, comunicazioni agli enti pensionistici ed all'Azienda Provinciale per i Servizi Sanitari), installerà la targhetta indicante il numero civico e comunicherà la variazione direttamente all'ENEL, alla SIT S.p.A. e all'Ente Poste Italiane. [The municipality will update ex officio all matters within its jurisdiction (registry office, authorisations, tributes, communications to pension authorities and to the Provincial Health Services Agency), install the plaque indicating the house number and communicate the change directly to ENEL, SIT S.p.A. and the Italian Post Office.] Table or in-domain terms (e.g., anagrafe [civil registry], tributi [tributes], enti pensionistici [pension authorities], Azienda Provinciale per i Servizi Sanitari [Provincial Health Services Agency]), which may have affected the pair ranking. Since sentences in Admin-It RD were manually aligned after simplification was performed at the document level, the annotators could better identify the information needed to be added or made explicit. Probably these sentences underwent more insertions than those in Adminit RS . When the simplification is operated directly at the sentence level, in fact, it is more difficult to understand which information to add, since the context is missing. This subset of Admin-It contains sentences from However, despite being in-domain, SimPA does not always help. For example, for sentence pairs containing Reorderings, the NPRM fine-tuned only on SimPA got the lowest RA. This can be explained by the fact that in more than half of the corpus only lexical level simplifications were performed. As also observed by We also analyze the scores obtained on sentence pairs with transformations involving verbal features. Here, the NPRM fine-tuned on OSE is the best, also reaching high scores when adding SimPA or NewsEs+SimPA to the data used for fine-tuning. However, using only SimPA results in the lowest scores in this set. This could be explained by the ARA experiments using OSE performed by Despite our best efforts, we cannot easily explain the performance of the NPRMs on sentence pairs with other operations. However, our analysis already offers some insights into how the models behave, serving as a first step for a more comprehensive study to be carried out in future work. In this paper, we investigated the behavior of a Neural Pairwise Ranking Model (NPRM) for assessing the readability of sentences from the Italian administrative language in zero-shot settings. To deal with data scarcity in this domain, we built Admin-It, a corpus of original-simplified parallel sentences in the Italian administrative language, containing three different styles of simplifications. This corpus allowed us to prove that NPRMs are effective in cross-domain and cross-lingual zeroshot settings, especially when simplifications were produced over single sentences and at several linguistic levels. We also conduced an error analysis and showed that the characteristics of the data used for fine-tuning rather than its size have an impact on a model's performance. In addition, we determined that simplifications where information was added are poorly handled by the models. In future work, we plan to analyze how NPRMs perform on sentences with the same simplification style (e.g., Admin-It RS ) annotated for different degrees of complexity by humans. We also plan to improve Admin-It RS to address the needs of specific targets, such as second language learners, who require the insertion of definitions of technical terms (not provided in the current version). To develop ARA models in this setting, we could leverage the alignments of Srikanth and Li (2021) that focus on elaborative simplifications. Furthermore, we plan to fine-tune models with in-domain data from languages with higher proximity to Italian, e.g., with datasets similar to the one built for Spanish by Table B Cross-domain scenario in English We conduced some preliminary experiments on NPRM at the sentence level. Firstly, we fine-tuned and tested the model based on bert-base-uncased on in-domain data, i.e., an English news corpus, OSE. Testing it via 5-Fold cross validation, we obtained a quite high ranking accuracy (0.959) 14 . Then, we analyzed 14 This experiment is also reported in Sec. 4. the model behavior in a cross-domain scenario on English (see Table In Table As described in Section 7.2, we analyzed the results obtained by some of the fine-tuned models on Admin-It OP , the Admin-It subset where the original-simplified pairs of sentences are rewritten by applying only one operation. The models selected for this analysis are those fine-tuned on a single corpus (i.e., Simpitiki W , OSE, NewsEn, NewsEs, and SimPA) and the best performing ones (i.e., NewsEn+NewsEs+OSE, OSE+NewsEs, OSE+NewsEs+SimPA, and OSE+SimPA). Results are reported in In Figure
1,114
1,897
1,114
A Human-Centric Evaluation Platform for Explainable Knowledge Graph Completion
Explanations for AI are expected to help human users understand AI-driven predictions. Evaluating plausibility, the helpfulness of the explanations, is therefore essential for developing eXplainable AI (XAI) that can really aid human users. Here we propose a human-centric evaluation platform 1 to measure plausibility of explanations in the context of eXplainable Knowledge Graph Completion (XKGC). The target audience of the platform are researchers and practitioners who want to 1) investigate real needs and interests of their target users in XKGC, 2) evaluate the plausibility of the XKGC methods. We showcase these two use cases in an experimental setting to illustrate what results can be achieved with our system.
A Knowledge Graph (KG) is a structured representation of knowledge that captures the relationships between entities. It is composed of triples in the format (subject, relation, object), denoted as t = (s, r, o), where two entities are connected by a specified relation. For example, in the triple (London, isCapitalOf, UK), London and UK are the entities, and isCapitalOf is the relation. These entities can be depicted as nodes in a knowledge graph, while the relation denotes a labeled link connecting the subject to the object. Knowledge graphs are beneficial for many NLP tasks, e.g., fact checking The applicability of KGs in downstream tasks, however, is often limited by their incompleteness the defined entities The embedding based KGC models, however, are black boxes that do not (and cannot) provide explanations of why the model makes a certain prediction. The lack of transparency significantly hampers users' trust and engagement with KGC systems, especially in the high-risk domains, such as medicine We thus target to evaluate what kind of explanations are helpful for the users because ultimately, the explanations should directly aid them. Therefore, it is important to measure the plausibility of the explanations: the extent to which an explanation generated by XAI is comprehensible and beneficial to human users Our evaluation platform offers the following novel contributions. First, it introduces a new evaluation paradigm that assesses how well explanations can assist users in judging the correctness of KGC predictions. In contrast to the prevalent human evaluation paradigm in the literature that requests annotators to simulate AI's behavior With these novel contributions, our evaluation platform can effectively measure plausibility of XKGC methods. Considering the diversity of humans, our system also provides various statistical tools to rigorously and comprehensively analyze the collected feedback for reliable conclusions. Additionally, our evaluation platform aids in identifying genuine requirements from users regarding explanations, thereby it can assist in developing and refining XKGC methods to generate explanations that are centered around human needs. Finally, we formulate our study on human-centric evaluation as practical guidelines, which can be replicated to design evaluations for other use cases in the future.
We build an online system to evaluate XKGCs in a human centric manner. Our system considers the real needs and interests of human users in collaboration with AI, allowing us to investigate: can humans assess correctness of a KGC prediction based on its explanations? Which explanations are helpful for human users? The answers to these questions provide hints for evaluating the ultimate goal of an XAI method: the generated explanations are expected to assist human users in understanding AI-driven predictions. To this end, our platform has two user views: one for researchers to set up a test and the other for testers to give feedback. Researchers can prepare the evaluation study by uploading a JSON file that contains both the predic- tions and possible explanations. Here is an example JSON file including one prediction and its explanation. If the researchers want to evaluate multiple predictions, then they only need to add these predictions in the json file. Each predicted triple is associated with a set of explanation triples. Each explanation triple has a score that indicates their importance, which can be used for filtering and ranking the explanations. This score can e.g. come from the XKGC method. In addition, each prediction has the correct attribute which indicates whether this prediction is correct or not. The false prediction can be viewed as a control setup, which allows us to test whether users can determine if a prediction is correct based on the given explanations. Additionally, it allows us to assess the engagement of testers (see Sec. 5 for details). The probability attribute specifies the likelihood of the predicted triple by the KGC method. After the JSON is uploaded (top-left panel of Figure After the evaluation test has been setup, the researchers can share the link of the online system with the testers to evaluate. The system can work with crowdsourcing websites, e.g. Amazon Mechanical Turk (AMT), to employ testers for human evaluations. Figure The top panels of Figure Once done, the tester can submit the feedback and move on to the next prediction. After the last prediction, we offer the tester an additional form to share any feedback with us. This page can also be used, e.g. to share an identifying code that allows us to utilize the evaluation system with AMT, where the code is used to check completeness and engagement for payment. The system is as a web application consisting of frontend (HTML5/JavaScript) and backend (Python). We will describe the respective components and the data flow (shown in Figure The backend is a Python-based software framework (Flask The frontend is implemented in JavaScript (An-gularJS Figure Our system is deployed on a powerful server with 48 Threads (24 cores), 256 GB memory and 1GB Full-Duplex Internet connection. In theory, it can support more than one thousand testers to visit the evaluation platform. Due to the complexity and costliness of human evaluation, as well as the diversity among human testers, the collected feedback tends to be both limited in quantity and diverse in quality. Consequently, statistical analysis assumes a critical role to draw reliable conclusions from human feedback. We include the following statistical analysis tools in our platform. Power analysis. In human evaluation, there is often an important question: How many testers are necessary to draw a solid conclusion? There is no a universally applicable minimum sample size for obtaining statistically significant results Hypothesis testing. Are the observed results in human feedback statistically significant or simply due to chance? Hypothesis testing, e.g. t-tests, Wilcoxon Signed-Rank test, Mann Whitney test and Brunner-Munzel test, can be employed to measure them. With hypothesis tests, we can distinguish between real effects and random variations in a rigorous manner. Mixed effect analysis. Human feedback is often subject to variability of individual differences, engagement levels, and other random variation. (Linear) mixed effect analysis Correlation Analysis. In addition, correlation analysis can also be applicable to analyze the relations among different metrics. For example, we suggest multiple metrics to quantify plausibility, including: accuracy rate of tester's assessment, confidence of testers, number of helpful explanations, and time cost. Correlation analysis can explore relationships between metrics, and may provide insights into the reliability and validity of the results. Human evaluation can be subject to various biases that may affect the reliability of the conclusions Engagement. Testers often exhibit varying levels of engagement and various thinking modes. To mitigate the impact of tester bias, we propose that each tester assesses ≥ 2 XKGC methods, analyzing the feedback with paired tests, especially when the number of available testers is limited. Additionally, testers' engagement tends to decrease over time. Therefore, it is crucial to impose a constraint on the total evaluation time (e.g. one hour per session). Furthermore, to ensure the testers' proper engagement during the evaluation process, we can randomly assign some straightforward predictions as checkpoints for validation. Equivalency. All testers should evaluate similar set of predictions in a similar order. This is to reduce deviations caused by individual predictions. Diversity. Testers may have the tendency to retain information from previous predictions, which can result in the earlier assessments influencing the later ones. Consequently, we recommend selecting predictions that are as distinct from each other as possible to mitigate this concern. Balance. Predictions should be balanced. Specifically, numbers of correct and erroneous predictions should be similar, and the order of predictions should be random, such that testers cannot simply guess prediction results. Human-understandable benchmark data. The data used in a human evaluation needs to be human understandable, otherwise testers have no clue how to assess predictions and explanations. While a seemingly obvious statement, in practice we found it difficult to find KGC data that satisfies this constraint. In addition, testers recruited for a human evaluation are often lay people, not professionals of an area, thus plain datasets without domain-specific knowledge (such as biology and healthcare) would be better. If the evaluated XKGCs are domain specific, e.g., disease diagnosis, then specialists should accordingly be employed. To demonstrate what results and findings can be acquired with the proposed system, we conducted two evaluations. XAI is human-centric in nature. There is no onefor-all solution to meet all users' expectations. Our human-centric evaluation platform can help the researchers and practitioners interview their users to find: (1) what the users really need for understanding the KGC predictions in their applications, and (2) whether the generated explanations by their methods make sense for their users. We conducted a series of interviews with the evaluation system. A human-understandable KGC dataset was selected as benchmark data. We used the kinship dataset With the evaluation system, we visualized the predictions and their explanations to the testers and interviewed: what will be a helpful explanations for them? and why do they think an explanation helpful? The interview is summarized in Table First, the interviews revealed that the testers often search for "paths" that link the nodes of the predicted triple to the nodes of explanations. See for example the "triangle" explanation in left panel of Figure 5 interviewees: 3 with machine learning background, 2 with good understanding about users of their AI system. Guide A guide is created, including textand video-introduction to the evaluation platform. 1. What will be a helpful explanations for users? 2. Why do users think an explanation helpful? Table Second, testers often find a rather small set of explanations helpful (2-3) and remark that a large number of explanations (e.g. >10) create confusion. Third, often it would be helpful for testers to have additional information from the knowledge graph -but this additional information was not identified by Method A. For example, Method A cannot create an explanation linking four entities, such as in right panel of Figure We also used the evaluation platform to compare two XKGC methods: which would be more helpful for users. The kinship dataset 1 Each tester evaluates 14 predicted triples to keep their engagement. The first two triples serve as practice to facilitate testers understanding and comfort with the system and the questions. The feedback is not included in statistical analysis. 3 The rest of the triples are different from each other. Each is randomly drawn from a unique relation (12 relation types in total in the dataset). Half of triples are correctly or incorrectly predicted to avoid dummy feedback. Paired test is employed. Half of triples are randomly selected for either XAI method. 6 The predicted triples are randomly shuffled. All testers evaluate the same set of predicted triples in the same order for fairness. Table 30 testers are invited to evaluate the predictions, following the steps illustrated in Section 2. We received the feedback from 23 of them. For each tester (anonymous) and each prediction, our platform collected the metrics: accuracy of assessment (denoted as Acc), confidence of assessment, number of helpful explanations (denoted as helpExpl), and time cost. Our platform also provides diverse statistical tools (see Section 4) to analyze the measurements, e.g. the results shown in the bottom panel of Figure Human evaluation has attracted increasing attention in XAI research due to its ultimate goal of aiding human to understand AI predictions. Many evaluation benchmarks are based on simulatability Existing KGC evaluation platforms focus on measurement of prediction performance. For instance, AI explanations only achieve their goal if the explanation is helpful to the human user. To measure this, we present a human-centric evaluation platform in the context of explainable knowledge graph completion. Distinguishing from the simulatabilitybased evaluation, our system assesses how well explanations assist users in judging the correctness of KGC predictions, and thus aligns better with human-AI interaction systems, where AI facilities humans rather than the other way around. To alleviate possible biases, we provide a set of guidelines in experiment design, and diverse analysis tools for reliable conclusions. The experiments demonstrate the findings and results that can be acquired with the proposed system.
721
2,363
721
Privacy Implications of Retrieval-Based Language Models
Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts by incorporating retrieved text from external datastores. While it is well known that parametric models are prone to leaking private data, it remains unclear how the addition of a retrieval datastore impacts model privacy. In this work, we present the first study of privacy risks in retrieval-based LMs, particularly kNN-LMs. Our goal is to explore the optimal design and training procedure in domains where privacy is of concern, aiming to strike a balance between utility and privacy. Crucially, we find that kNN-LMs are more susceptible to leaking private information from their private datastore than parametric models. We further explore mitigations of privacy risks: When privacy information is targeted and readily detected in the text, we find that a simple sanitization step would eliminate the risks while decoupling query and key encoders achieves an even better utility-privacy trade-off. Otherwise, we consider strategies of mixing public and private data in both datastore and encoder training. While these methods offer modest improvements, they leave considerable room for future work. Together, our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs 1 .
Retrieval-based language models
Email: [email protected] [email protected] [email protected] … URL: [email protected] [email protected] … text passages that are most relevant to the prompt provided to the model. These retrieved results are then utilized as additional information when generating the model's response to the prompt. Retrievalbased language models offer promising prospects in terms of enhancing interpretability, factuality, and adaptability. However, in privacy-sensitive applications, utility usually comes at the cost of privacy leakage. Recent work has shown that large language models are prone to memorizing In this work, we present the first study of privacy risks in retrieval-based language models, with a focus on the nearest neighbor language models (kNN-LMs) We begin our investigation by examining a situation where the creator of the model only adds private data to the retrieval datastore during inference, as suggested by We further explore mitigation strategies for kNN-LMs in two different scenarios. The first is where private information is targeted, i.e., can be easily identified and removed (Section 5). We explore enhancing the privacy of kNN-LMs by eliminating privacy-sensitive text segments from both the datastore and the encoder's training process. This approach effectively eliminates the targeted privacy risks while resulting in minimal loss of utility. We then explore a finer level of control over private information by employing distinct encoders for keys (i.e., texts stored in the datastore) and queries (i.e., 2 Other retrieval-based language models such as RETRO prompts to the language model). Through our experimental analysis, we demonstrate that this design approach offers increased flexibility in striking a balance between privacy and model performance. The second is a more challenging scenario where the private information is untargeted, making it impractical to remove from the data (Section 6). To address this issue, we explore the possibility of constructing the datastore using public datapoints. We also consider training the encoders of the kNN-LM model using a combination of public and private datapoints to minimize the distribution differences between the public data stored in the datastore and the private data used during inference. Despite modest improvements from the methods we explored, the mitigation of untargeted attacks remains challenging and there is considerable room for future work. We hope our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs. In this section, we first review the key components of kNN-LMs (Section 2.1). Then, we discuss the data extraction attacks for language models (Section 2.2). These aspects lay a foundation for the subsequent exploration and analysis of privacy risks related to kNN-LMs. A kNN-LM Encoders Given a vocabulary V, the encoder Enc K or Enc Q performs the task of mapping a given key or query c ∈ V * to a fixed-length vector representation. Typically, this encoding process is accomplished through a trained language model, where Enc K (c) or Enc Q (c) represents the vector hidden representation obtained from the output layer of the language model when provided with the input c. Although in the default kNN-LMs, Enc K and Enc Q are commonly identical functions, we explore different options in the work. Datastore The datastore, {(Enc K (c i ), w i )}, is a key-value store generated by running the encoder Enc K (•) over a corpus of text; Each key is the vector representation Enc K (c i ) for some context c i ∈ V * , and each value w i ∈ V is the ground-truth next word for the leftward context c i . A search index is then constructed based on the key-value store to enable retrieval. Inference At inference time, when predicting the next token for a query x ∈ V * , the model queries the datastore with encoded query Enc Q (x) to retrieve x's k-nearest neighbors N k according to a distance function d(•, •) 3 . Then the model computes a softmax over the (negative) distances, which gives p kNN (y|x), a distribution over the next token, in proportional to: where t is a temperature parameter, and k is a hyper-parameter that controls the number of retrieved neighbors. The prediction is then interpolated with p LM , the prediction from the original LM: where λ is an interpolation coefficient. Prior work The attack consists of two main steps: 1) generating candidate reconstructions by prompting the trained models, and 2) sorting the generated candidates based on a score that indicates the likelihood of being a memorized text. Further details about the attack can be found in Appendix A. While previous research has successfully highlighted the risks associated with data extraction in parametric language models, there remains a notable gap in our understanding of the risks (and any potential benefits) pertaining to retrieval-based language models like kNN-LMs. This study aims to address this gap and provide insights into the subject matter. In this section, we formally describe our problem setup (Section 3.1) and privacy measurements (Section 3.2). We then detail our evaluation setup (Section 3.3). We consider a scenario where a service provider (e.g. a financial institution) aims to enhance its customer experience by developing a kNN-LM and deploying it as an API service. Note that the development of kNN-LMs intended solely for personal use (e.g., constructing a kNN-LM email autocompleter by combining a public LM with a private email datastore) falls outside the scope of our study because it does not involve any attack channels that could be exploited by potential attackers. We assume that the service provider possesses its own private data (D private ) specific to its domain, in addition to publicly available data (D public ). We identify two key design choices which impact the quality and privacy of such a deployed service. First, the service provider chooses which data to be included in its datastore, and this may be public data (D public ), private data (D private ), or a mix of both. Second, they choose whether to use encoders that are pre-trained on publicly available data (Enc public ), or further finetuned on the private data (Enc private ). We posit that careful consideration of these design choices is needed to establish a balance between privacy preservation and utility. The service provider in such a scenario is concerned with making a useful API, while keeping their private data hidden from malicious users or attackers. Hence, the service provider's objective is to attain a high level of utility (as measured by perplexity) on a held-out set of D private while simultaneously minimizing the disclosure of private information. We quantify the metrics we consider for privacy in Section 3.2. We now describe how we evaluate the risk of data extraction attack within the scenario described earlier in Section 3.1. Threat model We assume that the service provider deploys a kNN-LM with API access to p(y|x). This API provides the attacker with the capability to compute perplexity, conduct text completion, and perform other relevant tasks. However, it's important to note that the attacker is restricted from accessing the internal parameters or the datastore of the deployed model. Our study considers two types of privacy risks, each associated with a particular type of attack: Targeted attacks We define targeted risk as a privacy risk that can be directly associated with a segment of text (e.g., personal identifiers such as addresses and telephone numbers.) and propose the targeted attack. The significance of a targeted attack becomes apparent when considering that targeted risks have been explicitly addressed in various privacy regulations (Centers for • We firstly detect all unique personal identifiers in the private dataset, denoted as {ρ i } p i=1 ; • We then sort the reconstruction candidates based on the membership metrics defined in Appendix A, and only keep the top-n candidates {c i } n i=1 ; • Finally, we detect {ρ i } q i=1 , the unique PIIs in the top-n candidates, and then count |{ρ i } p i=1 ∩ {ρ i } q i=1 |, namely how many original PIIs have been successfully reconstructed by the attack. A larger number means higher leakage of private PIIs. The untargeted attack is the case where the attacker aims to recover the entire training example, rather than a specific segment of text. Such attacks can potentially lead to the theft of valuable private training data. We adopt the attack proposed by • We firstly sort the reconstruction candidates based on the membership metrics defined in Appendix A, and only keep the top-n candidates {c i } n i=1 ; • For each candidate c i , we then find the closest example in the private dataset p i and compute the ROUGE-L score Note that the attack's performance evaluation employs the private dataset following established reconstruction attack practices, the attack itself never utilizes this dataset. Our main evaluation uses the Enron Email dataset We pre-process the Enron Email dataset by retaining only the email body (see Table This section presents our investigation of whether the addition of private data to the retrieval datastore during inference is an effective method for achieving a good trade-off between privacy (mea-sured by metrics defined in Section 3.2) and utility (measured by perplexity) in kNN-LMs. We are particularly interested in three scenarios: utilizing only Enc public (the publicly pretrained language model), utilizing only Enc private (the model fine-tuned from Enc public using private data), and utilizing Enc public with D private (the combination of the public model with the private datastore). As shown in Table When it comes to kNN-LMs, incorporating a private datastore (D private ) with a public model (Enc public ) yields even greater utility compared to relying solely on the fine-tuned model (Enc private ). However, this utility improvement also comes at the expense of increased privacy leakage. These findings suggest that the privacy concern stemming from the private datastore outweighs that resulting from the privately fine-tuned model, indicating a lack of robust privacy protection in the design of kNN-LMs. Additionally, we note that the combination of Enc private and D private achieves the highest utility but also incurs the highest privacy cost. Our previous findings indicate that the personalization of kNN-LMs with a private datastore is more susceptible to data extraction attacks compared to fine-tuning a parametric LM with private data. At the same time, leveraging private data offers substantial utility improvements. Is there a more effective way to leverage private data in order to achieve a better balance between privacy and utility in kNN-LMs? In this section we focus on addressing privacy leakage in the context of targeted attacks (see definition in Section 3.2), where the private information can be readily detected from text. We consider several approaches to tackle these challenges in Section 5.1 and Section 5.2, and present the results in Section 5.3. We also investigate the effect of hyper-parameters in Section 5.4. As demonstrated in Section 4, the existence of private examples in the kNN-LMs' datastore increase the likelihood of privacy leakage since they are retrieved and aggregated in the final prediction. Therefore, our first consideration is to create a sanitized datastore by eliminating privacy-sensitive text segments. We note that this verbatim level definition of "privacy leakage" is general and widely adopted. Notably, regulations such as HIPAA (Centers for Medicare & Medicaid Services, 1996) and CCPA (California State Legislature , 2018) offer explicit definitions of privacy-sensitive data. Consequently, these regulatory frameworks can serve as the basis for establishing the verbatim-level definition of "privacy leakage". For example, HIPAA defines 18 identifiers that are considered personally identifiable information (PII), including names, addresses, phone numbers, etc. We propose the following three options for sanitization: • Replacement with < |endoftext| >: replace each privacy-sensitive phrase with the < |endoftext| > token; • Replacement with dummy text: replace each privacy-sensitive phrase with a fixed dummy phrase based on its type. For instance, if telephone numbers are sensitive, they can be replaced with "123-456-789"; and • Replacement with public data: replace each privacy-sensitive phrase with a randomly selected public phrase of a similar type. An example is to replace each phone number with a public phone number on the Web. The encoders in a kNN-LM is another potential source of privacy leakage. While it is typically optimized on target domain data to enhance performance, fine-tuning directly on private data in privacy-sensitive tasks may result in privacy leaks (Table We propose using separate encoders for keys and queries in kNN-LMs, to allow for finer control over privacy preservation. For example, the encoder for queries can be the sanitized encoder, while the encoder for keys can be the non-sanitized one; This way, the query encoder can be more resistant to privacy leakage, while the keys encoder can provide better query results. While it is not a common practice in kNN-LMs, we view the separation of key and query encoders as a promising approach to reduce the discrepancy between the prompt and the datastore, and reduce privacy leakage. The privacy risk of a kNN-LM can also be impacted by its hyper-parameters such as the number of neighbors k, and the interpolation coefficient λ. It is important to consider these hyper-parameters in the customization of the kNN-LMs to ensure that the privacy-utility trade-off is well managed. As demonstrated in Table We finally analyze the impact of key hyperparameters on utility and privacy risks in kNN-LMs, using D private as datastore and Enc private for both Enc K and Enc Q . First, we vary λ, the inter- polation coefficient, and observe that increasing λ decreases perplexity but increases privacy risk (see Figure In this section, we explore potential methods to mitigate untargeted risks in kNN-LMs, which is a more challenging setting due to the opacity of the definition of privacy. It is important to note that the methods presented in this study are preliminary attempts, and fully addressing untargeted risks in kNN-LMs still remains a challenging task. Considering that storing D private in the datastore is the primary cause of data leakage (as discussed in Section 4), and the challenge of sanitizing private data in the face of untargeted risks, we propose the following approaches to leverage public data for mitigating these risks. Adding public data to datastore The quality of the retrieved neighbors plays a crucial role in the performance and accuracy of kNN-LMs. Although it is uncommon to include public datapoints that are not specifically designed for the task or domain into kNN-LMs' datastore, it could potentially aid in reducing privacy risks in applications that prioritize privacy. This becomes particularly relevant in light of previous findings, which suggest substantial privacy leakage from a private datastore. Fine-tuning encoders on private data with DP-SGD Differentially private stochastic gradient descent (DP-SGD) Fine-tuning encoders on a mixture of public and private data However, adding public data can potentially lead to a decrease in retrieval performance as there is a distribution gap between the public data (e.g., Web Crawl data) used to construct the datastore and the private data (e.g., email conversations) used for encoder fine-tuning. To address this issue, we propose further fine-tuning the encoder on a combination of public and private data to bridge the distribution gap and improve retrieval accuracy. The ratio for combining public and private datasets will be determined empirically through experimentation. Similarly to Section 5.2, we could also employ separate encoders for keys and queries in the context of untargeted risks, which allows for more precise control over privacy preservation. We mainly present our findings using the Enron Email dataset. In Appendix B, we provide results from the Medical Transcriptions dataset, and those findings align with our main findings. Table Using a public datastore reduces privacy risk but also results in a sudden drop in utility. If more stringent utility requirements but less strict privacy constraints are necessary, adding a few private examples to the public datastore, as shown in Table Table We also note that fine-tuning the encoder using DP-SGD only helps slightly reduce the extraction risk, despite the relatively strict privacy budget ε = 10.0. This is because due to the existence of a private datastore, each inference query in the kNN-LM process incurs supplementary privacy costs, leading to the final kNN-LM model not satisfying the ε-Differential Privacy criteria. We further try fine-tuning the encoder using a combination of public and private data, which results in Enc mixed . The training dataset comprises the entire set of private data of size N priv and N priv × r public data, where r takes values from {0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0}. We present attack results using r = 0.05 as it achieves the best perplexity. As shown in Table 7 Related Work Retrieval-based language models Language models have been shown to tend to memorize Although previous research has demonstrated the potential risks of data extraction in parametric language models, our study is the first investigation of the privacy risks associated with retrieval-based language models; we also propose strategies to mitigate them. The closest effort is This work presents the first study of privacy risks in retrieval-based language models, specifically focusing on kNN-LMs. Our objective is to investigate designs and training methodologies for kNN-LMs that strike a better privacy-utility trade-off. There are several conclusions from our investigation. First, our empirical study reveals that incorporating a private datastore in kNN-LMs leads to increased privacy risks (both targeted and untargeted) compared to parametric language models trained on private data. Second, for targeted attacks, our experimental study shows that sanitizing kNN-LMs to remove private information from both the datastore and encoders, and decoupling the encoders for keys and queries can eliminate the privacy risks without sacrificing utility, achieving perplexity of We discuss the limitations of this work as follows. • The current study mainly demonstrates the privacy implications of nearest neighbor language models, but there are many other variants of retrieval-based language models, such as RETRO • In the current study, we use WikiText-103 as the public domain for Enron Email, and PubMed-Patients for Medical Transcriptions. While we believe that these choices of public datasets are realistic, it is important to recognize that this selection may restrict the generalizability of our findings. We acknowledge this limitation and leave the exploration of alternative options for the public dataset as a direction for future work. • Furthermore, an unexplored aspect of our study is the potential combination of proposed strategies, such as decoupling keys and query encoders, with more diverse privacytechniques. A.1 Untargeted Attack The attacker generates candidates for reconstructions via querying the retrieval-augmented LM's sentence completion API with contexts. Following Carlini et al. ( Sort candidates by calibrated perplexity The second step is to perform membership inference on candidates generated from the previous step. We are using the calibrated perplexity in our study, which has been shown to be the most effective membership metric among all tested ones by The perplexity measures how likely the LM is to generate a piece of text. Concretely, given a language model f θ and a sequence of tokens x = x 1 , . . . , x l , Perplexity(f θ , x) is defined as the exponentiated average negative log-likelihood of x: A low perplexity implies a high likelihood of the LM generating the text; For a retrieval-augmented LM, this may result from the LM has been trained on the text or has used the text in its datastore. However, perplexity may not be a reliable indicator for membership: common texts may have very low perplexities even though they may not carry privacy-sensitive information. Previous work Prompts for the targeted attack We gather common preceding context for telephone numbers, email addresses, and URLs, and use them as prompts for the targeted attack. Table Attack parameters For the untargeted attack, we generate 100,000 candidates, and for the targeted attack, we generate 10,000 candidates. We use beam search with repetition penalty = 0.75 for the generation. Here, ε ∈ R >0 , δ ∈ [0, 1) are privacy parameters quantifying the privacy guarantee of the algorithm. DP Stochastic Gradient Descent (DP-SGD) It then clips the gradient ℓ 2 -norm to a maximum ℓ 2 -norm of C: Finally, it produces the private gradient ĝt by injecting Gaussian noise into the sum of the clipped per-example gradients: where (0, σ 2 C 2 I) is a Gaussian distribution with mean 0 and covariance σ 2 C 2 I, and the noise multiplier σ is computed from (ε, δ) by inverse privacy accounting (e.g., We also evaluate whether DP can mitigate extraction risks in kNN-LMs. Specifically, we fine-tune the pre-trained LM on the private dataset with DP-SGD. We vary the privacy budget ε and fix the failure probability δ to be 1/N , where N is the number of training examples. It's important to acknowledge that due to the utilization of a private datastore, each inference query in the kNN-LM Example #1: PAST MEDICAL HISTORY:, He has difficulty climbing stairs, difficulty with airline seats, tying shoes, used to public seating, and lifting objects off the floor. He exercises three times a week at home and does cardio. He has difficulty walking two blocks or five flights of stairs. Difficulty with snoring. He has muscle and joint pains including knee pain, back pain, foot and ankle pain, and swelling. He has gastroesophageal reflux disease... Example #2: HISTORY OF PRESENT ILLNESS: ,This is a 55-year-old female with a history of stroke, who presents today for followup of frequency and urgency with urge incontinence. This has been progressively worsening, and previously on VESIcare with no improvement. She continues to take Enablex 50 mg and has not noted any improvement of her symptoms. The nursing home did not do a voiding diary. She is accompanied by her power of attorney... Example #3: EXAM: , Ultrasound examination of the scrotum.,REASON FOR EXAM: , Scrotal pain.,FINDINGS: ,Duplex and color flow imaging as well as real time gray-scale imaging of the scrotum and testicles was performed. The left testicle measures 5.1 x 2.8 x 3.0 cm. There is no evidence of intratesticular masses. There is normal Doppler blood flow. The left epididymis has an unremarkable appearance. There is a trace hydrocele... Example #4: TESTICULAR ULTRASOUND,REASON FOR EXAM: ,Left testicular swelling for one day.,FINDINGS: ,The left testicle is normal in size and attenuation, it measures 3.2 x 1.7 x 2.3 cm. The right epididymis measures up to 9 mm. There is a hydrocele on the right side. Normal flow is seen within the testicle and epididymis on the right.,The left testicle is normal in size and attenuation, it measures 3.9 x 2.1 x 2.6 cm... Example #5: PHYSICAL EXAMINATION: , The patient is a 63-year-old executive who was seen by his physician for a company physical. He stated that he was in excellent health and led an active life. His physical examination was normal for a man of his age. Chest x-ray and chemical screening blood work were within normal limits. His PSA was elevated.,IMAGING:,Chest x-ray: Normal.,CT scan of abdomen and pelvis: No abnormalities... process incurs supplementary privacy costs, leading to the final kNN-LM model not satisfying the (ε, δ)-Differential Privacy criteria. As demonstrated in Table We primarily showcase our findings using the Enron Email dataset in the main paper, as its inclusion of personally identifiable information (PII) enables us to effectively evaluate both targeted and untargeted attacks. To validate our findings, we hereby replicate our experiments specifically for untargeted attacks on the Medical Transcriptions dataset. The Medical Transcriptions dataset The preliminary findings presented in Table We also observe on the Medicial Transcriptions dataset that separating the key and query encoders yields better results in striking a favorable trade-off between privacy and utility. As shown in Table
1,389
31
1,389
End of preview. Expand in Data Studio

Description

This dataset contains filtered ACL papers between certain abstract length ranges. The data includes columns such as paper_name, year, venue, url, bibkey, and cite_acl.

We also have the filtered_dataset.jsonl that holds the main text info.

Note: Some records might be missing certain fields, especially bibkey or cite_acl. We plan to fill them via partial manual / fuzzy matching.

License

Because these are ACL materials, the dataset is non-commercial for older content and requires attribution. For more details, see ACL policy.

Citation

If you use this dataset, please cite the original ACL papers accordingly:

Downloads last month
49