metadata (dict) | paper (dict) | review (dict) | citation_count (int64) | normalized_citation_count (int64) | cited_papers (list) | citing_papers (list) |
---|---|---|---|---|---|---|
{
"id": "lt8bRpT0ejq",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=lt8bRpT0ejq",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "vUb2g5wPI8",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200351",
"forum_link": "https://openreview.net/forum?id=vUb2g5wPI8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Neural Topical Expansion Framework for Unstructured Persona-Oriented Dialogue Generation",
"authors": [
"Minghong Xu",
"Piji Li",
"Haoran Yang",
"Pengjie Ren",
"Zhaochun Ren",
"Zhumin Chen",
"Jun Ma"
],
"abstract": "Unstructured Persona-oriented Dialogue Systems (UPDS) has been demonstrated effective in generating persona consistent responses by utilizing predefined natural language user persona descriptions (e.g., “I am a vegan”). However, the predefined user persona descriptions are usually short and limited to only a few descriptive words, which makes it hard to correlate them with the dialogues. As a result, existing methods either fail to use the persona description or use them improperly when generating persona consistent responses. To address this, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), which is able to extend the predefined user persona description with semantically correlated content before utilizing them to generate dialogue responses. PEE consists of two main modules: persona exploration and persona exploitation. The former learns to extend the predefined user persona description by mining and correlating with existing dialogue corpus using a variational auto-encoder (VAE) based topic model. The latter learns to generate persona consistent responses by utilizing the predefined and extended user persona description. In order to make persona exploitation learn to utilize user persona description more properly, we also introduce two persona-oriented loss functions: Persona-oriented Matching (P-Match) loss and Persona-oriented Bag-of-Words (P-BoWs) loss which respectively supervise persona selection in encoder and decoder. Experimental results show that our approach outperforms state-of-the-art baselines, in terms of both automatic and human evaluations.",
"keywords": [],
"raw_extracted_content": "A Neural Topical Expansion Framework for\nUnstructured Persona-Oriented Dialogue Generation\nMinghong Xu1and Piji Li2∗and Haoran Yang3and Pengjie Ren4\nand Zhaochun Ren5∗and Zhumin Chen6and Jun Ma7\nAbstract. Unstructured Persona-oriented Dialogue Systems\n(UPDS) has been demonstrated effective in generating persona con-\nsistent responses by utilizing predefined natural language user per-sona descriptions (e.g., “I am a vegan”). However, the predefined\nuser persona descriptions are usually short and limited to only a few\ndescriptive words, which makes it hard to correlate them with the di-\nalogues. As a result, existing methods either fail to use the persona\ndescription or use them improperly when generating persona consis-\ntent responses. To address this, we propose a neural topical expansion\nframework, namely Persona Exploration and Exploitation (PEE),\nwhich is able to extend the predefined user persona description with\nsemantically correlated content before utilizing them to generate di-\nalogue responses. PEE consists of two main modules: persona explo-\nration and persona exploitation. The former learns to extend the pre-\ndefined user persona description by mining and correlating with ex-\nisting dialogue corpus using a variational auto-encoder (V AE) based\ntopic model. The latter learns to generate persona consistent re-\nsponses by utilizing the predefined and extended user persona de-\nscription. In order to make persona exploitation learn to utilize user\npersona description more properly, we also introduce two persona-\noriented loss functions: Persona-oriented Matching (P-Match) loss\nand Persona-oriented Bag-of-Words (P-BoWs) loss which respec-\ntively supervise persona selection in encoder and decoder. Experi-\nmental results show that our approach outperforms state-of-the-art\nbaselines, in terms of both automatic and human evaluations.\n1 Introduction\nPersona-oriented dialogue systems have attracted an increasing at-\ntention as they can generate persona consistent responses [3, 8,12, 14]. Existing persona-oriented dialogue systems can be classi-fied into two categories: Structured Persona-oriented Dialogue Sys-\ntems (SPDS) [19, 32, 33] and Unstructured Persona-oriented Di-\nalogue Systems (UPDS) [24, 31]. The former directly uses struc-\ntured user persona descriptions in the form of key-value pairs (e.g.,\n/angbracketleftSEX,M /angbracketright,/angbracketleftAGE,18/angbracketright), whereas the latter mines user persona de-\nscriptions from natural language utterances (e.g., “I like music.”, “ I\nlike the guitar .”, “I am a vegan.”). In this work, we focus on UPDS.\n1Shandong University, China, email: [email protected]\n2Tencent AI Lab, China, email: [email protected]\n3The Chinese University of Hong Kong, email: [email protected]\n4University of Amsterdam, The Netherlands, email: [email protected]\n5Shandong University, China, email: [email protected]\n6Shandong University, China, email: [email protected]\n7Shandong University, China, email: [email protected]\n∗Piji Li and Zhaochun Ren are corresponding authors.Table 1. An example of unstructured persona-oriented dialogue system.\n1. I like music.\nPersonas for 2. I like to skateboard.\nSpeaker B 3. I like the guitar.\n4 .Ia mavegan.\nA(u1): Wanna come over and watch the godfather?\nB(u2): I do not have a car, I have a skateboard.\nA(u3): Y ou can skateboard over. I do not live too far. 
I\nhave candy and soda to share.\nDialogue B(u4): No thanks, I do not eat any animal products.\nA(u5): I promise there are no animal products in my\ncandy and soda.\nB(u6): Most candy has some form of dairy.A sa vegan I\ncan not have that.\nRecently, there have been some studies which utilize the pre-\ndefined user persona descriptions to generate persona-oriented re-\nsponses [10, 24, 31]. However, the given user persona descriptionsare mostly short and limited to only a few descriptive words. As\na result, the existing methods have a hard time utilizing the user\npersona descriptions when generating responses. On the one hand,\nthey might fail to use user persona descriptions. For example, the\ngenerative profile memory network proposed in [31] simply attends\nover encoded persona description in decoder. It generates response “I\nhave a lot of candies. I am not sure.” for the case in Table 1 without\nconsidering user persona. On the other hand, they cannot use user\npersona descriptions properly sometimes. For example, the persona-\nCV AE proposed in [24] uses force decoding strategy to copy per-\nsona description. It generates response “I like to skateboard. What\nare your hobbies?” for the case in Table 1 which use the selected\npersona improperly and seriously affects its quality. The reason is\nthat with the limited descriptive words, it is hard for these models to\nunderstand and correlate the user persona descriptions when gener-\nating responses. We argue that this could be alleviated by extendingthe predefined user persona descriptions with semantically correlatedcontent. As shown in Table 1, the target is to generate the last utter-\nance (u\n6) based on the given persona descriptions and historical ut-\nterances (u 1-u5). One of the user persona descriptions for Speaker B\nis “I am a vegan”. However, only using this user persona description\nis not enough to generate huaman-like response (u 6) because “vegan”\nand “candy” are not directly related. In order to generate u 6, we need\nto take the following content into consideration simultaneously: (1)\nthe word “vegan” in B’s user persona description; (2) the semantic\ncorrelation between “vegan” and “dairy”; (3) speaker A mentioned\n“animal products” and “candy” in the query utterance; (4) the corre-\nlation among “dairy”, “animal products”, and “candy”.\nIn this work, we propose a neural topical expansion framework,\nnamely Persona Exploration and Exploitation (PEE), which is able\nto extend the predefined user persona descriptions with semanti-ECAI 2020\nG.D. Giacomo et al. (Eds.)\n© 2020 The authors and IOS Press.\nThis article is published online with Open Access by IOS Press and distributed under the terms\nof the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).\ndoi:10.3233/FAIA2003512244\ncally correlated content before utilizing them to generate dialogue\nresponses. PEE consists of two main modules: persona explorationand persona exploitation. The former learns to extend the prede-fined user persona descriptions by mining and correlating with ex-\nisting dialogue corpus. Specifically, we employ a V AE-based topic\nmodel to conduct the unsupervised semantic modeling and extendpersona-related words by semantic matching. The latter learns to\ngenerate persona consistent responses by utilizing the predefined\nand extended persona information. 
Specifically, we design a mutual-reinforcement multi-hop memory retrieval mechanism which retrieves information from two types (predefined and extended) of personas by considering their mutual influence. Furthermore, in order to make persona exploitation learn to utilize user persona descriptions more properly, we also introduce two persona-oriented loss functions: the P-Match loss and the P-BoWs loss. The P-Match loss supervises the choice of predefined persona sentences in the encoder. The P-BoWs loss supervises the decoder to generate more persona-related words.

The main contributions of this paper are as follows:
• We propose a persona exploration and exploitation (PEE) framework which can explore and exploit persona information to generate informative persona-oriented responses.
• We employ a VAE-based topic model to conduct efficient unsupervised semantic learning for external persona information mining and distillation.
• We propose two learning strategies for persona exploitation: a mutual-reinforcement multi-hop memory retrieval mechanism and two persona-oriented loss functions.

2 Related Work

As a challenging task in the area of natural language processing, open-domain dialogue systems have attracted great attention from researchers recently [15, 20, 22, 27]. But there are still some limitations and challenges in this area. Among the many issues, the lack of consistency is one of the most challenging difficulties. Therefore, persona-based dialogue systems have been proposed to generate persona-consistent and human-like responses [5, 16, 17, 26, 30]. Li et al. [8] learn a user embedding to represent persona implicitly for each user without using explicit persona information. Later, researchers modeled user embeddings with explicit persona information to generate responses. According to the format of persona information, those methods can be classified into two categories: Structured Persona-oriented Dialogue Systems (SPDS) and Unstructured Persona-oriented Dialogue Systems (UPDS).

In SPDS, Wang et al. [28] group users according to the gender attribute so that dialogue features in the same group can be shared. Qian et al. [19] endow the user with explicit structured persona information (a key-value table) and design a profile detection module to select persona information and inject it into the decoding process. Luo et al. [12] encode user persona descriptions into distributed embeddings and take advantage of conversation history from other users with similar profiles; their model can adopt different recommendation policies based on the user profile. Due to the lack of a large-scale persona-labelled dataset, Zheng et al. [32] introduce a dataset where persona information is formulated as key-value pairs from dialogue content, and they devise two techniques to capture and address trait-related information. In UPDS, Zhang et al. [31] contribute a persona-chat dataset with natural-sentence persona information, and they propose a generative profile memory network to incorporate persona information into responses. Lin et al. [10] model learning different personas as different tasks via a meta-learning algorithm without using explicit persona information, since dialogue itself can reflect some persona information. In this way, their model can generate personalised responses by leveraging only a few dialogue samples instead of human-designed persona descriptions. To generate diverse and sustainable conversations, Song et al.
[24] propose a memory-augmented architecture to exploit persona information and utilize a conditional variational autoencoder to address the one-to-many generation problem.

Prior studies are trained purely on the predefined persona corpus, but the limited information leads to generating uninformative responses. Different from them, we employ a VAE-based topic model to extend persona information and propose two strategies (a mutual-reinforcement multi-hop memory retrieval mechanism and two persona-oriented loss functions) to integrate persona information into responses.

3 Method

3.1 Overview

We assume that a conversation is conducted between two users. Given a target user, we denote the user's persona descriptions as P = (P_1, P_2, \ldots, P_{n_p}). Each persona sentence P_j is formulated as P_j = (p^j_1, p^j_2, \ldots, p^j_{l_p}), where p^j_i refers to a word. Suppose there are already k turns in a dialogue, so we have historical utterances X = (X_1, X_2, \ldots, X_k), where each utterance X_i is depicted as X_i = (x^i_1, x^i_2, \ldots, x^i_{l_x}) and x^i_j denotes a word. Accordingly, unstructured persona-oriented dialogue generation aims to predict the (k+1)-th utterance, i.e., the response Y = (y_1, y_2, \ldots, y_{l_y}), according to the predefined persona descriptions P and the historical utterances X:

p(Y|X,P) = \prod_{i=1}^{l_y} p(y_i \mid X, P, y_1, \ldots, y_{i-1}).  (1)

As illustrated in Figure 1, our PEE framework mainly consists of two stages: persona exploration and persona exploitation. Persona exploration employs a VAE-based topic model to conduct unsupervised semantic modeling and obtains topic-relevant word representations. Then it extends persona-related words by semantic matching based on the predefined persona descriptions. Persona exploitation contains three components: (1) a multi-source sequence encoder, which encodes predefined persona descriptions into two kinds of key-value memories and encodes historical utterances into hidden vectors; (2) persona information retrieval, which selects predefined persona descriptions based on historical utterances and considers the impact of personalized information involved in the history; (3) a persona-oriented response decoder, which exploits the predefined and explored external persona information to generate responses based on the specially designed mutual-reinforcement multi-hop memory retrieval mechanism. Moreover, two new optimization objectives, the persona-oriented matching loss (P-Match) and the persona-oriented bag-of-words loss (P-BoWs), are proposed to impel our model to exploit the persona information more precisely. We will introduce the technical details in the following sections.

3.2 Persona Exploration

Based on the predefined persona descriptions, the target of the persona exploration stage is to extend more persona-related words. Therefore the key is to investigate an effective method for the semantic learning of words.
Figure 1. An overview of our PEE framework. It consists of two stages: persona exploration and persona exploitation (multi-source sequence encoder, persona information retrieval and persona-oriented response decoder).

The extended persona words must lie in the same topic as the predefined persona information, so as to guarantee the topic consistency of the conversations. Topic modeling methods [25] are appropriate for this. Therefore, inspired by [23], we employ a topic model based on a variational auto-encoder (VAE) [7] and make adjustments according to our task to conduct unsupervised global semantic modeling. Compared to traditional methods such as Latent Dirichlet Allocation (LDA) [2], the VAE-based topic model is less time-consuming to train and more flexible for inferring latent representations for new documents.

As shown in the upper side of Figure 1, the input of the VAE-based topic model is a document representation v and the output v' is the reconstruction of the input. We regard each conversation as a document and represent it by tf-idf features. The encoding process can be formalized as:

h_v = f_h(v),
\mu = f_\mu(h_v), \quad \log(\sigma^2) = f_\sigma(h_v),
z = \mu + \sigma \cdot \epsilon, \quad \epsilon \sim \mathcal{N}(0, 1),  (2)

where f_*(\cdot) denotes a non-linear transformation, \mu and \sigma are the mean and standard deviation vectors of a multivariate normal distribution respectively, and z is a latent vector sampled from that distribution via the reparameterization trick. We use the latent vector z to reconstruct the document:

h'_v = f_{h'}(z),
v' = f_{v'}(h'_v).  (3)

We learn all parameters by optimizing the evidence lower bound (ELBO) [7].
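For concreteness, the following is a minimal PyTorch-style sketch of such a VAE topic model. The layer sizes, class and function names, and the softmax output over the vocabulary are our assumptions for illustration, not details specified in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAETopicModel(nn.Module):
    """Minimal VAE topic model sketch: tf-idf document vector in, reconstruction out."""
    def __init__(self, vocab_size, hidden_size=512, num_topics=50):
        super().__init__()
        self.f_h = nn.Linear(vocab_size, hidden_size)       # h_v = f_h(v)
        self.f_mu = nn.Linear(hidden_size, num_topics)      # mu = f_mu(h_v)
        self.f_sigma = nn.Linear(hidden_size, num_topics)   # log(sigma^2) = f_sigma(h_v)
        self.f_h_prime = nn.Linear(num_topics, num_topics)  # h'_v = f_h'(z)
        # Output layer; its weight plays the role of the K x |V'| word-topic matrix W.
        self.f_v_prime = nn.Linear(num_topics, vocab_size)

    def forward(self, v):
        h_v = torch.relu(self.f_h(v))
        mu, log_var = self.f_mu(h_v), self.f_sigma(h_v)
        eps = torch.randn_like(mu)                          # reparameterization trick
        z = mu + torch.exp(0.5 * log_var) * eps
        h_prime = torch.relu(self.f_h_prime(z))
        v_prime = F.log_softmax(self.f_v_prime(h_prime), dim=-1)
        return v_prime, mu, log_var

def elbo_loss(v, v_prime, mu, log_var):
    # Negative ELBO: tf-idf weighted reconstruction log-likelihood plus KL to N(0, I).
    recon = -(v * v_prime).sum(dim=-1)
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=-1)
    return (recon + kl).mean()
```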
After training, we draw a word-topic weight matrix W \in \mathbb{R}^{K \times |V'|} from the output layer f_{v'}. The matrix represents the topical saliency of each word, where K is the number of topics, V' is the vocabulary of the topic model and |V'| is the vocabulary size. Each column u \in \mathbb{R}^K of W can be regarded as a topic-based representation of the corresponding word.

Given topic-relevant word representations, we extend words for every dialogue. After removing stop-words in the dialogue, we filter a vocabulary set V_P \subset V' from the predefined persona descriptions. For each word w \in V_P, we select the m most relevant external words based on cosine similarities of the topic-relevant word representations. Then, we re-rank all external persona words of V_P according to the cosine similarity score. If an external word is selected more than once, we just record the highest score. Thereafter, we select the top n_w words among them. Finally, we convert each extended persona word into a key-value representation by two multi-layer perceptron neural networks and store these representations in an external persona words memory M_e.
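As an illustration, the selection procedure above might be implemented as follows. The function and variable names are ours, the value of m is not reported in the paper (10 here is an arbitrary placeholder), and we assume W is available as a NumPy array with one column per vocabulary word.

```python
import numpy as np

def expand_persona_words(W, vocab, persona_words, m=10, n_w=100):
    """Select top-n_w external words whose topic-based representations (columns of W)
    are most cosine-similar to words in the predefined persona descriptions."""
    cols = W / (np.linalg.norm(W, axis=0, keepdims=True) + 1e-12)  # normalize columns
    word2idx = {w: i for i, w in enumerate(vocab)}
    scores = {}
    for w in persona_words:
        if w not in word2idx:
            continue
        sims = cols.T @ cols[:, word2idx[w]]        # cosine similarity to every word
        for j in np.argsort(-sims)[1 : m + 1]:      # top-m neighbors, skipping the word itself
            cand = vocab[j]
            if cand not in persona_words:
                # If a word is selected more than once, keep its highest score.
                scores[cand] = max(scores.get(cand, -1.0), float(sims[j]))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_w]
```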
3.3 Persona Exploitation

Given the predefined persona descriptions and extended persona-relevant words, persona exploitation aims to integrate them to generate informative responses. In this section, we detail the three components of the persona exploitation stage: the multi-source sequence encoder, persona information retrieval, and the persona-oriented response decoder.

Multi-Source Sequence Encoder. The input contains persona descriptions and historical utterances, and we design two independent encoders for them.

Persona memory encoder. We encode the predefined persona information into sentence- and word-granularity representations and store them in two memories respectively. For each sentence P_i, we obtain a sentence representation e_{P_i} by a bidirectional Gated Recurrent Network (Bi-GRU) [4]. Then we convert e_{P_i} into a key m^S_i and a value c^S_i by two multi-layer perceptron neural networks, and store them in the sentence granularity persona memory M_s. Simultaneously, for word p^i_j, we obtain a word representation e_{p^i_j} from the j-th step of the Bi-GRU for the i-th sentence. Same as above, we convert each word representation into a key and a value, and store them in the word granularity persona memory M_w.

Figure 2. An overview of Persona Information Retrieval.

Historical utterances encoder. In order to capture the relationship among the historical utterances X, we use a hierarchical recurrent encoder [21] to conduct the semantic modeling. From the second level of the hierarchical Bi-GRU, we obtain the final representation e_X for the whole historical utterances and a sentence vector C_i for each utterance X_i.

Persona Information Retrieval. After obtaining the representations of historical utterances and the sentence granularity persona memory via the previous component, we use the historical utterances to select persona information for the response. Considering that key-value memory retrieval is a frequent component in the following modules, we provide the general definition here. Assume that the query vector is q and memory M contains keys m and values c; the retrieval operation retri(q, M) = o is defined as:

o = \sum_i a_i c_i, \quad a_i = \frac{\exp(s_i)}{\sum_j \exp(s_j)}, \quad s_j = q^T m_j,  (4)

where the output vector o is a weighted sum of the values in memory and represents the retrieved information.

As shown in Figure 2, we use each historical utterance to retrieve the user's persona information in turn. During the chat process, some of the persona information used in the history has an impact on the choice of persona information for the response. In order to take advantage of this impact, in the i-th step we combine the historical utterance representation C_i and the result of the previous retrieval step o_{i-1} as the query vector q_i, which can be formalized as:

q_i = \begin{cases} C_i, & i = 1; \\ C_i + o_{i-1}, & i > 1. \end{cases}  (5)

Then we retrieve the sentence granularity persona memory M_s with the query vector q_i:

o_i = retri(q_i, M_s).  (6)

Finally, we concatenate the result of the last retrieval step o_k and the whole historical utterances representation e_X:

s_0 = [e_X; o_k],  (7)

where s_0 is a merged vector used as the initial state of the decoder.
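A minimal sketch of the retri(·) operation in Eq. (4) and the sequential query update of Eqs. (5)-(6) follows; tensor shapes and function names are our assumptions.

```python
import torch
import torch.nn.functional as F

def retri(q, keys, values):
    """Key-value memory retrieval, Eq. (4): softmax over q . m_j, weighted sum of values.
    q: (d,), keys: (n, d), values: (n, d)."""
    a = F.softmax(keys @ q, dim=0)          # attention weights a_i
    return a @ values                        # o = sum_i a_i c_i

def retrieve_over_history(C, keys, values):
    """Eqs. (5)-(6): query with each utterance vector C_i in turn, carrying the
    previous retrieval result o_{i-1} into the next query."""
    o = None
    for i, c in enumerate(C):               # C: list of utterance vectors
        q = c if i == 0 else c + o          # q_i = C_i (+ o_{i-1} for i > 1)
        o = retri(q, keys, values)
    return o                                # o_k, concatenated with e_X to initialize the decoder
```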
Persona-Oriented Response Decoder. The decoder is a GRU-based sequence prediction framework with an attention mechanism over the historical utterances and a mutual-reinforcement multi-hop memory retrieval mechanism. Given the current input y_{t-1} as well as the previous hidden state s_{t-1}, the recurrent calculation of the GRU is defined as:

s_t = GRU(y_{t-1}, s_{t-1}).  (8)

Then we design an attention mechanism to absorb relevant information from the historical utterances and a mutual-reinforcement multi-hop memory retrieval mechanism to obtain relevant persona information from the predefined and explored external persona information.

Attention on the historical utterances produces a historical utterances vector u_X at each decoding step by attending to the historical utterances. We formalize it as:

u_X = \sum_{i=1}^n a_i h^X_i, \quad a_i = \frac{\exp(s_i)}{\sum_{j=1}^n \exp(s_j)}, \quad s_j = v^T \tanh(W_s s_t + W_t h^X_j + b),  (9)

where h^X_j (j = 1, 2, \ldots, n) is the j-th word hidden state of the historical utterances obtained from the first level of the hierarchical encoder for X.

Mutual-reinforcement multi-hop memory retrieval. Recall that we build an external persona words memory M_e in persona exploration and a word granularity persona memory M_w in the encoder. There is an association between the two memories. For example, if we retrieve a word in the predefined persona descriptions that is related to the current conversation, the information in the external persona memory related to this word will be more likely to be applied, and vice versa. Therefore, the results of the two types of persona information retrieval are mutually influential, and we propose a mutual-reinforcement multi-hop memory retrieval mechanism to model this influence.

First, we use the current hidden state s_t as the query vector q to retrieve M_w and M_e respectively:

o_w = retri(q, M_w), \quad o_e = retri(q, M_e).  (10)

Considering that the result of one memory retrieval (e.g., o_w) will affect the next retrieval of the other memory (e.g., M_e), we update the query vector by adding the two retrieved results o_w and o_e:

q_{new} = q_{old} + o_w + o_e.  (11)

This update means that the results of the two retrievals will affect each other in the next hop. In our experiments, we use three hops unless otherwise stated.

Finally, based on the exploitation of the predefined and extended persona information, the output word distribution p_{y_t} at time step t of the decoder is produced by:

\tilde{s}_t = f_o([s_t; u_X; o_w; o_e]), \quad p_{y_t} = softmax(\tilde{s}_t),  (12)

where f_o is the neural non-linear operation on the output layer.

3.4 Persona-Oriented Loss

In order to impel the model to exploit the persona information more precisely, besides the Negative Log-Likelihood (NLL) loss, we propose two new persona-oriented loss functions: the Persona-oriented Matching loss (P-Match) and the Persona-oriented Bag-of-Words loss (P-BoWs). The P-Match loss supervises the choice of predefined persona sentences in the persona information retrieval module, and the P-BoWs loss supervises the decoder to generate more persona-related words.

P-Match Loss. Recall that in the persona information retrieval module (Eq. 6), we can get a match weight over the sentence granularity persona memory M_s at every step. Assume that the match weight in the last step is a^s \in \mathbb{R}^{|P|}. Intuitively, if the ground-truth response contains information from persona sentence P_i, then a^s_i should obtain a large value. Is it possible to employ the relation between the ground-truth response and the persona sentences to improve the modeling of persona information retrieval? To this end, we design the persona-oriented matching loss (P-Match). The 0-1 label a \in \mathbb{R}^{|P|} is decided based on a threshold \theta_a on the similarity between the persona sentences and the ground-truth response. The Jaccard Index8 is employed for the similarity calculation. The P-Match loss is defined as:

L_{P-Match} = -\sum_{i=1}^{|P|} a_i \log a^s_i.  (13)

8 http://en.wikipedia.org/wiki/Jaccard_index

P-BoWs Loss. Inspired by [13], we design a persona-oriented bag-of-words loss function to enhance the ability of persona information capturing. Specifically, we label each response with a vocabulary-size vector b \in \mathbb{R}^{|V|}, where the non-stop words in the current response get the value 1. If words are persona-based information, we increase the weight to 1 + \lambda, where \lambda is a positive value. We use a multi-label classifier to generate a BoWs representation p_b (sentence-level probability) by summing the scores of all positions of the generated sentence in the decoder: p_b = sigmoid(\sum_{t=1}^{|Y|} \tilde{s}_t). We define the P-BoWs loss using cross entropy:

L_{P-BoWs} = -\frac{1}{|V|} \sum_{i=1}^{|V|} [b_i \log p_{b_i} + (1 - b_i) \log(1 - p_{b_i})].  (14)

3.5 Joint Training

The negative log-likelihood loss (NLL) is employed as the basic optimization objective:

L_{NLL} = -\frac{1}{|Y|} \sum_{t=1}^{|Y|} y_t \log p_{y_t}.  (15)

Finally, a unified optimization objective is designed by integrating the P-Match loss, the P-BoWs loss and the NLL loss:

L = L_{NLL} + \gamma_1 L_{P-Match} + \gamma_2 L_{P-BoWs},  (16)

where \gamma_1 and \gamma_2 are trade-off parameters controlling the balance between the three loss functions.
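To make the two objectives concrete, here is a minimal sketch of Eqs. (13)-(14) and the joint objective of Eq. (16). Tensor shapes, tokenization, and function names are our assumptions, not code from the paper.

```python
import torch

def p_match_loss(a_s, persona_sents, response_tokens, theta_a=0.03):
    """P-Match loss, Eq. (13): labels come from thresholding the Jaccard similarity
    between each persona sentence and the ground-truth response at theta_a."""
    resp = set(response_tokens)
    labels = []
    for sent in persona_sents:  # each persona sentence as a list of tokens
        s = set(sent)
        jaccard = len(s & resp) / max(len(s | resp), 1)
        labels.append(1.0 if jaccard > theta_a else 0.0)
    labels = torch.tensor(labels)
    return -(labels * torch.log(a_s + 1e-12)).sum()

def p_bows_loss(s_tilde_sum, b):
    """P-BoWs loss, a direct transcription of Eq. (14): b holds 1 for non-stop response
    words and 1 + lambda for persona words; p_b = sigmoid of the summed decoder scores."""
    p_b = torch.sigmoid(s_tilde_sum).clamp(1e-6, 1 - 1e-6)
    return -(b * torch.log(p_b) + (1 - b) * torch.log(1 - p_b)).mean()

def joint_loss(nll, l_p_match, l_p_bows, gamma_1=0.1, gamma_2=0.1):
    """Unified objective, Eq. (16)."""
    return nll + gamma_1 * l_p_match + gamma_2 * l_p_bows
```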
4 Experiments

In this section, we first introduce the two datasets used in our experiments and list the setups and baseline models. Next, we evaluate the performance of various models by automatic evaluation and human evaluation.

4.1 Datasets

Our experiments use two public multi-turn dialogue datasets: Persona-Chat9 [31] and DailyDialog10 [9]. The Persona-Chat dataset contains 10,907 dialogues between pairs of speakers, where 968 dialogues are set aside for validation and 1,000 for testing. Each speaker is described by 3-5 persona sentences (e.g., “I like reading.” or “I am a nurse.”). The total number of personas is 1,155, with 100 personas for validation and 100 for testing. The DailyDialog dataset is constructed from raw data crawled from various websites, which serves for English learners to practice English dialogue in daily life. It contains 13,118 multi-turn dialogues without persona descriptions; the number of turns is roughly 8 and the average number of tokens per utterance is about 15.

9 https://github.com/facebookresearch/ParlAI/tree/master/projects/personachat
10 http://yanran.li/dailydialog

Our experiments are performed on the Persona-Chat dataset. In order to expand the knowledge space, we merge DailyDialog and the training set of Persona-Chat as the basic knowledge source to pre-train the topic model for persona exploration.

4.2 Baselines

We consider the following comparison methods; their inputs consist of predefined persona descriptions, historical conversation utterances and the current query utterance.
Seq2Seq [1]: the standard sequence-to-sequence model with attention. We concatenate persona descriptions and historical utterances as a sequence input and generate the response.
HRED [21]: Hierarchical Recurrent Encoder-Decoder model with attention. The input contains all sentences in the persona and history conversation.
Profile Memory [31]: the Generative Profile Memory network, a generative model that encodes each persona description as an individual memory representation in a memory network.
Per.-CVAE [24]: Persona-CVAE, a memory-augmented architecture which focuses on the diverse generation of conversational responses based on the chatbot's persona. In our experiment, we sample one time from the latent z to generate a response.
PED: Persona-oriented Encoder-Decoder model, i.e., our PEE framework without persona exploration, the P-BoWs loss and the P-Match loss. Without the external persona words memory, the mutual-reinforcement multi-hop memory retrieval mechanism is equivalent to a normal multi-hop memory retrieval mechanism.
PED+PE: our PEE framework without the P-BoWs loss and P-Match loss.
PED+PE+P-BoWs: our PEE framework without the P-Match loss.
PED+PE+P-Match: our PEE framework without the P-BoWs loss.
PEE: PED + PE + P-BoWs + P-Match, i.e., our proposed PEE framework.

4.3 Experimental Settings

We treat each complete dialogue (including personas) as a document, remove the stop words and select the top 10,000 frequent words to train the VAE-based topic model. For the number of topics, we follow previous settings [29, 23] and set K = 50. In our experiments, we use GloVe [18] for word embeddings and employ bi-directional GRUs for the encoders; we set the hidden state size to 512 and the batch size to 64. We use the Adam optimizer [6] to train the model and set the learning rate to 0.0001. For testing, we use beam search with beam size 2. All the other hyperparameters are tuned on the development set by grid search. The number of extended persona-related words for each dialogue n_w is 100. The additional weight \lambda in the P-BoWs target is 1 and the threshold \theta_a in the P-Match labeling process is 0.03. During training, the trade-off parameters \gamma_1 and \gamma_2 are both 0.1.
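For reference, the hyperparameters reported above can be collected into a single configuration. This dictionary is our own summary of Section 4.3; the key names are ours.

```python
# Hyperparameters as reported in Section 4.3 (key names are ours).
PEE_CONFIG = {
    "topic_model_vocab": 10_000,  # top frequent words for the VAE topic model
    "num_topics": 50,             # K
    "hidden_size": 512,           # Bi-GRU hidden state size
    "batch_size": 64,
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "beam_size": 2,               # beam search at test time
    "n_w": 100,                   # extended persona words per dialogue
    "lambda_pbows": 1.0,          # extra weight on persona words in the P-BoWs target
    "theta_a": 0.03,              # Jaccard threshold for P-Match labels
    "gamma_1": 0.1,               # P-Match loss weight
    "gamma_2": 0.1,               # P-BoWs loss weight
    "retrieval_hops": 3,          # mutual-reinforcement multi-hop retrieval
}
```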
Table 2. Automatic evaluation results. The best results are in bold.

| Model | BLEU1 | BLEU2 | BLEU3 | BLEU4 | F1 | Average | Extrema | Greedy |
|---|---|---|---|---|---|---|---|---|
| Seq2Seq | 20.1381 | 9.9395 | 5.2887 | 2.9840 | 17.7972 | 0.8551 | 0.4980 | 0.6751 |
| HRED | 19.0920 | 9.5668 | 5.0191 | 2.7779 | 17.9184 | 0.8531 | 0.4882 | 0.6714 |
| Profile Memory | 20.8713 | 9.8526 | 4.9942 | 2.6852 | 17.1553 | 0.8675 | 0.4835 | 0.6752 |
| Per.-CVAE | 17.2315 | 7.2602 | 3.2081 | 1.4541 | 14.6121 | 0.8458 | 0.4688 | 0.6516 |
| PED | 21.4611 | 10.6992 | 5.7845 | 3.3344 | 18.4759 | 0.8593 | 0.4993 | 0.6838 |
| PED+PE | 21.8970 | 10.9987 | 5.9965 | 3.5334 | 18.4140 | 0.8643 | 0.4999 | 0.6856 |
| PED+PE+P-BoWs | 21.9768 | 11.0710 | 6.0154 | 3.5574 | 18.2781 | 0.8626 | 0.4986 | 0.6822 |
| PED+PE+P-Match | 22.4668 | 11.2560 | 5.9846 | 3.3031 | 18.2615 | 0.8592 | 0.4940 | 0.6803 |
| PEE | 23.1926 | 11.5166 | 6.1248 | 3.4977 | 18.4130 | 0.8691 | 0.5010 | 0.6906 |

Table 3. Human evaluation on four aspects: Fluency, Engagingness, Consistency and Persona Detection (PD). The value in parentheses is the standard deviation.

| Model | Fluency | Engagingness | Consistency | PD (%) |
|---|---|---|---|---|
| Seq2Seq | 4.08 (0.71) | 3.02 (0.96) | 3.00 (1.03) | 52.94 (0.32) |
| HRED | 3.96 (0.71) | 2.73 (1.05) | 2.60 (1.16) | 64.71 (0.32) |
| Profile Memory | 4.04 (0.68) | 3.08 (1.01) | 3.10 (1.10) | 58.82 (0.40) |
| Per.-CVAE | 3.61 (1.02) | 2.63 (1.09) | 2.78 (1.29) | 85.29 (0.34) |
| PEE | 4.13 (0.76) | 3.46 (1.07) | 3.44 (1.13) | 76.47 (0.36) |

Table 4. Automatic evaluation results of PEE with different hops in the mutual-reinforcement multi-hop retrieval mechanism.

| Hops | BLEU1 | BLEU2 | BLEU3 | BLEU4 | F1 | Average | Extrema | Greedy |
|---|---|---|---|---|---|---|---|---|
| PEE-1 | 22.5956 | 11.2877 | 6.0405 | 3.4315 | 18.32 | 0.8631 | 0.5009 | 0.6902 |
| PEE-2 | 22.9758 | 11.4999 | 6.2383 | 3.6327 | 18.68 | 0.8654 | 0.4979 | 0.6861 |
| PEE-3 | 23.1926 | 11.5166 | 6.1248 | 3.4977 | 18.41 | 0.8691 | 0.5010 | 0.6906 |
| PEE-4 | 22.3422 | 11.0628 | 5.8804 | 3.3678 | 18.25 | 0.8618 | 0.4985 | 0.6824 |
| PEE-5 | 22.2892 | 11.1789 | 5.9878 | 3.4148 | 18.55 | 0.8591 | 0.4993 | 0.6811 |

4.4 Evaluation Metrics

We use different evaluation metrics (automatic and human) to demonstrate the effectiveness of our model. In this subsection, we give a brief introduction to those metrics.

Automatic Metrics. We report three different automatic metrics:
BLEU@N: BLEU is an algorithm which has been widely used in machine translation and dialogue systems to evaluate the quality of generated text. It measures the N-gram overlap between the generated response and the ground truth.
F1-Measure: It measures the accuracy of the generated response considering both precision and recall. We treat the predicted and target responses as bags of tokens and compute their F1 score.
Embedding-based similarity: Embedding Average (Average), Embedding Extrema (Extrema), and Embedding Greedy (Greedy) [11]. These embedding-based metrics measure semantic similarity between the generated response and the ground truth.
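The bag-of-tokens F1 described above can be computed as follows; this is the standard formulation, sketched with our own function name.

```python
from collections import Counter

def token_f1(predicted_tokens, target_tokens):
    """Bag-of-tokens F1 between a predicted and a target response."""
    common = Counter(predicted_tokens) & Counter(target_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted_tokens)
    recall = overlap / len(target_tokens)
    return 2 * precision * recall / (precision + recall)
```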
Human Metrics. It is not enough to evaluate dialogue systems only automatically, so we randomly sample about 100 dialogues from the test data and hire 5 volunteers to evaluate them. We use four metrics: fluency, engagingness, consistency, and persona detection.
Fluency: measures the quality of the generated sentence, e.g., whether the grammar is correct.
Engagingness: measures whether the generated sentence is appropriate and interesting.
Consistency: measures whether the generated sentence has some relationship with the history and persona description.
Persona detection: for each dialogue, given the generated responses and two sets of persona sentences (one real and one fake), we ask the annotators to choose which one is the real description of the chatbot.
The first three metrics are scored between 1 and 5. For persona detection, a score of 1 means the choice is correct, 0 means the choice is wrong, and 0.5 means the annotator cannot judge.

5 Results and Analysis

5.1 Experimental Results and Ablation Study

Automatic evaluation. Comparative automatic evaluation results are presented in Table 2. Our model outperforms the baselines on all automatic metrics. This demonstrates that our model generates more appropriate responses through persona exploration and exploitation. In particular, our model improves approximately 15.17% over Seq2Seq on BLEU1. Compared with PED, PED+PE has better scores on most metrics. This is because the explored persona information contributes to generating more informative responses. Compared with PED+PE, both PED+PE+P-BoWs and PED+PE+P-Match perform better, because the P-BoWs loss and P-Match loss supervise the model to exploit persona information more precisely.

According to the automatic evaluation results of PEE with different hops in the mutual-reinforcement multi-hop retrieval mechanism in Table 4, PEE-2 outperforms PEE-1 on most metrics. This demonstrates that the interaction between the two types of persona information improves the performance of our model. Analyzing the various indicators, PEE works best when the number of hops is 3. When the number of hops exceeds 3, the effect drops; this may be because the query vector contains little information about the current decoder state s_t after several update operations.

Human evaluation. The results of human evaluation are listed in Table 3. Our model significantly outperforms most of the baselines in terms of all the metrics. In particular, our model increases approximately 30.01% over Profile Memory on persona detection. This demonstrates that persona exploration and exploitation are beneficial for improving the usage of persona information and enriching the responses. Per.-CVAE has the highest persona detection score, but it pays too much attention to persona, resulting in very poor grammar, relevance, and fluency of the generated responses.

5.2 Persona Analysis

In order to further evaluate the ability of the model to use persona, for each multi-turn dialogue we count the number of words that appear in both the persona sentences and the generated responses, and divide this number by the total number of words in the persona sentences to get the persona use ratio. It measures the probability of persona words being used and punishes the repeated use of the same persona information in responses of different turns. We calculate the average persona use ratio of all models over different turns and show them in Figure 3. Per.-CVAE pays too much attention to persona, which seriously affects the quality of the generated responses, so we do not consider Per.-CVAE here. We can see that our model outperforms all the baseline methods. There are three reasons: first, the persona information retrieval module considers the influence of persona in the history when selecting persona information; second, external persona words help utilize persona descriptions that have an indirect relationship with the current topic; third, the P-BoWs loss and the P-Match loss encourage the model to generate more persona-related words.

Figure 3. Average persona use ratio of all models in different turns.
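As a rough sketch, the persona use ratio could be computed as below. The exact tokenization and de-duplication rules are not specified in the paper, so those details are our assumptions.

```python
def persona_use_ratio(persona_sentences, responses):
    """Fraction of persona words that appear in the generated responses,
    counting each persona word at most once across turns."""
    persona_words = [w for sent in persona_sentences for w in sent.lower().split()]
    response_words = {w for resp in responses for w in resp.lower().split()}
    used = {w for w in persona_words if w in response_words}
    return len(used) / max(len(persona_words), 1)
```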
5.3 Case Study

Table 5. Case studies.

| | Case 1 | Case 2 |
|---|---|---|
| Personas for Speaker B | I worked at hollister in high school. I am a professional skater. I play bluegrass music. I do not like chips. | I write short stories for a living. I used to work at a carnival. I like to drink scotch to relax. I like dark superhero movies. |
| Historical Utterances | A: Hi. How is your night going? B: Good, just left a bluegrass concert. A: Ooh. Interesting. What else do you do for fun? B: Skate. I love it. What do you do? A: That is so exciting! I am currently a stay at home mom. B: cool, how many kids do you have? A: I have three kids and pregnant with my fourth. I love being a mom. | A: Hi, how are you doing tonight? I am good so far. B: I am good, relaxing with a glass of scotch to end the night. A: That is nice and relaxing I love to get a good workout. B: Yes. I have been busy working on a new short story to release. A: That sounds very interesting hope all is good for you. B: Yeah. What kind of movies are you interested in? A: I like to watch romance and some scary movies is okay for me. |
| Explored words | concert, band, piano, guitar, rap ... | ebook, thriller, horror, creepy, comic ... |
| Response | seq2seq: That is cool. Do you have any pets? HRED: That is cool. Do you have any pets? Profile Memory: I have a lot of kids, but i have a daughter that is so cool. Per.-CVAE: I am a coach I might play tomorrow. I do roofing. PEE: Wow that is a great thing. I like to play piano with my family. | seq2seq: I like movies too. I am a loner. HRED: That is cool. I am a fan of a movie. Profile Memory: I have been watching movies on TV. Per.-CVAE: I like movies too, I am not sure I like that. What are your hobbies? PEE: That is scary. I write a lot of horror stories. |

Table 5 depicts some cases generated by PEE, Seq2Seq, HRED, and Profile Memory. From the comparisons, we can see that the PEE model can use explored persona information to generate more persona-oriented, informative responses. For example, in case 1, one of the persona descriptions for speaker B is “I play bluegrass music.”, and speaker A mentioned “kids” and “mom” in the query utterance. The explored persona word “piano” is related to “music” and “family”, and the word “family” is correlated with “kids” and “mom”. So the response generated by PEE follows the clues above and conveys persona information simultaneously. What's more, it leads the topic to a new field that the speakers are familiar with, giving the next reply more content to build on. When the previous topic “work” is drawing to an end, our model can use the persona and its extended words to shift the topic to “music”, whereas the responses of the other baseline models do not reflect persona information.

Similarly, in case 2, the explored persona word “horror” is related to “scary movies” and “stories”, so our PEE model uses the word “horror” to enrich the response. In order to show the contribution of explored persona words more clearly and directly, we show the matching weights on the external persona words memory in the last step of the mutual-reinforcement multi-hop memory retrieval mechanism in Figure 4.

Figure 4. Visualization of matching weights on the external persona words memory in the last step of mutual-reinforcement multi-hop memory retrieval.

6 CONCLUSION

In this work, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), for unstructured persona-oriented dialogue systems. Different from previous work trained purely on predefined persona descriptions, our model extends external persona information via a VAE-based topic model. By fusing predefined persona descriptions and external persona information, the responses our model generates can more accurately and properly represent the user persona while maintaining the consistency of the dialogue. Experimental comparisons and analysis demonstrate that our approach outperforms a set of state-of-the-art baselines in terms of both automatic metrics and human evaluations. For future work, we will extend persona information dynamically and jointly train persona exploration and exploitation.

7 ACKNOWLEDGEMENT

This work is supported by the Natural Science Foundation of China (61972234, 61902219, 61672324, 61672322), the Tencent AI Lab Rhino-Bird Focused Research Program (JR201932), the Foundation of the State Key Laboratory of Cognitive Intelligence, iFLYTEK, P.R. China (COGOSC-20190003), the Fundamental Research Funds of Shandong University, Ahold Delhaize, the Association of Universities in the Netherlands (VSNU), and the Innovation Center for Artificial Intelligence (ICAI).

REFERENCES

[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. In Advances in Neural Information Processing Systems 14, pages 601-608, 2002.
[3] E. Chu, P. Vijayaraghavan, and D. Roy. Learning personas from dialogue with attentive memory networks. arXiv preprint arXiv:1810.08717, 2018.
[4] J. Chung, Ç. Gülçehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[5] C. K. Joshi, F. Mi, and B. Faltings. Personalization in goal-oriented dialog. arXiv preprint arXiv:1706.07503, 2017.
[6] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[7] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[8] J. Li, M. Galley, C. Brockett, G. Spithourakis, J. Gao, and B. Dolan. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994-1003, 2016.
[9] Y. Li, H. Su, X. Shen, W. Li, Z. Cao, and S. Niu. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the 8th International Joint Conference on Natural Language Processing, 2017.
[10] Z. Lin, A. Madotto, C.-S. Wu, and P. Fung. Personalizing dialogue agents via meta-learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5454-5459, 2019.
[11] C. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau.
How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016.
[12] L. Luo, W. Huang, Q. Zeng, Z. Nie, and X. Sun. Learning personalized end-to-end goal-oriented dialog. arXiv preprint arXiv:1811.04604, 2018.
[13] S. Ma, X. Sun, Y. Wang, and J. Lin. Bag-of-words as target for neural machine translation. arXiv preprint arXiv:1805.04871, 2018.
[14] P. Mazaré, S. Humeau, M. Raison, and A. Bordes. Training millions of personalized dialogue agents. arXiv preprint arXiv:1809.01984, 2018.
[15] C. Meng, P. Ren, Z. Chen, C. Monz, J. Ma, and M. de Rijke. Refnet: A reference-aware network for background based conversation. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020.
[16] K. Mo, Y. Zhang, S. Li, J. Li, and Q. Yang. Personalizing a dialogue system with transfer reinforcement learning. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 5317-5324, 2018.
[17] O. Olabiyi, A. Khazane, A. Salimov, and E. T. Mueller. An adversarial learning framework for a persona-based multi-turn dialogue model. arXiv preprint arXiv:1905.01992, 2019.
[18] J. Pennington, R. Socher, and C. Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543, 2014.
[19] Q. Qian, M. Huang, H. Zhao, J. Xu, and X. Zhu. Assigning personality/profile to a chatting machine for coherent conversation generation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4279-4285, 2018.
[20] P. Ren, Z. Chen, C. Monz, J. Ma, and M. de Rijke. Thinking globally, acting locally: Distantly supervised global-to-local knowledge selection for background based conversation. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020.
[21] I. V. Serban, A. Sordoni, Y. Bengio, A. C. Courville, and J. Pineau. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808, 2015.
[22] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. C. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069, 2016.
[23] N. A. Smith, D. Card, and C. Tan. Neural models for documents with metadata. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018.
[24] H. Song, W. Zhang, Y. Cui, D. Wang, and T. Liu. Exploiting persona information for diverse generation of conversational responses. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5190-5196, 2019.
[25] M. Steyvers and T. Griffiths. Probabilistic topic models. In T. Landauer, D. McNamara, S. Dennis, and W. Kintsch, editors, Latent Semantic Analysis: A Road to Meaning, 2006.
[26] J. Urbanek, A. Fan, S. Karamcheti, S. Jain, S. Humeau, E. Dinan, T. Rocktäschel, D. Kiela, A. Szlam, and J. Weston. Learning to speak and act in a fantasy text adventure game. arXiv preprint arXiv:1903.03094, 2019.
[27] O. Vinyals and Q. V. Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
[28] J. Wang, X. Wang, F. Li, Z. Xu, Z. Wang, and B. Wang. Group linguistic bias aware neural response generation. In Proceedings of the 9th SIGHAN Workshop on Chinese Language Processing, pages 1-10, 2017.
[29] X. Yan, J.
Guo, Y. Lan, and X. Cheng. A biterm topic model for short texts. In Proceedings of the 22nd International Conference on World Wide Web, pages 1445-1456, 2013.
[30] M. Yang, Z. Zhao, W. Zhao, X. Chen, J. Zhu, L. Zhou, and Z. Cao. Personalized response generation via domain adaptation. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1021-1024, 2017.
[31] S. Zhang, E. Dinan, J. Urbanek, A. Szlam, D. Kiela, and J. Weston. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213, 2018.
[32] Y. Zheng, G. Chen, M. Huang, S. Liu, and X. Zhu. Personalized dialogue generation with diversified traits. arXiv preprint arXiv:1901.09672, 2019.
[33] Y. Zheng, R. Zhang, X. Mao, and M. Huang. A pre-training based personalized dialogue generation model with persona-sparse data. arXiv preprint arXiv:1911.04700, 2019.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "0OCHMu4Std",
"year": null,
"venue": "ECAI 2023",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA230556",
"forum_link": "https://openreview.net/forum?id=0OCHMu4Std",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Fair Few-Shot Learning with Auxiliary Sets",
"authors": [
"Song Wang",
"Jing Ma",
"Lu Cheng",
"Jundong Li"
],
"abstract": "Recently, there has been a growing interest in developing machine learning (ML) models that can promote fairness, i.e., eliminating biased predictions towards certain populations (e.g., individuals from a specific demographic group). Most existing works learn such models based on well-designed fairness constraints in optimization. Nevertheless, in many practical ML tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance. This is because existing fairness constraints are designed to restrict the prediction disparity among different sensitive groups, but with few samples, it becomes difficult to accurately measure the disparity, thus rendering ineffective fairness optimization. In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem. To deal with this problem, we devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks. To compensate for insufficient training samples, we propose an essential strategy to select and leverage an auxiliary set for each meta-test task. These auxiliary sets contain several labeled training samples that can enhance the model performance regarding fairness in meta-test tasks, thereby allowing for the transfer of learned useful fairness-oriented knowledge to meta-test tasks. Furthermore, we conduct extensive experiments on three real-world datasets to validate the superiority of our framework against the state-of-the-art baselines.",
"keywords": [],
"raw_extracted_content": "Fair Few-Shot Learning with Auxiliary Sets\nSong Wanga, Jing Maa, Lu Chengband Jundong Lia\naUniversity of Virginia\nbUniversity of Illinois Chicago\nORCiD ID: Song Wang https://orcid.org/0000-0003-1273-7694, Jing Ma https://orcid.org/0000-0003-4237-6607,\nLu Cheng https://orcid.org/0000-0002-2503-2522, Jundong Li https://orcid.org/0000-0002-1878-817X\nAbstract. Recently, there has been a growing interest in developing\nmachine learning (ML) models that can promote fairness, i.e., elim-\ninating biased predictions towards certain populations (e.g., individ-uals from a specific demographic group). Most existing works learnsuch models based on well-designed fairness constraints in optimiza-tion. Nevertheless, in many practical ML tasks, only very few labeleddata samples can be collected, which can lead to inferior fairness per-formance. This is because existing fairness constraints are designedto restrict the prediction disparity among different sensitive groups,but with few samples, it becomes difficult to accurately measure thedisparity, thus rendering ineffective fairness optimization. In this pa-per, we define the fairness-aware learning task with limited trainingsamples as the fair few-shot learning problem. To deal with this prob-\nlem, we devise a novel framework that accumulates fairness-awareknowledge across different meta-training tasks and then generalizesthe learned knowledge to meta-test tasks. To compensate for insuf-ficient training samples, we propose an essential strategy to selectand leverage an auxiliary set for each meta-test task. These auxiliary\nsets contain several labeled training samples that can enhance themodel performance regarding fairness in meta-test tasks, thereby al-lowing for the transfer of learned useful fairness-oriented knowledgeto meta-test tasks. Furthermore, we conduct extensive experimentson three real-world datasets to validate the superiority of our frame-work against the state-of-the-art baselines.\n1 Introduction\nMachine learning (ML) tools have been increasingly utilized inhigh-stake tasks such as credit assessments [26] and crime predic-tions [22]. Despite their success, the data-driven nature of exist-ing machine learning methods makes them easily inherit the bi-ases buried in the training data and thus results in predictions withdiscrimination against some sensitive groups [33]. Here, sensitivegroups are typically defined by certain sensitive attributes such asrace and gender [35, 3, 4, 19, 45]. For example, a criminal riskassessment model can unfavorably assign a higher crime probabil-ity for specific racial groups [33]. In fact, such undesirable biasescommonly exist in various real-world applications such as toxicitydetection [6], recommendation systems [21], loan approval predic-tions [29], and recruitment [11].\nIn response, a surge of research efforts in both academia and in-\ndustry have been made for developing fair machine learning mod-els [9, 7]. These models have demonstrated their ability to effectivelymitigate unwanted bias in various applications [1, 47]. Many fair MLmethods [8, 10] incorporate fairness constraints to penalize predic-tions with statistical discrepancies among different sensitive groups.These methods often rely on sufficient training data from each sen-sitive group (e.g., collecting data from a specific region with an im-balanced population composition [49]). However, in many scenar-ios, only very few data samples can be collected, especially for thosefrom the minority group. 
This could render existing fair ML methods ineffective or even further amplify discrimination against the minority group. To enhance the applicability of fair ML in practice [49], this work aims to address the crucial and urgent problem of fair few-shot learning: promoting fairness in few-shot learning tasks with a limited number of samples.
One feasible solution to fair few-shot learning is to incorporate fairness techniques into few-shot learning methods. Particularly, we first learn from meta-training tasks with adequate samples [32, 18, 39], and then leverage the learned knowledge and fine-tune the model on other disjoint meta-test tasks with few samples based on fairness constraints. We define such a step of fine-tuning as fairness adaptation. However, there still remain two primary challenges for our problem. First, the insufficiency of samples in meta-test tasks can result in unsatisfactory fairness adaptation performance. Although the model can adapt to meta-test tasks with limited samples via fine-tuning for classification, these samples may not be sufficient to ensure fairness performance. Many fairness constraints are designed to restrict the prediction disparity among different sensitive groups. However, in fair few-shot learning, the lack of samples in each sensitive group inevitably increases the difficulty of measuring the prediction disparity. Moreover, in meta-test sets, the sensitive attributes of data samples can often be extremely imbalanced (e.g., a majority of individuals belonging to the same race, while other sensitive groups have very few, or even no, samples). In these cases, the conventional fairness constraints are often ineffective, or completely inapplicable. Second, the generalization gap between meta-training tasks and meta-test tasks hinders the efficacy of fairness adaptation. Similar to other few-shot learning studies, the key point of fair few-shot learning is to leverage the learned knowledge from meta-training tasks to facilitate the model performance on meta-test tasks with few samples. In our problem, it is essential to leverage the learned knowledge for fairness adaptation. However, models that manage to reduce disparities on meta-training tasks do not necessarily achieve the same fairness performance on meta-test tasks [10], due to the fact that fairness constraints are data-dependent and thus lack generalizability [8]. As a result, it remains challenging to extract and leverage the learned knowledge that is beneficial for fairness adaptation.
To tackle these challenges, we devise a novel framework for fair few-shot learning, named FEAST (Fair fEw-shot learning with Auxiliary SeTs). Specifically, we propose to leverage an auxiliary set for each meta-test task to promote fair adaptation with few samples while addressing the issues caused by insufficient samples. The auxiliary set is comprised of several samples from meta-training data and is specific to each meta-test task. By incorporating these auxiliary sets via a novel fairness-aware mutual information loss, the model can be effectively adapted to a meta-task with few samples while preserving the fairness knowledge learned during training.
Furthermore, to effectively leverage the learned knowledge from meta-training tasks for fairness adaptation, our proposed framework selects the auxiliary sets based on the fairness adaptation direction. This ensures that the selected auxiliary sets share similar fairness adaptation directions and thus can provide beneficial learned knowledge. We summarize our main contributions as follows:
• Problem. We study the crucial problem of fair few-shot learning. We introduce the importance of this problem, analyze the challenges, and point out the limitations of existing studies. To the best of our knowledge, this is the first work that addresses these unique challenges in fair few-shot learning.
• Method. We develop a novel fair few-shot learning framework that (1) can leverage auxiliary sets to aid fairness adaptation with limited samples, and (2) can select auxiliary sets with similar optimization directions to promote fairness adaptation.
• Experiments. We conduct extensive experiments on three real-world fairness datasets under the few-shot scenario and demonstrate the superiority of our proposed framework in terms of fairness compared with a couple of state-of-the-art baselines.

2 Problem Statement
In this section, we provide a formal definition of the problem of fair few-shot learning that we study in this paper. Denote Z = X x Y as the input space, where X is a subset of R^n with n different features and Y = {1, 2, ..., N} is the label space with N discrete classes. We consider inputs X in X, labels Y in Y, and a sensitive attribute A in {0, 1}. In the few-shot setting, the dataset D is comprised of two smaller datasets: meta-training data D_tr and meta-test data D_te. Moreover, D = D_tr ∪ D_te and D_tr ∩ D_te = ∅, i.e., |D_tr| + |D_te| = |D|. In general, few-shot settings assume that there exist sufficient samples in D_tr, while samples in D_te are generally scarce [18, 34].
The proposed framework is built upon the prevalent paradigm of episodic meta-learning [34, 32], which has demonstrated superior performance in the field of few-shot learning [18, 39]. The process of episodic meta-learning consists of meta-training on D_tr and meta-test on D_te. During meta-training, the model is trained on a series of meta-training tasks {T_1, T_2, ..., T_T}, where each meta-training task contains a support set S as the reference and a query set Q to be classified. T is the number of meta-training tasks. More specifically, S = {(x_1, y_1), (x_2, y_2), ..., (x_{N x K}, y_{N x K})} contains N classes and K samples for each of these N classes (i.e., the N-way K-shot setting). Meanwhile, the query set Q = {(x^q_1, y^q_1), (x^q_2, y^q_2), ..., (x^q_{|Q|}, y^q_{|Q|})} consists of |Q| different samples to be classified from these N classes. Subsequently, our goal is to develop a machine learning model that can accurately and fairly predict labels for samples in D_te with limited labeled samples after training on D_tr. Formally, the studied problem of fair few-shot learning can be formulated as follows.
Definition 1. Fair few-shot learning: Given meta-training data D_tr and a meta-test task T = {S, Q} sampled from meta-test data D_te, our goal is to develop a fair learning model such that, after meta-training on samples in D_tr, the model can accurately and fairly predict labels for samples in the query set Q when the only available reference is the limited samples in the support set S.
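To make the episodic setup concrete, the following minimal sketch samples one N-way K-shot task (support and query set, each sample carrying its sensitive attribute) from an array-based dataset. This is our own illustration in Python/NumPy; names such as `sample_task` are not from the paper:

```python
# Minimal sketch of N-way K-shot task sampling (our illustration; the paper
# does not prescribe an implementation). Assumes the data are arrays
# X (features), y (labels), and a (binary sensitive attributes).
import numpy as np

def sample_task(X, y, a, n_way=2, k_shot=5, n_query=10, rng=None):
    """Return one meta-task: a support set with K samples per class and
    a disjoint query set, keeping sensitive attributes alongside."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(y), size=n_way, replace=False)
    sup_idx, qry_idx = [], []
    for c in classes:
        idx = rng.permutation(np.where(y == c)[0])
        sup_idx.extend(idx[:k_shot])
        qry_idx.extend(idx[k_shot:k_shot + n_query])
    support = (X[sup_idx], y[sup_idx], a[sup_idx])
    query = (X[qry_idx], y[qry_idx], a[qry_idx])
    return support, query
```

Meta-training tasks would be drawn this way from D_tr; meta-test tasks are drawn analogously from D_te.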
Note that the support sets and the query sets above are sampled from the meta-training data D_tr. That is, for any sample (x_i, y_i) in a meta-training task, (x_i, y_i) ~ P_tr(X, Y), where P_tr(X, Y) is the meta-training task distribution from meta-training data D_tr. We then evaluate the model on a series of meta-test tasks, which share the same structure as meta-training tasks, except that the samples now come from the meta-test data D_te. In other words, for any sample (x_i, y_i) during meta-test, we have (x_i, y_i) ~ P_te(X, Y), where P_te(X, Y) is the meta-test task distribution from meta-test data D_te. Under the meta-learning framework [18, 51, 20], the model first needs to be fine-tuned for several steps (i.e., fairness adaptation) using the support set, and then performs fair classification for samples in the query set.

3 Proposed Framework
We formulate the problem of fair few-shot learning in the N-way K-shot meta-learning framework. The meta-training process typically involves a series of randomly sampled meta-training tasks, each of which contains K samples for each of the N classes as the support set, along with several query samples to be classified. Under the few-shot scenario, it is challenging to conduct fairness adaptation on the support set due to the insufficiency of samples and the generalization gap between meta-training tasks and meta-test tasks. Therefore, as illustrated in Fig. 1, we propose the use of auxiliary sets that can enhance fairness adaptation for each meta-test task. In this section, we first introduce the process of conducting fairness adaptation with auxiliary sets and then discuss the strategy to select auxiliary sets.

3.1 Fairness Adaptation with Auxiliary Sets
To alleviate the issue of ineffective fairness adaptation to meta-test tasks caused by insufficient samples, we propose to leverage the samples in meta-training tasks for fairness adaptation. Specifically, considering a target meta-test task T = (S, Q), our goal is to utilize an auxiliary set A obtained from meta-training data that can compensate for the inadequate samples in S. However, due to the distribution difference between meta-training tasks and meta-test tasks, it remains non-trivial to leverage the auxiliary set A, which follows a different distribution from S. Since the data distribution in A differs from that in S, directly conducting fairness adaptation on A can be ineffective for fairness in S. Therefore, to enhance fairness adaptation with the help of the auxiliary set A, we propose to maximize the mutual information (MI) between the support set S and the auxiliary set A. In consequence, the fairness adaptation on S will benefit from A.
Generally, the support set S in T can be expressed as S = {(x_1, y_1), (x_2, y_2), ..., (x_{N x K}, y_{N x K})}, which contains K samples for each of the N classes. Here x_i is an input sample and y_i is the corresponding label. We use a_i in {0, 1} to denote its sensitive attribute. In particular, we propose to construct an auxiliary set that shares the same structure as the support set. In this way, the auxiliary set A can be represented as A = {(x*_1, y*_1), (x*_2, y*_2), ..., (x*_{|A|}, y*_{|A|})}. Here |A|, i.e., the size of the auxiliary set, is set as a controllable hyper-parameter. Moreover, based on the classification model f(.), we can obtain the sample embedding x_i in R^d and the classification probabilities p_i = f(x_i) in R^N for x_i, where d denotes the embedding dimension of samples and N is the number of classes in T.
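To fix the interface of f(.) used throughout this section, here is a minimal PyTorch sketch of a classifier that returns both the sample embedding in R^d and the class probabilities in R^N. The paper does not specify the backbone, so this two-layer MLP is purely illustrative:

```python
# Illustrative backbone f(.) exposing both the embedding and the class
# probabilities; the concrete architecture is an assumption on our part.
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self, n_features, d=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, d), nn.ReLU(), nn.Linear(d, d))
        self.head = nn.Linear(d, n_classes)

    def forward(self, x):
        z = self.encoder(x)                  # embedding x_i in R^d
        p = F.softmax(self.head(z), dim=-1)  # probabilities p_i in R^N
        return z, p
```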
[Figure 1: The overall framework of FEAST. Here different shapes denote different sensitive attributes, and colors represent sample classes. Given a meta-task, the generator outputs the estimated fairness adaptation direction, which is used to select the auxiliary set with the most similar direction from the candidate sets. We then conduct fairness adaptation with the auxiliary set on the current meta-task and perform predictions. The resulting fairness adaptation is used to update the generator. Note that during training, the meta-task is incorporated into the candidate auxiliary sets after the optimization of one episode.]

Particularly, we maximize the fairness-aware MI between S and A by
$$\max_\theta I(S; A) = \max_\theta \sum_{i=1}^{|S|} \sum_{j=1}^{|A|} p(x_i, x^*_j; \theta) \log \frac{p(x_i \mid x^*_j; \theta)}{p(x_i; \theta)}, \quad (1)$$
where theta denotes the parameters of the classification model f(.). Since the MI term I(S; A) is difficult to obtain and also intractable, it is infeasible to directly maximize it [27]. Therefore, we first re-formulate the MI term to make it computationally tractable based on the property of conditional probabilities:
$$I(S; A) = \sum_{i=1}^{|S|} \sum_{j=1}^{|A|} p(x_i \mid x^*_j; \theta)\, p(x^*_j; \theta) \log \frac{p(x_i \mid x^*_j; \theta)}{p(x_i; \theta)} = \sum_{i=1}^{|S|} \sum_{j=1}^{|A|} p(x^*_j \mid x_i; \theta)\, p(x_i; \theta) \log \frac{p(x_i \mid x^*_j; \theta)}{p(x_i; \theta)}. \quad (2)$$
Since the support set S is randomly sampled, we can assume that the prior probability p(x_i; theta) follows a uniform distribution and set it as a constant, p(x_i; theta) = 1/|S|, which thus can be ignored in optimization. Therefore, it remains to estimate p(x_i | x*_j; theta) and p(x*_j | x_i; theta) to obtain the value of I(S; A).

3.1.1 Estimation of p(x_i | x*_j; theta)
We first denote S_0 and S_1 as the sets of support samples with sensitive attributes of 0 and 1, respectively. (Footnote 1: For the sake of simplicity, we focus on tasks with only binary sensitive attributes in this paper. Nevertheless, our work can be easily generalized to tasks with multiple types of sensitive attributes.) In other words, S = S_0 ∪ S_1 and S_0 ∩ S_1 = ∅. Similarly, we define sets A_0 and A_1 for the auxiliary set A. Then we propose to estimate p(x_i | x*_j; theta) as follows:
$$p(x_i \mid x^*_j; \theta) = \begin{cases} \dfrac{p_i(y^*_j)}{\sum_{x_k \in S_{a_i}} p_k(y^*_j)} & \text{if } a_i = a^*_j, \\[2pt] 0 & \text{otherwise}. \end{cases} \quad (3)$$
Here p_i(y*_j) denotes the classification probability of x_i regarding y*_j, the label of x*_j. Intuitively, this probability measures the alignment of the classification between the support sample x_i and the auxiliary sample x*_j, which (1) shares the same sensitive attribute as x_i and (2) is also similar to x_i regarding the classification output. In other words, maximizing p(x_i | x*_j; theta) can increase the fairness adaptation consistency between sample x_i and the auxiliary samples that are specifically beneficial for the fairness adaptation with x_i, thus promoting the fairness adaptation performance.
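In matrix form, Eq. (3) can be computed for all support-auxiliary pairs at once. The sketch below is our own PyTorch illustration, reusing the `Classifier` interface above: it zeroes out pairs with mismatched sensitive attributes and normalizes over the support samples in S_{a_i}:

```python
import torch

def p_support_given_aux(p_sup, y_aux, a_sup, a_aux):
    """Eq. (3): a (|S|, |A|) matrix of p(x_i | x_j*; theta).
    p_sup: (|S|, N) class probabilities of support samples;
    y_aux: (|A|,) auxiliary labels (long tensor);
    a_sup, a_aux: binary sensitive attributes of support/auxiliary samples."""
    scores = p_sup[:, y_aux]                                       # p_i(y_j*)
    scores = scores * (a_sup[:, None] == a_aux[None, :]).float()   # 0 if a_i != a_j*
    # normalize over support samples sharing the attribute a_j*
    return scores / scores.sum(dim=0, keepdim=True).clamp_min(1e-12)
```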
3.1.2 Estimation of p(x*_j | x_i; theta)
The term p(x*_j | x_i; theta) in Eq. (2) is conditioned on x_i and denotes the probability of x*_j inferred from x_i. Moreover, since the value of p(x_i | x*_j; theta) becomes zero when the sensitive attributes of x_i and x*_j are different, we only need to estimate p(x*_j | x_i; theta) when x_i and x*_j share the same sensitive attribute, i.e., a_i = a*_j. Therefore, since x_i and x*_j maintain the same sensitive attribute, we can estimate the probability p(x*_j | x_i; theta) based on the squared Euclidean distance between their embeddings without explicitly considering their fairness-aware correlation. In particular, we further normalize the probability with a softmax function and formulate the term p(x*_j | x_i; theta) as follows:
$$p(x^*_j \mid x_i; \theta) = \frac{\exp\!\left(-\|\mathbf{x}_i - \mathbf{x}^*_j\|_2^2\right)}{\sum_{x^*_k \in A_{a^*_j}} \exp\!\left(-\|\mathbf{x}_i - \mathbf{x}^*_k\|_2^2\right)}. \quad (4)$$
Furthermore, to ensure the consistency of sample representations in meta-training and meta-test data, we apply l2 normalization to both embeddings, which results in $\|\mathbf{x}_i - \mathbf{x}^*_j\|_2^2 = 2 - 2\,\mathbf{x}_i^\top \mathbf{x}^*_j$. In this manner, the logarithmic term log p(x*_j | x_i; theta) becomes:
$$\log p(x^*_j \mid x_i; \theta) = \log\!\left(\frac{\exp(-2 + 2\,\mathbf{x}_i^\top \mathbf{x}^*_j)}{\sum_{x^*_k \in A_{a^*_j}} \exp(-2 + 2\,\mathbf{x}_i^\top \mathbf{x}^*_k)}\right) = 2\,\mathbf{x}_i^\top \mathbf{x}^*_j - \log \sum_{x^*_k \in A_{a^*_j}} \exp\!\left(2\,\mathbf{x}_i^\top \mathbf{x}^*_k\right). \quad (5)$$
Finally, the MI loss L_MI can be derived as follows:
$$L_{MI} = \frac{1}{|A|} \sum_{j=1}^{|A|} \sum_{x_i \in S_{a^*_j}} -\frac{p_i(y^*_j)}{\sum_{x_k \in S_{a_i}} p_k(y^*_j)} \left(2\,\mathbf{x}_i^\top \mathbf{x}^*_j - \log \sum_{x^*_k \in A_{a^*_j}} \exp\!\left(2\,\mathbf{x}_i^\top \mathbf{x}^*_k\right)\right). \quad (6)$$
The overall fairness adaptation loss can be represented as the combination of the fairness regularization terms on the support set S and the auxiliary set A, along with the MI loss between S and A:
$$L_{FA} = L_R(S) + \gamma\,\big(L_R(A) + L_{MI}\big), \quad (7)$$
where gamma is an adjustable weight hyper-parameter to control the importance of the auxiliary set. Specifically, L_R denotes the regularized optimization loss:
$$L_R(S) = \frac{1}{|S|} \sum_{(x, y) \in S} \ell(f(x), y) + \lambda R(S), \quad (8)$$
where l is the classification loss and R(S) denotes the fairness regularization term.
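Putting Eqs. (3)-(8) together, a vectorized sketch of L_MI and L_FA might look as follows. This is our own PyTorch illustration building on the helpers above; since the paper leaves the regularizer R(.) unspecified, we plug in a demographic-parity gap as one plausible choice:

```python
import torch
import torch.nn.functional as F

def mi_loss(z_sup, p_sup, a_sup, z_aux, y_aux, a_aux):
    """Fairness-aware MI loss of Eq. (6); assumes both sensitive groups
    are present in the auxiliary set."""
    z_sup = F.normalize(z_sup, dim=-1)   # l2-normalized, so Eq. (5) applies
    z_aux = F.normalize(z_aux, dim=-1)
    sim = 2.0 * z_sup @ z_aux.t()        # the 2 x_i^T x_j* terms, (|S|, |A|)
    # log-normalizer of Eq. (5), taken over auxiliary samples sharing a_j*
    lse = torch.stack(
        [torch.logsumexp(sim[:, a_aux == g], dim=1) for g in (0, 1)], dim=1)
    log_p = sim - lse[:, a_aux]          # log p(x_j* | x_i), (|S|, |A|)
    w = p_support_given_aux(p_sup, y_aux, a_sup, a_aux)  # Eq. (3) weights
    return -(w * log_p).sum() / z_aux.shape[0]           # Eq. (6)

def dp_gap(p, a):
    """One plausible instantiation of R(.): demographic-parity gap of the
    positive-class score; assumes both groups are non-empty."""
    return (p[a == 0, 1].mean() - p[a == 1, 1].mean()).abs()

def regularized_loss(p, y, a, lam=1.0):                  # Eq. (8)
    return F.nll_loss(p.clamp_min(1e-12).log(), y) + lam * dp_gap(p, a)

def fairness_adaptation_loss(support, aux, model, gamma=0.5, lam=1.0):
    (xs, ys, as_), (xa, ya, aa) = support, aux           # Eq. (7)
    zs, ps = model(xs)
    za, pa = model(xa)
    return (regularized_loss(ps, ys, as_, lam)
            + gamma * (regularized_loss(pa, ya, aa, lam)
                       + mi_loss(zs, ps, as_, za, ya, aa)))
```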
3.2 Auxiliary Sets Selection
The second problem, the generalization gap between meta-training and meta-test in fair few-shot learning, can also pose a significant challenge for fairness adaptation. To address this issue, we propose to select the auxiliary set based on the similarity of its fairness adaptation direction to that of the target meta-test task. In this way, incorporating an auxiliary set with a similar fairness adaptation direction can potentially leverage beneficial learned knowledge from meta-training to enhance fairness adaptation in the target meta-task. However, it is difficult to identify an auxiliary set whose fairness adaptation direction aligns with the target meta-task. It is possible that an auxiliary set holds a different or even opposite fairness adaptation direction from the target meta-task; incorporating such an auxiliary set can even harm the fairness adaptation performance. Therefore, to select an auxiliary set with a fairness adaptation direction similar to that of the target meta-test task, we introduce a dynamic dictionary, A_can, which stores all candidate auxiliary sets for selection, with the keys being their corresponding fairness adaptation directions. This allows us to efficiently identify and select an auxiliary set with a similar adaptation direction for the target meta-test task, thereby improving the fairness adaptation performance in the presence of the generalization gap.
Notably, this dictionary is dynamically updated by adding a new auxiliary set after each meta-training step and meanwhile removing the oldest auxiliary set, whose fairness adaptation direction is the most outdated. In this manner, the dictionary also acts like a queue, which means that its size can be flexible and independent to fit various scenarios. Specifically, after each step on a meta-training task T = {S, Q}, we enqueue the support set S as a candidate auxiliary set into A_can and remove the oldest auxiliary set. (Footnote 2: Note that the auxiliary set size is controllable via randomly removing samples in S or incorporating new samples before enqueuing.) The key of the enqueued S, i.e., the fairness adaptation direction of S, is set as the gradient of L_R(S), i.e., the vector grad_theta L_R(S), where theta denotes the model parameters of f(.).
Identifying the true fairness adaptation direction. With the help of the dynamic dictionary as a queue during meta-training, it may still remain difficult to obtain the fairness adaptation direction of the target meta-test task T. This is because the fairness adaptation direction of S cannot faithfully reveal the true direction due to potentially imbalanced sensitive attributes. Therefore, to identify the true fairness adaptation direction without directly conducting fairness adaptation on the support set S, we propose the use of a generator g(.), parameterized by phi, to estimate the fairness adaptation result for each meta-test task. In particular, the generator g(.) takes the support set S as input and outputs an estimation of the gradient of L_R(S). To optimize the generator g(.), we introduce the Mean Squared Error (MSE) loss as the objective function:
$$L_E = \|g(S) - \nabla_\theta L_R(S)\|_2^2, \quad (9)$$
where g(S) is the generator output of dimension d_theta, and d_theta is the size of the classification model parameters theta. It is worth mentioning that the input of the generator g(.) is an entire support set S, which means that the generator should be able to capture the contextual information within the support set. For this reason, we propose to leverage the transformer encoder architecture [38] followed by a Multi-Layer Perceptron (MLP) as the implementation of the generator. Specifically, the output of the generator can be expressed as:
$$g(S) = \mathrm{MLP}\big(\mathrm{Mean}\big(\mathrm{Transformer}(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_{|S|})\big)\big). \quad (10)$$
In this manner, the generator can estimate the corresponding fairness adaptation direction from S, and the result can be used to select an auxiliary set.

Algorithm 1: Detailed training process of our framework.
Input: Meta-training task distribution P_tr from the meta-training data D_tr, number of meta-training tasks T, number of fine-tuning steps tau.
Output: A trained fairness-aware classification model f(.) and a generator model g(.).
1: Randomly initialize the dictionary queue A_can;
2: for i = 1, 2, ..., T do
3:   Sample a meta-training task T_i = {S, Q} ~ P_tr;
4:   Obtain the fairness adaptation direction via Eq. (10);
5:   Select an auxiliary set A from the candidate auxiliary set dictionary A_can based on Eq. (11);
6:   for t = 1, 2, ..., tau do
7:     Conduct one step of fairness adaptation according to Eq. (7) and Eq. (12);
8:   end for
9:   Meta-optimize the classification model f(.) and the generator g(.) based on Eq. (13) and Eq. (14), respectively;
10:  Enqueue the support set S into the dictionary queue A_can and remove the oldest candidate auxiliary set in A_can;
11: end for
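A minimal PyTorch sketch of the generator in Eq. (10) follows; the layer sizes and head counts are our own illustrative choices (d must be divisible by the number of heads), and the MSE objective of Eq. (9) is shown in the usage comment:

```python
import torch.nn as nn

class DirectionGenerator(nn.Module):
    """Sketch of g(.): a transformer encoder over the support embeddings,
    mean pooling, then an MLP mapping to d_theta (the flattened gradient
    size). Hyper-parameters are illustrative, not the paper's."""
    def __init__(self, d, d_theta, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                 nn.Linear(d, d_theta))

    def forward(self, z_support):                 # (|S|, d)
        h = self.encoder(z_support.unsqueeze(0))  # treat S as one sequence
        return self.mlp(h.mean(dim=1)).squeeze(0) # (d_theta,), Eq. (10)

# Usage per Eq. (9): loss_E = F.mse_loss(gen(z_support), true_gradient),
# where true_gradient is the flattened grad_theta L_R(S).
```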
After the meta-training process on a series of meta-training tasks {T_1, T_2, ..., T_T}, we obtain a dictionary of candidate auxiliary sets A_can = {A_1, A_2, ..., A_{|A_can|}} along with their fairness adaptation directions as keys. Here we denote the key of an auxiliary set as k(A), a vector of dimension d_theta. Then, given a new meta-test task T_test = {S_test, Q_test}, the corresponding auxiliary set A* can be selected via the following criterion:
$$A^* = \arg\min_{A \in A_{can}} \mathrm{dist}\big(g(S_{test}),\, k(A)\big), \quad (11)$$
where dist(., .) is a function that measures the distance between two vectors; in our experiments, we implement it as the Euclidean distance. We can then efficiently select an auxiliary set from a significantly large dictionary based on the keys. It is noteworthy that, to keep consistency between meta-training and meta-test, we also select an auxiliary set for each meta-training task during optimization.

3.3 Meta-optimization
Our framework is optimized under the episodic meta-learning paradigm [18]. Specifically, let theta denote the total parameters of the classification model f(.). In order to perform fairness adaptation, we first initialize the model parameters as theta_0 <- theta. After that, given a specific meta-task T = {S, Q}, we conduct tau steps of gradient descent based on the fairness adaptation loss L_FA calculated on the support set S. Thus, the fairness adaptation process in T can be formulated as follows:
$$\theta_t \leftarrow \theta_{t-1} - \alpha \nabla_{\theta_{t-1}} L_{FA}(S; \theta_{t-1}), \quad (12)$$
where t in {1, 2, ..., tau} and L_FA(S; theta_{t-1}) denotes the loss calculated based on the support set S with the parameters theta_{t-1}. tau is the number of fine-tuning steps applied, and alpha is the learning rate in each fine-tuning step. After conducting tau steps of fine-tuning, we meta-optimize the classification model f(.) with the loss calculated on the query set Q. Specifically, we meta-optimize the model parameters theta with the following update function:
$$\theta \leftarrow \theta - \beta_1 \nabla_{\theta} L_{FA}(Q; \theta_\tau), \quad (13)$$
where beta_1 is the meta-learning rate for the classification model f(.). For the optimization of the generator g(.), parameterized by phi, the update can be formulated as follows:
$$\phi \leftarrow \phi - \beta_2 \nabla_{\phi} L_E(S; \theta_\tau), \quad (14)$$
where L_E is the MSE loss introduced in Eq. (9), and beta_2 is the meta-learning rate for the generator g(.). In this way, the model parameters phi of g(.) are updated based on the loss L_E after the fairness adaptation of the classification model f(.). The detailed training process of our framework is demonstrated in Algorithm 1.
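Combining the pieces, one meta-training episode of Algorithm 1 could be sketched as below. This is a simplified first-order variant of our own (the paper meta-optimizes through the tau adaptation steps, which we skip for brevity); the deque mirrors the dictionary A_can, it is assumed non-empty (Algorithm 1 initializes it randomly), and all helper names are ours:

```python
import copy
from collections import deque
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    """Flatten d(loss)/d(params) into one vector (a 'direction' key)."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads]).detach()

def train_episode(model, gen, task, candidates, alpha=0.01, tau=3,
                  beta1=1e-3, beta2=1e-3, gamma=0.5, max_queue=100):
    support, query = task
    xs, ys, as_ = support
    # Steps 4-5: estimate the direction and pick the nearest set (Eq. 11)
    with torch.no_grad():
        z_sup, _ = model(xs)
        d_hat = gen(z_sup)
    aux = min(candidates, key=lambda kv: torch.dist(d_hat, kv[0]))[1]
    # Steps 6-8: tau steps of fairness adaptation on a clone (Eq. 12)
    fast = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(fast.parameters(), lr=alpha)
    for _ in range(tau):
        inner_opt.zero_grad()
        fairness_adaptation_loss(support, aux, fast, gamma).backward()
        inner_opt.step()
    # Step 9a: first-order meta-update of f on the query set (cf. Eq. 13)
    xq, yq, aq = query
    _, pq = fast(xq)
    grads = torch.autograd.grad(regularized_loss(pq, yq, aq),
                                list(fast.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= beta1 * g
    # Step 9b: pull the generator toward the true direction (Eqs. 9, 14)
    z_sup2, p_sup2 = model(xs)
    true_dir = flat_grad(regularized_loss(p_sup2, ys, as_),
                         list(model.parameters()))
    gen_opt = torch.optim.SGD(gen.parameters(), lr=beta2)
    gen_opt.zero_grad()
    F.mse_loss(gen(z_sup2.detach()), true_dir).backward()
    gen_opt.step()
    # Step 10: enqueue the support set keyed by its direction
    candidates.append((true_dir, support))
    if len(candidates) > max_queue:
        candidates.popleft()
```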
4 Experimental Evaluations
4.1 Datasets
In this subsection, we introduce the datasets used in our experiments. To evaluate the performance of FEAST on fair few-shot learning, we conduct experiments on three prevalent real-world datasets: Adult [15], Crime [22], and Bank [26]. The detailed dataset statistics are provided in Table 1.
• The Adult dataset contains information on 48,842 individuals from the 1994 US Census, where each instance is represented by 14 features and a binary label. Here the label indicates whether the income of a person is higher than 50K dollars. Following the data split setting in PDFM [49], we split the dataset into 34 subsets based on the country information of the instances. We consider gender as the sensitive attribute.
• The Crime dataset includes information on 2,216 communities from different states in the U.S., where each instance consists of 98 features. Following [31], the binary label of each instance is obtained by converting the continuous crime rate based on whether the crime rate of a community is in the top 50% within the state. The sensitive attribute is whether African-Americans are among the highest or second highest populations in each community. We further split this dataset into 46 subsets by considering each state as a subset.
• The Bank dataset consists of 41,188 individual instances in total. Specifically, each instance maintains 20 features along with a binary label that indicates whether the individual has subscribed to a term deposit. Here, we consider marital status as the binary sensitive attribute. Moreover, the dataset is split into 50 subsets based on the specific date records of the instances.

Table 1: Statistics of three real-world datasets.
Dataset             | Adult  | Crime      | Bank
Sensitive Attribute | Gender | Race       | Marital Status
Label               | Income | Crime Rate | Deposit
# Instances         | 48,482 | 2,216      | 41,188
# Features          | 12     | 98         | 17
# Subsets           | 34     | 46         | 50
# Training Subsets  | 22     | 30         | 40
# Validation Subsets| 6      | 8          | 5
# Test Subsets      | 6      | 8          | 5

4.2 Experimental Settings
To achieve a fair comparison of FEAST with competitive baselines, we conduct experiments with state-of-the-art fair few-shot learning methods and other few-shot learning methods with fairness constraints. The details are provided below.
• MAML [18]: This method utilizes a classic meta-learning framework to deal with the fair few-shot learning problem without explicitly applying fairness constraints.
• M-MAML [18]: This method uses the same framework as MAML while modifying the datasets by removing the sensitive attribute of each instance to enhance fairness during optimization.
• Pretrain [49]: This method learns a single model on all meta-training data without episodic training. Moreover, a fairness constraint is added to the training objective.
• F-MAML [50]: This method applies a fairness constraint in each episode and tunes a Lagrangian multiplier shared across different episodes for fair few-shot learning tasks.
• FM-dp and FM-eop (Fair-MAML) [31]: These two baselines provide a regularization term for each episode based on demographic parity (DP) and equal opportunity (EOP), respectively.
• PDFM [49]: This method leverages a primal-dual subgradient approach to ensure that the learned model can be quickly adapted to a new episode in fair few-shot learning.
Particularly, we use the average classification accuracy (ACC) over T_test meta-test tasks to evaluate the prediction performance. For fairness performance, we utilize demographic parity (DP) and equalized odds (EO), which are commonly used in existing works [8, 48, 16, 44]. Since we consider binary classification datasets, the output f(x) denotes the real-valued prediction score of a specific sample x.
In this manner, the metrics can be calculated over T_test meta-test tasks sampled from the meta-test task distribution P_te as follows:
$$\Delta_{DP} = \mathbb{E}_{T \sim P_{te}} \left| \frac{1}{|Q_0|} \sum_{x \in Q_0} f(x) - \frac{1}{|Q_1|} \sum_{x \in Q_1} f(x) \right|, \quad (15)$$
$$\Delta_{EO} = \mathbb{E}_{T \sim P_{te}} \sum_{y \in \{0, 1\}} \left| \frac{1}{|Q^y_0|} \sum_{x \in Q^y_0} f(x) - \frac{1}{|Q^y_1|} \sum_{x \in Q^y_1} f(x) \right|, \quad (16)$$
where Q_0 and Q_1 denote the query samples with a sensitive attribute of 0 and 1, respectively. Similarly, Q^y_0 (or Q^y_1) denotes the query samples in Q_0 (or Q_1) with label y. P_te is the meta-test task distribution of the meta-test data D_te. Our code is released at https://github.com/SongW-SW/FEAST.
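For completeness, here is a small sketch of how Eqs. (15)-(16) translate to code for one meta-test task (our own illustration; averaging the returned values over sampled tasks yields the reported metrics):

```python
def dp_eo_gaps(scores, y, a):
    """Delta_DP and Delta_EO of Eqs. (15)-(16) for a single task.
    scores: (|Q|,) prediction scores f(x); y, a: labels and sensitive
    attributes. Assumes every referenced subgroup is non-empty."""
    def mean(mask):
        return scores[mask].mean()
    dp = (mean(a == 0) - mean(a == 1)).abs()
    eo = sum((mean((a == 0) & (y == c)) - mean((a == 1) & (y == c))).abs()
             for c in (0, 1))
    return dp.item(), eo.item()
```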
Table 2: Results w.r.t. fairness and prediction performance of FEAST and baselines under different settings for all three datasets. Columns are grouped as (ΔDP, ΔEO, ACC) under the 5-shot and then the 10-shot setting, for Adult, Crime, and Bank in that order.
MAML     | 0.473 0.706 0.801 | 0.409 0.584 0.886 | 0.558 0.952 0.718 | 0.443 0.832 0.792 | 0.214 0.573 0.603 | 0.185 0.496 0.619
M-MAML   | 0.447 0.689 0.826 | 0.381 0.555 0.857 | 0.359 0.732 0.711 | 0.300 0.569 0.757 | 0.214 0.544 0.600 | 0.175 0.459 0.619
F-MAML   | 0.339 0.432 0.825 | 0.310 0.353 0.840 | 0.503 0.871 0.719 | 0.463 0.707 0.762 | 0.207 0.585 0.575 | 0.181 0.528 0.650
FM-dp    | 0.313 0.502 0.814 | 0.241 0.438 0.844 | 0.385 0.722 0.741 | 0.329 0.604 0.771 | 0.238 0.614 0.586 | 0.187 0.553 0.604
FM-eop   | 0.430 0.703 0.812 | 0.370 0.601 0.846 | 0.352 0.706 0.739 | 0.311 0.591 0.804 | 0.289 0.683 0.581 | 0.245 0.600 0.640
Pretrain | 0.365 0.513 0.806 | 0.310 0.450 0.885 | 0.390 0.692 0.746 | 0.354 0.582 0.776 | 0.248 0.659 0.594 | 0.208 0.539 0.642
PDFM     | 0.261 0.461 0.815 | 0.276 0.401 0.869 | 0.402 0.784 0.722 | 0.325 0.669 0.816 | 0.210 0.585 0.589 | 0.180 0.493 0.645
FEAST    | 0.258 0.355 0.820 | 0.235 0.256 0.861 | 0.203 0.309 0.739 | 0.164 0.217 0.797 | 0.190 0.524 0.583 | 0.154 0.414 0.641

[Figure 2: Ablation study on our framework FEAST on three datasets under the 5-shot setting.]

4.3 Performance Comparison
Table 2 presents the fairness and prediction performance comparison of FEAST and all other baselines on fair few-shot learning. Specifically, we report the results of ΔDP, ΔEO, and classification accuracy over 500 meta-test tasks for 10 repetitions. We conduct experiments under both 5-shot and 10-shot settings (i.e., K = 5 and K = 10). From Table 2, we can make the following observations:
• Our framework FEAST consistently outperforms the other baselines in terms of fairness on all datasets under both 5-shot and 10-shot settings. These results provide compelling evidence for the effectiveness of our framework FEAST in fair few-shot learning.
• The performance improvement of FEAST over the other baselines is more significant on the Crime dataset. This is because each subset of this dataset consists of fewer samples; consequently, the learned fairness-aware meta-knowledge is more difficult to transfer in the baselines. Nevertheless, our proposed fairness adaptation strategy based on mutual information can effectively deal with this scenario.
• The accuracy of FEAST is comparable with the other baselines, demonstrating that FEAST can substantially reduce biases without sacrificing its classification capability. This is because our framework FEAST selects auxiliary sets with similar fairness adaptation directions and thus does not harm model performance regarding accuracy.
• FEAST is more robust to changes in the number of support samples per class: when the number decreases from 10 to 5, FEAST has the smallest performance drop in comparison to the other baselines. We believe this is primarily because, with fewer support samples, the problem of insufficient samples becomes more significant. Nevertheless, FEAST can effectively address this issue by incorporating auxiliary sets into fairness adaptation.

[Figure 3: Results of FEAST on Adult (left) and Crime (right) with different values of γ.]

4.4 Impact of Each Component in FEAST
In this subsection, we conduct an ablation study on the three datasets under the 5-shot setting to evaluate the effectiveness of the different components in our framework by comparing FEAST with three degenerate versions: (1) FEAST without fairness adaptation based on MI, referred to as FEAST\F. In this variant, the fairness adaptation process is simplified such that only fairness constraints are applied. (2) FEAST without auxiliary set selection, i.e., the auxiliary set is randomly sampled. We refer to this variant as FEAST\A. (3) FEAST without both fairness adaptation and auxiliary set selection, referred to as FEAST\FA. The results, as presented in Fig. 2, show that FEAST outperforms all other variants, validating the importance of both the fairness adaptation and auxiliary set selection components in fair few-shot learning. Of particular interest is that the removal of the MI fairness adaptation has a more significant adverse impact on the Crime dataset, which contains significantly fewer meta-training samples. This result highlights the crucial role of this component in addressing the issue of insufficient training samples. In addition, when the two components are both removed, the fairness performance drops greatly. Such results indicate that the mutual impact brought by these two components is also critical for our proposed framework FEAST.

4.5 Effect of Loss Weight γ
Given the significance of the auxiliary sets in fairness adaptation, in this subsection we further examine in depth how the auxiliary sets influence the performance of FEAST. Specifically, we vary the value of γ, which controls the importance of the auxiliary set loss during fairness adaptation. A higher value of γ implies a larger importance weight on the auxiliary set and a smaller importance weight on the target task. Due to the limitation of space, we evaluate the model's performance on two datasets, Adult and Crime, using various values of γ (with similar results on the Bank dataset) under the 5-shot setting. The results, as shown in Fig. 3, indicate that a value around 0.5 for γ generally yields better fairness performance for both datasets.
This is mainly because a small γ can be insufficient to leverage the fairness-aware meta-knowledge in the auxiliary sets, while an excessively large value of γ can result in the loss of crucial fairness information in the target meta-task. Moreover, the effect of different γ values is more significant on the Adult dataset. The reason is that this dataset contains a larger number of samples in the meta-training data. As a result, the learned fairness-aware knowledge is richer in the auxiliary sets, thus propagating the benefits from the auxiliary sets.

4.6 Effect of Auxiliary Set Size
[Figure 4: Results of FEAST on Adult under 5-shot (left) and 10-shot (right) settings with different values of |A|.]
In this section, we conduct experiments to evaluate the impact of varying the size of the auxiliary set A. Intuitively, the auxiliary set size |A| should be at least comparable with the support set, since an excessively small auxiliary set can be insufficient for fairness adaptation. Specifically, we conduct experiments on the Adult dataset under both 5-shot and 10-shot settings to evaluate the effect of the auxiliary set size |A|. From the results presented in Fig. 4, we can make the following observations: (1) The fairness results are less satisfactory with a smaller value of |A|, indicating that the capacity of A can be important in FEAST. With a small auxiliary set A, the fairness adaptation effect is reduced due to insufficient knowledge in A. (2) When further increasing the size of A, the fairness performance does not increase accordingly. This demonstrates that the knowledge in a larger auxiliary set may not be helpful for fairness adaptation. (3) When the number of shots increases from 5 to 10, the best value of |A| also increases, implying that with a larger support set, the auxiliary set should also be expanded to provide more knowledge for fairness adaptation. In consequence, the fairness performance can be further improved.

5 Related Work
5.1 Few-shot Learning
Few-shot learning aims to obtain satisfactory classification performance with only a few labeled samples as references [37, 36]. The typical approach is to accumulate transferable knowledge from meta-training tasks, which contain abundant labeled samples. Then such knowledge is generalized to meta-test tasks with limited labeled samples. Particularly, existing few-shot learning methods can be divided into two main categories: (1) Metric-based methods propose to learn a metric function that matches samples in the query set with the support samples to conduct classification [23, 34, 42, 41]. For example, Prototypical Networks [32] learn a prototype (i.e., the average embedding of the samples in the same class) for each class and then classify query samples according to the Euclidean distances between the query samples and each prototype. Matching Networks [39] output predictions for query samples via the similarity between the query samples and each support sample. (2) Optimization-based methods aim to first fine-tune model parameters based on gradients calculated on support samples and then conduct meta-optimization on each meta-task [25, 28, 43, 40]. As a classic example, MAML [18] learns a shared model parameter initialization for various meta-tasks with its proposed meta-optimization strategy. The LSTM-based meta-learner [28] proposes an adjustable step size to update model parameters.

5.2 Fairness-aware Machine Learning
Various fairness-aware algorithms have been proposed to mitigate unwanted bias in machine learning models. Generally, there are two categories of statistical fairness notions: individual fairness and group fairness.
In particular, individual fairness requires that the model results for similar individuals should also be similar [16, 44, 13, 12]. Here, the similarity between individuals can be measured via specific metrics (e.g., Euclidean distance) learned during training or obtained from prior knowledge. On the other hand, group fairness refers to the statistical parity between subgroups (typically defined by sensitive attributes, e.g., gender and race) achieved via specific algorithms [46, 24, 19, 14]. Common fairness learning tasks include fair classification [45, 17], regression [2, 5], and recommendation [30]. Although these methods have demonstrated satisfactory performance in mitigating unfairness, it is noteworthy that existing works mainly focus on settings where sufficient labeled samples are provided. As a result, it is challenging for these methods to accommodate few-shot scenarios with limited labeled samples.
More recently, several methods have been proposed to deal with the fair few-shot learning problem [31, 50]. For example, PDFM [49] utilizes a primal-dual subgradient approach to ensure fast adaptation to a novel meta-task. In [48], the authors propose to address fairness in supervised few-shot meta-learning models that are sensitive to discrimination in historical data by detecting and controlling the dependency effect of sensitive attributes on the target prediction. Moreover, F-MAML [50] provides a fairness constraint for each episode and tunes a Lagrangian multiplier shared across different episodes based on a meta-learning mechanism. However, these methods cannot effectively solve the problems of insufficient samples and the generalization gap.

6 Conclusion
In this paper, we propose the novel problem of fair few-shot learning, which focuses on accurately and fairly predicting labels for samples in unseen data while using limited labeled samples as references. To tackle the challenges posed by insufficient samples and the generalization gap between meta-training and meta-test, we propose an innovative framework, FEAST, that utilizes learned fairness-aware meta-knowledge by incorporating auxiliary sets. In particular, our framework maximizes the mutual information between meta-tasks and the auxiliary sets to enhance fairness adaptation. Moreover, we select auxiliary sets based on the estimated fairness adaptation direction of meta-tasks to improve the fairness performance. We conduct extensive experiments on three real-world datasets, and the results validate the superiority of FEAST over the state-of-the-art baselines. For future work, it is important to consider expanding the candidate auxiliary sets with external knowledge, since the samples in a dataset can be insufficient. In this case, incorporating external information for fairness adaptation can be crucial.
7 Acknowledgements
The work in this paper is supported by the National Science Foundation under grants (IIS-2006844, IIS-2144209, IIS-2223769, CNS-2154962, and BCS-2228534), the Commonwealth Cyber Initiative awards (VV-1Q23-007 and HV-2Q23-003), the JP Morgan Chase Faculty Research Award, the Cisco Faculty Research Award, the Jefferson Lab subcontract 23-D0163, and the UVA 4-VA collaborative research grant.

References
[1] Maria Barrett, Yova Kementchedjhieva, Yanai Elazar, Desmond Elliott, and Anders Søgaard, 'Adversarial removal of demographic attributes revisited', in EMNLP, (2019).
[2] Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth, 'A convex framework for fair regression', arXiv:1706.02409, (2017).
[3] Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai, 'Man is to computer programmer as woman is to homemaker? Debiasing word embeddings', in NeurIPS, (2016).
[4] Joy Buolamwini and Timnit Gebru, 'Gender shades: Intersectional accuracy disparities in commercial gender classification', in FAccT, (2018).
[5] Toon Calders, Asim Karim, Faisal Kamiran, Wasif Ali, and Xiangliang Zhang, 'Controlling attribute effect in linear regression', in ICDM, (2013).
[6] Lu Cheng, Ahmadreza Mosallanezhad, Yasin N Silva, Deborah L Hall, and Huan Liu, 'Bias mitigation for toxicity detection via sequential decisions', in SIGIR, (2022).
[7] Lu Cheng, Kush R Varshney, and Huan Liu, 'Socially responsible AI algorithms: Issues, purposes, and challenges', JAIR, (2021).
[8] Ching-Yao Chuang and Youssef Mroueh, 'Fair mixup: Fairness via interpolation', in ICLR, (2021).
[9] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq, 'Algorithmic decision making and the cost of fairness', in SIGKDD, (2017).
[10] Andrew Cotter, Maya Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, and Seungil You, 'Training well-generalizing classifiers for fairness metrics and other data-dependent constraints', in ICML, (2019).
[11] Jeffrey Dastin, 'Amazon scraps secret AI recruiting tool that showed bias against women', in Ethics of Data and Analytics, (2018).
[12] Yushun Dong, Jing Ma, Song Wang, Chen Chen, and Jundong Li, 'Fairness in graph mining: A survey', TKDE, (2023).
[13] Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, and Jundong Li, 'Interpreting unfairness in graph neural networks via training node attribution', in AAAI, (2023).
[14] Yushun Dong, Song Wang, Yu Wang, Tyler Derr, and Jundong Li, 'On structural explanation of bias in graph neural networks', in SIGKDD, (2022).
[15] Dheeru Dua, Casey Graff, et al., 'UCI machine learning repository', (2017).
[16] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel, 'Fairness through awareness', in ITCS, (2012).
[17] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian, 'Certifying and removing disparate impact', in SIGKDD, (2015).
[18] Chelsea Finn, Pieter Abbeel, and Sergey Levine, 'Model-agnostic meta-learning for fast adaptation of deep networks', in ICML, (2017).
[19] Moritz Hardt, Eric Price, and Nati Srebro, 'Equality of opportunity in supervised learning', in NeurIPS, (2016).
[20] Kexin Huang and Marinka Zitnik, 'Graph meta learning via local subgraphs', in NeurIPS, (2020).
[21] Anja Lambrecht and Catherine Tucker, 'Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads', Management Science, (2019).
[22] Moshe Lichman et al., 'UCI machine learning repository', (2013).
[23] Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang, 'Learning to propagate for graph meta-learning', in NeurIPS, (2019).
[24] Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel, 'The variational fair autoencoder', arXiv:1511.00830, (2015).
[25] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel, 'A simple neural attentive meta-learner', in ICLR, (2018).
[26] Sérgio Moro, Paulo Cortez, and Paulo Rita, 'A data-driven approach to predict the success of bank telemarketing', Decision Support Systems, (2014).
[27] Aaron van den Oord, Yazhe Li, and Oriol Vinyals, 'Representation learning with contrastive predictive coding', arXiv:1807.03748, (2018).
[28] Sachin Ravi and Hugo Larochelle, 'Optimization as a model for few-shot learning', in ICLR, (2016).
[29] Soumajyoti Sarkar and Hamidreza Alvari, 'Mitigating bias in online microfinance platforms: A case study on kiva.org', in ECML-PKDD, (2020).
[30] Ashudeep Singh and Thorsten Joachims, 'Fairness of exposure in rankings', in SIGKDD, (2018).
[31] Dylan Slack, Sorelle A Friedler, and Emile Givental, 'Fairness warnings and Fair-MAML: Learning fairly with minimal data', in FAccT, (2020).
[32] Jake Snell, Kevin Swersky, and Richard Zemel, 'Prototypical networks for few-shot learning', in NeurIPS, (2017).
[33] Megan Stevenson, 'Assessing risk assessment in action', Minn. L. Rev., (2018).
[34] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales, 'Learning to compare: Relation network for few-shot learning', in CVPR, (2018).
[35] Latanya Sweeney, 'Discrimination in online ad delivery', Communications of the ACM, (2013).
[36] Zhen Tan, Song Wang, Kaize Ding, Jundong Li, and Huan Liu, 'Transductive linear probing: A novel framework for few-shot node classification', arXiv:2212.05606, (2022).
[37] Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenenbaum, and Phillip Isola, 'Rethinking few-shot image classification: A good embedding is all you need?', in ECCV, (2020).
[38] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin, 'Attention is all you need', in NeurIPS, (2017).
[39] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al., 'Matching networks for one shot learning', in NeurIPS, (2016).
[40] Song Wang, Chen Chen, and Jundong Li, 'Graph few-shot learning with task-specific structures', in NeurIPS, (2022).
[41] Song Wang, Kaize Ding, Chuxu Zhang, Chen Chen, and Jundong Li, 'Task-adaptive few-shot node classification', in SIGKDD, (2022).
[42] Song Wang, Yushun Dong, Xiao Huang, Chen Chen, and Jundong Li, 'FAITH: Few-shot graph classification with hierarchical task graphs', in IJCAI, (2022).
[43] Song Wang, Xiao Huang, Chen Chen, Liang Wu, and Jundong Li, 'REFORM: Error-aware few-shot knowledge graph completion', in CIKM, (2021).
[44] Mikhail Yurochkin, Amanda Bower, and Yuekai Sun, 'Training individually fair ML models with sensitive subspace robustness', in ICLR, (2020).
[45] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi, 'Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment', in WWW, (2017).
[46] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork, 'Learning fair representations', in ICML, (2013).
[47] Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell, 'Mitigating unwanted biases with adversarial learning', in AIES, (2018).
[48] Chen Zhao and Feng Chen, 'Unfairness discovery and prevention for few-shot regression', in ICKG, (2020).
[49] Chen Zhao, Feng Chen, Zhuoyi Wang, and Latifur Khan, 'A primal-dual subgradient approach for fair meta learning', in ICDM, (2020).
[50] Chen Zhao, Changbin Li, Jincheng Li, and Feng Chen, 'Fair meta-learning for few-shot classification', in ICKG, (2020).
[51] Fan Zhou, Chengtai Cao, Kunpeng Zhang, Goce Trajcevski, Ting Zhong, and Ji Geng, 'Meta-GNN: On few-shot node classification in graph meta-learning', in CIKM, (2019).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "C2mbNoywTN",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200351",
"forum_link": "https://openreview.net/forum?id=C2mbNoywTN",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Neural Topical Expansion Framework for Unstructured Persona-Oriented Dialogue Generation",
"authors": [
"Minghong Xu",
"Piji Li",
"Haoran Yang",
"Pengjie Ren",
"Zhaochun Ren",
"Zhumin Chen",
"Jun Ma"
],
"abstract": "Unstructured Persona-oriented Dialogue Systems (UPDS) has been demonstrated effective in generating persona consistent responses by utilizing predefined natural language user persona descriptions (e.g., “I am a vegan”). However, the predefined user persona descriptions are usually short and limited to only a few descriptive words, which makes it hard to correlate them with the dialogues. As a result, existing methods either fail to use the persona description or use them improperly when generating persona consistent responses. To address this, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), which is able to extend the predefined user persona description with semantically correlated content before utilizing them to generate dialogue responses. PEE consists of two main modules: persona exploration and persona exploitation. The former learns to extend the predefined user persona description by mining and correlating with existing dialogue corpus using a variational auto-encoder (VAE) based topic model. The latter learns to generate persona consistent responses by utilizing the predefined and extended user persona description. In order to make persona exploitation learn to utilize user persona description more properly, we also introduce two persona-oriented loss functions: Persona-oriented Matching (P-Match) loss and Persona-oriented Bag-of-Words (P-BoWs) loss which respectively supervise persona selection in encoder and decoder. Experimental results show that our approach outperforms state-of-the-art baselines, in terms of both automatic and human evaluations.",
"keywords": [],
"raw_extracted_content": "A Neural Topical Expansion Framework for\nUnstructured Persona-Oriented Dialogue Generation\nMinghong Xu1and Piji Li2∗and Haoran Yang3and Pengjie Ren4\nand Zhaochun Ren5∗and Zhumin Chen6and Jun Ma7\nAbstract. Unstructured Persona-oriented Dialogue Systems\n(UPDS) has been demonstrated effective in generating persona con-\nsistent responses by utilizing predefined natural language user per-sona descriptions (e.g., “I am a vegan”). However, the predefined\nuser persona descriptions are usually short and limited to only a few\ndescriptive words, which makes it hard to correlate them with the di-\nalogues. As a result, existing methods either fail to use the persona\ndescription or use them improperly when generating persona consis-\ntent responses. To address this, we propose a neural topical expansion\nframework, namely Persona Exploration and Exploitation (PEE),\nwhich is able to extend the predefined user persona description with\nsemantically correlated content before utilizing them to generate di-\nalogue responses. PEE consists of two main modules: persona explo-\nration and persona exploitation. The former learns to extend the pre-\ndefined user persona description by mining and correlating with ex-\nisting dialogue corpus using a variational auto-encoder (V AE) based\ntopic model. The latter learns to generate persona consistent re-\nsponses by utilizing the predefined and extended user persona de-\nscription. In order to make persona exploitation learn to utilize user\npersona description more properly, we also introduce two persona-\noriented loss functions: Persona-oriented Matching (P-Match) loss\nand Persona-oriented Bag-of-Words (P-BoWs) loss which respec-\ntively supervise persona selection in encoder and decoder. Experi-\nmental results show that our approach outperforms state-of-the-art\nbaselines, in terms of both automatic and human evaluations.\n1 Introduction\nPersona-oriented dialogue systems have attracted an increasing at-\ntention as they can generate persona consistent responses [3, 8,12, 14]. Existing persona-oriented dialogue systems can be classi-fied into two categories: Structured Persona-oriented Dialogue Sys-\ntems (SPDS) [19, 32, 33] and Unstructured Persona-oriented Di-\nalogue Systems (UPDS) [24, 31]. The former directly uses struc-\ntured user persona descriptions in the form of key-value pairs (e.g.,\n/angbracketleftSEX,M /angbracketright,/angbracketleftAGE,18/angbracketright), whereas the latter mines user persona de-\nscriptions from natural language utterances (e.g., “I like music.”, “ I\nlike the guitar .”, “I am a vegan.”). In this work, we focus on UPDS.\n1Shandong University, China, email: [email protected]\n2Tencent AI Lab, China, email: [email protected]\n3The Chinese University of Hong Kong, email: [email protected]\n4University of Amsterdam, The Netherlands, email: [email protected]\n5Shandong University, China, email: [email protected]\n6Shandong University, China, email: [email protected]\n7Shandong University, China, email: [email protected]\n∗Piji Li and Zhaochun Ren are corresponding authors.Table 1. An example of unstructured persona-oriented dialogue system.\n1. I like music.\nPersonas for 2. I like to skateboard.\nSpeaker B 3. I like the guitar.\n4 .Ia mavegan.\nA(u1): Wanna come over and watch the godfather?\nB(u2): I do not have a car, I have a skateboard.\nA(u3): Y ou can skateboard over. I do not live too far. 
B(u4): No thanks, I do not eat any animal products.
A(u5): I promise there are no animal products in my candy and soda.
B(u6): Most candy has some form of dairy. As a vegan I can not have that.

Recently, there have been some studies which utilize the predefined user persona descriptions to generate persona-oriented responses [10, 24, 31]. However, the given user persona descriptions are mostly short and limited to only a few descriptive words. As a result, the existing methods have a hard time utilizing the user persona descriptions when generating responses. On the one hand, they might fail to use the user persona descriptions. For example, the generative profile memory network proposed in [31] simply attends over the encoded persona descriptions in the decoder. It generates the response "I have a lot of candies. I am not sure." for the case in Table 1 without considering the user persona. On the other hand, they sometimes cannot use the user persona descriptions properly. For example, the persona-CVAE proposed in [24] uses a forced decoding strategy to copy persona descriptions. It generates the response "I like to skateboard. What are your hobbies?" for the case in Table 1, which uses the selected persona improperly and seriously affects its quality. The reason is that, with the limited descriptive words, it is hard for these models to understand and correlate the user persona descriptions when generating responses. We argue that this could be alleviated by extending the predefined user persona descriptions with semantically correlated content. As shown in Table 1, the target is to generate the last utterance (u6) based on the given persona descriptions and the historical utterances (u1-u5). One of the user persona descriptions for Speaker B is "I am a vegan". However, only using this user persona description is not enough to generate the human-like response (u6) because "vegan" and "candy" are not directly related. In order to generate u6, we need to take the following content into consideration simultaneously: (1) the word "vegan" in B's user persona description; (2) the semantic correlation between "vegan" and "dairy"; (3) Speaker A mentioned "animal products" and "candy" in the query utterance; (4) the correlation among "dairy", "animal products", and "candy".
In this work, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), which is able to extend the predefined user persona descriptions with semantically correlated content before utilizing them to generate dialogue responses. PEE consists of two main modules: persona exploration and persona exploitation. The former learns to extend the predefined user persona descriptions by mining and correlating with an existing dialogue corpus. Specifically, we employ a VAE-based topic model to conduct unsupervised semantic modeling and extend persona-related words by semantic matching. The latter learns to generate persona consistent responses by utilizing the predefined and extended persona information.
Specifically, we design a mutual-reinforcement multi-hop memory retrieval mechanism which retrieves information from the two types (predefined and extended) of personas by considering their mutual influence. Furthermore, in order to make persona exploitation learn to utilize user persona descriptions more properly, we also introduce two persona-oriented loss functions: the P-Match loss and the P-BoWs loss. The P-Match loss supervises the choice of predefined persona sentences in the encoder. The P-BoWs loss supervises the decoder to generate more persona-related words.

The main contributions of this paper are as follows:
- We propose a Persona Exploration and Exploitation (PEE) framework which can explore and exploit persona information to generate informative persona-oriented responses.
- We employ a VAE-based topic model to conduct efficient unsupervised semantic learning for external persona information mining and distillation.
- We propose two learning strategies for persona exploitation: a mutual-reinforcement multi-hop memory retrieval mechanism and two persona-oriented loss functions.

2 Related Work

As a challenging task in the area of natural language processing, open-domain dialogue systems have attracted great attention from researchers recently [15, 20, 22, 27]. But there are still some limitations and challenges in this area. Among the many issues, the lack of consistency is one of the most challenging difficulties. Therefore, persona-based dialogue systems have been proposed to generate persona-consistent and human-like responses [5, 16, 17, 26, 30]. Li et al. [8] learn a user embedding to represent the persona implicitly for each user, without using explicit persona information. Later, researchers model user embeddings with explicit persona information to generate responses. According to the format of the persona information, those methods can be classified into two categories: Structured Persona-oriented Dialogue Systems (SPDS) and Unstructured Persona-oriented Dialogue Systems (UPDS).

In SPDS, Wang et al. [28] group users according to the gender attribute, and the dialogue features in the same group can be shared. Qian et al. [19] endow the user with explicit structured persona information (a key-value table) and design a profile detection module to select persona information and inject it into the decoding process. Luo et al. [12] encode user persona descriptions into distributed embeddings and take advantage of the conversation histories of other users with similar profiles; their model can adopt different recommendation policies based on the user profile. Due to the lack of a large-scale persona-labelled dataset, Zheng et al. [32] introduce a dataset where persona information is formulated as key-value pairs from dialogue content, and they devise two techniques to capture and address trait-related information. In UPDS, Zhang et al. [31] contribute a persona-chat dataset with natural-sentence persona information, and they propose a generative profile memory network to incorporate persona information into responses. Lin et al. [10] model learning different personas as different tasks via a meta-learning algorithm without using explicit persona information, since the dialogue itself can reflect some persona information. In this way, their model can generate personalised responses by leveraging only a few dialogue samples instead of human-designed persona descriptions. To generate diverse and sustainable conversations, Song et al. [24] propose a memory-augmented architecture to exploit persona information and utilize a conditional variational autoencoder which can address the one-to-many generation problem.

Prior studies are trained purely on the predefined persona corpus, but the limited information leads to generating uninformative responses. Different from them, we employ a VAE-based topic model to extend persona information and propose two strategies (a mutual-reinforcement multi-hop memory retrieval mechanism and two persona-oriented loss functions) to integrate persona information into responses.
3 Method

3.1 Overview

We assume that a conversation is conducted between two users. Given a target user, we denote the user's persona descriptions as P = (P_1, P_2, ..., P_{n_p}). Each persona sentence P_j is formulated as P_j = (p^j_1, p^j_2, ..., p^j_{l_p}), where p^j_i refers to a word. Suppose there are already k turns in a dialogue, so we have the historical utterances X = (X_1, X_2, ..., X_k), where each utterance X_i is depicted as X_i = (x^i_1, x^i_2, ..., x^i_{l_x}) and x^i_j denotes a word. Accordingly, unstructured persona-oriented dialogue generation aims to predict the (k+1)-th utterance, i.e., the response Y = (y_1, y_2, ..., y_{l_y}), according to the predefined persona descriptions P and the historical utterances X:

p(Y | X, P) = \prod_{i=1}^{l_y} p(y_i | X, P, y_1, ..., y_{i-1}).    (1)

As illustrated in Figure 1, our PEE framework mainly consists of two stages: persona exploration and persona exploitation. Persona exploration employs a VAE-based topic model to conduct unsupervised semantic modeling and obtains topic-relevant word representations. Then it extends persona-related words by semantic matching based on the predefined persona descriptions. Persona exploitation contains three components: (1) a multi-source sequence encoder, which encodes the predefined persona descriptions into two kinds of key-value memories and encodes the historical utterances into hidden vectors; (2) persona information retrieval, which selects predefined persona descriptions based on the historical utterances and considers the impact of personalized information involved in the history; (3) a persona-oriented response decoder, which exploits the predefined and explored external persona information to generate responses based on the specially designed mutual-reinforcement multi-hop memory retrieval mechanism. Moreover, two new optimization objectives, the persona-oriented matching loss (P-Match) and the persona-oriented bag-of-words loss (P-BoWs), are proposed to impel our model to exploit the persona information more precisely. We will introduce the technical details in the following sections.

Figure 1. An overview of our PEE framework. It consists of two stages: persona exploration and persona exploitation (multi-source sequence encoder, persona information retrieval and persona-oriented response decoder).

3.2 Persona Exploration

Based on the predefined persona descriptions, the target of the persona exploration stage is to extend more persona-related words. Therefore, the key is to investigate an effective method for the semantic learning of words. The extended persona words must lie in the same topic as the predefined persona information, so as to guarantee the topic consistency of the conversations. Topic modeling methods [25] are appropriate here. Therefore, inspired by [23], we employ a topic model based on a variational auto-encoder (VAE) [7] and make adjustments according to our task to conduct the unsupervised global semantic modeling. Compared to traditional methods such as Latent Dirichlet Allocation (LDA) [2], a VAE-based topic model is less time-consuming to train and more flexible for inferring latent representations for new documents.

As shown in the upper side of Figure 1, the input of the VAE-based topic model is a document representation v, and the output v' is the reconstruction of the input. We regard each conversation as a document and represent it by tf-idf features. The encoding process can be formalized as:

h_v = f_h(v),
\mu = f_\mu(h_v),  log(\sigma^2) = f_\sigma(h_v),
z = \mu + \sigma \cdot \epsilon,  \epsilon ~ N(0, 1),    (2)

where f_*(\cdot) denotes a non-linear transformation, \mu and \sigma are the mean and standard deviation vectors of a multivariate normal distribution respectively, and z is a latent vector sampled from that distribution by the reparameterization trick. We use the latent vector z to reconstruct the document:

h'_v = f_{h'}(z),
v' = f_{v'}(h'_v).    (3)

We learn all parameters via optimizing the evidence lower bound (ELBO) [7].
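To make the two passes concrete, the following minimal PyTorch sketch mirrors Eq. (2)-(3). It is an illustration rather than the authors' implementation; the layer sizes, activation choices, and all identifiers are assumptions.

import torch
import torch.nn as nn

class TopicVAE(nn.Module):
    # Sketch of the VAE-based topic model of Eq. (2)-(3): an MLP encoder maps
    # a tf-idf document vector v to (mu, log sigma^2), a topic vector z is
    # sampled with the reparameterization trick, and a decoder reconstructs v.
    def __init__(self, vocab_size, hidden_size=256, num_topics=50):
        super().__init__()
        self.f_h = nn.Sequential(nn.Linear(vocab_size, hidden_size), nn.ReLU())
        self.f_mu = nn.Linear(hidden_size, num_topics)
        self.f_sigma = nn.Linear(hidden_size, num_topics)
        self.f_h_prime = nn.Sequential(nn.Linear(num_topics, num_topics), nn.Tanh())
        self.f_v_prime = nn.Linear(num_topics, vocab_size)
        # f_v_prime.weight has shape (vocab_size, num_topics): row i is a
        # K-dimensional topic-based representation of word i, i.e., a column
        # of the word-topic matrix W drawn after training (see below).

    def forward(self, v):
        h_v = self.f_h(v)                                  # h_v = f_h(v)
        mu, log_var = self.f_mu(h_v), self.f_sigma(h_v)    # Eq. (2)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        v_prime = self.f_v_prime(self.f_h_prime(z))        # Eq. (3)
        # KL term of the ELBO against the standard normal prior N(0, I)
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)
        return v_prime, kl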
After training, we draw a word-topic weight matrix W \in R^{K x |V'|} from the output layer f_{v'}. The matrix represents the topical saliency of each word, where K is the number of topics, V' is the vocabulary of the topic model, and |V'| is the vocabulary size. Each column u \in R^K of W can be regarded as a topic-based representation of the corresponding word.

Given the topic-relevant word representations, we extend words for every dialogue. After removing stop-words in the dialogue, we filter a vocabulary set V_P \subset V' from the predefined persona descriptions. For each word w \in V_P, we select the m most relevant external words based on the cosine similarities of the topic-relevant word representations. Then, we re-rank all external persona words of V_P according to their cosine similarity scores. If an external word is selected more than once, we just record its highest score. Thereafter, we select the top n_w words among them. Finally, we convert each extended persona word into a key-value representation by two multi-layer perceptron neural networks and store these representations in an external persona words memory M_e.
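A small sketch of this extension step is given below (illustrative only; the helper name extend_persona_words and the dict-based word_vecs interface are assumptions, with word_vecs mapping each word in V' to its K-dimensional column of W):

import numpy as np

def extend_persona_words(persona_vocab, word_vecs, m=10, n_w=100):
    # persona_vocab: the set V_P of persona words after stop-word removal.
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    best = {}
    for w in persona_vocab:
        scored = [(u, cos(word_vecs[w], word_vecs[u]))
                  for u in word_vecs if u not in persona_vocab]
        # top-m most relevant external words for this persona word
        for u, score in sorted(scored, key=lambda t: -t[1])[:m]:
            best[u] = max(best.get(u, -1.0), score)  # keep only the highest score
    ranked = sorted(best.items(), key=lambda t: -t[1])
    return [u for u, _ in ranked[:n_w]]              # the top n_w extended words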
3.3 Persona Exploitation

Given the predefined persona descriptions and the extended persona-relevant words, persona exploitation aims to integrate them to generate informative responses. In this section, we detail the three components of the persona exploitation stage: the multi-source sequence encoder, persona information retrieval, and the persona-oriented response decoder.

Multi-Source Sequence Encoder. The input contains persona descriptions and historical utterances; we design two independent encoders for them.

Persona memory encoder. We encode the predefined persona information into sentence- and word-granularity representations and store them in two memories respectively. For each sentence P_i, we obtain a sentence representation e_{P_i} by a bidirectional Gated Recurrent Network (Bi-GRU) [4]. Then we convert e_{P_i} into a key m^S_i and a value c^S_i by two multi-layer perceptron neural networks, and store them in the sentence-granularity persona memory M_s. Simultaneously, for a word p^i_j, we obtain the word representation e_{p^i_j} from the j-th step of the Bi-GRU for the i-th sentence. Same as above, we convert each word representation into a key and a value, and store them in the word-granularity persona memory M_w.

Historical utterances encoder. In order to capture the relationships among the historical utterances X, we use a hierarchical recurrent encoder [21] to conduct the semantic modeling.
From the second level of the hierarchical Bi-GRU, we obtain the final representation e_X of the whole historical utterances and a sentence vector C_i for each utterance X_i.

Persona Information Retrieval. After obtaining the representations of the historical utterances and the sentence-granularity persona memory via the previous component, we use the historical utterances to select persona information for the response. Considering that key-value memory retrieval is a frequent component in the following modules, we provide its general definition here. Assume that the query vector is q and the memory M contains keys m and values c; the retrieval operation retri(q, M) = o is defined as:

o = \sum_i a_i c_i,
a_i = exp(s_i) / \sum_j exp(s_j),
s_j = q^T m_j,    (4)

where the output vector o is a weighted sum of the values in the memory and represents the retrieved information.

Figure 2. An overview of Persona Information Retrieval.

As shown in Figure 2, we use each historical utterance to retrieve the user's persona information in turn. During the chat process, some of the persona information used in the history has an impact on the choice of persona information for the response. In order to take advantage of this impact, at the i-th step we combine the historical utterance representation C_i and the result of the previous retrieval step o_{i-1} as the query vector q_i, which can be formalized as:

q_i = C_i,             if i = 1;
q_i = C_i + o_{i-1},   if i > 1.    (5)

Then we retrieve the sentence-granularity persona memory M_s with the query vector q_i:

o_i = retri(q_i, M_s).    (6)

Finally, we concatenate the result of the last retrieval step o_k and the whole historical utterances representation e_X:

s_0 = [e_X; o_k],    (7)

where s_0 is a merged vector used as the initial state of the decoder.
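The following PyTorch sketch shows retri (Eq. 4) and the sequential selection of Eq. (5)-(6); it is an illustration, and the tensor shapes and names are assumptions.

import torch
import torch.nn.functional as F

def retri(q, keys, values):
    # Key-value memory retrieval of Eq. (4); q: (d,), keys/values: (slots, d).
    a = F.softmax(keys @ q, dim=0)    # s_j = q^T m_j, then softmax over slots
    return a @ values, a              # o = sum_i a_i c_i, plus the weights

def select_persona(utterance_vecs, keys, values):
    # Sequential retrieval of Eq. (5)-(6): from the second utterance on, the
    # previous retrieval result o_{i-1} is added to the query, so persona
    # information already used in the history influences the current choice.
    o = None
    for C_i in utterance_vecs:        # one sentence vector per historical utterance
        q = C_i if o is None else C_i + o
        o, a = retri(q, keys, values)
    return o, a                       # o_k and the last-step match weights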
Persona-Oriented Response Decoder. The decoder is a GRU-based sequence prediction framework with an attention mechanism over the historical utterances and a mutual-reinforcement multi-hop memory retrieval mechanism. Given the current input y_{t-1} as well as the previous hidden state s_{t-1}, the recurrent calculation of the GRU is defined as:

s_t = GRU(y_{t-1}, s_{t-1}).    (8)

We then design an attention mechanism to absorb relevant information from the historical utterances, and a mutual-reinforcement multi-hop memory retrieval mechanism to obtain the relevant persona information from the predefined and explored external persona information.

Attention over the historical utterances produces a historical utterances vector u_X at each decoding step by attending to the historical utterances. We formalize it as:

u_X = \sum_{i=1}^n a_i h^X_i,
a_i = exp(s_i) / \sum_{j=1}^n exp(s_j),
s_j = v^T tanh(W_s s_t + W_t h^X_j + b),    (9)

where h^X_j (j = 1, 2, ..., n) is the j-th word hidden state of the historical utterances, obtained from the first level of the hierarchical encoder for X.
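A minimal PyTorch sketch of this additive attention is shown below (illustrative; the single shared dimensionality and the parameter shapes are assumptions, with the bias of W_t playing the role of b):

import torch
import torch.nn as nn

class HistoryAttention(nn.Module):
    # Additive attention of Eq. (9) over the word-level hidden states of the
    # historical utterances.
    def __init__(self, dim):
        super().__init__()
        self.W_s = nn.Linear(dim, dim, bias=False)
        self.W_t = nn.Linear(dim, dim, bias=True)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, s_t, h_X):                 # s_t: (d,), h_X: (n, d)
        scores = self.v(torch.tanh(self.W_s(s_t) + self.W_t(h_X))).squeeze(-1)
        a = torch.softmax(scores, dim=0)         # attention weights a_i
        return a @ h_X                           # u_X = sum_i a_i h^X_i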
Mutual-reinforcement multi-hop memory retrieval. Recall that we build an external persona words memory M_e in persona exploration and a word-granularity persona memory M_w in the encoder. There is an association between the two memories. For example, if we retrieve a word in the predefined persona descriptions which is related to the current conversation, the information in the external persona memory related to this word will be more likely to be applied, and vice versa. Therefore, the results of the two types of persona information retrieval are mutually influential, and we propose a mutual-reinforcement multi-hop memory retrieval mechanism to model this influence.

First, we use the current hidden state s_t as the query vector q to retrieve M_w and M_e respectively:

o_w = retri(q, M_w),
o_e = retri(q, M_e).    (10)

Considering that the result of one memory retrieval (e.g., o_w) will affect the next retrieval of the other memory (e.g., M_e), we update the query vector by adding the two retrieved results o_w and o_e:

q_new = q_old + o_w + o_e.    (11)

This update means that the results of the two retrievals will affect each other in the next hop. In our experiments, we use three hops unless otherwise stated.

Finally, based on the exploitation of the predefined and extended persona information, the output word distribution p_{y_t} at time step t of the decoder is produced by:

\tilde{s}_t = f_o([s_t; u_X; o_w; o_e]),
p_{y_t} = softmax(\tilde{s}_t),    (12)

where f_o is the neural non-linear operation of the output layer.
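The sketch below mirrors the multi-hop update of Eq. (10)-(11), reusing the retri helper sketched earlier (an illustration; names are assumptions):

def multi_hop_retrieval(s_t, M_w, M_e, hops=3):
    # Mutual-reinforcement multi-hop retrieval: M_w and M_e are (keys, values)
    # pairs for the word-granularity persona memory and the external persona
    # words memory. Each hop folds both retrieval results back into the query,
    # so the two retrievals influence each other on the next hop.
    q = s_t
    for _ in range(hops):
        o_w, _ = retri(q, *M_w)       # Eq. (10)
        o_e, _ = retri(q, *M_e)
        q = q + o_w + o_e             # Eq. (11): q_new = q_old + o_w + o_e
    return o_w, o_e                   # concatenated with s_t and u_X in Eq. (12)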
3.4 Persona-Oriented Loss

In order to impel the model to exploit the persona information more precisely, besides the Negative Log-Likelihood (NLL) loss, we propose two new persona-oriented loss functions: the Persona-oriented Matching loss (P-Match) and the Persona-oriented Bag-of-Words loss (P-BoWs). The P-Match loss supervises the choice of predefined persona sentences in the persona information retrieval module, and the P-BoWs loss supervises the decoder to generate more persona-related words.

P-Match Loss. Recall that in the persona information retrieval module (Eq. 6), we get a match weight over the sentence-granularity persona memory M_s at every step. Assume that the match weight at the last step is a^s \in R^{|P|}. Intuitively, if the ground-truth response contains information from persona sentence P_i, then a^s_i should obtain a large value. Is it possible to employ the relation between the ground-truth response and the persona sentences to improve the modeling of persona information retrieval? To tackle this, we design the persona-oriented matching loss (P-Match). The 0-1 label a \in R^{|P|} is decided based on a threshold \theta_a on the similarity between the persona sentences and the ground-truth response. The Jaccard index (see http://en.wikipedia.org/wiki/Jaccard_index) is employed for the similarity calculation. The P-Match loss is defined as:

L_{P-Match} = - \sum_{i=1}^{|P|} a_i log a^s_i.    (13)
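A sketch of the labeling and loss computation follows (illustrative; the whitespace tokenization is an assumption, and the default threshold matches the experimental settings):

import torch

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / max(len(a | b), 1)

def p_match_loss(match_weights, persona_sentences, response, theta_a=0.03):
    # P-Match loss of Eq. (13): persona sentences whose Jaccard similarity
    # with the ground-truth response exceeds theta_a get label 1, and the
    # final-step match weights a^s are pushed toward those labels.
    labels = torch.tensor([1.0 if jaccard(p.split(), response.split()) > theta_a
                           else 0.0 for p in persona_sentences])
    return -(labels * torch.log(match_weights + 1e-12)).sum()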
P-BoWs Loss. Inspired by [13], we design a persona-oriented Bag-of-Words loss function to enhance the model's ability to capture persona information. Specifically, we label each response with a vocabulary-size vector b \in R^{|V|}, where the non-stop words in the current response get value 1. If a word carries persona-based information, we increase its weight to 1 + \lambda, where \lambda is a positive value. We use a multi-label classifier to generate the BoWs representation p_b (a sentence-level probability) by summing the scores over all positions of the generated sentence in the decoder: p_b = sigmoid( \sum_{t=1}^{|Y|} \tilde{s}_t ). We define the P-BoWs loss using cross entropy:

L_{P-BoWs} = - (1 / |V|) \sum_{i=1}^{|V|} [ b_i log p_{b_i} + (1 - b_i) log(1 - p_{b_i}) ].    (14)
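A sketch following the paper's weighting literally (so persona-related labels equal 1 + lambda, exceeding 1, exactly as in Eq. (14)); the names and the score-matrix interface are assumptions:

import torch

def p_bows_loss(decoder_scores, response_ids, persona_ids, vocab_size, lam=1.0):
    # decoder_scores: the (|Y|, |V|) matrix of pre-softmax scores s~_t; their
    # sum over time steps, squashed by a sigmoid, gives the sentence-level
    # bag-of-words prediction p_b. Non-stop response words get label 1 and
    # persona-related words the larger weight 1 + lam.
    p_b = torch.sigmoid(decoder_scores.sum(dim=0))
    b = torch.zeros(vocab_size)
    b[list(response_ids)] = 1.0
    b[list(persona_ids)] = 1.0 + lam
    eps = 1e-12
    return -(b * torch.log(p_b + eps)
             + (1.0 - b) * torch.log(1.0 - p_b + eps)).mean()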
3.5 Joint Training

The negative log-likelihood (NLL) loss is employed as the basic optimization objective:

L_{NLL} = - (1 / |Y|) \sum_{t=1}^{|Y|} y_t log p_{y_t}.    (15)

Finally, a unified optimization objective is designed by integrating the P-Match loss, the P-BoWs loss, and the NLL loss:

L = L_{NLL} + \gamma_1 L_{P-Match} + \gamma_2 L_{P-BoWs},    (16)

where \gamma_1 and \gamma_2 are trade-off parameters controlling the balance among the three loss functions.
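For completeness, the combination of the three objectives is a one-liner, with the trade-off weights set as in the experimental settings below (a sketch; the function name is illustrative):

def joint_loss(l_nll, l_p_match, l_p_bows, gamma1=0.1, gamma2=0.1):
    # Unified objective of Eq. (16): L = L_NLL + gamma1 * L_P-Match + gamma2 * L_P-BoWs
    return l_nll + gamma1 * l_p_match + gamma2 * l_p_bows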
4 Experiments

In this section, we first introduce the two datasets used in our experiments and list the setups and baseline models. Next, we evaluate the performance of the various models by automatic evaluation and human evaluation.

4.1 Datasets

Our experiments use two public multi-turn dialogue datasets: Persona-Chat [31] (https://github.com/facebookresearch/ParlAI/tree/master/projects/personachat) and DailyDialog [9] (http://yanran.li/dailydialog). The Persona-Chat dataset contains 10,907 dialogues between pairs of speakers, where 968 dialogues are set aside for validation and 1,000 for testing. Each speaker is described by 3-5 persona sentences (e.g., “I like reading.” or “I am a nurse.”). The total number of personas is 1,155, with 100 personas for validation and 100 for testing. The DailyDialog dataset is constructed from raw data crawled from various websites, and serves for English learners to practice English dialogue in daily life. It contains 13,118 multi-turn dialogues without persona descriptions; the number of turns is roughly 8 and the average number of tokens per utterance is about 15.

Our experiments are performed on the Persona-Chat dataset. In order to expand the knowledge space, we merge DailyDialog and the training set of Persona-Chat as the basic knowledge source to pre-train the topic model for persona exploration.

4.2 Baselines

We consider the following comparison methods; their inputs consist of the predefined persona descriptions, the historical conversation utterances, and the current query utterance.
Seq2Seq [1]: the standard Sequence-to-Sequence model with attention. We concatenate the persona descriptions and historical utterances as a sequence input and generate the response.
HRED [21]: the Hierarchical Recurrent Encoder-Decoder model with attention. The input contains all sentences in the persona and the conversation history.
Profile Memory [31]: the Generative Profile Memory network, a generative model that encodes each persona description as an individual memory representation in a memory network.
Per.-CVAE [24]: Persona-CVAE, a memory-augmented architecture which focuses on the diverse generation of conversational responses based on the chatbot's persona. In our experiments, we sample one time from the latent z to generate a response.
PED: the Persona-oriented Encoder-Decoder model, i.e., our PEE framework without persona exploration, the P-BoWs loss, and the P-Match loss. Without the external persona words memory, the mutual-reinforcement multi-hop memory retrieval mechanism is equivalent to a normal multi-hop memory retrieval mechanism.
PED+PE: our PEE framework without the P-BoWs loss and the P-Match loss.
PED+PE+P-BoWs: our PEE framework without the P-Match loss.
PED+PE+P-Match: our PEE framework without the P-BoWs loss.
PEE: PED + PE + P-BoWs + P-Match, i.e., our proposed PEE framework.

4.3 Experimental Settings

We treat each complete dialogue (including personas) as a document, remove the stop words, and select the top 10,000 frequent words to train the VAE-based topic model. For the number of topics, we follow previous settings [29, 23] and set K = 50. In our experiments, we use GloVe [18] for word embeddings and employ bi-directional GRUs for the encoders; the hidden state size is 512 and the batch size is 64. We use the Adam optimizer [6] to train the model and set the learning rate to 0.0001. For testing, we use beam search with beam size 2. All other hyperparameters are tuned on the development set by grid search. The number of extended persona-related words for each dialogue, n_w, is 100. The additional weight \lambda in the P-BoWs target is 1, and the threshold \theta_a in the P-Match labeling process is 0.03. During training, the trade-off parameters \gamma_1 and \gamma_2 are both 0.1.

4.4 Evaluation Metrics

We use different evaluation metrics (automatic and human) to demonstrate the effectiveness of our model. In this subsection, we give a brief introduction to these metrics.

Automatic Metrics. We report three different automatic metrics:
BLEU@N: BLEU is an algorithm which has been widely used in machine translation and dialogue systems to evaluate the quality of generated text. It measures the N-gram overlap between the generated response and the ground truth.
F1-Measure: It measures the accuracy of the generated response considering both precision and recall. We treat the predicted and target responses as bags of tokens and compute their F1 score.
Embedding-based similarity: Embedding Average (Average), Embedding Extrema (Extrema), and Embedding Greedy (Greedy) [11]. These embedding-based metrics measure the semantic similarity between the generated response and the ground truth.

Human Metrics. It is not enough to evaluate dialogue systems only automatically, so we randomly sample about 100 dialogues from the test data and hire 5 volunteers to evaluate them. We use four metrics: fluency, engagingness, consistency, and persona detection.
Fluency: It measures the quality of the generated sentence, e.g., whether the grammar is correct.
Engagingness: It measures whether the generated sentence is appropriate and interesting.
Consistency: It measures whether the generated sentence has some relationship with the history and the persona descriptions.
Persona detection: For each dialogue, given the generated responses and two sets of persona sentences (one real and one fake), we ask the annotators to choose which one is the real description of the chatbot.
The first three metrics are scored between 1 and 5. For persona detection, a score of 1 means the choice is correct, 0 means the choice is wrong, and 0.5 means the annotator cannot judge.
5 Results and Analysis

5.1 Experimental Results and Ablation Study

Table 2. Automatic evaluation results. The best results are bold.

Model | BLEU1 | BLEU2 | BLEU3 | BLEU4 | F1 | Average | Extrema | Greedy
Seq2Seq | 20.1381 | 9.9395 | 5.2887 | 2.9840 | 17.7972 | 0.8551 | 0.4980 | 0.6751
HRED | 19.0920 | 9.5668 | 5.0191 | 2.7779 | 17.9184 | 0.8531 | 0.4882 | 0.6714
Profile Memory | 20.8713 | 9.8526 | 4.9942 | 2.6852 | 17.1553 | 0.8675 | 0.4835 | 0.6752
Per.-CVAE | 17.2315 | 7.2602 | 3.2081 | 1.4541 | 14.6121 | 0.8458 | 0.4688 | 0.6516
PED | 21.4611 | 10.6992 | 5.7845 | 3.3344 | 18.4759 | 0.8593 | 0.4993 | 0.6838
PED+PE | 21.8970 | 10.9987 | 5.9965 | 3.5334 | 18.4140 | 0.8643 | 0.4999 | 0.6856
PED+PE+P-BoWs | 21.9768 | 11.0710 | 6.0154 | 3.5574 | 18.2781 | 0.8626 | 0.4986 | 0.6822
PED+PE+P-Match | 22.4668 | 11.2560 | 5.9846 | 3.3031 | 18.2615 | 0.8592 | 0.4940 | 0.6803
PEE | 23.1926 | 11.5166 | 6.1248 | 3.4977 | 18.4130 | 0.8691 | 0.5010 | 0.6906

Table 3. Human evaluation on four aspects: Fluency, Engagingness, Consistency, and Persona Detection (PD). Values in parentheses are standard deviations.

Model | Fluency | Engagingness | Consistency | PD (%)
Seq2Seq | 4.08 (0.71) | 3.02 (0.96) | 3.00 (1.03) | 52.94 (0.32)
HRED | 3.96 (0.71) | 2.73 (1.05) | 2.60 (1.16) | 64.71 (0.32)
Profile Memory | 4.04 (0.68) | 3.08 (1.01) | 3.10 (1.10) | 58.82 (0.40)
Per.-CVAE | 3.61 (1.02) | 2.63 (1.09) | 2.78 (1.29) | 85.29 (0.34)
PEE | 4.13 (0.76) | 3.46 (1.07) | 3.44 (1.13) | 76.47 (0.36)

Table 4. Automatic evaluation results of PEE with different numbers of hops in the mutual-reinforcement multi-hop retrieval mechanism.

Hops | BLEU1 | BLEU2 | BLEU3 | BLEU4 | F1 | Average | Extrema | Greedy
PEE-1 | 22.5956 | 11.2877 | 6.0405 | 3.4315 | 18.32 | 0.8631 | 0.5009 | 0.6902
PEE-2 | 22.9758 | 11.4999 | 6.2383 | 3.6327 | 18.68 | 0.8654 | 0.4979 | 0.6861
PEE-3 | 23.1926 | 11.5166 | 6.1248 | 3.4977 | 18.41 | 0.8691 | 0.5010 | 0.6906
PEE-4 | 22.3422 | 11.0628 | 5.8804 | 3.3678 | 18.25 | 0.8618 | 0.4985 | 0.6824
PEE-5 | 22.2892 | 11.1789 | 5.9878 | 3.4148 | 18.55 | 0.8591 | 0.4993 | 0.6811

Automatic evaluation. Comparative automatic evaluation results are presented in Table 2. Our model outperforms the baselines on all automatic metrics. This demonstrates that our model generates more appropriate responses through persona exploration and exploitation. In particular, our model improves approximately 15.17% over Seq2Seq on BLEU1. Compared with PED, PED+PE has better scores on most metrics, because the explored persona information contributes to generating more informative responses. Compared with PED+PE, both PED+PE+P-BoWs and PED+PE+P-Match perform better, because the P-BoWs loss and the P-Match loss supervise the model to exploit persona information more precisely.

According to the automatic evaluation results of PEE with different numbers of hops in the mutual-reinforcement multi-hop retrieval mechanism (Table 4), PEE-2 outperforms PEE-1 on most metrics. This demonstrates that the interaction between the two types of persona information improves the performance of our model. Analyzing the various indicators, PEE works best with 3 hops. When the number of hops exceeds 3, the performance drops; that may be because the query vector contains little information about the current decoder state s_t after several update operations.

Human evaluation. The results of the human evaluation are listed in Table 3. Our model significantly outperforms most of the baselines in terms of all the metrics. In particular, our model improves approximately 30.01% over Profile Memory on persona detection. This demonstrates that persona exploration and exploitation are beneficial for improving the usage of persona information and enriching the responses. Per.-CVAE has the highest persona detection score, but it pays too much attention to persona, resulting in very poor grammar, relevance, and fluency of the generated responses.

5.2 Persona Analysis

In order to further evaluate the ability of the model to incorporate persona, for each multi-turn dialogue we count the number of words that appear in both the persona sentences and the generated responses, and divide this number by the total number of words in the persona sentences to get the persona use ratio. It measures the probability of persona words being used and punishes the repeated use of the same persona information in responses of different turns. We calculate the average persona use ratio of all models over different numbers of turns and show the results in Figure 3. Per.-CVAE pays too much attention to persona, which seriously affects the quality of the generated responses, so we do not consider Per.-CVAE here. We can see that our model outperforms all the baseline methods. There are three reasons: first, the persona information retrieval module considers the influence of persona used in the history when selecting persona information; second, the external persona words help to utilize persona descriptions that have an indirect relationship with the current topic; third, the P-BoWs loss and the P-Match loss encourage the model to generate more persona-related words.

Figure 3. Average persona use ratio of all models in different turns.
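Under one reading of the metric (the exact tokenization and stop-word handling are not specified in the paper), it can be computed as:

def persona_use_ratio(persona_sentences, responses, stop_words=frozenset()):
    # Distinct persona words that appear in the dialogue's generated responses,
    # divided by the total number of persona words; counting each word only
    # once penalizes reusing the same persona information across turns.
    persona_words = {w for s in persona_sentences for w in s.split()} - stop_words
    used = {w for r in responses for w in r.split()} & persona_words
    return len(used) / max(len(persona_words), 1)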
5.3 Case Study

Table 5 depicts some cases generated by PEE, Seq2Seq, HRED, and Profile Memory. From the comparisons, we can see that the PEE model can use explored persona information to generate more persona-oriented, informative responses. For example, in Case 1, one of the persona descriptions for Speaker B is “I play bluegrass music.”, and Speaker A mentioned “kids” and “mom” in the query utterance. The explored persona word “piano” is related to “music” and “family”, and the word “family” correlates with “kids” and “mom”. So the response generated by PEE follows the clues above and conveys persona information at the same time. What's more, it leads the topic to a new field that the speakers are familiar with, giving the next reply more content to build on. When the previous topic “work” is drawing to an end, our model can use the persona and its extended words to shift the topic to “music”, whereas the responses of the other baseline models do not reflect persona information. Similarly, in Case 2, the explored persona word “horror” is related to “scary movies” and “stories”, so our PEE model uses the word “horror” to enrich the response. In order to show the contribution of the explored persona words more clearly and directly, Figure 4 visualizes the matching weights over the external persona words memory at the last step of the mutual-reinforcement multi-hop memory retrieval mechanism.

Table 5. Case studies.

Case 1
Personas for Speaker B: I worked at hollister in high school. / I am a professional skater. / I play bluegrass music. / I do not like chips.
Historical utterances:
A: Hi. How is your night going?
B: Good, just left a bluegrass concert.
A: Ooh. Interesting. What else do you do for fun?
B: Skate. I love it. What do you do?
A: That is so exciting! I am currently a stay at home mom.
B: cool, how many kids do you have?
A: I have three kids and pregnant with my fourth. I love being a mom.
Explored words: concert, band, piano, guitar, rap ...
Responses:
seq2seq: That is cool. Do you have any pets?
HRED: That is cool. Do you have any pets?
Profile Memory: I have a lot of kids, but i have a daughter that is so cool.
Per.-CVAE: I am a coach I might play tomorrow. I do roofing.
PEE: Wow that is a great thing. I like to play piano with my family.

Case 2
Personas for Speaker B: I write short stories for a living. / I used to work at a carnival. / I like to drink scotch to relax. / I like dark superhero movies.
Historical utterances:
A: Hi, how are you doing tonight? I am good so far.
B: I am good, relaxing with a glass of scotch to end the night.
A: That is nice and relaxing I love to get a good workout.
B: Yes. I have been busy working on a new short story to release.
A: That sounds very interesting hope all is good for you.
B: Yeah. What kind of movies are you interested in?
A: I like to watch romance and some scary movies is okay for me.
Explored words: ebook, thriller, horror, creepy, comic ...
Responses:
seq2seq: I like movies too. I am a loner.
HRED: That is cool. I am a fan of a movie.
Profile Memory: I have been watching movies on TV.
Per.-CVAE: I like movies too, I am not sure I like that. What are your hobbies?
PEE: That is scary. I write a lot of horror stories.

Figure 4. Visualization of matching weights on the external persona words memory in the last step of mutual-reinforcement multi-hop memory retrieval.

6 CONCLUSION

In this work, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), for unstructured persona-oriented dialogue systems. Different from previous work trained purely on predefined persona descriptions, our model extends external persona information with a VAE-based topic model. By fusing predefined persona descriptions and external persona information, the responses our model generates can more accurately and properly represent the user persona while maintaining the consistency of the dialogue. Experimental comparisons and analysis demonstrate that our approach outperforms a set of state-of-the-art baselines in terms of both automatic metrics and human evaluations. For future work, we will extend persona information dynamically and jointly train persona exploration and exploitation.

7 ACKNOWLEDGEMENT

This work is supported by the Natural Science Foundation of China (61972234, 61902219, 61672324, 61672322), the Tencent AI Lab Rhino-Bird Focused Research Program (JR201932), the Foundation of the State Key Laboratory of Cognitive Intelligence, iFLYTEK, P.R. China (COGOSC-20190003), the Fundamental Research Funds of Shandong University, Ahold Delhaize, the Association of Universities in the Netherlands (VSNU), and the Innovation Center for Artificial Intelligence (ICAI).

REFERENCES
[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. In Advances in Neural Information Processing Systems 14, pages 601-608, 2002.
[3] E. Chu, P. Vijayaraghavan, and D. Roy. Learning personas from dialogue with attentive memory networks. arXiv preprint arXiv:1810.08717, 2018.
[4] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[5] C. K. Joshi, F. Mi, and B. Faltings. Personalization in goal-oriented dialog. arXiv preprint arXiv:1706.07503, 2017.
[6] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[7] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[8] J. Li, M. Galley, C. Brockett, G. Spithourakis, J. Gao, and B. Dolan. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994-1003, 2016.
[9] Y. Li, H. Su, X. Shen, W. Li, Z. Cao, and S. Niu. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the 8th International Joint Conference on Natural Language Processing, 2017.
[10] Z. Lin, A. Madotto, C.-S. Wu, and P. Fung. Personalizing dialogue agents via meta-learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5454-5459, 2019.
[11] C. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016.
[12] L. Luo, W. Huang, Q. Zeng, Z. Nie, and X. Sun. Learning personalized end-to-end goal-oriented dialog. arXiv preprint arXiv:1811.04604, 2018.
[13] S. Ma, X. Sun, Y. Wang, and J. Lin. Bag-of-words as target for neural machine translation. arXiv preprint arXiv:1805.04871, 2018.
[14] P. Mazare, S. Humeau, M. Raison, and A. Bordes. Training millions of personalized dialogue agents. arXiv preprint arXiv:1809.01984, 2018.
[15] C. Meng, P. Ren, Z. Chen, C. Monz, J. Ma, and M. de Rijke. Refnet: A reference-aware network for background based conversation. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020.
[16] K. Mo, Y. Zhang, S. Li, J. Li, and Q. Yang. Personalizing a dialogue system with transfer reinforcement learning. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 5317-5324, 2018.
[17] O. Olabiyi, A. Khazane, A. Salimov, and E. T. Mueller. An adversarial learning framework for a persona-based multi-turn dialogue model. arXiv preprint arXiv:1905.01992, 2019.
[18] J. Pennington, R. Socher, and C. Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543, 2014.
[19] Q. Qian, M. Huang, H. Zhao, J. Xu, and X. Zhu. Assigning personality/profile to a chatting machine for coherent conversation generation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4279-4285, 2018.
[20] P. Ren, Z. Chen, C. Monz, J. Ma, and M. de Rijke. Thinking globally, acting locally: Distantly supervised global-to-local knowledge selection for background based conversation. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020.
[21] I. V. Serban, A. Sordoni, Y. Bengio, A. C. Courville, and J. Pineau. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808, 2015.
[22] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. C. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069, 2016.
[23] N. A. Smith, D. Card, and C. Tan. Neural models for documents with metadata. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018.
[24] H. Song, W. Zhang, Y. Cui, D. Wang, and T. Liu. Exploiting persona information for diverse generation of conversational responses. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5190-5196, 2019.
[25] M. Steyvers and T. Griffiths. Probabilistic topic models. In T. Landauer, D. McNamara, S. Dennis, and W. Kintsch, editors, Latent Semantic Analysis: A Road to Meaning, 2006.
[26] J. Urbanek, A. Fan, S. Karamcheti, S. Jain, S. Humeau, E. Dinan, T. Rocktaschel, D. Kiela, A. Szlam, and J. Weston. Learning to speak and act in a fantasy text adventure game. arXiv preprint arXiv:1903.03094, 2019.
[27] O. Vinyals and Q. V. Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
[28] J. Wang, X. Wang, F. Li, Z. Xu, Z. Wang, and B. Wang. Group linguistic bias aware neural response generation. In Proceedings of the 9th SIGHAN Workshop on Chinese Language Processing, pages 1-10, 2017.
[29] X. Yan, J. Guo, Y. Lan, and X. Cheng. A biterm topic model for short texts. In Proceedings of the 22nd International Conference on World Wide Web, pages 1445-1456, 2013.
[30] M. Yang, Z. Zhao, W. Zhao, X. Chen, J. Zhu, L. Zhou, and Z. Cao. Personalized response generation via domain adaptation. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1021-1024, 2017.
[31] S. Zhang, E. Dinan, J. Urbanek, A. Szlam, D. Kiela, and J. Weston. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213, 2018.
[32] Y. Zheng, G. Chen, M. Huang, S. Liu, and X. Zhu. Personalized dialogue generation with diversified traits. arXiv preprint arXiv:1901.09672, 2019.
[33] Y. Zheng, R. Zhang, X. Mao, and M. Huang. A pre-training based personalized dialogue generation model with persona-sparse data. arXiv preprint arXiv:1911.04700, 2019.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "o6Kbj-FT1ib",
"year": null,
"venue": "ECAI 2012",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-098-7-372",
"forum_link": "https://openreview.net/forum?id=o6Kbj-FT1ib",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Hard and Easy k-Typed Compact Coalitional Games: The Knowledge of Player Types Marks the Boundary",
"authors": [
"Gianluigi Greco",
"Enrico Malizia",
"Francesco Scarcello",
"Luigi Palopoli"
],
"abstract": "Coalitional games model scenarios where rational agents can form coalitions so as to obtain higher worths than by acting in isolation. Once a coalition forms and obtains its worth, the problem of how this worth can be fairly distributed has to be faced. Desirable worth distributions are usually referred to as solution concepts. Recent research pointed out that, while reasoning problems involving such solution concepts are hard in general for games specified in compact form (e.g., graph games), some of them, in particular the core, become tractable when agents come partitioned into a fixed number k of types, i.e., of classes of strategically equivalent players. The paper continues along this line of research, by firstly showing that two other relevant solution concepts, the kernel and the nucleolus, are tractable in this setting and independently of the specific game encoding, provided worth functions are given as a polynomial-time computable oracles. Then, it analyzes a different setting where games are still k-typed but the actual player partitioning is not a-priori known. Within this latter setting, the paper addresses the question about how efficiently strategic equivalence between pairs of players can be recognized, and reconsiders the computational complexity of the core, the kernel, and the nucleolus. All such problems and notions emerged to be intractable, thereby evidencing that the knowledge of player types marks the boundary of tractability for reasoning about k-typed coalitional games.",
"keywords": [],
"raw_extracted_content": "Hard and Easy k-Typed Compact Coalitional Games:\nThe Knowledge of Player Types Marks the Boundary\nGianluigi Greco1and Enrico Malizia2and Francesco Scarcello2and Luigi Palopoli2\nAbstract. Coalitional games model scenarios where rational agents\ncan form coalitions so as to obtain higher worths than by acting in\nisolation. Once a coalition forms and obtains its worth, the problem\nof how this worth can be fairly distributed has to be faced. Desir-able worth distributions are usually referred to as solution concepts.\nRecent research pointed out that, while reasoning problems involv-\ning such solution concepts are hard in general for games specified\nin compact form (e.g., graph games), some of them, in particular\nthe core, become tractable when agents come partitioned into a fixednumberkoftypes , i.e., of classes of strategically equivalent players.\nThe paper continues along this line of research, by firstly showingthat two other relevant solution concepts, the kernel and the nucle-\nolus , are tractable in this setting and independently of the specific\ngame encoding, provided worth functions are given as a polynomial-time computable oracles. Then, it analyzes a different setting wheregames are still k-typed but the actual player partitioning is not a-\npriori known. Within this latter setting, the paper addresses the ques-\ntion about how efficiently strategic equivalence between pairs of\nplayers can be recognized, and reconsiders the computational com-\nplexity of the core, the kernel, and the nucleolus. All such problems\nand notions emerged to be intractable, thereby evidencing that theknowledge of player types marks the boundary of tractability for rea-\nsoning about k-typed coalitional games.\n1 Introduction\nCoalitional games have been adopted by the AI community as useful\nformal tools to analyze cooperative behavior. Once a coalition forms\nand obtains its worth, one has to face the problem of how this worth\ncan be fairly distributed. Several solution concepts, such as the core,\nthe kernel , and the nucleolus (see, e.g., [12]), have been introduced\nand thoroughly studied through the years with the aim of character-\nizing fair worth distributions.\nLooking at players’ decision processes about worth distributions,\nit is sensible to assume players’ reasoning resources not to come un-\nbounded and to use the tools of computational complexity as a viablemean to model and reason about this bounded rationality principle.\nIn particular, it is easily noted that computational questions are of\ninterest whenever the function specifying the worth associated with\neach possible coalition is encoded in some succinct way, e.g., when\nit is given in terms of polynomially computable functions over somecombinatorial structure. Indeed, all problems trivialize if we explic-\nitly represent the entire extent of a worth function, which requires ex-\n1Dipartimento di Matematica, Universit `a della Calabria, I-87036 Rende,\nItaly, email: [email protected]\n2D.E.I.S., Universit `a della Calabria, I-87036 Rende, Italy, emails:\n{emalizia,palopoli,scarcello }@deis.unical.itProblem C(FP)Ck(FP)∧ TBFCk(FP)∗\nIN-CORE co-NP-c [8] in P [15, 1] co-NP-c\nCORE -NONEMPT . co-NP-c [8] in P [15, 1] co-NP-c\nIN-KERNEL ΔP\n2-c [8] in P co-NP-h\nIN-NUCLEOLUS ΔP2-c [7] in P co-NP-h\nNUCLEOLUS -COMP . FΔP2-c [7] in FP NP-h\nFigure 1. Summary of results. 
Unfortunately, a large part of the complexity analysis carried out on compact coalitional games has indisputably demonstrated that computing with most of the aforementioned solution concepts is intractable in general. This emerges from the first column of the table reported in Figure 1, where IN-X denotes the problem of deciding membership in the solution concept X, CORE-NONEMPTINESS is the problem of deciding the non-emptiness of the core, and NUCLEOLUS-COMPUTATION is the problem of computing the nucleolus. There, note that hardness results have been shown for specific compact game settings (in particular, graph games [5] and marginal contribution nets [9]), while membership results hold over the whole class C(FP) of all those games whose worth functions are computable via polynomial-time FP oracles (see [8, 7]). As a matter of fact, however, all these results deal with settings where each player in the game may have a distinctive behavior.

On the contrary, it is everyday life experience that people (and agents!), in reasoning within a specific decision context, behave according to some (sometimes, few) behavioral schemas, which are often known in advance to the scenario analyst. For instance, in many applications agents are naturally clustered according to technological features (e.g., they model mobile phones sharing data in a wireless network, and are classified according to bandwidth and energetic features). Therefore, it is often the case that we have a large number of agents, but in fact they belong to a limited number of categories, usually called types, that determine their behavior in the game at hand. This setting being natural to many practical contexts and useful, it is sensible to ask whether, or to which extent, the complexity of reasoning with solution concepts for a given class of coalitional games is influenced by knowing that the number of players' types is small, formally, that it is bounded by some fixed constant.

This is precisely the perspective introduced by Shrot et al. [14], who defined the setting and mainly focused on graph games and games with synergies among coalitions [4], and then put forward by Ueda et al. [15] and by Aadithya et al. [1], who extended the analysis to arbitrary classes of games with FP worth functions: Let Ck(FP) ⊂ C(FP) be the subclass of C(FP) of all those games whose players can be partitioned into at most k types, with k being a fixed natural number, and let us say that a game in Ck(FP) is in type-based form if the type of each player is known a-priori. Then, our current knowledge is that IN-CORE and CORE-NONEMPTINESS are feasible in polynomial time over games in Ck(FP) that are moreover given in type-based form [15, 1]. In fact, extending the analysis to further solution concepts has been left as an open research issue [15].
In fact, extending the analysis to further so-lution concepts has been left as an open research issue [15].\nIn this paper, we start by addressing the above research issue, and\nour first contribution is to completely characterize the complexity ofthe kernel and the nucleolus. Indeed,\n⊿We show that I\nN-KERNEL ,IN-NUCLEOLUS ,a n dN UCLEOLUS -\nCOMPUTA TION are all feasible in polynomial time over games in\nCk(FP) that are given in type-based form (see the second column\nin Figure 1). Note that the nucleolus is always guaranteed to benon-empty (whenever some imputation exists) and to be a singlepoint contained in the kernel [12]. Thus, our results immediately\nentail that, in the given setting, a point in the kernel can also be\ncomputed efficiently.\nNote that the above tractability results assume that player types areknown a-priori. While this is certainly the case in many practical sce-narios, one might naturally wonder whether tractability results still\nhold if we know that players have a limited number of types, but we\ndo not know how they are actually partitioned, i.e., we do not knowthe type of each player. We address these questions too:\n⊿First, we focus on the basic problem of deciding whether two play-ers have the same type, and we show that it is intractable, formallyco-NP-complete over games in C(FP). Note that we already know\nfrom the literature [14] that the problem is intractable over gameswith synergies among coalitions. However, the result is hardly sur-\nprising given that such games are unlikely in C(FP), as the asso-\nciated worth function is already NP-hard to compute [4].\n⊿Then, we consider the problem of recognizing whether a game in\nC(FP) is actually in C\nk(FP), and we show that this is intractable\nas well (co-NP-complete).\nMotivated by the above bad news, we eventually consider a kind of\n“mixed” setting, where games actually belong to Ck(FP), but they are\nnot given in type-based form. That is, we know the maximum number\nkof distinct types in any game of the class, but we do not know\nthe type of each player. Even under the given promise, intractabilityresults still emerge, thereby evidencing that the knowledge of playertypes marks the boundary of tractability for reasoning about k-typed\ncoalitional games. In particular:\n⊿We show that deciding whether two players have the same type is\nco-NP-complete over games in C\nk(FP)(not in type-based form),\nwhere hardness holds under randomized reductions (see [16]). On\nthis class and under the same complexity model, computing thenumber of distinct player types is shown to be intractable too.\n⊿We reconsider all computation problems related to the core, the\nkernel, and the nucleolus, and we show that they are intractable\n(under randomized reductions) on the class C\nk(FP) for games that\nare not given in type-based form (see the third column in Figure 1).\nOrganization. Section 2 introduces the setting and the framework\nofk-typed coalitional games. The analysis of games given in type-\nbased form is reported in Section 3. 
2.2 k-Typed Games

Guided by the observation that obstructions to the tractability of coalitional games often emerge in scenarios where most players are "different", Shrot et al. [14] recently re-considered several problems for coalitional games, studying their computational complexity by taking the number of distinct player types as a parameter.

Formally, given a coalitional game G = ⟨N, v⟩, Shrot et al. [14] define two players i, j ∈ N as strategically equivalent in G (or, simply, as having the same type) if v(S ∪ {i}) = v(S ∪ {j}) holds for each coalition S ⊆ N such that S ∩ {i, j} = ∅. Then, a coalitional game is said to be k-typed if its players can be partitioned into k classes of pairwise strategically equivalent players. The intuition is that a number of intractable problems related to solution concepts of compact coalitional games might be efficiently solved on classes of k-typed games, whenever k is some fixed natural number.
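The definition suggests the following brute-force test of strategic equivalence (a sketch; names are illustrative). Note that it requires exponentially many worth-function calls, which matches the flavor of the hardness results announced in the introduction.

from itertools import combinations

def same_type(v, players, i, j):
    # i and j have the same type iff v(S | {i}) == v(S | {j}) for every
    # coalition S avoiding both i and j.
    rest = [p for p in players if p not in (i, j)]
    return all(v(frozenset(T) | {i}) == v(frozenset(T) | {j})
               for r in range(len(rest) + 1)
               for T in combinations(rest, r))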
2.3 Computational Setting and Representations

Complexity Classes. The class P (resp., NP) is the set of decision problems solvable by a deterministic (resp., non-deterministic) Turing machine in polynomial time, that is, in time ||x||^O(1), where ||x|| denotes the size of the input x. The class of problems whose complementary problems are in NP is co-NP. Moreover, Δ^P_2 is the class of problems solvable in polynomial time by a deterministic machine using an NP Turing machine as an oracle. To capture the complexity of computation problems, we consider instead deterministic transducers, i.e., deterministic Turing machines T equipped with a write-only output tape. Then, denote by FP (resp., FΔ^P_2) the class of all functions that can be computed by a deterministic transducer in polynomial time (resp., and by using an NP Turing machine as an oracle).

Game Representation. We assume that the input for any decision problem consists of a game G = ⟨N, v⟩, and that the game representation includes the list of players, so that, for every coalition S ⊆ N, ||S|| ≤ ||G|| holds. We say that G is an FP-game if the worth function belongs to FP. The class of all FP-games is denoted by C(FP).

Well-known classes of FP-games are graph and hypergraph games [5], marginal contribution nets [9], games in multi-issue domains [3], and weighted voting games [2]. For further compact representation schemes for coalitional games, we refer the interested reader to the classification described in [8].

3 Complexity Analysis of k-Typed Games

Recall that on arbitrary FP-games, the core, the kernel, and the nucleolus are intractable solution concepts, as evidenced in Figure 1. Our interest here is to re-consider these concepts over FP-games where the number of distinct types is bounded by some fixed natural number k. Formally, for any fixed natural number k, let Ck(FP) be the class of all FP-games that are furthermore k-typed.

In particular, as commonly done in the literature, a coalitional k-typed game G is viewed in this section as a tuple ⟨(N_1, ..., N_k), v⟩, where N_1, ..., N_k are disjoint sets of players, with all players in N_i having the same type. In this case, we say that G is given in type-based form. In fact, note that in this setting, one may always assume that the worth function is given in the form v_t : {1, ..., |N_1|} × ··· × {1, ..., |N_k|} → R, which is the kind of worth functions studied in [15, 1]. Indeed, this trivially follows by the result below.

Proposition 3.1 ([14]). Let ⟨(N_1, ..., N_k), v⟩ be a k-typed game. Given any two coalitions S, T ⊆ N_1 ∪ ··· ∪ N_k, if |S ∩ N_i| = |T ∩ N_i|, for each i ∈ {1, ..., k}, then v(S) = v(T).

In this paper, for notational uniformity, we prefer to use "standard" worth functions, and to exploit instead a subset of all possible coalitions spanning v: Assume that an arbitrary ordering of players in N is fixed, and define the characteristic-coalitions set D_G ⊆ 2^N as the set of coalitions {(P_1 ∪ P_2 ∪ ··· ∪ P_k) ⊆ N | S ⊆ N, and P_i contains the first |S ∩ N_i| players from set N_i, 1 ≤ i ≤ k}. Note that the size of D_G is polynomial w.r.t. the size of G, as it contains at most |N_1| × |N_2| × ··· × |N_k| coalitions.

On the class Ck(FP), if games are given in type-based form, IN-CORE and CORE-NONEMPTINESS are in P [15, 1].
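The characteristic-coalitions set D_G can be enumerated directly from the type partition; the following sketch (ours, not from the paper) yields one representative coalition per vector of per-type counts, so polynomially many in total. The example partition is hypothetical:

```python
from itertools import product

def characteristic_coalitions(type_partition):
    """type_partition: list of player lists (N_1, ..., N_k), each in a fixed
    order. Yields one representative coalition per count vector
    (t_1, ..., t_k), taking the first t_i players of each N_i."""
    ranges = [range(len(N_i) + 1) for N_i in type_partition]
    for counts in product(*ranges):
        yield frozenset(p for N_i, t in zip(type_partition, counts)
                        for p in N_i[:t])

# Example: 5 players in two types; one representative coalition per count
# vector, instead of all 2^5 = 32 subsets.
parts = [["a1", "a2"], ["b1", "b2", "b3"]]
for S in characteristic_coalitions(parts):
    print(sorted(S))
```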
In the rest of the section, we extend the analysis to other relevant solution concepts.

3.1 Nucleolus

We start the analysis with the nucleolus. In this case, it is relevant to characterize the "structure" of this solution concept over k-typed coalitional games. The following result shows that the nucleolus is in fact "symmetric" w.r.t. player types.

Theorem 3.2. Let G = ⟨N, v⟩ be a coalitional game, and let x be the unique imputation in N(G). Then, x_i = x_j holds, for each pair of players i and j in N having the same type.

Proof. Assume by contradiction that there are two players i and j in N having the same type and such that x_i ≠ x_j (in particular, w.l.o.g., such that x_i > x_j). We claim that {x} ≠ N(G).

Let x′ be the worth assignment where the values assigned to i and j are swapped, that is, such that x′_i = x_j, x′_j = x_i, and x′_p = x_p, for each p ∈ N \ {i, j}. Note that, for any coalition S such that S ∩ {i, j} = ∅ or {i, j} ⊆ S, the total worth does not change, and hence e(S, x) = e(S, x′). It remains to consider all pairs of symmetric coalitions T, T′ such that i ∈ T and j ∉ T, i ∉ T′ and j ∈ T′, and with all other elements being the same, i.e., T \ {i, j} = T′ \ {i, j}. Note that for each p ∈ T ∩ T′, x_p = x′_p, and that v(T) = v(T′) as i and j have the same type. It follows that, for every such pair of coalitions, e(T′, x′) = e(T, x) and e(T, x′) = e(T′, x); that is, their excesses are just swapped. Therefore, the vector of excesses does not change when considering x′ in place of x, and we get θ(x) = θ(x′), which is impossible because |N(G)| = 1.

For the sake of completeness, note that the converse of Theorem 3.2 does not hold. For instance, on the game G0 = ⟨{a, b, c}, v0⟩ such that v0({a}) = v0({b}) = v0({c}) = 1, v0({a, b, c}) = 3, v0({a, b}) = 1, v0({a, c}) = 2, and v0({b, c}) = 3, the vector x with x_a = x_b = x_c = 1 is the only imputation and hence belongs to N(G0), but the three players have different types.

Computation. With the above result in place, let us focus on the problem of computing the nucleolus. Let G = ⟨N, v⟩ be a game, and consider the following linear programming problem LP_t, for t > 0:

LP_t = { min ε | x(S) = v(S) − ε_r, ∀S ∈ Λ_r, ∀1 ≤ r ≤ t−1;
               x(S) ≥ v(S) − ε, ∀S ⊆ N;
               x ∈ Ω },

where Ω is a convex subset of R^N; ε_r is the optimum value of the program LP_r evaluated at the r-th step; and Λ_r = {S ⊆ N | x(S) = v(S) − ε_r, ∀x ∈ V_r}, with V_r = {x | (x, ε_r) is an optimal solution to LP_r}, is the set of all coalitions having exactly excess ε_r on all the optimal solutions of the program LP_r.

By [11] (see also [6]), it is known that there is an index t* such that LP_{t*} has exactly one optimal solution (x*, ε_{t*}), and θ(x*) ≺ θ(x) holds, for any x ∈ Ω. In particular, {x*} = N(G), whenever Ω is the set X(G) of all imputations for G. Moreover, it is known that the approach, with an adjustment discussed in [7], provides an FΔ^P_2 membership result for computing the nucleolus on games in C(FP). A corresponding Δ^P_2-hardness result is obtained even for the underlying decision problem IN-NUCLEOLUS on graphical games [7]. Below, we show that the problem is no longer intractable on the class Ck(FP), if player types are known.

Theorem 3.3. On the class Ck(FP), if games are given in type-based form, then NUCLEOLUS-COMPUTATION is in FP.

Proof Sketch.
Let G = ⟨(N_1, ..., N_k), v⟩ be a k-typed coalitional game, and consider the convex set X̂(G) = {x ∈ X(G) | x_i = x_j, for each pair i, j of players having the same type}. By Theorem 3.2, N(G) ⊆ X̂(G), and thus N(G) can be computed by the above sequence of linear programs by setting Ω = X̂(G) ⊆ X(G) (see Lemma 6.5 in [11]). In fact, having restricted the feasible regions of these programs to X̂(G), it follows that every inequality associated with some coalition S entails every other inequality obtained by replacing any variable x_i (associated with a player) of a certain type by any other variable x_j (associated with a player) of the same type. As a consequence, it is sufficient to consider only inequalities associated with the coalitions in the characteristic set D_G, in place of all subsets of N.

Thus, in order to compute the nucleolus of G, instead of using LP_t, we build the following sequence of linear programming problems:

L̂P_t = { min ε | x(S) = v(S) − ε_r, ∀S ∈ Λ_r, ∀1 ≤ r ≤ t−1;
               x(S) ≥ v(S) − ε, ∀S ∈ D_G;
               x ∈ X̂(G) },

where ε_r is the optimum value of the program L̂P_r evaluated at the r-th step, and Λ_r = {S ∈ D_G | x(S) = v(S) − ε_r, ∀x ∈ V_r}, with V_r = {x | (x, ε_r) is an optimal solution to L̂P_r}.

Note that any linear program in the above sequence contains just polynomially many distinct inequalities. We next show that such programs can also be computed and solved in polynomial time.

Let us start with the first program L̂P_1, which consists only of inequalities associated with coalitions in D_G (there are no equalities). Because G is in type-based form, all these inequalities may be computed in polynomial time by iterating over all possible combinations of numbers of players per type. Thus, by standard results in mathematical programming [13], the optimum value ε_1 of L̂P_1 can be computed in polynomial time.

Then, in order to build L̂P_2, we have to build the set Λ_1 (the set of all coalitions from D_G having exactly excess ε_1 on the optimal solutions of L̂P_1). Note that a coalition S̄ belongs to Λ_1 if and only if the set {x ∈ X̂(G) | x(S) ≥ v(S) − ε_1, ∀S ∈ D_G, and x(S̄) > v(S̄) − ε_1} is empty, and this condition can be checked in polynomial time. Thus, L̂P_2 can be built in polynomial time.

Eventually, we can inductively apply the method above to construct L̂P_t, for each t > 0. Concerning the number of iterations, note that, at each step t, at least one coalition from D_G enters in Λ_t. Thus, after at most polynomially many steps the process converges to the nucleolus, as the size of D_G is polynomial w.r.t. the size of G.

As the nucleolus is a singleton set, we immediately obtain the following corollary.

Corollary 3.4. On the class Ck(FP), if games are given in type-based form, then IN-NUCLEOLUS is in P.
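To illustrate the first step of this scheme, here is a simplified sketch (ours, not the paper's implementation; it assumes scipy and a toy 2-typed worth function) that solves L̂P_1 with one payoff variable per type, which is exactly the restriction to X̂(G), and one inequality per count vector of a characteristic coalition:

```python
from itertools import product
from scipy.optimize import linprog

n = (2, 3)                       # hypothetical 2-typed game: players per type
def v(t):                        # worth depends only on the count vector
    return 2 * min(t[0], t[1])   # toy worth function, made up for the demo

# Variables (y_1, y_2, eps): one payoff per type, minimizing eps.
c = [0.0, 0.0, 1.0]
A_ub, b_ub = [], []
for t in product(*(range(m + 1) for m in n)):
    if sum(t) == 0:
        continue
    A_ub.append([-t[0], -t[1], -1.0])   # x(S) >= v(S) - eps, S with counts t
    b_ub.append(-v(t))
A_ub.append([-1.0, 0.0, 0.0]); b_ub.append(-v((1, 0)))  # individual
A_ub.append([0.0, -1.0, 0.0]); b_ub.append(-v((0, 1)))  # rationality
A_eq, b_eq = [[n[0], n[1], 0.0]], [v(n)]                 # efficiency
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 3)
print("eps_1 =", res.x[2], "per-type payoffs:", res.x[:2])
```

Subsequent programs L̂P_t would then fix the tight coalitions of Λ_{t−1} as equalities, as described in the proof sketch above.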
3.2 Kernel

Theorem 3.5. On the class Ck(FP), if games are given in type-based form, then IN-KERNEL is in P.

Proof Sketch. Recall the definition of the kernel. Notice that we have to verify the condition s_{i,j}(x) > s_{j,i}(x) ⇒ x_j = v({j}), for all distinct players i and j of N. Thus, if computing the surplus s_{i,j}(x) is feasible in polynomial time, then the whole procedure can be carried out in polynomial time. We claim that, in fact, this is the case.

Let G = ⟨(N_1, ..., N_k), v⟩ be a k-typed game, and recall that s_{i,j}(x) = max_{S∈I_{i,j}} e(S, x), where e(S, x) = v(S) − x(S). Let n_1, ..., n_k be the number of players in N_1, ..., N_k, respectively. So, we can rewrite the surplus as follows:

s_{i,j}(x) = max_{(t_1,...,t_k) : 0 ≤ t_p ≤ n_p, ∀p: 1 ≤ p ≤ k, s.t. t_1 + ··· + t_k ≥ 1}  max_{S ∈ I_{i,j} : |S ∩ N_q| = t_q, ∀q: 1 ≤ q ≤ k}  ( v(S) − x(S) ).

Because of Proposition 3.1, note that v(S) = v(T) holds, for each pair of coalitions S and T such that |S ∩ N_i| = |T ∩ N_i| = t_i, for each i ∈ {1, ..., k}. However, the imputation x might be such that {(v(S) − x(S)) | S ⊆ N_1 ∪ ··· ∪ N_k} contains exponentially many distinct values, as x is not necessarily a symmetric one. This problem can be circumvented by exploiting the clustering of the players into their types. Indeed, for each cluster N_i, we sort its players based on the ascending values of the worth they receive in x. Hence, we can compute the term

max_{S ∈ I_{i,j} : |S ∩ N_q| = t_q, ∀q: 1 ≤ q ≤ k}  ( v(S) − x(S) )

by simply evaluating v(S) − x(S) on the specific coalition S containing, for each cluster N_i, the first t_i players w.r.t. such order, by always including i and excluding j. By this, the first maximization requires iterating over polynomially many elements, and for each of them the above polynomial-time method can be exploited to compute the value of the subsequent term. Thus, the whole procedure can be carried out in polynomial time in the number of players.
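A direct rendering (ours) of this surplus computation; the type partition, payoffs, and worth function below are hypothetical stand-ins:

```python
from itertools import product

def surplus(i, j, x, parts, v):
    """s_{i,j}(x) for a typed game, following the proof of Theorem 3.5.
    parts: list of player lists (the types N_1..N_k); x: player -> payoff;
    v: worth as a function of the count vector (Proposition 3.1)."""
    ti = next(q for q, P in enumerate(parts) if i in P)
    best = None
    for t in product(*(range(len(P) + 1) for P in parts)):
        if sum(t) == 0:
            continue
        S, ok = [], True
        for q, P in enumerate(parts):
            # Per type, take the cheapest players w.r.t. x, forcing i in, j out.
            cand = sorted((p for p in P if p not in (i, j)), key=x.get)
            need = t[q] - (1 if q == ti else 0)
            if need < 0 or need > len(cand):
                ok = False   # count vector incompatible with "i in, j out"
                break
            S.extend(([i] if q == ti else []) + cand[:need])
        if ok:
            val = v(t) - sum(x[p] for p in S)
            best = val if best is None else max(best, val)
    return best

parts = [["a1", "a2"], ["b1", "b2", "b3"]]
x = {"a1": 1.0, "a2": 0.5, "b1": 0.7, "b2": 0.7, "b3": 0.6}
v = lambda t: 2 * min(t[0], t[1])   # hypothetical worth function
print(surplus("a1", "b1", x, parts, v))
```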
3.3 Specific Classes of Compact Games

We conclude the section by noticing that, as a corollary of the above general results, we can get the tractability of well-known classes of games whose worth functions are computable in FP, and for which determining player types is feasible in polynomial time. Recall that, for any fixed k, a k-typed graph game or game with synergies among coalitions can be represented in type-based form (i.e., the clustering of its players can be found) in polynomial time [14]. In fact, given the type-based form for such kinds of games, IN-CORE and CORE-NONEMPTINESS can be solved in polynomial time, too [15, 1]. Below, we complete the picture with the other solution concepts.

Corollary 3.6. For any fixed k, on k-typed games given as graph games or games with synergies among coalitions, IN-KERNEL, IN-NUCLEOLUS, and NUCLEOLUS-COMPUTATION are in P.

4 On The Hardness of Finding Player Types

In [14], it has been observed that deciding whether two players have the same type in games with synergies among coalitions [4] is an NP-hard problem; as discussed above, the problem is instead tractable if the number of agent types is fixed by a constant k. In fact, this NP-hardness result is hardly surprising given that such games are unlikely FP-games, as the associated worth function is NP-hard to compute [4]. Hence, the intrinsic difficulty of the worth function actually obscures the complexity of the problem defined on top of it. Our first result is to strengthen this analysis, by showing that the problem remains intractable even on FP-games. In particular, we shall show that the problem is complete for the class co-NP.

Before stating the result, we fix some definitions that will be used in the following. For any Boolean formula φ over a set X of variables, we define the FP-game G_φ = ⟨X, v_φ⟩, whose players coincide with the variables in φ, and where, for each coalition S ⊆ X,

v_φ(S) = 1 if σ(S) ⊨ φ, i.e., σ(S) is a satisfying assignment, and v_φ(S) = 0 otherwise,

with σ(S) denoting the truth assignment where a variable x_i evaluates to true if and only if the corresponding player x_i belongs to S.

Moreover, consider the following problem Critical Swap (CS): Given a tuple ⟨φ, x_i, x_j⟩, where φ is a Boolean formula over a set X of variables and {x_i, x_j} ⊆ X, decide whether {x_i, x_j} is a critical pair (w.r.t. φ), i.e., decide whether there is a satisfying truth assignment σ̄ such that: (1) σ̄[x_i] ≠ σ̄[x_j] and (2) the assignment σ′, where σ′[x_k] = σ̄[x_k], for each x_k ∈ X \ {x_i, x_j}, σ′[x_i] = σ̄[x_j], and σ′[x_j] = σ̄[x_i], is not satisfying. It is easy to see that CS is NP-hard, by a reduction from SAT: For any Boolean formula γ, let φ = γ ∧ x_a ∧ ¬x_b be a new Boolean formula where x_a and x_b are fresh variables (i.e., not in γ). It is immediate to check that γ is satisfiable if and only if ⟨φ, x_a, x_b⟩ is a "yes" instance of CS.

Theorem 4.1. On the class C(FP), deciding whether two players have the same type is co-NP-complete.

Proof Sketch. Consider the complementary problem of deciding whether two players p and q do not have the same type. We show that the problem is NP-complete. Membership in NP is easily seen, as we can guess a coalition S with S ∩ {p, q} = ∅, and then check in polynomial time that v(S ∪ {p}) ≠ v(S ∪ {q}).

Hardness is next proven via a reduction from problem CS. Let φ be a Boolean formula over a set X of variables with {x_i, x_j} ⊆ X, and let us build in polynomial time the game G_φ = ⟨X, v_φ⟩. We show that ⟨φ, x_i, x_j⟩ is a "yes" instance of CS ⇔ x_i and x_j do not have the same type in G_φ.

(⇒) Let σ̄ be an assignment witnessing that ⟨φ, x_i, x_j⟩ is a "yes" instance. Assume, w.l.o.g., that σ̄[x_i] = true and σ̄[x_j] = false. Let S ⊆ X be the coalition such that σ(S) = σ̄, and note that x_i ∈ S and x_j ∉ S. Consider the coalition T = S \ {x_i}, hence such that σ(T ∪ {x_i}) ⊨ φ. By definition of a solution to CS, σ(T ∪ {x_j}) ⊭ φ. It follows that v_φ(T ∪ {x_i}) = 1 while v_φ(T ∪ {x_j}) = 0. Thus, x_i and x_j do not have the same type.

(⇐) Assume that ⟨φ, x_i, x_j⟩ is a "no" instance. We consider two cases. (1) φ is unsatisfiable. In this case, v_φ(S) = 0 holds, for each coalition S ⊆ X, and x_i and x_j trivially have the same type. (2) φ is satisfiable. In this case, for each set T ⊆ X \ {x_i, x_j}, we have that either σ(T ∪ {x_i}) ⊭ φ and σ(T ∪ {x_j}) ⊭ φ, or σ(T ∪ {x_i}) ⊨ φ and σ(T ∪ {x_j}) ⊨ φ. Hence, v_φ(T ∪ {x_i}) = v_φ(T ∪ {x_j}) holds, and x_i and x_j have the same type.
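A small executable sketch (ours) of the game G_φ and of the SAT-to-CS construction; the formula γ below is an arbitrary example:

```python
from itertools import chain, combinations

def make_game(phi, X):
    """G_phi: players are the variables; v(S) = 1 iff the assignment that
    sets exactly the variables in S to true satisfies phi."""
    def v(S):
        return 1 if phi({x: (x in S) for x in X}) else 0
    return v

def same_type(v, X, xi, xj):
    rest = [x for x in X if x not in (xi, xj)]
    Ts = chain.from_iterable(combinations(rest, r) for r in range(len(rest) + 1))
    return all(v(set(T) | {xi}) == v(set(T) | {xj}) for T in Ts)

# SAT-to-CS reduction: phi = gamma AND x_a AND (NOT x_b), x_a/x_b fresh.
gamma = lambda a: a["x1"] or a["x2"]              # example formula gamma
phi = lambda a: gamma(a) and a["xa"] and not a["xb"]
X = ["x1", "x2", "xa", "xb"]
v = make_game(phi, X)
# gamma is satisfiable, so <phi, xa, xb> is a "yes" CS instance and the
# fresh players xa and xb must have different types in G_phi:
print(same_type(v, X, "xa", "xb"))  # False
```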
The above is very bad news, but it does not immediately imply that determining whether the number of player types is bounded by some given constant is an intractable problem. Our second result is to characterize the complexity of this problem.

Theorem 4.2. On the class C(FP), deciding whether a game is k-typed is a co-NP-complete problem. Hardness holds even for k = 1.

Proof Sketch. We show that deciding whether there are at least k′ = k + 1 player types is NP-complete. For the membership, it suffices to guess a set P of k′ players together with k′(k′−1)/2 coalitions, and then check in polynomial time that such coalitions witness that players in P are pairwise not strategically equivalent.

For the hardness part, consider the problem Exists Critical Swap (ECS), in which, given a Boolean formula φ over a set X of variables, we have to decide whether there exists a critical pair {x_i, x_j} w.r.t. φ. It is easily seen that ECS is NP-hard. Indeed, for any Boolean formula γ, let φ = γ ∧ x_a ∧ ¬x_b be a new Boolean formula where x_a and x_b are fresh variables. Then, γ is satisfiable if and only if ⟨φ⟩ is a "yes" instance of ECS.

Our result then follows by showing that: φ is a "yes" instance of ECS ⇔ G_φ has at least two players with different types (hence k > 1).

(⇒) Assume that x_i and x_j are two variables in X such that ⟨φ, x_i, x_j⟩ is a "yes" instance of CS. By the same line of reasoning as in the proof of Theorem 4.1, we have that x_i and x_j are not strategically equivalent, and hence in G_φ there are at least 2 different types of players.

(⇐) Assume now that, for each pair of variables x_i and x_j of φ, the tuple ⟨φ, x_i, x_j⟩ is a "no" instance of CS. In the case where φ is unsatisfiable, v_φ(S) = 0 holds, for each coalition S. Hence, all players in G_φ have the same type. Consider then the case where φ is satisfiable, but there is no critical pair {x_i, x_j} w.r.t. φ. In this latter case, for any chosen pair x_i and x_j, we can apply the same line of reasoning as in the proof of Theorem 4.1 (case (2) of the (⇐)-part), and conclude that x_i and x_j are strategically equivalent. As this holds for each pair of players, we have that all players have the same type.

5 Shedding Light on The Grey Area

So far, we have shown tractability results for the class Ck(FP) where games are given in type-based form, and we have pointed out that deciding whether a game is actually in Ck(FP) is an intractable problem. Our analysis thus still has a missing piece: what happens if a game is known to belong to Ck(FP), but it is not given in type-based form (i.e., with player types being actually unknown)? In this section, the question will be addressed.

5.1 On the Hardness of Bounded-Types Games

Our first result is to show that identifying player types is likely intractable even on the class Ck(FP) of games that actually have such a bounded number of types. The proofs of the intractability results are based here on a complexity-theory setting developed to study problems that are believed to be difficult but could not be classified using the most common reductions (i.e., Karp or Turing reductions).

Consider the problem SAT1, where we have to decide the satisfiability of a Boolean formula φ, under the promise that φ admits at most one satisfying assignment. This is the prototypical NP-hard problem under randomized reductions [16]. It is widely believed that such problems are not feasible in polynomial time. For our aims here, it is not necessary to expand on the concept of randomized reductions, and we refer the interested reader, for instance, to [10]. Indeed, the promise of dealing with a fixed number of player types is next related to SAT1 via "standard" reductions from this problem, in order to prove the analogue of Theorem 4.1 and Theorem 4.2 for classes of games having bounded types.

Theorem 5.1.
On the class Ck(FP), deciding whether a game is k′-typed, for any constant k′ with k′ < k, and whether two players have the same type are co-NP-complete under randomized reductions. Hardness holds even for k = 2.

Proof Sketch. The membership results in co-NP follow from Theorem 4.1 and Theorem 4.2. Concerning the hardness part, we exhibit a polynomial-time reduction from SAT1. Let φ′ be a Boolean formula over the set X′ of variables having at most one satisfying assignment, and define φ = φ′ ∧ x_α ∧ ¬x_β as a Boolean formula over the set X = X′ ∪ {x_α, x_β}. Note that φ has at most one satisfying assignment, where in particular x_α (resp., x_β) evaluates to true (resp., false). Consider the associated game G_φ, and observe that if φ is unsatisfiable, then v_φ(S) = 0 holds, for each coalition S ⊆ X. Thus, in this case, there is only one type of players, and G_φ is 1-typed.

Assume now that σ̃ is the satisfying truth assignment for φ. Let S̃ be the coalition such that σ(S̃) = σ̃, and let x_i and x_j be two arbitrary players. Then, two cases have to be considered:

(1) Assume that x_i and x_j are two players such that x_i ∈ S̃ and x_j ∉ S̃. Consider the coalition T̃ = S̃ \ {x_i}, and note that v_φ(T̃ ∪ {x_i}) = 1 and v_φ(T̃ ∪ {x_j}) = 0. Hence, x_i and x_j have two different types.

(2) Assume that either {x_i, x_j} ⊆ S̃ or {x_i, x_j} ∩ S̃ = ∅. Let T be any coalition such that {x_i, x_j} ∩ T = ∅. We claim that v_φ(T ∪ {x_i}) = 0 and v_φ(T ∪ {x_j}) = 0 hold. Indeed, first observe that T ∪ {x_i} ≠ S̃ and T ∪ {x_j} ≠ S̃. Then, the claim follows by just noticing that S̃ is the one coalition for which v_φ(S̃) = 1. Hence, in this case, x_i and x_j have the same type.

By combining the above two cases, we have that the players of G_φ can be partitioned into exactly two different strategic types: players in S̃, and players outside S̃. Therefore, G_φ is 2-typed, but it is not 1-typed. It follows that G_φ is 1-typed if and only if φ (and, hence, the original formula φ′) is unsatisfiable. This shows that deciding whether a game is 1-typed is co-NP-complete under randomized reductions.

Finally, in order to show that deciding whether two players have the same type is co-NP-complete under randomized reductions, it suffices to observe that x_α and x_β have the same type if and only if φ (and, hence, φ′) is unsatisfiable.

5.2 Complexity of Solution Concepts

Now, we turn to the analysis of the complexity of solution concepts. Figure 1 reports the intrinsic difficulty of various reasoning problems involving the core, the kernel, and the nucleolus on the class C(FP). All results are intractability ones. Here, we complete the picture, by showing that focusing on the class Ck(FP) does not guarantee their tractability. We start with the problems related to the core.

Theorem 5.2. On the class Ck(FP), the problems IN-CORE and CORE-NONEMPTINESS are co-NP-complete under randomized reductions. Hardness holds even for k = 2.

Proof Sketch. For both problems, membership in co-NP follows by the results for the larger class C(FP) (see Figure 1).
For the hardness of IN-CORE, consider the reduction in the proof of Theorem 5.1 based on the Boolean formulae φ′ over variables in X′, and φ over X = X′ ∪ {x_α, x_β}. Let z be a vector mapping each player to 0, and note that z(X) = v_φ(X) = 0 (recall here that in order to have v_φ(S) > 0, it is required that x_β ∉ S). Then, z ∈ C(G_φ) if and only if for each S ⊆ X, v_φ(S) = 0. By definition of the worth function, this latter holds if and only if φ (hence φ′) is not satisfiable. From this observation, we easily get the result for CORE-NONEMPTINESS, too. Indeed, just recall that v_φ(X) = 0 holds for the grand-coalition X, and hence the above vector z is the only one that might in principle belong to X(G_φ) (as all worth values are non-negative). Thus, C(G_φ) ≠ ∅ if and only if z ∈ C(G_φ), which completes the proof.

Note that, in the proof, we can even assume w.l.o.g. that φ′ is such that v_φ(S) = 0, for each S with |S| = 1, thereby showing that hardness holds even if z is guaranteed to be an imputation.

We now continue with the decision problems related to the nucleolus and the kernel. Note that in the results below, the corresponding membership results are missing.

Theorem 5.3. On the class Ck(FP), IN-KERNEL and IN-NUCLEOLUS are co-NP-hard under randomized reductions. Hardness holds even for k = 2.

Proof Sketch. Consider again the reduction in the proof of Theorem 5.1 based on the Boolean formulae φ′ over variables in X′ and φ over X = X′ ∪ {x_α, x_β}. Define a new game Ḡ_φ = ⟨X, v̄_φ⟩ where v̄_φ(X) = 1 and v̄_φ(S) = v_φ(S), for each S ⊂ X. Let z be the imputation assigning the worth 1/|X| to each player.

First, we claim that z ∈ K(Ḡ_φ) holds if and only if φ is not satisfiable. Indeed, if φ is not satisfiable, then v̄_φ(S) = 0, for each S ⊂ X. Hence, for each pair of players x_p and x_q, s_{x_p,x_q}(z) = s_{x_q,x_p}(z) = −1/|X|, and hence z is in K(Ḡ_φ). On the other hand, if φ is satisfiable, then there is a coalition S (with x_α ∈ S and x_β ∉ S) such that v̄_φ(S) = 1. Thus, we have that s_{x_α,x_β}(z) = 1 − 1/|X| > s_{x_β,x_α}(z) = −1/|X|. However, v̄_φ({x_β}) = 0 ≠ 1/|X|, and hence z ∉ K(Ḡ_φ).

We complete the picture by claiming that z ∈ N(Ḡ_φ) holds if and only if φ is not satisfiable. Indeed, if φ is not satisfiable, then v̄_φ(S) = 0, for each S ⊂ X, and it can be easily checked that symmetrically distributing the worth v̄_φ(X) over all players leads to the nucleolus. Instead, if φ is satisfiable, then there is a coalition S (with x_α ∈ S and x_β ∉ S) such that v̄_φ(S) = 1. Consider the imputation z′ where each player in S (resp., outside S) gets worth 1/|S| (resp., 0). Then, θ(z′) ≺ θ(z), and hence z ∉ N(Ḡ_φ).

A simple corollary of the complexity of IN-NUCLEOLUS is the following characterization for the computation problem.

Corollary 5.4. On the class Ck(FP), NUCLEOLUS-COMPUTATION is NP-hard under randomized reductions, even for k = 2.

ACKNOWLEDGEMENTS

Enrico Malizia's work was supported by the European Commission through the European Social Fund and by Calabria Region.
REFERENCES

[1] K. Aadithya, T. Michalak, and N. Jennings, ‘Representation of coalitional games with algebraic decision diagrams’, UCB/EECS-2011-8, Department of Electrical Engineering and Computer Sciences, The University of California at Berkeley, 2011.
[2] G. Chalkiadakis, E. Elkind, and M. Wooldridge, Computational Aspects of Cooperative Game Theory, Morgan & Claypool Publishers, 2011.
[3] V. Conitzer and T. Sandholm, ‘Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains’, in Proc. of AAAI'04, pp. 219–225.
[4] V. Conitzer and T. Sandholm, ‘Complexity of constructing solutions in the core based on synergies among coalitions’, Artificial Intelligence, 170(6–7), 607–619, 2006.
[5] X. Deng and C. H. Papadimitriou, ‘On the complexity of cooperative solution concepts’, Mathematics of Operations Research, 19(2), 257–266, 1994.
[6] D. Granot, F. Granot, and W. R. Zhu, ‘Characterization sets for the nucleolus’, International Journal of Game Theory, 27(3), 359–374, 1998.
[7] G. Greco, E. Malizia, L. Palopoli, and F. Scarcello, ‘On the complexity of compact coalitional games’, in Proc. of IJCAI-09, pp. 147–152.
[8] G. Greco, E. Malizia, L. Palopoli, and F. Scarcello, ‘On the complexity of core, kernel, and bargaining set’, Artificial Intelligence, 175(12–13), 1877–1910, 2011.
[9] S. Ieong and Y. Shoham, ‘Marginal contribution nets: a compact representation scheme for coalitional games’, in Proc. of EC'05, pp. 193–202.
[10] M. Mahmoody and D. Xiao, ‘On the power of randomized reductions and the checkability of SAT’, in Proc. of CCC'10, pp. 64–75.
[11] M. Maschler, B. Peleg, and L. S. Shapley, ‘Geometric properties of the kernel, nucleolus, and related solution concepts’, Mathematics of Operations Research, 4(4), 303–338, 1979.
[12] M. J. Osborne and A. Rubinstein, A Course in Game Theory, The MIT Press, 1994.
[13] C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Dover Publications, 2nd edn., 1998.
[14] T. Shrot, Y. Aumann, and S. Kraus, ‘On agent types in coalition formation problems’, in Proc. of AAMAS 2010, pp. 757–764.
[15] S. Ueda, M. Kitaki, A. Iwasaki, and M. Yokoo, ‘Concise characteristic function representations in coalitional games based on agent types’, in Proc. of IJCAI-11, pp. 393–399.
[16] L. G. Valiant and V. V. Vazirani, ‘NP is as easy as detecting unique solutions’, Theoretical Computer Science, 47, 85–93, 1986.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "t5DyJCLsS_",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200313",
"forum_link": "https://openreview.net/forum?id=t5DyJCLsS_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "CryptoSPN: Privacy-Preserving Sum-Product Network Inference",
"authors": [
"Amos Treiber",
"Alejandro Molina",
"Christian Weinert",
"Thomas Schneider",
"Kristian Kersting"
],
"abstract": "AI algorithms, and machine learning (ML) techniques in particular, are increasingly important to individuals’ lives, but have caused a range of privacy concerns addressed by, e.g., the European GDPR. Using cryptographic techniques, it is possible to perform inference tasks remotely on sensitive client data in a privacy-preserving way: the server learns nothing about the input data and the model predictions, while the client learns nothing about the ML model (which is often considered intellectual property and might contain traces of sensitive data). While such privacy-preserving solutions are relatively efficient, they are mostly targeted at neural networks, can degrade the predictive accuracy, and usually reveal the network’s topology. Furthermore, existing solutions are not readily accessible to ML experts, as prototype implementations are not well-integrated into ML frameworks and require extensive cryptographic knowledge. In this paper, we present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs). SPNs are a tractable probabilistic graphical model that allows a range of exact inference queries in linear time. Specifically, we show how to efficiently perform SPN inference via secure multi-party computation (SMPC) without accuracy degradation while hiding sensitive client and training information with provable security guarantees. Next to foundations, CryptoSPN encompasses tools to easily transform existing SPNs into privacy-preserving executables. Our empirical results demonstrate that CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.",
"keywords": [],
"raw_extracted_content": "CryptoSPN: Privacy-Preserving Sum-Product Network\nInference\nAmos Treiber1and Alejandro Molina2and Christian Weinert1and\nThomas Schneider1and Kristian Kersting2\nAbstract. AI algorithms, and machine learning (ML) techniques in\nparticular, are increasingly important to individuals’ lives, but have\ncaused a range of privacy concerns addressed by, e.g., the EuropeanGDPR. Using cryptographic techniques, it is possible to perform in-ference tasks remotely on sensitive client data in a privacy-preservingway: the server learns nothing about the input data and the model pre-dictions, while the client learns nothing about the ML model (whichis often considered intellectual property and might contain tracesof sensitive data). While such privacy-preserving solutions are rel-atively efficient, they are mostly targeted at neural networks, candegrade the predictive accuracy, and usually reveal the network’stopology. Furthermore, existing solutions are not readily accessibleto ML experts, as prototype implementations are not well-integratedinto ML frameworks and require extensive cryptographic knowledge.\nIn this paper, we present CryptoSPN, a framework for privacy-\npreserving inference of sum-product networks (SPNs). SPNs are atractable probabilistic graphical model that allows a range of ex-act inference queries in linear time. Specifically, we show how toefficiently perform SPN inference via secure multi-party computa-tion (SMPC) without accuracy degradation while hiding sensitiveclient and training information with provable security guarantees.Next to foundations, CryptoSPN encompasses tools to easily trans-form existing SPNs into privacy-preserving executables. Our empiri-cal results demonstrate that CryptoSPN achieves highly efficient andaccurate inference in the order of seconds for medium-sized SPNs.\n1 INTRODUCTION\nIn our increasingly connected world, the abundance of user informa-tion and availability of data analysis techniques originating from ar-tificial intelligence (AI) research has brought machine learning (ML)techniques into daily life. While these techniques are already de-ployed in many applications like credit scoring, medical diagnosis,biometric verification, recommender systems, fraud detection, andlanguage processing, emerging technologies such as self-driving carswill further increase their popularity.\nPrivacy Concerns for ML Applications. These examples show\nthat progress in AI research certainly has improved user experience,potentially even saving lives when employed for medical or safetypurposes. However, as the prevalent usage in modern applications\n1Cryptography and Privacy Engineering Group, TU Darmstadt, Germany,\nemail addresses: {treiber,weinert,schneider}@encrypto.cs.tu-darmstadt.de\n2Artificial Intelligence and Machine Learning Lab, TU Darmstadt, Germany,\nemail addresses: {molina,kersting}@cs.tu-darmstadt.deoften requires the processing of massive amounts of sensitive infor-mation, the impact on user privacy has come into the public spotlight.\nThis culminated in privacy regulations such as the European Gen-\neral Data Protection Regulation (GDPR), which came into effectin 2018. 
Not only does the GDPR provide certain requirements to protect user data, it also includes restrictions on decisions based on user data, which may be interpreted as a "right to explanation" [17]. Luckily, new deep probabilistic models that encode the joint distribution, such as sum-product networks (SPNs) [36], can indicate whether the model is fit to predict the data at hand, or raise a warning otherwise. This increases trust as they "know when they do not know". Moreover, SPNs can also perform inference with missing information [35], an important aspect for real-life applications.

Generally, probabilistic graphical models [24] provide a framework for understanding what inference and learning are, and have therefore emerged as one of the principal theoretical and practical approaches to ML and AI [14]. However, one of the main challenges in probabilistic modeling is the trade-off between the expressivity of the models and the complexity of performing various types of inference, as well as learning them from data. This inherent trade-off is clearly visible in powerful, but intractable, models like Markov random fields, (restricted) Boltzmann machines, (hierarchical) Dirichlet processes, and variational autoencoders. Despite these models' successes, performing inference on them resorts to approximate routines. Moreover, learning such models from data is generally harder as inference is a sub-routine of learning, requiring simplified assumptions or further approximations. Having guarantees on tractability at inference and learning time is therefore a highly desired property in many real-world scenarios.

Tractable graphical models such as SPNs guarantee exactly this: performing exact inference for a range of queries. They compile probabilistic inference routines into efficient computational graphs similar to deep neural networks, but encode a joint probability distribution. As a result, they can not only be used for one ML task, but support many different tasks by design, ranging from outlier detection (joint distribution) to classification or regression (conditional inference). They have been successfully used in numerous real-world applications such as image classification, completion and generation, scene understanding, activity recognition, and language and speech modeling. Despite these successes, it is unclear how one can develop an SPN framework that is GDPR-friendly.

As a naive solution, SPN tasks can be performed only on client devices to ensure that no sensitive information is handed out, but this requires the service provider to ship a trained model to clients, thereby giving up valuable intellectual property and potentially leaking sensitive data as such models often contain traces of sensitive training data, e.g., due to unintended memorization [7].

Figure 1. Conventional MLaaS (top) vs. private ML inference (bottom) on client feature vector X using the server's model P. In private ML inference, the input of both client and server is protected with a cryptographic SMPC protocol: the parties learn only encrypted values X̃ and P̃, respectively. (Diagram omitted; it contrasts the client sending X in the clear and receiving P(X) = 0.9 with the SMPC setting, where both parties only see encrypted values.)
Therefore, current "ML as a Service" (MLaaS) applications usually send and hence leak client data to a remote server operated by the service provider to perform inference (cf. top of Figure 1).

Even if the service provider and the remote server are considered trustworthy, the privacy of clients can still be compromised by breaches, hacks, and negligent or malicious insiders. Such incidents occur frequently even at high-profile companies: recently, Microsoft Outlook was hacked [45] and AT&T's customer support was bribed [29]. Thus, it is not enough to protect client data just from outsiders, it must also be hidden from the server to ensure privacy.

Previously, protecting the identity of individuals via anonymization techniques was seen as sufficient when learning on or inferring from data of a collection of users. Such techniques reduce raw data to still enable extraction of knowledge without individuals being identifiable. However, recent works conclude that current de-identification measures are insufficient and unlikely to satisfy GDPR standards [41].

Cryptography for ML Applications. We believe this indicates that cryptographic measures should be employed to satisfy today's privacy demands. The cryptographic literature has actively developed protocols and frameworks for efficient and privacy-preserving ML in the past years. So far, efforts were focused on deep/convolutional neural networks, see [38] for a recent systematization of knowledge. There, usually a scenario is considered where the server holds a model and performs private ML inference on a client's data, with no information except for the inference result being revealed to the client (cf. bottom of Figure 1).

Existing frameworks mostly rely on homomorphic encryption (HE), secure multi-party computation (SMPC), or a combination of both, to enable private inference with various security, resource, and usage properties. As many ML tasks today already require intense computational resources, the overhead incurred by introducing cryptographic privacy mechanisms is substantial. Though a line of prominent frameworks from CryptoNets [15] to XONN [39] has established increased efficiency and relatively low execution times for private inference, research has mainly focused on NNs by looking for efficient ways to securely compute common activation functions, sometimes degrading accuracy by using more efficient approximations. Existing frameworks only possess a low degree of automation and often require very low-level model descriptions, making it hard for non-experts to run private inference using their own models. Additionally, for approaches using SMPC, it is very common that the topology of the NN is leaked, which might reveal some model information to the client.

Our Contributions. In this work, we present foundations and tools for privacy-preserving ML in the unexplored domain of sum-product networks (SPNs). Our framework, which we call CryptoSPN, demonstrates that SPNs can very well be protected with cryptographic measures. Specifically, after presenting the necessary background for private ML and SPNs (Section 2), we show how to efficiently perform private SPN inference using SMPC (Section 3). We combine techniques from both AI and applied cryptography to achieve this.
Contrary to popular SMPC-based approaches for protecting NNs, ours leaks no information from the network topology by using Random Tensorized SPNs (RAT-SPNs) [35]. We implement CryptoSPN using the state-of-the-art SMPC framework ABY [12] and provide an open-source tool that can transform SPN instances from the SPFlow framework [33] into privacy-preserving executables (Section 4). CryptoSPN is easily usable by non-experts and intended to make private ML available to the broader AI community working on a wide range of sophisticated models. In an experimental evaluation (Section 5), we show that CryptoSPN performs private inference in reasonable time while preserving accuracy. With our work, we push private ML beyond NNs and bring attention to the crucial, emerging task of making a variety of ML applications private.

2 BACKGROUND

We start with the necessary background on secure computation, existing privacy-preserving ML solutions, and SPNs.

2.1 Secure Computation (SC) of ML Tasks

First described by [46], the concept of secure computation (SC) lets computational parties (e.g., a client and a server) evaluate arbitrary functions on secret inputs without leaking any information but the results. For example, a server can calculate statistics on client data without learning the raw data, or a group of clients can jointly schedule meetings without revealing their availability. The SC research community has put forth efficient schemes with practical implementations for applications that rely on homomorphic encryption (HE) or secure multi-party computation (SMPC). The former allows computations directly on encrypted data, whereas in SMPC an interactive protocol is executed between parties that, in the end, reveals only the desired output. A general rule of thumb is that SMPC requires more communication, whereas computation is the bottleneck for HE. In this work, we rely on secure two-party computation, i.e., SMPC with two parties: client and server.

2.1.1 Privacy-Preserving Machine Learning

We shortly recapitulate the most influential works for preserving privacy when performing machine learning tasks using SC techniques.

Privacy-preserving neural network inference was first proposed in [34, 42, 3]. Secure classification via hyper-plane decision, naive Bayes, and decision trees was presented in [5]. SecureML [31] provides SMPC-friendly linear regression, logistic regression, and neural network training using SGD as well as secure inference. With CryptoNets [15], the race for the fastest NN-based privacy-preserving image classification began: MiniONN [28], Chameleon [40], Gazelle [20], XONN [39], and DELPHI [30] are only some of the proposed frameworks.
These frameworks mostly offer privacy-preserving deep/convolutional neural network inference based on HE or SMPC protocols, or even combinations of both techniques in different computational and security models. However, they are not readily accessible to ML experts, as prototype implementations are not well-integrated into ML frameworks and require extensive cryptographic knowledge to secure applications. Moreover, these frameworks are often engineered towards delivering outstanding performance for benchmarks with certain standard data sets (e.g., MNIST [26]), but fail to generalize in terms of accuracy and performance. There are some but very few attempts to directly integrate privacy technology into ML frameworks: for TensorFlow there exists rudimentary support for differential privacy [1], HE [44], and SMPC [10], and for Intel's nGraph compiler there exists an HE backend [4]. Very recently, Facebook's AI researchers released CrypTen [18], which provides an SMPC integration with PyTorch. However, currently not much is known about the underlying cryptographic techniques and, therefore, its security guarantees.

Trusted execution environments (TEEs) are an intriguing alternative to cryptographic protocols. They use hardware features to shield sensitive data processing tasks. TEEs are widely available, e.g., via Intel Software Guard Extensions (SGX), and therefore are explored for efficiently performing ML tasks [25]. Unfortunately, Intel SGX provides no provable security guarantees and requires software developers to manually incorporate defenses against software side-channel attacks, which is extremely difficult. Moreover, severe attacks on Intel SGX allowed attackers to extract private data from the TEE, making SGX less secure than cryptographic protocols [8].

2.1.2 SMPC

In SMPC, the function f to be computed securely is represented as a Boolean circuit consisting of XOR and AND gates: each gate is securely computed based on the encrypted outputs of preceding gates, and only the values of the output wires of the entire circuit are decrypted to obtain the overall output. The intermediate results leak no information and only the outputs are decrypted by running a corresponding sub-protocol.

The literature considers two security settings: semi-honest, where the involved parties are assumed to honestly follow the protocol but want to learn additional information about other parties' inputs, and malicious, which even covers active deviations from the protocol. SMPC protocols are usually divided into two phases: a setup phase that can be performed independently of the inputs (e.g., during off-peak hours or at night), and an online phase that can only be executed once the inputs are known. Most of the "expensive" operations can be performed in the setup phase in advance such that the online phase is very efficient. Two prominent SMPC protocols are Yao's garbled circuit (GC) [46] and the GMW protocol [16]. As we heavily rely on floating-point operations with high-depth circuits, we use Yao's GC protocol, which has a constant round complexity (the round complexity of the GMW protocol depends on the circuit depth and, hence, is not suited for our case).

2.1.3 Yao's GC

We present a schematic overview for the secure evaluation of a single gate in Figure 2 and refer to [27] for further technical details.

The central idea of this protocol is to encode the function (more precisely, its representation as a Boolean circuit) and the inputs such that the encoding reveals no information about the inputs but can still be used to evaluate the circuit. This encoding is called "garbling" and individual garbled values can be seen as encryptions. We will use the common notation of g̃ and x̃, ỹ to refer to a garbled gate or garbled inputs, respectively. The evaluation of the garbled circuit C̃ (consisting of many garbled gates) using the garbled inputs, in turn, results in an encoding z̃ = C̃(x̃, ỹ) of the output. The encoded output can only be decoded jointly to the plain value z = C(x, y), i.e., both parties have to agree to do so. In the protocol, one of the parties, the "garbler", is in charge of creating the garbled circuit.
The other party, the "evaluator", obtains the garbled circuit, evaluates it, and then both parties jointly reveal the output.

Circuit Garbling. The wires in the Boolean circuit C of a function f are assigned two randomly chosen labels / keys: k_0 and k_1, indicating that the wire's plain value is 0 or 1, respectively. Though there is a label for each possible value {0, 1}, only the garbled value x̃ = k^w_x for the actual plain value x of wire w is used to evaluate the garbled circuit. The garbler creates both labels and therefore is the only one who knows the mapping to plaintext values; for the evaluator, a single randomly-looking label reveals no information.

The garbler creates a randomly permuted "garbled" gate g̃ in the form of an encrypted truth table for each gate g in the circuit C of f, and sends all garbled gates to the evaluator. The key idea is to use an encryption scheme Enc that has two encryption keys. For each truth table entry, the label associated with the plaintext value of the outgoing wire is then encrypted using the labels associated with the plain values of the two incoming wires as encryption keys (cf. Figure 2). For a gate g with input wires w0, w1 and output wire w2, the garbled gate g̃ thus consists of the four ciphertexts Enc_{k^{w0}_0,k^{w1}_0}(k^{w2}_{g(0,0)}), Enc_{k^{w0}_0,k^{w1}_1}(k^{w2}_{g(0,1)}), Enc_{k^{w0}_1,k^{w1}_0}(k^{w2}_{g(1,0)}), and Enc_{k^{w0}_1,k^{w1}_1}(k^{w2}_{g(1,1)}), stored in random order.

Figure 2. Overview of Yao's GC protocol for securely evaluating a binary gate g (output wire w2) on binary inputs x (input wire w0) and y (input wire w1), where g̃ is the garbled truth table. All labels k^w_a (corresponding to wire w having value a) are generated uniformly at random by the garbler. The protocol is composable and can be used to securely compute any circuit based on the secure computation of one gate. (Diagram omitted; it shows the garbled table sent in the setup phase, x̃ = k^{w0}_x sent and ỹ = k^{w1}_y obtained via oblivious transfer in the online phase, and the evaluation z̃ = k^{w2}_{g(x,y)} = g̃(x̃, ỹ).)

Garbled Circuit Evaluation. Now, if the evaluator is in possession of the labels x̃ and ỹ corresponding to the incoming wires' values x and y, then exactly one entry of g̃ can be successfully decrypted using x̃ and ỹ as decryption keys (a special type of encryption scheme is used to detect whether the decryption was successful or not). This will result in z̃ = g̃(x̃, ỹ), the label of the outgoing wire of g associated with the desired plaintext value z = g(x, y). Since only the desired entry can be decrypted and, given that the labels are chosen randomly and independently of the wire values, the evaluator can perform this computation without learning any plaintext information.

The remaining challenge is that the evaluator needs to obtain the correct garbled inputs (i.e., the labels corresponding to its inputs) without revealing the inputs to the garbler. This is solved by a cryptographic protocol called oblivious transfer (OT), which enables one party with input bit b to obliviously obtain a string s_b from another party holding two input strings s_0, s_1 without revealing b and learning anything about s_{1−b}. With this building block, Yao's GC protocol is composed as follows (cf. Figure 2): In the setup phase, the garbler creates all wire labels for the garbled circuit C̃ and sends C̃ to the evaluator. During the online phase, the garbler sends the labels corresponding to its input to the evaluator. The evaluator's garbled inputs are obtained via OT. Then, the evaluator decrypts C̃. The output can be jointly decrypted if the parties reveal the output label associations.
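To make the garbling mechanics concrete, here is a deliberately simplified and insecure toy sketch (ours; it is not the ABY implementation and omits all optimizations): labels are random byte strings, and Enc is a hash-based one-time encryption with a zero tag so that the evaluator can detect the single decryptable row, mirroring the checked encryption mentioned above.

```python
import os
import random
from hashlib import sha256

def H(ka, kb):  # hash two labels into a pad; toy stand-in for real garbling
    return sha256(ka + kb).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    # One pair of 16-byte labels per wire (w0, w1 inputs; w2 output).
    k = {w: (os.urandom(16), os.urandom(16)) for w in ("w0", "w1", "w2")}
    table = []
    for a in (0, 1):
        for b in (0, 1):
            out_label = k["w2"][a & b]
            # Append a 16-byte zero tag so decryption success is detectable.
            table.append(xor(H(k["w0"][a], k["w1"][b]),
                             out_label + bytes(16)))
    random.shuffle(table)  # random order hides which row is which
    return k, table

def evaluate(table, ka, kb):
    for ct in table:
        plain = xor(H(ka, kb), ct)
        if plain[16:] == bytes(16):  # zero tag: this row decrypted correctly
            return plain[:16]
    raise ValueError("no row decrypted")

k, table = garble_and_gate()
x, y = 1, 1
z_label = evaluate(table, k["w0"][x], k["w1"][y])
print(z_label == k["w2"][x & y])  # True: the evaluator learned only a label
```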
Protocol Costs. Improvements on OT [19, 2] and the garbling scheme [22, 47] have significantly reduced the overhead of Yao's GC protocol, making it viable to be used in applications. Specifically, the GC created and sent in the setup phase requires 2κ bits per binary AND gate, where κ is the symmetric security parameter (e.g., κ = 128 for the currently recommended security level). For obliviously transferring the labels corresponding to the evaluator's input x, |x|κ bits must be sent in the setup phase, as well as |x|(2κ + 1) bits in the online phase. Additionally, the labels for the garbler's input y must be sent in the online phase (|y|κ bits). The protocol only requires a constant number of rounds of interaction.

2.2 Sum-Product Networks (SPNs)

Recent years have seen a significant interest in tractable probabilistic representations such as Arithmetic Circuits (ACs) [9], Cutset Networks [37], and SPNs [36]. In particular, SPNs, an instance of ACs, are deep probabilistic models that can represent high-treewidth models [48] and facilitate exact inference for a range of queries in time linear in the network size.

2.2.1 Definition of SPNs

Formally, an SPN is a rooted directed acyclic graph, consisting of sum, product, and leaf nodes. The scope of an SPN is the set of random variables (RVs) appearing on the network. An SPN can be defined recursively as follows: (1) a tractable univariate distribution is an SPN; (2) a product of SPNs defined over different scopes is an SPN; and (3) a convex combination of SPNs over the same scope is an SPN. Thus, a product node in an SPN represents a factorization over independent distributions defined over different RVs, P(X, Y) = P(X)P(Y), while a sum node stands for a mixture of distributions defined over the same variables, P(X, Y) = wP_1(X, Y) + (1 − w)P_2(X, Y). From this definition, it follows that the joint distribution modeled by such an SPN is a valid normalized probability distribution [36].

2.2.2 Tractable Inference in SPNs

To answer probabilistic queries in an SPN, we evaluate the nodes starting at the leaves. Given some evidence, the probability output of querying leaf distributions is propagated bottom-up. For product nodes, the values of the child nodes are multiplied and propagated to their parents. For sum nodes, we sum the weighted values of the child nodes. The value at the root indicates the probability of the asked query. To compute marginals, i.e., the probability of partial configurations, we set the probability at the leaves for those variables to 1 and then proceed as before. This allows us to also compute conditional queries such as P(Y|X) = P(X, Y)/P(X). Finally, using a bottom-up and top-down pass, we can compute approximate MPE states [36]. All these operations traverse the tree at most twice and therefore can be achieved in linear time w.r.t. the size of the SPN.

Figure 3. CryptoSPN protocol flow for an exemplary miniature SPN with Poisson leaves: (1) client and server have private inputs X_{1,...,4} and w_{1,2}, λ_{1,...,4}, respectively; (2) private evaluation of leaf, sum, and product nodes using SMPC; (3) client receives the SPN inference result. (Diagram omitted; it shows a sum root over two products of Poisson leaves, fed by two oblivious selection networks.)
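A minimal bottom-up evaluation sketch (ours; plain probability domain, without any SMPC) for a miniature SPN in the spirit of Figure 3, here with Bernoulli leaves and made-up parameters:

```python
from math import prod

def evaluate(node, evidence):
    """Bottom-up SPN pass. evidence maps var -> value; a variable missing
    from the evidence is marginalized by letting its leaves output 1."""
    kind = node[0]
    if kind == "leaf":                    # ("leaf", var, p): Bernoulli leaf
        _, var, p = node
        if var not in evidence:
            return 1.0
        return p if evidence[var] == 1 else 1.0 - p
    if kind == "prod":                    # ("prod", children): multiply
        return prod(evaluate(c, evidence) for c in node[1])
    weights, children = node[1], node[2]  # ("sum", weights, children)
    return sum(w * evaluate(c, evidence) for w, c in zip(weights, children))

spn = ("sum", [0.3, 0.7],
       [("prod", [("leaf", "X1", 0.8), ("leaf", "X2", 0.4)]),
        ("prod", [("leaf", "X1", 0.1), ("leaf", "X2", 0.9)])])

joint = evaluate(spn, {"X1": 1, "X2": 1})  # P(X1=1, X2=1) = 0.159
marg = evaluate(spn, {"X1": 1})            # P(X1=1) = 0.31, X2 marginalized
print(joint, marg, joint / marg)           # conditional P(X2=1 | X1=1)
```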
3 CryptoSPN INFERENCE

Given the AC structure of SPNs, SMPC is a fitting mechanism to preserve privacy in SPN inference as it relies on securely evaluating a circuit. Compared to, e.g., NNs, SPNs do not have alternating linear and non-linear layers, which would complicate the application of SMPC protocols. Here, we are concerned with private SPN inference in a setting where the client has a private input and the server is in possession of a model; in the end, the server learns nothing, and the client only learns the inference result (cf. bottom of Figure 1). Unfortunately, we cannot use the arithmetic version of the GMW protocol, as it only provides integer or fixed-point operations, which is insufficient for tractable and normalized probabilistic inference such as the case of SPNs. Instead, CryptoSPN uses Yao's GC protocol that evaluates Boolean circuits, which allows us to use floating point operations by including Boolean sub-circuits corresponding to IEEE 754-compliant 32- or 64-bit floating point operations [11] in the circuit representation of the to-be-evaluated SPN.

3.1 Secure Evaluation of SPNs with SMPC

Our approach (cf. Figure 3) is to transform the SPN into a Boolean circuit and to then evaluate it via SMPC. The server input consists of all the model parameters of the SPN (i.e., weights for the weighted sums and parameters for the leaf distribution), the client input consists of the evidence, and the output is the root node value. We perform all computations in the log-domain using the well-known log-sum-exp trick, which also provides a runtime advantage for our SMPC approach as it replaces products with more efficient additions. Contrary to the convention, we use the log2 domain in CryptoSPN since the circuits for log2 and exp2 operations are significantly smaller than the natural log and exp operations.
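The log2-domain computation can be sketched as follows (ours): products become additions, and each sum node needs one log2 plus one exp2 per child, which matches the per-node gate counts given in Section 3.1.3 below. The max-shift is the standard log-sum-exp stabilization and is our assumption, as the paper does not spell out this detail.

```python
from math import log2

def log2_sum_node(log_weights, child_log_probs):
    """Log2-domain mixture: log2(sum_c w_c * p_c), computed from
    log2(w_c) and log2(p_c) via the base-2 log-sum-exp trick."""
    terms = [lw + lp for lw, lp in zip(log_weights, child_log_probs)]
    m = max(terms)  # shift for numerical stability
    return m + log2(sum(2.0 ** (t - m) for t in terms))

def log2_product_node(child_log_probs):
    return sum(child_log_probs)  # products are additions in the log domain

# Example with mixture weights 0.3 / 0.7 over child probabilities 0.32 / 0.09:
lw = [log2(0.3), log2(0.7)]
print(2 ** log2_sum_node(lw, [log2(0.32), log2(0.09)]))  # == 0.159
```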
3.1.1 Hiding the Training Data
It is possible that the structure of a general SPN leaks information about the training data. To hide any information that could be revealed from the SPN structure, we propose to use RAT-SPNs [35]. The RAT-SPN structure is built randomly via region graphs. Given a set of RVs X, a region R is defined as any non-empty subset of X. Given any region R, a K-partition P of R is a collection of K non-overlapping sub-regions R_1,...,R_K whose union is R, i.e., P = {R_1,...,R_K}, ∀k: R_k ≠ ∅, ∀k ≠ l: R_k ∩ R_l = ∅, and ∪_k R_k = R. This partitioning algorithm randomly splits the RVs. Furthermore, we recursively split the regions until we reach a desired partitioning depth. Here, we consider only 2-partitions. From these region graphs, we can construct an SPN specifying the number of uniform leaves per RV. Since the structure-building algorithm is data-agnostic (it only knows the number of RVs in the dataset), there is no information leakage. This also means that any initial random structure for |X|, the number of random variables, is a valid initial structure for any other dataset with the same number of dimensions. After obtaining the structure, we use a standard optimization algorithm for parameter estimation. The structure produced by the RAT-SPN algorithm is regular, and the values of the parameters after the optimization encode the knowledge needed to build the joint distribution. In our scheme, the parameters are only visible to the service provider. Using a random structure also enables us to choose the size of the SPNs, which allows service providers to trade off model complexity, efficiency, and accuracy.
3.1.2 Hiding the Scope
Since the scope of a node is defined by the scope of its children, it suffices to hide the leaves' scopes. Concretely, for each leaf, we have to hide which X_j for j ∈ {1,...,n} from the client's RVs X = X_1,...,X_n is selected. This corresponds to an oblivious array access in each leaf, where an array can be accessed without revealing the accessed location j. There exist efficient methods to do this based on homomorphic encryption [6] or secure evaluation of selection networks [23] via SMPC. A recent study of private decision tree evaluation [21] shows that selection networks outperform selection based on HE in both total and online runtime. Hence, we obliviously select RVs via securely evaluating a selection network in CryptoSPN.
Similar to the usage in decision trees, we add just one selection network below the SPN instead of selecting one variable per leaf. That is, the variable input of the secure leaf computation (see below) is the outcome of the selection network, which selects the variables X_{φ(1)},...,X_{φ(m)} for the m leaves in the SPN from the n client inputs according to a server input φ: [m] → [n] denoting which leaf i ≤ m uses X_{φ(i)}. If m ≥ n (which we assume is true since RVs are usually used more than once), the complexity of such a selection network is [23]:
C^sel_{n,m} = (1/2)·(n + m)·log2(n) + m·log2(m) − n + 1,
beating the trivial solution of O(nm). This requires |X|κ(n + 2C^sel_{n,m}) bits of setup communication and |X|n(2κ+1) bits online [21]. Hereby, one can hide the scope of any SPN, including ones learned through other methods [13] (although the topology is still leaked). We propose to use this approach to increase privacy in cases where leaking the topology is deemed acceptable, or where re-learning the structure of an already existing SPN is infeasible.
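As an illustration of what this selection layer computes, here is a plaintext stand-in for its functionality together with the gate-count formula (a sketch with our own names; the formula is evaluated with real-valued log2, i.e., up to rounding):

```python
import math

# Plaintext stand-in for the selection network's functionality: the server's
# private map phi assigns one of the client's n RVs to each of the m leaves.
# CryptoSPN evaluates exactly this mapping obliviously inside the circuit,
# so neither phi nor the RV values are revealed.

def select(client_rvs, phi):
    """Leaf i receives client_rvs[phi[i]] (here computed in the clear)."""
    return [client_rvs[j] for j in phi]

def sel_cost(n, m):
    """AND-gate count C^sel_{n,m} of the selection network [23]."""
    return 0.5 * (n + m) * math.log2(n) + m * math.log2(m) - n + 1

n, m = 16, 640                              # e.g., 16 RVs feeding 640 leaves
leaf_inputs = select(list(range(n)), phi=[i % n for i in range(m)])
assert sel_cost(n, m) < n * m               # beats the trivial O(nm) selection
```

For these toy sizes the formula already gives roughly 30% fewer gates than the trivial per-leaf multiplexing, and the gap grows with m.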
3.1.3 Hiding the RVs and Leaf Parameters
Because the secure computation of each floating point operation introduces overhead, our approach at the leaf level is to let the respective parties locally pre-compute as many terms as possible before inputting them into the secure SPN evaluation. For Gaussians in the log2 domain, the result can be evaluated in SMPC with just two multiplications and two additions based on the client's RV input X_j and server inputs μ, s_1 = −log2(2πσ²)/2, and s_2 = log2(e)/(2σ²) based on parameters mean μ and variance σ²:
˜P_Gauss(X_j; μ, σ²) = ˜s_1 − (˜X_j − ˜μ)² · ˜s_2.
Thus, for each leaf, the SPN circuit requires C^Gauss_b = 2(C^ADD_b + C^MUL_b) AND gates, where C^OP_b denotes the number of AND gates for a b-bit floating point operation OP, cf. [11]. [Footnote 4: For instance, C^ADD_32 = 1820, C^MUL_32 = 3016, C^EXP2_32 = 9740, and C^LOG2_32 = 10568.] Additionally, for the entire SPN, IC^Gauss_b = nb bits of client input and IS^Gauss_b = 3mb bits of server input are added, where m is the amount of leaves and n is the number of RVs.
Similarly, we can securely compute Poissons with just one multiplication and two additions based on the client's RV inputs X_j and c_1 = −log2(X_j!), and server inputs s_1 = log2(λ) and s_2 = −λ·log2(e) based on mean λ:
˜P_Pois(X_j; λ) = ˜X_j · ˜s_1 + ˜c_1 + ˜s_2.
This results in the leaf size C^Pois_b = 2C^ADD_b + C^MUL_b with input sizes IC^Pois_b = 2nb and IS^Pois_b = 2mb for the entire SPN.
Bernoullis consist of just one MUX gate, selecting from two server inputs p and q = 1−p based on the binary client RV input X_j:
˜P_Bern(X_j; p) = MUX(˜X_j, ˜p, ˜q) = ˜p if X_j = 1, and ˜q if X_j = 0.
Hence, they have a complexity of |p| AND gates, yielding the costs C^Bern_b = b, IC^Bern_b = n, and IS^Bern_b = 2mb.
Due to the log2 domain, computations of a product node just introduce a complexity of (ch(s)−1)·C^ADD_b, where ch(s) denotes the amount of children of a node s. For the same reason, the complexity of a sum node is:
C^LOG2_b + (ch(s)−1)·C^ADD_b + ch(s)·(C^ADD_b + C^EXP2_b).
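The division of labor at the leaves can be sanity-checked in plaintext Python (our own sketch; in CryptoSPN the values below would be private SMPC inputs rather than cleartext):

```python
import math

# Plaintext check of the leaf pre-computation in the log2 domain.
# Server and client each pre-compute their shares locally; only the final
# few additions/multiplications would run inside the garbled circuit.

def gauss_log2_shares(x, mu, sigma2):
    s1 = -math.log2(2 * math.pi * sigma2) / 2   # server pre-computes (1/2 from
    s2 = math.log2(math.e) / (2 * sigma2)       # the Gaussian normalizer)
    secure = s1 - (x - mu) ** 2 * s2            # 2 MUL + 2 ADD in-circuit
    direct = math.log2(math.exp(-(x - mu) ** 2 / (2 * sigma2))
                       / math.sqrt(2 * math.pi * sigma2))
    assert abs(secure - direct) < 1e-9
    return secure

def poisson_log2_shares(x, lam):
    c1 = -math.log2(math.factorial(x))          # client pre-computes
    s1 = math.log2(lam)                         # server pre-computes
    s2 = -lam * math.log2(math.e)               # server pre-computes
    secure = x * s1 + c1 + s2                   # 1 MUL + 2 ADD in-circuit
    direct = math.log2(lam ** x * math.exp(-lam) / math.factorial(x))
    assert abs(secure - direct) < 1e-9
    return secure

gauss_log2_shares(x=1.5, mu=0.0, sigma2=2.0)
poisson_log2_shares(x=3, lam=2.5)
```

The asserts confirm that the pre-computed shares reproduce the log2-densities exactly, so the circuit only ever sees a handful of float operations per leaf.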
3.2 Efficiency
Putting all of the presented building blocks together, we get the following amount of AND gates (the only relevant cost metric for Yao's GC protocol) for an SPN with n RVs and m leaves of distribution D ∈ {Gauss, Pois, Bern} that operates with b-bit precision and consists of a set of sum nodes S and product nodes P, where ch(s) for s ∈ S ∪ P denotes the amount of children of node s:
C^SPN_b = m·C^D_b + Σ_{s∈S} [ C^LOG2_b + (ch(s)−1)·C^ADD_b + ch(s)·(C^ADD_b + C^EXP2_b) ] + Σ_{p∈P} (ch(p)−1)·C^ADD_b.
In addition, we also have IC^D_b client input bits stemming from the RVs and IS^D_b + b·Σ_{s∈S} ch(s) server input bits stemming from the leaf parameters as well as the sum weights. Therefore, using Yao's GC protocol, CryptoSPN has the following communication costs in bits in the setup phase:
κ·(IC^D_b + 2·C^SPN_b)
and in the online phase:
κ·(2·IC^D_b + IS^D_b + b·Σ_{s∈S} ch(s)) + IC^D_b,
where κ is the symmetric security parameter (e.g., κ = 128).
If one does not use RAT-SPNs and instead our scope-hiding private SPN evaluation, obliviously selecting the leaves' RVs for Gaussian, Poisson, and Bernoulli leaves has the following online communication, respectively: nb(2κ+1), 2nb(2κ+1), and n(2κ+1) bits. The setup communication is κb(n + 2C^sel_{n,m}), κb(2n + 2C^sel_{2n,m}), and κ(n + 2C^sel_{n,m}) bits, respectively.
3.3 Security Guarantees
As we use RAT-SPNs, the underlying structure, the size, and the depth of the SPN are known to the client. However, this structure is randomly generated and comes only from hyper-parameters. Therefore, the structure is independent of training data and leaks no private information. The number of random variables is known by both parties, as usual (e.g., [15, 20, 28, 30, 39, 40]). And while the value of the input variables is hidden, the output is not; it might reveal some information, but it is data that inherently has to be revealed.
The protocols we use in our implementation of CryptoSPN are provably secure in the semi-honest model [27]. In the studied setting, it is reasonable to assume the server is semi-honest, as reputable service providers are confined by regulations and potential audits. Furthermore, detected malicious behaviour would hurt their reputation, providing an economic incentive to behave honestly. However, these regulations and incentives do not exist for the client's device, which can be arbitrarily modified by the client or harmful software.
Fortunately, CryptoSPN can easily be extended to provide security against malicious clients as it relies on Yao's GC protocol. There, the only messages sent by the (potentially malicious) client are in the oblivious transfer. Thus, one just needs to instantiate a maliciously secure OT protocol to achieve security against malicious clients, which incurs only a negligible performance overhead [2].
4 IMPLEMENTATION & SPFlow INTEGRATION
We implemented CryptoSPN using the state-of-the-art SMPC framework ABY [12] with the floating point operation sub-circuits of [11] and the selection network circuit of [21]. ABY implements various SMPC protocols in C++ and provides APIs for the secure evaluation of supplied circuits within these protocols. It also supports single instruction, multiple data (SIMD) instructions, which allows CryptoSPN to batch-process multiple queries at the same time. Notably, like most other SMPC frameworks, ABY requires a very low-level circuit description of the function that is computed securely, making it hard for AI researchers and others without a background in cryptography to actually perform private ML inference. Motivated by this gap, we integrate CryptoSPN with SPFlow [33], an open-source Python library that provides an interface for SPN learning, manipulation, and inference. For users, CryptoSPN appears as another SPFlow export that enables private SPN inference. Specifically, CryptoSPN allows ML experts to easily transform an SPN in SPFlow into a privacy-preserving ABY program with just the SPN as input. The resulting ABY program can be compiled into an executable for simple deployment on the client and server side. CryptoSPN is available at https://encrypto.de/code/CryptoSPN.
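As a bridge between the cost model of Section 3.2 and the benchmarks below, the formulas can be evaluated in plain Python. This is an illustrative estimator only (helper names are ours; the toy SPN shape and fan-outs are made up and match no Table 1 model; the gate counts are the 32-bit values from footnote 4):

```python
# Illustrative estimator for the Section 3.2 cost formulas.
# 32-bit floating point sub-circuit sizes are the ones quoted in footnote 4.

ADD, MUL, EXP2, LOG2 = 1820, 3016, 9740, 10568
b, kappa = 32, 128                       # bit precision, security parameter

LEAF_GATES = {"gauss": 2 * (ADD + MUL),  # C^Gauss_b
              "pois": 2 * ADD + MUL,     # C^Pois_b
              "bern": b}                 # C^Bern_b

def spn_and_gates(m_leaves, sum_fanouts, prod_fanouts, leaf):
    gates = m_leaves * LEAF_GATES[leaf]
    for ch in sum_fanouts:               # per sum node: one LOG2, plus an
        gates += LOG2 + (ch - 1) * ADD + ch * (ADD + EXP2)  # ADD+EXP2 per child
    for ch in prod_fanouts:              # products are additions in log2
        gates += (ch - 1) * ADD
    return gates

def communication_bits(gates, ic, is_):
    setup = kappa * (ic + 2 * gates)     # OT setup + 2*kappa per AND gate
    online = kappa * (2 * ic + is_) + ic # input wire labels
    return setup, online

n, m = 16, 64                            # toy Bernoulli SPN: n RVs, m leaves
gates = spn_and_gates(m, sum_fanouts=[8, 8], prod_fanouts=[2] * 32, leaf="bern")
ic = n                                   # IC^Bern_b = n client input bits
is_ = 2 * m * b + b * (8 + 8)            # IS^Bern_b plus sum-weight inputs
setup_bits, online_bits = communication_bits(gates, ic, is_)
```

Note how the sum-node fan-outs dominate the gate count through the EXP2 terms, which is exactly the effect observed in the measurements that follow.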
5 EXPERIMENTAL EVALUATION
We evaluate CryptoSPN on random SPNs trained with SPFlow for the standard datasets provided in [13], and on regular SPNs for nips, a count dataset from [32]. We evaluate models with both 32- and 64-bit precision to study the trade-off between accuracy and efficiency. The experiments are performed on two machines with Intel Core i9-7960X CPUs and 128 GB of RAM. We use a symmetric security parameter of κ = 128 bits according to current recommendations. The connection between both machines is restricted to 100 Mbit/s bandwidth and a round-trip time of 100 ms to simulate a realistic wide-area network (WAN) for a client-server setting.
Our benchmarks are given in Table 1. Compared to previous works focused on NNs, we evaluate a variety of datasets, which shows that CryptoSPN can easily transform any SPN into a privacy-preserving version. In addition to the theoretical analysis of Section 3.2, we also investigate RAT-SPNs of various sizes for the nltcs dataset of [13] to gain a practical sense of how different SPN parameters affect our runtime. Moreover, we use two regular SPNs trained for nips to see how hiding the scope (cf. Section 3.1.2) increases the runtime.
Table 1. Benchmarks of private SPN inference with CryptoSPN in a WAN network. The SPN has |X| RVs, |S| sum nodes, and |P| product nodes. Setup and online runtime as well as communication are measured for both 32- and 64-bit precision. All SPNs are RAT-SPNs with Bernoulli leaves except the ones for nips, which are regular SPNs with Poisson leaves. † indicates usage of a selection network for hiding RV assignments.
dataset | |X| | |S| | |P| | #leaves | #edges | #layers | setup (s) 32b / 64b | setup (GB) 32b / 64b | online (s) 32b / 64b | online (MB) 32b / 64b
accidents | 111 | 22 | 4420 | 11100 | 27161 | 7 | 365 / 825 | 4.34 / 9.83 | 22.5 / 55.5 | 15.5 / 30.9
baudio | 100 | 22 | 4420 | 10000 | 26061 | 7 | 359 / 812 | 4.28 / 9.67 | 22.1 / 53.6 | 14.4 / 28.7
bbc | 1058 | 2 | 880 | 42320 | 44721 | 5 | 248 / 577 | 2.95 / 6.87 | 18.6 / 41.9 | 43.8 / 87.5
bnetflix | 100 | 2 | 4400 | 20000 | 32001 | 5 | 264 / 604 | 3.15 / 7.20 | 17.1 / 40.8 | 22.5 / 45.1
book | 500 | 2 | 880 | 20000 | 22401 | 5 | 134 / 312 | 1.60 / 3.71 | 9.8 / 22.3 | 20.9 / 41.8
c20ng | 910 | 2 | 880 | 36400 | 38801 | 5 | 218 / 524 | 2.59 / 6.03 | 16.4 / 37.6 | 37.7 / 75.4
cr52 | 889 | 10 | 1768 | 35560 | 41985 | 7 | 304 / 701 | 3.62 / 8.34 | 21.1 / 49.1 | 38.1 / 76.1
cwebkb | 839 | 10 | 1768 | 33560 | 39985 | 7 | 294 / 677 | 3.50 / 8.06 | 20.6 / 46.8 | 36.0 / 72.0
dna | 180 | 22 | 4420 | 18000 | 34061 | 7 | 400 / 907 | 4.76 / 10.81 | 24.4 / 59.4 | 22.5 / 45.1
jester | 100 | 2 | 4400 | 20000 | 32001 | 5 | 264 / 604 | 3.15 / 7.20 | 17.1 / 40.9 | 22.5 / 45.1
kdd | 64 | 10 | 1768 | 2560 | 8985 | 7 | 137 / 308 | 1.62 / 3.67 | 8.5 / 20.8 | 4.3 / 8.5
kosarek | 190 | 2 | 2200 | 19000 | 25001 | 5 | 178 / 410 | 2.12 / 4.87 | 12.3 / 28.7 | 20.5 / 41.0
msnbc | 17 | 10 | 1768 | 680 | 7105 | 7 | 127 / 285 | 1.51 / 3.40 | 8.0 / 19.1 | 2.3 / 4.7
msweb | 294 | 22 | 4420 | 29400 | 45461 | 7 | 458 / 1042 | 5.45 / 12.42 | 29.1 / 69.0 | 34.2 / 68.4
plants | 69 | 2 | 4400 | 13800 | 25801 | 5 | 233 / 531 | 2.77 / 6.32 | 14.7 / 35.3 | 16.2 / 32.4
pumsb star | 163 | 2 | 4400 | 32600 | 44601 | 5 | 328 / 754 | 3.91 / 8.98 | 22.1 / 52.0 | 35.4 / 70.9
tmovie | 500 | 10 | 1768 | 20000 | 26425 | 7 | 225 / 515 | 2.68 / 6.14 | 14.9 / 35.2 | 22.1 / 44.3
tretail | 135 | 2 | 4400 | 27000 | 39001 | 5 | 300 / 688 | 3.57 / 8.19 | 19.8 / 47.2 | 29.7 / 59.4
nltcs | 16 | 2 | 880 | 640 | 3041 | 5 | 36 / 81 | 0.43 / 0.96 | 2.7 / 5.8 | 1.1 / 2.1
nltcs | 16 | 2 | 2200 | 1600 | 7601 | 5 | 90 / 202 | 1.07 / 2.41 | 5.9 / 13.8 | 2.7 / 5.4
nltcs | 16 | 2 | 4400 | 3200 | 15201 | 5 | 179 / 404 | 2.13 / 4.82 | 11.3 / 27.4 | 5.3 / 10.7
nltcs | 16 | 10 | 1768 | 640 | 7065 | 7 | 127 / 285 | 1.51 / 3.39 | 8.0 / 19.6 | 2.3 / 4.6
nltcs | 16 | 22 | 4420 | 1600 | 17661 | 7 | 316 / 712 | 3.77 / 8.48 | 19.3 / 47.7 | 5.7 / 11.5
nips | 100 | 7 | 17 | 1061 | 1084 | 11 | 26 / 71 | 0.30 / 0.83 | 2.1 / 5.0 | 1.3 / 2.6
nips† | (same SPN) | | | | | | 29 / 76 | 0.33 / 0.89 | 2.2 / 5.3 | 1.8 / 3.1
nips | 100 | 15 | 43 | 2750 | 2807 | 15 | 66 / 182 | 0.77 / 2.16 | 4.6 / 12.4 | 3.0 / 6.1
nips† | (same SPN) | | | | | | 73 / 196 | 0.85 / 2.33 | 4.8 / 12.9 | 4.4 / 7.4
Generally, our results shown in Table 1 demonstrate that we achieve tractable setup and highly efficient online performance for medium-sized SPNs. Specifically, the setup phase requires costs in the order of minutes and gigabytes, while the online phase takes only a few seconds and megabytes. Though multiple seconds might seem like a significant slow-down in some cases, this is certainly justified in many scenarios where privacy demands outweigh the costs of privacy protection (such as legal requirements for medical diagnostics).
While no single parameter appears to be decisive for the runtimes, we observe that some parameters are much more significant:
1. The number of sums has a significantly larger effect than products or leaves, which is expected given the log2 and exp2 operations. But, since the absolute amount of sums is still relatively small, the additional input weights do not affect online communication.
2. Though differences in the number of RVs, product nodes, leaves, and edges do influence the runtimes, deviations have to be very large to have an effect. For instance, when examining the SPNs for accidents, baudio, and msweb, it takes roughly twice the amount of RVs and edges (the SPN for msweb) compared to the others to reach a significant runtime deviation.
3. When looking at the SPNs for nltcs, the first three SPNs have roughly the same density and the runtime seems to scale according to their size. The last two SPNs, however, have a noticeably higher density but comparable size and result in much higher runtimes. Thus, density (especially the amount of edges) is a much more significant parameter than plain network size.
Yet, depending on the SPN, the costs of other, less important parameters can outweigh the costs of individual parameters. This is in line with our theoretical analysis in Section 3.2: the circuit's size depends on the number of children (with different costs for sums and products) as well as the number of RVs and leaves. The amount of layers has no direct effect because the round complexity of Yao's GC protocol is independent of the depth. As for the regular SPNs for nips, one can observe that the effects of hiding RV assignments are insignificant compared to the overall performance.
Using 64-bit precision roughly doubles the costs of 32-bit precision, which is expected as the sub-circuits are about twice the size [11]. Comparing the difference of the resulting log-probabilities when evaluating the SPNs in CryptoSPN to the plain evaluation with SPFlow, we get an RMSE of 4.2×10^−9 for 32-bit and 2.3×10^−17 for 64-bit models.
We stress that this insignificant loss in accuracy is not due to the cryptographic measures, but rather due to the more SMPC-friendly computation in the log2 domain.
6 CONCLUSION
Resolving privacy issues in ML applications is becoming a challenging duty for researchers, not least due to recent legal regulations such as the GDPR. By combining efforts from both AI and applied cryptography research, we presented CryptoSPN, which successfully addresses this challenge for the evaluation of sum-product networks (SPNs) that support a wide variety of desired ML tasks. The protocols of CryptoSPN together with the tools developed for ML experts deliver efficient yet extremely accurate SPN inference while providing unprecedented protection guarantees that even cover the network scope and structure. With our work serving as a foundation, future research can investigate further efficiency improvements (e.g., via quantization techniques appropriate for SPNs), hiding the structure of SPNs that cannot be re-trained, and private SPN learning.
ACKNOWLEDGEMENTS
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 850990 PSOTI). It was co-funded by the Deutsche Forschungsgemeinschaft (DFG) — SFB 1119 CROSSING/236615297 and GRK 2050 Privacy & Trust/251805230, and by the BMBF and HMWK within ATHENE. KK also acknowledges the support of the Federal Ministry of Education and Research (BMBF), grant number 01IS18043B “MADESI”.
REFERENCES
[1] Galen Andrew, Steve Chien, and Nicolas Papernot. TensorFlow privacy. https://github.com/tensorflow/privacy, 2019.
[2] Gilad Asharov, Yehuda Lindell, Thomas Schneider, and Michael Zohner, ‘More efficient oblivious transfer extensions’, Journal of Cryptology, (2017).
[3] Mauro Barni, Pierluigi Failla, Riccardo Lazzeretti, Ahmad-Reza Sadeghi, and Thomas Schneider, ‘Privacy-preserving ECG classification with branching programs and neural networks’, IEEE Transactions on Information Forensics and Security (TIFS), (2011).
[4] Fabian Boemer, Anamaria Costache, Rosario Cammarota, and Casimir Wierzynski, ‘nGraph-HE2: A high-throughput framework for neural network inference on encrypted data’, in Workshop on Encrypted Computing & Applied Homomorphic Cryptography (WAHC), (2019).
[5] Raphael Bost, Raluca Ada Popa, Stephen Tu, and Shafi Goldwasser, ‘Machine learning classification over encrypted data’, in Network and Distributed System Security Symposium (NDSS), (2015).
[6] Justin Brickell, Donald E Porter, Vitaly Shmatikov, and Emmett Witchel, ‘Privacy-preserving remote diagnostics’, in ACM Conference on Computer and Communications Security (CCS), (2007).
[7] Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song, ‘The secret sharer: Evaluating and testing unintended memorization in neural networks’, in USENIX Security, (2019).
[8] Guoxing Chen, Sanchuan Chen, Yuan Xiao, Yinqian Zhang, Zhiqiang Lin, and Ten-Hwang Lai, ‘SgxPectre: Stealing Intel secrets from SGX enclaves via speculative execution’, in IEEE European Symposium on Security and Privacy (EuroS&P), (2019).
[9] Arthur Choi and Adnan Darwiche, ‘On relaxing determinism in arithmetic circuits’, in ICML, (2017).
[10] Morten Dahl, Jason Mancuso, Yann Dupis, Ben Decoste, Morgan Giraud, Ian Livingstone, Justin Patriquin, and Gavin Uhma, ‘Private machine learning in TensorFlow using secure computation’, arXiv preprint arXiv:1810.08130, (2018).
[11] Daniel Demmler, Ghada Dessouky, Farinaz Koushanfar, Ahmad-Reza Sadeghi, Thomas Schneider, and Shaza Zeitouni, ‘Automated synthesis of optimized circuits for secure computation’, in ACM Conference on Computer and Communications Security (CCS), (2015).
[12] Daniel Demmler, Thomas Schneider, and Michael Zohner, ‘ABY - A framework for efficient mixed-protocol secure two-party computation’, in Network and Distributed System Security Symposium (NDSS), (2015).
[13] Robert Gens and Pedro M Domingos, ‘Learning the structure of sum-product networks’, in ICML, (2013).
[14] Zoubin Ghahramani, ‘Probabilistic machine learning and artificial intelligence’, Nature, (2015).
[15] Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin E Lauter, Michael Naehrig, and John Wernsing, ‘CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy’, in ICML, (2016).
[16] Oded Goldreich, Silvio Micali, and Avi Wigderson, ‘How to play any mental game’, in STOC, (1987).
[17] Bryce Goodman and Seth Flaxman, ‘European Union regulations on algorithmic decision-making and a “right to explanation”’, AI Magazine, (2017).
[18] David Gunning, Awni Hannun, Mark Ibrahim, Brian Knott, Laurens van der Maaten, Vinicius Reis, Shubho Sengupta, Shobha Venkataraman, and Xing Zhou. CrypTen: A new research tool for secure machine learning with PyTorch. https://ai.facebook.com/blog/crypten-a-new-research-tool-for-secure-machine-learning-with-pytorch/, 2019.
[19] Yuval Ishai, Joe Kilian, Kobbi Nissim, and Erez Petrank, ‘Extending oblivious transfers efficiently’, in CRYPTO, (2003).
[20] Chiraag Juvekar, Vinod Vaikuntanathan, and Anantha Chandrakasan, ‘GAZELLE: A low latency framework for secure neural network inference’, in USENIX Security, (2018).
[21] Ágnes Kiss, Masoud Naderpour, Jian Liu, N Asokan, and Thomas Schneider, ‘SoK: Modular and efficient private decision tree evaluation’, Privacy Enhancing Technologies (PETs), (2019).
[22] Vladimir Kolesnikov and Thomas Schneider, ‘Improved garbled circuit: Free XOR gates and applications’, in International Colloquium on Automata, Languages, and Programming (ICALP), (2008).
[23] Vladimir Kolesnikov and Thomas Schneider, ‘A practical universal circuit construction and secure evaluation of private functions’, in Financial Cryptography and Data Security (FC), (2008).
[24] Daphne Koller and Nir Friedman, Probabilistic Graphical Models: Principles and Techniques, MIT Press, 2009.
[25] Roland Kunkel, Do Le Quoc, Franz Gregor, Sergei Arnautov, Pramod Bhatotia, and Christof Fetzer, ‘TensorSCONE: A secure TensorFlow framework using Intel SGX’, arXiv preprint arXiv:1902.04413, (2019).
[26] Yann LeCun, Corinna Cortes, and Christopher Burges. MNIST Dataset. http://yann.lecun.com/exdb/mnist/, 1998.
[27] Yehuda Lindell and Benny Pinkas, ‘A proof of security of Yao’s protocol for two-party computation’, Journal of Cryptology, (2009).
[28] Jian Liu, Mika Juuti, Yao Lu, and N Asokan, ‘Oblivious neural network predictions via MiniONN transformations’, in ACM Conference on Computer & Communications Security (CCS), (2017).
[29] Louise Matsakis. How AT&T insiders were bribed to ‘unlock’ millions of phones. https://www.wired.com/story/att-insiders-bribed-unlock-phones/, 2019.
[30] Pratyush Mishra, Ryan Lehmkuhl, Akshayaram Srinivasan, Wenting Zheng, and Raluca Ada Popa, ‘DELPHI: A cryptographic inference service for neural networks’, in USENIX Security, (2020).
[31] Payman Mohassel and Yupeng Zhang, ‘SecureML: A system for scalable privacy-preserving machine learning’, in IEEE Symposium on Security and Privacy (S&P), (2017).
[32] Alejandro Molina, Sriraam Natarajan, and Kristian Kersting, ‘Poisson sum-product networks: A deep architecture for tractable multivariate Poisson distributions’, in AAAI, (2017).
[33] Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Pranav Subramani, Nicola Di Mauro, Pascal Poupart, and Kristian Kersting, ‘SPFlow: An easy and extensible library for deep probabilistic learning using sum-product networks’, arXiv preprint arXiv:1901.03704, (2019).
[34] Claudio Orlandi, Alessandro Piva, and Mauro Barni, ‘Oblivious neural network computing via homomorphic encryption’, EURASIP Journal on Information Security, (2007).
[35] Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Xiaoting Shao, Martin Trapp, Kristian Kersting, and Zoubin Ghahramani, ‘Random sum-product networks: A simple and effective approach to probabilistic deep learning’, in UAI, (2019).
[36] Hoifung Poon and Pedro M Domingos, ‘Sum-product networks: A new deep architecture’, in UAI, (2011).
[37] Tahrima Rahman, Prasanna Kothalkar, and Vibhav Gogate, ‘Cutset networks: A simple, tractable, and scalable approach for improving the accuracy of Chow-Liu trees’, in ECML PKDD, (2014).
[38] M Sadegh Riazi, Bita Darvish Rouhani, and Farinaz Koushanfar, ‘Deep learning on private data’, IEEE Security and Privacy Magazine, (2019).
[39] M Sadegh Riazi, Mohammad Samragh, Hao Chen, Kim Laine, Kristin Lauter, and Farinaz Koushanfar, ‘XONN: XNOR-based oblivious deep neural network inference’, in USENIX Security, (2019).
[40] M Sadegh Riazi, Christian Weinert, Oleksandr Tkachenko, Ebrahim M Songhori, Thomas Schneider, and Farinaz Koushanfar, ‘Chameleon: A hybrid secure computation framework for machine learning applications’, in ACM ASIA Conference on Computer and Communications Security (ASIACCS), (2018).
[41] Luc Rocher, Julien M Hendrickx, and Yves-Alexandre de Montjoye, ‘Estimating the success of re-identifications in incomplete datasets using generative models’, Nature Communications, (2019).
[42] Ahmad-Reza Sadeghi and Thomas Schneider, ‘Generalized universal circuits for secure evaluation of private functions with application to data classification’, in International Conference on Information Security and Cryptology (ICISC), (2008).
[43] Leslie G Valiant, ‘Universal circuits (preliminary report)’, in STOC, (1976).
[44] Tim van Elsloo, Giorgio Patrini, and Hamish Ivey-Law, ‘SEALion: A framework for neural network inference on encrypted data’, arXiv preprint arXiv:1904.12840, (2019).
[45] Tom Warren. Microsoft admits Outlook.com hackers were able to access emails. https://theverge.com/2019/4/15/18311112/microsoft-outlook-web-email-hack-response-comment, 2019.
[46] Andrew Yao, ‘How to generate and exchange secrets’, in FOCS, (1986).
[47] Samee Zahur, Mike Rosulek, and David Evans, ‘Two halves make a whole’, in EUROCRYPT, (2015).
[48] Han Zhao, Mazen Melibari, and Pascal Poupart, ‘On the relationship between sum-product networks and Bayesian networks’, in ICML, (2015).
",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LawGf_aSNfd",
"year": null,
"venue": "ACL (1) 2019",
"pdf_link": "https://aclanthology.org/P19-1223.pdf",
"forum_link": "https://openreview.net/forum?id=LawGf_aSNfd",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E3: Entailment-driven Extracting and Editing for Conversational Machine Reading",
"authors": [
"Victor Zhong",
"Luke Zettlemoyer"
],
"abstract": "Conversational machine reading systems help users answer high-level questions (e.g. determine if they qualify for particular government benefits) when they do not know the exact rules by which the determination is made (e.g. whether they need certain income levels or veteran status). The key challenge is that these rules are only provided in the form of a procedural text (e.g. guidelines from government website) which the system must read to figure out what to ask the user. We present a new conversational machine reading model that jointly extracts a set of decision rules from the procedural text while reasoning about which are entailed by the conversational history and which still need to be edited to create questions for the user. On the recently introduced ShARC conversational machine reading dataset, our Entailment-driven Extract and Edit network (E3) achieves a new state-of-the-art, outperforming existing systems as well as a new BERT-based baseline. In addition, by explicitly highlighting which information still needs to be gathered, E3 provides a more explainable alternative to prior work. We release source code for our models and experiments at https://github.com/vzhong/e3.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 2310–2320\nFlorence, Italy, July 28 - August 2, 2019. c\r2019 Association for Computational Linguistics2310E3: Entailment-driven Extracting and Editing for Conversational\nMachine Reading\nVictor Zhong\nUniversity of Washington\[email protected] Zettlemoyer\nUniversity of Washington\[email protected]\nAbstract\nConversational machine reading systems help\nusers answer high-level questions (e.g. deter-\nmine if they qualify for particular govern-\nment benefits) when they do not know the ex-\nact rules by which the determination is made\n(e.g. whether they need certain income levels\nor veteran status). The key challenge is that\nthese rules are only provided in the form of a\nprocedural text (e.g. guidelines from govern-\nment website) which the system must read to\nfigure out what to ask the user. We present\na new conversational machine reading model\nthat jointly extracts a set of decision rules\nfrom the procedural text while reasoning about\nwhich are entailed by the conversational his-\ntory and which still need to be edited to create\nquestions for the user. On the recently intro-\nduced ShARC conversational machine read-\ning dataset, our Entailment-driven Extract and\nEdit network ( E3) achieves a new state-of-the-\nart, outperforming existing systems as well as\na new BERT-based baseline. In addition, by\nexplicitly highlighting which information still\nneeds to be gathered, E3provides a more ex-\nplainable alternative to prior work. We release\nsource code for our models and experiments\nathttps://github.com/vzhong/e3 .\n1 Introduction\nIn conversational machine reading (CMR), a sys-\ntem must help users answer high-level questions\nby participating in an information gathering dia-\nlog. For example, in Figure 1 the system asks a\nseries of questions to help the user decide if they\nneed to pay tax on their pension. A key chal-\nlenge in CMR is that the rules by which the deci-\nsion is made are only provided in natural language\n(e.g. the rule text in Figure 1). At every step of the\nconversation, the system must read the rules text\nand reason about what has already been said in to\nformulate the best next question.\n# 4. Tax when you live abroadIf you’re not a UK resident, you don’t usually pay UK tax on your pension. But you might have to pay tax in the country you live in. There are a few exceptions - for example, UK civil service pensions will always be taxed in the UK.I get my money from a business I have. We get our funding from a private bank.Rule textUser scenario\nDo I need to pay UK tax on my pension?Initial user questionAre you a UK resident?NoAre you receiving UK civil service pensions?Previous questionPrevious user responseModel outputFigure 1: A conversational machine reading example.\nThe model is given a rule text document, which con-\ntains a recipe of implicit rules (underlined) for answer-\ning the initial user question. At the start of the conver-\nsation, the user presents a scenario describing their sit-\nuation. During each turn, the model can ask the user\na follow-up question to inquire about missing infor-\nmation, or conclude the dialogue by answering yes,\nno, orirrelevant .irrelevant means that the\nrule text cannot answer the question. We show previ-\nous turns as well as the corresponding inquired rules in\ngreen. The scenario is shown in red and in this case\ndoes not correspond to a rule. 
We present a new model that jointly reasons about what rules are present in the text and which are already entailed by the conversational history to improve question generation. More specifically, we propose the Entailment-driven Extract and Edit network (E3). E3 learns to extract implicit rules in the document, identify which rules are entailed by the conversation history, and edit rules that are not entailed to create follow-up questions to the user. During each turn, E3 parses the rule text to extract spans in the text that correspond to implicit rules (underlined in Figure 1). Next, the model scores the degree to which each extracted rule is entailed by the initial user scenario (red in Figure 1) and by previous interactions with the user (green in Figure 1). Finally, the model decides on a response by directly answering the question (yes/no), stating that the rule text does not contain sufficient information to answer the question (irrelevant), or asking a follow-up question about an extracted rule that is not entailed but needed to determine the answer (blue in Figure 1). In the case of inquiry, the model edits an extracted rule into a follow-up question. To our knowledge, E3 is the first extract-and-edit method for conversational dialogue, as well as the first method that jointly infers implicit rules in text, estimates entailment, inquires about missing information, and answers the question.
We compare E3 to the previous-best systems as well as a new, strong, BERT-based extractive question answering model (BERTQA) on the recently proposed ShARC CMR dataset (Saeidi et al., 2018). Our results show that E3 is more accurate in its decisions and generates more relevant inquiries. In particular, E3 outperforms the previous-best model by 5.7% in micro-averaged decision accuracy and 4.3 in inquiry BLEU4. Similarly, E3 outperforms the BERTQA baseline by 4.0% micro-averaged decision accuracy and 2.4 in inquiry BLEU4. In addition to outperforming previous methods, E3 is explainable in the sense that one can visualize what rules the model extracted and how previous interactions and inquiries ground to the extracted rules. We release source code for E3 and the BERTQA model at https://github.com/vzhong/e3.
2 Related Work
Dialogue tasks. Recently, there has been growing interest in question answering (QA) in a dialogue setting (Choi et al., 2018; Reddy et al., 2019). CMR (Saeidi et al., 2018) differs from dialogue QA in the domain covered (regulatory text vs Wikipedia). A consequence of this is that CMR requires the interpretation of complex decision rules in order to answer high-level questions, whereas dialogue QA typically contains questions whose answers are directly extractable from the text. In addition, CMR requires the formulation of free-form follow-up questions in order to identify whether the user satisfies decision rules, whereas dialogue QA does not. There has also been significant work on task-oriented dialogue, where the system must inquire about missing information in order to help the user achieve a goal (Williams et al., 2013; Henderson et al., 2014; Mrkšić et al., 2017; Young et al., 2013). However, these tasks are typically constrained to a fixed ontology (e.g. restaurant reservation), instead of a latent ontology specified via natural language documents.
Dialogue systems. One traditional approach for
One traditional approach for\ndesigning dialogue systems divides the task into\nlanguage understanding/state-tracking (Mrk ˇsi´c\net al., 2017; Zhong et al., 2018), reasoning/policy\nlearning (Su et al., 2016), and response gener-\nation (Wen et al., 2015). The models for each\nof these subtasks are then combined to form a\nfull dialogue system (Young et al., 2013; Wen\net al., 2017). The previous best system for\nShARC (Saeidi et al., 2018) similarly breaks\nthe CMR task into subtasks and combines hand-\ndesigned sub-models for decision classification,\nentailment, and follow-up generation. In contrast,\nthe core reasoning (e.g. non-editor) components\nofE3are jointly trained, and does not require\ncomplex hand-designed features.\nExtracting latent rules from text. There is a\nlong history of work on extracting knowledge\nautomatically from text (Moulin and Rousseau,\n1992). Relation extraction typically assumes that\nthere is a fixed ontology onto which extracted\nknowledge falls (Mintz et al., 2009; Riedel et al.,\n2013). Other works forgo the ontology by using,\nfor example, natural language (Angeli and Man-\nning, 2014; Angeli et al., 2015). These extractions\nfrom text are subsequently used for inference over\na knowledge base (Bordes et al., 2013; Dettmers\net al., 2018; Lin et al., 2018) and rationalizing\nmodel predictions (Lei et al., 2016). Our work is\nmore similar with the latter type in which knowl-\nedge extracted are not confined to a fixed ontology\nand instead differ on a document basis. In addi-\ntion, the rules extracted by our model are used for\ninference over natural language documents. Fi-\nnally, these rules provide rationalization for the\nmodel’s decision making, in the sense that the user\ncan visualize what rules the model extracted and\nwhich rules are entailed by previous turns.\n3 Entailment-driven Extract and Edit\nnetwork\nIn conversational machine reading, a system reads\na document that contains a set of implicit decision\n2312\nQuestion xQ\n<latexit sha1_base64=\"ZdiyFL6aYJA/a9Emz+9ereuoRjg=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2g9oQ9lsJ+3SzSbsbsQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJYPZpqgH9GR5CFn1Fjp/mnQHJQrbtVdgKwTLycVyNEYlL/6w5ilEUrDBNW657mJ8TOqDGcCZ6V+qjGhbEJH2LNU0gi1ny1OnZELqwxJGCtb0pCF+nsio5HW0yiwnRE1Y73qzcX/vF5qwhs/4zJJDUq2XBSmgpiYzP8mQ66QGTG1hDLF7a2EjamizNh0SjYEb/XlddK+qnpu1WvWKvVaHkcRzuAcLsGDa6jDHTSgBQxG8Ayv8OYI58V5dz6WrQUnnzmFP3A+fwA3+o2y</latexit><latexit sha1_base64=\"ZdiyFL6aYJA/a9Emz+9ereuoRjg=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2g9oQ9lsJ+3SzSbsbsQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJYPZpqgH9GR5CFn1Fjp/mnQHJQrbtVdgKwTLycVyNEYlL/6w5ilEUrDBNW657mJ8TOqDGcCZ6V+qjGhbEJH2LNU0gi1ny1OnZELqwxJGCtb0pCF+nsio5HW0yiwnRE1Y73qzcX/vF5qwhs/4zJJDUq2XBSmgpiYzP8mQ66QGTG1hDLF7a2EjamizNh0SjYEb/XlddK+qnpu1WvWKvVaHkcRzuAcLsGDa6jDHTSgBQxG8Ayv8OYI58V5dz6WrQUnnzmFP3A+fwA3+o2y</latexit><latexit sha1_base64=\"ZdiyFL6aYJA/a9Emz+9ereuoRjg=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2g9oQ9lsJ+3SzSbsbsQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJYPZpqgH9GR5CFn1Fjp/mnQHJQrbtVdgKwTLycVyNEYlL/6w5ilEUrDBNW657mJ8TOqDGcCZ6V+qjGhbEJH2LNU0gi1ny1OnZELqwxJGCtb0pCF+nsio5HW0yiwnRE1Y73qzcX/vF5qwhs/4zJJDUq2XBSmgpiYzP8mQ66QGTG1hDLF7a2EjamizNh0SjYEb/XlddK+qnpu1WvWKvVaHkcRzuAcLsGDa6jDHTSgBQxG8Ayv8OYI58V5dz6WrQUnnzmFP3A+fwA3+o2y</latexit><latexit 
[Model architecture figure (equation images were lost in extraction): the inputs, namely the question x_Q, the rule text x_D, the scenario x_S, and the follow-up QA turns x_{H,1}, ..., x_{H,n_H}, are encoded by a BERT Transformer encoder; the figure also labels a rule extraction layer, an input self-attention layer, a rule self-attention layer, a decision classifier, and rule representations r_1, ...]
sha1_base64=\"Qh3uzsyK9GK8FuOJ4Cps4xLYLHs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsN+3SzSbsToQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MJXCoOt+O6WNza3tnfJuZW//4PCoenzSNkmmGfdZIhPdDanhUijuo0DJu6nmNA4l74ST27nfeeLaiEQ94jTlQUxHSkSCUbTSgx54g2rNrbsLkHXiFaQGBVqD6ld/mLAs5gqZpMb0PDfFIKcaBZN8VulnhqeUTeiI9yxVNOYmyBenzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif18swuglyodIMuWLLRVEmCSZk/jcZCs0ZyqkllGlhbyVsTDVlaNOp2BC81ZfXSfuq7rl1775RazaKOMpwBudwCR5cQxPuoAU+MBjBM7zCmyOdF+fd+Vi2lpxi5hT+wPn8Af5HjYw=</latexit>r1\n<latexit sha1_base64=\"Qh3uzsyK9GK8FuOJ4Cps4xLYLHs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsN+3SzSbsToQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MJXCoOt+O6WNza3tnfJuZW//4PCoenzSNkmmGfdZIhPdDanhUijuo0DJu6nmNA4l74ST27nfeeLaiEQ94jTlQUxHSkSCUbTSgx54g2rNrbsLkHXiFaQGBVqD6ld/mLAs5gqZpMb0PDfFIKcaBZN8VulnhqeUTeiI9yxVNOYmyBenzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif18swuglyodIMuWLLRVEmCSZk/jcZCs0ZyqkllGlhbyVsTDVlaNOp2BC81ZfXSfuq7rl1775RazaKOMpwBudwCR5cQxPuoAU+MBjBM7zCmyOdF+fd+Vi2lpxi5hT+wPn8Af5HjYw=</latexit><latexit sha1_base64=\"Qh3uzsyK9GK8FuOJ4Cps4xLYLHs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsN+3SzSbsToQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MJXCoOt+O6WNza3tnfJuZW//4PCoenzSNkmmGfdZIhPdDanhUijuo0DJu6nmNA4l74ST27nfeeLaiEQ94jTlQUxHSkSCUbTSgx54g2rNrbsLkHXiFaQGBVqD6ld/mLAs5gqZpMb0PDfFIKcaBZN8VulnhqeUTeiI9yxVNOYmyBenzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif18swuglyodIMuWLLRVEmCSZk/jcZCs0ZyqkllGlhbyVsTDVlaNOp2BC81ZfXSfuq7rl1775RazaKOMpwBudwCR5cQxPuoAU+MBjBM7zCmyOdF+fd+Vi2lpxi5hT+wPn8Af5HjYw=</latexit><latexit sha1_base64=\"Qh3uzsyK9GK8FuOJ4Cps4xLYLHs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsN+3SzSbsToQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MJXCoOt+O6WNza3tnfJuZW//4PCoenzSNkmmGfdZIhPdDanhUijuo0DJu6nmNA4l74ST27nfeeLaiEQ94jTlQUxHSkSCUbTSgx54g2rNrbsLkHXiFaQGBVqD6ld/mLAs5gqZpMb0PDfFIKcaBZN8VulnhqeUTeiI9yxVNOYmyBenzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif18swuglyodIMuWLLRVEmCSZk/jcZCs0ZyqkllGlhbyVsTDVlaNOp2BC81ZfXSfuq7rl1775RazaKOMpwBudwCR5cQxPuoAU+MBjBM7zCmyOdF+fd+Vi2lpxi5hT+wPn8Af5HjYw=</latexit><latexit sha1_base64=\"Qh3uzsyK9GK8FuOJ4Cps4xLYLHs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsN+3SzSbsToQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MJXCoOt+O6WNza3tnfJuZW//4PCoenzSNkmmGfdZIhPdDanhUijuo0DJu6nmNA4l74ST27nfeeLaiEQ94jTlQUxHSkSCUbTSgx54g2rNrbsLkHXiFaQGBVqD6ld/mLAs5gqZpMb0PDfFIKcaBZN8VulnhqeUTeiI9yxVNOYmyBenzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif18swuglyodIMuWLLRVEmCSZk/jcZCs0ZyqkllGlhbyVsTDVlaNOp2BC81ZfXSfuq7rl1775RazaKOMpwBudwCR5cQxPuoAU+MBjBM7zCmyOdF+fd+Vi2lpxi5hT+wPn8Af5HjYw=</latexit>r2\n<latexit sha1_base64=\"I7VMZpTwCBMvwPRJRU9Q64c8e8M=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lKoR4LXjxWtB/QhrLZTtqlm03Y3Qgl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbT24XffUKleSwfzSxBP6JjyUPOqLHSgxrWhuWKW3WXIJvEy0kFcrSG5a/BKGZphNIwQbXue25i/Iwqw5nAeWmQakwom9Ix9i2VNELtZ8tT5+TKKiMSxsqWNGSp/p7IaKT1LApsZ0TNRK97C/E/r5+a8MbPuExSg5KtFoWpICYmi7/JiCtkRswsoUxxeythE6ooMzadkg3BW395k3RqVc+tevf1SrOex1GEC7iEa/CgAU24gxa0gcEYnuEV3hzhvDjvzseqteDkM+fwB87nD//LjY0=</latexit><latexit 
sha1_base64=\"I7VMZpTwCBMvwPRJRU9Q64c8e8M=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lKoR4LXjxWtB/QhrLZTtqlm03Y3Qgl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbT24XffUKleSwfzSxBP6JjyUPOqLHSgxrWhuWKW3WXIJvEy0kFcrSG5a/BKGZphNIwQbXue25i/Iwqw5nAeWmQakwom9Ix9i2VNELtZ8tT5+TKKiMSxsqWNGSp/p7IaKT1LApsZ0TNRK97C/E/r5+a8MbPuExSg5KtFoWpICYmi7/JiCtkRswsoUxxeythE6ooMzadkg3BW395k3RqVc+tevf1SrOex1GEC7iEa/CgAU24gxa0gcEYnuEV3hzhvDjvzseqteDkM+fwB87nD//LjY0=</latexit><latexit sha1_base64=\"I7VMZpTwCBMvwPRJRU9Q64c8e8M=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lKoR4LXjxWtB/QhrLZTtqlm03Y3Qgl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbT24XffUKleSwfzSxBP6JjyUPOqLHSgxrWhuWKW3WXIJvEy0kFcrSG5a/BKGZphNIwQbXue25i/Iwqw5nAeWmQakwom9Ix9i2VNELtZ8tT5+TKKiMSxsqWNGSp/p7IaKT1LApsZ0TNRK97C/E/r5+a8MbPuExSg5KtFoWpICYmi7/JiCtkRswsoUxxeythE6ooMzadkg3BW395k3RqVc+tevf1SrOex1GEC7iEa/CgAU24gxa0gcEYnuEV3hzhvDjvzseqteDkM+fwB87nD//LjY0=</latexit><latexit sha1_base64=\"I7VMZpTwCBMvwPRJRU9Q64c8e8M=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lKoR4LXjxWtB/QhrLZTtqlm03Y3Qgl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbT24XffUKleSwfzSxBP6JjyUPOqLHSgxrWhuWKW3WXIJvEy0kFcrSG5a/BKGZphNIwQbXue25i/Iwqw5nAeWmQakwom9Ix9i2VNELtZ8tT5+TKKiMSxsqWNGSp/p7IaKT1LApsZ0TNRK97C/E/r5+a8MbPuExSg5KtFoWpICYmi7/JiCtkRswsoUxxeythE6ooMzadkg3BW395k3RqVc+tevf1SrOex1GEC7iEa/CgAU24gxa0gcEYnuEV3hzhvDjvzseqteDkM+fwB87nD//LjY0=</latexit>r2\n<latexit sha1_base64=\"I7VMZpTwCBMvwPRJRU9Q64c8e8M=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lKoR4LXjxWtB/QhrLZTtqlm03Y3Qgl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbT24XffUKleSwfzSxBP6JjyUPOqLHSgxrWhuWKW3WXIJvEy0kFcrSG5a/BKGZphNIwQbXue25i/Iwqw5nAeWmQakwom9Ix9i2VNELtZ8tT5+TKKiMSxsqWNGSp/p7IaKT1LApsZ0TNRK97C/E/r5+a8MbPuExSg5KtFoWpICYmi7/JiCtkRswsoUxxeythE6ooMzadkg3BW395k3RqVc+tevf1SrOex1GEC7iEa/CgAU24gxa0gcEYnuEV3hzhvDjvzseqteDkM+fwB87nD//LjY0=</latexit><latexit sha1_base64=\"I7VMZpTwCBMvwPRJRU9Q64c8e8M=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lKoR4LXjxWtB/QhrLZTtqlm03Y3Qgl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbT24XffUKleSwfzSxBP6JjyUPOqLHSgxrWhuWKW3WXIJvEy0kFcrSG5a/BKGZphNIwQbXue25i/Iwqw5nAeWmQakwom9Ix9i2VNELtZ8tT5+TKKiMSxsqWNGSp/p7IaKT1LApsZ0TNRK97C/E/r5+a8MbPuExSg5KtFoWpICYmi7/JiCtkRswsoUxxeythE6ooMzadkg3BW395k3RqVc+tevf1SrOex1GEC7iEa/CgAU24gxa0gcEYnuEV3hzhvDjvzseqteDkM+fwB87nD//LjY0=</latexit><latexit sha1_base64=\"I7VMZpTwCBMvwPRJRU9Q64c8e8M=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lKoR4LXjxWtB/QhrLZTtqlm03Y3Qgl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbT24XffUKleSwfzSxBP6JjyUPOqLHSgxrWhuWKW3WXIJvEy0kFcrSG5a/BKGZphNIwQbXue25i/Iwqw5nAeWmQakwom9Ix9i2VNELtZ8tT5+TKKiMSxsqWNGSp/p7IaKT1LApsZ0TNRK97C/E/r5+a8MbPuExSg5KtFoWpICYmi7/JiCtkRswsoUxxeythE6ooMzadkg3BW395k3RqVc+tevf1SrOex1GEC7iEa/CgAU24gxa0gcEYnuEV3hzhvDjvzseqteDkM+fwB87nD//LjY0=</latexit><latexit 
sha1_base64=\"I7VMZpTwCBMvwPRJRU9Q64c8e8M=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lKoR4LXjxWtB/QhrLZTtqlm03Y3Qgl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbT24XffUKleSwfzSxBP6JjyUPOqLHSgxrWhuWKW3WXIJvEy0kFcrSG5a/BKGZphNIwQbXue25i/Iwqw5nAeWmQakwom9Ix9i2VNELtZ8tT5+TKKiMSxsqWNGSp/p7IaKT1LApsZ0TNRK97C/E/r5+a8MbPuExSg5KtFoWpICYmi7/JiCtkRswsoUxxeythE6ooMzadkg3BW395k3RqVc+tevf1SrOex1GEC7iEa/CgAU24gxa0gcEYnuEV3hzhvDjvzseqteDkM+fwB87nD//LjY0=</latexit>rnR\n<latexit sha1_base64=\"L48TVjMPthfolF+cr4aAaWWL6tA=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF49V7Ae0IWy2k3bpZhN2N0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6lGwSW2DTcCe6lCGocCu+Hkdu53n1BpnshHM03Rj+lI8ogzaqzUVUEug4dZUK25dXcBsk68gtSgQCuofg2GCctilIYJqnXfc1Pj51QZzgTOKoNMY0rZhI6wb6mkMWo/X5w7IxdWGZIoUbakIQv190ROY62ncWg7Y2rGetWbi/95/cxEN37OZZoZlGy5KMoEMQmZ/06GXCEzYmoJZYrbWwkbU0WZsQlVbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MJjAM7zCm5M6L86787FsLTnFzCn8gfP5A3pRj5o=</latexit><latexit sha1_base64=\"L48TVjMPthfolF+cr4aAaWWL6tA=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF49V7Ae0IWy2k3bpZhN2N0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6lGwSW2DTcCe6lCGocCu+Hkdu53n1BpnshHM03Rj+lI8ogzaqzUVUEug4dZUK25dXcBsk68gtSgQCuofg2GCctilIYJqnXfc1Pj51QZzgTOKoNMY0rZhI6wb6mkMWo/X5w7IxdWGZIoUbakIQv190ROY62ncWg7Y2rGetWbi/95/cxEN37OZZoZlGy5KMoEMQmZ/06GXCEzYmoJZYrbWwkbU0WZsQlVbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MJjAM7zCm5M6L86787FsLTnFzCn8gfP5A3pRj5o=</latexit><latexit sha1_base64=\"L48TVjMPthfolF+cr4aAaWWL6tA=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF49V7Ae0IWy2k3bpZhN2N0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6lGwSW2DTcCe6lCGocCu+Hkdu53n1BpnshHM03Rj+lI8ogzaqzUVUEug4dZUK25dXcBsk68gtSgQCuofg2GCctilIYJqnXfc1Pj51QZzgTOKoNMY0rZhI6wb6mkMWo/X5w7IxdWGZIoUbakIQv190ROY62ncWg7Y2rGetWbi/95/cxEN37OZZoZlGy5KMoEMQmZ/06GXCEzYmoJZYrbWwkbU0WZsQlVbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MJjAM7zCm5M6L86787FsLTnFzCn8gfP5A3pRj5o=</latexit><latexit sha1_base64=\"L48TVjMPthfolF+cr4aAaWWL6tA=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF49V7Ae0IWy2k3bpZhN2N0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6lGwSW2DTcCe6lCGocCu+Hkdu53n1BpnshHM03Rj+lI8ogzaqzUVUEug4dZUK25dXcBsk68gtSgQCuofg2GCctilIYJqnXfc1Pj51QZzgTOKoNMY0rZhI6wb6mkMWo/X5w7IxdWGZIoUbakIQv190ROY62ncWg7Y2rGetWbi/95/cxEN37OZZoZlGy5KMoEMQmZ/06GXCEzYmoJZYrbWwkbU0WZsQlVbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MJjAM7zCm5M6L86787FsLTnFzCn8gfP5A3pRj5o=</latexit>rnR\n<latexit sha1_base64=\"L48TVjMPthfolF+cr4aAaWWL6tA=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF49V7Ae0IWy2k3bpZhN2N0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6lGwSW2DTcCe6lCGocCu+Hkdu53n1BpnshHM03Rj+lI8ogzaqzUVUEug4dZUK25dXcBsk68gtSgQCuofg2GCctilIYJqnXfc1Pj51QZzgTOKoNMY0rZhI6wb6mkMWo/X5w7IxdWGZIoUbakIQv190ROY62ncWg7Y2rGetWbi/95/cxEN37OZZoZlGy5KMoEMQmZ/06GXCEzYmoJZYrbWwkbU0WZsQlVbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MJjAM7zCm5M6L86787FsLTnFzCn8gfP5A3pRj5o=</latexit><latexit 
sha1_base64=\"L48TVjMPthfolF+cr4aAaWWL6tA=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF49V7Ae0IWy2k3bpZhN2N0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6lGwSW2DTcCe6lCGocCu+Hkdu53n1BpnshHM03Rj+lI8ogzaqzUVUEug4dZUK25dXcBsk68gtSgQCuofg2GCctilIYJqnXfc1Pj51QZzgTOKoNMY0rZhI6wb6mkMWo/X5w7IxdWGZIoUbakIQv190ROY62ncWg7Y2rGetWbi/95/cxEN37OZZoZlGy5KMoEMQmZ/06GXCEzYmoJZYrbWwkbU0WZsQlVbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MJjAM7zCm5M6L86787FsLTnFzCn8gfP5A3pRj5o=</latexit><latexit sha1_base64=\"L48TVjMPthfolF+cr4aAaWWL6tA=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF49V7Ae0IWy2k3bpZhN2N0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6lGwSW2DTcCe6lCGocCu+Hkdu53n1BpnshHM03Rj+lI8ogzaqzUVUEug4dZUK25dXcBsk68gtSgQCuofg2GCctilIYJqnXfc1Pj51QZzgTOKoNMY0rZhI6wb6mkMWo/X5w7IxdWGZIoUbakIQv190ROY62ncWg7Y2rGetWbi/95/cxEN37OZZoZlGy5KMoEMQmZ/06GXCEzYmoJZYrbWwkbU0WZsQlVbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MJjAM7zCm5M6L86787FsLTnFzCn8gfP5A3pRj5o=</latexit><latexit sha1_base64=\"L48TVjMPthfolF+cr4aAaWWL6tA=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF49V7Ae0IWy2k3bpZhN2N0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6lGwSW2DTcCe6lCGocCu+Hkdu53n1BpnshHM03Rj+lI8ogzaqzUVUEug4dZUK25dXcBsk68gtSgQCuofg2GCctilIYJqnXfc1Pj51QZzgTOKoNMY0rZhI6wb6mkMWo/X5w7IxdWGZIoUbakIQv190ROY62ncWg7Y2rGetWbi/95/cxEN37OZZoZlGy5KMoEMQmZ/06GXCEzYmoJZYrbWwkbU0WZsQlVbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MJjAM7zCm5M6L86787FsLTnFzCn8gfP5A3pRj5o=</latexit>zyes\n<latexit sha1_base64=\"MewD0k4ZtvhTJqLV3S7p7CGJsnQ=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2llo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRRBJBpSEca9333Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxPLcFEMZsVkQlWmBhbVMWW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w==</latexit><latexit sha1_base64=\"MewD0k4ZtvhTJqLV3S7p7CGJsnQ=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2llo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRRBJBpSEca9333Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxPLcFEMZsVkQlWmBhbVMWW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w==</latexit><latexit sha1_base64=\"MewD0k4ZtvhTJqLV3S7p7CGJsnQ=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2llo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRRBJBpSEca9333Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxPLcFEMZsVkQlWmBhbVMWW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w==</latexit><latexit 
sha1_base64=\"MewD0k4ZtvhTJqLV3S7p7CGJsnQ=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2llo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRRBJBpSEca9333Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxPLcFEMZsVkQlWmBhbVMWW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w==</latexit>zyes\n<latexit sha1_base64=\"MewD0k4ZtvhTJqLV3S7p7CGJsnQ=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2llo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRRBJBpSEca9333Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxPLcFEMZsVkQlWmBhbVMWW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w==</latexit><latexit sha1_base64=\"MewD0k4ZtvhTJqLV3S7p7CGJsnQ=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2llo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRRBJBpSEca9333Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxPLcFEMZsVkQlWmBhbVMWW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w==</latexit><latexit sha1_base64=\"MewD0k4ZtvhTJqLV3S7p7CGJsnQ=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2llo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRRBJBpSEca9333Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxPLcFEMZsVkQlWmBhbVMWW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w==</latexit><latexit sha1_base64=\"MewD0k4ZtvhTJqLV3S7p7CGJsnQ=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2llo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRRBJBpSEca9333Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxPLcFEMZsVkQlWmBhbVMWW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w==</latexit>zno\n<latexit sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit><latexit 
sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit><latexit sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit><latexit sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit>zno\n<latexit sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit><latexit sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit><latexit sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit><latexit 
sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit>zirrelevant\n<latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit>zirrelevant\n<latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit 
sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit>…x<latexit sha1_base64=\"BJzBhsLwXSTB5Lw6Dgv99f7gkUY=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15OKpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bHDojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLkoTAUxMZl/TYZcITNiagllittbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqtTyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4gOM7g==</latexit><latexit sha1_base64=\"BJzBhsLwXSTB5Lw6Dgv99f7gkUY=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15OKpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bHDojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLkoTAUxMZl/TYZcITNiagllittbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqtTyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4gOM7g==</latexit><latexit sha1_base64=\"BJzBhsLwXSTB5Lw6Dgv99f7gkUY=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15OKpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bHDojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLkoTAUxMZl/TYZcITNiagllittbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqtTyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4gOM7g==</latexit><latexit 
sha1_base64=\"BJzBhsLwXSTB5Lw6Dgv99f7gkUY=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15OKpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bHDojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLkoTAUxMZl/TYZcITNiagllittbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqtTyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4gOM7g==</latexit>U<latexit sha1_base64=\"i+1/OIqJ7WOlfZ3Tw4+VUkqns5I=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMMfRZIhL1EFKNgkv0DTcCH1KFNA4FdsPJ7dzvPqHSPJH3ZppiENOR5BFn1Fip7Q+qNbfuLkDWiVeQGhRoDapf/WHCshilYYJq3fPc1AQ5VYYzgbNKP9OYUjahI+xZKmmMOsgXh87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZZgYlWy6KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUUWZsNhUbgrf68jrpXNU9t+61G7Vmo4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94zL</latexit><latexit sha1_base64=\"i+1/OIqJ7WOlfZ3Tw4+VUkqns5I=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMMfRZIhL1EFKNgkv0DTcCH1KFNA4FdsPJ7dzvPqHSPJH3ZppiENOR5BFn1Fip7Q+qNbfuLkDWiVeQGhRoDapf/WHCshilYYJq3fPc1AQ5VYYzgbNKP9OYUjahI+xZKmmMOsgXh87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZZgYlWy6KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUUWZsNhUbgrf68jrpXNU9t+61G7Vmo4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94zL</latexit><latexit sha1_base64=\"i+1/OIqJ7WOlfZ3Tw4+VUkqns5I=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMMfRZIhL1EFKNgkv0DTcCH1KFNA4FdsPJ7dzvPqHSPJH3ZppiENOR5BFn1Fip7Q+qNbfuLkDWiVeQGhRoDapf/WHCshilYYJq3fPc1AQ5VYYzgbNKP9OYUjahI+xZKmmMOsgXh87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZZgYlWy6KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUUWZsNhUbgrf68jrpXNU9t+61G7Vmo4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94zL</latexit><latexit sha1_base64=\"i+1/OIqJ7WOlfZ3Tw4+VUkqns5I=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMMfRZIhL1EFKNgkv0DTcCH1KFNA4FdsPJ7dzvPqHSPJH3ZppiENOR5BFn1Fip7Q+qNbfuLkDWiVeQGhRoDapf/WHCshilYYJq3fPc1AQ5VYYzgbNKP9OYUjahI+xZKmmMOsgXh87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZZgYlWy6KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUUWZsNhUbgrf68jrpXNU9t+61G7Vmo4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94zL</latexit>R1\n<latexit sha1_base64=\"vzdjoaDzv9nOg/w228vuA3tuVlE=\">AAAB63icbVDLSgNBEOyNrxhfUY9eBoPgKexKQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKKq66e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2ZZbTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6CgtSgQGtY/RqMFEkFlZZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJcFKccWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI66VzVA78e3DdqzUYRRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/ACPgjZY=</latexit><latexit sha1_base64=\"vzdjoaDzv9nOg/w228vuA3tuVlE=\">AAAB63icbVDLSgNBEOyNrxhfUY9eBoPgKexKQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKKq66e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2ZZbTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6CgtSgQGtY/RqMFEkFlZZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJcFKccWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI66VzVA78e3DdqzUYRRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/ACPgjZY=</latexit><latexit 
sha1_base64=\"vzdjoaDzv9nOg/w228vuA3tuVlE=\">AAAB63icbVDLSgNBEOyNrxhfUY9eBoPgKexKQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKKq66e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2ZZbTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6CgtSgQGtY/RqMFEkFlZZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJcFKccWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI66VzVA78e3DdqzUYRRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/ACPgjZY=</latexit><latexit sha1_base64=\"vzdjoaDzv9nOg/w228vuA3tuVlE=\">AAAB63icbVDLSgNBEOyNrxhfUY9eBoPgKexKQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKKq66e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2ZZbTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6CgtSgQGtY/RqMFEkFlZZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJcFKccWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI66VzVA78e3DdqzUYRRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/ACPgjZY=</latexit>RnR\n<latexit sha1_base64=\"9QkYS9HYIiCV3i5Jm1pK77jwtww=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6nhUijeRoGS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cXIOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VNObGzxfnzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif188wuvVzodIMuWLLRVEmCSZk/jsZCs0ZyqkllGlhbyVsTDVlaBOq2BC81ZfXSeeq7rl17+G61rgu4ijDGZzDJXhwAw24hya0gcEEnuEV3pzUeXHenY9la8kpZk7hD5zPH0kRj3o=</latexit><latexit sha1_base64=\"9QkYS9HYIiCV3i5Jm1pK77jwtww=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6nhUijeRoGS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cXIOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VNObGzxfnzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif188wuvVzodIMuWLLRVEmCSZk/jsZCs0ZyqkllGlhbyVsTDVlaBOq2BC81ZfXSeeq7rl17+G61rgu4ijDGZzDJXhwAw24hya0gcEEnuEV3pzUeXHenY9la8kpZk7hD5zPH0kRj3o=</latexit><latexit sha1_base64=\"9QkYS9HYIiCV3i5Jm1pK77jwtww=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6nhUijeRoGS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cXIOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VNObGzxfnzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif188wuvVzodIMuWLLRVEmCSZk/jsZCs0ZyqkllGlhbyVsTDVlaBOq2BC81ZfXSeeq7rl17+G61rgu4ijDGZzDJXhwAw24hya0gcEEnuEV3pzUeXHenY9la8kpZk7hD5zPH0kRj3o=</latexit><latexit sha1_base64=\"9QkYS9HYIiCV3i5Jm1pK77jwtww=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6nhUijeRoGS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cXIOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VNObGzxfnzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif188wuvVzodIMuWLLRVEmCSZk/jsZCs0ZyqkllGlhbyVsTDVlaBOq2BC81ZfXSeeq7rl17+G61rgu4ijDGZzDJXhwAw24hya0gcEEnuEV3pzUeXHenY9la8kpZk7hD5zPH0kRj3o=</latexit>···<latexit 
sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit>RulescorerA1\n<latexit sha1_base64=\"Kbzzmn1Oe6Ur0QF1ftChpfZF1ug=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbCxlRRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit><latexit sha1_base64=\"Kbzzmn1Oe6Ur0QF1ftChpfZF1ug=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbCxlRRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit><latexit 
sha1_base64=\"Kbzzmn1Oe6Ur0QF1ftChpfZF1ug=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbCxlRRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit><latexit sha1_base64=\"Kbzzmn1Oe6Ur0QF1ftChpfZF1ug=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbCxlRRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit>AnR\n<latexit sha1_base64=\"H9wlhGkXhTHP9mp8ey4PKoIiZyM=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbbTbt0swm7E6GE/ggvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3peW6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC3krYiGrK0CZUsSF4yy+vkvZF3XPr3v1lrXFZxFGGEziFc/DgChpwB01oAYMxPMMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit><latexit sha1_base64=\"H9wlhGkXhTHP9mp8ey4PKoIiZyM=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbbTbt0swm7E6GE/ggvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3peW6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC3krYiGrK0CZUsSF4yy+vkvZF3XPr3v1lrXFZxFGGEziFc/DgChpwB01oAYMxPMMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit><latexit sha1_base64=\"H9wlhGkXhTHP9mp8ey4PKoIiZyM=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbbTbt0swm7E6GE/ggvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3peW6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC3krYiGrK0CZUsSF4yy+vkvZF3XPr3v1lrXFZxFGGEziFc/DgChpwB01oAYMxPMMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit><latexit sha1_base64=\"H9wlhGkXhTHP9mp8ey4PKoIiZyM=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbbTbt0swm7E6GE/ggvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3peW6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC3krYiGrK0CZUsSF4yy+vkvZF3XPr3v1lrXFZxFGGEziFc/DgChpwB01oAYMxPMMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit>···<latexit 
sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit>C<latexit sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit><latexit sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit><latexit 
sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit><latexit sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit>Extraction ModuleEntailment scorergi\n<latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit>hi\n<latexit 
sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit>Entailment ModuleDecision Module\nzinquire\n<latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit 
sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit>zinquire\n<latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit>Figure 2: The Entailment-driven Extract and Edit network.\nrules. The user presents a scenario describing their\nsituation, and asks the system an underspecified\nquestion. 
In order to answer the user’s question,\nthe system must ask the user a series of follow-up\nquestions to determine whether the user satisfies\nthe set of decision rules.\nThe key challenges in CMR are to identify im-\nplicit rules present in the document, understand\nwhich rules are necessary to answer the ques-\ntion, and inquire about necessary rules that are\nnot entailed by the conversation history by ask-\ning follow-up questions. The three core mod-\nules of E3, the extraction, entailment, and de-\ncision modules, combine to address these chal-\nlenges. Figure 2 illustrates the components of E3.\nFor ease of exposition, we describe E3for a sin-\ngle turn in the conversation. To make the refer-\nences concrete in the following sections, we use as\nan example the inputs and outputs from Figure 1.\nThis example describes a turn in a conversation in\nwhich the system helps the user determine whether\nthey need to pay UK taxes on their pension.\n3.1 Extraction module\nThe extraction module extracts spans from the\ndocument that correspond to latent rules. Let\nxD,xQ,xS,xH;idenote words in the rule text,\nquestion, scenario, and the inquiry and user re-\nsponse during the ith previous turn of the dia-\nlogue after Nturns have passed. We concate-\nnate these inputs into a single sequence x=\n[xQ;xD;xS;xH;1;\u0001\u0001\u0001xH;N]joined by sentinel to-\nkens that mark the boundaries of each input. To\nencode the input for the extraction module, we use\nBERT, a transformer-based model (Vaswani et al.,\n2017) that achieves consistent gains on a variety\nof NLP tasks (Devlin et al., 2019). We encodexusing the BERT encoder, which first converts\nwords into word piece tokens (Wu et al., 2016),\nthen embeds these tokens along with their posi-\ntional embeddings and segmentation embeddings.\nThese embeddings are subsequently encoded via a\ntransformer network, which allows for inter-token\nattention at each layer. Let nxbe the number\nof tokens in the concatenated input xanddUbe\nthe output dimension of the BERT encoder. For\nbrevity, we denote the output of the BERT encoder\nasU= BERT(x)2Rnx\u0002dUand refer readers\nto Devlin et al. (2019) for detailed architecture.\nIn order to extract the implicit decision rules\nfrom the document, we compute a start score \u000bi\nand an end score \fifor eachith token as\n\u000bi=\u001b(W\u000bUi+b\u000b)2R (1)\n\fi=\u001b(W\fUi+b\f)2R (2)\nwhereW\u000b;W\f2RdU,b\u000b;b\f2R, and\u001bis the\nsigmoid function.\nFor each position siwhere\u000biis larger than\nsome threshold \u001c, we find the closest proceeding\npositionei\u0015siwhere\fei>\u001c. Each pair (si;ei)\nthen forms an extracted span corresponding to a\nruleRiexpressed in the rule text. In the example\nin Figure 1, the correct extracted spans are “UK\nresident” and “UK civil service pensions”.\nFor theith rule, we use self-attention to build a\nrepresentation Aiover the span (si;ei).\n\rk=W\rUk+b\r2R;si\u0014k\u0014ei(3)\n\rk= softmax ( \r)k2R;si\u0014k\u0014ei(4)\nAi=eiX\nk=si\rkUk2RdU (5)\nwhereW\r2RdUandb\r2R. 
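To make the span extraction step concrete, the following is a minimal sketch in plain Python (ours, not the authors’ released code; the function name, the threshold value, and the greedy left-to-right handling of consecutive spans are illustrative assumptions, since the text does not spell these details out):

```python
def extract_spans(alpha, beta, tau=0.5):
    """Pair thresholded start/end scores (Eqs. 1-2) into rule spans.

    alpha, beta: per-token start and end probabilities.
    Returns a list of (s_i, e_i) index pairs, one per extracted rule.
    """
    spans, i, n = [], 0, len(alpha)
    while i < n:
        if alpha[i] > tau:
            # find the closest following position whose end score also
            # exceeds the threshold
            j = i
            while j < n and beta[j] <= tau:
                j += 1
            if j < n:
                spans.append((i, j))
                i = j + 1  # resume scanning after the completed span
                continue
        i += 1
    return spans

# Toy scores: tokens 1-2 and token 5 form two rule spans.
alpha = [0.1, 0.9, 0.2, 0.1, 0.1, 0.8, 0.1]
beta  = [0.1, 0.1, 0.9, 0.1, 0.1, 0.9, 0.1]
print(extract_spans(alpha, beta))  # [(1, 2), (5, 5)]
```

Each extracted span reduces to an index pair into $U$, so the self-attention pooling of Eqs. (3)-(5) applies to exactly the tokens the span covers.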
Let $n_R$ denote the number of spans in the rule text, each of which corresponds to a ground truth rule. The rule extraction loss is computed as the sum of the binary cross entropy losses for each rule $R_i$:

$\mathcal{L}_{\text{re}} = \sum_i^{n_R} \left( \mathcal{L}_{\text{start},i} + \mathcal{L}_{\text{end},i} \right)$   (6)

Let $n_D$ denote the number of tokens in the rule text, $s_i$, $e_i$ the ground truth start and end positions for the $i$th rule, and $\mathbb{1}_f$ the indicator function that returns 1 if and only if the condition $f$ holds. Recall from Eqs (1) and (2) that $\alpha_j$ and $\beta_j$ denote the probabilities that token $j$ is the start and end of a rule. The start and end binary cross entropy losses for the $i$th rule are computed as

$\mathcal{L}_{\text{start},i} = -\sum_j^{n_D} \mathbb{1}_{j=s_i} \log \alpha_j + \mathbb{1}_{j \neq s_i} \log (1 - \alpha_j)$
$\mathcal{L}_{\text{end},i} = -\sum_j^{n_D} \mathbb{1}_{j=e_i} \log \beta_j + \mathbb{1}_{j \neq e_i} \log (1 - \beta_j)$

3.2 Entailment module
Given the extracted rules $R = \{R_1, \cdots, R_{n_R}\}$, the entailment module estimates whether each rule is entailed by the conversation history, so that the model can subsequently inquire about rules that are not entailed. For the example in Figure 1, the rule "UK resident" is entailed by the previous inquiry "Are you a UK resident". In contrast, the rule "UK civil service pensions" is not entailed by either the scenario or the conversation history, so the model needs to inquire about it. In this particular case the scenario does not entail any rule.
For each extracted rule, we compute a score that indicates the extent to which this particular rule has already been discussed in the initial scenario $S$ and in previous turns $Q$. In particular, let $N(R_i, S)$ denote the number of tokens shared by $R_i$ and $S$, $N(R_i)$ the number of tokens in $R_i$, and $N(S)$ the number of tokens in $S$. We compute the scenario entailment score $g_i$ as

$\mathrm{pr}(R_i, S) = \frac{N(R_i, S)}{N(R_i)}$   (7)
$\mathrm{re}(R_i, S) = \frac{N(R_i, S)}{N(S)}$   (8)
$g_i = \mathrm{f1}(R_i, S) = \frac{2\,\mathrm{pr}(R_i, S)\,\mathrm{re}(R_i, S)}{\mathrm{pr}(R_i, S) + \mathrm{re}(R_i, S)}$   (9)

where pr, re, and f1 respectively denote the precision, recall, and F1 scores. We compute a similar score to represent the extent to which the rule $R_i$ has been discussed in previous inquiries. Let $Q_k$ denote tokens in the $k$th previous inquiry. We compute the history entailment score $h_i$ between the extracted rule $R_i$ and all $n_H$ previous inquiries in the conversation history as

$h_i = \max_{k = 1, \cdots, n_H} \mathrm{f1}(R_i, Q_k)$   (10)

The final representation of the $i$th rule, $\tilde{A}_i$, is then the concatenation of the span self-attention and the entailment scores:

$\tilde{A}_i = [A_i; g_i; h_i] \in \mathbb{R}^{d_U + 2}$   (11)

where $[x; y]$ denotes the concatenation of $x$ and $y$. We also experiment with embedding and encoding similarity based approaches to compute entailment, but find that this F1 approach performs the best. Because the encoder utilizes cross attention between different components of the input, the representations $U$ and $A_i$ are able to capture notions of entailment. However, we find that explicitly scoring entailment via the entailment module further discourages the model from making redundant inquiries.
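The token-overlap scores of Eqs (7)-(10) can be sketched in a few lines of plain Python. Whether shared tokens are counted as a set or a multiset is not specified above, so the multiset (Counter) choice below is an assumption of this sketch:

```python
from collections import Counter


def f1_overlap(rule_tokens, other_tokens):
    """F1 between a rule span and a scenario/inquiry via token overlap."""
    shared = Counter(rule_tokens) & Counter(other_tokens)
    n_shared = sum(shared.values())  # N(R_i, S): tokens shared by both
    if n_shared == 0:
        return 0.0
    pr = n_shared / len(rule_tokens)   # Eq (7): precision
    re = n_shared / len(other_tokens)  # Eq (8): recall
    return 2 * pr * re / (pr + re)     # Eq (9): F1


def entailment_scores(rule_tokens, scenario_tokens, history_inquiries):
    g = f1_overlap(rule_tokens, scenario_tokens)  # scenario score g_i (Eq 9)
    h = max((f1_overlap(rule_tokens, q) for q in history_inquiries),
            default=0.0)                          # history score h_i (Eq 10)
    return g, h
```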
3.3 Decision module
Given the extracted rules $R$ and the entailment-enriched representations $\tilde{A}_i$ for each rule, the decision module decides on a response to the user. These include answering yes or no to the user's original question, determining that the rule text is irrelevant to the question, or inquiring about a rule that is not entailed but required to answer the question. For the example in Figure 1, the rule "UK civil service pensions" is not entailed, hence the correct decision is to ask a follow-up question about whether the user receives this pension.
We start by computing a summary $C$ of the input using self-attention:

$\phi_k = W_\phi U_k + b_\phi \in \mathbb{R}$   (12)
$\bar{\phi}_k = \mathrm{softmax}(\phi)_k \in \mathbb{R}$   (13)
$C = \sum_k \bar{\phi}_k U_k \in \mathbb{R}^{d_U}$   (14)

where $W_\phi \in \mathbb{R}^{d_U}$, $b_\phi \in \mathbb{R}$, and $\phi$, $\bar{\phi}$ are respectively the unnormalized and normalized self-attention weights. Next, we score the choices yes, no, irrelevant, and inquire:

$z = W_z C + b_z \in \mathbb{R}^4$   (15)

where $z$ is a vector containing a class score for each of the yes, no, irrelevant, and inquire decisions.
For inquiries, we compute an inquiry score $r_i$ for each extracted rule $R_i$:

$r_i = W_r \tilde{A}_i + b_r \in \mathbb{R}$   (16)

where $W_r \in \mathbb{R}^{d_U + 2}$ and $b_r \in \mathbb{R}$. Let $k$ indicate the correct decision, and $i$ indicate the correct inquiry, if the model is supposed to make an inquiry. The decision loss is

$\mathcal{L}_{\text{dec}} = -\log \mathrm{softmax}(z)_k - \mathbb{1}_{k=\text{inquire}} \log \mathrm{softmax}(r)_i$   (17)

During inference, the model first determines the decision $d = \arg\max_k z_k$. If the decision $d$ is inquire, the model asks a follow-up question about the $i$th rule such that $i = \arg\max_j r_j$. Otherwise, the model concludes the dialogue with $d$.
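A minimal sketch of the decision head and its loss (Eqs 12-17) follows; the module and weight names are illustrative assumptions, and the gold labels are assumed to arrive as integer indices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecisionModule(nn.Module):
    """Decision head of Eqs (12)-(16); class index order fixed as below."""

    CLASSES = ("yes", "no", "irrelevant", "inquire")

    def __init__(self, d_u: int):
        super().__init__()
        self.summary_scorer = nn.Linear(d_u, 1)      # phi of Eqs (12)-(13)
        self.class_scorer = nn.Linear(d_u, 4)        # W_z, b_z of Eq (15)
        self.inquiry_scorer = nn.Linear(d_u + 2, 1)  # W_r, b_r of Eq (16)

    def forward(self, U, A_tilde):
        # U: (n_x, d_u); A_tilde: (n_R, d_u + 2) entailment-enriched rules.
        phi_bar = torch.softmax(self.summary_scorer(U).squeeze(-1), dim=0)
        C = phi_bar @ U                               # Eq (14): input summary
        z = self.class_scorer(C)                      # Eq (15): class scores
        r = self.inquiry_scorer(A_tilde).squeeze(-1)  # Eq (16): rule scores
        return z, r


def decision_loss(z, r, gold_class: int, gold_rule: int):
    """Eq (17): class cross entropy plus inquiry cross entropy when inquiring."""
    loss = F.cross_entropy(z.unsqueeze(0), torch.tensor([gold_class]))
    if DecisionModule.CLASSES[gold_class] == "inquire":
        loss = loss + F.cross_entropy(r.unsqueeze(0), torch.tensor([gold_rule]))
    return loss
```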
Rephrasing rule into question via editor. In the event that the model chooses to make an inquiry about an extracted rule $R_i$, $R_i$ is given to a subsequent editor to rephrase into a follow-up question. For the example in Figure 1, the editor edits the span "UK civil service pensions" into the follow-up question "Are you receiving UK civil service pensions?" Figure 3 illustrates the editor.
[Figure 3: The editor of E3. The proposed rule $R_i$ and the rule text $x_D$ are encoded by a BERT transformer encoder into $U_{\text{edit}}$; a pre-span attentive decoder and a post-span attentive decoder then generate the pre-span edit $R_{i,\text{pre}}$ and the post-span edit $R_{i,\text{post}}$.]
The editor takes as input $x_{\text{edit}} = [R_i; x_D]$, the concatenation of the extracted rule to rephrase, $R_i$, and the rule text $x_D$. As before, we encode using a BERT encoder to obtain $U_{\text{edit}} = \mathrm{BERT}(x_{\text{edit}})$. The encoder is followed by two decoders that respectively generate the pre-span edit $R_{i,\text{pre}}$ and the post-span edit $R_{i,\text{post}}$. For the example in Figure 1, given the span "UK civil service pensions", the pre-span and post-span edits that form the question "Are you receiving UK civil service pensions?" are respectively "Are you receiving" and "?".
To perform each edit, we employ an attentive decoder (Bahdanau et al., 2015) with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997). Let $h_t$ denote the decoder state at time $t$. We compute attention $a_t$ over the input:

$\zeta_k = U_{\text{edit},k} \cdot h_{t-1} \in \mathbb{R}$   (18)
$\bar{\zeta}_k = \mathrm{softmax}(\zeta)_k \in \mathbb{R}$   (19)
$a_t = \sum_k \bar{\zeta}_k U_{\text{edit},k} \in \mathbb{R}^{d_U}$   (20)

Let $V \in \mathbb{R}^{n_V \times d_V}$ denote the embedding matrix corresponding to the $n_V$ tokens in the vocabulary.
sha1_base64=\"at/BQF41yKDPfgtKsdfFt0g2d7w=\">AAAB6nicbVDLSgNBEOyNrxhfUY9eBoPgKexKIB4DXjzGRx6QLGF20psMmZ1dZmaFsOQTvHhQxKtf5M2/cZLsQRMLGoqqbrq7gkRwbVz32ylsbG5t7xR3S3v7B4dH5eOTto5TxbDFYhGrbkA1Ci6xZbgR2E0U0igQ2AkmN3O/84RK81g+mmmCfkRHkoecUWOlh/sBH5QrbtVdgKwTLycVyNEclL/6w5ilEUrDBNW657mJ8TOqDGcCZ6V+qjGhbEJH2LNU0gi1ny1OnZELqwxJGCtb0pCF+nsio5HW0yiwnRE1Y73qzcX/vF5qwms/4zJJDUq2XBSmgpiYzP8mQ66QGTG1hDLF7a2EjamizNh0SjYEb/XlddK+qnpu1burVRq1PI4inME5XIIHdWjALTShBQxG8Ayv8OYI58V5dz6WrQUnnzmFP3A+fwAido2k</latexit><latexit sha1_base64=\"at/BQF41yKDPfgtKsdfFt0g2d7w=\">AAAB6nicbVDLSgNBEOyNrxhfUY9eBoPgKexKIB4DXjzGRx6QLGF20psMmZ1dZmaFsOQTvHhQxKtf5M2/cZLsQRMLGoqqbrq7gkRwbVz32ylsbG5t7xR3S3v7B4dH5eOTto5TxbDFYhGrbkA1Ci6xZbgR2E0U0igQ2AkmN3O/84RK81g+mmmCfkRHkoecUWOlh/sBH5QrbtVdgKwTLycVyNEclL/6w5ilEUrDBNW657mJ8TOqDGcCZ6V+qjGhbEJH2LNU0gi1ny1OnZELqwxJGCtb0pCF+nsio5HW0yiwnRE1Y73qzcX/vF5qwms/4zJJDUq2XBSmgpiYzP8mQ66QGTG1hDLF7a2EjamizNh0SjYEb/XlddK+qnpu1burVRq1PI4inME5XIIHdWjALTShBQxG8Ayv8OYI58V5dz6WrQUnnzmFP3A+fwAido2k</latexit>Rule text xD\n<latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit>xD\n<latexit 
sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit>BERT Transformer encoderPre-span attentive decoderPre-span edit Ri,pre\n<latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit 
sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit>Ri,pre\n<latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit>xedit\n<latexit 
sha1_base64=\"rLHYFr2yGjRP7ghXVdRwZNX4urM=\">AAAB+HicbVDLSsNAFL2pr1ofjbp0M1gEVyWRgi4LblxWsA9oQ5hMJu3QySTMTMQa+iVuXCji1k9x5984abPQ1gMDh3Pu5Z45QcqZ0o7zbVU2Nre2d6q7tb39g8O6fXTcU0kmCe2ShCdyEGBFORO0q5nmdJBKiuOA034wvSn8/gOViiXiXs9S6sV4LFjECNZG8u36oz+KsZ7IOKch03PfbjhNZwG0TtySNKBEx7e/RmFCspgKTThWaug6qfZyLDUjnM5ro0zRFJMpHtOhoQLHVHn5IvgcnRslRFEizRMaLdTfGzmOlZrFgZksQqpVrxD/84aZjq69nIk001SQ5aEo40gnqGgBhUxSovnMEEwkM1kRmWCJiTZd1UwJ7uqX10nvsuk6Tfeu1Wi3yjqqcApncAEuXEEbbqEDXSCQwTO8wpv1ZL1Y79bHcrRilTsn8AfW5w91u5ON</latexit><latexit sha1_base64=\"rLHYFr2yGjRP7ghXVdRwZNX4urM=\">AAAB+HicbVDLSsNAFL2pr1ofjbp0M1gEVyWRgi4LblxWsA9oQ5hMJu3QySTMTMQa+iVuXCji1k9x5984abPQ1gMDh3Pu5Z45QcqZ0o7zbVU2Nre2d6q7tb39g8O6fXTcU0kmCe2ShCdyEGBFORO0q5nmdJBKiuOA034wvSn8/gOViiXiXs9S6sV4LFjECNZG8u36oz+KsZ7IOKch03PfbjhNZwG0TtySNKBEx7e/RmFCspgKTThWaug6qfZyLDUjnM5ro0zRFJMpHtOhoQLHVHn5IvgcnRslRFEizRMaLdTfGzmOlZrFgZksQqpVrxD/84aZjq69nIk001SQ5aEo40gnqGgBhUxSovnMEEwkM1kRmWCJiTZd1UwJ7uqX10nvsuk6Tfeu1Wi3yjqqcApncAEuXEEbbqEDXSCQwTO8wpv1ZL1Y79bHcrRilTsn8AfW5w91u5ON</latexit><latexit sha1_base64=\"rLHYFr2yGjRP7ghXVdRwZNX4urM=\">AAAB+HicbVDLSsNAFL2pr1ofjbp0M1gEVyWRgi4LblxWsA9oQ5hMJu3QySTMTMQa+iVuXCji1k9x5984abPQ1gMDh3Pu5Z45QcqZ0o7zbVU2Nre2d6q7tb39g8O6fXTcU0kmCe2ShCdyEGBFORO0q5nmdJBKiuOA034wvSn8/gOViiXiXs9S6sV4LFjECNZG8u36oz+KsZ7IOKch03PfbjhNZwG0TtySNKBEx7e/RmFCspgKTThWaug6qfZyLDUjnM5ro0zRFJMpHtOhoQLHVHn5IvgcnRslRFEizRMaLdTfGzmOlZrFgZksQqpVrxD/84aZjq69nIk001SQ5aEo40gnqGgBhUxSovnMEEwkM1kRmWCJiTZd1UwJ7uqX10nvsuk6Tfeu1Wi3yjqqcApncAEuXEEbbqEDXSCQwTO8wpv1ZL1Y79bHcrRilTsn8AfW5w91u5ON</latexit><latexit sha1_base64=\"rLHYFr2yGjRP7ghXVdRwZNX4urM=\">AAAB+HicbVDLSsNAFL2pr1ofjbp0M1gEVyWRgi4LblxWsA9oQ5hMJu3QySTMTMQa+iVuXCji1k9x5984abPQ1gMDh3Pu5Z45QcqZ0o7zbVU2Nre2d6q7tb39g8O6fXTcU0kmCe2ShCdyEGBFORO0q5nmdJBKiuOA034wvSn8/gOViiXiXs9S6sV4LFjECNZG8u36oz+KsZ7IOKch03PfbjhNZwG0TtySNKBEx7e/RmFCspgKTThWaug6qfZyLDUjnM5ro0zRFJMpHtOhoQLHVHn5IvgcnRslRFEizRMaLdTfGzmOlZrFgZksQqpVrxD/84aZjq69nIk001SQ5aEo40gnqGgBhUxSovnMEEwkM1kRmWCJiTZd1UwJ7uqX10nvsuk6Tfeu1Wi3yjqqcApncAEuXEEbbqEDXSCQwTO8wpv1ZL1Y79bHcrRilTsn8AfW5w91u5ON</latexit>Post-span attentive decoderPost-span edit Ri,post\n<latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit 
sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit>Ri,post\n<latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit>Uedit\n<latexit 
sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit>Figure 3: The editor of E3.\nTo generate the tth tokenwt, we use weight tying\nbetween the output layer and the embedding ma-\ntrix (Press and Wolf, 2017).\nvt= embed( V;wt\u00001) (21)\nht= LSTM ([ vt;at];ht\u00001)2RdU(22)\not=Wo[ht;at] +bo2RdV (23)\np(wt) = softmax( Vot)2RnV (24)\nwt= argmaxkp(wt)k (25)\nWe use a separate attentive decoder to gener-\nate the pre-span edit Ri;preand the post-span edit\nRi;post. The decoders share the embedding matrix\nand BERT encoder but do not share other parame-\nters. The output of the editor is the concatenation\nof tokens [Ri;pre;Ri;Ri;post].\nThe editing loss consists of the sequential cross\nentropy losses from generating the pre-span edit\nand the post-span edit. Let npredenote the number\nof tokens and ^wt;prethetth tokens in the ground\ntruth pre-span edit. The pre-span loss is\nLpre=\u0000npreX\ntlogp( ^wt;pre) (26)\nThe editing loss is then the sum of the pre-span\nand post-span losses, the latter of which is ob-\ntained in a manner similar to Eq (26).\nLedit=Lpre+Lpost (27)\n4 Experiment\nWe train and evaluate the Entailment-driven Ex-\ntract and Edit network on the ShARC CMR\ndataset. In particular, we compare our method\nto three other models. Two of these models\nare proposed by Saeidi et al. (2018). 
4 Experiment
We train and evaluate the Entailment-driven Extract and Edit network on the ShARC CMR dataset. In particular, we compare our method to three other models. Two of these models are proposed by Saeidi et al. (2018). They are an attentive sequence-to-sequence model that attends to the concatenated input and generates the response token-by-token (Seq2Seq), and a strong hand-engineered pipeline model with sub-models for entailment, classification, and generation (Pipeline). For the latter, Saeidi et al. (2018) show that these sub-models outperform neural models such as the entailment model by Parikh et al. (2016), and that the combined pipeline outperforms the attentive sequence-to-sequence model. In addition, we propose an extractive QA baseline based on BERT (BERTQA). Similar models achieved state-of-the-art results on a variety of QA tasks (Rajpurkar et al., 2016; Reddy et al., 2019). We refer readers to Section A.1 of the appendices for implementation details of BERTQA.

Table 1: Model performance on the blind, held-out test set of ShARC. The evaluation metrics are micro- and macro-averaged accuracy in classifying between the decisions yes, no, irrelevant, and inquire. In the event of an inquiry, the generated follow-up question is further evaluated using the BLEU score. In addition to official evaluation metrics, we also show a combined metric ("Comb."), which is the product of the macro-averaged accuracy and the BLEU4 score.

Model        Micro Acc.  Macro Acc.  BLEU1  BLEU4  Comb.
Seq2Seq          44.8        42.8     34.0    7.8    3.3
Pipeline         61.9        68.9     54.4   34.4   23.7
BERTQA           63.6        70.8     46.2   36.3   25.7
E3 (ours)        67.6        73.3     54.1   38.7   28.4

4.1 Experimental setup
We tokenize using revtok (https://github.com/jekbradbury/revtok) and part-of-speech tag (for the editor) using Stanford CoreNLP (Manning et al., 2014). We fine-tune the smaller, uncased pretrained BERT model by Devlin et al. (2019) (i.e. bert-base-uncased), using the implementation from https://github.com/huggingface/pytorch-pretrained-BERT. We optimize using ADAM (Kingma and Ba, 2015) with an initial learning rate of 5e-5 and a warm-up rate of 0.1. We regularize using Dropout (Srivastava et al., 2014) after the BERT encoder with a rate of 0.4.
To supervise rule extraction, we reconstruct full dialogue trees from the ShARC training set and extract all follow-up questions as well as bullet points from each rule text and its corresponding dialogue tree. We then match these extracted clauses to spans in the rule text, and consider these noisy matched spans as supervision for rule extraction. During inference, we use heuristic bullet point extraction (we extract spans that start with a "*" character and end with another "*" character or a new line) in conjunction with spans extracted by the rule extraction module. This results in minor performance improvements (~1% micro/macro acc.) over only relying on the rule extraction module. In cases where one rule fully covers another, we discard the covered shorter rule. Section A.2 details how clause matching is used to obtain noisy supervision for rule extraction.
We train the editor separately, as jointly training with a shared encoder worsens performance. The editor is trained by optimizing $\mathcal{L}_{\text{edit}}$, while the rest of the model is trained by optimizing $\mathcal{L}_{\text{dec}} + \lambda \mathcal{L}_{\text{re}}$. We use a rule extraction threshold of $\tau = 0.5$ and a rule extraction loss weight of $\lambda = 400$. We perform early stopping using the product of the macro-averaged accuracy and the BLEU4 score.
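Three of the training heuristics above are simple enough to sketch directly. The regex for the bullet-point heuristic and the function names are assumptions of this sketch, not the authors' code:

```python
import re


def bullet_spans(rule_text: str):
    """Heuristic spans that start at a '*' and end at the next '*' or newline."""
    return [m.group(1).strip() for m in re.finditer(r"\*([^*\n]+)", rule_text)]


def joint_loss(l_dec, l_re, lam: float = 400.0):
    """Joint objective for the non-editor modules: L_dec + lambda * L_re."""
    return l_dec + lam * l_re


def early_stopping_metric(macro_acc: float, bleu4: float) -> float:
    """Model selection metric: the 'Comb.' column of Tables 1 and 2."""
    return macro_acc * bleu4
```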
For the editor, we use fixed, pretrained embeddings from GloVe (Pennington et al., 2014), and use dropout after input attention with a rate of 0.4. Before editing retrieved rules, we remove prefix and suffix adpositions, auxiliary verbs, conjunctions, determiners, and punctuation. We find that doing so allows the editor to convert some extracted rules (e.g. "or sustain damage") into sensible questions (e.g. "did you sustain damage?").
4.2 Results
Our performance on the development and the blind, held-out test set of ShARC is shown in Table 1. Compared to previous results, E3 achieves a new state-of-the-art, obtaining best performance on micro- and macro-averaged decision classification accuracy and BLEU4 scores while maintaining similar BLEU1 scores. These results show that E3 both answers the user's original question more accurately and generates more coherent and relevant follow-up questions. In addition, Figure 4 shows that because E3 explicitly extracts implicit rules from the document, the model's predictions are explainable in the sense that the user can verify the correctness of the extracted rules and observe how the scenario and previous interactions ground to the extracted rules.

Figure 4: Predictions by E3. Extracted spans are underlined in the text. The three scores are the inquiry score $r_i$ (blue), history entailment score $h_i$ (red), and scenario entailment score $g_i$ (green) of the nearest extracted span. [The two panels show worked examples: (a) a question about automatically receiving the Additional State Pension, where the extracted rules are scored as entailed and the model concludes with no; (b) a question about VA-financed care for the child of a female Vietnam Veteran, where the model inquires about the extracted rule "a female Vietnam Veteran with a child who has a birth defect".]

Table 2: Ablation study of E3 on the development set of ShARC. The ablated variants of E3 include versions: without the editor; without the editor and entailment module; and without the editor, entailment module, and extraction module, which reduces to the BERT for question answering model by Devlin et al. (2019).

Model                             Micro Acc.  Macro Acc.  BLEU1  BLEU4  Comb.
E3                                    68.0        73.4     66.9   53.7   39.4
-edit                                 68.0        73.4     53.1   46.2   31.4
-edit, entail                         68.0        73.1     50.2   40.3   29.5
-edit, entail, extract (BERTQA)       63.4        70.6     47.4   37.4   23.7

4.3 Ablation study
Table 2 shows an ablation study of E3 on the development set of ShARC.
Retrieval outperforms word generation. BERTQA ("-edit, entail, extract"), which E3 reduces to after removing the editor, entailment, and extraction modules, presents a strong baseline that exceeds previous results on all metrics except for BLEU1.
This variant inquires about spans extracted from the text, which, while more relevant as indicated by the higher BLEU4 score, do not have the natural qualities of a question, hence the lower BLEU1. Nonetheless, the large gains of BERTQA over the attentive Seq2Seq model show that retrieval is a more promising technique for asking follow-up questions than word-by-word generation. Similar findings were reported for question answering by Yatskar (2019).
Extraction of document structure facilitates generalization. Adding explicit extraction of rules in the document ("-edit, entail") forces the model to interpret all rules in the document versus only focusing on extracting the next inquiry. This results in better performance in both decision classification and inquiry relevance compared to the variant that is not forced to interpret all rules.
Modeling entailment improves rule retrieval. The "-edit" model explicitly models whether an extracted rule is entailed by the user scenario and previous turns. Modeling entailment allows the model to better predict whether a rule is entailed, and thus more often inquire about rules that are not entailed. Figure 4a illustrates one such example in which both extracted rules have high entailment scores, and the model chooses to conclude the dialogue by answering no instead of making further inquiries. Adding entailment especially improves the BLEU4 score, as the inquiries made by the model are more relevant and appropriate.
Editing retrieved rules results in more fluid questions. While E3 without the editor is able to retrieve rules that are relevant, these spans are not fluent questions that can be presented to the user. The editor is able to edit the extracted rules into more fluid and coherent questions, which results in further gains, particularly in BLEU1.
4.4 Error analysis
In addition to ablation studies, we analyze errors E3 makes on the development set of ShARC.
Decision errors. Figure 5 shows the confusion matrix of decisions.

Figure 5: Confusion matrix of decision predictions on the development set of ShARC (rows: true label; columns: predicted label).

              yes    no   irrelevant  inquire
yes           530   147        0        127
no            117   541        0        108
irrelevant      0     0      133          5
inquire       107   113        2        340

We specifically examine examples in which E3 produces an incorrect decision. On the ShARC development set there are 726 such cases, which correspond to a 32.0% error rate. We manually analyze 100 such examples to identify common types of errors. Within these, in 23% of examples, the model attempts to answer the user's initial question without resolving a necessary rule despite successfully extracting the rule. In 19% of examples, the model identifies and inquires about all necessary rules but comes to the wrong conclusion. In 18% of examples, the model makes a redundant inquiry about a rule that is entailed. In 17% of examples, the rule text contains ambiguous rules. Figure 4b contains one such example in which the annotator identified the rule "a female Vietnam Veteran", while the model extracted an alternative longer rule "a female Vietnam Veteran with a child who has a birth defect". Finally, in 13% of examples, the model fails to extract some rule from the document. Other less common forms of errors include failures by the entailment module to perform numerical comparison, complex rule procedures that are difficult to deduce, and implications that require world knowledge.
These results suggest that improving the decision process after rule extraction is an important area for future work.
Inquiry quality. On 340 examples (15%) in the ShARC development set, E3 generates an inquiry when it is supposed to. We manually analyze 100 such examples to gauge the quality of generated inquiries. On 63% of examples, the model generates an inquiry that matches the ground truth. On 14% of examples, the model makes inquiries in a different order than the annotator. On 12% of examples, the inquiry refers to an incorrect subject (e.g. "are you born early" vs. "is your baby born early"). This usually results from editing an entity-less bullet point ("* born early"). On 6% of examples, the inquiry is lexically similar to the ground truth but has incorrect semantics (e.g. "do you need savings" vs. "is this information about your savings"). Again, this tends to result from editing short bullet points (e.g. "* savings"). These results indicate that when the model correctly chooses to inquire, it largely inquires about the correct rule. They also highlight a difficulty in evaluating CMR: there can be several correct orderings of inquiries for a document.
5 Conclusion
We proposed the Entailment-driven Extract and Edit network (E3), a conversational machine reading model that extracts implicit decision rules from text, computes whether each rule is entailed by the conversation history, inquires about rules that are not entailed, and answers the user's question. E3 achieved a new state-of-the-art result on the ShARC CMR dataset, outperforming existing systems as well as a new extractive QA baseline based on BERT. In addition to achieving strong performance, we showed that E3 provides a more explainable alternative to prior work, which does not model document structure.
Acknowledgments
This research was supported in part by the ARO (W911NF-16-1-0121) and the NSF (IIS-1252835, IIS-1562364). We thank Terra Blevins, Sewon Min, and our anonymous reviewers for helpful feedback.
References
Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL.
Gabor Angeli and Christopher D. Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In EMNLP.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In EMNLP.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In AAAI.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The second dialog state tracking challenge. In SIGDIAL.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
Tao Lei, Regina Barzilay, and Tommi Jaakkola.
2016. Rationalizing neural predictions. In EMNLP.
Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In EMNLP.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL.
B. Moulin and D. Rousseau. 1992. Automated knowledge acquisition from regulatory texts. IEEE Expert.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In ACL.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In EMNLP.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In ACL.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. TACL.
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL.
Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In EMNLP.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.
Pei-Hao Su, Milica Gasic, Nikola Mrkšić, Lina M. Rojas Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. In ACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP.
Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In EACL.
Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In SIGDIAL.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean.
2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
Mark Yatskar. 2019. A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. In NAACL.
Steve Young, Milica Gašić, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE.
Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive dialogue state tracker. In ACL.
A Appendices
A.1 BERTQA baseline
Our BERTQA baseline follows that proposed by Devlin et al. (2019) for the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016). Due to the differences in context between ShARC and SQuAD, we augment the input to the BERTQA model in a manner similar to Section 3.1. The distinction here is that we additionally add the decision types "yes", "no", and "irrelevant" as parts of the input such that the problem is fully solvable via span extraction. Similar to Section 3.1, let $U$ denote the BERT encoding of the length-$n$ input sequence. The BERTQA model predicts a start score $s$ and an end score $e$:

$s = \mathrm{softmax}(U W_s + b_s) \in \mathbb{R}^n$   (28)
$e = \mathrm{softmax}(U W_e + b_e) \in \mathbb{R}^n$   (29)

We take the answer as the span $(i, j)$ that gives the highest score $s_i e_j$ such that $j \geq i$. Because we augment the input with decision labels, the model can be fully supervised via extraction endpoints.
A.2 Creating noisy supervision for span extraction via span matching
The ShARC dataset is constructed from full dialogue trees in which annotators exhaustively annotate yes/no branches of follow-up questions. Consequently, each rule required to answer the initial user question forms a follow-up question in the full dialogue tree. In order to identify rule spans in the document, we first reconstruct the dialogue trees for all training examples in ShARC. For each document, we trim each follow-up question in its corresponding dialogue tree by removing punctuation and stop words. For each trimmed question, we find the shortest best-match span in the document that has the least edit distance from the trimmed question, which we take as the corresponding rule span. In addition, we extract similarly trimmed bullet points from the document as rule spans. Finally, we deduplicate the rule spans by removing those that are fully covered by a longer rule span. The resulting set of rule spans is used as noisy supervision for the rule extraction module. This preprocessing code is included with our code release.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "lp9U0OzpBjV",
"year": null,
"venue": "CoRR 2019",
"pdf_link": "http://arxiv.org/pdf/1906.05373v2",
"forum_link": "https://openreview.net/forum?id=lp9U0OzpBjV",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E3: Entailment-driven Extracting and Editing for Conversational Machine Reading",
"authors": [
"Victor Zhong",
"Luke Zettlemoyer"
],
"abstract": "Conversational machine reading systems help users answer high-level questions (e.g. determine if they qualify for particular government benefits) when they do not know the exact rules by which the determination is made(e.g. whether they need certain income levels or veteran status). The key challenge is that these rules are only provided in the form of a procedural text (e.g. guidelines from government website) which the system must read to figure out what to ask the user. We present a new conversational machine reading model that jointly extracts a set of decision rules from the procedural text while reasoning about which are entailed by the conversational history and which still need to be edited to create questions for the user. On the recently introduced ShARC conversational machine reading dataset, our Entailment-driven Extract and Edit network (E3) achieves a new state-of-the-art, outperforming existing systems as well as a new BERT-based baseline. In addition, by explicitly highlighting which information still needs to be gathered, E3 provides a more explainable alternative to prior work. We release source code for our models and experiments at https://github.com/vzhong/e3.",
"keywords": [],
"raw_extracted_content": "E3: Entailment-driven Extracting and Editing for Conversational\nMachine Reading\nVictor Zhong\nUniversity of Washington\[email protected] Zettlemoyer\nUniversity of Washington\[email protected]\nAbstract\nConversational machine reading systems help\nusers answer high-level questions (e.g. deter-\nmine if they qualify for particular govern-\nment benefits) when they do not know the ex-\nact rules by which the determination is made\n(e.g. whether they need certain income levels\nor veteran status). The key challenge is that\nthese rules are only provided in the form of a\nprocedural text (e.g. guidelines from govern-\nment website) which the system must read to\nfigure out what to ask the user. We present\na new conversational machine reading model\nthat jointly extracts a set of decision rules\nfrom the procedural text while reasoning about\nwhich are entailed by the conversational his-\ntory and which still need to be edited to create\nquestions for the user. On the recently intro-\nduced ShARC conversational machine read-\ning dataset, our Entailment-driven Extract and\nEdit network ( E3) achieves a new state-of-the-\nart, outperforming existing systems as well as\na new BERT-based baseline. In addition, by\nexplicitly highlighting which information still\nneeds to be gathered, E3provides a more ex-\nplainable alternative to prior work. We release\nsource code for our models and experiments\nathttps://github.com/vzhong/e3 .\n1 Introduction\nIn conversational machine reading (CMR), a sys-\ntem must help users answer high-level questions\nby participating in an information gathering dia-\nlog. For example, in Figure 1 the system asks a\nseries of questions to help the user decide if they\nneed to pay tax on their pension. A key chal-\nlenge in CMR is that the rules by which the deci-\nsion is made are only provided in natural language\n(e.g. the rule text in Figure 1). At every step of the\nconversation, the system must read the rules text\nand reason about what has already been said in to\nformulate the best next question.\n# 4. Tax when you live abroadIf you’re not a UK resident, you don’t usually pay UK tax on your pension. But you might have to pay tax in the country you live in. There are a few exceptions - for example, UK civil service pensions will always be taxed in the UK.I get my money from a business I have. We get our funding from a private bank.Rule textUser scenario\nDo I need to pay UK tax on my pension?Initial user questionAre you a UK resident?NoAre you receiving UK civil service pensions?Previous questionPrevious user responseModel outputFigure 1: A conversational machine reading example.\nThe model is given a rule text document, which con-\ntains a recipe of implicit rules (underlined) for answer-\ning the initial user question. At the start of the conver-\nsation, the user presents a scenario describing their sit-\nuation. During each turn, the model can ask the user\na follow-up question to inquire about missing infor-\nmation, or conclude the dialogue by answering yes,\nno, orirrelevant .irrelevant means that the\nrule text cannot answer the question. We show previ-\nous turns as well as the corresponding inquired rules in\ngreen. The scenario is shown in red and in this case\ndoes not correspond to a rule. 
The model inquiry for\nthis turn and its corresponding rule are shown in blue.\nWe present a new model that jointly reasons\nabout what rules are present in the text and which\nare already entailed by the conversational history\nto improve question generation. More specifically,\nwe propose the Entailment-driven Extract and Edit\nnetwork ( E3).E3learns to extract implicit rules in\nthe document, identify which rules are entailed by\nthe conversation history, and edit rules that are not\nentailed to create follow-up questions to the user.\nDuring each turn, E3parses the rule text to extract\nspans in the text that correspond to implicit rules\n(underlined in Figure 1). Next, the model scores\nthe degree to which each extracted rule is entailedarXiv:1906.05373v2 [cs.CL] 13 Feb 2020\nby the initial user scenario (red in Figure 1) and by\nprevious interactions with the user (green in Fig-\nure 1). Finally, the model decides on a response by\ndirectly answering the question ( yes/no), stating\nthat the rule text does not contain sufficient infor-\nmation to answer the question ( irrelevant ),\nor asking a follow-up question about an extracted\nrule that is not entailed but needed to determine the\nanswer (blue in Figure 1). In the case of inquiry,\nthe model edits an extracted rule into a follow-up\nquestion. To our knowledge, E3is the first extract-\nand-edit method for conversational dialogue, as\nwell as the first method that jointly infers implicit\nrules in text, estimates entailment, inquires about\nmissing information, and answers the question.\nWe compare E3to the previous-best systems\nas well as a new, strong, BERT-based extrac-\ntive question answering model (BERTQA) on the\nrecently proposed ShARC CMR dataset (Saeidi\net al., 2018). Our results show that E3is more\naccurate in its decisions and generates more rele-\nvant inquiries. In particular, E3outperforms the\nprevious-best model by 5.7% in micro-averaged\ndecision accuracy and 4.3 in inquiry BLEU4.\nSimilarly, E3outperforms the BERTQA base-\nline by 4.0% micro-averaged decision accuracy\nand 2.4 in inquiry BLEU4. In addition to out-\nperforming previous methods, E3is explainable\nin the sense that one can visualize what rules the\nmodel extracted and how previous interactions and\ninquiries ground to the extracted rules. We re-\nlease source code for E3and the BERTQA model\nathttps://github.com/vzhong/e3 .\n2 Related Work\nDialogue tasks. Recently, there has been grow-\ning interest in question answering (QA) in a di-\nalogue setting (Choi et al., 2018; Reddy et al.,\n2019). CMR (Saeidi et al., 2018) differs from\ndialogue QA in the domain covered (regulatory\ntext vs Wikipedia). A consequence of this is that\nCMR requires the interpretation of complex de-\ncision rules in order to answer high-level ques-\ntions, whereas dialogue QA typically contains\nquestions whose answers are directly extractable\nfrom the text. In addition, CMR requires the for-\nmulation of free-form follow-up questions in or-\nder to identify whether the user satisfies decision\nrules, whereas dialogue QA does not. There has\nalso been significant work on task-oriented dia-\nlogue, where the system must inquire about miss-ing information in order to help the user achieve a\ngoal (Williams et al., 2013; Henderson et al., 2014;\nMrkˇsi´c et al., 2017; Young et al., 2013). However,\nthese tasks are typically constrained to a fixed on-\ntology (e.g. 
restaurant reservation), instead of a latent ontology specified via natural language documents.
Dialogue systems. One traditional approach for designing dialogue systems divides the task into language understanding/state-tracking (Mrkšić et al., 2017; Zhong et al., 2018), reasoning/policy learning (Su et al., 2016), and response generation (Wen et al., 2015). The models for each of these subtasks are then combined to form a full dialogue system (Young et al., 2013; Wen et al., 2017). The previous best system for ShARC (Saeidi et al., 2018) similarly breaks the CMR task into subtasks and combines hand-designed sub-models for decision classification, entailment, and follow-up generation. In contrast, the core reasoning (e.g. non-editor) components of E3 are jointly trained, and do not require complex hand-designed features.
Extracting latent rules from text. There is a long history of work on extracting knowledge automatically from text (Moulin and Rousseau, 1992). Relation extraction typically assumes that there is a fixed ontology onto which extracted knowledge falls (Mintz et al., 2009; Riedel et al., 2013). Other works forgo the ontology by using, for example, natural language (Angeli and Manning, 2014; Angeli et al., 2015). These extractions from text are subsequently used for inference over a knowledge base (Bordes et al., 2013; Dettmers et al., 2018; Lin et al., 2018) and for rationalizing model predictions (Lei et al., 2016). Our work is more similar to the latter type, in which the extracted knowledge is not confined to a fixed ontology and instead differs on a document basis. In addition, the rules extracted by our model are used for inference over natural language documents. Finally, these rules provide rationalization for the model's decision making, in the sense that the user can visualize what rules the model extracted and which rules are entailed by previous turns.
3 Entailment-driven Extract and Edit network
In conversational machine reading, a system reads a document that contains a set of implicit decision
3 Entailment-driven Extract and Edit network

[Figure: model architecture. The question x_Q, the rule text x_D, the scenario x_S, and the follow-up QA turns x_{H,1}, x_{H,2}, ..., x_{H,n_H} feed a BERT transformer encoder; a rule extraction layer produces rules r_1, r_2, ..., r_{n_R}; an input self-attention layer and a rule self-attention layer feed a decision classifier with scores z_yes, z_no, z_irrelevant and a rule scorer with per-rule scores A_1, ..., A_{n_R}.]

In conversational machine reading, a system reads a document that contains a set of implicit decision
sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit><latexit sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit><latexit sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit>Extraction ModuleEntailment scorergi\n<latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit 
sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit>hi\n<latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit>Entailment ModuleDecision Module\nzinquire\n<latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit 
sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit>zinquire\n<latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit 
sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit>Figure 2: The Entailment-driven Extract and Edit network.\nrules. The user presents a scenario describing their\nsituation, and asks the system an underspecified\nquestion. In order to answer the user’s question,\nthe system must ask the user a series of follow-up\nquestions to determine whether the user satisfies\nthe set of decision rules.\nThe key challenges in CMR are to identify im-\nplicit rules present in the document, understand\nwhich rules are necessary to answer the ques-\ntion, and inquire about necessary rules that are\nnot entailed by the conversation history by ask-\ning follow-up questions. The three core mod-\nules of E3, the extraction, entailment, and de-\ncision modules, combine to address these chal-\nlenges. Figure 2 illustrates the components of E3.\nFor ease of exposition, we describe E3for a sin-\ngle turn in the conversation. To make the refer-\nences concrete in the following sections, we use as\nan example the inputs and outputs from Figure 1.\nThis example describes a turn in a conversation in\nwhich the system helps the user determine whether\nthey need to pay UK taxes on their pension.\n3.1 Extraction module\nThe extraction module extracts spans from the\ndocument that correspond to latent rules. Let\nxD,xQ,xS,xH;idenote words in the rule text,\nquestion, scenario, and the inquiry and user re-\nsponse during the ith previous turn of the dia-\nlogue after Nturns have passed. We concate-\nnate these inputs into a single sequence x=\n[xQ;xD;xS;xH;1;\u0001\u0001\u0001xH;N]joined by sentinel to-\nkens that mark the boundaries of each input. To\nencode the input for the extraction module, we use\nBERT, a transformer-based model (Vaswani et al.,\n2017) that achieves consistent gains on a variety\nof NLP tasks (Devlin et al., 2019). We encodexusing the BERT encoder, which first converts\nwords into word piece tokens (Wu et al., 2016),\nthen embeds these tokens along with their posi-\ntional embeddings and segmentation embeddings.\nThese embeddings are subsequently encoded via a\ntransformer network, which allows for inter-token\nattention at each layer. Let nxbe the number\nof tokens in the concatenated input xanddUbe\nthe output dimension of the BERT encoder. For\nbrevity, we denote the output of the BERT encoder\nasU= BERT(x)2Rnx\u0002dUand refer readers\nto Devlin et al. (2019) for detailed architecture.\nIn order to extract the implicit decision rules\nfrom the document, we compute a start score \u000bi\nand an end score \fifor eachith token as\n\u000bi=\u001b(W\u000bUi+b\u000b)2R (1)\n\fi=\u001b(W\fUi+b\f)2R (2)\nwhereW\u000b;W\f2RdU,b\u000b;b\f2R, and\u001bis the\nsigmoid function.\nFor each position siwhere\u000biis larger than\nsome threshold \u001c, we find the closest proceeding\npositionei\u0015siwhere\fei>\u001c. Each pair (si;ei)\nthen forms an extracted span corresponding to a\nruleRiexpressed in the rule text. 
For the $i$th rule, we use self-attention to build a representation $A_i$ over the span $(s_i, e_i)$:

$\gamma_k = W_\gamma U_k + b_\gamma \in \mathbb{R}, \quad s_i \leq k \leq e_i$  (3)
$\bar{\gamma}_k = \mathrm{softmax}(\gamma)_k \in \mathbb{R}, \quad s_i \leq k \leq e_i$  (4)
$A_i = \sum_{k=s_i}^{e_i} \bar{\gamma}_k U_k \in \mathbb{R}^{d_U}$  (5)

where $W_\gamma \in \mathbb{R}^{d_U}$ and $b_\gamma \in \mathbb{R}$. Here, $\gamma_k$ and $\bar{\gamma}_k$ are respectively the unnormalized and normalized scores of the self-attention layer.

Let $n_R$ denote the number of spans in the rule text, each of which corresponds to a ground truth rule. The rule extraction loss is computed as the sum of the binary cross entropy losses for each rule $R_i$:

$L_{\mathrm{re}} = \sum_i^{n_R} \left( L_{\mathrm{start},i} + L_{\mathrm{end},i} \right)$  (6)

Let $n_D$ denote the number of tokens in the rule text, $s_i$, $e_i$ the ground truth start and end positions for the $i$th rule, and $\mathbb{1}_f$ the indicator function that returns 1 if and only if the condition $f$ holds. Recall from Eq (1) that $\alpha_j$ and $\beta_j$ denote the probabilities that token $j$ is the start and the end of a rule. The start and end binary cross entropy losses for the $i$th rule are computed as

$L_{\mathrm{start},i} = -\sum_j^{n_D} \left[ \mathbb{1}_{j=s_i} \log \alpha_j + \mathbb{1}_{j \neq s_i} \log (1 - \alpha_j) \right]$
$L_{\mathrm{end},i} = -\sum_j^{n_D} \left[ \mathbb{1}_{j=e_i} \log \beta_j + \mathbb{1}_{j \neq e_i} \log (1 - \beta_j) \right]$

3.2 Entailment module

Given the extracted rules $R = \{R_1, \cdots, R_{n_R}\}$, the entailment module estimates whether each rule is entailed by the conversation history, so that the model can subsequently inquire about rules that are not entailed. For the example in Figure 1, the rule "UK resident" is entailed by the previous inquiry "Are you a UK resident". In contrast, the rule "UK civil service pensions" is not entailed by either the scenario or the conversation history, so the model needs to inquire about it. In this particular case the scenario does not entail any rule.

For each extracted rule, we compute a score that indicates the extent to which this particular rule has already been discussed in the initial scenario $S$ and in the previous turns $Q$. In particular, let $N(R_i, S)$ denote the number of tokens shared by $R_i$ and $S$, $N(R_i)$ the number of tokens in $R_i$, and $N(S)$ the number of tokens in $S$. We compute the scenario entailment score $g_i$ as

$\mathrm{pr}(R_i, S) = N(R_i, S) / N(R_i)$  (7)
$\mathrm{re}(R_i, S) = N(R_i, S) / N(S)$  (8)
$g_i = \mathrm{f1}(R_i, S) = \dfrac{2\,\mathrm{pr}(R_i, S)\,\mathrm{re}(R_i, S)}{\mathrm{pr}(R_i, S) + \mathrm{re}(R_i, S)}$  (9)

where pr, re, and f1 respectively denote the precision, recall, and F1 scores. We compute a similar score to represent the extent to which the rule $R_i$ has been discussed in previous inquiries. Let $Q_k$ denote the tokens in the $k$th previous inquiry. We compute the history entailment score $h_i$ between the extracted rule $R_i$ and all $n_H$ previous inquiries in the conversation history as

$h_i = \max_{k=1,\cdots,n_H} \mathrm{f1}(R_i, Q_k)$  (10)

The final representation of the $i$th rule, $\bar{A}_i$, is then the concatenation of the span self-attention and the entailment scores:

$\bar{A}_i = [A_i; g_i; h_i] \in \mathbb{R}^{d_U + 2}$  (11)

where $[x; y]$ denotes the concatenation of $x$ and $y$. We also experiment with embedding and encoding similarity based approaches to compute entailment, but find that this F1 approach performs the best. Because the encoder utilizes cross attention between the different components of the input, the representations $U$ and $A_i$ are able to capture notions of entailment. However, we find that explicitly scoring entailment via the entailment module further discourages the model from making redundant inquiries.
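Since the entailment scores are plain token-overlap statistics, Eqs (7)-(10) can be sketched without any learned components. The sketch below uses set overlap for the shared-token count $N(R_i, S)$, which the text leaves unspecified, and the toy inputs are ours.

def token_f1(rule_tokens, other_tokens):
    """Token-overlap F1 between an extracted rule and another text
    (the scenario or a previous inquiry), per Eqs (7)-(9)."""
    shared = len(set(rule_tokens) & set(other_tokens))  # N(R_i, S)
    if shared == 0:
        return 0.0
    pr = shared / len(rule_tokens)    # precision, Eq (7)
    re = shared / len(other_tokens)   # recall, Eq (8)
    return 2 * pr * re / (pr + re)    # F1, Eq (9)

def entailment_scores(rule, scenario, inquiries):
    g = token_f1(rule, scenario)                                  # Eq (9)
    h = max((token_f1(rule, q) for q in inquiries), default=0.0)  # Eq (10)
    return g, h

rule = "uk civil service pensions".split()
g, h = entailment_scores(
    rule,
    "i worked abroad for ten years".split(),
    ["are you a uk resident ?".split()],
)
print(round(g, 2), round(h, 2))  # 0.0 0.2: weakly covered by the inquiry only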
3.3 Decision module

Given the extracted rules $R$ and the entailment-enriched representations $\bar{A}_i$ for each rule, the decision module decides on a response to the user. These include answering yes or no to the user's original question, determining that the rule text is irrelevant to the question, or inquiring about a rule that is not entailed but required to answer the question. For the example in Figure 1, the rule "UK civil service pensions" is not entailed, hence the correct decision is to ask a follow-up question about whether the user receives this pension.

We start by computing a summary $C$ of the input using self-attention:

$\phi_k = W_\phi U_k + b_\phi \in \mathbb{R}$  (12)
$\bar{\phi}_k = \mathrm{softmax}(\phi)_k \in \mathbb{R}$  (13)
$C = \sum_k \bar{\phi}_k U_k \in \mathbb{R}^{d_U}$  (14)

where $W_\phi \in \mathbb{R}^{d_U}$, $b_\phi \in \mathbb{R}$, and $\phi$, $\bar{\phi}$ are respectively the unnormalized and normalized self-attention weights. Next, we score the choices yes, no, irrelevant, and inquire:

$z = W_z C + b_z \in \mathbb{R}^4$  (15)

where $z$ is a vector containing a class score for each of the yes, no, irrelevant, and inquire decisions.

For inquiries, we compute an inquiry score $r_i$ for each extracted rule $R_i$:

$r_i = \bar{W}_z \bar{A}_i + \bar{b}_z \in \mathbb{R}$  (16)

where $\bar{W}_z \in \mathbb{R}^{d_U + 2}$ and $\bar{b}_z \in \mathbb{R}$. Let $k$ indicate the correct decision, and $i$ indicate the correct inquiry, if the model is supposed to make an inquiry. The decision loss is

$L_{\mathrm{dec}} = -\log \mathrm{softmax}(z)_k - \mathbb{1}_{k=\mathrm{inquire}} \log \mathrm{softmax}(r)_i$  (17)

During inference, the model first determines the decision $d = \arg\max_k z_k$. If the decision $d$ is inquire, the model asks a follow-up question about the $i$th rule such that $i = \arg\max_j r_j$. Otherwise, the model concludes the dialogue with $d$.
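Eqs (12)-(16) amount to three linear scorers on top of the encoder output. The following is a minimal PyTorch sketch under that reading; the class and attribute names are ours, and the random tensors stand in for actual BERT encodings and rule representations.

import torch
import torch.nn as nn

class DecisionModule(nn.Module):
    """Sketch of Eqs (12)-(16): a self-attentive input summary,
    a 4-way decision scorer, and a per-rule inquiry scorer."""
    def __init__(self, d_u):
        super().__init__()
        self.summary_scorer = nn.Linear(d_u, 1)      # phi_k, Eq (12)
        self.decision_scorer = nn.Linear(d_u, 4)     # z, Eq (15)
        self.inquiry_scorer = nn.Linear(d_u + 2, 1)  # r_i, Eq (16)

    def forward(self, U, A_bar):
        # U: (n_x, d_u) token encodings; A_bar: (n_R, d_u + 2)
        # entailment-enriched rule representations from Eq (11).
        phi = torch.softmax(self.summary_scorer(U).squeeze(-1), dim=0)  # Eq (13)
        C = phi @ U                                  # input summary, Eq (14)
        z = self.decision_scorer(C)                  # yes / no / irrelevant / inquire
        r = self.inquiry_scorer(A_bar).squeeze(-1)   # one score per rule
        return z, r

module = DecisionModule(d_u=768)
z, r = module(torch.randn(30, 768), torch.randn(3, 770))
if z.argmax().item() == 3:       # treating index 3 as the inquire class
    print("inquire about rule", r.argmax().item())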
Rephrasing rule into question via editor. In the event that the model chooses to make an inquiry about an extracted rule $R_i$, $R_i$ is given to a subsequent editor to rephrase into a follow-up question. For the example in Figure 1, the editor edits the span "UK civil service pensions" into the follow-up question "Are you receiving UK civil service pensions?" Figure 3 illustrates the editor.

Figure 3: The editor of E3.

The editor takes as input $x_{\mathrm{edit}} = [R_i; x_D]$, the concatenation of the extracted rule to rephrase, $R_i$, and the rule text $x_D$. As before, we encode using a BERT encoder to obtain $U_{\mathrm{edit}} = \mathrm{BERT}(x_{\mathrm{edit}})$. The encoder is followed by two decoders that respectively generate the pre-span edit $R_{i,\mathrm{pre}}$ and the post-span edit $R_{i,\mathrm{post}}$. For the example in Figure 1, given the span "UK civil service pensions", the pre-span and post-span edits that form the question "Are you receiving UK civil service pensions?" are respectively "Are you receiving" and "?".

To perform each edit, we employ an attentive decoder (Bahdanau et al., 2015) with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997). Let $h_t$ denote the decoder state at time $t$. We compute attention $a_t$ over the input:

$\zeta_k = U_{\mathrm{edit},k}\, h_{t-1} \in \mathbb{R}$  (18)
$\bar{\zeta}_k = \mathrm{softmax}(\zeta)_k \in \mathbb{R}$  (19)
$a_t = \sum_k \bar{\zeta}_k U_{\mathrm{edit},k} \in \mathbb{R}^{d_U}$  (20)

Let $V \in \mathbb{R}^{n_V \times d_V}$ denote the embedding matrix corresponding to the $n_V$ tokens in the vocabulary.
sha1_base64=\"at/BQF41yKDPfgtKsdfFt0g2d7w=\">AAAB6nicbVDLSgNBEOyNrxhfUY9eBoPgKexKIB4DXjzGRx6QLGF20psMmZ1dZmaFsOQTvHhQxKtf5M2/cZLsQRMLGoqqbrq7gkRwbVz32ylsbG5t7xR3S3v7B4dH5eOTto5TxbDFYhGrbkA1Ci6xZbgR2E0U0igQ2AkmN3O/84RK81g+mmmCfkRHkoecUWOlh/sBH5QrbtVdgKwTLycVyNEclL/6w5ilEUrDBNW657mJ8TOqDGcCZ6V+qjGhbEJH2LNU0gi1ny1OnZELqwxJGCtb0pCF+nsio5HW0yiwnRE1Y73qzcX/vF5qwms/4zJJDUq2XBSmgpiYzP8mQ66QGTG1hDLF7a2EjamizNh0SjYEb/XlddK+qnpu1burVRq1PI4inME5XIIHdWjALTShBQxG8Ayv8OYI58V5dz6WrQUnnzmFP3A+fwAido2k</latexit><latexit sha1_base64=\"at/BQF41yKDPfgtKsdfFt0g2d7w=\">AAAB6nicbVDLSgNBEOyNrxhfUY9eBoPgKexKIB4DXjzGRx6QLGF20psMmZ1dZmaFsOQTvHhQxKtf5M2/cZLsQRMLGoqqbrq7gkRwbVz32ylsbG5t7xR3S3v7B4dH5eOTto5TxbDFYhGrbkA1Ci6xZbgR2E0U0igQ2AkmN3O/84RK81g+mmmCfkRHkoecUWOlh/sBH5QrbtVdgKwTLycVyNEclL/6w5ilEUrDBNW657mJ8TOqDGcCZ6V+qjGhbEJH2LNU0gi1ny1OnZELqwxJGCtb0pCF+nsio5HW0yiwnRE1Y73qzcX/vF5qwms/4zJJDUq2XBSmgpiYzP8mQ66QGTG1hDLF7a2EjamizNh0SjYEb/XlddK+qnpu1burVRq1PI4inME5XIIHdWjALTShBQxG8Ayv8OYI58V5dz6WrQUnnzmFP3A+fwAido2k</latexit>Rule text xD\n<latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit>xD\n<latexit 
sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit><latexit sha1_base64=\"I1M3fGSWO3kv4+L5LyBLnhP0+WU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCHjxWtB/QhrLZTtqlm03Y3Ygl9Cd48aCIV3+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj65nffkSleSwfzCRBP6JDyUPOqLHS/VP/pl+uuFV3DrJKvJxUIEejX/7qDWKWRigNE1Trrucmxs+oMpwJnJZ6qcaEsjEdYtdSSSPUfjY/dUrOrDIgYaxsSUPm6u+JjEZaT6LAdkbUjPSyNxP/87qpCa/8jMskNSjZYlGYCmJiMvubDLhCZsTEEsoUt7cSNqKKMmPTKdkQvOWXV0nrouq5Ve+uVqnX8jiKcAKncA4eXEIdbqEBTWAwhGd4hTdHOC/Ou/OxaC04+cwx/IHz+QMkRo2l</latexit>BERT Transformer encoderPre-span attentive decoderPre-span edit Ri,pre\n<latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit 
sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit>Ri,pre\n<latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit><latexit sha1_base64=\"Bz83/3D2qKkrNUWUrrgo/f5oyiQ=\">AAAB/HicbVBNS8NAFHypX7V+RXv0slgED1ISKeix4MVjFVsLbQib7aZdutmE3Y0QQv0rXjwo4tUf4s1/46bNQVsHFoZ5b3izEyScKe0431ZlbX1jc6u6XdvZ3ds/sA+PeipOJaFdEvNY9gOsKGeCdjXTnPYTSXEUcPoQTK+L+cMjlYrF4l5nCfUiPBYsZARrI/l2/c7P2TkaRlhPZJQb72zm2w2n6cyBVolbkgaU6Pj213AUkzSiQhOOlRq4TqK9HEvNCKez2jBVNMFkisd0YKjAEVVePg8/Q6dGGaEwluYJjebqb0eOI6WyKDCbRUi1PCvE/2aDVIdXXs5EkmoqyOJQmHKkY1Q0gUZMUqJ5ZggmkpmsiEywxESbvmqmBHf5y6ukd9F0naZ722q0W2UdVTiGEzgDFy6hDTfQgS4QyOAZXuHNerJerHfrY7FasUpPHf7A+vwB3eeU3Q==</latexit>xedit\n<latexit 
sha1_base64=\"rLHYFr2yGjRP7ghXVdRwZNX4urM=\">AAAB+HicbVDLSsNAFL2pr1ofjbp0M1gEVyWRgi4LblxWsA9oQ5hMJu3QySTMTMQa+iVuXCji1k9x5984abPQ1gMDh3Pu5Z45QcqZ0o7zbVU2Nre2d6q7tb39g8O6fXTcU0kmCe2ShCdyEGBFORO0q5nmdJBKiuOA034wvSn8/gOViiXiXs9S6sV4LFjECNZG8u36oz+KsZ7IOKch03PfbjhNZwG0TtySNKBEx7e/RmFCspgKTThWaug6qfZyLDUjnM5ro0zRFJMpHtOhoQLHVHn5IvgcnRslRFEizRMaLdTfGzmOlZrFgZksQqpVrxD/84aZjq69nIk001SQ5aEo40gnqGgBhUxSovnMEEwkM1kRmWCJiTZd1UwJ7uqX10nvsuk6Tfeu1Wi3yjqqcApncAEuXEEbbqEDXSCQwTO8wpv1ZL1Y79bHcrRilTsn8AfW5w91u5ON</latexit><latexit sha1_base64=\"rLHYFr2yGjRP7ghXVdRwZNX4urM=\">AAAB+HicbVDLSsNAFL2pr1ofjbp0M1gEVyWRgi4LblxWsA9oQ5hMJu3QySTMTMQa+iVuXCji1k9x5984abPQ1gMDh3Pu5Z45QcqZ0o7zbVU2Nre2d6q7tb39g8O6fXTcU0kmCe2ShCdyEGBFORO0q5nmdJBKiuOA034wvSn8/gOViiXiXs9S6sV4LFjECNZG8u36oz+KsZ7IOKch03PfbjhNZwG0TtySNKBEx7e/RmFCspgKTThWaug6qfZyLDUjnM5ro0zRFJMpHtOhoQLHVHn5IvgcnRslRFEizRMaLdTfGzmOlZrFgZksQqpVrxD/84aZjq69nIk001SQ5aEo40gnqGgBhUxSovnMEEwkM1kRmWCJiTZd1UwJ7uqX10nvsuk6Tfeu1Wi3yjqqcApncAEuXEEbbqEDXSCQwTO8wpv1ZL1Y79bHcrRilTsn8AfW5w91u5ON</latexit><latexit sha1_base64=\"rLHYFr2yGjRP7ghXVdRwZNX4urM=\">AAAB+HicbVDLSsNAFL2pr1ofjbp0M1gEVyWRgi4LblxWsA9oQ5hMJu3QySTMTMQa+iVuXCji1k9x5984abPQ1gMDh3Pu5Z45QcqZ0o7zbVU2Nre2d6q7tb39g8O6fXTcU0kmCe2ShCdyEGBFORO0q5nmdJBKiuOA034wvSn8/gOViiXiXs9S6sV4LFjECNZG8u36oz+KsZ7IOKch03PfbjhNZwG0TtySNKBEx7e/RmFCspgKTThWaug6qfZyLDUjnM5ro0zRFJMpHtOhoQLHVHn5IvgcnRslRFEizRMaLdTfGzmOlZrFgZksQqpVrxD/84aZjq69nIk001SQ5aEo40gnqGgBhUxSovnMEEwkM1kRmWCJiTZd1UwJ7uqX10nvsuk6Tfeu1Wi3yjqqcApncAEuXEEbbqEDXSCQwTO8wpv1ZL1Y79bHcrRilTsn8AfW5w91u5ON</latexit><latexit sha1_base64=\"rLHYFr2yGjRP7ghXVdRwZNX4urM=\">AAAB+HicbVDLSsNAFL2pr1ofjbp0M1gEVyWRgi4LblxWsA9oQ5hMJu3QySTMTMQa+iVuXCji1k9x5984abPQ1gMDh3Pu5Z45QcqZ0o7zbVU2Nre2d6q7tb39g8O6fXTcU0kmCe2ShCdyEGBFORO0q5nmdJBKiuOA034wvSn8/gOViiXiXs9S6sV4LFjECNZG8u36oz+KsZ7IOKch03PfbjhNZwG0TtySNKBEx7e/RmFCspgKTThWaug6qfZyLDUjnM5ro0zRFJMpHtOhoQLHVHn5IvgcnRslRFEizRMaLdTfGzmOlZrFgZksQqpVrxD/84aZjq69nIk001SQ5aEo40gnqGgBhUxSovnMEEwkM1kRmWCJiTZd1UwJ7uqX10nvsuk6Tfeu1Wi3yjqqcApncAEuXEEbbqEDXSCQwTO8wpv1ZL1Y79bHcrRilTsn8AfW5w91u5ON</latexit>Post-span attentive decoderPost-span edit Ri,post\n<latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit 
sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit>Ri,post\n<latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit><latexit sha1_base64=\"0SaTPdbqamEmTDTmYKx4bmgeaa0=\">AAAB/XicbVDLSgMxFM3UV62v8bFzEyyCCykzUtBlwY3LKvYB7TBk0rQNzWNIMkIdBn/FjQtF3Pof7vwbM+0stPVA4HDOvdyTE8WMauN5305pZXVtfaO8Wdna3tndc/cP2lomCpMWlkyqboQ0YVSQlqGGkW6sCOIRI51ocp37nQeiNJXi3kxjEnA0EnRIMTJWCt2juzCl57DPkRkrnsZSmywL3apX82aAy8QvSBUUaIbuV38gccKJMJghrXu+F5sgRcpQzEhW6SeaxAhP0Ij0LBWIEx2ks/QZPLXKAA6lsk8YOFN/b6SIaz3lkZ3MU+pFLxf/83qJGV4FKRVxYojA80PDhEEjYV4FHFBFsGFTSxBW1GaFeIwUwsYWVrEl+ItfXibti5rv1fzberVRL+oog2NwAs6ADy5BA9yAJmgBDB7BM3gFb86T8+K8Ox/z0ZJT7ByCP3A+fwDLJ5Vm</latexit>Uedit\n<latexit 
sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit>Figure 3: The editor of E3.\nTo generate the tth tokenwt, we use weight tying\nbetween the output layer and the embedding ma-\ntrix (Press and Wolf, 2017).\nvt= embed( V;wt\u00001) (21)\nht= LSTM ([ vt;at];ht\u00001)2RdU(22)\not=Wo[ht;at] +bo2RdV (23)\np(wt) = softmax( Vot)2RnV (24)\nwt= argmaxkp(wt)k (25)\nWe use a separate attentive decoder to gener-\nate the pre-span edit Ri;preand the post-span edit\nRi;post. The decoders share the embedding matrix\nand BERT encoder but do not share other parame-\nters. The output of the editor is the concatenation\nof tokens [Ri;pre;Ri;Ri;post].\nThe editing loss consists of the sequential cross\nentropy losses from generating the pre-span edit\nand the post-span edit. Let npredenote the number\nof tokens and ^wt;prethetth tokens in the ground\ntruth pre-span edit. The pre-span loss is\nLpre=\u0000npreX\ntlogp( ^wt;pre) (26)\nThe editing loss is then the sum of the pre-span\nand post-span losses, the latter of which is ob-\ntained in a manner similar to Eq (26).\nLedit=Lpre+Lpost (27)\n4 Experiment\nWe train and evaluate the Entailment-driven Ex-\ntract and Edit network on the ShARC CMR\ndataset. In particular, we compare our method\nto three other models. Two of these models\nare proposed by Saeidi et al. (2018). 
4 Experiment

We train and evaluate the Entailment-driven Extract and Edit network on the ShARC CMR dataset. In particular, we compare our method to three other models. Two of these models are proposed by Saeidi et al. (2018). They are an attentive sequence-to-sequence model that attends to the concatenated input and generates the response token by token (Seq2Seq), and a strong hand-engineered pipeline model with sub-models for entailment, classification, and generation (Pipeline). For the latter, Saeidi et al. (2018) show that these sub-models outperform neural models such as the entailment model by Parikh et al. (2016), and that the combined pipeline outperforms the attentive sequence-to-sequence model. In addition, we propose an extractive QA baseline based on BERT (BERTQA). Similar models achieved state-of-the-art results on a variety of QA tasks (Rajpurkar et al., 2016; Reddy et al., 2019). We refer readers to Section A.1 of the appendices for the implementation details of BERTQA.

Model       Micro Acc.  Macro Acc.  BLEU1  BLEU4  Comb.
Seq2Seq     44.8        42.8        34.0   7.8    3.3
Pipeline    61.9        68.9        54.4   34.4   23.7
BERTQA      63.6        70.8        46.2   36.3   25.7
E3 (ours)   67.6        73.3        54.1   38.7   28.4

Table 1: Model performance on the blind, held-out test set of ShARC. The evaluation metrics are micro and macro-averaged accuracy in classifying between the decisions yes, no, irrelevant, and inquire. In the event of an inquiry, the generated follow-up question is further evaluated using the BLEU score. In addition to the official evaluation metrics, we also show a combined metric ("Comb."), which is the product between the macro-averaged accuracy and the BLEU4 score.

4.1 Experimental setup

We tokenize using revtok (https://github.com/jekbradbury/revtok) and part-of-speech tag (for the editor) using Stanford CoreNLP (Manning et al., 2014). We fine-tune the smaller, uncased pretrained BERT model by Devlin et al. (2019) (i.e. bert-base-uncased), using the implementation from https://github.com/huggingface/pytorch-pretrained-BERT. We optimize using ADAM (Kingma and Ba, 2015) with an initial learning rate of 5e-5 and a warm-up rate of 0.1. We regularize using Dropout (Srivastava et al., 2014) after the BERT encoder with a rate of 0.4.

To supervise rule extraction, we reconstruct full dialogue trees from the ShARC training set and extract all follow-up questions as well as bullet points from each rule text and its corresponding dialogue tree. We then match these extracted clauses to spans in the rule text, and consider these noisy matched spans as supervision for rule extraction. During inference, we use heuristic bullet point extraction (we extract spans that start with the "*" character and end with another "*" character or a new line) in conjunction with spans extracted by the rule extraction module. This results in minor performance improvements (~1% micro/macro acc.) over relying only on the rule extraction module. In cases where one rule fully covers another, we discard the covered, shorter rule. Section A.2 details how clause matching is used to obtain noisy supervision for rule extraction.

We train the editor separately, as jointly training with a shared encoder worsens performance. The editor is trained by optimizing $L_{\mathrm{edit}}$, while the rest of the model is trained by optimizing $L_{\mathrm{dec}} + \lambda L_{\mathrm{re}}$. We use a rule extraction threshold of $\tau = 0.5$ and a rule extraction loss weight of $\lambda = 400$. We perform early stopping using the product of the macro-averaged accuracy and the BLEU4 score.
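The bullet-point heuristic above is easy to state as code. A minimal sketch of our reading of it follows (a span starts at a "*" and runs to the next "*" or the end of the line); the function name and example text are ours.

import re

def bullet_spans(rule_text):
    """Heuristic bullet-point extraction used at inference time:
    capture everything after a '*' up to the next '*' or newline."""
    return [m.group(1).strip() for m in re.finditer(r"\*([^*\n]+)", rule_text)]

doc = "You may be eligible if:\n* you are a UK resident\n* born early"
print(bullet_spans(doc))  # ['you are a UK resident', 'born early']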
For the editor, we use fixed, pretrained embeddings from GloVe (Pennington et al., 2014), and use dropout after the input attention with a rate of 0.4. Before editing retrieved rules, we remove prefix and suffix adpositions, auxiliary verbs, conjunctions, determiners, and punctuation. We find that doing so allows the editor to convert some extracted rules (e.g. "or sustain damage") into sensible questions (e.g. "did you sustain damage?").

4.2 Results

Our performance on the development and the blind, held-out test set of ShARC is shown in Table 1. Compared to previous results, E3 achieves a new state-of-the-art, obtaining the best performance on micro and macro-averaged decision classification accuracy and BLEU4 scores while maintaining similar BLEU1 scores. These results show that E3 both answers the user's original question more accurately and generates more coherent and relevant follow-up questions. In addition, Figure 4 shows that because E3 explicitly extracts implicit rules from the document, the model's predictions are explainable in the sense that the user can verify the correctness of the extracted rules and observe how the scenario and previous interactions ground to the extracted rules.

Figure 4: Predictions by E3. Extracted spans are underlined in the text. The three scores are the inquiry score $r_i$ (blue), the history entailment score $h_i$ (red), and the scenario entailment score $g_i$ (green) of the nearest extracted span.

Model                             Micro Acc.  Macro Acc.  BLEU1  BLEU4  Comb.
E3                                68.0        73.4        66.9   53.7   39.4
-edit                             68.0        73.4        53.1   46.2   33.9
-edit, entail                     68.0        73.1        50.2   40.3   29.5
-edit, entail, extract (BERTQA)   63.4        70.6        47.4   37.4   23.7

Table 2: Ablation study of E3 on the development set of ShARC. The ablated variants of E3 include versions: without the editor; without the editor and the entailment module; and without the editor, the entailment module, and the extraction module, which reduces to the BERT for question answering model by Devlin et al. (2019).

4.3 Ablation study

Table 2 shows an ablation study of E3 on the development set of ShARC.
Retrieval outperforms word generation. BERTQA ("-edit, entail, extract"), which E3 reduces to after removing the editor, entailment, and extraction modules, presents a strong baseline that exceeds previous results on all metrics except for BLEU1. This variant inquires about spans extracted from the text, which, while more relevant as indicated by the higher BLEU4 score, do not have the natural qualities of a question, hence the lower BLEU1. Nonetheless, the large gains of BERTQA over the attentive Seq2Seq model show that retrieval is a more promising technique for asking follow-up questions than word-by-word generation. Similar findings were reported for question answering by Yatskar (2019).

Extraction of document structure facilitates generalization. Adding explicit extraction of rules in the document ("-edit, entail") forces the model to interpret all rules in the document versus only focusing on extracting the next inquiry. This results in better performance in both decision classification and inquiry relevance compared to the variant that is not forced to interpret all rules.

Modeling entailment improves rule retrieval. The "-edit" model explicitly models whether an extracted rule is entailed by the user scenario and previous turns. Modeling entailment allows the model to better predict whether a rule is entailed, and thus to more often inquire about rules that are not entailed. Figure 4a illustrates one such example in which both extracted rules have high entailment scores, and the model chooses to conclude the dialogue by answering no instead of making further inquiries. Adding entailment especially improves the BLEU4 score, as the inquiries made by the model are more relevant and appropriate.

Editing retrieved rules results in more fluid questions. While E3 without the editor is able to retrieve rules that are relevant, these spans are not fluent questions that can be presented to the user. The editor is able to edit the extracted rules into more fluid and coherent questions, which results in further gains, particularly in BLEU1.

4.4 Error analysis

In addition to the ablation studies, we analyze the errors E3 makes on the development set of ShARC.

Decision errors. Figure 5 shows the confusion matrix of decisions.

              Predicted
True          yes   no    irrelevant  inquire
yes           530   147   0           127
no            117   541   0           108
irrelevant    0     0     133         5
inquire       107   113   2           340

Figure 5: Confusion matrix of decision predictions on the development set of ShARC.

We specifically examine examples in which E3 produces an incorrect decision. On the ShARC development set there are 726 such cases, which correspond to a 32.0% error rate. We manually analyze 100 such examples to identify common types of errors. Within these, in 23% of examples, the model attempts to answer the user's initial question without resolving a necessary rule despite successfully extracting the rule. In 19% of examples, the model identifies and inquires about all necessary rules but comes to the wrong conclusion. In 18% of examples, the model makes a redundant inquiry about a rule that is entailed. In 17% of examples, the rule text contains ambiguous rules. Figure 4b contains one such example in which the annotator identified the rule "a female Vietnam Veteran", while the model extracted an alternative, longer rule, "a female Vietnam Veteran with a child who has a birth defect". Finally, in 13% of examples, the model fails to extract some rule from the document. Other, less common forms of errors include failures by the entailment module to perform numerical comparison, complex rule procedures that are difficult to deduce, and implications that require world knowledge. These results suggest that improving the decision process after rule extraction is an important area for future work.
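As a sanity check, the micro and macro-averaged accuracies in the E3 row of Table 2 can be recovered directly from the Figure 5 confusion matrix: micro accuracy is the trace over the total count, and macro accuracy here corresponds to the mean per-class recall. A small sketch (ours):

# Rows are true labels, columns are predicted labels,
# in the order: yes, no, irrelevant, inquire (Figure 5).
cm = [[530, 147, 0, 127],
      [117, 541, 0, 108],
      [0, 0, 133, 5],
      [107, 113, 2, 340]]

correct = sum(cm[i][i] for i in range(4))
total = sum(sum(row) for row in cm)
micro = correct / total                                   # overall accuracy
macro = sum(cm[i][i] / sum(cm[i]) for i in range(4)) / 4  # mean per-class recall
print(f"micro {100 * micro:.1f}, macro {100 * macro:.1f}")  # micro 68.0, macro 73.4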
These results suggest that improving the decision process after rule extraction is an important area for future work.
Inquiry quality. On 340 examples (15%) in the ShARC development set, E3 generates an inquiry when it is supposed to. We manually analyze 100 such examples to gauge the quality of generated inquiries. On 63% of examples, the model generates an inquiry that matches the ground truth. On 14% of examples, the model makes inquiries in a different order than the annotator. On 12% of examples, the inquiry refers to an incorrect subject (e.g. "are you born early" vs. "is your baby born early"). This usually results from editing an entity-less bullet point ("* born early"). On 6% of examples, the inquiry is lexically similar to the ground truth but has incorrect semantics (e.g. "do you need savings" vs. "is this information about your savings"). Again, this tends to result from editing short bullet points (e.g. "* savings"). These results indicate that when the model correctly chooses to inquire, it largely inquires about the correct rule. They also highlight a difficulty in evaluating CMR: there can be several correct orderings of inquiries for a document.

5 Conclusion
We proposed the Entailment-driven Extract and Edit network (E3), a conversational machine reading model that extracts implicit decision rules from text, computes whether each rule is entailed by the conversation history, inquires about rules that are not entailed, and answers the user's question. E3 achieved a new state-of-the-art result on the ShARC CMR dataset, outperforming existing systems as well as a new extractive QA baseline based on BERT. In addition to achieving strong performance, we showed that E3 provides a more explainable alternative to prior work that does not model document structure.

Acknowledgments
This research was supported in part by the ARO (W911NF-16-1-0121) and the NSF (IIS-1252835, IIS-1562364). We thank Terra Blevins, Sewon Min, and our anonymous reviewers for helpful feedback.

References
Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL.
Gabor Angeli and Christopher D. Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In EMNLP.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In EMNLP.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In AAAI.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The second dialog state tracking challenge. In SIGDIAL.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In EMNLP.
Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In EMNLP.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL.
B. Moulin and D. Rousseau. 1992. Automated knowledge acquisition from regulatory texts. IEEE Expert.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In ACL.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In EMNLP.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In ACL.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. TACL.
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL.
Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In EMNLP.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.
Pei-Hao Su, Milica Gašić, Nikola Mrkšić, Lina M. Rojas Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. In ACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP.
Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In EACL.
Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In SIGDIAL.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
Mark Yatskar. 2019. A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. In NAACL.
Steve Young, Milica Gašić, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE.
Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive dialogue state tracker. In ACL.

A Appendices
A.1 BERTQA Baseline
Our BERTQA baseline follows that proposed by Devlin et al. (2019) for the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016). Due to the differences in context between ShARC and SQuAD, we augment the input to the BERTQA model in a manner similar to Section 3.1. The distinction here is that we additionally add the decision types "yes", "no", and "irrelevant" as parts of the input, such that the problem is fully solvable via span extraction. Similar to Section 3.1, let $U$ denote the BERT encoding of the length-$n$ input sequence. The BERTQA model predicts a start score $s$ and an end score $e$:

$s = \mathrm{softmax}(UW_s + b_s) \in \mathbb{R}^n$   (28)
$e = \mathrm{softmax}(UW_e + b_e) \in \mathbb{R}^n$   (29)

We take the answer as the span $(i, j)$ that gives the highest score $s_i e_j$ such that $j \geq i$. Because we augment the input with decision labels, the model can be fully supervised via extraction endpoints.
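As an illustration of this selection rule, here is a minimal PyTorch sketch, not the released implementation; W_s, b_s, W_e, b_e stand for the learned projections in equations (28) and (29):

```python
import torch

def best_span(U, W_s, b_s, W_e, b_e):
    """U: (n, d) BERT encoding; W_*: (d, 1); b_*: (1,)."""
    s = torch.softmax(U @ W_s + b_s, dim=0).squeeze(-1)  # start scores, (n,)
    e = torch.softmax(U @ W_e + b_e, dim=0).squeeze(-1)  # end scores, (n,)
    scores = s.unsqueeze(1) * e.unsqueeze(0)  # score of span (i, j) is s_i * e_j
    scores = torch.triu(scores)               # keep only spans with j >= i
    best = scores.argmax().item()
    return divmod(best, scores.size(1))       # (i, j) of the highest-scoring span
```

Because the decision types are appended to the input, answering "yes", "no", or "irrelevant" reduces to selecting the corresponding added span with the same argmax.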
A.2 Creating noisy supervision for span extraction via span matching
The ShARC dataset is constructed from full dialogue trees in which annotators exhaustively annotate yes/no branches of follow-up questions. Consequently, each rule required to answer the initial user question forms a follow-up question in the full dialogue tree. In order to identify rule spans in the document, we first reconstruct the dialogue trees for all training examples in ShARC. For each document, we trim each follow-up question in its corresponding dialogue tree by removing punctuation and stop words. For each trimmed question, we find the shortest best-match span in the document that has the least edit distance from the trimmed question, which we take as the corresponding rule span. In addition, we extract similarly trimmed bullet points from the document as rule spans. Finally, we deduplicate the rule spans by removing those that are fully covered by a longer rule span. Our resulting set of rule spans is used as noisy supervision for the rule extraction module. This preprocessing code is included with our code release.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HkefetzFwH",
"year": null,
"venue": null,
"pdf_link": "https://arxiv.org/pdf/1906.05373.pdf",
"forum_link": "https://openreview.net/forum?id=HkefetzFwH",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E3: Entailment-driven Extracting and Editing for Conversational Machine Reading",
"authors": [
"Victor Zhong",
"Luke Zettlemoyer"
],
"abstract": "Conversational machine reading systems help users answer high-level questions (e.g. determine if they qualify for particular government benefits) when they do not know the exact rules by which the determination is made(e.g. whether they need certain income levels or veteran status). The key challenge is that these rules are only provided in the form of a procedural text (e.g. guidelines from government website) which the system must read to figure out what to ask the user. We present a new conversational machine reading model that jointly extracts a set of decision rules from the procedural text while reasoning about which are entailed by the conversational history and which still need to be edited to create questions for the user. On the recently introduced ShARC conversational machine reading dataset, our Entailment-driven Extract and Edit network (E3) achieves a new state-of-the-art, outperforming existing systems as well as a new BERT-based baseline. In addition, by explicitly highlighting which information still needs to be gathered, E3 provides a more explainable alternative to prior work. We release source code for our models and experiments at this https://github.com/vzhong/e3.\n",
"keywords": [],
"raw_extracted_content": "E3: Entailment-driven Extracting and Editing for Conversational\nMachine Reading\nVictor Zhong\nUniversity of Washington\[email protected] Zettlemoyer\nUniversity of Washington\[email protected]\nAbstract\nConversational machine reading systems help\nusers answer high-level questions (e.g. deter-\nmine if they qualify for particular govern-\nment benefits) when they do not know the ex-\nact rules by which the determination is made\n(e.g. whether they need certain income levels\nor veteran status). The key challenge is that\nthese rules are only provided in the form of a\nprocedural text (e.g. guidelines from govern-\nment website) which the system must read to\nfigure out what to ask the user. We present\na new conversational machine reading model\nthat jointly extracts a set of decision rules\nfrom the procedural text while reasoning about\nwhich are entailed by the conversational his-\ntory and which still need to be edited to create\nquestions for the user. On the recently intro-\nduced ShARC conversational machine read-\ning dataset, our Entailment-driven Extract and\nEdit network ( E3) achieves a new state-of-the-\nart, outperforming existing systems as well as\na new BERT-based baseline. In addition, by\nexplicitly highlighting which information still\nneeds to be gathered, E3provides a more ex-\nplainable alternative to prior work. We release\nsource code for our models and experiments\nathttps://github.com/vzhong/e3 .\n1 Introduction\nIn conversational machine reading (CMR), a sys-\ntem must help users answer high-level questions\nby participating in an information gathering dia-\nlog. For example, in Figure 1 the system asks a\nseries of questions to help the user decide if they\nneed to pay tax on their pension. A key chal-\nlenge in CMR is that the rules by which the deci-\nsion is made are only provided in natural language\n(e.g. the rule text in Figure 1). At every step of the\nconversation, the system must read the rules text\nand reason about what has already been said in to\nformulate the best next question.\n# 4. Tax when you live abroadIf you’re not a UK resident, you don’t usually pay UK tax on your pension. But you might have to pay tax in the country you live in. There are a few exceptions - for example, UK civil service pensions will always be taxed in the UK.I get my money from a business I have. We get our funding from a private bank.Rule textUser scenario\nDo I need to pay UK tax on my pension?Initial user questionAre you a UK resident?NoAre you receiving UK civil service pensions?Previous questionPrevious user responseModel outputFigure 1: A conversational machine reading example.\nThe model is given a rule text document, which con-\ntains a recipe of implicit rules (underlined) for answer-\ning the initial user question. At the start of the conver-\nsation, the user presents a scenario describing their sit-\nuation. During each turn, the model can ask the user\na follow-up question to inquire about missing infor-\nmation, or conclude the dialogue by answering yes,\nno, orirrelevant .irrelevant means that the\nrule text cannot answer the question. We show previ-\nous turns as well as the corresponding inquired rules in\ngreen. The scenario is shown in red and in this case\ndoes not correspond to a rule. 
The model inquiry for this turn and its corresponding rule are shown in blue.

We present a new model that jointly reasons about what rules are present in the text and which are already entailed by the conversational history to improve question generation. More specifically, we propose the Entailment-driven Extract and Edit network (E3). E3 learns to extract implicit rules in the document, identify which rules are entailed by the conversation history, and edit rules that are not entailed to create follow-up questions to the user. During each turn, E3 parses the rule text to extract spans in the text that correspond to implicit rules (underlined in Figure 1). Next, the model scores the degree to which each extracted rule is entailed by the initial user scenario (red in Figure 1) and by previous interactions with the user (green in Figure 1). Finally, the model decides on a response by directly answering the question (yes/no), stating that the rule text does not contain sufficient information to answer the question (irrelevant), or asking a follow-up question about an extracted rule that is not entailed but needed to determine the answer (blue in Figure 1). In the case of inquiry, the model edits an extracted rule into a follow-up question. To our knowledge, E3 is the first extract-and-edit method for conversational dialogue, as well as the first method that jointly infers implicit rules in text, estimates entailment, inquires about missing information, and answers the question.
We compare E3 to the previous-best systems as well as a new, strong, BERT-based extractive question answering model (BERTQA) on the recently proposed ShARC CMR dataset (Saeidi et al., 2018). Our results show that E3 is more accurate in its decisions and generates more relevant inquiries. In particular, E3 outperforms the previous-best model by 5.7% in micro-averaged decision accuracy and 4.3 in inquiry BLEU4. Similarly, E3 outperforms the BERTQA baseline by 4.0% micro-averaged decision accuracy and 2.4 in inquiry BLEU4. In addition to outperforming previous methods, E3 is explainable in the sense that one can visualize what rules the model extracted and how previous interactions and inquiries ground to the extracted rules. We release source code for E3 and the BERTQA model at https://github.com/vzhong/e3.
2 Related Work
Dialogue tasks. Recently, there has been growing interest in question answering (QA) in a dialogue setting (Choi et al., 2018; Reddy et al., 2019). CMR (Saeidi et al., 2018) differs from dialogue QA in the domain covered (regulatory text vs Wikipedia). A consequence of this is that CMR requires the interpretation of complex decision rules in order to answer high-level questions, whereas dialogue QA typically contains questions whose answers are directly extractable from the text. In addition, CMR requires the formulation of free-form follow-up questions in order to identify whether the user satisfies decision rules, whereas dialogue QA does not. There has also been significant work on task-oriented dialogue, where the system must inquire about missing information in order to help the user achieve a goal (Williams et al., 2013; Henderson et al., 2014; Mrkšić et al., 2017; Young et al., 2013). However, these tasks are typically constrained to a fixed ontology (e.g. restaurant reservation), instead of a latent ontology specified via natural language documents.
Dialogue systems. One traditional approach for designing dialogue systems divides the task into language understanding/state-tracking (Mrkšić et al., 2017; Zhong et al., 2018), reasoning/policy learning (Su et al., 2016), and response generation (Wen et al., 2015). The models for each of these subtasks are then combined to form a full dialogue system (Young et al., 2013; Wen et al., 2017). The previous best system for ShARC (Saeidi et al., 2018) similarly breaks the CMR task into subtasks and combines hand-designed sub-models for decision classification, entailment, and follow-up generation. In contrast, the core reasoning (e.g. non-editor) components of E3 are jointly trained and do not require complex hand-designed features.
Extracting latent rules from text. There is a long history of work on extracting knowledge automatically from text (Moulin and Rousseau, 1992). Relation extraction typically assumes that there is a fixed ontology onto which extracted knowledge falls (Mintz et al., 2009; Riedel et al., 2013). Other works forgo the ontology by using, for example, natural language (Angeli and Manning, 2014; Angeli et al., 2015). These extractions from text are subsequently used for inference over a knowledge base (Bordes et al., 2013; Dettmers et al., 2018; Lin et al., 2018) and for rationalizing model predictions (Lei et al., 2016). Our work is more similar to the latter type, in which the extracted knowledge is not confined to a fixed ontology and instead differs on a per-document basis. In addition, the rules extracted by our model are used for inference over natural language documents. Finally, these rules provide rationalization for the model's decision making, in the sense that the user can visualize what rules the model extracted and which rules are entailed by previous turns.
3 Entailment-driven Extract and Edit network
In conversational machine reading, a system reads a document that contains a set of implicit decision
[Architecture figure. Recoverable labels: the question x^Q, rule text x^D, scenario x^S, and follow-up QA turns x^{H,1}, x^{H,2}, ..., x^{H,n_H} as inputs to a BERT transformer encoder; an input self-attention layer; a rule extraction layer producing rule representations r_1, r_2, ..., r_{n_R}; a rule self-attention layer; and a decision classifier producing decision scores z_yes, z_no, ....]
sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit><latexit sha1_base64=\"23k5hj9mv0vpUcXAY3RSMY5bcNE=\">AAAB9HicbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HggczrmXe3KimDNjff/bK2xsbm3vFHdLe/sHh0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlHBvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYIJpq5rIiMscbEup5KroRg9cvrpHVVDfxqcF+r1Gt5HUU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit>zirrelevant\n<latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit>zirrelevant\n<latexit 
sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit><latexit sha1_base64=\"cmXPbPA3k7RAXZcMXiBD84brFe4=\">AAAB/nicbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3b39A/fwqK3jVFFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNNIBBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZPP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F7jMF1PCpJYQqZrNiOiKKUGMbq9gS/OUvr5L2Zc33av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRRl6Rq/ozXlyXpx352MxWnKKnWP0B87nD2/llmE=</latexit>…x<latexit sha1_base64=\"BJzBhsLwXSTB5Lw6Dgv99f7gkUY=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15OKpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bHDojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLkoTAUxMZl/TYZcITNiagllittbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqtTyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4gOM7g==</latexit><latexit sha1_base64=\"BJzBhsLwXSTB5Lw6Dgv99f7gkUY=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15OKpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bHDojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLkoTAUxMZl/TYZcITNiagllittbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqtTyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4gOM7g==</latexit><latexit 
sha1_base64=\"BJzBhsLwXSTB5Lw6Dgv99f7gkUY=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15OKpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bHDojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLkoTAUxMZl/TYZcITNiagllittbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqtTyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4gOM7g==</latexit><latexit sha1_base64=\"BJzBhsLwXSTB5Lw6Dgv99f7gkUY=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15OKpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bHDojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLkoTAUxMZl/TYZcITNiagllittbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqtTyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4gOM7g==</latexit>U<latexit sha1_base64=\"i+1/OIqJ7WOlfZ3Tw4+VUkqns5I=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMMfRZIhL1EFKNgkv0DTcCH1KFNA4FdsPJ7dzvPqHSPJH3ZppiENOR5BFn1Fip7Q+qNbfuLkDWiVeQGhRoDapf/WHCshilYYJq3fPc1AQ5VYYzgbNKP9OYUjahI+xZKmmMOsgXh87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZZgYlWy6KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUUWZsNhUbgrf68jrpXNU9t+61G7Vmo4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94zL</latexit><latexit sha1_base64=\"i+1/OIqJ7WOlfZ3Tw4+VUkqns5I=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMMfRZIhL1EFKNgkv0DTcCH1KFNA4FdsPJ7dzvPqHSPJH3ZppiENOR5BFn1Fip7Q+qNbfuLkDWiVeQGhRoDapf/WHCshilYYJq3fPc1AQ5VYYzgbNKP9OYUjahI+xZKmmMOsgXh87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZZgYlWy6KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUUWZsNhUbgrf68jrpXNU9t+61G7Vmo4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94zL</latexit><latexit sha1_base64=\"i+1/OIqJ7WOlfZ3Tw4+VUkqns5I=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMMfRZIhL1EFKNgkv0DTcCH1KFNA4FdsPJ7dzvPqHSPJH3ZppiENOR5BFn1Fip7Q+qNbfuLkDWiVeQGhRoDapf/WHCshilYYJq3fPc1AQ5VYYzgbNKP9OYUjahI+xZKmmMOsgXh87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZZgYlWy6KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUUWZsNhUbgrf68jrpXNU9t+61G7Vmo4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94zL</latexit><latexit sha1_base64=\"i+1/OIqJ7WOlfZ3Tw4+VUkqns5I=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMMfRZIhL1EFKNgkv0DTcCH1KFNA4FdsPJ7dzvPqHSPJH3ZppiENOR5BFn1Fip7Q+qNbfuLkDWiVeQGhRoDapf/WHCshilYYJq3fPc1AQ5VYYzgbNKP9OYUjahI+xZKmmMOsgXh87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZZgYlWy6KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUUWZsNhUbgrf68jrpXNU9t+61G7Vmo4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94zL</latexit>R1\n<latexit sha1_base64=\"vzdjoaDzv9nOg/w228vuA3tuVlE=\">AAAB63icbVDLSgNBEOyNrxhfUY9eBoPgKexKQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKKq66e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2ZZbTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6CgtSgQGtY/RqMFEkFlZZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJcFKccWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI66VzVA78e3DdqzUYRRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/ACPgjZY=</latexit><latexit 
sha1_base64=\"vzdjoaDzv9nOg/w228vuA3tuVlE=\">AAAB63icbVDLSgNBEOyNrxhfUY9eBoPgKexKQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKKq66e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2ZZbTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6CgtSgQGtY/RqMFEkFlZZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJcFKccWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI66VzVA78e3DdqzUYRRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/ACPgjZY=</latexit><latexit sha1_base64=\"vzdjoaDzv9nOg/w228vuA3tuVlE=\">AAAB63icbVDLSgNBEOyNrxhfUY9eBoPgKexKQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKKq66e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2ZZbTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6CgtSgQGtY/RqMFEkFlZZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJcFKccWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI66VzVA78e3DdqzUYRRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/ACPgjZY=</latexit><latexit sha1_base64=\"vzdjoaDzv9nOg/w228vuA3tuVlE=\">AAAB63icbVDLSgNBEOyNrxhfUY9eBoPgKexKQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKKq66e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2ZZbTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6CgtSgQGtY/RqMFEkFlZZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJcFKccWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI66VzVA78e3DdqzUYRRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/ACPgjZY=</latexit>RnR\n<latexit sha1_base64=\"9QkYS9HYIiCV3i5Jm1pK77jwtww=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6nhUijeRoGS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cXIOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VNObGzxfnzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif188wuvVzodIMuWLLRVEmCSZk/jsZCs0ZyqkllGlhbyVsTDVlaBOq2BC81ZfXSeeq7rl17+G61rgu4ijDGZzDJXhwAw24hya0gcEEnuEV3pzUeXHenY9la8kpZk7hD5zPH0kRj3o=</latexit><latexit sha1_base64=\"9QkYS9HYIiCV3i5Jm1pK77jwtww=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6nhUijeRoGS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cXIOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VNObGzxfnzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif188wuvVzodIMuWLLRVEmCSZk/jsZCs0ZyqkllGlhbyVsTDVlaBOq2BC81ZfXSeeq7rl17+G61rgu4ijDGZzDJXhwAw24hya0gcEEnuEV3pzUeXHenY9la8kpZk7hD5zPH0kRj3o=</latexit><latexit sha1_base64=\"9QkYS9HYIiCV3i5Jm1pK77jwtww=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6nhUijeRoGS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cXIOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VNObGzxfnzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif188wuvVzodIMuWLLRVEmCSZk/jsZCs0ZyqkllGlhbyVsTDVlaBOq2BC81ZfXSeeq7rl17+G61rgu4ijDGZzDJXhwAw24hya0gcEEnuEV3pzUeXHenY9la8kpZk7hD5zPH0kRj3o=</latexit><latexit 
sha1_base64=\"9QkYS9HYIiCV3i5Jm1pK77jwtww=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bYWZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6nhUijeRoGS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cXIOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VNObGzxfnzsiFVYYkSrQthWSh/p7IaWzMNA5tZ0xxbFa9ufif188wuvVzodIMuWLLRVEmCSZk/jsZCs0ZyqkllGlhbyVsTDVlaBOq2BC81ZfXSeeq7rl17+G61rgu4ijDGZzDJXhwAw24hya0gcEEnuEV3pzUeXHenY9la8kpZk7hD5zPH0kRj3o=</latexit>···<latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit>RulescorerA1\n<latexit sha1_base64=\"Kbzzmn1Oe6Ur0QF1ftChpfZF1ug=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbCxlRRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit><latexit 
sha1_base64=\"Kbzzmn1Oe6Ur0QF1ftChpfZF1ug=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbCxlRRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit><latexit sha1_base64=\"Kbzzmn1Oe6Ur0QF1ftChpfZF1ug=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbCxlRRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit><latexit sha1_base64=\"Kbzzmn1Oe6Ur0QF1ftChpfZF1ug=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbCxlRRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit>AnR\n<latexit sha1_base64=\"H9wlhGkXhTHP9mp8ey4PKoIiZyM=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbbTbt0swm7E6GE/ggvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3peW6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC3krYiGrK0CZUsSF4yy+vkvZF3XPr3v1lrXFZxFGGEziFc/DgChpwB01oAYMxPMMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit><latexit sha1_base64=\"H9wlhGkXhTHP9mp8ey4PKoIiZyM=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbbTbt0swm7E6GE/ggvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3peW6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC3krYiGrK0CZUsSF4yy+vkvZF3XPr3v1lrXFZxFGGEziFc/DgChpwB01oAYMxPMMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit><latexit sha1_base64=\"H9wlhGkXhTHP9mp8ey4PKoIiZyM=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbbTbt0swm7E6GE/ggvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3peW6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC3krYiGrK0CZUsSF4yy+vkvZF3XPr3v1lrXFZxFGGEziFc/DgChpwB01oAYMxPMMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit><latexit 
sha1_base64=\"H9wlhGkXhTHP9mp8ey4PKoIiZyM=\">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbbTbt0swm7E6GE/ggvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3peW6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC3krYiGrK0CZUsSF4yy+vkvZF3XPr3v1lrXFZxFGGEziFc/DgChpwB01oAYMxPMMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit>···<latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit><latexit sha1_base64=\"ggNYy28tHbW2zILQMm4kk1oYvY8=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxttMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZUU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ==</latexit>C<latexit sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit><latexit 
sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit><latexit sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit><latexit sha1_base64=\"y5YGHW4NRn032l4c2SASqYAwvmQ=\">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RS2tnd294r7pYPDo+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwwTVuu+5ifEzqgxnAuelQaoxoWxKx9i3VNIItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2ooszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz00oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ==</latexit>Extraction ModuleEntailment scorergi\n<latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit><latexit 
sha1_base64=\"8v5kiaA9t7Fx9mcIlNKMYFYePVo=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW//4PCoenzS0UmmGLZZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kkecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkbU0WZselUbAje6svrpHNV99y6d9+oNRtFHGU4g3O4BA+uoQl30II2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit>hi\n<latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit><latexit sha1_base64=\"EtR5b7t+XdzbNQGDeG4n6cxuEuQ=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByO/c7T6g0T+SjmaYYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01NkFNlOBM4q/QzjSllEzrCnqWSxqiDfHHqjFxYZUiiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3QQ5l2lmULLloigTxCRk/jcZcoXMiKkllClubyVsTBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQQt8YDCCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit>Entailment ModuleDecision Module\nzinquire\n<latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit 
sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit>zinquire\n<latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit sha1_base64=\"UMvom97kWv8j4eUEwQDxge3lRDI=\">AAAB+3icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc88JE0aVdt1va219Y3Nru7JT3d3bPzi0j2pdFacSkw6OWSz7IVKEUUE6mmpG+okkiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOYpxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYmme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyRyYo1mxmCsKTmVgdPkERYm7qqpgRvOfIq6V40PLfh3TbrrWZZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQQ=</latexit><latexit 
rules. The user presents a scenario describing their situation, and asks the system an underspecified question. In order to answer the user's question, the system must ask the user a series of follow-up questions to determine whether the user satisfies the set of decision rules.

The key challenges in CMR are to identify implicit rules present in the document, understand which rules are necessary to answer the question, and inquire about necessary rules that are not entailed by the conversation history by asking follow-up questions. The three core modules of E3, the extraction, entailment, and decision modules, combine to address these challenges. Figure 2 illustrates the components of E3. For ease of exposition, we describe E3 for a single turn in the conversation. To make the references concrete in the following sections, we use as an example the inputs and outputs from Figure 1. This example describes a turn in a conversation in which the system helps the user determine whether they need to pay UK taxes on their pension.

3.1 Extraction module

The extraction module extracts spans from the document that correspond to latent rules. Let $x_D$, $x_Q$, $x_S$, $x_{H,i}$ denote words in the rule text, question, scenario, and the inquiry and user response during the $i$th previous turn of the dialogue after $N$ turns have passed. We concatenate these inputs into a single sequence $x = [x_Q; x_D; x_S; x_{H,1}; \cdots; x_{H,N}]$ joined by sentinel tokens that mark the boundaries of each input. To encode the input for the extraction module, we use BERT, a transformer-based model (Vaswani et al., 2017) that achieves consistent gains on a variety of NLP tasks (Devlin et al., 2019). We encode $x$ using the BERT encoder, which first converts words into word piece tokens (Wu et al., 2016), then embeds these tokens along with their positional embeddings and segmentation embeddings. These embeddings are subsequently encoded via a transformer network, which allows for inter-token attention at each layer. Let $n_x$ be the number of tokens in the concatenated input $x$ and $d_U$ be the output dimension of the BERT encoder. For brevity, we denote the output of the BERT encoder as $U = \mathrm{BERT}(x) \in \mathbb{R}^{n_x \times d_U}$ and refer readers to Devlin et al. (2019) for the detailed architecture.

In order to extract the implicit decision rules from the document, we compute a start score $\alpha_i$ and an end score $\beta_i$ for each $i$th token as

$\alpha_i = \sigma(W_\alpha U_i + b_\alpha) \in \mathbb{R}$ (1)
$\beta_i = \sigma(W_\beta U_i + b_\beta) \in \mathbb{R}$ (2)

where $W_\alpha, W_\beta \in \mathbb{R}^{d_U}$, $b_\alpha, b_\beta \in \mathbb{R}$, and $\sigma$ is the sigmoid function.

For each position $s_i$ where $\alpha_{s_i}$ is larger than some threshold $\tau$, we find the closest subsequent position $e_i \geq s_i$ where $\beta_{e_i} > \tau$. Each pair $(s_i, e_i)$ then forms an extracted span corresponding to a rule $R_i$ expressed in the rule text.
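As a concrete illustration of this extraction step, the following is a minimal sketch in PyTorch, assuming the HuggingFace transformers BERT encoder; the class name, the threshold value, and the single-example (unbatched) span search are illustrative assumptions rather than the paper's released implementation:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class ExtractionScorer(nn.Module):
    """Per-token start/end rule scores over BERT states (Eqs. 1-2)."""
    def __init__(self, d_u=768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.w_alpha = nn.Linear(d_u, 1)  # W_alpha, b_alpha
        self.w_beta = nn.Linear(d_u, 1)   # W_beta, b_beta

    def forward(self, input_ids, attention_mask):
        # U = BERT(x) in R^{n_x x d_U}
        U = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        alpha = torch.sigmoid(self.w_alpha(U)).squeeze(-1)  # start scores
        beta = torch.sigmoid(self.w_beta(U)).squeeze(-1)    # end scores
        return U, alpha, beta

def extract_spans(alpha, beta, tau=0.5):
    """For a single example: pair each start position with alpha > tau
    to the closest subsequent end position with beta > tau."""
    spans = []
    for s in range(len(alpha)):
        if alpha[s] <= tau:
            continue
        for e in range(s, len(beta)):
            if beta[e] > tau:
                spans.append((s, e))
                break
    return spans
```

Because each score is an independent sigmoid, the number of extracted spans is not fixed in advance and adapts to the document.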
In the example in Figure 1, the correct extracted spans are "UK resident" and "UK civil service pensions".

For the $i$th rule, we use self-attention to build a representation $A_i$ over the span $(s_i, e_i)$:

$\tilde{\gamma}_k = W_\gamma U_k + b_\gamma \in \mathbb{R}, \quad s_i \leq k \leq e_i$ (3)
$\gamma_k = \mathrm{softmax}(\tilde{\gamma})_k \in \mathbb{R}, \quad s_i \leq k \leq e_i$ (4)
$A_i = \sum_{k=s_i}^{e_i} \gamma_k U_k \in \mathbb{R}^{d_U}$ (5)

where $W_\gamma \in \mathbb{R}^{d_U}$ and $b_\gamma \in \mathbb{R}$. Here, $\tilde{\gamma}_k$ and $\gamma_k$ are respectively the unnormalized and normalized scores for the self-attention layer.

Let $n_R$ denote the number of spans in the rule text, each of which corresponds to a ground truth rule. The rule extraction loss is computed as the sum of the binary cross entropy losses for each rule $R_i$:

$\mathcal{L}_{\mathrm{re}} = \sum_i^{n_R} \left( \mathcal{L}_{\mathrm{start},i} + \mathcal{L}_{\mathrm{end},i} \right)$ (6)

Let $n_D$ denote the number of tokens in the rule text, $s_i$, $e_i$ the ground truth start and end positions for the $i$th rule, and $\mathbb{1}_f$ the indicator function that returns 1 if and only if the condition $f$ holds. Recall from Eq. (1) that $\alpha_j$ and $\beta_j$ denote the probabilities that token $j$ is the start and end of a rule. The start and end binary cross entropy losses for the $i$th rule are computed as

$\mathcal{L}_{\mathrm{start},i} = -\sum_j^{n_D} \mathbb{1}_{j=s_i} \log(\alpha_j) + \mathbb{1}_{j \neq s_i} \log(1-\alpha_j)$
$\mathcal{L}_{\mathrm{end},i} = -\sum_j^{n_D} \mathbb{1}_{j=e_i} \log(\beta_j) + \mathbb{1}_{j \neq e_i} \log(1-\beta_j)$

3.2 Entailment module

Given the extracted rules $R = \{R_1, \cdots, R_{n_R}\}$, the entailment module estimates whether each rule is entailed by the conversation history, so that the model can subsequently inquire about rules that are not entailed. For the example in Figure 1, the rule "UK resident" is entailed by the previous inquiry "Are you a UK resident". In contrast, the rule "UK civil service pensions" is not entailed by either the scenario or the conversation history, so the model needs to inquire about it. In this particular case the scenario does not entail any rule.

For each extracted rule, we compute a score that indicates the extent to which this particular rule has already been discussed in the initial scenario $S$ and in previous turns $Q$. In particular, let $N(R_i, S)$ denote the number of tokens shared by $R_i$ and $S$, $N(R_i)$ the number of tokens in $R_i$, and $N(S)$ the number of tokens in $S$. We compute the scenario entailment score $g_i$ as

$\mathrm{pr}(R_i, S) = \frac{N(R_i, S)}{N(R_i)}$ (7)
$\mathrm{re}(R_i, S) = \frac{N(R_i, S)}{N(S)}$ (8)
$g_i = \mathrm{f1}(R_i, S) = \frac{2\,\mathrm{pr}(R_i, S)\,\mathrm{re}(R_i, S)}{\mathrm{pr}(R_i, S) + \mathrm{re}(R_i, S)}$ (9)

where $\mathrm{pr}$, $\mathrm{re}$, and $\mathrm{f1}$ respectively denote the precision, recall, and F1 scores. We compute a similar score to represent the extent to which the rule $R_i$ has been discussed in previous inquiries. Let $Q_k$ denote tokens in the $k$th previous inquiry. We compute the history entailment score $h_i$ between the extracted rule $R_i$ and all $n_H$ previous inquiries in the conversation history as

$h_i = \max_{k=1,\cdots,n_H} \mathrm{f1}(R_i, Q_k)$ (10)
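Since Eqs. (7)-(10) are token-level precision, recall, and F1, they require no learned parameters; the following minimal Python sketch assumes whitespace-tokenized inputs and counts shared tokens with multiplicity, which is one reasonable reading of $N(R_i, S)$:

```python
from collections import Counter

def f1_overlap(rule_tokens, other_tokens):
    """Token-overlap F1 between a rule span and a scenario/inquiry (Eqs. 7-9)."""
    shared = sum((Counter(rule_tokens) & Counter(other_tokens)).values())
    if shared == 0:
        return 0.0
    pr = shared / len(rule_tokens)   # precision, Eq. (7)
    re = shared / len(other_tokens)  # recall, Eq. (8)
    return 2 * pr * re / (pr + re)   # F1, Eq. (9)

def entailment_scores(rule_tokens, scenario_tokens, history_inquiries):
    g = f1_overlap(rule_tokens, scenario_tokens)                      # scenario score g_i
    h = max((f1_overlap(rule_tokens, q) for q in history_inquiries),  # history score h_i, Eq. (10)
            default=0.0)
    return g, h
```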
The final representation of the $i$th rule, $\bar{A}_i$, is then the concatenation of the span self-attention and the entailment scores:

$\bar{A}_i = [A_i; g_i; h_i] \in \mathbb{R}^{d_U+2}$ (11)

where $[x; y]$ denotes the concatenation of $x$ and $y$. We also experiment with embedding and encoding similarity based approaches to compute entailment, but find that this F1 approach performs the best. Because the encoder utilizes cross attention between different components of the input, the representations $U$ and $\bar{A}_i$ are able to capture notions of entailment. However, we find that explicitly scoring entailment via the entailment module further discourages the model from making redundant inquiries.

3.3 Decision module

Given the extracted rules $R$ and the entailment-enriched representations $\bar{A}_i$ for each rule, the decision module decides on a response to the user. These include answering yes/no to the user's original question, determining that the rule text is irrelevant to the question, or inquiring about a rule that is not entailed but required to answer the question. For the example in Figure 1, the rule "UK civil service pensions" is not entailed, hence the correct decision is to ask a follow-up question about whether the user receives this pension.

We start by computing a summary $C$ of the input using self-attention:

$\tilde{\phi}_k = W_\phi U_k + b_\phi \in \mathbb{R}$ (12)
$\phi_k = \mathrm{softmax}(\tilde{\phi})_k \in \mathbb{R}$ (13)
$C = \sum_k \phi_k U_k \in \mathbb{R}^{d_U}$ (14)

where $W_\phi \in \mathbb{R}^{d_U}$, $b_\phi \in \mathbb{R}$, and $\tilde{\phi}_k$, $\phi_k$ are respectively the unnormalized and normalized self-attention weights. Next, we score the choices yes, no, irrelevant, and inquire:

$z = W_z C + b_z \in \mathbb{R}^4$ (15)

where $z$ is a vector containing a class score for each of the yes, no, irrelevant, and inquire decisions.

For inquiries, we compute an inquiry score $r_i$ for each extracted rule $R_i$:

$r_i = W_r \bar{A}_i + b_r \in \mathbb{R}$ (16)

where $W_r \in \mathbb{R}^{d_U+2}$ and $b_r \in \mathbb{R}$. Let $k$ indicate the correct decision, and $i$ indicate the correct inquiry, if the model is supposed to make an inquiry. The decision loss is

$\mathcal{L}_{\mathrm{dec}} = -\log \mathrm{softmax}(z)_k - \mathbb{1}_{k=\mathrm{inquire}} \log \mathrm{softmax}(r)_i$ (17)

During inference, the model first determines the decision $d = \arg\max_k z_k$. If the decision $d$ is inquire, the model asks a follow-up question about the $i$th rule such that $i = \arg\max_j r_j$. Otherwise, the model concludes the dialogue with $d$.
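A minimal PyTorch sketch of the decision loss in Eq. (17) and of this inference rule follows; the index assigned to the inquire class is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def decision_loss(z, r, gold_decision, gold_inquiry, inquire_idx=3):
    """Eq. (17) for a single example.

    z: (4,) class scores over {yes, no, irrelevant, inquire}
    r: (n_R,) inquiry scores over extracted rules
    gold_decision: index of the correct decision k
    gold_inquiry: index of the correct rule to inquire about, if any
    """
    loss = -F.log_softmax(z, dim=-1)[gold_decision]
    if gold_decision == inquire_idx:
        loss = loss - F.log_softmax(r, dim=-1)[gold_inquiry]
    return loss

def infer(z, r, inquire_idx=3):
    """Pick the top decision; if it is inquire, also pick the top-scoring rule."""
    d = int(torch.argmax(z))
    return (d, int(torch.argmax(r))) if d == inquire_idx else (d, None)
```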
Rephrasing rule into question via editor. In the event that the model chooses to make an inquiry about an extracted rule $R_i$, $R_i$ is given to a subsequent editor to rephrase into a follow-up question. For the example in Figure 1, the editor edits the span "UK civil service pensions" into the follow-up question "Are you receiving UK civil service pensions?" Figure 3 illustrates the editor.

The editor takes as input $x_{\mathrm{edit}} = [R_i; x_D]$, the concatenation of the extracted rule to rephrase $R_i$ and the rule text $x_D$. As before, we encode using a BERT encoder to obtain $U_{\mathrm{edit}} = \mathrm{BERT}(x_{\mathrm{edit}})$. The encoder is followed by two decoders that respectively generate the pre-span edit $R_{i,\mathrm{pre}}$ and post-span edit $R_{i,\mathrm{post}}$. For the example in Figure 1, given the span "UK civil service pensions", the pre-span and post-span edits that form the question "Are you receiving UK civil service pensions?" are respectively "Are you receiving" and "?"

To perform each edit, we employ an attentive decoder (Bahdanau et al., 2015) with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997). Let $h_t$ denote the decoder state at time $t$. We compute attention $a_t$ over the input:

$\tilde{\zeta}_k = U_{\mathrm{edit},k}\, h_{t-1} \in \mathbb{R}$ (18)
$\zeta_k = \mathrm{softmax}(\tilde{\zeta})_k \in \mathbb{R}$ (19)
$a_t = \sum_k \zeta_k U_{\mathrm{edit},k} \in \mathbb{R}^{d_U}$ (20)

Let $V \in \mathbb{R}^{n_V \times d_V}$ denote the embedding matrix corresponding to $n_V$ tokens in the vocabulary.
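For reference, a minimal PyTorch sketch of the attention step in Eqs. (18)-(20), assuming the decoder state shares the encoder dimensionality $d_U$; the LSTM state update and the output projection over the vocabulary are omitted:

```python
import torch

def decoder_attention(U_edit, h_prev):
    """Dot-product attention over the editor's encoder states (Eqs. 18-20).

    U_edit: (n_tokens, d_U) BERT states of x_edit = [R_i; x_D]
    h_prev: (d_U,) previous LSTM decoder state h_{t-1}
    """
    zeta_tilde = U_edit @ h_prev              # Eq. (18): unnormalized scores
    zeta = torch.softmax(zeta_tilde, dim=-1)  # Eq. (19): attention weights
    a_t = zeta @ U_edit                       # Eq. (20): context vector, (d_U,)
    return a_t, zeta
```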
Figure 3: The editor. The proposed rule $R_i$ and the rule text $x_D$ form $x_{\mathrm{edit}}$, which a BERT transformer encoder maps to $U_{\mathrm{edit}}$; a pre-span attentive decoder and a post-span attentive decoder then generate the pre-span edit $R_{i,\mathrm{pre}}$ and the post-span edit $R_{i,\mathrm{post}}$.
sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit><latexit sha1_base64=\"QHUp87PAqosTo6TNrrJrX/0JI2M=\">AAAB+HicbVBNS8NAFNzUr1o/GvXoJVgETyURQY8FLx4rmLbQhrDZvLRLdzdhdyPU0F/ixYMiXv0p3vw3btoctHVgYZh5jzc7Ucao0q77bdU2Nre2d+q7jb39g8OmfXTcU2kuCfgkZakcRFgBowJ8TTWDQSYB84hBP5reln7/EaSiqXjQswwCjseCJpRgbaTQbvrhiGM9kbyAmOp5aLfctruAs068irRQhW5of43ilOQchCYMKzX03EwHBZaaEgbzxihXkGEyxWMYGiowBxUUi+Bz59wosZOk0jyhnYX6e6PAXKkZj8xkGVKteqX4nzfMdXITFFRkuQZBloeSnDk6dcoWnJhKIJrNDMFEUpPVIRMsMdGmq4YpwVv98jrpXbY9t+3dX7U6V1UddXSKztAF8tA16qA71EU+IihHz+gVvVlP1ov1bn0sR2tWtXOC/sD6/AE+xZNq</latexit>Figure 3: The editor of E3.\nTo generate the tth tokenwt, we use weight tying\nbetween the output layer and the embedding ma-\ntrix (Press and Wolf, 2017).\nvt= embed( V;wt\u00001) (21)\nht= LSTM ([ vt;at];ht\u00001)2RdU(22)\not=Wo[ht;at] +bo2RdV (23)\np(wt) = softmax( Vot)2RnV (24)\nwt= argmaxkp(wt)k (25)\nWe use a separate attentive decoder to gener-\nate the pre-span edit Ri;preand the post-span edit\nRi;post. The decoders share the embedding matrix\nand BERT encoder but do not share other parame-\nters. The output of the editor is the concatenation\nof tokens [Ri;pre;Ri;Ri;post].\nThe editing loss consists of the sequential cross\nentropy losses from generating the pre-span edit\nand the post-span edit. Let npredenote the number\nof tokens and ^wt;prethetth tokens in the ground\ntruth pre-span edit. The pre-span loss is\nLpre=\u0000npreX\ntlogp( ^wt;pre) (26)\nThe editing loss is then the sum of the pre-span\nand post-span losses, the latter of which is ob-\ntained in a manner similar to Eq (26).\nLedit=Lpre+Lpost (27)\n4 Experiment\nWe train and evaluate the Entailment-driven Ex-\ntract and Edit network on the ShARC CMR\ndataset. In particular, we compare our method\nto three other models. Two of these models\nare proposed by Saeidi et al. (2018). 
4 Experiment

We train and evaluate the Entailment-driven Extract and Edit network on the ShARC CMR dataset. In particular, we compare our method to three other models. Two of these models are proposed by Saeidi et al. (2018): an attentive sequence-to-sequence model that attends to the concatenated input and generates the response token-by-token (Seq2Seq), and a strong hand-engineered pipeline model with sub-models for entailment, classification, and generation (Pipeline). For the latter, Saeidi et al. (2018) show that these sub-models outperform neural models such as the entailment model by Parikh et al. (2016), and that the combined pipeline outperforms the attentive sequence-to-sequence model. In addition, we propose an extractive QA baseline based on BERT (BERTQA). Similar models achieved state-of-the-art results on a variety of QA tasks (Rajpurkar et al., 2016; Reddy et al., 2019). We refer readers to Section A.1 of the appendices for implementation details of BERTQA.

Table 1: Model performance on the blind, held-out test set of ShARC. The evaluation metrics are micro- and macro-averaged accuracy in classifying between the decisions yes, no, irrelevant, and inquire. In the event of an inquiry, the generated follow-up question is further evaluated using the BLEU score. In addition to the official evaluation metrics, we also show a combined metric ("Comb."), the product of the macro-averaged accuracy and the BLEU4 score.

Model      | Micro Acc. | Macro Acc. | BLEU1 | BLEU4 | Comb.
Seq2Seq    | 44.8       | 42.8       | 34.0  |  7.8  |  3.3
Pipeline   | 61.9       | 68.9       | 54.4  | 34.4  | 23.7
BERTQA     | 63.6       | 70.8       | 46.2  | 36.3  | 25.7
E3 (ours)  | 67.6       | 73.3       | 54.1  | 38.7  | 28.4

4.1 Experimental setup

We tokenize using revtok (https://github.com/jekbradbury/revtok) and part-of-speech tag (for the editor) using Stanford CoreNLP (Manning et al., 2014). We fine-tune the smaller, uncased pretrained BERT model by Devlin et al. (2019) (bert-base-uncased), using the implementation from https://github.com/huggingface/pytorch-pretrained-BERT. We optimize using ADAM (Kingma and Ba, 2015) with an initial learning rate of 5e-5 and a warm-up rate of 0.1. We regularize using Dropout (Srivastava et al., 2014) after the BERT encoder with a rate of 0.4.

To supervise rule extraction, we reconstruct full dialogue trees from the ShARC training set and extract all follow-up questions as well as bullet points from each rule text and its corresponding dialogue tree. We then match these extracted clauses to spans in the rule text, and take these noisy matched spans as supervision for rule extraction. During inference, we use heuristic bullet-point extraction (we extract spans that start with the "*" character and end with another "*" character or a new line) in conjunction with spans extracted by the rule extraction module. This results in minor performance improvements (~1% micro/macro acc.) over relying only on the rule extraction module. In cases where one rule fully covers another, we discard the covered, shorter rule. Section A.2 details how clause matching is used to obtain noisy supervision for rule extraction.

We train the editor separately, as jointly training with a shared encoder worsens performance. The editor is trained by optimizing $L_{\mathrm{edit}}$, while the rest of the model is trained by optimizing $L_{\mathrm{dec}} + \lambda L_{\mathrm{re}}$. We use a rule extraction threshold of $\tau = 0.5$ and a rule extraction loss weight of $\lambda = 400$. We perform early stopping using the product of the macro-averaged accuracy and the BLEU4 score. For the editor, we use fixed, pretrained embeddings from GloVe (Pennington et al., 2014), and use dropout after input attention with a rate of 0.4. Before editing retrieved rules, we remove prefix and suffix adpositions, auxiliary verbs, conjunctions, determiners, and punctuation. We find that doing so allows the editor to convert some extracted rules (e.g. "or sustain damage") into sensible questions (e.g. "did you sustain damage?").
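Two of the setup heuristics above are simple enough to sketch. The following is an illustrative Python reading; the regex, the UD-style POS tag set, and the function names are our assumptions, not taken from the released code.

```python
import re

def extract_bullets(rule_text: str) -> list:
    """Bullet-point heuristic: a span starts at '*' and runs to the
    next '*' or the end of the line."""
    return [m.group(1).strip() for m in re.finditer(r"\*([^*\n]+)", rule_text)]

STRIP = {"ADP", "AUX", "CCONJ", "SCONJ", "DET", "PUNCT"}  # assumed tag inventory

def trim_rule(tokens, pos_tags):
    """Drop prefix/suffix adpositions, auxiliaries, conjunctions,
    determiners, and punctuation, e.g. ['or', 'sustain', 'damage']
    -> ['sustain', 'damage']."""
    lo, hi = 0, len(tokens)
    while lo < hi and pos_tags[lo] in STRIP:
        lo += 1
    while hi > lo and pos_tags[hi - 1] in STRIP:
        hi -= 1
    return tokens[lo:hi]
```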
4.2 Results

Our performance on the development set and the blind, held-out test set of ShARC is shown in Table 1. Compared to previous results, E3 achieves a new state-of-the-art, obtaining the best performance on micro- and macro-averaged decision classification accuracy and BLEU4 score while maintaining similar BLEU1 scores. These results show that E3 both answers the user's original question more accurately and generates more coherent and relevant follow-up questions. In addition, Figure 4 shows that because E3 explicitly extracts implicit rules from the document, the model's predictions are explainable, in the sense that the user can verify the correctness of the extracted rules and observe how the scenario and previous interactions ground to the extracted rules.

[Figure 4, two annotated examples: (a) an Additional State Pension rule, where the question "Do I get additional state pension automatically?" is answered No after two resolved rules; (b) a VA-financed-care rule, where the model inquires "Are you female Vietnam Veteran with a child who has a birth defect?" while the ground truth asks "Are you a female Vietnam Veteran?". Each panel shows the rule text, scenario, question, previous interactions, decision probabilities, model response, and ground-truth answer.]
Figure 4: Predictions by E3. Extracted spans are underlined in the text. The three scores are the inquiry score r_i (blue), history entailment score h_i (red), and scenario entailment score g_i (green) of the nearest extracted span.

Table 2: Ablation study of E3 on the development set of ShARC. The ablated variants of E3 are: without the editor; without the editor and entailment module; and without the editor, entailment module, and extraction module, which reduces to the BERT for question answering model by Devlin et al. (2019).

Model                           | Micro Acc. | Macro Acc. | BLEU1 | BLEU4 | Comb.
E3                              | 68.0       | 73.4       | 66.9  | 53.7  | 39.4
-edit                           | 68.0       | 73.4       | 53.1  | 46.2  | 33.9
-edit, entail                   | 68.0       | 73.1       | 50.2  | 40.3  | 29.5
-edit, entail, extract (BERTQA) | 63.4       | 70.6       | 47.4  | 37.4  | 23.7

4.3 Ablation study

Table 2 shows an ablation study of E3 on the development set of ShARC.

Retrieval outperforms word generation. BERTQA ("-edit, entail, extract"), which E3 reduces to after removing the editor, entailment, and extraction modules, presents a strong baseline that exceeds previous results on all metrics except BLEU1.
This variant inquires using spans extracted from the text; while these are more relevant, as indicated by the higher BLEU4 score, they lack the natural qualities of a question, hence the lower BLEU1. Nonetheless, the large gains of BERTQA over the attentive Seq2Seq model show that retrieval is a more promising technique for asking follow-up questions than word-by-word generation. Similar findings were reported for question answering by Yatskar (2019).

Extraction of document structure facilitates generalization. Adding explicit extraction of rules in the document ("-edit, entail") forces the model to interpret all rules in the document versus only focusing on extracting the next inquiry. This results in better performance in both decision classification and inquiry relevance compared to the variant that is not forced to interpret all rules.

Modeling entailment improves rule retrieval. The "-edit" model explicitly models whether an extracted rule is entailed by the user scenario and previous turns. Modeling entailment allows the model to better predict whether a rule is entailed, and thus to more often inquire about rules that are not entailed. Figure 4a illustrates one such example, in which both extracted rules have a high entailment score and the model chooses to conclude the dialogue by answering no instead of making further inquiries. Adding entailment especially improves the BLEU4 score, as the inquiries made by the model are more relevant and appropriate.

Editing retrieved rules results in more fluid questions. While E3 without the editor is able to retrieve rules that are relevant, these spans are not fluent questions that can be presented to the user. The editor is able to edit the extracted rules into more fluid and coherent questions, which results in further gains, particularly in BLEU1.

4.4 Error analysis

In addition to ablation studies, we analyze errors E3 makes on the development set of ShARC.

Figure 5: Confusion matrix of decision predictions on the development set of ShARC.

True label \ Predicted | yes | no  | irrelevant | inquire
yes                    | 530 | 147 |   0        | 127
no                     | 117 | 541 |   0        | 108
irrelevant             |   0 |   0 | 133        |   5
inquire                | 107 | 113 |   2        | 340

Decision errors. Figure 5 shows the confusion matrix of decisions. We specifically examine examples in which E3 produces an incorrect decision. On the ShARC development set there are 726 such cases, which correspond to a 32.0% error rate. We manually analyze 100 such examples to identify common types of errors. Within these, in 23% of examples, the model attempts to answer the user's initial question without resolving a necessary rule despite successfully extracting the rule. In 19% of examples, the model identifies and inquires about all necessary rules but comes to the wrong conclusion. In 18% of examples, the model makes a redundant inquiry about a rule that is entailed. In 17% of examples, the rule text contains ambiguous rules. Figure 4b contains one such example in which the annotator identified the rule "a female Vietnam Veteran", while the model extracted an alternative longer rule "a female Vietnam Veteran with a child who has a birth defect". Finally, in 13% of examples, the model fails to extract some rule from the document. Other less common forms of errors include failures by the entailment module to perform numerical comparison, complex rule procedures that are difficult to deduce, and implications that require world knowledge.
These results suggest that improving the decision process after rule extraction is an important area for future work.

Inquiry quality. On 340 examples (15%) in the ShARC development set, E3 generates an inquiry when it is supposed to. We manually analyze 100 such examples to gauge the quality of generated inquiries. On 63% of examples, the model generates an inquiry that matches the ground truth. On 14% of examples, the model makes inquiries in a different order than the annotator. On 12% of examples, the inquiry refers to an incorrect subject (e.g. "are you born early" vs. "is your baby born early"). This usually results from editing an entity-less bullet point ("* born early"). On 6% of examples, the inquiry is lexically similar to the ground truth but has incorrect semantics (e.g. "do you need savings" vs. "is this information about your savings"). Again, this tends to result from editing short bullet points (e.g. "* savings"). These results indicate that when the model correctly chooses to inquire, it largely inquires about the correct rule. They also highlight a difficulty in evaluating CMR: there can be several correct orderings of inquiries for a document.

5 Conclusion

We proposed the Entailment-driven Extract and Edit network (E3), a conversational machine reading model that extracts implicit decision rules from text, computes whether each rule is entailed by the conversation history, inquires about rules that are not entailed, and answers the user's question. E3 achieved a new state-of-the-art result on the ShARC CMR dataset, outperforming existing systems as well as a new extractive QA baseline based on BERT. In addition to achieving strong performance, we showed that E3 provides a more explainable alternative to prior work that does not model document structure.

Acknowledgments

This research was supported in part by the ARO (W911NF-16-1-0121) and the NSF (IIS-1252835, IIS-1562364). We thank Terra Blevins, Sewon Min, and our anonymous reviewers for helpful feedback.

References

Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL.
Gabor Angeli and Christopher D. Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In EMNLP.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In EMNLP.
Tim Dettmers, Minervini Pasquale, Stenetorp Pontus, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In AAAI.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The second dialog state tracking challenge. In SIGDIAL.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In EMNLP.
Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In EMNLP.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL.
B. Moulin and D. Rousseau. 1992. Automated knowledge acquisition from regulatory texts. IEEE Expert.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In ACL.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In EMNLP.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In ACL.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. TACL.
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL.
Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In EMNLP.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.
Pei-Hao Su, Milica Gašić, Nikola Mrkšić, Lina M. Rojas Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. In ACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP.
Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In EACL.
Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In SIGDIAL.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
Mark Yatskar. 2019. A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. In NAACL.
Steve Young, Milica Gašić, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE.
Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive dialogue state tracker. In ACL.

A Appendices

A.1 BertQA Baseline
Our BertQA baseline follows that proposed by Devlin et al. (2019) for the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016). Due to the differences in context between ShARC and SQuAD, we augment the input to the BERTQA model in a manner similar to Section 3.1. The distinction here is that we additionally add the decision types "yes", "no", and "irrelevant" as parts of the input, such that the problem is fully solvable via span extraction. Similar to Section 3.1, let $U$ denote the BERT encoding of the length-$n$ input sequence. The BERTQA model predicts a start score $s$ and an end score $e$:

$s = \mathrm{softmax}(U W_s + b_s) \in \mathbb{R}^n$   (28)
$e = \mathrm{softmax}(U W_e + b_e) \in \mathbb{R}^n$   (29)

We take the answer as the span $(i, j)$ that gives the highest score $s_i e_j$ such that $j \ge i$. Because we augment the input with decision labels, the model can be fully supervised via extraction endpoints.

A.2 Creating noisy supervision for span extraction via span matching
The ShARC dataset is constructed from full dialogue trees in which annotators exhaustively annotate yes/no branches of follow-up questions. Consequently, each rule required to answer the initial user question forms a follow-up question in the full dialogue tree. In order to identify rule spans in the document, we first reconstruct the dialogue trees for all training examples in ShARC. For each document, we trim each follow-up question in its corresponding dialogue tree by removing punctuation and stop words. For each trimmed question, we find the shortest best-match span in the document that has the least edit distance from the trimmed question, which we take as the corresponding rule span. In addition, we extract similarly trimmed bullet points from the document as rule spans. Finally, we deduplicate the rule spans by removing those that are fully covered by a longer rule span. Our resulting set of rule spans is used as noisy supervision for the rule extraction module. This preprocessing code is included with our code release.",
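The two appendix procedures lend themselves to short sketches. Below is an illustrative Python reading of the best-span selection from Eqs. (28)-(29) and of the A.2 edit-distance matching; the function names, the token-level Levenshtein distance, the span-length cap, and the brute-force loop are our assumptions, not the released preprocessing code.

```python
def best_span(s, e):
    """Pick (i, j) with j >= i maximizing s[i] * e[j] (cf. Eqs. 28-29), in O(n)."""
    best_score, best_ij = -1.0, (0, 0)
    max_s, arg_s = s[0], 0
    for j in range(len(e)):
        if s[j] > max_s:
            max_s, arg_s = s[j], j       # best start score seen so far
        if max_s * e[j] > best_score:
            best_score, best_ij = max_s * e[j], (arg_s, j)
    return best_ij

def edit_distance(a, b):
    """Token-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def match_rule_span(trimmed_question, doc_tokens, max_len=30):
    """Shortest best-match span (cf. A.2): least edit distance to the
    trimmed follow-up question, with ties broken by span length."""
    best = None
    for i in range(len(doc_tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(doc_tokens) + 1)):
            d = edit_distance(trimmed_question, doc_tokens[i:j])
            key = (d, j - i, (i, j))
            if best is None or key < best:
                best = key
    return best[2]
```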
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "r1ejU8BQqr",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=r1ejU8BQqr",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Official Blind Review #2",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Ks8zhWQk2zI",
"year": null,
"venue": "EC2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Ks8zhWQk2zI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Expressiveness and robustness of first-price position auctions.",
"authors": [
"Paul Dütting",
"Felix A. Fischer",
"David C. Parkes"
],
"abstract": "It is desirable for an economic mechanism that its properties hold in a robust way across multiple equilibria and under varying assumptions regarding the information available to the participants. In this paper we focus on the design of position auctions and seek mechanisms that guarantee high revenue in every efficient equilibrium under both complete and incomplete information. Our main result identifies a generalized first-price auction with multi-dimensional bids as the only standard design capable of achieving this goal, even though valuations are one-dimensional. The fact that expressiveness beyond the valuation space is necessary for robustness provides an interesting counterpoint to previous work, which has highlighted the benefits of simple bid spaces. From a technical perspective, our results are interesting because they establish equilibrium existence for a multi-dimensional bid space, where standard techniques for establishing equilibrium existence break down.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "3KOntCCjzLT",
"year": null,
"venue": "EC2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=3KOntCCjzLT",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Mechanism with unique learnable equilibria.",
"authors": [
"Paul Dütting",
"Thomas Kesselheim",
"Éva Tardos"
],
"abstract": "The existence of a unique equilibrium is the classic tool for ensuring predictiveness of game theory. Typical uniqueness results, however, are for Nash and Bayes-Nash equilibria and do not guarantee that natural game playing dynamic converges to this equilibrium. In fact, there are well known examples in which the equilibrium is unique, yet natural learning behavior does not converge to it. Motivated by this, we strive for stronger uniqueness results. We do not only require that there is a unique equilibrium, but also that this equilibrium must be learnable. We adopt correlated equilibrium as our solution concept, as simple and natural learning algorithms guarantee that the empirical distribution of play converges to the space of correlated equilibria. Our main result is to show uniqueness of correlated equilibria in a large class of single-parameter mechanisms with matroid structure. We also show that our uniqueness result extends to problems with polymatroid structure under some conditions. Our model includes a number of special cases interesting on their own right, such as procurement auctions and Bertrand competitions. An interesting feature of our model is that we do not need to assume that the players have quasi-linear utilities, and hence can incorporate models with risk averse players and certain forms of externalities.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "TkVvSAkyjff",
"year": null,
"venue": "EC2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=TkVvSAkyjff",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Modularity and greed in double auctions.",
"authors": [
"Paul Dütting",
"Tim Roughgarden",
"Inbal Talgam-Cohen"
],
"abstract": "Designing double auctions is a complex problem, especially when there are restrictions on the sets of buyers and sellers that may trade with one another. The goal of this paper is to develop ``black-box reductions'' from double-auction design to the exhaustively-studied problem of designing single-sided mechanisms. We consider several desirable properties of a double auction: feasibility, dominant-strategy incentive-compability, the still stronger incentive constraints offered by a deferred-acceptance implementation, exact and approximate welfare maximization, and budget-balance. For each of these properties, we identify sufficient conditions on the two one-sided mechanisms --- one for the buyers, one for the sellers --- and on the method of composition, that guarantee the desired property of the double auction. Our framework also offers new insights into classic double-auction designs, such as the VCG and McAfee auctions with unit-demand buyers and unit-supply sellers.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "v0G7ceSknKh0",
"year": null,
"venue": "EC2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=v0G7ceSknKh0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The performance of deferred-acceptance auctions.",
"authors": [
"Paul Dütting",
"Vasilis Gkatzelis",
"Tim Roughgarden"
],
"abstract": "Deferred-acceptance auctions are auctions for binary single-parameter mechanism design problems whose allocation rule can be implemented using an adaptive reverse greedy algorithm. Milgrom and Segal [2014] recently introduced these auctions and proved that they satisfy a remarkable list of incentive guarantees: in addition to being dominant-strategy incentive-compatible, they are weakly group-strategyproof, can be implemented by ascending-clock auctions, and admit outcome-equivalent full-information pay-as-bid versions. Neither forward greedy mechanisms nor the VCG mechanism generally possess any of these additional incentive properties. The goal of this paper is to initiate the study of deferred-acceptance auctions from an approximation standpoint. We study these auctions through the lens of two canonical welfare-maximization problems, in knapsack auctions and in combinatorial auctions with single-minded bidders. For knapsack auctions, we prove a separation between deferred-acceptance auctions and arbitrary dominant-strategy incentive-compatible mechanisms. While the more general class can achieve an arbitrarily good approximation in polynomial time, and a constant-factor approximation via forward greedy algorithms, the former class cannot obtain an approximation guarantee sub-logarithmic in the number of items m, even with unbounded computation. We also give a polynomial-time deferred-acceptance auction that achieves an approximation guarantee of O(log m) for knapsack auctions.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "V8SLau7xgQ1",
"year": null,
"venue": "EC2014",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/2600057.2602874",
"forum_link": "https://openreview.net/forum?id=V8SLau7xgQ1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Multiplicative bidding in online advertising.",
"authors": [
"MohammadHossein Bateni",
"Jon Feldman",
"Vahab S. Mirrokni",
"Sam Chiu-wai Wong"
],
"abstract": "In this paper, we initiate the study of the multiplicative bidding language adopted by major Internet search companies. In multiplicative bidding, the effective bid on a particular search auction is the product of a base bid and bid adjustments that are dependent on features of the search (for example, the geographic location of the user, or the platform on which the search is conducted). We consider the task faced by the advertiser when setting these bid adjustments, and establish a foundational optimization problem that captures the core difficulty of bidding under this language. We give matching algorithmic and approximation hardness results for this problem; these results are against an information-theoretic bound, and thus have implications on the power of the multiplicative bidding language itself. Inspired by empirical studies of search engine price data, we then codify the relevant restrictions of the problem, and give further algorithmic and hardness results. Our main technical contribution is an O(log n)-approximation for the case of multiplicative prices and monotone values. We also provide empirical validations of our problem restrictions, and test our algorithms on real data against natural benchmarks. Our experiments show that they perform favorably compare with the baseline.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SioZDvNe_6S",
"year": null,
"venue": "IEEE Trans. Pattern Anal. Mach. Intell.1992",
"pdf_link": "https://ieeexplore.ieee.org/iel1/34/3436/00120328.pdf",
"forum_link": "https://openreview.net/forum?id=SioZDvNe_6S",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Some Defects in Finite-Difference Edge Finders.",
"authors": [
"Margaret M. Fleck"
],
"abstract": "This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.< ",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KY9yn2cQriD",
"year": null,
"venue": "IEEE Trans. Pattern Anal. Mach. Intell. 1992",
"pdf_link": "https://ieeexplore.ieee.org/iel1/34/3436/00120328.pdf",
"forum_link": "https://openreview.net/forum?id=KY9yn2cQriD",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Some Defects in Finite-Difference Edge Finders",
"authors": [
"Margaret M. Fleck"
],
"abstract": "This work illustrates and explains various artifacts in the output of five finite difference edge finders, those of J.F. Canny (1983, 1986), R.A. Boie et al. (1986) and R.A. Boie and I.J. Cox (1987), and three variations on that of D. Marr and E.C. Hildreth (1980), reimplemented with a common output format and method of noise suppression. These artifacts include gaps in boundaries, spurious boundaries, and deformation of region shape.< ",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "zMlNnT0DzM",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=zMlNnT0DzM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Nearly Optimal Pricing Algorithms for Production Constrained and Laminar Bayesian Selection",
"authors": [
"Nima Anari",
"Rad Niazadeh",
"Amin Saberi",
"Ali Shameli"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "MEFjJ1SzDW",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=MEFjJ1SzDW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Perils of Exploration under Competition: A Computational Modeling Approach",
"authors": [
"Guy Aridor",
"Kevin Liu",
"Aleksandrs Slivkins",
"Zhiwei Steven Wu"
],
"abstract": "We empirically study the interplay between exploration and competition. Systems that learn from interactions with users often engage in exploration: making potentially suboptimal decisions in order to acquire new information for future decisions. However, when multiple systems are competing for the same market of users, exploration may hurt a system's reputation in the near term, with adverse competitive effects. In particular, a system may enter a \"death spiral\", when the short-term reputation cost decreases the number of users for the system to learn from, which degrades the system's performance relative to competition and further decreases the market share. We ask whether better exploration algorithms are incentivized under competition. We run extensive numerical experiments in a stylized duopoly model in which two firms deploy multi-armed bandit algorithms and compete for myopic users. We find that duopoly and monopoly tend to favor a primitive \"greedy algorithm\" that does not explore and leads to low consumer welfare, whereas a temporary monopoly (a duopoly with an early entrant) may incentivize better bandit algorithms and lead to higher consumer welfare. Our findings shed light on the first-mover advantage in the digital economy by exploring the role that data can play as a barrier to entry in online markets.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "mTSxl06LD",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=mTSxl06LD",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Optimal Pricing in Markets with Non-Convex Costs",
"authors": [
"Navid Azizan Ruhi",
"Yu Su",
"Krishnamurthy Dvijotham",
"Adam Wierman"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "D_86dM83N06",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=D_86dM83N06",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Allocation for Social Good: Auditing Mechanisms for Utility Maximization",
"authors": [
"Taylor Lundy",
"Alexander Wei",
"Hu Fu",
"Scott Duke Kominers",
"Kevin Leyton-Brown"
],
"abstract": "We consider the problem of a nonprofit organization (\"center\") that must divide resources among subsidiaries (\"agents\"), based on agents' reported demand forecasts, with the aim of maximizing social good (agents' valuations for the allocation minus any payments that are imposed on them). We investigate the impact of a common feature of the nonprofit setting: the center's ability to audit agents who receive allocations, comparing their actual consumption with their reported forecasts. We show that auditing increases the power of mechanisms for utility maximization, both in unit-demand settings and beyond: in unit-demand settings, we consider both constraining ourselves to an allocation function studied in past work and allowing the allocation function to vary; beyond unit demand, we adopt the VCG allocation but modify the payment rule. Our ultimate goal is to show how to leverage auditing mechanisms to maximize utility in repeated allocation problems where payments are not possible; we show how any static auditing mechanism can be transformed to operate in such a setting, using the threat of reduced future allocations in place of monetary payments.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "EEpXj7QjHgk",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=EEpXj7QjHgk",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Robust Commitments and Partial Reputation",
"authors": [
"Vidya Muthukumar",
"Anant Sahai"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "N6SxR0lSB00",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=N6SxR0lSB00",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Imitative Follower Deception in Stackelberg Games",
"authors": [
"Jiarui Gan",
"Haifeng Xu",
"Qingyu Guo",
"Long Tran-Thanh",
"Zinovi Rabinovich",
"Michael J. Wooldridge"
],
"abstract": "Information uncertainty is one of the major challenges facing applications of game theory. In the context of Stackelberg games, various approaches have been proposed to deal with the leader's incomplete knowledge about the follower's payoffs, typically by gathering information from the leader's interaction with the follower. Unfortunately, these approaches rely crucially on the assumption that the follower will not strategically exploit this information asymmetry, i.e., the follower behaves truthfully during the interaction according to their actual payoffs. As we show in this paper, the follower may have strong incentives to deceitfully imitate the behavior of a different follower type and, in doing this, benefit significantly from inducing the leader into choosing a highly suboptimal strategy. This raises a fundamental question: how to design a leader strategy in the presence of a deceitful follower? To answer this question, we put forward a basic model of Stackelberg games with (imitative) follower deception and show that the leader is indeed able to reduce the loss due to follower deception with carefully designed policies. We then provide a systematic study of the problem of computing the optimal leader policy and draw a relatively complete picture of the complexity landscape; essentially matching positive and negative complexity results are provided for natural variants of the model. Our intractability results are in sharp contrast to the situation with no deception, where the leader's optimal strategy can be computed in polynomial time, and thus illustrate the intrinsic difficulty of handling follower deception. Through simulations we also examine the benefit of considering follower deception in randomly generated games.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LWGMHsVaRYz",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=LWGMHsVaRYz",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Complexity of Black-Box Mechanism Design with Priors",
"authors": [
"Evangelia Gergatsouli",
"Brendan Lucier",
"Christos Tzamos"
],
"abstract": "We study black-box reductions from mechanism design to algorithm design for welfare maximization in settings of incomplete information. Given oracle access to an algorithm for an underlying optimization problem, the goal is to simulate an incentive compatible mechanism. The mechanism will be evaluated on its expected welfare, relative to the algorithm provided, and its complexity is measured by the time (and queries) needed to simulate the mechanism on any input. While it is known that black-box reductions are not possible in many prior-free settings, settings with priors appear more promising: there are known reductions for Bayesian incentive compatible (BIC) mechanism design for general classes of welfare maximization problems. This dichotomy begs the question: which mechanism design problems admit black-box reductions, and which do not? Our main result is that black-box mechanism design is impossible under two of the simplest settings not captured by known positive results. First, for the problem of allocating n goods to a single buyer whose valuation is additive and independent across the goods, subject to a downward-closed constraint on feasible allocations, we show that there is no polytime (in n) BIC black-box reduction for expected welfare maximization. Second, for the setting of multiple single-parameter agents---where polytime BIC reductions are known---we show that no polytime reductions exist when the incentive requirement is tightened to Max-In-Distributional-Range. In each case, we show that achieving a sub-polynomial approximation to the expected welfare requires exponentially many queries, even when the set of feasible allocations is known to be downward-closed.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "GkidOGS6TSO",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=GkidOGS6TSO",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Seeding with Costly Network Information",
"authors": [
"Dean Eckles",
"Hossein Esfandiari",
"Elchanan Mossel",
"M. Amin Rahimian"
],
"abstract": "Seeding the most influential individuals based on the contact structure can substantially enhance the extent of a spread over the social network. Most of the influence maximization literature assumes the knowledge of the entire network graph. However, in practice, obtaining full knowledge of the network structure is very costly. We propose polynomial-time algorithms that provide almost tight approximation guarantees using a bounded number of queries to the graph structure. We also provide impossibility results to lower bound the query complexity and show tightness of our guarantees.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "O63QLyIV-U",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329572",
"forum_link": "https://openreview.net/forum?id=O63QLyIV-U",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Local Non-Bayesian Social Learning with Stubborn Agents",
"authors": [
"Daniel Vial",
"Vijay G. Subramanian"
],
"abstract": "In recent years, people have increasingly turned to social networks like Twitter and Facebook for news. In contrast to traditional news sources, these platforms allow users to simultaneously read news articles and share opinions with other users. Among other effects, this has led to the rise of fake news, sometimes spread via bots (automated social media accounts masquerading as real users). In this work, we devise and analyze a mathematical model describing such platforms. The model includes a large number of agents attempting to learn an underlying true state of the world in an iterative fashion. At each iteration, these agents update their beliefs about the true state based on noisy observations of the true state and the beliefs of a subset of other agents. These subsets may include a special type of agent we call bots, who attempt to convince others of an erroneous true state rather than learn (modeling users spreading fake news). This process continues for a finite number of iterations we call the learning horizon. Our analysis details three cases for the outcome of this process: agents may learn the true state, mistake the erroneous state promoted by the bots as true, or believe the state falls between the true and erroneous states. Which outcome occurs depends on the relationship between the number of bots and the learning horizon. This leads to several interesting consequences; for example, we show that agents can initially learn the true state but later forget it and believe the erroneous state to be true instead. In short, we argue that varying the learning horizon can lead to dramatically different outcomes. This is in contrast to existing works studying models like ours, which typically fix a finite horizon or only consider an infinite horizon.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "njDoixpyTnr",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=njDoixpyTnr",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Simple Mechanisms for Profit Maximization in Multi-item Auctions",
"authors": [
"Yang Cai",
"Mingfei Zhao"
],
"abstract": "We study a classical Bayesian mechanism design problem where a seller is selling multiple items to a buyer. We consider the case where the seller has costs to produce the items, and these costs are private information to the seller. How can the seller design a mechanism to maximize her profit? Two well-studied problems, revenue maximization in multi-item auctions and signaling in ad auctions, are special cases of our problem. We show that there exists a simple mechanism whose profit is at least 1/11 the optimal profit, when the buyer has a constraint-additive valuation over independent items. The approximation factor becomes 6 when the buyer is additive. Our result holds even when the seller's costs are correlated across items. We introduce a new class of mechanisms called permit-selling mechanisms. These mechanisms have two stages. For each item i, we create a separate permit that allows the buyer to purchase the item at its cost. In the first stage, we sell the permits without revealing any information about the costs. In the second stage, the seller reveals all the costs, and the buyer can buy item i by only paying the cost $c_i$ if the buyer has purchased the permit for item i in the first stage. We show that the best permit-selling mechanism or the best posted price mechanism is already a constant factor approximation to the optimal profit (6 for additive, and 11 for constrained additive). Indeed, we do not require the optimal permit-selling mechanism, only selling the permits separately or as a grand bundle suffices to achieve the above approximation ratio. Our proof is enabled by constructing a benchmark for the optimal profit via a novel dual solution and a new connection to revenue maximization in multi-item auctions with a subadditive bidder.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "StOLmNL6If",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329562",
"forum_link": "https://openreview.net/forum?id=StOLmNL6If",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Simple and Approximately Optimal Pricing for Proportional Complementarities",
"authors": [
"Yang Cai",
"Nikhil R. Devanur",
"Kira Goldner",
"R. Preston McAfee"
],
"abstract": "We study a new model of complementary valuations, which we call \"proportional complementarities.'' In contrast to common models, such as hypergraphic valuations, in our model, we do not assume that the extra value derived from owning a set of items is independent of the buyer's base valuations for the items. Instead, we model the complementarities as proportional to the buyer's base valuations, and these proportionalities are known market parameters. Our goal is to design a simple pricing scheme that, for a single buyer with proportional complementarities, yields approximately optimal revenue. We define a new class of mechanisms where some number of items are given away for free, and the remaining items are sold separately at inflated prices. We find that the better of such a mechanism and selling the grand bundle earns a 12-approximation to the optimal revenue for pairwise proportional complementarities. This confirms the intuition that items should not be sold completely separately in the presence of complementarities. In the more general case, a buyer has a maximum of proportional positive hypergraphic valuations, where a hyperedge in a given hypergraph describes the boost to the buyer's value for item i given by owning any set of items T in addition. The maximum-out-degree of such a hypergraph is d, and k is the positive rank of the hypergraph. For valuations given by these parameters, our simple pricing scheme is an O(min{d,k})-approximation.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "tzN6rMTuEL",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329630",
"forum_link": "https://openreview.net/forum?id=tzN6rMTuEL",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Energy Equilibria in Proof-of-Work Mining",
"authors": [
"Amos Fiat",
"Anna Karlin",
"Elias Koutsoupias",
"Christos H. Papadimitriou"
],
"abstract": "The Bitcoin protocol induces miners, through monetary rewards, to expend energy in order to add blocks to the chain. We show that, when energy costs are substantial and taken into account, counterintuitive and unintended strategic behavior results: In a simple bounded-horizon setting with two identical miners there is a unique pure symmetric equilibrium in which both miners first \"slow down\" in order to decrease the crypto complexity and then take advantage of this decrease. If miners have different energy efficiencies and are restricted to choose the same hash rate for many epochs, there is a unique pure equilibrium in which miners either participate at low levels that depend in intricate ways on all the other miners' efficiencies, or choose to abstain from mining if their efficiency is too low. In the general setting in which miners can adapt their hash rates over time, we show that, unless the number of miners is very small, the only possible pure equilibria are rather chaotic, with miners quitting and starting again periodically --- or there is no pure equilibrium at all. We discuss the implications of these results for the stability of proof-of-work protocols.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SCM2b6nwMKy",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=SCM2b6nwMKy",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Spatial Capacity Planning",
"authors": [
"Omar Besbes",
"Francisco Castro",
"Ilan Lobel"
],
"abstract": "We study the relationship between capacity and performance for a service firm with spatial operations, in the sense that requests arrive with origin-destination pairs. An example of such a system is a ride-hailing platform in which each customer arrives in the system with the need to travel from an origin to a destination. We propose a state-dependent queueing model that captures spatial frictions as well as spatial economies of scale through the service rate. In a classical M/M/n queueing model, the square root safety (SRS) staffing rule is known to balance server utilization and customer wait times. By contrast, we find that the SRS rule does not lead to such a balance in spatial systems. In a spatial environment, pickup times increase the load in the system; furthermore, they are an endogenous source of extra workload that leads the system to only operate efficiently if there is sufficient imbalance between supply and demand. In heavy traffic, we derive the mapping from load to operating regimes and establish implications on various metrics of interest. In particular, to obtain a balance of utilization and wait times, the service firm should use a higher safety factor, proportional to the offered load to the power of 2/3. We also discuss implications of these results for general systems.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
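The staffing comparison in the abstract above can be made concrete: with offered load R, square-root-safety staffing sets n = R + beta * R^(1/2), while the paper's heavy-traffic analysis points to a safety term proportional to R^(2/3) for spatial systems. A minimal numeric illustration follows (beta = 1 is an arbitrary choice; the paper derives the appropriate regime constants).

```python
# Compare square-root-safety staffing with the 2/3-power safety rule that
# the abstract above suggests for spatial systems. beta is illustrative.
def srs_staffing(offered_load: float, beta: float = 1.0) -> float:
    return offered_load + beta * offered_load ** 0.5

def spatial_staffing(offered_load: float, beta: float = 1.0) -> float:
    return offered_load + beta * offered_load ** (2.0 / 3.0)

for load in (100.0, 1000.0, 10000.0):
    print(load, round(srs_staffing(load), 1), round(spatial_staffing(load), 1))
```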
{
"id": "Diy02GhiBRy",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Diy02GhiBRy",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Static Pricing: Universal Guarantees for Reusable Resources",
"authors": [
"Omar Besbes",
"Adam N. Elmachtoub",
"Yunjie Sun"
],
"abstract": "We consider a fundamental pricing model in which a fixed number of units of a reusable resource are used to serve customers. Customers arrive to the system according to a stochastic process and upon arrival decide whether or not to purchase the service, depending on their willingness-to-pay and the current price. The service time during which the resource is used by the customer is stochastic and the firm may incur a service cost. This model represents various markets for reusable resources such as cloud computing, shared vehicles, rotable parts, and hotel rooms. In the present paper, we analyze this pricing problem when the firm attempts to maximize a weighted combination of three central metrics: profit, market share, and service level. Under Poisson arrivals, exponential service times, and standard assumptions on the willingness-to-pay distribution, we establish a series of results that characterize the performance of static pricing in such environments. In particular, while an optimal policy is fully dynamic in such a context, we prove that a static pricing policy simultaneously guarantees 78.9% of the profit, market share, and service level from the optimal policy. Notably, this result holds for any service rate and number of units the firm operates. In the special case where there are two units and the induced demand is linear, we also prove that the static policy guarantees 95.5% of the profit from the optimal policy. Our numerical findings on a large testbed of instances suggest that the latter result is quite indicative of the profit obtained by the static pricing policy across all parameters.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "zt5MvHWkod",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329608",
"forum_link": "https://openreview.net/forum?id=zt5MvHWkod",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Optimal Strategies of Blotto Games: Beyond Convexity",
"authors": [
"Soheil Behnezhad",
"Avrim Blum",
"Mahsa Derakhshan",
"Mohammad Taghi Hajiaghayi",
"Christos H. Papadimitriou",
"Saeed Seddighin"
],
"abstract": "The Colonel Blotto game, first introduced by Borel in 1921, is a well-studied game theory classic. Two colonels each have a pool of troops that they divide simultaneously among a set of battlefields. The winner of each battlefield is the colonel who puts more troops in it and the overall utility of each colonel is the sum of weights of the battlefields that s/he wins. Over the past century, the Colonel Blotto game has found applications in many different forms of competition from advertisements to politics to sports. Two main objectives have been proposed for this game in the literature: (i) maximizing the guaranteed expected payoff, and (ii) maximizing the probability of obtaining a minimum payoff u. The former corresponds to the conventional utility maximization and the latter concerns scenarios such as elections where the candidates' goal is to maximize the probability of getting at least half of the votes (rather than the expected number of votes). In this paper, we consider both of these objectives and show how it is possible to obtain (almost) optimal solutions that have few strategies in their support. One of the main technical challenges in obtaining bounded support strategies for the Colonel Blotto game is that the solution space becomes non-convex. This prevents us from using convex programming techniques in finding optimal strategies which are essentially the main tools that are used in the literature. However, we show through a set of structural results that the solution space can, interestingly, be partitioned into polynomially many disjoint convex polytopes that can be considered independently. Coupled with a number of other combinatorial observations, this leads to polynomial time approximation schemes for both of the aforementioned objectives. We also provide the first complexity result for finding the maximin of Blotto-like games: we show that computing the maximin of a generalization of the Colonel Blotto game that we call General Colonel Blotto is exponential time-complete.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "wiNZDsYEGfv",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=wiNZDsYEGfv",
"arxiv_id": null,
"doi": null
}
|
{
"title": "High-Multiplicity Fair Allocation: Lenstra Empowered by N-fold Integer Programming",
"authors": [
"Robert Bredereck",
"Andrzej Kaczmarczyk",
"Dusan Knop",
"Rolf Niedermeier"
],
"abstract": "We study the (parameterized) computational complexity of problems in the context of fair allocations of indivisible goods. More specifically, we show fixed-parameter tractability results for a broad set of problems concerned with envy-free, Pareto-efficient allocations of items (with agent-specific utility functions) to agents. In principle, this implies efficient exact algorithms for these in general computationally intractable problems whenever we face instances with few agents and low maximum (absolute) utility values. This holds true also in high-multiplicity settings where we may have high numbers of identical items. On the technical side, our approach provides algorithmic meta-theorems covering a rich set of fair allocation problems in the additive preferences model. To achieve this, our main technical contribution is to make an elaborate use of tools from integer linear programming. More specifically, we exploit results originally going back to a famous theorem of Lenstra [Math. Oper. Res. 1983] concerning (the fixed-parameter tractability of) Integer Linear Programs (ILPs) with bounded dimension (that is, the dimension shall be considered as a (small) parameter) and the more recent framework of (combinatorial) N-fold ILPs. We reveal and exploit a fruitful interaction between these two cornerstones in the theory of integer linear programming, which may be of independent interest in applications going beyond fair allocations.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "VDJJBeN4_Jm",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=VDJJBeN4_Jm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Identifying Bid Leakage in Procurement Auctions: Machine Learning Approach",
"authors": [
"Dmitry Ivanov",
"Alexander Nesterov"
],
"abstract": "We propose a novel machine-learning-based approach to detect bid leakage in first-price sealed-bid auctions. We extract and analyze the data on more than 1.4 million Russian procurement auctions between 2014 and 2018. As bid leakage in each particular auction is tacit, the direct classification is impossible. Instead, we reduce the problem of bid leakage detection to Positive-Unlabeled Classification. The key idea is to regard the losing participants as fair and the winners as possibly corrupted. This allows us to estimate the prior probability of bid leakage in the sample, as well as the posterior probability of bid leakage for each specific auction. We find that at least 16% of auctions are exposed to bid leakage. Bid leakage is more likely in auctions with a higher reserve price, lower number of bidders and lower price fall, and where the winning bid is received in the last hour before the deadline.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
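The reduction in the abstract above treats losers as known fair participants and winners as an unlabeled mixture of fair and corrupted ones. The sketch below applies a standard Elkan-Noto style positive-unlabeled correction to synthetic data; it assumes the labeled-at-random condition, which is a strong simplification of the authors' estimator, and all features are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: fair and corrupted bids differ in feature space.
fair = rng.normal(0.0, 1.0, size=(2000, 2))
corrupt = rng.normal(1.5, 1.0, size=(500, 2))

losers = fair[:1200]                         # losing participants: known fair
winners = np.vstack([fair[1200:], corrupt])  # winners: unlabeled mixture

X = np.vstack([losers, winners])
s = np.concatenate([np.ones(len(losers)), np.zeros(len(winners))])  # s=1: labeled fair

clf = LogisticRegression().fit(X, s)
g = clf.predict_proba(winners)[:, 1]  # estimate of P(labeled | x)

# Under labeled-at-random, P(labeled | x) = c * P(fair | x); estimate the
# constant c on examples known to be fair (Elkan & Noto, 2008).
c = clf.predict_proba(losers)[:, 1].mean()
p_fair = np.clip(g / c, 0.0, 1.0)  # posterior probability of being fair
print("estimated share of leaked auctions among winners:", 1.0 - p_fair.mean())
```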
{
"id": "XxsmYLrX23",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=XxsmYLrX23",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Regression Equilibrium",
"authors": [
"Omer Ben-Porat",
"Moshe Tennenholtz"
],
"abstract": "Prediction is a well-studied machine learning task, and prediction algorithms are core ingredients in online products and services. Despite their centrality in the competition between online companies who offer prediction-based products, the strategic use of prediction algorithms remains unexplored. The goal of this paper is to examine strategic use of prediction algorithms. We introduce a novel game-theoretic setting that is based on the PAC learning framework, where each player (aka a prediction algorithm aimed at competition) seeks to maximize the sum of points for which it produces an accurate prediction and the others do not. We show that algorithms aiming at generalization may wittingly mispredict some points to perform better than others on expectation. We analyze the empirical game, i.e., the game induced on a given sample, prove that it always possesses a pure Nash equilibrium, and show that every better-response learning process converges. Moreover, our learning-theoretic analysis suggests that players can, with high probability, learn an approximate pure Nash equilibrium for the whole population using a small number of samples.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PNt8fJehn1",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=PNt8fJehn1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Congressional Classification Challenge: Domain Specificity and Partisan Intensity",
"authors": [
"Hao Yan",
"Sanmay Das",
"Allen Lavoie",
"Sirui Li",
"Betsy Sinclair"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Oo1AMyPnyE",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329619",
"forum_link": "https://openreview.net/forum?id=Oo1AMyPnyE",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Incorporating Compatible Pairs in Kidney Exchange: A Dynamic Weighted Matching Model",
"authors": [
"Zhuoshu Li",
"Kelsey Lieberman",
"William Macke",
"Sofia Carrillo",
"Chien-Ju Ho",
"Jason Wellen",
"Sanmay Das"
],
"abstract": "Kidney exchange has been studied extensively from the perspective of market design, and a significant focus has been on better algorithms for finding chains and cycles to increase the number of possible matches. A more dramatic benefit could come from incorporating compatible pairs into the mechanism, but this possibility has been relatively understudied. In order to incentivize a compatible pair to participate in exchange, they must be offered a higher quality match for the recipient that can be performed without adding extra waiting time. In this paper, we make two main contributions to the study of incorporating compatible pairs in exchanges. First, we leverage the recently proposed Living Donor Kidney Profile Index (LKDPI) to measure match quality, and develop a novel simulator (based on data from a major transplant center) for the joint distribution of compatibility and quality across pairs. This simulator allows us to study the benefits of including compatible pairs under different models and assumptions. Second, we introduce a hybrid online/batch matching model with impatient (compatible) and patient (incompatible) pairs to capture the need for immediacy. We introduce new algorithms for matching in this model, including one based on online primal-dual techniques. Overall, our results indicate great potential in terms of both increased numbers of transplants of incompatible pairs (almost doubling the number transplanted) as well as improved match quality for recipients in compatible pairs (increasing expected graft survival by between 1 and 2 years). The results are also promising for hard-to-match subpopulations, including blood group O recipients.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ADYjo0VPIx0",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=ADYjo0VPIx0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Simple versus Optimal Contracts",
"authors": [
"Paul Dütting",
"Tim Roughgarden",
"Inbal Talgam-Cohen"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KGHWT_CyFqw",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329567",
"forum_link": "https://openreview.net/forum?id=KGHWT_CyFqw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Formal Barriers to Longest-Chain Proof-of-Stake Protocols",
"authors": [
"Jonah Brown-Cohen",
"Arvind Narayanan",
"Alexandros Psomas",
"S. Matthew Weinberg"
],
"abstract": "The security of most existing cryptocurrencies is based on a concept called Proof-of-Work, in which users must solve a computationally hard cryptopuzzle to authorize transactions (\"one unit of computation, one vote''). This leads to enormous expenditure on hardware and electricity in order to collect the rewards associated with transaction authorization. Proof-of-Stake is an alternative concept that instead selects users to authorize transactions proportional to their wealth (\"one coin, one vote\"). Some aspects of the two paradigms are the same. For instance, obtaining voting power in Proof-of-Stake has a monetary cost just as in Proof-of-Work: a coin cannot be freely duplicated any more easily than a unit of computation. However some aspects are fundamentally different. In particular, exactly because Proof-of-Stake is wasteless, there is no inherent resource cost to deviating (commonly referred to as the \"Nothing-at-Stake'' problem). In contrast to prior work, we focus on incentive-driven deviations (any participant will deviate if doing so yields higher revenue) instead of adversarial corruption (an adversary may take over a significant fraction of the network, but the remaining players follow the protocol). The main results of this paper are several formal barriers to designing incentive-compatible proof-of-stake cryptocurrencies (that don't apply to proof-of-work).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "oNK5mIiauXH",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329634",
"forum_link": "https://openreview.net/forum?id=oNK5mIiauXH",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Iterated Deep Reinforcement Learning in Games: History-Aware Training for Improved Stability",
"authors": [
"Mason Wright",
"Yongzhao Wang",
"Michael P. Wellman"
],
"abstract": "Deep reinforcement learning (RL) is a powerful method for generating policies in complex environments, and recent breakthroughs in game-playing have leveraged deep RL as part of an iterative multiagent search process. We build on such developments and present an approach that learns progressively better mixed strategies in complex dynamic games of imperfect information, through iterated use of empirical game-theoretic analysis (EGTA) with deep RL policies. We apply the approach to a challenging cybersecurity game defined over attack graphs. Iterating deep RL with EGTA to convergence over dozens of rounds, we generate mixed strategies far stronger than earlier published heuristic strategies for this game. We further refine the strategy-exploration process, by fine-tuning in a training environment that includes out-of-equilibrium but recently seen opponents. Experiments suggest this history-aware approach yields strategies with lower regret at each stage of training.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "NyOl4n12xst",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=NyOl4n12xst",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Adversarial Contract Design for Private Data Commercialization",
"authors": [
"Parinaz Naghizadeh",
"Arunesh Sinha"
],
"abstract": "The proliferation of data collection and machine learning techniques has created an opportunity for commercialization of private data by data aggregators. In this paper, we study this data monetization problem as a mechanism design problem, specifically using a contract-theoretic approach. Our proposed adversarial contract design framework provides a fundamental extension to the classic contract theory set-up in order to account for the heterogeneity in honest buyers' demands for data, as well as the presence of adversarial buyers who may purchase data to compromise its privacy. We propose the notion of Price of Adversary $(PoAdv)$ to quantify the effects of adversarial users on the data seller's revenue, and provide bounds on the $PoAdv$ for various classes of adversary utility. We also provide a fast approximate technique to compute contracts in the presence of adversaries.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "MJQuc3zzLv",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=MJQuc3zzLv",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Prior-free Data Acquisition for Accurate Statistical Estimation",
"authors": [
"Yiling Chen",
"Shuran Zheng"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "A1JMeu-mXA",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=A1JMeu-mXA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Fair Cake-Cutting in Practice",
"authors": [
"Maria Kyropoulou",
"Josué Ortega",
"Erel Segal-Halevi"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "1HxUZh7iAT3",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=1HxUZh7iAT3",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Tight Weight-dependent Competitive Ratios for Online Edge-weighted Bipartite Matching and Beyond",
"authors": [
"Will Ma",
"David Simchi-Levi"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "mrmbCm_zseg",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329563",
"forum_link": "https://openreview.net/forum?id=mrmbCm_zseg",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Smoothed Analysis of Multi-Item Auctions with Correlated Values",
"authors": [
"Alexandros Psomas",
"Ariel Schvartzman",
"S. Matthew Weinberg"
],
"abstract": "Consider a seller with m heterogeneous items for sale to a single additive buyer whose values for the items are arbitrarily correlated. It was previously shown that, in such settings, distributions exist for which the seller's optimal revenue is infinite, but the best \"simple\" mechanism achieves revenue at most one (Briest et al. 2015, Hart and Nisan 2012), even when m=2. This result has long served as a cautionary tale discouraging the study of multi-item auctions without some notion of \"independent items\". In this work we initiate a smoothed analysis of such multi-item auction settings. We consider a buyer whose item values are drawn from an arbitrarily correlated multi-dimensional distribution then randomly perturbed with magnitude δ under several natural perturbation models. On one hand, we prove that the above construction is surprisingly robust to certain natural perturbations of this form, and the infinite gap remains. On the other hand, we provide a smoothed model such that the approximation guarantee of simple mechanisms is smoothed-finite. We show that when the perturbation has magnitude δ, pricing only the grand bundle guarantees an O(1/δ)-approximation to the optimal revenue. That is, no matter the (worst-case) initially correlated distribution, these tiny perturbations suffice to bring the gap down from infinite to finite. We further show that the same guarantees hold when n buyers have values drawn from an arbitrarily correlated mn-dimensional distribution (without any dependence on n). Taken together, these analyses further pin down key properties of correlated distributions that result in large gaps between simplicity and optimality.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "E-o86ZI8kaw",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=E-o86ZI8kaw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Truthful Aggregation of Budget Proposals",
"authors": [
"Rupert Freeman",
"David M. Pennock",
"Dominik Peters",
"Jennifer Wortman Vaughan"
],
"abstract": "We study a participatory budgeting setting in which a single divisible resource (such as money or time) must be divided among a set of projects. For example, participatory budgeting could be used to decide how to divide a city's tax surplus between its departments of health, education, infrastructure, and parks. A voter might propose a division of the tax surplus among the four departments into the fractions (30%, 40%, 20%, 10%). The city could invite each citizen to submit such a budget proposal, and they could then be aggregated by a suitable mechanism. In this paper, we seek mechanisms of this form that are resistant to manipulation by the voters. In particular, we require that no voter can, by lying, move the aggregate division toward her preference on one alternative without moving it away from her preference by at least as much on other alternatives.\n \n \n In other words, we seek budget aggregation mechanisms that are incentive compatible when each voter's disutility for a budget division is equal to the 1 distance between that division and the division she prefers most. Goel et al. [4] showed that choosing an aggregate budget division that maximizes the welfare of the voters-that is, a division that minimizes the total 1 distance from each voter's report-is both incentive compatible and Pareto-optimal under this voter utility model. However, this utilitarian aggregate has a tendency to overweight majority preferences, creeping back towards all-or-nothing allocations. For example, imagine that a hundred voters prefer (100%, 0%) while ninety-nine prefer (0%, 100%). The utilitarian aggregate is (100%, 0%) even though the mean is close to (50%, 50%). In many participatory budgeting scenarios, the latter solution is more in the spirit of consensus. To capture this idea of fairness, we define a notion of proportionality, requiring that when voters are single-minded (as in this example), the fraction of the budget assigned to each alternative is equal to the proportion of voters who favor that alternative. Do there exist aggregators that are both incentive compatible and proportional?",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
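The numeric example in the abstract above is easy to verify: the utilitarian (total ℓ1 minimizing) aggregate snaps to the majority corner, while the mean stays near an even split. A minimal brute-force check (not the mechanism from the paper):

```python
import numpy as np

# 100 voters report (100%, 0%); 99 voters report (0%, 100%).
reports = np.array([[1.0, 0.0]] * 100 + [[0.0, 1.0]] * 99)

def total_l1(division: np.ndarray) -> float:
    return float(np.abs(reports - division).sum())

# Brute-force search over two-alternative divisions (t, 1 - t).
grid = [np.array([t, 1.0 - t]) for t in np.linspace(0.0, 1.0, 101)]
best = min(grid, key=total_l1)

print("utilitarian aggregate:", best)            # -> [1. 0.]
print("mean of reports:", reports.mean(axis=0))  # -> about [0.503 0.497]
```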
{
"id": "Wx0bxV5F2Bq",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Wx0bxV5F2Bq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "School Choice in Chile",
"authors": [
"José R. Correa",
"Rafael Epstein",
"Juan Escobar",
"Ignacio Rios",
"Bastián Bahamondes",
"Carlos Bonet",
"Natalie Epstein",
"Nicolas Aramayo",
"Martin Castillo",
"Andrés Cristi",
"Boris Epstein"
],
"abstract": "Centralized school admission mechanisms are an attractive way of improving social welfare and fairness in large educational systems. In this paper we report the design and implementation of the newly established school choice mechanism in Chile, where over 274,000 students applied to more than 6,400 schools. The Chilean system presents unprecedented design challenges that make it unique. On the one hand, it is a simultaneous nationwide system, making it one of the largest school admission problems worldwide. On the other hand, the system runs at all school levels, from Pre-K to 12th grade, raising at least two issues of outmost importance; namely, the system needs to guarantee their current seat to students applying for a school change, and the system has to favor the assignment of siblings to the same school. As in other systems around the world, we develop a model based on the celebrated Deferred Acceptance algorithm. The algorithm deals not only with the aforementioned issues, but also with further practical features such as soft-bounds and overlapping types. In this context we analyze new stability definitions, present the results of its implementation and conduct simulations showing the benefits of the innovations of the implemented system.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
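The mechanism described in the abstract above is built on the Deferred Acceptance algorithm. A minimal student-proposing version is sketched below; it omits the Chilean extensions (secured seats for students requesting a change, sibling priority, soft bounds, and overlapping types).

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing Deferred Acceptance.

    student_prefs: {student: [schools in preference order]}
    school_prefs:  {school: [students in priority order]}
    capacities:    {school: number of seats}
    """
    rank = {s: {st: i for i, st in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}
    held = {s: [] for s in school_prefs}
    free = list(student_prefs)

    while free:
        st = free.pop()
        prefs = student_prefs[st]
        if next_choice[st] >= len(prefs):
            continue  # student has exhausted her list; stays unassigned
        school = prefs[next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        # Keep only the highest-priority students up to capacity.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())  # lowest priority is rejected

    return {s: sorted(students) for s, students in held.items()}

students = {"ana": ["s1", "s2"], "ben": ["s1", "s2"], "eva": ["s1"]}
schools = {"s1": ["eva", "ana", "ben"], "s2": ["ben", "ana"]}
print(deferred_acceptance(students, schools, {"s1": 1, "s2": 1}))
```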
{
"id": "yCpvUDYqR3S",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=yCpvUDYqR3S",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Optimal Algorithm for Bayesian Incentive-Compatible Exploration",
"authors": [
"Lee Cohen",
"Yishay Mansour"
],
"abstract": "We consider a social planner faced with a stream of myopic selfish agents. The goal of the social planner is to maximize the social welfare, however, it is limited to using only information asymmetry (regarding previous outcomes) and cannot use any monetary incentives. The planner recommends actions to agents, but her recommendations need to be Bayesian Incentive Compatible to be followed by the agents.\n \n \n Our main result is an optimal algorithm for the planner, in the case that the actions realizations are deterministic and have limited support, making significant important progress on this open problem. Our optimal protocol has two interesting features. First, it always completes the exploration of a priori more beneficial actions before exploring a priori less beneficial actions. Second, the randomization in the protocol is correlated across agents and actions (and not independent at each decision time).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "O9YXulBBWt",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=O9YXulBBWt",
"arxiv_id": null,
"doi": null
}
|
{
"title": "No Stratification Without Representation",
"authors": [
"Gerdus Benadè",
"Paul Gölz",
"Ariel D. Procaccia"
],
"abstract": "Sortition is an alternative approach to democracy, in which representatives are not elected but randomly selected from the population. Most electoral democracies fail to accurately represent even a handful of protected groups. By contrast, sortition guarantees that every subset of the population will in expectation fill their fair share of the available positions. This fairness property remains satisfied when the sample is stratified based on known features. Moreover, stratification can greatly reduce the variance in the number of positions filled by any unknown group, as long as this group correlates with the strata. Our main result is that stratification cannot increase this variance by more than a negligible factor, even in the presence of indivisibilities and rounding. When the unknown group is unevenly spread across strata, we give a guarantee on the reduction in variance with respect to uniform sampling. We also contextualize stratification and uniform sampling in the space of fair sampling algorithms. Finally, we apply our insights to an empirical case study.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Izpe9OJoa_l",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Izpe9OJoa_l",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Individual Fairness in Hindsight",
"authors": [
"Swati Gupta",
"Vijay Kamble"
],
"abstract": "Since many critical decisions impacting human lives are increasingly being made by algorithms, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant for online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF where treatment of individuals is required to be individually fair relative to the past as well as future, while in FH, we require a one-sided notion of individual fairness that is defined relative to only the past decisions. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model. Linear regret relative to optimal individually fair decisions is inevitable under FT for non-trivial examples. On the other hand, we design a new algorithm: Cautious Fair Exploration (CAFE), which satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to take good decisions in the long-run.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "w8UwmbyaKjh",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=w8UwmbyaKjh",
"arxiv_id": null,
"doi": null
}
|
{
"title": "How Do Classifiers Induce Agents to Invest Effort Strategically?",
"authors": [
"Jon M. Kleinberg",
"Manish Raghavan"
],
"abstract": "Algorithms are often used to produce decision-making rules that classify or evaluate individuals. When these individuals have incentives to be classified a certain way, they may behave strategically to influence their outcomes. We develop a model for how strategic agents can invest effort in order to change the outcomes they receive, and we give a tight characterization of when such agents can be incentivized to invest specified forms of effort into improving their outcomes as opposed to \"gaming\" the classifier. We show that whenever any \"reasonable\" mechanism can do so, a simple linear mechanism suffices.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "cBoZJ4Sq-Jb",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=cBoZJ4Sq-Jb",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning in Structured MDPs with Convex Cost Functions: Improved Regret Bounds for Inventory Management",
"authors": [
"Shipra Agrawal",
"Randy Jia"
],
"abstract": "We consider a stochastic inventory control problem under censored demands, lost sales, and positive lead times. This is a fundamental problem in inventory management, with significant literature establishing near-optimality of a simple class of policies called \"base-stock policies\" for the underlying Markov Decision Process (MDP), as well as convexity of long run average-cost under those policies. We consider the relatively less studied problem of designing a learning algorithm for this problem when the underlying demand distribution is unknown. The goal is to bound regret of the algorithm when compared to the best base-stock policy. We utilize the convexity properties and a newly derived bound on bias of base-stock policies to establish a connection to stochastic convex bandit optimization. Our main contribution is a learning algorithm with a regret bound of ~O (L√T+D) for the inventory control problem. Here L is the fixed and known lead time, and D is an unknown parameter of the demand distribution described roughly as the number of time steps needed to generate enough demand for depleting one unit of inventory. Notably, even though the state space of the underlying MDP is continuous and L-dimensional, our regret bounds depend linearly on L. Our results significantly improve the previously best known regret bounds for this problem where the dependence on L was exponential and many further assumptions on demand distribution were required. The techniques presented here may be of independent interest for other settings that involve large structured MDPs but with convex cost functions.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "RpIYY7mVY4U",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=RpIYY7mVY4U",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Price of Privacy in the Keynesian Beauty Contest",
"authors": [
"Hadi Elzayn",
"Zachary Schutzman"
],
"abstract": "The Keynesian Beauty Contest is a classical game in which strategic agents seek to both accurately guess the true state of the world as well as the average action of all agents. We study an augmentation of this game where agents are concerned about revealing their private information and additionally suffer a loss based on how well an observer can infer their private signals. We solve for an equilibrium of this augmented game and quantify the loss of social welfare as a result of agents acting to obscure their private information, which we call the 'price of privacy'. We analyze two versions of this this price: one from the perspective of the agents measuring their diminished ability to coordinate due to acting to obscure their information and another from the perspective of an aggregator whose statistical estimate of the true state of the world is of lower precision due to the agents adding random noise to their actions. We show that these quantities are high when agents care very strongly about protecting their personal information and low when the quality of the signals the agents receive is poor.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "eOHKMTReebZ",
"year": null,
"venue": "ECAI 2010",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-60750-606-5-759",
"forum_link": "https://openreview.net/forum?id=eOHKMTReebZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using Background Knowledge to Support Coreference Resolution",
"authors": [
"Volha Bryl",
"Claudio Giuliano",
"Luciano Serafini",
"Kateryna Tymoshenko"
],
"abstract": "Systems based on statistical and machine learning methods have been shown to be extremely effective and scalable for the analysis of large amount of textual data. However, in the recent years, it becomes evident that one of the most important direction of improvement in natural language processing (NLP) tasks, like word sense disambiguation, coreference resolution, relation extraction, and other tasks related to knowledge extraction, is by exploiting semantics. While in the past, the unavailability of rich and complete semantic descriptions constituted a serious limitation of their applicability, nowadays, the Semantic Web made available a large amount of logically encoded information (e.g. ontologies, RDF(S)-data, linked data, etc.), which constitute a valuable source of semantics. However, web semantics cannot be easily plugged into machine learning systems. Therefore the objective of this paper is to define a reference methodology for combining semantics information available in the web under the form of logical theories, with statistical methods for NLP. The major problems that we have to solve to implement our methodology concern (i) the selection of the correct and minimal knowledge among the large amount available in the web, (ii) the representation of uncertain knowledge, and (iii) the resolution and the encoding of the rules that combine knowledge retrieved from Semantic Web sources with semantics in the text. In order to evaluate the appropriateness of our approach, we present an application of the methodology to the problem of intra-document coreference resolution, and we show by means of some experiments on the ACE 2005 dataset, how the injection of knowledge is correlated to the improvement of the performance of our approach on this tasks.",
"keywords": [],
"raw_extracted_content": "Using Background Knowledge to Support\nCoreference Resolution\nVolha Bryl and Claudio Giuliano and Luciano Serafini and Kateryna Tymoshenko1\nAbstract. Systems based on statistical and machine learning meth-\nods have been shown to be extremely effective and scalable for the\nanalysis of large amount of textual data. However, in the recentyears, it becomes evident that one of the most important directionof improvement in natural language processing (NLP) tasks, likeword sense disambiguation, coreference resolution, relation extrac-tion, and other tasks related to knowledge extraction, is by exploit-ing semantics. While in the past, the unavailability of rich and com-plete semantic descriptions constituted a serious limitation of theirapplicability, nowadays, the Semantic Web made available a large\namount of logically encoded information (e.g. ontologies, RDF(S)-\ndata, linked data, etc.), which constitute a valuable source of seman-\ntics. However, web semantics cannot be easily plugged into machinelearning systems. Therefore the objective of this paper is to define areference methodology for combining semantics information avail-able in the web under the form of logical theories, with statisticalmethods for NLP . The major problems that we have to solve to im-plement our methodology concern (i) the selection of the correct andminimal knowledge among the large amount available in the web,(ii) the representation of uncertain knowledge, and (iii) the resolu-\ntion and the encoding of the rules that combine knowledge retrieved\nfrom Semantic Web sources with semantics in the text. In order toevaluate the appropriateness of our approach, we present an applica-tion of the methodology to the problem of intra-document corefer-ence resolution, and we show by means of some experiments on theACE 2005 dataset, how the injection of knowledge is correlated to\nthe improvement of the performance of our approach on this tasks.\n1 Introduction\nThe task of coreference resolution consists in identifying noun\nphrases (or mentions) that refer to the same real-world entity. Forexample, it is required to identify that the mentions Barack Obama\nand president are coreferent in the text “Barack Obama will make an\nappearance on the TV show. The president is scheduled to come onFriday evening. ” This constitutes an important subtask in many nat-\nural language processing (NLP) applications, such as, information\nextraction, textual entailment, and question answering.\nMachine learning (ML) is widely used to approach the coreference\ntask. State-of-the-art coreference resolvers are mostly extensions of\nthe Soon et al. approach in which a mention-pair classifier is trained\nusing solely surface-level features to determine whether two men-\ntions are coreferring or not [21].\nIn the last decade, two independent research lines have extended\nthe Soon et al. approach yielding significant improvements in accu-racy. The first aims at defining a more sophisticated ML framework\n1Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italyto overcome the limits of the mention-pair model. Entity-mentionand mention-ranking models and their combination cluster-ranking\nare some of the relevant approaches proposed (e.g. [5, 11]).\nThe second research line investigates the usage of semantic knowl-\nedge sources to augment the feature space. Here the majority of the\napproaches exploit WordNet\n2and, more recently, Wikipedia3or cor-\npora annotated with semantic classes (e.g. 
[13, 15]) to define semantic features, e.g. the semantic relations and the semantic similarity between two mentions.
Nowadays, the Semantic Web has made available a large amount of logically encoded information (e.g. ontologies, RDF(S) data, linked data, etc.), which constitutes a valuable source of semantics. However, the extension of state-of-the-art coreference methods with these resources is not a trivial task due to the following reasons:
• the heterogeneity and the ambiguity of the schemes adopted by the different resources of the Semantic Web. This means, for instance, that the same relation can be encoded by different URIs, and that URIs are used by different resources for denoting different relations.
• the irregular coverage of the knowledge available in the web. This means that for some “famous” entities the Semantic Web contains a large amount of knowledge, of which only a little is relevant for solving coreference, while for other entities there is no knowledge at all.
• the logical-statistical knowledge integration problem, i.e., the fact that algorithms for coreference resolution are based on statistical feature models, while background knowledge in the Semantic Web is encoded in some logical form.
In this paper, we define a methodology for coreference resolution that exploits background knowledge available in the web, by proposing three practical solutions to the aforementioned problems:
• To tackle the first problem, we propose a method to map terms in text to URIs through DBpedia [2] mediation. Since most of the resources available in the Semantic Web are linked to DBpedia, we can use it as a semantic mediator. So we propose to link text with DBpedia entries and then to exploit the linking between DBpedia and the other resources to access the knowledge encoded in them. DBpedia represents a practical choice, as it is playing a central role in the development of the Semantic Web, given the large and growing number of resources linked to it, which makes DBpedia one of the central interlinking hubs of the emerging Web of Data.
• To tackle the issue of selecting the subset of knowledge relevant for coreference, we propose to include only the knowledge that relates two or more entities of the same document, and knowledge related to some syntactic feature. For the first type of knowledge, for instance, we consider the class-membership relation (e.g. we select the knowledge President(Barack_Obama) when the text contains “president” and “Barack Obama”), the aliases relation (e.g. we select USA = United_States when the text contains both “the United States” and “USA”), and so on. As far as knowledge connected with syntactic features is concerned, we select, for instance, axioms about gender, like wife(x) → female(x).
• The problem of integrating statistical (feature-based) information together with background knowledge expressed in the RDF/OWL formalism has been tackled by using an inference engine that supports uncertain reasoning. We select the Alchemy tool [1] since it allows for the integration of uncertain knowledge and of facts expressed in a first-order language. Alchemy provides both reasoning and learning functionalities, though we only use the reasoning part.
The extension of this work, however, could require learning capabilities.
To evaluate the methodology, we run a number of experiments, which are reported in Section 5. The results show that our method performs in the order of the state-of-the-art coreference algorithms, and, what is more important, that there is a correlation between the presence of the background knowledge and the improvement of performance. This allows us to draw two types of conclusions. First, using background knowledge provides a tangible advantage for coreference resolution, and second, by using the methodology presented in this paper, more improvement could be obtained by simply making new background knowledge available to the system.

2 Related work
Soon et al. [21] propose a machine learning framework for coreference resolution, which has become a basis for many later approaches [13, 15, 25]. Soon et al. propose a set of twelve features: lexical, grammatical, positional and semantic. The latter includes the semantic class agreement and alias features. Semantic classes of mentions are obtained from WordNet, and the alias feature is calculated only for pairs of named entities. They achieve precision of 67.6%, recall of 58.6% and F-measure of 62.60% on the MUC-6 data set, and precision, recall and F-measure of 65.5%, 56.1% and 60.64% on MUC-7, respectively.
In the work by Ng and Cardie [13] all the feature subsets from [21] are expanded to 56 features, including new semantic features obtained from WordNet based on the ancestor-descendant relationship and the graph distance between mentions. However, experiments with the full feature set show a decrease in precision on common nouns to 40.1%. To improve the performance, a number of features were discarded, among which are the semantic ones. With the reduced set of features, precision and recall are 74.9% and 64.1% on the MUC-6 dataset, and 70.8% and 57.4% on MUC-7.
According to Ng [12], approaches like [21] and [13] assign to a common noun the most frequent sense from WordNet, which may be the reason why in [21] the semantic class agreement feature has zero F-measure. To assign semantic classes to mentions, Ng trains a classifier on the BBN entity corpus. They propose to use the obtained semantic classes both as features and as constraints in eight different ways, thus improving the precision of common noun resolution by 2-6% over [21].
Poesio et al. [14] resolve the coreferences that cannot be resolved just by string matching. They use machine learning techniques to find the best combination of local focus features and lexical distance features, which were calculated using the Google API and WordNet 1.7.1. Google features were based on the frequency of predefined patterns that indicate coreference. Features from WordNet were based on its hypernym structure. They obtained an F-measure of 79.6% using the WordNet features and 77.7% using the Google features.
Many approaches use Wikipedia as a source of semantic information. Yang and Su [25] exploit Wikipedia to extract patterns that indicate semantic relatedness. They add pattern-based features to the feature set of [21], thus improving recall by up to 4.3% and F-measure by up to 2.1% on the ACE 2004 data set.
Ponzetto and Strube [15] expand the semantic feature subset of [21] by adding two semantic similarity features obtained from the WordNet taxonomy and six features obtained from the Wikipedia article texts and category structure.
They improve F-measure by 3.4% over the baseline [21] on the ACE 2003 (BNEWS/NWIRE) dataset. To find the correct Wikipedia articles for the mentions, the authors query Wikipedia for pages titled as the head lemma. If a disambiguation page is hit, they use a heuristic algorithm. However, the Wikipedia search engine, when queried for a term, very often returns an article about its most frequent sense.
Haghighi and Klein [9] propose a modular approach. In one of the modules they check mention pairs for compatibility. For this purpose they create corpora from 25k articles of the English Wikipedia and 1.8 million sentences of a newswire. It helps them to improve the pairwise F1 from 55.5% to 58% on the ACE2004-ROTH-DEV corpus over other non-semantic modules of their system.
Poesio et al. [23] propose BART, a modular toolkit for coreference resolution. In the feature extraction module, the semantic features are the features from [21] and [15]. They reach 65.8% F-measure on MUC-6 and 62.9% F-measure on MUC-7.
Another possible source of semantic information is the knowledge base system Wikitology 2.0. Finin et al. [7] constructed it on the basis of information from Wikipedia and structured knowledge from DBpedia and FreeBase. They use Wikitology 2.0 to solve the ACE task of cross-document coreference resolution. Finin et al. extract intra-document entities using the BBN Serif system. They transform entities into so-called entity documents (EDOCs), which contain various information about the entity's mentions. EDOCs are mapped to Wikitology 2.0. For a given entity, the knowledge base returns the vector of matches against Wikipedia article entries and the vector of matches against Wikipedia categories. Finin et al. define twelve Wikitology features based on similarity measures of the article/category vectors. Evaluation was performed on the ACE 2008 dataset.

3 Background Knowledge Acquisition
This section describes how we train and evaluate a system for acquiring background knowledge from resources linked to DBpedia.

3.1 Linking to DBpedia
In order to acquire background knowledge from the Semantic Web, we need to link each mention in a given text to a DBpedia entry and then to exploit the existing links among DBpedia and the other Web resources (e.g., YAGO [22], an ontology extracted from Wikipedia and unified with WordNet) to access the knowledge encoded in them. The linking problem is cast as a word sense disambiguation (WSD) exercise, in which each mention in text (excluding pronouns) has to be disambiguated using Wikipedia to provide the sense inventory and the training data. The idea of using Wikipedia to train a supervised WSD system was first proposed in [3]. Notice that linking to Wikipedia entails linking to its structured twin DBpedia; consequently, from now on we use the terms Wikipedia page and DBpedia entry interchangeably. The proposed approach is summarized as follows.

3.1.1 Training Set
To create the training set, for each mention m, we collect from the English Wikipedia dump all contexts where m is an anchor of an internal link (a context corresponds to a line of text in the Wikipedia dump and is represented as a paragraph in a Wikipedia article). The set of target articles represents the senses of m in DBpedia, and the contexts are used as labeled training examples. For example, the proper noun Bush is a link anchor in 17,067 different contexts that point to 20 different DBpedia entries; George_W._Bush, Bush_(band), and Dave_Bush are some examples of possible senses. A minimal extraction sketch is given below.
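The following sketch approximates that anchor-context collection. It is not the authors' pipeline: real dumps require full wikitext parsing, and the regex below covers only the plain [[Target|anchor]] and [[Target]] link forms; all names are illustrative.

```python
import re
from collections import defaultdict

# Matches [[Target|anchor]] and [[Target]] internal links in wikitext.
LINK = re.compile(r"\[\[([^\]|#]+)(?:\|([^\]]+))?\]\]")

def anchor_training_examples(paragraphs):
    """Map each anchor text (mention) to (context, DBpedia-style sense) pairs."""
    examples = defaultdict(list)
    for paragraph in paragraphs:
        # Plain-text context: replace each link with its visible anchor text.
        plain = LINK.sub(lambda m: m.group(2) or m.group(1), paragraph)
        for m in LINK.finditer(paragraph):
            target = m.group(1).strip().replace(" ", "_")
            anchor = (m.group(2) or m.group(1)).strip()
            examples[anchor].append((plain, target))
    return examples

text = ["Alternative rock bands of the mid-90s, including [[Bush (band)|Bush]]"
        " and [[Silverchair]].",
        "[[George W. Bush|Bush]] was elected in 2000."]
for anchor, pairs in anchor_training_examples(text).items():
    print(anchor, pairs)
```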
The set of contexts with their corresponding senses is then used to train the WSD system described below. For example, the context “Alternative Rock bands from the mid-90's, including Bush, Silverchair, and Sponge.” is a training instance for the sense defined by the DBpedia entry Bush_(band), its label.

3.1.2 Learning Algorithm
To disambiguate mentions in text, we implemented a kernel-based approach like in [8]. Different kernel functions are employed to integrate syntactic, semantic, and pragmatic knowledge sources typically used in the WSD literature. The strategy adopted by kernel methods consists of splitting the learning problem into two parts. They first embed the input data in a suitable feature space, and then use a linear algorithm (e.g., support vector machines) to discover nonlinear patterns in the input space. The kernel function is the only task-specific component of the learning algorithm. For each knowledge source a specific kernel has been defined. By exploiting the properties of kernels, the basic kernels are then combined to define the WSD kernel. Specifically, we used a combination of gap-weighted subsequences, bag-of-words, and latent semantic kernels [20].
Gap-weighted subsequences kernel. This kernel learns syntactic and associative relations between words in a local context. We extended the gap-weighted subsequences kernel to subsequences of word forms, stems, part-of-speech tags, and orthographic features (capitalization, punctuation, numerals, etc.). We defined gap-weighted subsequences kernels to work on subsequences of length up to 5.
Bag-of-words kernel. This kernel learns domain, semantic, and topical information. The bag-of-words kernel takes as input a wide context window around the target mention. Words are represented using stems. The main drawback of this approach is the need for a large amount of training data to reliably estimate model parameters.
Latent semantic kernel. To overcome the drawback of the bag-of-words kernel, we incorporate semantic information acquired from the English Wikipedia in an unsupervised way by means of a latent semantic kernel. This kernel extracts semantic information through co-occurrence analysis in the corpus. The technique used to extract the co-occurrence statistics relies on a singular value decomposition of the term-by-document matrix.

3.1.3 Implementation details
The latent semantic model is derived from the 200,000 most visited Wikipedia articles, after removing terms that occur less than 5 times; the resulting dictionaries contain about 300,000 and 150,000 terms, respectively. We used the SVDLIBC package (http://tedlab.mit.edu/~dr/svdlibc/) to compute the SVD, truncated to 400 dimensions. To classify each mention into DBpedia entries, we used the LIBSVM package (http://www.csie.ntu.edu.tw/~cjlin/libsvm/). No parameter optimization was performed.

3.2 Evaluation
For evaluation, we use a subset of the English ACE 2005 training set (http://www.itl.nist.gov/iad/mig//tests/ace/ace05/index.html), which comprises 9 documents with 353 proper nouns. We restricted the evaluation to proper nouns as YAGO, our source of background knowledge, has a limited coverage of common nouns. We carried out the evaluation by manually checking the DBpedia link assigned by the WSD system. The evaluation showed that the WSD system achieved precision, recall, and F1 of 85%, 91%, and 88%, respectively. The baseline system based on the most-frequent-sense heuristic achieved precision, recall, and F1 of 82%, 88%, and 85%, respectively.
3.2 Evaluation

For evaluation, we use a subset of the English ACE 2005 training set,[7] which comprises 9 documents with 353 proper nouns. We restricted the evaluation to proper nouns, as YAGO, our source of background knowledge, has limited coverage of common nouns. We carried out the evaluation by manually checking the DBpedia link assigned by the WSD system. The evaluation showed that the WSD system achieved precision, recall, and F1 of 85%, 91%, and 88%, respectively. The baseline system based on the most-frequent-sense heuristic achieved precision, recall, and F1 of 82%, 88%, and 85%, respectively.

[7] http://www.itl.nist.gov/iad/mig//tests/ace/ace05/index.html

In addition, we conducted an error analysis. We discovered that 37% of the errors are due to missing DBpedia entries, 31% to lack of training data, and 32% to classification errors.

4 Coreference Resolution with Background Knowledge

In this section we explain how we have implemented the framework to test the main hypothesis of the paper: whether the use of background knowledge obtained from structured Semantic Web resources improves the performance of NLP tasks, namely the coreference resolution task. The key choice here is the selection of an inference tool to be used for the task. Once the tool is selected, its inputs need to be constructed; that is, we need to define a model for solving the task and determine how the text corpus (enriched with background knowledge as described in the previous section) is to be processed.

4.1 Tool selection: Alchemy

A recently introduced family of approaches to coreference resolution tries to cast the coreference task into a logical theory that supports the representation of uncertain knowledge. Among these approaches we can find a number of works [17, 10, 4] based on the formalism called Markov logic [6], a first-order probabilistic language which combines first-order logic with probabilistic graphical models.

In essence, a Markov logic model is a set of first-order rules with a weight associated to each rule. Weights can be learned from the available evidence (training data) or otherwise defined, and then inference is performed on new (test) data. Such a representation of the model is intuitive and allows the background knowledge to be integrated into it naturally. It has been shown that the Markov logic framework is competitive in solving NLP tasks (see, for instance, [16, 19, 18], and [1] for more references).
Another advantage of the weighted first-order representation is that the model can be easily extended with extra (background) knowledge by simply adding logical axioms, thus minimizing the engineering effort and making the knowledge enrichment step more straightforward and intuitive.

Given the above, the inference tool we have selected for the coreference resolution task is the inference module of the Alchemy system [1], with Markov logic as the representation language.

A key concept in Markov logic is that of a Markov logic network, which is a set of pairs (F_i, w_i), where F_i is a first-order formula and w_i is a real number. Together with a set of constants, it defines a Markov network, which contains a node for each possible grounded predicate, with the value of a node equal to 1 if the predicate is true and 0 otherwise. There is an edge between two nodes in the network if the corresponding grounded predicates appear together in at least one grounding of at least one formula F_i. A clique in such a graph corresponds to a grounded formula. A feature f_j(x) is associated with each such clique, and has value 1 if the corresponding grounded formula holds and 0 otherwise. The Markov logic network defines the joint probability distribution over possible worlds x (where x is an assignment of values to all the grounded predicates in the network) as follows:

P(X = x) = \frac{1}{Z} \exp\Big( \sum_{j=1}^{F} w_j f_j(x) \Big), \qquad f_j(x) \in \{0, 1\},

where F is the number of grounded formulae in the network, w_j is the weight of the formula that the j-th grounding instantiates, and Z is the normalization constant. To perform inference on Markov logic models, Alchemy combines weighted satisfiability (SAT) solvers and Markov chain Monte Carlo inference techniques for graphical models [6].
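As a small worked example of this formula, the sketch below scores possible worlds by summing the weights of the grounded formulas each world satisfies, and normalizes by brute-force enumeration. The two toy rules and their weights are made up for illustration and are not the model of Section 4.2.

```python
import math
from itertools import product

# A "world" assigns True/False to every grounded predicate.
# Each grounded formula is a (weight, predicate-over-worlds) pair.
grounded_formulas = [
    # match(a,b) -> corefer(a,b), encoded as its material implication
    (2.0, lambda w: (not w["match(a,b)"]) or w["corefer(a,b)"]),
    # a mild prior against coreference
    (0.5, lambda w: not w["corefer(a,b)"]),
]

def unnormalized_log_prob(world):
    # sum_j w_j * f_j(world), with f_j in {0, 1}
    return sum(w for w, f in grounded_formulas if f(world))

atoms = ["match(a,b)", "corefer(a,b)"]
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]
Z = sum(math.exp(unnormalized_log_prob(w)) for w in worlds)  # normalization constant

for w in worlds:
    print(w, math.exp(unnormalized_log_prob(w)) / Z)
```

Alchemy avoids this brute-force enumeration by combining SAT-based and MCMC inference, but the probability it computes is exactly this quantity.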
The Alchemy inference module takes as inputs (i) a Markov logic model, that is, a list of weighted first-order rules, and (ii) an evidence database, that is, a list of known properties (true or false values of predicates) of domain objects. In the case of coreference resolution, the domain objects are the named entities in the text, and the properties they might have are gender, number, distance, semantic class, etc. In the two following subsections we discuss in detail how these two parts of the input are constructed. As output, the Alchemy inference module produces a list of all possible coreference pairs with associated probabilities. The post-processing of the output is discussed in Section 4.4.

4.2 Markov logic model

In defining a model for coreference resolution, we were inspired by the Soon et al. baseline [21], which uses the following features: pairwise distance (in terms of number of sentences), string match, alias, number, gender and semantic class agreement, pronoun, definite/demonstrative noun phrase, and a both-proper-names feature. This approach achieves an F-measure of 62.2% on the MUC-6 coreference task and of 60.4% on the MUC-7 coreference task.

A Markov logic model consists of a list of predicates and a set of (weighted) first-order formulae. Some predicates in our model correspond to Soon et al. features: binary predicates such as the distance between two named entities and string match, and unary predicates such as proper name, semantic class, number (singular or plural) and gender (male, female or unknown). Also, we use string overlap in addition to string match, and define yet another predicate to describe distance, which refers to the number of named entities of the same type between two given ones (e.g., if there are no other named entities classified as "person" between "Obama" and "President", the distance is 0). Finally, the predicate corefer(mention, mention) describes the relation of interest, and is called the query predicate in Alchemy terminology; that is, we are interested in evaluating the probability of each grounding of this predicate given the known properties of all the mentions.

The second part of the model definition concerns constructing the first-order rules appropriate for the given task. We have defined rules that connect the above properties of the mentions with the coreference property. Some examples are given below.[8]

String match or overlap is very likely to indicate coreference for proper names, while for common nouns it is still likely but makes more sense in combination with a distance property:

20  match(x, y) ∧ proper(x) ∧ proper(y) → corefer(x, y)
3   match(x, y) ∧ noun(x) ∧ noun(y) ∧ dist0(x, y) → corefer(x, y)

Gender and number agreement between two neighboring mentions of the same type provides relatively strong evidence for coreference:

4   male(x) ∧ male(y) ∧ singular(x) ∧ singular(y) ∧ follow(x, y) → corefer(x, y)

We also define hard constraints, that is, crisp first-order formulae that should hold in any given world, for instance:

20  singular(x) ∧ plural(y) → ¬corefer(x, y)
¬corefer(x, x).
corefer(x, y) → corefer(y, x).

A full stop after a formula denotes an infinite weight, which, in turn, means that the formula holds with probability equal to 1.

In this paper we do not consider weight learning, so the weights are assigned manually. We do not consider pronoun mentions, as the background knowledge is relevant for proper name/common noun pairs in the first place.

In addition to the syntactic predicates and rules described above, we introduce a set of predicates and rules that deal with background knowledge extracted from a structured Semantic Web knowledge source, in our case the YAGO ontology [22]. In this paper, we used just two pairwise semantic properties of mentions: semantic type match and a sort of alias feature derived from YAGO means relations (e.g., "United States" may also be referred to as "US", "America", etc.). We define the type match relation for proper name/common noun pairs only (e.g., one of the types of the proper name "Obama" matches the common noun "president"), and also introduce the unique match predicate, which describes the situation in which the proper name (the first argument of the predicate) is the only one in the whole document to have a given type. For instance, if a document talks about "Obama" and "Clinton", the unique type match with the common noun "president" holds for neither of the proper names. The Markov logic model is extended with a number of rules relating these semantic predicates to the coreference property. The arguments of a semantic predicate should be of the same named entity type (person, location, facility, etc.). The non-unique type match property is combined with the follow distance relation.

[8] The full model is available at https://copilosk.fbk.eu/images/1/1f/Coreference.txt
4.3 Evidence database

The second input to the Alchemy inference module is an evidence database, i.e., the known values of the non-query predicates listed in the previous section. Normally, the coreference resolution task is performed on a document corpus, in which each document is first preprocessed. Preprocessing consists in identifying the named entities (persons, locations, organizations, etc.), as well as their syntactic properties, such as part of speech, number, gender, pairwise distance, etc.

The data corpus we use for the experiments is the ACE 2005 data set, with around 600 documents from the news domain. We work on a corpus in which each word is annotated with around 40 features (token and document ID, part-of-speech tags by TextPro,[9] etc.). This allowed us to extract the syntactic properties of the mentions such as number, gender (proper names in the corpus were annotated based on male/female name lists), pairwise distance, and the pronoun and proper name properties. For gender, we also defined two lists of tokens (which included "man", "girl", "wife", "Mr.", etc.).

[9] TextPro: http://textpro.fbk.eu/

We worked on the gold standard annotation for named entities, and considered five named entity types: PERson, LOCation, FACility, GeoPoliticalEntity and ORGanization.

As already mentioned above, for extracting the semantic properties of the named entities, we use YAGO [22], an ontology extracted from Wikipedia and unified with WordNet, as the source of background knowledge. The YAGO ontology contains 1 million entities and 5 million facts. To extract knowledge from YAGO for a given mention, we used the DBpedia link assigned to this mention. The information we extracted from YAGO in this first experiment concerns the type and means facts about YAGO concepts. Namely, for every (proper name, noun) pair of named entities of the same type we compare the proper name's YAGO types with the noun token. In case of a match, the YAGO type match property for the pair is set to true. Differently, the means property is extracted for all relevant pairs of named entities.

4.4 Alchemy inference and post-processing

We perform Alchemy inference separately for each named entity type (PER, LOC, FAC, GPE, ORG), and then combine the results. Note that the size of the document corpus does not impact the quality of the results, as the documents are processed independently, one by one.

The Alchemy inference module, which takes as input the weighted Markov logic model and the database containing the properties of the mentions, produces as a result the probability of coreference for each of the N×N possible pairs of mentions, where N is the number of mentions:

corefer(m_i, m_j) : p_{ij}, \quad 0 \le p_{ij} \le 1, \quad i, j = 1, \dots, N.

After having obtained this, we set a probability threshold (e.g., p = 0.9) and consider only those pairs for which p_{ij} ≥ p. On these pairs, we perform a transitive closure. Then the pairwise scores and, after a simple clustering step, MUC scores [24] can be calculated.

The resulting output of the whole approach includes a list of coreference chains for each document in the corpus, and measures of the effectiveness of the approach, namely the concrete values of recall, precision, and their harmonic mean (F1). We discuss this evaluation in the next section.
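A minimal sketch of this post-processing step is given below: thresholding the pairwise probabilities and taking the transitive closure via union-find. The p_{ij} input format (a dict keyed by mention-index pairs) is an assumption, not Alchemy's actual output syntax.

```python
def cluster_mentions(pairwise_probs, n, threshold=0.9):
    """Build coreference chains from pairwise probabilities.

    `pairwise_probs` maps (i, j) mention-index pairs to p_ij.
    The transitive closure is computed with a union-find structure.
    """
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for (i, j), p in pairwise_probs.items():
        if p >= threshold:
            parent[find(i)] = find(j)      # union: i and j corefer

    chains = {}
    for i in range(n):
        chains.setdefault(find(i), []).append(i)
    return list(chains.values())

# e.g. cluster_mentions({(0, 1): 0.95, (1, 2): 0.92, (3, 4): 0.2}, n=5)
# -> [[0, 1, 2], [3], [4]]
```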
5 Evaluation

Table 1 presents MUC scores of the experiments without and with the use of background knowledge extracted from YAGO for the whole ACE data set (598 documents), for all five types of named entities (ALL), for geopolitical entities (GPE) and for persons (PER), respectively. The improvement in F1 for the whole corpus is around 2%. Notice that the recall is improved by 5%, whereas precision goes down by almost 2%. For GPE named entities the improvement is around 3%, while for persons it is just around 1.5%. Lower improvement was achieved for the other three NE types (locations, organizations and facilities), so we do not report these results here. We consider such an improvement to be promising, given that only two YAGO properties were exploited, type and means, and the possibility to extract and use background knowledge relevant for common nouns was not explored.

Table 1. MUC scores for ALL, GPE and PER NE types

NE type  YAGO  R       P       F1
ALL      no    0.7272  0.8230  0.7722
ALL      yes   0.7778  0.8053  0.7913
GPE      no    0.7499  0.9404  0.8344
GPE      yes   0.8588  0.8631  0.8610
PER      no    0.6989  0.7447  0.7211
PER      yes   0.7205  0.7495  0.7347

Moreover, we have evaluated the dependency between the coverage of the extracted background knowledge on the corpus and the improvement in coreference resolution performance. Tables 2 and 3 report the results for the GPE named entity type and the means YAGO relation, and for the PER named entity type and the type YAGO relation, respectively. Note that the coverage, which is calculated here as the number of extracted alias/type matches divided by the total number of pairs in a document, is relatively low. This is related to the observation we made about the potential ways of extending the coverage.

In the tables, #docs stands for the total number of documents having a coverage in the given range, and R-d (F1-d) stands for the difference between the recall (F1) with and without the background knowledge extracted from YAGO. Recall, precision and F-measure for the absence/presence of the background knowledge are reported in pairs in the R, P and F1 columns, in the form "without / with". We observe that with the growth of the coverage both recall and F1 generally increase, which supports our hypothesis that the use of background knowledge (extracted from structured Semantic Web resources) is a promising direction for coreference resolution and, hopefully, for other NLP tasks.

Table 2. GPE, correlation between means coverage and R/F1 improvement

%cov   #docs  R (no / yes)     R-d     P (no / yes)     F1 (no / yes)    F1-d
0–2    56     0.7462 / 0.8491  0.1030  0.9281 / 0.9109  0.8272 / 0.8789  0.0517
2–4    52     0.7566 / 0.8715  0.1149  0.9427 / 0.9318  0.8395 / 0.9006  0.0611
4–6    39     0.7583 / 0.8833  0.1250  0.9550 / 0.9345  0.8454 / 0.9082  0.0628
6–10   36     0.7855 / 0.8934  0.1079  0.9583 / 0.9404  0.8633 / 0.9163  0.0530
10–14  16     0.7407 / 0.9383  0.1975  1.0000 / 0.8941  0.8511 / 0.9157  0.0646
14–28  11     0.6974 / 0.9079  0.2105  0.9815 / 0.9718  0.8154 / 0.9388  0.1234

Table 3. PER, correlation between type coverage and R/F1 improvement

%cov   #docs  R (no / yes)     R-d     P (no / yes)     F1 (no / yes)    F1-d
0–1    121    0.6877 / 0.7003  0.0126  0.7245 / 0.7278  0.7056 / 0.7138  0.0081
1–2    67     0.7058 / 0.7320  0.0263  0.7294 / 0.7366  0.7174 / 0.7343  0.0169
2–4    60     0.6993 / 0.7453  0.0460  0.7875 / 0.7980  0.7408 / 0.7707  0.0299
4–7    33     0.7327 / 0.7788  0.0461  0.7852 / 0.7953  0.7580 / 0.7870  0.0289
7–21   18     0.8079 / 0.8821  0.0742  0.9024 / 0.9058  0.8525 / 0.8938  0.0413
6 Conclusion and future work

In this paper we have defined a methodology for combining semantic information, available on the Web in the form of logical theories, with statistical methods for natural language processing tasks. The first problem we solved in order to empower an NLP task with knowledge from publicly available large-scale knowledge sources concerns the mapping of terms in the text to concepts in DBpedia, and then to other knowledge resources linked to DBpedia, e.g., the YAGO ontology. An important aspect of the mapping that was addressed in the paper is word sense disambiguation. We have applied the proposed approach to the task of intra-document coreference resolution. We have proposed a method for selecting a subset of knowledge relevant for a given text for solving the coreference task, and have implemented the coreference resolution process with the help of the inference module of the Alchemy tool. The latter is based on the Markov logic formalism and allows combining logical and statistical representation and inference. We have evaluated the results on the ACE 2005 data set to show the correlation between introducing the new semantic knowledge and the improvement of the performance.

To the best of our knowledge, there are no approaches, neither to coreference resolution nor to other NLP tasks, which make use of structured semantic knowledge available on the Web. One of the key points in addressing this problem is combining the logic-based representation of the model with statistical reasoning. Such a model representation and the available Semantic Web knowledge resources "speak the same language", which is the language of logic.

Future work directions include, in the first place, further exploiting the YAGO ontology to extract more properties and rules to support coreference resolution. Also, we are interested in experimenting with the full task, which includes a named entity recognition module and learning the weights of the formulae of the model from the training data. Exploiting other knowledge sources (e.g., Cyc[10] or Freebase[11]) and testing the proposed methodology on other NLP tasks, like semantic relation extraction, are the other challenging future work directions.

[10] http://www.cyc.co
[11] http://www.freebase.com/

Acknowledgments

The research leading to these results has received funding from the ITCH project (http://itch.fbk.eu), sponsored by the Italian Ministry of University and Research and by the Autonomous Province of Trento, and the Copilosk project (http://copilosk.fbk.eu), a Joint Research Project under the Future Internet - Internet of Content program of the Information Technology Center, Fondazione Bruno Kessler.

REFERENCES
[1] Alchemy. http://alchemy.cs.washington.edu/.
[2] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. DBpedia: A nucleus for a web of open data. In ISWC/ASWC, pages 722–735, 2007.
[3] Silviu Cucerzan. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 708–716, 2007.
[4] Aron Culotta, Michael L. Wick, and Andrew McCallum. First-order probabilistic models for coreference resolution. In Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 81–88, 2007.
[5] Pascal Denis and Jason Baldridge. Joint determination of anaphoricity and coreference resolution using integer programming. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, pages 236–243, 2007.
[6] Pedro Domingos, Stanley Kok, Daniel Lowd, Hoifung Poon, Matthew Richardson, and Parag Singla. Markov logic. In Probabilistic Inductive Logic Programming, volume 4911 of Lecture Notes in Computer Science, pages 92–117. Springer, 2008.
[7] Tim Finin, Zareen Syed, James Mayfield, Paul McNamee, and Christine Piatko. Using Wikitology for cross-document entity coreference resolution. In Proceedings of the AAAI Spring Symposium on Learning by Reading and Learning to Read, 2009.
[8] Claudio Giuliano, Alfio Massimiliano Gliozzo, and Carlo Strapparava. Kernel methods for minimally supervised WSD. Computational Linguistics, 35(4):513–528, 2009.
[9] Aria Haghighi and Dan Klein. Simple coreference resolution with rich syntactic and semantic features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1152–1161, 2009.
[10] Shujian Huang, Yabing Zhang, Junsheng Zhou, and Jiajun Chen. Coreference resolution using Markov Logic Networks. In Proceedings of CICLing 2009, 2009.
[11] Vincent Ng. Learning noun phrase anaphoricity to improve coreference resolution: issues in representation and optimization. In ACL '04: Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 151, 2004.
[12] Vincent Ng. Semantic class induction and coreference resolution. In ACL. The Association for Computational Linguistics, 2007.
[13] Vincent Ng and Claire Cardie. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 104–111, 2002.
[14] Massimo Poesio, Rahul Mehta, Axel Maroudas, and Janet Hitzeman. Learning to resolve bridging references. In ACL, pages 143–150, 2004.
[15] Simone Paolo Ponzetto and Michael Strube. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 192–199, 2006.
[16] Hoifung Poon and Pedro Domingos. Joint inference in information extraction. In AAAI'07: Proceedings of the 22nd National Conference on Artificial Intelligence, pages 913–918, 2007.
[17] Hoifung Poon and Pedro Domingos. Joint unsupervised coreference resolution with Markov Logic. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 650–659, 2008.
[18] Sebastian Riedel and Ivan Meza-Ruiz. Collective semantic role labelling with Markov logic. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 193–197, 2008.
[19] Stefan Schoenmackers, Oren Etzioni, and Daniel S. Weld. Scaling textual inference to the web. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 79–88, 2008.
[20] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[21] Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544, 2001.
[22] Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. YAGO: a core of semantic knowledge. In WWW '07: Proceedings of the 16th International Conference on World Wide Web, pages 697–706. ACM Press, 2007.
[23] Yannick Versley, Simone Paolo Ponzetto, Massimo Poesio, Vladimir Eidelman, Alan Jern, Jason Smith, Xiaofeng Yang, and Alessandro Moschitti. BART: a modular toolkit for coreference resolution. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies, pages 9–12, 2008.
[24] Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. A model-theoretic coreference scoring scheme. In MUC6 '95: Proceedings of the 6th Conference on Message Understanding, pages 45–52, 1995.
[25] Xiaofeng Yang and Jian Su. Coreference resolution using semantic relatedness information from automatically discovered patterns. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 528–535, June 2007.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "r1eHWOZuor",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=r1eHWOZuor",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Part 1. Scalability to larger graphs and generalization to real data",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "BJ4RdF-uZB",
"year": null,
"venue": "ECCV (7) 2018",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=BJ4RdF-uZB",
"arxiv_id": null,
"doi": null
}
|
{
"title": "VQA-E: Explaining, Elaborating, and Enhancing Your Answers for Visual Questions",
"authors": [
"Qing Li",
"Qingyi Tao",
"Shafiq R. Joty",
"Jianfei Cai",
"Jiebo Luo"
],
"abstract": "Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers, while disregarding the explanations. We argue that the explanation for an answer is of the same or even more importance compared with the answer itself, since it makes the question answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the models are required to generate an explanation with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We also conduct a user study to validate the quality of the synthesized explanations. We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers, but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "aBqPBsF87Xo",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=aBqPBsF87Xo",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Author Response to Reviewer e1RP",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rWXOJAcz485",
"year": null,
"venue": "EACL 2021",
"pdf_link": "https://aclanthology.org/2021.eacl-main.151.pdf",
"forum_link": "https://openreview.net/forum?id=rWXOJAcz485",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models",
"authors": [
"Tianxing He",
"Bryan McCann",
"Caiming Xiong",
"Ehsan Hosseini-Asl"
],
"abstract": "In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., Roberta) for natural language understanding (NLU) tasks. Our experiments show that EBM training can help the model reach a better calibration that is competitive to strong baselines, with little or no loss in accuracy. We discuss three variants of energy functions (namely scalar, hidden, and sharp-hidden) that can be defined on top of a text encoder, and compare them in experiments. Due to the discreteness of text data, we adopt noise contrastive estimation (NCE) to train the energy-based model. To make NCE training more effective, we train an auto-regressive noise model with the masked language model (MLM) objective.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics , pages 1754–1761\nApril 19 - 23, 2021. ©2021 Association for Computational Linguistics1754Joint Energy-based Model Training for Better Calibrated\nNatural Language Understanding Models\nTianxing He\nMIT\[email protected] McCann\u0006\nSalesforce Research\[email protected]\nCaiming Xiong\nSalesforce Research\[email protected] Hosseini-Asl\nSalesforce Research\[email protected]\nAbstract\nIn this work, we explore joint energy-based\nmodel (EBM) training during the finetuning\nof pretrained text encoders (e.g., Roberta) for\nnatural language understanding (NLU) tasks.\nOur experiments show that EBM training can\nhelp the model reach a better calibration that\nis competitive to strong baselines, with little\nor no loss in accuracy. We discuss three vari-\nants of energy functions (namely scalar ,hid-\nden, and sharp-hidden ) that can be defined on\ntop of a text encoder, and compare them in ex-\nperiments. Due to the discreteness of text data,\nwe adopt noise contrastive estimation (NCE)\nto train the energy-based model. To make\nNCE training more effective, we train an auto-\nregressive noise model with the masked lan-\nguage model (MLM) objective.\n1 Introduction\nCalibration refers to how well a classification\nmodel’s confidence (reflected by its output pos-\nterior probability) aligns with its actual accuracy.\nAs deep learning models achieve amazing accu-\nracy in computer vision (He et al., 2015) or natural\nlanguage processing (NLP) (Liu et al., 2019; De-\nvlin et al., 2018), more research attention has been\ndrawn to the calibration aspect of these models. As\nshown by Guo et al. (2017), the high accuracy from\ndeep models does not always lead to better calibra-\ntion. This motivates an important line of works\nattempting to achieve a better trade-off between\naccuracy and calibration.\nMost existing calibration methods (Guo et al.,\n2017; Kumar et al., 2019; Zadrozny and Elkan,\n2001) generally rescale the posterior distribution\npredicted from the classifier after training. Such\npost-processing methods require a held-out devel-\nopment set with a decent number of samples to be\n\u0006Bryan McCann contributed to this work while he was\nat Salesforce Research.available. To overcome this constraint, Jung et al.\n(2020) uses a penalty term to encourage better cali-\nbration during training.\nIn another line of work, Grathwohl et al. (2019)\nshows that one can jointly train an energy-based\nmodel (EBM) during the standard training of a\nneural classifier. Although calibration is not explic-\nitly addressed during EBM training, the calibration\nof the resulting model is shown to be greatly im-\nproved. Some intuitions of the underlying reasons\nwill be given in Section 2.3. However, the training\nframework proposed by Grathwohl et al. (2019) is\ndesigned for image classifiers, and it can not be\nreadily applied to discrete text data.\nIn this work, we propose a framework that uses\nnoise contrastive estimation (NCE) to jointly train\nan energy-based model during the finetuning of\npretrained text encoders (e.g., BERT (Devlin et al.,\n2018) or Roberta (Liu et al., 2019)) for NLU tasks.\nWe compare several variants of energy functions\nthat can be defined on top of the encoder. 
Our experiments show that the resulting models achieve competitive calibration results compared to strong baselines, with little or no loss in accuracy.

2 Framework

2.1 Notations and Background
We focus on the finetuning of pretrained text encoders on NLU tasks. We assume samples from the data distribution P_D are in the form of (x, y) pairs, where x usually refers to a single sentence or a pair of sentences, and y refers to the corresponding label. The number of classes is denoted by |Y|.

Given input x, we first use a text encoder model (e.g., BERT or Roberta) to encode it, and we denote this embedding as enc(x). For the target classification task, a classifier f_CLS, which could be a simple linear transform or a multi-layer perceptron (MLP), is applied to enc(x). We denote the output logits as f_CLS(enc(x)), whose dimension is equal to the number of possible classes |Y|. The y-th logit is denoted by f_CLS(enc(x))[y]. The posterior distribution P_θ(y|x) is obtained by applying a softmax operation to the logits, where θ refers to the parameters of the model.

In standard finetuning, the cross-entropy (CE) loss and gradient-based optimizers are used to train the classifier:

L_{CE} = \mathbb{E}_{(x,y) \sim P_D} \big[ -\log P_\theta(y|x) \big].   (1)

In the next few sections, we discuss how we define and jointly train an EBM on top of the text encoder.

2.2 Definitions of Energy Function
An energy-based model (LeCun et al., 2006) expresses P_θ(x) as:

P_\theta(x) = \frac{\exp(-E_\theta(x))}{Z},   (2)

where Z is the normalization factor, which is usually intractable to compute. We refer to E_θ(x), which returns a scalar value, as the energy function. We now define three variants of energy functions.

Variant scalar: We introduce another linear layer g_S whose output is a scalar, and we use it to define the energy function:

\hat{E}_\theta(x) = g_S(\mathrm{enc}(x)).   (3)

Variant hidden: As pointed out by Grathwohl et al. (2019), there is an EBM "hidden" in every neural classifier with softmax output, and the energy function for x can be derived[1] as:

\hat{E}_\theta(x) = -\mathrm{LogSumExp}_{y=1}^{|Y|} \big( f_{CLS}(\mathrm{enc}(x))[y] \big).   (4)

Different from the scalar variant, here the energy function directly uses the logits for prediction (visualized in Figure 1). Hence the impact on the model's classification behavior could be larger.

Variant sharp-hidden: The hidden variant has a potential weakness: the correlation between the input x and the prediction y is not addressed, because the energy is distributed among all the logits. Motivated by this potential issue, we propose the following "sharp" variant:

\hat{E}_\theta(x) = -\max_y f_{CLS}(\mathrm{enc}(x))[y].   (5)

Note that (5) can be viewed as an approximation to (4), and we find it to work well in practice.

[1] Please see Appendix A for the detailed derivation.

Figure 1: Comparison of the scalar and the hidden variants of energy functions. The modules introduced for the EBM are shaded in green.

Finally, for each variant, we define the energy function to be E_θ(x) = Ê_θ(x) − log P_N(x), where P_N is the noise distribution introduced for NCE. We will motivate this design choice below.
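As a rough PyTorch-style sketch (not the authors' released code), the three variants can be written directly on top of the classifier logits; the module and head names below are placeholders.

```python
import torch
import torch.nn as nn

class EnergyHead(nn.Module):
    """Energy functions Ê_θ(x) defined on top of a text encoder's output enc(x)."""

    def __init__(self, hidden_size, num_classes, variant="hidden"):
        super().__init__()
        self.variant = variant
        self.f_cls = nn.Linear(hidden_size, num_classes)  # classification logits f_CLS
        self.g_s = nn.Linear(hidden_size, 1)              # extra scalar head g_S

    def forward(self, enc_x):
        logits = self.f_cls(enc_x)                        # shape: (batch, |Y|)
        if self.variant == "scalar":                      # Eq. (3)
            energy = self.g_s(enc_x).squeeze(-1)
        elif self.variant == "hidden":                    # Eq. (4)
            energy = -torch.logsumexp(logits, dim=-1)
        elif self.variant == "sharp-hidden":              # Eq. (5)
            energy = -logits.max(dim=-1).values
        else:
            raise ValueError(self.variant)
        return logits, energy
```

Note that the hidden and sharp-hidden variants reuse the classification logits and need no extra parameters; only the scalar variant actually uses the g_S head.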
NCE trains the\nmodel to discriminate between data samples and\nnoise samples from a given noise distribution PN.\nWe formulate the NCE loss below:\nLNCE\u0010E\nx\u0000\u0012pD\u0001log~P\u0012px\u0000q\n~P\u0012px\u0000q\u0000K\u0004PNpx\u0000q\u0000\nK\u0004E\nx\u0001\u0012pN\u0001logK\u0004PNpx\u0001q\n~P\u0012px\u0001q\u0000K\u0004PNpx\u0001q;(6)\nwhereKis the ratio of noise samples. Note\nthat ~P\u0012pxqdoes not need to be normalized by\nconstruction, therefore we set it to be ~P\u0012pxq \u0010\nexpp\u0001E\u0012pxqq. In our experiments, we mostly re-\nport results with noise ratio K\u00108, while in some\ncases we find that a small ratio of K\u00101works\nslightly better. We have also tried with larger ratio\nsuch as 16, but the gain is minimal.\nIf we directly use the formulations of ^E\u0012pxqde-\nfined in last section as the energy function, the\noptimization will be difficult because of the PNpxq\nterms (which could be of very small value) in\nthe NCE objective. To overcome this issue, we\nfollow Deng et al. (2020) and define E\u0012pxq \u0010\n^E\u0012pxq\u0001logPNpxq. In this way, the PNpxqterms\nare canceled, and the objective is simplified to:\nLNCE\u0010E\nx\u0000\u0012pD\u0001log1\n1\u0000K\u0004expp^E\u0012px\u0000qq\u0000\nK\u0004E\nx\u0001\u0012pN\u0001logK\nK\u0000expp\u0001^E\u0012px\u0001qq:(7)\nIn training, we jointly optimize LCEandLNCE\nwith the Adam optimizer (Kingma and Ba, 2014):\nLjoint\u0010LCE\u0000LNCE: (8)\n1756Intuitively, joint EBM training makes the model\naware ofPpxq, instead of only focusing on predict-\ningPpy|xqas in standard finetuning. This aware-\nness can potentially help with calibration because\nthe model can be more conservative when it detects\nthe input is out-of-distribution.\n2.4 Construction of Noise Distribution\nFor the choice of noise distribution PN, in our pre-\nliminary trials, we finetune the GPT-2 language\nmodel (Radford et al., 2019) with samples from the\ntarget training set using the standard LM objective.\nHowever during NCE training, we find that the en-\nergy model can easily discriminate between data\nsamples and noise samples, which makes training\nineffective. To alleviate this issue, we adopt an\nobjective similar2to the masked language model\n(MLM) loss (Devlin et al., 2018) during the fine-\ntuning of the noise model (GPT-2): With a given\nmask ratioM, we randomly mask part of x, and\ntrain the model to complete it:\nLMLM\u0010 E\nx\u0012PD;xm\u0012Pmaskpxm|x;Mq\u0001logPNpx|xmq:(9)\nDuring noise sample generation, adopting the same\nmask ratioM, we feed a masked xmto the LM\n(xis from the training set), and use the generated\nsample as the noise sample. In this way, the noise\ndistribution is made closer to the data distribution.\nIn our experiments we set M\u00100:4. During gener-\nation, we use top- k(Fan et al., 2018) sampling with\nk\u001020. More details are provided in Appendix B.\n3 Experiments\nSetting We consider finetuning the Roberta-base\nmodel3, on eight GLUE tasks (Wang et al., 2018).\nWe do not include results on STS-B because it is a\nregression task. To measure calibration error, we\nfollow Jung et al. (2020); Grathwohl et al. (2019)\nand use the expected calibration error (ECE) metric\nwithB(number of bins) set to 20. To save space,\nwe defer detailed definition of ECE to Appendix C.\nFor baseline or NCE training, we follow the rec-\nommended hyper-parameters (learning rate, batch\nsize, etc.) for Roberta (Liu et al., 2019). 
3 Experiments

Setting: We consider finetuning the Roberta-base model[3] on eight GLUE tasks (Wang et al., 2018). We do not include results on STS-B because it is a regression task. To measure calibration error, we follow Jung et al. (2020) and Grathwohl et al. (2019) and use the expected calibration error (ECE) metric with B (the number of bins) set to 20. To save space, we defer the detailed definition of ECE to Appendix C. For baseline and NCE training, we follow the recommended hyper-parameters (learning rate, batch size, etc.) for Roberta (Liu et al., 2019). Since NCE training requires more computation (because of the noise ratio), we have tried finetuning the baseline with more steps, but we find that this gives worse ECE and very little or no improvement in accuracy.

[3] Our code is based on https://github.com/huggingface/transformers.

We compare EBM training with three strong baselines for calibration: posterior calibrated training (PosCal) (Jung et al., 2020), temperature scaling (T-Scal) (Guo et al., 2017), and the scaling-binning calibrator (Scal-bin) (Kumar et al., 2019). For PosCal and Scal-bin, we use the published code.

Scal-bin and T-Scal require a development set for parameter learning and a test set for evaluation, but for each GLUE task we only have one labeled development set available. Therefore, in this work we treat half of the standard development set as the test set, and keep the other half as the development set.

Results: In Table 1 and Table 2 we compare test-set accuracy[4] and ECE for the different methods on the GLUE tasks. For fair comparison between Scal-bin / T-Scal and EBM training (which does not use the development set), we apply them to the whole training set. We also report their performance when applied to the development set for reference.

In most tasks, all three EBM variants get substantial improvement in ECE with little or no loss in accuracy compared to the (strong) baseline methods. Moreover, the performance of EBM training is comparable to Scal-bin / T-Scal applied to the development set, while their performance degrades when the development set is not available. Among the three variants, on average, the sharp-hidden variant achieves the best accuracy, while the hidden variant achieves the best calibration. We visualize the calibration error in Figure 2.

[4] For CoLA we report the Matthews correlation coefficient (mcc).

Figure 2: Visualization of calibration on QNLI and SST-2 (confidence histograms and reliability diagrams for the baseline and the scalar, hidden, and s-hidden EBM variants). In the histogram plots, we use 10 bins instead of 20 for better readability. An enlarged version of this figure is provided in Appendix D.
Table 1: Test-set accuracy and ECE results for different methods on GLUE tasks. "s-hidden" refers to the sharp-hidden variant. Leading zeros are omitted to save space. Note that T-Scal and Scal-bin are applied to the training set or the development set, respectively. Results on RTE and WNLI are given in Table 2. The average is computed over all nine test sets. For each task, the method that achieves the best calibration without using the development set is shown in bold.

Method           SST-2       MNLI        MNLI(mm)    QNLI        QQP         MRPC        CoLA        Average
                 acc.  ECE   acc.  ECE   acc.  ECE   acc.  ECE   acc.  ECE   acc.  ECE   mcc.  ECE   perf. ECE
Baseline         .942  .050  .876  .067  .872  .068  .929  .043  .904  .034  .862  .133  .539  .182  .802  .102
Scal-bin(train)  .940  .036  .872  .051  .869  .056  .931  .034  .904  .035  .843  .092  .586  .146  .791  .096
T-Scal(train)    .942  .042  .876  .058  .872  .060  .929  .030  .904  .034  .862  .126  .539  .175  .802  .096
PosCal           .944  .040  .876  .067  .872  .067  .930  .039  .905  .032  .867  .129  .540  .184  .810  .092
(EBM)scalar      .942  .033  .871  .038  .871  .047  .927  .016  .899  .034  .862  .098  .540  .150  .801  .073
(EBM)hidden      .956  .032  .869  .032  .868  .044  .923  .016  .900  .033  .867  .099  .545  .131  .807  .063
(EBM)s-hidden    .947  .038  .875  .027  .872  .031  .930  .016  .900  .032  .862  .089  .563  .133  .815  .069
Scal-bin(dev)    .944  .019  .876  .030  .870  .032  .931  .021  .905  .021  .862  .062  .557  .048  .802  .052
T-Scal(dev)      .942  .037  .876  .024  .872  .026  .929  .018  .904  .026  .862  .126  .539  .109  .802  .072

Table 2: (Following Table 1) Main results on RTE and WNLI.

Method           RTE         WNLI
                 acc.  ECE   acc.  ECE
Baseline         .724  .279  .571  .058
Scal-bin(train)  .717  .271  .457  .144
T-Scal(train)    .724  .275  .571  .063
PosCal           .789  .206  .571  .060
(EBM)scalar     .753  .207  .542  .033
(EBM)hidden      .797  .148  .542  .036
(EBM)s-hidden    .811  .182  .571  .073
Scal-bin(dev)    .731  .042  .542  .189
T-Scal(dev)      .724  .235  .571  .046

Figure 3: (QNLI) Left: how ECE changes during training. Right: the trade-off between accuracy and ECE for checkpoints (every 500 iterations) during training.

Figure 4: The entropy of the posterior (P_θ(·|x)) versus the energy value Ê_θ(x) for SST-2 test-set samples.

Table 3: The change of the model's confidence (posterior distribution) for low- and high-energy data samples of SST-2. The EBM variant shown is sharp-hidden. We also provide QNLI examples in Appendix D.

Text: when the film ended, i felt tired and drained and wanted to lie on my own deathbed. Label: 1
Ê_θ(x): -9.37    Baseline: (.999, .001) → EBM: (.998, .002)

Text: sit through this one, you won't need a magic watch to stop time; your dvd player will do it for you. Label: 1
Ê_θ(x): -7.57    Baseline: (.006, .994) → EBM: (.345, .655)

In Figure 3, we plot how test-set ECE changes during training. It shows that as training reaches the high-accuracy region, the calibration of the baseline model becomes worse, while EBM training is able to reach a better trade-off between accuracy and calibration.

How does the model get better calibration? In Figure 4, we compute and plot the energy value Ê_θ(x) versus the entropy of the posterior distribution H(P_θ(·|x)) = Σ_{y=1}^{|Y|} −P_θ(y|x) log P_θ(y|x), for samples in the SST-2 test set.
It shows that models trained with the hidden and sharp-hidden variants tend to assign more conservative predictions (reflected by higher entropy) to higher-energy (less likely) samples. We suspect this is due to the strong coupling between the energy function and the classification logits. We provide concrete examples in Table 3. However, we need to mention that we do not observe this interesting trend (Figure 4) in all datasets (e.g., QNLI).

4 Related Works
Finally, we review applications of NCE or energy-based models in the NLP literature. Due to its self-normalizing property, NCE training has been used for faster inference (Mnih and Teh, 2012; Chen et al., 2015; Labeau and Allauzen, 2018) of autoregressive language models. It has also been used in an attempt to train a sentence-level bi-directional LM (He et al., 2016).

More closely related to our work, Deng et al. (2020) adopt NCE to train an EBM defined on top of a text encoder (the scalar variant), and use it to improve language generation. EBMs have also recently been used in non-autoregressive machine translation (Tu et al., 2020).

5 Conclusion
In this work, we explore joint EBM training during the finetuning of pretrained text encoders with noise contrastive estimation. We find that joint EBM training can greatly improve the calibration of NLU models, with little or no loss in accuracy.

References
Xie Chen, Xunying Liu, Mark J. F. Gales, and Philip C. Woodland. 2015. Recurrent neural network language model training with noise contrastive estimation for speech recognition. In ICASSP, pages 5411–5415. IEEE.
Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, and Marc'Aurelio Ranzato. 2020. Residual energy-based models for text generation. In International Conference on Learning Representations.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics.
Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. 2019. Your classifier is secretly an energy based model and you should treat it like one.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of Machine Learning Research, volume 70, pages 1321–1330, International Convention Centre, Sydney, Australia. PMLR.
Michael Gutmann and Aapo Hyvärinen. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 297–304, Chia Laguna Resort, Sardinia, Italy. PMLR.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385.
Tianxing He, Yu Zhang, Jasha Droppo, and Kai Yu. 2016. On training bi-directional neural network language model with noise contrastive estimation. CoRR, abs/1602.06064.
Taehee Jung, Dongyeop Kang, Hua Cheng, Lucas Mentch, and Thomas Schaaf. 2020. Posterior calibrated training on sentence classification tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2723–2730, Online. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980. Published as a conference paper at the 3rd International Conference on Learning Representations, San Diego, 2015.
Ananya Kumar, Percy S. Liang, and Tengyu Ma. 2019. Verified uncertainty calibration. In Advances in Neural Information Processing Systems 32, pages 3792–3803. Curran Associates, Inc.
Matthieu Labeau and Alexandre Allauzen. 2018. Learning with noise-contrastive estimation: Easing training by learning to scale. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3090–3101, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Yann LeCun, Sumit Chopra, Raia Hadsell, Fu Jie Huang, et al. 2006. A tutorial on energy-based learning. In Predicting Structured Data. MIT Press.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Zhuang Ma and Michael Collins. 2018. Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3698–3707, Brussels, Belgium. Association for Computational Linguistics.
Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, ICML'12, pages 419–426, Madison, WI, USA. Omnipress.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Lifu Tu, Richard Yuanzhe Pang, Sam Wiseman, and Kevin Gimpel. 2020. ENGINE: Energy-based inference networks for non-autoregressive machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2819–2826, Online. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Bianca Zadrozny and Charles Elkan. 2001. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 609–616. Morgan Kaufmann.
Appendices

A Derivation of the hidden Variant
Recall from Section 2.1 that the posterior distribution is obtained from a softmax operation on the logits; in other words:

P_\theta(y|x) \propto \exp\big( f_{CLS}(\mathrm{enc}(x))[y] \big).   (10)

Without changing any parameters, one can re-use the logits to define an energy-based model of the joint distribution of data points x and labels y via:

P_\theta(x, y) = \frac{\exp\big( f_{CLS}(\mathrm{enc}(x))[y] \big)}{Z(\theta)},   (11)

where Z(θ) is the normalizing factor. Note that Equation 11 is consistent with Equation 10, in the sense that Equation 10 is a direct consequence of Equation 11.

Now, by marginalizing out y, we get:

P_\theta(x) = \frac{\sum_{y=1}^{|Y|} \exp\big( f_{CLS}(\mathrm{enc}(x))[y] \big)}{Z(\theta)},   (12)

which is equivalent to

P_\theta(x) = \frac{\exp(-E_\theta(x))}{Z(\theta)},   (13)

where

E_\theta(x) = -\mathrm{LogSumExp}_y \big( f_{CLS}(\mathrm{enc}(x))[y] \big).   (14)

For more intuition behind this derivation we refer readers to Grathwohl et al. (2019).

B Details About the Noise Distribution
We show some examples of generated noise samples and the masking in Table 4. Note that the masks can cover a consecutive span of words (masking is applied to each token independently with probability M).

Table 4: Examples of generated noise samples on SST-2. The original words that are masked are also shown.

Input: absolutely and completely <M> (ridiculous)
Gen: absolutely and completely hilarious

Input: <M> (as a) young <M> (woman) of great charm, <M> (generosity) and diplomacy
Gen: of a young man with a great charm, wit and diplomacy

Another possible way to get noise samples is to sample from BERT or Roberta with masked input. However, due to the nature of masked language modeling and the architecture of BERT / Roberta, the sampled tokens will be independent of each other, which could result in unnatural noise samples. That is why we choose to utilize an auto-regressive LM (e.g., GPT-2).

C Definition of ECE
Given an input sample x, for each label y, we say that the model predicts that x belongs to label y with confidence P_θ(y|x). Assuming the test set contains n samples, we will have n × |Y| predictions.

ECE first partitions all predictions into B equally-spaced bins by confidence. Following Jung et al. (2020) and Grathwohl et al. (2019), we set B = 20, which means the width of each bin is 0.05. For example, the first bin contains all predictions with confidence in the range [0, 0.05). Then, for each bin, ECE computes how much the average confidence differs from the actual accuracy:

ECE = \frac{1}{|Y|} \sum_{y=1}^{|Y|} \sum_{b=1}^{B} \frac{|B_{yb}|}{n} \big| \mathrm{acc}(B_{yb}) - \mathrm{conf}(B_{yb}) \big|,   (15)

where n is the number of samples in the test set, and acc(B_yb) is simply the fraction of samples x in B_yb whose true label is indeed y.
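To make Eq. (15) concrete, here is a small self-contained sketch of this per-label binned ECE. The (n, |Y|) probability matrix and label vector are assumed inputs, and numpy is used for brevity.

```python
import numpy as np

def expected_calibration_error(probs, labels, num_bins=20):
    """Per-label ECE of Eq. (15).

    probs:  (n, num_classes) predicted posteriors P_θ(y|x)
    labels: (n,) true class indices
    """
    n, num_classes = probs.shape
    bin_edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for y in range(num_classes):
        conf = probs[:, y]                     # confidence that each x has label y
        correct = (labels == y).astype(float)  # 1 iff the true label is y
        for b in range(num_bins):
            in_bin = (conf >= bin_edges[b]) & (conf < bin_edges[b + 1])
            if b == num_bins - 1:              # include confidence exactly 1.0
                in_bin |= conf == 1.0
            if in_bin.any():
                gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
                ece += (in_bin.sum() / n) * gap
    return ece / num_classes
```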
D Auxiliary Results and Examples
Examples of the model's confidence for low- and high-energy data samples in QNLI are shown in Table 5. The histograms of energy values Ê_θ(x) for samples in the test sets of QNLI and SST-2 are shown in Figure 5. In Figure 6, we provide an enlarged version of Figure 2.

Table 5: The change of the model's confidence (posterior distribution) for low- and high-energy data samples in the test set of QNLI. The EBM variant shown is sharp-hidden.

Text: Q: What city north of New York was settled by Huguenots? A: Huguenot immigrants did not disperse or settle in different parts of the country, but rather, formed three societies or congregations; one in the city of New York, another 21 miles north of New York in a town which they named New Rochelle, and a third further upstate in New Paltz. Label: 1
Ê_θ(x): -8.48    Baseline: (.997, .003) → EBM: (.995, .005)

Text: Q: What is the source of oxygen production through electrocatalytic means? A: A similar method is the electrocatalytic O2 evolution from oxides and oxoacids. Label: 1
Ê_θ(x): 4.22    Baseline: (.252, .748) → EBM: (.472, .527)

Figure 5: The histograms of energy values Ê_θ(x) for samples in the test sets of QNLI and SST-2 (scalar, hidden, and s-hidden variants).

Figure 6: Visualization of calibration on QNLI and SST-2. Enlarged version of Figure 2.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "nyjKL9n9X8J",
"year": null,
"venue": "ACSAC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=nyjKL9n9X8J",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Koinonia: verifiable e-voting with long-term privacy",
"authors": [
"Huangyi Ge",
"Sze Yiu Chau",
"Victor E. Gonsalves",
"Huian Li",
"Tianhao Wang",
"Xukai Zou",
"Ninghui Li"
],
"abstract": "Despite years of research, many existing e-voting systems do not adequately protect voting privacy. In most cases, such systems only achieve \"immediate privacy\", that is, they only protect voting privacy against today's adversaries, but not against a future adversary, who may possess better attack technologies like new cryptanalysis algorithms and/or quantum computers. Previous attempts at providing long-term voting privacy (dubbed \"everlasting privacy\" in the literature) often require additional trusts in parties that do not need to be trusted for immediate privacy. In this paper, we present a framework of adversary models regarding e-voting systems, and analyze possible threats to voting privacy under each model. Based on our analysis, we argue that secret-sharing based voting protocols offer a more natural and elegant privacy-preserving solution than their encryption-based counterparts. We thus design and implement Koinonia, a voting system that provides long-term privacy against powerful adversaries and enables anyone to verify that each ballot is well-formed and the tallying is done correctly. Our experiments show that Koinonia protects voting privacy with a reasonable performance.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
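The Koinonia abstract above argues that secret-sharing-based tallying is a more natural route to long-term voting privacy than encryption. As a rough intuition only (a toy sketch, not the paper's actual protocol or parameters): each vote is split into random additive shares that individually reveal nothing, yet the per-authority sums of shares reconstruct the tally.

```python
import secrets

P = 2_147_483_647  # a public prime modulus; all arithmetic is mod P (illustrative choice)

def share_vote(vote: int, n_authorities: int) -> list[int]:
    """Split a 0/1 vote into n additive shares that sum to the vote mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_authorities - 1)]
    shares.append((vote - sum(shares)) % P)
    return shares

def tally(ballots: list[list[int]]) -> int:
    """Each authority sums the shares it received; the tally is the sum of those sums."""
    n_auth = len(ballots[0])
    authority_sums = [sum(b[j] for b in ballots) % P for j in range(n_auth)]
    return sum(authority_sums) % P

votes = [1, 0, 1, 1, 0]
ballots = [share_vote(v, n_authorities=3) for v in votes]
# 3 "yes" votes recovered without any single authority ever seeing an individual vote
assert tally(ballots) == sum(votes)
```

Because the shares are information-theoretically hiding, privacy does not depend on a hardness assumption that a future adversary might break, which is the "long-term privacy" property the abstract emphasizes.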
{
"id": "Ut96cc9_AAb",
"year": null,
"venue": "CADE 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Ut96cc9_AAb",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E-MaLeS 1.1",
"authors": [
"Daniel Kühlwein",
"Stephan Schulz",
"Josef Urban"
],
"abstract": "Picking the right search strategy is important for the success of automatic theorem provers. E-MaLeS is a meta-system that uses machine learning and strategy scheduling to optimize the performance of the first-order theorem prover E. E-MaLeS applies a kernel-based learning method to predict the run-time of a strategy on a given problem and dynamically constructs a schedule of multiple promising strategies that are tried in sequence on the problem. This approach has significantly improved the performance of E 1.6, resulting in the second place of E-MaLeS 1.1 in the FOF divisions of CASC-J6 and CASC@Turing.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
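The E-MaLeS abstract describes two components: a kernel-based regressor that predicts a strategy's run-time on a problem, and a scheduler that sequences the most promising strategies within a time budget. One simple way to realize the scheduling half is a greedy loop over predicted run-times; the sketch below is our own illustration under that reading (the strategy names and predictor outputs are hypothetical, not E-MaLeS's actual algorithm).

```python
def build_schedule(predicted_runtime: dict[str, float], time_budget: float) -> list[tuple[str, float]]:
    """Greedily allot time slices to strategies, shortest predicted run-time first."""
    schedule, remaining = [], time_budget
    for strategy, t in sorted(predicted_runtime.items(), key=lambda kv: kv[1]):
        if t <= remaining:                  # only schedule a strategy it can plausibly finish
            schedule.append((strategy, t))
            remaining -= t
    return schedule

# hypothetical regressor outputs (seconds) for one problem
predicted = {"auto": 40.0, "G-E--_029": 12.5, "H----_047": 25.0, "G-N_023": 90.0}
print(build_schedule(predicted, time_budget=60.0))
# -> [('G-E--_029', 12.5), ('H----_047', 25.0)], leaving ~22.5s for a fallback strategy
```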
{
"id": "9dYr4pFCLIW",
"year": null,
"venue": "NeurIPS 2021 Poster",
"pdf_link": "/pdf/898c90e91f397bf8492cb139e73396dfba8cf025.pdf",
"forum_link": "https://openreview.net/forum?id=9dYr4pFCLIW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "BCORLE($\\lambda$): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market",
"authors": [
"Yang Zhang",
"Bo Tang",
"Qingyu Yang",
"Dou An",
"Hongyin Tang",
"Chenyang Xi",
"Xueying LI",
"Feiyu Xiong"
],
"abstract": "Coupons allocation is an important tool for enterprises to increase the activity and loyalty of users on the e-commerce market. One fundamental problem related is how to allocate coupons within a fixed budget while maximizing users' retention on the e-commerce platform. The online e-commerce environment is complicated and ever changing, so it requires the coupons allocation policy learning can quickly adapt to the changes of the company's business strategy. Unfortunately, existing studies with a huge computation overhead can hardly satisfy the requirements of real-time and fast-response in the real world. Specifically, the problem of coupons allocation within a fixed budget is usually formulated as a Lagrangian problem. Existing solutions need to re-learn the policy once the value of Lagrangian multiplier variable $\\lambda$ is updated, causing a great computation overhead. Besides, a mature e-commerce market often faces tens of millions of users and dozens of types of coupons which construct the huge policy space, further increasing the difficulty of solving the problem. To tackle with above problems, we propose a budget constrained offline reinforcement learning and evaluation with $\\lambda$-generalization (BCORLE($\\lambda$)) framework. The proposed method can help enterprises develop a coupons allocation policy which greatly improves users' retention rate on the platform while ensuring the cost does not exceed the budget. Specifically, $\\lambda$-generalization method is proposed to lead the policy learning process can be executed according to different $\\lambda$ values adaptively, avoiding re-learning new polices from scratch. Thus the computation overhead is greatly reduced. Further, a novel offline reinforcement learning method and an off-policy evaluation algorithm are proposed for policy learning and policy evaluation, respectively. Finally, experiments on the simulation platform and real-world e-commerce market validate the effectiveness of our approach.",
"keywords": [
"Application of E-commerce Market;Coupons Allocation",
"Constrained Markov Decision Process",
"Offline Reinforcement Learning",
"Off-policy Evaluation"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
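The key idea named in the abstract, $\lambda$-generalization, is that the policy network takes the Lagrangian multiplier $\lambda$ as part of its input, so one network covers all budget trade-offs instead of retraining per $\lambda$. A minimal sketch of that conditioning follows; this is our own illustration of the idea, not the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn

class LambdaConditionedQNet(nn.Module):
    """Q(s, lambda) over discrete coupon actions; lambda is appended to the state."""
    def __init__(self, state_dim: int, n_coupon_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_coupon_actions),
        )

    def forward(self, state: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, lam.unsqueeze(-1)], dim=-1))

q = LambdaConditionedQNet(state_dim=8, n_coupon_actions=5)
state = torch.randn(32, 8)
lam = torch.rand(32)                 # sample a lambda per example during training
q_values = q(state, lam)             # shape (32, 5)
# Lagrangian-style shaping of a transition reward: r_shaped = retention_reward - lam * coupon_cost,
# so a later budget change only means evaluating the same network at a different lambda.
```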
{
"id": "RXsPgvw4sp",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=RXsPgvw4sp",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Answers",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "7pV2ei5DYH",
"year": null,
"venue": "WSDM 2020",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=7pV2ei5DYH",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Hierarchical User Profiling for E-commerce Recommender Systems",
"authors": [
"Yulong Gu",
"Zhuoye Ding",
"Shuaiqiang Wang",
"Dawei Yin"
],
"abstract": "Hierarchical user profiling that aims to model users' real-time interests in different granularity is an essential issue for personalized recommendations in E-commerce. On one hand, items (i.e. products) are usually organized hierarchically in categories, and correspondingly users' interests are naturally hierarchical on different granularity of items and categories. On the other hand, multiple granularity oriented recommendations become very popular in E-commerce sites, which require hierarchical user profiling in different granularity as well. In this paper, we propose HUP, a Hierarchical User Profiling framework to solve the hierarchical user profiling problem in E-commerce recommender systems. In HUP, we provide a Pyramid Recurrent Neural Networks, equipped with Behavior-LSTM to formulate users' hierarchical real-time interests at multiple scales. Furthermore, instead of simply utilizing users' item-level behaviors (e.g., ratings or clicks) in conventional methods, HUP harvests the sequential information of users' temporal finely-granular interactions (micro-behaviors, e.g., clicks on components of items like pictures or comments, browses with navigation of the search engines or recommendations) for modeling. Extensive experiments on two real-world E-commerce datasets demonstrate the significant performance gains of the HUP against state-of-the-art methods for the hierarchical user profiling and recommendation problems. We release the codes and datasets at https://github.com/guyulongcs/WSDM2020_HUP.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
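The HUP abstract sketches a pyramid of recurrent layers: sequences of micro-behaviors feed an item-level interest state, which in turn feeds a coarser, category-level state. The snippet below is only our schematic reading of that hierarchy, with invented dimensions; the authors' released repository contains the real model.

```python
import torch
import torch.nn as nn

class PyramidInterestEncoder(nn.Module):
    """Two-scale sketch: micro-behaviors -> item-level state -> category-level state."""
    def __init__(self, behavior_dim: int, hidden: int = 32):
        super().__init__()
        self.micro_lstm = nn.LSTM(behavior_dim, hidden, batch_first=True)
        self.category_lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, micro_seqs: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # micro_seqs: (batch, n_items, n_micro_behaviors, behavior_dim)
        b, n_items, n_micro, d = micro_seqs.shape
        _, (h_item, _) = self.micro_lstm(micro_seqs.reshape(b * n_items, n_micro, d))
        item_states = h_item.squeeze(0).reshape(b, n_items, -1)   # one vector per item interaction
        category_seq, _ = self.category_lstm(item_states)         # coarser interest trajectory
        return item_states, category_seq[:, -1]                   # fine- and coarse-grained profiles

enc = PyramidInterestEncoder(behavior_dim=16)
fine, coarse = enc(torch.randn(4, 10, 6, 16))  # 4 users, 10 items, 6 micro-behaviors each
```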
{
"id": "KpAAVJrqDm",
"year": null,
"venue": "Inf. Sci. 2016",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=KpAAVJrqDm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Assessing e-mail intent and tasks in e-mail messages",
"authors": [
"Maya Sappelli",
"Gabriella Pasi",
"Suzan Verberne",
"Maaike de Boer",
"Wessel Kraaij"
],
"abstract": "In this paper, we analyze corporate e-mail messages as a medium to convey work tasks. Research indicates that categorization of e-mail could alleviate the common problem of information overload. Although e-mail clients provide possibilities of e-mail categorization, not many users spend effort on proper e-mail management. Since e-mail clients are often used for task management, we argue that intent- and task-based categorizations might be what is missing from current systems. We propose a taxonomy of tasks that are expressed through e-mail messages. With this taxonomy, we manually annotated two e-mail datasets (Enron and Avocado), and evaluated the validity of the dimensions in the taxonomy. Furthermore, we investigated the potential for automatic e-mail classification in a machine learning experiment. We found that approximately half of the corporate e-mail messages contain at least one task, mostly informational or procedural in nature. We show that automatic detection of the number of tasks in an e-mail message is possible with 71% accuracy. One important finding is that it is possible to use the e-mails from one company to train a classifier to classify e-mails from another company. Detecting how many tasks a message contains, whether a reply is expected, or what the spatial and time sensitivity of such a task is, can help in providing a more detailed priority estimation of the message for the recipient. Such a priority-based categorization can support knowledge workers in their battle against e-mail overload.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "MZjidkc9Tvn",
"year": null,
"venue": "EACL 2021",
"pdf_link": "https://aclanthology.org/2021.eacl-main.274.pdf",
"forum_link": "https://openreview.net/forum?id=MZjidkc9Tvn",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Challenges in Automated Debiasing for Toxic Language Detection",
"authors": [
"Xuhui Zhou",
"Maarten Sap",
"Swabha Swayamdipta",
"Yejin Choi",
"Noah A. Smith"
],
"abstract": "Biased associations have been a challenge in the development of classifiers for detecting toxic language, hindering both fairness and accuracy. As potential solutions, we investigate recently introduced debiasing methods for text classification datasets and models, as applied to toxic language detection. Our focus is on lexical (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English). Our comprehensive experiments establish that existing methods are limited in their ability to prevent biased behavior in current toxicity detectors. We then propose an automatic, dialect-aware data correction method, as a proof-of-concept. Despite the use of synthetic labels, this method reduces dialectal associations with toxicity. Overall, our findings show that debiasing a model trained on biased toxic language data is not as effective as simply relabeling the data to remove existing biases.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics , pages 3143–3155\nApril 19 - 23, 2021. ©2021 Association for Computational Linguistics3143Challenges in Automated Debiasing for Toxic Language Detection\nXuhui Zhou~Maarten Sap|Swabha Swayamdipta}Noah A. Smith|}Yejin Choi|}\n~Department of Linguistics, University of Washington\n|Paul G. Allen School of Computer Science & Engineering, University of Washington\n}Allen Institute for Artificial Intelligence\[email protected], [email protected]\nfswabhas,noah,yejinc [email protected]\nAbstract\nWarning : this paper contains content that may\nbe offensive or upsetting.\nBiased associations have been a challenge in\nthe development of classifiers for detecting\ntoxic language, hindering both fairness and\naccuracy. As potential solutions, we inves-\ntigate recently introduced debiasing methods\nfor text classification datasets and models, as\napplied to toxic language detection. Our focus\nis on lexical (e.g., swear words, slurs, iden-\ntity mentions) and dialectal markers (specifi-\ncally African American English). Our com-\nprehensive experiments establish that existing\nmethods are limited in their ability to prevent\nbiased behavior in current toxicity detectors.\nWe then propose an automatic, dialect-aware\ndata correction method, as a proof-of-concept\nstudy. Despite the use of synthetic labels,\nthis method reduces dialectal associations with\ntoxicity. Overall, our findings show that debi-\nasing a model trained on biased toxic language\ndata is not as effective as simply relabeling the\ndata to remove existing biases.\n1 Introduction\nCurrent hate speech or toxic language detection1\nsystems exhibit problematic and discriminatory\nbehavior that causes them to have disparate nega-\ntive impact on minority populations (Yasin, 2018;\nGuynn, 2020; Kim et al., 2020; Dias Oliva et al.,\n2020). Tweets simply containing a minority iden-\ntity mention are commonly flagged as toxic by cur-\nrent systems, in contrast to those containing ma-\njority identity mentions, as illustrated in Figure 1.\nAt the core of the issue are dataset biases , i.e.,\nspurious correlations between surface patterns and\nannotated toxicity labels (§2), which stem from\nthe data creation process (Sap et al., 2019). Pre-\nvious work has outlined two such biases for hate\n1We use hate speech andtoxic language interchangeably\nin this work, though their definitions do not perfectly align.\n Detected \ntoxicity score \nI identify as a black \ngay woman .\nI identify as a \nstraight white man. \nFucking love \nthis. \nAdolf Hilter is a \ngreat person. \nIdentity \nbias\n(Lexical) \nSwear \nword \nbias\n(Lexical) \nW hat ’ s up , br o! \nWussup, n*gga !\n Dialect/ \nRacial \n bias \nPers. \nAPI \nPers. \nAPI \nPers. \nAPI \nFigure 1: Lexical items and dialect markers cause prob-\nlematic behavior for toxic language detection systems\nsuch as the widely used PerspectiveAPI. In the top two\nexample pairs, statements with minority identity men-\ntions and swear words used inoffensively are flagged as\ntoxic, but majority identity mentions or offensive state-\nments without overt swearing are missed. 
The bottom\npair shows dialect-based racial bias for two inoffensive\ngreetings, where markers of African American English\n(AAE) trigger the toxicity detector.\nspeech datasets (both shown in Figure 1): lexical\nbias which associates toxicity with the presence of\ncertain words (e.g., profanities, identity mentions;\nDixon et al., 2018; Dinan et al., 2019) and di-\nalectal bias , where toxicity is correlated with sur-\nface markers of African American English ( AAE;\nDavidson et al., 2019; Sap et al., 2019). When\ntrained on biased datasets, models acquire andex-\nacerbate these biases (e.g., flagging text by Black\nauthors as more toxic than by white authors; Sap\net al., 2019; Zhang et al., 2018).\nConcurrently, there has been elevated interest in\ndeveloping debiasing methods for standard natural\nlanguage understanding (NLU) tasks, i.e., meth-\nods that aim to decrease over-reliance on spurious\ncorrelations in NLU models (Clark et al., 2019; He\net al., 2019; Karimi Mahabadi et al., 2020; Bras\net al., 2020). This raises a natural question: are\n3144current debiasing approaches effective for mitigat-\ning biases specific to toxic language detection?\nIn this work, we address the above question by\ninvestigating two classes of debiasing approaches\nto mitigate lexical and dialectal biases—one that\nemploys additional training objectives for bias re-\nmoval, and another that filters training instances\nlikely exhibiting spurious biases (§3). Through\ncomprehensive experiments, we show that both\napproaches face major challenges in mitigating bi-\nases from a model trained on a biased dataset (in\nour case, the dataset from Founta et al., 2018)\nfor toxic language detection. While data filter-\ning results in reduced bias associations in the data,\nmodels trained on filtered datasets still pick up on\nlexical (§4) and dialectal biases (§5). We find\nthat dialectal biases are particularly challenging\nto address, as has also been shown by Xia et al.\n(2020). “Debiased” models still disproportion-\nately flag text in certain dialects as toxic. Notably,\nmitigating dialectal bias through current debiasing\nmethods does not mitigate a model’s propensity to\nlabel tweets by Black authors as more toxic than\nby white authors.\nWe additionally explore an alternative proof-of-\nconcept study—relabeling supposedly toxic train-\ning instances whose automatic translations into a\nmajority dialect are deemed non-toxic by the clas-\nsifier. To this end, we create a synthetic dataset via\nfew-shot dialect translation system built with GPT-\n3 (Brown et al., 2020). While only an illustrative\nsolution, it nevertheless takes into account the di-\nalectal context of the tweet, resulting in a model\nless prone to dialectal and racial biases (§6). Over-\nall, our findings indicate that debiasing a model al-\nready trained on biased toxic language data can be\nchallenging, compared to relabeling the data to re-\nmove existing biases. Our code and data are pub-\nlicly available on Github.2\n2 Biases in Toxic Language Detection\nWe test the use of debiasing3methods for the\ntask of toxic language detection, which aims to\nflag rude, offensive, hateful, or toxic language on\nthe internet, with the goal of moderating online\ncommunities (Roberts, 2019; Vidgen et al., 2019).\n2https://github.com/XuhuiZhou/Toxic_\nDebias\n3Our definition of “bias” is specific to the social biases\nin toxic language detection datasets, grounded as lexical and\ndialectal biases; see Blodgett et al. 
(2020) for a detailed in-\nvestigation of the term “bias”.This task differs in several ways from the natu-\nral language understanding (NLU) tasks that debi-\nasing methods have been successful on, such as\ntextual entailment (e.g., SNLI, MNLI; Bowman\net al., 2015; Williams et al., 2018) or reading com-\nprehension (e.g., SQuAD; Rajpurkar et al., 2016).\nFirst, compared to these NLU tasks where there\nis one correct label, the toxicity of language is\ninherently more nuanced, subjective, and contex-\ntual, which causes toxic language datasets to have\nlower agreement in general (Ross et al., 2017).\nSecond, the dataset biases in NLU are predom-\ninantly artifacts introduced during data creation\n(e.g., negations, exaggerations; Schwartz et al.,\n2017; Gururangan et al., 2018), whereas those in\ntoxic language detection are grounded in the so-\ncial dynamics of the world (Spears, 1998; Tech-\nnau, 2018). For example, viewing AAE as a more\ntoxic or less proper variety of English is a form of\nlinguistic discrimination that upholds racial hierar-\nchies in the United States (Rosa and Flores, 2017).\nIn this work, we consider two broad categories\nof toxic language dataset biases—lexical (§2.1)\nand dialectal (§2.2). Our experiments focus on\na single, widely used dataset (§2.3) from Founta\net al. (2018).\n2.1 Lexical Biases (T OXTRIG)\nCurrent toxic language detection systems often\nrely on the presence or absence of certain words\n(e.g., swear words, identity mentions) to make\ntheir predictions (Dixon et al., 2018; Dinan et al.,\n2019). While most previous analyses of this bias\nrelied on a simple list of “bad” words (Davidson\net al., 2019; Dinan et al., 2019),4we take a more\nnuanced view of how lexical items can convey tox-\nicity, inspired by work in pragmatics and sociolin-\nguistics of rudeness (Dynel, 2015; Kasper, 1990,\ninter alia ). Specifically, we manually split our\nfull list of words into three distinct categories de-\npending on the extent to which they carry profane\nor hateful meanings or are simply associated with\nhateful contexts.5We refer to the full set of words\nas T OXTRIG, for Toxicity Triggers, which is in-\ncluded in our released repository.6\n4https://tinyurl.com/list-of-bad-words\n5We note, however, that this categorization is in itself sub-\njective.\n6https://github.com/XuhuiZhou/Toxic_\nDebias/blob/master/data/word_based_bias_\nlist.csv\n3145Non-offensive minority identity mentions\n(NOI) refers to descriptive mentions of minori-\ntized demographic or social identities (e.g., gay,\nfemale ,Muslim ). While these mentions are not\nusually inherently offensive by themselves, they\nare often found in offensive statements that are\nhateful towards minorities (Dixon et al., 2018).\nWe detect these identity mentions in text using a\nlist of 26 regular expressions.\nPossibly offensive minority identity mentions\n(OI) are mentions of minoritized identities that\ncould denote profanity or hate depending on prag-\nmatic and contextual interpretations. This includes\nslurs and objectifying outdated terms to refer to\nminority groups, which are usually understood as\nattacks. Additionally, this includes reclaimed slurs\n(queer ,n*gga ), which connote less offensive in-\ntent when spoken by in-group members compared\nto out-group members (Croom, 2013).\nPossibly offensive non-identity mentions (O NI)\ncontains swear words and other profanities, which\nare usually offensive but not associated to any so-\ncial groups (e.g., f*ck,sh*t). 
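Since the three TOXTRIG categories above are operationalized as word lists and regular expressions, flagging which categories a tweet touches reduces to regex matching. A toy sketch follows, with made-up stand-in patterns; the paper's actual 26 NOI expressions and full word lists live in the authors' released word_based_bias_list.csv.

```python
import re

# illustrative stand-ins only; the real lists are in the released repository
TOXTRIG = {
    "NOI": [r"\bgay\b", r"\bmuslim\b", r"\bfemale\b"],
    "OI":  [r"\bqueer\b", r"\bn\*?i?gga\b"],
    "ONI": [r"\bf\*?u?ck\b", r"\bsh\*?i?t\b"],
}
COMPILED = {cat: [re.compile(p, re.I) for p in pats] for cat, pats in TOXTRIG.items()}

def toxtrig_categories(tweet: str) -> set[str]:
    """Return the set of TOXTRIG categories whose patterns occur in the tweet."""
    return {cat for cat, pats in COMPILED.items() if any(p.search(tweet) for p in pats)}

print(toxtrig_categories("I am a proud gay man"))   # {'NOI'} -- descriptive, not offensive
print(toxtrig_categories("f*ck I love this song"))  # {'ONI'} -- profanity without a target
```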
Note that the prag-\nmatic interpretation of these words is not neces-\nsarily always toxic or offensive (Dynel, 2012), as\nthey are often used to convey closeness between\nthe speaker and listener or emphasize the emo-\ntionality of a statement (e.g., second example in\nin Figure 1).\n2.2 Dialectal Biases ( AAE)\nCurrent toxic language detection systems also as-\nsociate higher toxicity with dialectal markers of\nAfrican American English ( AAE; Sap et al., 2019;\nDavidson et al., 2019). Since AAE is a vari-\nety of English that is common among African\nAmericans and often signals a cultural identity\nin the US (Green, 2002), this dialect-based racial\nbias causes speech by Black authors to be sup-\npressed more often than non-Black authors (Sap\net al., 2019), thereby exacerbating racial inequal-\nity (Rosa, 2019).\nIn our experiments, we estimate the dialect of\na tweet using a topic model from Blodgett et al.\n(2016). This model was trained on 60M tweets,\nwhere the dialect of the tweet was inferred from\nthe model coordinates, which yielded a probability\nof a tweet being in one of four dialects (African-\nAmerican English, white-aligned English, His-\npanic, and other). In this study, we only focuson African-American English ( AAE) and white-\naligned English ( WAE) tweets; both definitions\nare based on US English, as per Blodgett et al.\n(2016).7Our experiments either use the proba-\nbility of a tweet being in these dialects, or assign\ntweets their estimated-most-probable dialect.\n2.3 Dataset for Toxic Language Detection\nWe focus our analyses on a widely used hate\nspeech dataset of English tweets (Founta et al.,\n2018). The tweets were collected using a multi-\nround bootstrapping procedure, and were labeled\nout of context8for toxic language. We focus on\nthe 86k tweets that are annotated as hateful, abu-\nsive, or neither and discard those labelled as spam.\nWe aggregate the abusive and hateful labels into a\nsingle toxic category, yielding 32k toxic and 54k\nnon-toxic tweets.9\n3 Debiasing Methods\nWe consider two types of debiasing methods from\ncurrent literature. The first type addresses known,\npre-defined biases—such as lexical and dialectal\nbiases for hate speech detection, via a model-\nbased approach involving additional training ob-\njectives (§3.1). In contrast, the second type is ag-\nnostic to prior knowledge about biases, and in-\nstead filters out examples that appear “too easy”\nand might hence contain spurious correlations\n(§3.2).\n3.1 Debiased Training for Pre-Defined\nToxicity Biases\nWe use the L EARNED -MIXIN method of Clark\net al. (2019), which achieved high out-of-\ndistribution (OOD) performance on several NLU\ntasks, for debiased training. This method trains\nan ensemble containing a bias-only model which\nonly uses pre-defined features corresponding to\nknown biases, and a fullmodel which uses all fea-\ntures. Intuitively, the ensemble encourages the full\n7We avoid using disputed terms such as general Ameri-\ncan English ,standard American English , ormainstream US\nEnglish , which are frequently used for WAE, since we be-\nlieve that no dialect should be privileged with the designation\n“general”, “standard”, or “mainstream” (Rosa, 2019).\n8Only the tweet text—no profile information or conversa-\ntional context—was shown to annotators.\n9We also explored using another widely used hate speech\ndataset (Davidson et al., 2017), which collected tweets us-\ning a seed list of swear words and slurs. However, in line\nwith findings by Xia et al. 
(2020), debiasing led to degener-\nate behavior due to the data collection process, as discussed\nin Appendix B.\n3146model to rely more on features unrelated to the\nbiases. Once trained, the bias-only model is dis-\ncarded, and only the “bias-free” full model is used\nfor inference, following Clark et al. (2019).\nBias-only model Given its effectiveness on bag-\nof-words (BoW) features, we use an SVM classi-\nfier as the lexical-bias-only model. For example,\nthe T OXTRIG-only model counts the frequency of\nTOXTRIGwords in each tweet. Our dialectal-bias-\nonly model uses the probability of dialects ( AAE,\nWAE, Hispanic, and other) obtained from a dialect\ndetector (Blodgett et al., 2016) as features in a\nSVM classifier.\nFull model We fine-tune a RoBERTa-large clas-\nsifier (Liu et al., 2019), a state-of-the-art classifier\nfor the toxicity detection task. See Appendix A.1\nfor more modeling details.\nNote that we only consider the L EARNED -\nMIXIN-ONI and L EARNED -MIXIN-TOXTRIG\nmodels for lexical debiasing, due to poor ac-\ncuracies of the bias-only models for NOI and\nOI.10\n3.2 Data Filtering for Spurious Biases\nIn addition to debiasing methods that handle\nknown biases, we also explore automated ap-\nproaches which filter out instances exhibiting un-\nspecified, spurious biases. Specifically, we de-\nscribe below two data selection methods that have\nshown strong OOD performance.\nAFLite (Bras et al., 2020) is an algorithm based\non the key intuition that examples predicted cor-\nrectly by the simplest methods likely exhibit spu-\nrious biases. An ensemble of simple linear clas-\nsifiers is trained and tested on different partitions\nof the data; test instances which are “predictable”,\nor classified correctly by most classifiers in the\nensemble are discarded. The algorithm is iter-\native, and is repeated until a target data size is\nachieved. Models trained on this filtered dataset\nachieve higher performance on OOD and adver-\nsarially constructed test sets, compared to the orig-\ninal model, on several text and image classification\ndatasets. This indicates a reduction in spurious bi-\nases in the filtered data.\n10The NOI and OI bias-only models reach 63% and 67%\naccuracy, respectively, which is empirically hard for the en-\nsemble to use. This is likely due to low coverage in the train\nset of those categories (4.43% NOI and 4.25% OI).DataMaps (Swayamdipta et al., 2020) show\nthe presence of distinct regions in a dataset—\nnamely, easy, hard and ambiguous—defined with\nrespect to a given model. These regions are\ndiscovered based on the training dynamics of a\nmodel, determined by the model’s confidence in\nthe true class, for each example, as well as the\nvariability of this confidence, throughout train-\ning epochs. Swayamdipta et al. (2020) show that\ntraining exclusively on the hard and ambiguous\nregions of the data results in high OOD perfor-\nmance, indicating lower prevalance of spurious\nbiases. The easy region is the largest in size\nfor RoBERTa; however, experiments showed that\ntraining exclusively on these examples hurt OOD\ngeneralization on different NLU tasks. Following\nthis work, we create DataMaps-Easy, DataMaps-\nAmbiguous, and DataMaps-Hard subsets for our\ndataset.\nFollowing Swayamdipta et al. (2020), we set\nthe target filtered subset size to 33% of the orig-\ninal training set for both filtering methods, but our\nfiltering additionally preserved the original label\nproportions. 
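The DataMaps regions just described come from per-example training dynamics: the mean of the model's confidence in the gold label across epochs, and the standard deviation of that confidence (variability). A minimal sketch of that bookkeeping, following Swayamdipta et al. (2020) in spirit but with our own variable names, not a verbatim reimplementation:

```python
import numpy as np

def data_map_regions(gold_probs_per_epoch: np.ndarray, frac: float = 0.33):
    """gold_probs_per_epoch: (n_epochs, n_examples) probability assigned to the
    gold label, recorded at the end of each training epoch."""
    confidence = gold_probs_per_epoch.mean(axis=0)      # high -> easy for the model
    variability = gold_probs_per_epoch.std(axis=0)      # high -> ambiguous for the model
    k = int(frac * confidence.shape[0])
    easy = np.argsort(-confidence)[:k]                  # most confidently learned examples
    hard = np.argsort(confidence)[:k]                   # least confidently learned examples
    ambiguous = np.argsort(-variability)[:k]            # examples the model flip-flops on
    return easy, ambiguous, hard

# toy run: 5 epochs x 1000 examples of recorded gold-label probabilities
probs = np.random.rand(5, 1000)
easy_idx, ambig_idx, hard_idx = data_map_regions(probs)
```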
We then fine-tune a RoBERTa-large classifier on these filtered subsets; see Appendix A.2 for more details.
4 Experiments: Lexical Biases\nWe investigate the effect of debiasing approaches (§3) on removing lexical biases in hate speech detection. First, we discuss the evaluation framework for measuring bias reduction (§4.1). We present quantitative (§4.2) and qualitative (§4.3) results on lexical bias removal for all debiasing approaches, and OOD evaluation for debiased training methods (§4.4). See Appendix A.3 for hyperparameters and other experimental settings.
4.1 Evaluation Framework\nWe report the performance of all models as overall accuracy and F1 with respect to the toxic class. Given that current hate speech systems tend to rely heavily on the presence of NOI, OI, and ONI mentions (§2.1) for labeling text as toxic, we use the false positive rate (FPR) over each of these categories to measure the degree of bias in the model, following Hardt et al. (2016) and Xia et al. (2020). Specifically, we report the FPR of a model on tweets containing NOI (FPR_NOI), OI (FPR_OI), and ONI (FPR_ONI), as well as the F1 corresponding to each of these classes. Intuitively, the lower the FPR_*, the less the model infers lexical associations for toxicity, and hence the less biased it is.
| | R_NOI | R_OI | R_ONI |\n| Original | 0.0445 | 0.2641 | 0.6718 |\n| Random (33% train) | 0.0345 | 0.2603 | 0.6683 |\n| AFLite (33% train) | 0.0434 | 0.2458 | 0.6016 |\n| DataMaps-Ambig. (33% train) | 0.0126 | 0.1968 | 0.5839 |\n| DataMaps-Hard (33% train) | 0.0081 | 0.1853 | 0.5849 |\n| DataMaps-Easy (33% train) | 0.0772 | 0.3661 | 0.7720 |\nTable 1: Lexical associations between toxicity and TOXTRIG mentions in the original dataset (Founta et al., 2018) and various filtered counterparts. Random, AFLite, and DataMaps all contain only 33% of the original data after filtering. A lower Pearson R correlation value indicates fewer superficial patterns in the dataset, i.e., less bias. Takeaway: The hard and ambiguous subsets given by DataMaps contain the lowest amount of lexical associations.
Evaluation for Filtered Datasets. We additionally consider metrics based on spurious lexical associations for the data filtering approaches. This measures the prevalence of spurious surface patterns in the filtered datasets, which might propagate to models trained on the data. Specifically, we report the Pearson correlation between the gold-standard toxicity label and whether or not a tweet contains NOI, OI, or ONI mentions. These correlations are denoted as R_NOI, R_OI, and R_ONI, respectively; lower values indicate a reduction in lexical biases.
Baselines. We consider comparison against two natural baselines: a vanilla RoBERTa-large classifier trained on the original dataset (Original), and a baseline trained on a random selection of the training data (Random), for comparison with the data filtering methods. Each filtered subset is trained on 33% of the training data.
4.2 Results for Lexical Bias Reduction\nFirst, we measure the reduction in lexical biases in the filtered datasets given by AFLite and DataMaps. As shown in Table 1, the subsets given by AFLite and by the ambiguous and hard regions produced by DataMaps reduce the overall associations between TOXTRIG words and toxicity, compared to the original and random baselines; DataMaps-Hard has the largest reduction. 
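The R_* association scores in Table 1 are plain Pearson correlations between the binary toxicity label and a binary indicator for the mention category, so the measurement itself takes only a few lines (a sketch with toy data; the function name is ours):

```python
import numpy as np

def lexical_association(labels: np.ndarray, has_mention: np.ndarray) -> float:
    """Pearson R between the binary toxicity label and a binary TOXTRIG-mention indicator."""
    return float(np.corrcoef(labels.astype(float), has_mention.astype(float))[0, 1])

# toy data: 8 tweets, toxic=1, and whether each contains an ONI mention
labels = np.array([1, 1, 1, 0, 0, 1, 0, 0])
has_oni = np.array([1, 1, 0, 0, 0, 1, 1, 0])
print(lexical_association(labels, has_oni))  # a strongly positive R signals a lexical shortcut
```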
On the other hand, as expected, DataMaps-Easy shows an increased association between TOXTRIG mentions and toxicity, showing that these examples display overt lexical biases.
Table 2 shows results for lexical bias reduction using both debiased training approaches, as well as models trained on datasets filtered using AFLite and all three regions from DataMaps. Both debiased training approaches, LMIXIN-ONI and LMIXIN-TOXTRIG, reduce FPR_ONI as well as FPR_OI by a large amount. However, both approaches also hurt in-distribution test performance, indicating that ONI and other TOXTRIG features are essential for good performance.[11] In contrast, the models trained on the hard and ambiguous subsets from DataMaps both preserve in-distribution performance, even though they are trained on only a third of the original data. They also reduce the rate of falsely predicting NOI mentions as toxic (FPR_NOI), while not showing much improvement for ONI and maintaining the FPR_OI of the original baseline.
Surprisingly, the model trained on the easy subset from DataMaps shows good bias reduction on the NOI and ONI categories, while matching the random selection baseline for OI. This is despite DataMaps-Easy showing an increased association between TOXTRIG mentions and toxicity (Table 1). Notably, the F1 for all categories suffers under this model, indicating that it is less competent than the baseline. These results suggest that reduced associations in the data might not necessarily lead to debiased models trained on the same data. Overall, no single approach outperforms all others across different categories for lexical debiasing.
4.3 Qualitative Analysis\nA qualitative study of the Founta et al. (2018) test set shows the presence of many annotation errors. We show three representative annotation errors in Table 3. The first example contains an atypical example of toxicity, towards white folks, which the annotators might have been unaware of. It also contains a link which annotators had access to, but not models. The second contains the word p*ss, on which the annotators may have relied for their assessment. The third encourages violence/abuse towards an identity which isn't typically the target of violence. Interestingly, the DataMaps-Easy predictions agree with all the gold standard annotations; perhaps such annotation errors and ambiguity are responsible for the performance discussed\n[11] When we combine the bias-only model and the full model, we obtain competitive performance (see Appendix A.4).
| | Test (12893): Acc | Test: F1 | NOI (602): F1 | NOI: FPR_NOI | OI (553): F1 | OI: FPR_OI | ONI (3236): F1 | ONI: FPR_ONI |\n| Vanilla | 94.21 (0.0) | 92.33 (0.0) | 89.76 (0.3) | 10.24 (1.3) | 98.84 (0.1) | 85.71 (0.0) | 97.34 (0.1) | 64.72 (0.8) |\n| LMIXIN-ONI | 89.65 (1.5) | 85.59 (2.5) | 87.04 (1.1) | 13.99 (1.5) | 98.87 (0.0) | 85.71 (0.0) | 87.87 (4.5) | 43.74 (3.1) |\n| LMIXIN-TOXTRIG | 90.44 (0.7) | 86.94 (1.1) | 85.47 (0.3) | 11.15 (1.7) | 97.64 (0.3) | 71.43 (0.0) | 90.41 (1.8) | 44.55 (1.5) |\n| Random (33% train) | 94.07 (0.1) | 92.18 (0.1) | 89.48 (0.4) | 9.33 (0.7) | 98.93 (0.0) | 83.33 (3.4) | 97.40 (0.1) | 67.15 (0.6) |\n| AFLite (33% train) | 93.86 (0.1) | 91.94 (0.1) | 90.21 (0.4) | 11.26 (1.1) | 98.90 (0.0) | 85.71 (0.0) | 97.32 (0.1) | 67.97 (3.4) |\n| DataMaps-Ambig. (33% train) | 94.33 (0.1) | 92.45 (0.1) | 89.16 (0.7) | 7.39 (1.0) | 98.87 (0.0) | 85.71 (0.0) | 97.54 (0.0) | 64.39 (1.4) |\n| DataMaps-Hard (33% train) | 94.50 (0.0) | 92.61 (0.1) | 89.54 (0.4) | 6.26 (0.9) | 98.84 (0.0) | 85.71 (0.0) | 97.43 (0.0) | 61.95 (1.1) |\n| DataMaps-Easy (33% train) | 94.00 (0.1) | 91.94 (0.2) | 86.81 (0.6) | 5.92 (0.7) | 98.87 (0.0) | 83.33 (3.4) | 97.17 (0.1) | 60.33 (3.8) |\nTable 2: Evaluation of lexical bias removal for all debiasing methods on the Founta et al. (2018) test set. 
Results\nshow the mean and s.d. (subscript) of accuracy and F1across 3 runs, as well as F1and false positive rate exclusively\non test examples containing specific T OXTRIGmentions— NOI, OI and O NI, along with the number of examples\nin each category. The lower the FPR *, the less the model infers lexical associations for toxicity. The first block\nshows debiased training approaches, along with the vanilla classifier, which are trained on the full dataset. The\nsecond block shows data filtering approaches, all trained on only 33% of the training data. Best performance in\neach block is boldfaced. Takeaway: While data filtering approaches achieve overall higher performance, debiased\ntraining approaches perform better on lexical bias reduction, in aggregate.\nGold DM-\nHardDM-\nEasy\n@user THIS? LMAOOOOO...do yourself a solid and stay out of Black people’s mentions and\nmind your caucasia.../leaf A /leaf\nRT @user I wish I wasn’t so annoying like I even piss myself off A /leaf A\n@user If you want to attack people, attack fundamentalists of all faiths. Attack those who\ncondemn 1.5bn people out of hand./leaf A /leaf\nTable 3: Examples of test set tweets with their gold-standard annotations and predictions from models trained on\nDataMaps-Hard (DM-Hard) and DataMaps-Easy (DM-Easy) subsets. Adenotes tweets with toxic labels, and /leaf\nrepresents non-toxic labels. We anonymize the usernames to protect user privacy.\nin §4.2. These annotation ambiguities might also\nimpair our measurement for models’ performance\nand debiasing effects, and expose a limitation of\nthese datasets.\n4.4 Adversarial Evaluation: O NI-Adv\nTo further study the reliance of debiased models\non the O NI words, we use the test set from Di-\nnan et al. (2019), which contains 3000 manually\ncreated sentences, 300 toxic. The toxic sentences\nbarely contain any O NI mentions, e.g., Cheese is\nmade by french people who smell . We call this\ntest set O NI-Adv (for adversarial) since it chal-\nlenges models with a reversal in the association\nbetween toxicity and offensive non-identity words\n(e.g., “ f*ck”, “sh*t”).\nWe reportF1for all models in Figure 2, which\nshows how well a model identifies toxicity in of-\nfensive tweets that do not contain overtly lexical\ncues of toxicity. The debiased training approaches\nimprove over the baselines; data filtering methods\ndo not. One reason for this might be that data\nfiltering methods were trained on much less datathan both LM IXIN models. Regardless, none of\nthe models we test are good at predicting subtle,\nnon-overt toxicity.\n5 Experiments: Dialectal and Racial\nBiases\nWe test the efficacy of the bias reduction methods\nfrom §3 for dialectal bias (§2.2) reduction.\n5.1 Dialectal Biases\nFor our dialectal bias experiments, we first infer\nthe dialect of a tweet as described in §2.2. Then,\nanalogous to the lexical bias evaluation, we quan-\ntify the dialectal debiasing using the Pearson’s cor-\nrelation between estimated probabilities of AAE\nand toxicity ( RAAE), and the false positive rates of\nmodels on AAE tweets (FPR AAE). See Appendix\nA.3 for hyperparameters and other experimental\nsettings.\nResults in Table 4 show that almost all data fil-\ntering and debiasing methods reduce dialectal bi-\nases, with DataMaps-Easy as the exception (con-\n3149\nFigure 2: Challenge set evaluation for lexical biases,\ncomparing all debiasing methods with baselines, using\nthe O NI-Adv test set. 
Takeaway:F1(\")measures show\nthat all models perform poorly at identifying toxic text\nnot containing overtly lexical cues of toxicity. In gen-\neral, debiased training approaches outperform the orig-\ninal model on this challenge set, while data filtering is\nnot as effective.\nsistent with Table 1). Notably, DataMaps-Hard\nperforms the best at dialectal debiasing, both in\nterms of toxicity- AAE correlation ( RAAE) and in\nterms of false flagging of toxicity (FPR AAE). Inter-\nestingly, most models’ decrease in false flagging\nis small, suggesting room for improvement.\n5.2 Racial Biases\nTo quantify the real-world impact of dialect-\nbased racial bias, we measure the rates of toxi-\ncity predicted by models on a corpus of tweets\nfor which the race of authors is available, but\nnot annotations of toxicity. Specifically, we con-\nsider the dataset released by Preot ¸iuc-Pietro and\nUngar (2018), which consists of 5.4M tweets,\ncollected from 4,132 survey participants (3,184\nWhite, 374 African American) with self-reported\nrace/ethnicity and Twitter user handles.12\nWe quantify our models’ racial bias by measur-\ning the difference in rates of flagging tweets by\nAfrican American authors and those by white au-\nthors, following Sap et al. (2019).13\nListed in Table 5, our results show that auto-\nmatic debiasing methods do not consistently de-\ncrease the racial discrepancy in flagging toxicity.\nNotably, the toxicity rates on tweets by African\nAmerican authors—and the diferences compared\nto white authors—are similar across all debias-\n12For efficiency, we randomly select 12k tweets from the\ndataset as the OOD test set.\n13Note that we assume that authors from all races have the\nsame likelihood of writing toxic language.Test\nRAAE#F1\" FPR AAE#\nVanilla 0.4079 92.33 0:0 16.84 0:3\nLM IXIN-Dialect - 92.26 0:1 16.07 0:433% trainRandom 0.4027 92.18 0:1 16.67 0:6\nAFLite 0.3577 91.94 0:1 16.84 0:8\nDataMaps-Ambig. 0.2965 92.45 0:1 15.99 0:4\nDataMaps-Hard 0.2878 92.61 0:1 13.71 0:2\nDataMaps-Easy 0.5347 91.94 0:2 19.46 2:8\nAAE-relabeled 0.3453 91.64 0:3 12.69 0:0\nTable 4: Dialectal bias evaluation for all debiasing\nmethods (§5), as well as the relabeling approach (§6)\non the Founta et al. (2018) test set. We report F1and\nthe false positive rate with respect to tweets in AAE\n(FPR AAE), reflecting dialectal bias (lower is less bi-\nased), showing mean and s.d. (subscript) across 3 runs.\n(Top Block) Debiased training approaches, along with\nthe vanilla classifier, are all trained on the full dataset.\n(Middle Block) Random, AFLite and DataMaps all\nare trained on only 33% of the training data. Best\nperformance for each training set size is in boldface.\nTakeaway: Both debiasing approaches improve per-\nformance over baselines, with DataMaps-Hard proving\nthe most effective at debiasing. (Bottom Block) AAE-\nrelabeling results in a model which despite following a\nnoisy process yields even larger improvements for di-\nalectal debiasing.\ning methods and baselines, except for DataMaps-\nEasy, which shows the most racial bias in toxic-\nity flagging. Surprisingly, DataMaps-Hard, which\nmitigated dialectal bias the best out of all debi-\nasing methods, also shows high discrepancy be-\ntween author races. 
Confirming previous results,\nthis suggests that debiasing these systems requires\nmore than automatic debiasing methods.\n6 Towards Data Relabeling\nBased on our quantitative and qualitative analy-\nses, we believe there still is room for improve-\nment in debiasing hate speech detection. There-\nfore, we turn our attention to the role of label noise\nin datasets. Partly inspired by our qualitative anal-\nyses of debiased models’ predictions, we design\na proof-of-concept study where we automatically\ncorrect the label of tweets using a(n automatic) di-\nalectal translation of the tweet, inspired by previ-\nous work showing that highlighting AAE tweets’\ndialect led them to be labeled as less toxic (Sap\net al., 2019). We conclude this study by discussing\nthe limitations and ethical implications of the syn-\nthetic data, and cautioning against its real-world\napplication.\n3150W-Tox. AA-Tox. \u0001#AA/W #\nOriginal 7.24 12.61 5.37 1.74\nLM IXIN-Dialect 7.50 12.55 5.06 1.6733% trainRandom 8.28 13.24 4.96 1.60\nAFLite 7.32 11.64 4.33 1.59\nDataMaps-Ambig. 6.75 12.17 5.42 1.80\nDataMaps-Hard 6.36 11.67 5.31 1.84\nDataMaps-Easy 8.46 16.30 7.83 1.94\nAAE-relabeled 6.93 10.60 3.67 1.53\nTable 5: Racial disparity in toxicity prediction re-\nported on Preot ¸iuc-Pietro and Ungar (2018). W-Tox.\nindicates % of white users’ tweets being flagged as\ntoxic, AA-Tox. indicates % of African American users’\ntweets being flagged as toxic, \u0001refers to the differ-\nence between AA-Tox. and W-Tox., and AA/W refers\nto the ratio between AA-Tox. and W-Tox. Takeaway:\nMethods generally fail in debiasing on this OOD test\nset except the relabeling approach shows some benefit.\nFocusing on dialectal bias, our key assumption\nis that an AAE tweet and its corresponding WAE\nversion should have the same toxicity label, there-\nfore toxic AAE tweets whose WAE versions are\nnon-toxic are candidates for label correction.14\nHowever, gold-standard translations of AAE to\nWAE would require qualified translators, and au-\ntomatic AAE-to-WAE translation systems do not\nexist, to the best of our knowledge. Therefore,\nwe create a proof-of-concept study—we set up a\nAAE toWAE “translation” system using the few-\nshot capabilities of the GPT-3 language model\n(Brown et al., 2020). Under this mechanism, we\nprompt GPT-3 with four translation pairs (taken\nfrom Spears, 1998) and an AAE tweet from our\ntraining data, and generate its WAE “translation”.\nThe list of prompts, as well as further details, are\nprovided in Appendix C. Note that we do notrec-\nommend this approach to build large scale parallel\ndata for dialects, as discussed under ethical impli-\ncations and limitations.\nNext, as per our heuristic, we only relabel toxic\nAAE tweets whose WAE translation is predicted as\nnon-toxic by either our vanilla classifier trained\non the original Founta et al. (2018) dataset, or an\nidentical classifier trained on the WAE translated\ntweets. The resulting dataset ( AAE-relabeled) is\nthe same size as the original dataset, but with 954\n(12%) out of 8260 toxic AAE tweets relabeled as\n14Note that this assumption does not hold for lexical items,\nbecause substituting lexical items (e.g., swapping a minority\nmention for a majority mention) would drastically change the\ndenotational meaning of the sentence.non-toxic (examples in Table 6). To assess the va-\nlidity of the relabeling, the first three authors man-\nually annotated toxicity of 50 randomly selected\nrelabeled tweets. 
On average, authors agreed with\n84% of the relabeling decisions.\nThen, we evaluate the dialectal bias of AAE-\nrelabeled and quantify the dialect and racial pre-\ndiction biases from a RoBERTa-large classifier\ntrained on AAE-relabeled, following §5. As shown\nin the last row of Table 4, this relabeling scheme\ndecreases dialectal bias more than any other debi-\nasing method, specifically as measured by the FPR\nonAAE tweets, with one point drop in F1score.\nTheF1score on the “gold” test data (Table 4) are\nnot fully reliable, as test data contain label biases\nand better performance could come from exploit-\ning these biases. As shown in Table 5, the model\ntrained on AAE-relabeled has the lowest racial dis-\nparity in toxicity flagging rates compared to all\nother methods.\nThese results highlight that debiasing meth-\nods are much less effective at mitigating dialec-\ntal dataset biases compared to data relabeling.\nFor future investigations, we recommend obtain-\ning human-written AAE-WAE pairs (e.g., as done\nby Groenwold et al., 2020). Additionally, to en-\nsure less biased toxicity labeling, we recommend\nrecruiting AAE speakers or experts for avoid-\ning over-identification of AAE-markers as toxic\n(Spears, 1998; Croom, 2013). Alternatively, we\nrecommend exploring more holistic representa-\ntions of social biases or toxicity (e.g., Social Bias\nFrames; Sap et al., 2020).\nEthical Implications & Limitations\nThe above synthetic setting is meant to illustrate\nthe role of labeling quality on biases in annota-\ntions. We strongly caution against using this ap-\nproach in real-world applications, such as build-\ning parallel datasets for dialects. First, due to\nhow its training data was selected, GPT-3 has\nlikely not been exposed to many African Ameri-\ncan English varieties during training (Jo and Ge-\nbru, 2020). Second, pretrained language models\nare known to generate toxic language at non-trivial\nrates (Gehman et al., 2020), which could cause dif-\nferential toxicity in the translations.\n7 Related Work\nDebiasing Toxicity Detection As the popularity\nof hate speech and toxic language detection sys-\n3151AAE GPT-3 WAE Translation Gold New\nRT @user I can’t stand a bad texter bruh like don’t\nbe mad if I forget about yo assRT @user I can’t stand a bad texter bro like don’t\nbe mad if I forget about youA/leaf\nRT @user Retweet if you fuck with this!!!! RT @user Retweet if you like this! A/leaf\nRT @user That nigga needs anger management RT @user That guy needs anger management A/leaf\nRT @user oh fucking hell take a day off man RT @user oh fuck take a day off man A A\nTable 6: Examples of AAE tweets with their GPT-3 based WAE translation, and original gold standard and new\nannotations based on AAE-relabeled. For the first three tweets, the (biased) gold labels are changed by models\npredicting the new labels on their WAE translations. Aindicates presence of toxicity, and /leafrepresents non-\ntoxic. We anonymize the usernames to protect user privacy.\ntems has grown, several biases have been found in\ndataset and models, spurring various debiasing ef-\nforts to mitigate these individual biases (e.g., gen-\nder bias, racial bias; Park et al., 2018; Sap et al.,\n2019; Davidson et al., 2019). Some work tackles\nidentity-based biases, e.g., using data re-balancing\n(Dixon et al., 2018), or adversarial feature learn-\ning (Vaidya et al., 2019). Less work has tackled\nracial or dialectal bias. Notably, Xia et al. 
(2020) use adversarial training to prevent the model from associating toxicity with AAE, showing only small improvements in fairness. Based on those results, we do not explore adversarial methods, opting instead for ensemble-based methods of predefined bias reduction. In contemporary work, Mozafari et al. (2020) use a re-weighting mechanism, which shows some effect in mitigating racial bias. We leave evaluating this method in our setting for future work. In contrast to all previous work, our experiments also measure the effectiveness of bias-agnostic methods.

Other General Debiasing Methods
Several approaches for debiasing NLU tasks have been proposed lately. Some approaches rely on adversarial training to remove protected attributes (e.g., gender or race) from a model's internal representations (Zhang et al., 2018; Wang et al., 2019; Xia et al., 2020). Other approaches include confidence regularization (Utama et al., 2020), as well as other product-of-experts approaches (He et al., 2019; Karimi Mahabadi et al., 2020) similar to the debiased training approach from Clark et al. (2019), which is the only debiased training we employ, due to its relatively strong performance.

8 Conclusion
We investigate whether toxic language detection systems can be debiased using recently introduced methods for debiasing text classification in NLU tasks. Focusing on two types of biases, lexical and dialectal, our experiments show that these methods face significant challenges in reducing the biased behavior in toxicity detectors. This indicates that biases in toxic language detection might be different in nature compared to the spurious associations studied in typical NLU settings. We studied a synthetic scheme for relabeling examples with potential dialectal biases; our results indicate that correcting noisy labels results in better bias reduction. Our findings suggest that, instead of solely relying on the development of automatic debiasing for existing, imperfect datasets, future work should focus primarily on the quality of the underlying data for hate speech detection, such as accounting for speaker identity and dialect. Indeed, such efforts could act as an important step towards making systems less discriminatory, and hence safe and usable.

Acknowledgments
We thank the anonymous reviewers and Laura Vianna for helpful comments on this work. This research was supported in part by NSF grants 1813153 and 1714566.

References
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of “bias” in NLP. In Proc. of ACL.
Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proc. of EMNLP.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proc. of EMNLP.
Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. In Proc. of ICML.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proc. of NeurIPS.
Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proc. of EMNLP.
Adam M. Croom. 2013. How to do things with slurs: Studies in the way of derogatory words. In Language & Communication.
Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Abusive Language Workshop (at ACL).
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media.
Thiago Dias Oliva, Dennys Marcelo Antonialli, and Alessandra Gomes. 2020. Fighting hate speech, silencing drag queens? Artificial intelligence in content moderation and risks to LGBTQ voices online. In Sexuality & Culture.
Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proc. of EMNLP.
Lucas Dixon, John Li, Jeffrey Scott Sorensen, Nithum Thain, and L. Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proc. of AES.
Marta Dynel. 2012. Swearing methodologically: The (im)politeness of expletives in anonymous commentaries on YouTube. In Journal of English Studies.
Marta Dynel. 2015. The landscape of impoliteness research. In Journal of Politeness Research.
Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Proc. of WSM.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of EMNLP.
Lisa Green. 2002. African American English: A Linguistic Introduction. Cambridge University Press.
Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, and William Yang Wang. 2020. Investigating African-American vernacular English in transformer-based text generation. In Proc. of EMNLP.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proc. of NAACL.
Jessica Guynn. 2020. What civil rights groups want from Facebook boycott: Stop hate speech and harassment of Black users.
Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. In Proc. of NeurIPS.
He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In EMNLP Workshop on Deep Learning Approaches for Low-Resource NLP.
Eun Seo Jo and Timnit Gebru. 2020. Lessons from archives: Strategies for collecting sociocultural data in machine learning. In Proc. of FAT.
Rabeeh Karimi Mahabadi, Yonatan Belinkov, and James Henderson. 2020. End-to-end bias mitigation by modelling biases in corpora. In Proc. of ACL.
Gabriele Kasper. 1990. Linguistic politeness: Current research issues. In Journal of Pragmatics. Elsevier.
Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santiago, and Vivek Datta. 2020. Intersectional bias in hate speech and abusive language datasets.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Marzieh Mozafari, Reza Farahbakhsh, and Noël Crespi. 2020. Hate speech detection and racial bias mitigation in social media based on BERT model. In PLOS ONE. Public Library of Science.
Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proc. of EMNLP.
Daniel Preoţiuc-Pietro and Lyle Ungar. 2018. User-level race and ethnicity predictors from twitter text. In Proc. of COLING.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proc. of EMNLP, pages 2383–2392.
Sarah T. Roberts. 2019. Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
Jonathan Rosa. 2019. Looking like a Language, Sounding like a Race. Oxford University Press.
Jonathan Rosa and Nelson Flores. 2017. Unsettling race and language: Toward a raciolinguistic perspective. In Language in Society. Cambridge University Press.
Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, and Michael Wojatzki. 2017. Measuring the reliability of hate speech annotations: The case of the European refugee crisis. In NLP4CMC Workshop.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proc. of ACL.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proc. of ACL.
Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, and Noah A. Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task. In Proc. of CoNLL.
Arthur K. Spears. 1998. African-American language use: Ideology and so-called obscenity. In African-American English: Structure, History and Use. Routledge, New York.
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proc. of EMNLP.
Björn Technau. 2018. Going beyond hate speech: The pragmatics of ethnic slur terms. Lodz Papers in Pragmatics, 14(1):25–43.
Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020. Mind the trade-off: Debiasing NLU models without degrading the in-distribution performance. In Proc. of ACL.
Ameya Vaidya, Feng Mai, and Yue Ning. 2019. Empirical analysis of multi-task learning for reducing model bias in toxic comment detection. In Proc. of ICWSM.
Bertie Vidgen, Helen Margetts, and Alex Harris. 2019. How much online abuse is there? In Alan Turing Institute.
Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and V. Ordonez. 2019. Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. In Proc. of ICCV.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc. of NAACL.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing.
Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detection. In Proc. of SocialNLP.
Danyaal Yasin. 2018. Black and banned: Who is free speech for?
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In Proc. of AES. Association for Computing Machinery.

Appendix

A Further Details for Models

A.1 Model Debiasing
The LEARNED-MIXIN ensemble allows the model to explicitly determine how much to trust the bias given the input:

\hat{p}_i = \mathrm{softmax}\{\log(p_i) + g(x_i) \log b_i\}

where x_i is the i-th input text, p_i and b_i are the toxicity predictions produced by RoBERTa and the bias-only model, respectively, and g is a parametric function defined as \mathrm{softplus}(w \cdot h_i), where w is a learned vector, h_i is the last hidden layer of the model for example x_i, and \mathrm{softplus}(x) = \log(1 + \exp x). To prevent the LEARNED-MIXIN ensemble from ignoring b_i, Clark et al. (2019) add an entropy penalty H to the loss:

R = \alpha H(\mathrm{softmax}\{g(x_i) \log b_i\})

where H(z) = -\sum_j z_j \log z_j is the entropy and \alpha is a hyperparameter.
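For concreteness, the combination above can be written in a few lines of PyTorch. The following is a minimal sketch of the LEARNED-MIXIN ensemble and its entropy penalty; the function and tensor names are ours, and the assumption that both models expose per-class probabilities (rather than logits) is an illustrative choice, not something fixed by the paper.

```python
import torch
import torch.nn.functional as F

def learned_mixin(main_probs, bias_probs, hidden, w, alpha):
    """LEARNED-MIXIN ensemble (Clark et al., 2019) as in Appendix A.1.

    main_probs: (batch, classes) probabilities from the main model (RoBERTa).
    bias_probs: (batch, classes) probabilities from the bias-only model.
    hidden:     (batch, dim) last hidden layer of the main model.
    w:          (dim,) learned vector, so g(x) = softplus(w . h).
    alpha:      weight of the entropy penalty (a hyperparameter).
    """
    g = F.softplus(hidden @ w)                        # (batch,): trust in the bias
    mixed = torch.log(main_probs) + g.unsqueeze(-1) * torch.log(bias_probs)
    p_hat = F.softmax(mixed, dim=-1)                  # ensembled prediction

    # R = alpha * H(softmax(g * log b)): keeps the ensemble from ignoring b.
    scaled_bias = F.softmax(g.unsqueeze(-1) * torch.log(bias_probs), dim=-1)
    entropy = -(scaled_bias * torch.log(scaled_bias + 1e-12)).sum(dim=-1)
    return p_hat, alpha * entropy.mean()
```

Only the main model is kept at test time, so the bias-only model shapes training without affecting inference.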
A.2 Data Filtering
For the data filtering methods, we first filter the data to 50% of the original data, as in Swayamdipta et al. (2020). Then we further downsample the dataset to 33% of the original data so that each training set has the same toxic ratio as the original training set. This step avoids confounding our results with different toxic ratios among the training sets.

A.3 Training Settings
For all the experiments, we fine-tune RoBERTa-large (Liu et al., 2019) over the corresponding corpus with one GTX 2080 Ti. We use the default hyperparameters as provided in the HuggingFace Transformers library (Wolf et al., 2019), with two major changes: we use a learning rate of 10^{-5} and a batch size of 8 in all experiments.

A.4 Prediction Combining with Bias-only Model
To rule out the possibility that our LMIXIN-TOXTRIG/ONI models are simply under-trained, which would explain the decrease in in-distribution performance, we use the joint prediction of the main and bias-only models to infer the in-distribution test set; they obtain 94.15% and 94.17% accuracy, respectively. This is competitive performance, as shown in Table 2.

B Alternative Dataset of Toxic Language
Davidson et al. (2017) collected data from Twitter, starting with 1,000 terms from HateBase (an online database of hate speech terms) as seeds, a process that relies on lexical biases. We find that performing data filtering methods over this dataset leads to degenerate behaviour. Specifically, as shown in Table 7, the easy region demonstrates the least spurious correlation due to its heavily skewed class distribution, which further prevents us from downsampling to control the toxic ratio. We also train LMIXIN-TOXTRIG and LMIXIN-dialect over the dataset. Table 8 shows that the FPR of the debiased model increases instead, except for the OI category, and the results in Table 9 behave in line with Table 4.

C Few-shot AAE-to-WAE Translation
Note that we do not recommend the following approach to build large-scale parallel data for dialects, as discussed under ethical implications and limitations (§6).
We use GPT-3 (Brown et al., 2020) to create a few-shot AAE-to-WAE translation system, using the following set of example translation pairs drawn from Spears (1998):

AAE: Get your triflin' ass out of here.
WAE: Get your trifling self out of here.
AAE: I saw his ass yesterday.
WAE: I saw him yesterday.
AAE: His ass is gonna get fried.
WAE: He is gonna get fried.
AAE: ⟨tweet⟩
WAE:

Note that Spears (1998) refers to WAE as White language varieties, and deals with English prevalent in the United States.
We prepend the formatted example pairs to each AAE tweet in our training data, and generate the translation from GPT-3 using top-0.95 nucleus sampling with a temperature of 0.5. Prompts, formatting, and generation parameters were chosen based on manual inspection of the output.

Table 7: Lexical and dialectal associations between toxicity in the original dataset (Davidson et al., 2017) and various filtered counterparts. Random, AFLite, and DataMaps all contain only 50% of the original data after filtering. (We could not perform downsampling on these datasets due to their heavily skewed label distribution.) A lower Pearson R correlation indicates fewer superficial patterns in the dataset, hence less bias. The easy subset gives the best results here due to its severely imbalanced label distribution.

| | Toxic Ratio | R_NOI ↓ | R_OI ↓ | R_ONI ↓ | R_AAE ↓ |
| Original y | 0.8308 | 0.0287 | 0.4320 | 0.2610 | 0.4061 |
| Random | 0.8312 | 0.0288 | 0.4312 | 0.2621 | 0.4011 |
| AFLite | 0.7669 | 0.0342 | 0.4708 | 0.2835 | 0.4236 |
| DataMaps-Ambig. | 0.6736 | 0.0493 | 0.4683 | 0.3230 | 0.4445 |
| DataMaps-Hard | 0.6645 | 0.0521 | 0.4533 | 0.3190 | 0.4426 |
| DataMaps-Easy | 0.9972 | 0.0135 | 0.0771 | 0.0396 | 0.0928 |

Table 8: Lexical bias removal evaluation for debiasing methods. Original refers to the model trained over the full training set. The test set is further categorized into tweets that contain relevant TOXTRIG words. F1 indicates model performance, while the false positive rates (FPR*) reflect model bias; the lower the FPR*, the less biased the model tends to be.

| | Test Acc. ↑ | Test F1 ↑ | NOI F1 ↑ | NOI FPR ↓ | OI F1 ↑ | OI FPR ↓ | ONI F1 ↑ | ONI FPR ↓ |
| Original | 96.37 | 97.81 | 96.42 | 25.00 | 99.86 | 57.14 | 99.57 | 63.64 |
| LMIXIN-TOXTRIG | 96.15 | 97.69 | 96.19 | 28.57 | 99.78 | 42.86 | 99.28 | 72.73 |

Table 9: Dialectal bias evaluation for all debiasing methods, on both the in-distribution test set and the out-of-distribution dialect and race priming test sets. In addition to accuracy and F1, we report the false positive rate with respect to tweets in AAE (FPR_AAE), reflecting dialectal bias (lower is less biased). Each method is based on a RoBERTa-large classifier.

| Debiasing Method | R_AAE | Test Acc. ↑ | Test F1 ↑ | FPR_AAE ↓ |
| Original | 0.4079 | 96.37 | 97.81 | 24.76 |
| LMIXIN-Dialect | – | 96.48 | 97.88 | 22.86 |",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PYz-jgEiQHJw",
"year": null,
"venue": "HCI (28) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=PYz-jgEiQHJw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The E-training Caravans: An e-Inclusion Initiative in Saudi Arabia",
"authors": [
"Hend S. Al-Khalifa"
],
"abstract": "Today’s technological world requires that individuals are capable of using Information and Communications Technology (ICT) effectively. In fact, more and more services are offered using technology, e.g. communication with family and friends, carrying out business, and interacting with governments. To close the gap between \"the technology-empowered communities and the technology-excluded communities\" an initiative called the e-training caravan is presented in this paper. This initiative aims to enable the segments of society from dealing with telecommunications and information technology effectively, bridging the digital divide and raising awareness of the importance of ICT for all individuals. This initiative focuses on population of rural areas and low-income areas. In this paper we discuss the e-training caravan initiative proposed by the Ministry of Communication and Information Technology (MCIT) in Saudi Arabia, and highlight its objectives and training program. We also discuss the results obtained after running the caravan for one year along with the encountered barriers.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "RuXpoqoKch",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200388",
"forum_link": "https://openreview.net/forum?id=RuXpoqoKch",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Black-Box Adversarial Attacks Against Deep Learning Based Malware Binaries Detection with GAN",
"authors": [
"Junkun Yuan",
"Shaofang Zhou",
"Lanfen Lin",
"Feng Wang",
"Jia Cui"
],
"abstract": "For efficient malware detection, there are more and more deep learning methods based on raw software binaries. Recent studies show that deep learning models can easily be fooled to make a wrong decision by introducing subtle perturbations to inputs, which attracts a large influx of work in adversarial attacks. However, most of the existing attack methods are based on manual features (e.g., API calls) or in the white-box setting, making the attacks impractical in current real-world scenarios. In this work, we propose a novel attack framework called GAPGAN, which generates adversarial payloads (padding bytes) with generative adversarial networks (GANs). To the best of our knowledge, it is the first work that performs end-to-end black-box attacks at the byte-level against deep learning based malware binaries detection. In our attack framework, we map input discrete malware binaries to continuous space, then feed it to the generator of GAPGAN to generate adversarial payloads. We append payloads to the original binaries to craft an adversarial sample while preserving its functionality. We propose to use a dynamic threshold for reducing the loss of the effectiveness of the payloads when mapping it from continuous format back to the original discrete format. For balancing the attention of the generator to the payloads and the adversarial samples, we use an automatic weight tuning strategy. We train GAPGAN with both malicious and benign software. Once the training is finished, the generator can generate an adversarial sample with only the input malware in less than twenty milliseconds. We apply GAPGAN to attack the state-of-the-art detector MalConv and achieve 100% attack success rate with only appending payloads of 2.5% of the total length of the data for detection. We also attack deep learning models with different structures under different defense methods. The experiments show that GAPGAN outperforms other state-of-the-art attack models in efficiency and effectiveness.",
"keywords": [],
"raw_extracted_content": "Black-Box Adversarial Attacks Against Deep Learning\nBased Malware Binaries Detection with GAN\nJunkun Yuan1, Shaofang Zhou1, Lanfen Lin*1and Feng Wang2and Jia Cui3\nAbstract. For efficient malware detection, there are more and more\ndeep learning methods based on raw software binaries. Recent stud-\nies show that deep learning models can easily be fooled to make awrong decision by introducing subtle perturbations to inputs, whichattracts a large influx of work in adversarial attacks. However, mostof the existing attack methods are based on manual features (e.g.,\nAPI calls) or in the white-box setting, making the attacks impracti-\ncal in current real-world scenarios. In this work, we propose a novel\nattack framework called GAPGAN, which generates adversarial pay-\nloads (padding bytes) with generative adversarial networks (GANs).To the best of our knowledge, it is the first work that performs end-to-end black-box attacks at the byte-level against deep learning basedmalware binaries detection. In our attack framework, we map inputdiscrete malware binaries to continuous space, then feed it to thegenerator of GAPGAN to generate adversarial payloads. We appendpayloads to the original binaries to craft an adversarial sample whilepreserving its functionality. We propose to use a dynamic thresholdfor reducing the loss of the effectiveness of the payloads when map-ping it from continuous format back to the original discrete format.For balancing the attention of the generator to the payloads and theadversarial samples, we use an automatic weight tuning strategy. Wetrain GAPGAN with both malicious and benign software. Once the\ntraining is finished, the generator can generate an adversarial sam-\nple with only the input malware in less than twenty milliseconds. We\napply GAPGAN to attack the state-of-the-art detector MalConv and\nachieve 100% attack success rate with only appending payloads of\n2.5% of the total length of the data for detection. We also attackdeep learning models with different structures under different de-fense methods. The experiments show that GAPGAN outperformsother state-of-the-art attack models in efficiency and effectiveness.\n1 INTRODUCTION\nDeep neural networks have achieved great success, more and morework prefers to use deep learning for efficient malware detection.Among them, some work (e.g., [5] and [12]) detects malware basedon manual features (e.g., API calls) which may contain malicious be-haviour of a program, some work (e.g., [21], [24], and [4]) directlyuses information of software without running it, and other work (e.g.,[13] and [20]) integrates the above strategies or uses other methods,like visualization. Recently, there is a trend of using raw binaries for\n1College of Computer Science and Technology, Zhejiang University, Zhe-\njiang, China, {yuanjk, tanes, llf}@zju.edu.cn\n* Corresponding author is Lanfen Lin\n2Department of Computer and Information Technology Zhejiang Police Col-\nlege, Zhejiang, China, [email protected]\n3China Information Technology Security Evaluation Center, Beijing, China,\[email protected] detection, which can efficiently mine the latent relationships\namong different sections of the file. 
With the rapid development ofmalware, the defense efficiency becomes crucial in today’s realis-tic scenarios, making the end-to-end detection based on raw binariesmore promising.\nHowever, many research work ([25], [7], [17], [9], and [27]) has\ndemonstrated that deep neural networks are susceptible to adversar-ial attacks. The attackers add small perturbations to the original datathat is imperceptible to humans, which can mislead the classifiersto do wrong decisions. These studies point out a serious threat tothe security of deep learning algorithms and AI applications. In mal-ware detection, most of the adversarial attacks (e.g., [14], [15], and[3]) rely on the complete information of the detector (i.e., white-boxattacks). However, there are limitations to this kind of attack, e.g.,the target model must be fully exposed to the attackers. Meanwhile,previous attack work (e.g., [11], [2], and [23]) are based on manualfeatures that are speculated to be used for training the detector. Ifthe speculation is wrong or once the defender changes its trainingstrategies, this kind of attacks will be invalid. The wide use of rawbinaries based detection also makes such an attack that needs plentyof resources and time to extract features inapplicable.\nDifferent from the manual features, the original binaries data can-\nnot be simply changed even with small modifications, or their func-tionality will be damaged. Besides, the size of binaries data varieswidely, which further increases the attack difficulty. We also find thatsubtle perturbations will be ignored when transforming adversarialpayloads in continuous space back to discrete binary when we savethe generated adversarial samples, which affects the effectiveness ofadversarial attacks. Therefore, how to perform effective and practi-cal black-box attacks to the deep learning models based on malwarebinaries while protecting the original functionality remains a greatchallenge.\nIn this paper, we put forward a novel attack framework GAPGAN\nwhich generates adversarial payloads via GANs. To the best of ourknowledge, it is the first work that performs end-to-end black-boxattacks at the byte-level against deep learning based malware binariesdetection. We apply GAPGAN to attack the state-of-the-art detectorMalConv [21] as well as other deep learning models with different\nstructures. The experiments show that our model can achieve a high\nattack success rate, and it outperforms other state-of-the-art attackmethods in efficiency and effectiveness.\nWe have the following contributions:\n1. We propose a novel adversarial attack framework GAPGAN,\nwhich performs end-to-end black-box attacks at the byte-levelagainst deep learning based malware binaries detection, makingthe attacks more efficient and effective.\n2. In GAPGAN, the generator generates adversarial payloads and ap-ECAI 2020\nG.D. Giacomo et al. (Eds.)\n© 2020 The authors and IOS Press.\nThis article is published online with Open Access by IOS Press and distributed under the terms\nof the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).\ndoi:10.3233/FAIA2003882536\npends it to the original data to craft a malware adversarial sample\nwhile preserving its functionality. Once the training process is fin-ished, the generator can efficiently generate each adversarial sam-ple in less than twenty milliseconds.\n3. We propose to use a dynamic threshold for reducing the loss of the\neffectiveness of the payloads when mapping it from continuousspace back to discrete binaries. 
For balancing the attention of thegenerator to the payloads and the adversarial samples, we adopt\nan automatic weight tuning strategy.\n4. We apply GAPGAN to attack the state-of-the-art malware detector\nMalConv. The experiments show that the adversarial samples gen-\nerated by GAPGAN can achieve an attack success rate of 100%when appending adversarial payloads of 2.5% of the length of thedata for detection. The experiments also show that GAPGAN out-performs other state-of-the-art attack methods in efficiency andeffectiveness under different defenses.\nThe remaining part of the paper is composed of five parts: In Sec-\ntion 2, we introduce the background and related work. In Section 3,we explain the details of our attack framework GAPGAN. In Section4, we describe the experimental setup, including datasets, metrics,and target models. In Section 5, we show the results of our experi-ments. In Section 6, we sum up our work and give a conclusion.\n2 BACKGROUND AND RELATED WORK\n2.1 Adversarial Attacks Against Malware\nDetection\nMost of the traditional machine learning and deep learning meth-\nods for malware detection (e.g., [5] and [12]) focus on manual fea-tures that are extracted from programs’ behavior information, likesignature and API calls. For this kind of detection methods, earlierattack work is mainly based on the manual features which are sup-posed to be used by the defender. Some work proposes to use APIs\nas binary features, then adopt deep learning models to generate ad-\nversarial samples ([11] and [8]). A different approach based on APIcall sequences uses an optimization process to perform adversarialattacks [23]. [2] proposes to use reinforcement learning for attack-ing, it comprises numerous manual information as features, e.g., PEheader metadata, section metadata and byte histogram. Xu et al. [29]put forward a genetic programming based attack method to performstochastic manipulations on the structures of file. However, these at-tacks need expert experience and plenty of time to obtain effectivefeatures, and once the features used for attacking is known by thedefenders, the fast update detectors can easily evade the attacks.\nRecent malware detection work (e.g., [21], [24], and [4]) pays\nmore attention to use deep learning models on raw software binaries,as deep neural networks can efficiently mine latent characteristics inraw data without mass data preprocessing and prior experience. Tocatch up with the updated malware detection technologies, the at-tackers start to seek new methods that can be applied to raw softwarebinaries (e.g., [14], [15], and [3]). Different from the extracted fea-tures, the raw binaries data cannot be simply changed or it may lose\nimportant functionality. Besides, the raw binaries have variable input\nsizes, which can further make these attacks more tricky than previ-ous.\n[14] proposes the first adversarial attack work at the byte-level,\nwhich combines gradient ascent and greedy strategies. It appendsbytes one by one to the end of the file for preserving their function-ality. However, it performs white-box attacks that have limitations inreal-world scenarios, and the model needs to calculate the gradientfor each padding byte which consumes a lot of time and resources.[15] also puts forward an approach for discrete sequences by inject-ing local perturbations. However, it is in the white-box setting andnot efficient. 
[3] proposes both white-box and black-box methods.\nIn the black-box method, it randomly selects and appends benign\ndata blocks to the malware data, tests the results at each time. It con-sumes plenty of time to get the effective blocks before performingattacks. This approach is simple but tedious and inefficient, which isnot applicable for effective malware adversarial samples generation.In contrast, we will show that our end-to-end framework can attackin the black-box setting and generates adversarial samples in far lesstime.\n2.2 Generative Adversarial Networks (GANs)\nGenerative adversarial networks (GANs) [6] are widely used in com-puter vision tasks (e.g., [30], [16] and [1]) in recent years. Accordingto their high level of imitation ability, some work (e.g., [11] and [28])adopts GANs for adversarial attacks. The most representative attackmethods use the approach called distillation [10] to fit the discrim-inator with the outputs of the target model, train the generator forgenerating data that can mislead the discriminator. In this way, theadversarial samples can attack the target model indirectly, i.e., thetransferability of the adversarial samples [19]. Different from pre-\nvious work, we use the generator to generate adversarial payloads,\nwhich is used to craft an adversarial sample without damaging itsfunctionality. In our model, once the training process of GANs is fin-ished, the generator can independently generate malware adversarialsamples in a very short time with only the input malware binaries.\n3 BLACK-BOX ATTACKS TO MALW ARE\nDETECTION WITH GAN\nIn this section, we will briefly explain the formal definition of theinput binaries and the adversarial samples, then introduce the frame-work and strategies details of GAPGAN.\n3.1 Problem Definition\nBinary file of software consists of a sequence of bytes belonging tothe discrete space X={0,...,255}. Let b=(b\n1,...,b n)∈Xn\ndenote a binary, where nis the length of byte sequence, varying from\nfile to file. The binary file bhas labels y∈{ − 1,1}, wherey=1\nindicates that it is a benign software bben, otherwise it is a malware\nbmal.\nThe malware detector aims at learning a mapping function f:\nx→{ − 1,1}which satisfies f(bmal)=−1 andf(xben)=1 .O n\nthe contrary, the goal of the adversarial attacks is to find a modelgand generate an effective adversarial sample b\nadv =g(bmal)\nto make the malware detector classify it as benign software, i.e.,f(b\nadv)=1 . In the meanwhile, badv must preserve the original\nfunction of bmal.\n3.2 GAPGAN Framework\nFigure 1 shows the overview of the proposed framework GAPGAN.It contains two stages: training process and attack process. 
In the training process, we train the generator network G and the discriminator D concurrently, where G intends to generate adversarial payloads for input malware and concatenate them to craft adversarial samples, while D tries to distill the target black-box detector f and imitate the decision of f on both the original benign samples and the generated adversarial samples. In the attack process, we only need the trained generator to attack the black-box detector.

[Figure 1: Overview of GAPGAN. In the training process, malware and benign samples are padded and normalized; the generator appends adversarial payloads to the malware samples, the resulting samples are pooled with the benign ones, the black-box detector is queried for labels, and the discriminator is fitted to its responses. In the attack process, only the trained generator is used to attack the black-box detector.]

To protect the original functionality of malware when crafting its adversarial sample, there have been popular methods like using debug logs and compressing data before runtime, but they are time-consuming and laborious. Other attacks, performed by carefully choosing and manipulating bytes, are sophisticated, may require specific experience, and are not suitable for efficient adversarial attacks. Inspired by previous work ([3] and [14]), we choose to append bytes (payloads) at the end of the file to preserve its functionality, which is simple and does not require any expert experience.

Since the length n of a software file varies greatly, we first append zeros (represented as the blue part in Figure 1) to the end of the input binaries to match the input size t of the network, i.e., b' = (b_1, ..., b_n, 0, ..., 0) ∈ X^t, where t ≥ n. In this way, we can feed samples of different lengths into a network of fixed input size. Then, we map each byte of the discrete binaries to the continuous space [−1, 1] by normalization. We define the normalized input as x, where x = (x_1, ..., x_t) ∈ R^t.

After data preprocessing, the normalized malware x_mal is fed to G. Then G generates adversarial payloads a_adv (represented as the red part in Figure 1) based on the corresponding characteristics of x_mal:

a_{adv} = G(x_{mal})    (1)

We append a_adv to the end of x_mal to craft an adversarial malware sample x_adv:

x_{adv} = [x_{mal}, a_{adv}]    (2)

where [·,·] denotes the concatenation operation.

For training D, both x_adv and x_ben are integrated into the data pool. In each iteration, we sample a batch of mixed examples from the data pool and use them to query the black-box detector f. Next, we use the labels returned by f to fit D, making the decision boundary of D as close to that of f as possible.

During training, the generator G learns to create samples that can evade the discriminator D. In addition, as D becomes more similar to our target model f, the adversarial attack ability of G against f improves as well. Finally, the adversarial samples generated by G can also evade f effectively, because of the transferability of adversarial attacks.

Once the training process is finished, we can use the trained G to generate adversarial samples in a very short time from only the input malware. It is worth noticing that we abandon the padding zeros in the attack process to reduce the whole length of the payloads. In our practical experience, this makes the attack success rate decrease a little, but the loss is acceptable. In addition, we need to convert the adversarial samples back to discrete space as an executable file.
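A minimal NumPy sketch of this preprocessing, payload appending, and conversion back to bytes is given below. The fixed input size t is a model choice, the helper names are ours, and the exact linear map used for normalization to [−1, 1] is not specified in the paper, so the constant 127.5 here is an illustrative assumption.

```python
import numpy as np

def normalize_binary(raw_bytes: bytes, t: int) -> np.ndarray:
    """Zero-pad a byte sequence to length t, then map bytes to [-1, 1]."""
    b = np.frombuffer(raw_bytes, dtype=np.uint8).astype(np.float32)
    assert len(b) <= t, "binary longer than the network input size"
    padded = np.zeros(t, dtype=np.float32)
    padded[: len(b)] = b
    return padded / 127.5 - 1.0            # {0, ..., 255} -> [-1, 1]

def append_payload(x_mal: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Eq. (2): concatenate the adversarial payload to the normalized malware."""
    return np.concatenate([x_mal, payload])

def to_executable_bytes(x_adv: np.ndarray) -> bytes:
    """Map an adversarial sample in [-1, 1] back to discrete bytes for saving."""
    return np.clip(np.rint((x_adv + 1.0) * 127.5), 0, 255).astype(np.uint8).tobytes()
```

In the attack process, the padding zeros would be dropped before appending the payload, as described above.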
In order to make our framework adapt to malware binaries and payloads of different lengths, the generator network is designed to have variable input and output sizes. To be more specific, the generator first extracts features of the inputs with two convolution layers. Then, it resizes the high-level features with fully-connected layers. After two layers of deconvolution and one layer of 1×1 convolution, the adversarial payloads are generated. On the other hand, the discriminator performs binary classification with convolutional layers and fully-connected layers. Notice that once the size of the input data and the length of the payloads we decide to generate are determined, we can use them to easily tune the structure of GAPGAN, thanks to the fully-connected layers in both the generator and the discriminator.
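Since the paper specifies only the layer types (two convolutions, fully-connected resizing, two deconvolutions, and a 1×1 convolution), the PyTorch sketch below should be read as one plausible instantiation of the generator: every channel count, kernel size, stride, the hidden width, the Tanh output, and the divisibility assumptions are our own placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Payload generator following the layer layout described above.

    Assumes input_size and payload_len are multiples of 16 so the
    convolutional strides divide the lengths evenly.
    """
    def __init__(self, input_size: int, payload_len: int, hidden: int = 512):
        super().__init__()
        self.features = nn.Sequential(               # two 1-D convolution layers
            nn.Conv1d(1, 16, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=8, stride=4, padding=2), nn.ReLU(),
        )
        feat_len = input_size // 16                  # length after two stride-4 convs
        self.payload_len = payload_len
        self.resize = nn.Sequential(                 # fully-connected resizing
            nn.Linear(32 * feat_len, hidden), nn.ReLU(),
            nn.Linear(hidden, 32 * (payload_len // 16)), nn.ReLU(),
        )
        self.deconv = nn.Sequential(                 # two deconvolutions + 1x1 conv
            nn.ConvTranspose1d(32, 16, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(16, 8, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(8, 1, kernel_size=1),
            nn.Tanh(),                               # payload values in [-1, 1]
        )

    def forward(self, x_mal: torch.Tensor) -> torch.Tensor:
        # x_mal: (batch, input_size) normalized bytes in [-1, 1]
        h = self.features(x_mal.unsqueeze(1))
        h = self.resize(h.flatten(1))
        h = h.view(-1, 32, self.payload_len // 16)
        return self.deconv(h).squeeze(1)             # (batch, payload_len)
```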
3.3 Black-box Attacks Strategy
Generator. The generator G aims at learning the characteristics of x_mal, whose original label is y = −1, and generating a corresponding effective sample x_adv that can mislead D into predicting it as \"benign\", i.e., with label y = 1. In our practical experience, G often pays more attention to D's prediction on x_adv, which brings a serious problem: the effectiveness of the adversarial payloads a_adv cannot be improved well. Therefore, we make G consider both the global and the local (i.e., x_adv and a_adv) effectiveness of x_adv. The adversarial loss function of the generator G is:

L_G = -(1-\beta)\, \mathbb{E}_{x \sim p_{x_{adv}}}[D(x)] - \beta\, \mathbb{E}_{a \sim p_{a_{adv}}}[D(a)]    (3)

where β is a hyperparameter that balances the generator's attention between x_adv and a_adv. We tried to find the best value of β; however, a fixed β cannot always perform best, because the conditions of the networks differ every time the attack program runs. Therefore, the best β should be adjusted adaptively. Inspired by [18], we automatically tune β based on the outputs of D on x_adv and a_adv, which represent their respective attack effectiveness. The automatic tuning mechanism is:

\beta = \frac{\exp(\mathbb{E}_{x \sim p_{x_{adv}}}[D(x)])}{\exp(\mathbb{E}_{x \sim p_{x_{adv}}}[D(x)]) + \exp(\mathbb{E}_{a \sim p_{a_{adv}}}[D(a)])}    (4)

If x_adv is more effective than a_adv, the expectation of D's output on x_adv is larger, and the automatic tuning mechanism increases β to indirectly raise the learning rate of a_adv. We will show its efficacy in our experiments.

Discriminator. We use the discriminator D to dynamically distill the target black-box model f. More specifically, we sample a batch of mixed data from the data pool and obtain labels by querying f. The samples and their corresponding labels are used for fitting D based on the distance metric H. The distillation loss of D is:

L_D = \mathbb{E}_{x \sim x_{adv}}\, H(D(x), f(x)) + \mathbb{E}_{x \sim x_{ben}}\, H(D(x), f(x))    (5)

D tries to learn the decision strategies of f on x_ben and x_adv. In this way, D is treated as a substitute detector, which is used for transferring the attack effectiveness of the adversarial samples to the ultimate target black-box model f.
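A compact PyTorch sketch of Eqs. (3)-(5) follows. It assumes D outputs a scalar score per sample (higher meaning more benign-like) and picks binary cross-entropy for the distance metric H, which the paper leaves unspecified; detaching β so that it acts purely as a weighting coefficient is also our choice.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_adv_scores, d_payload_scores):
    """Eqs. (3)-(4): adversarial loss with automatically tuned beta.

    d_adv_scores:     D's scores on the full adversarial samples x_adv.
    d_payload_scores: D's scores on the payloads a_adv alone.
    """
    e_adv, e_pay = d_adv_scores.mean(), d_payload_scores.mean()
    # Eq. (4): softmax over the two expectations, detached so beta is a pure weight.
    beta = torch.softmax(torch.stack([e_adv, e_pay]), dim=0)[0].detach()
    # Eq. (3): push D's score up on both the sample and the payload.
    return -(1.0 - beta) * e_adv - beta * e_pay

def discriminator_loss(d_scores, f_labels):
    """Eq. (5) with H chosen as binary cross-entropy (our assumption).

    d_scores: D's raw scores on a mixed batch of x_adv and x_ben.
    f_labels: labels for the same batch obtained by querying the black-box f,
              mapped to {0, 1} (0 = malicious, 1 = benign).
    """
    return F.binary_cross_entropy_with_logits(d_scores, f_labels.float())
```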
Dynamic threshold strategy. In the attack process, we generate adversarial samples and save them locally. However, we find that subtle perturbations are ignored when we map the adversarial samples from the continuous adversarial space back to the discrete space of binaries, so a large part of the payloads' attack effectiveness is lost because of their small values. To solve this problem, we propose a dynamic threshold strategy to limit the minimum value of the payloads:

e = \begin{cases} e, & \text{if } |e| > \epsilon \cdot \frac{i}{T_{max}} \\ 0, & \text{otherwise} \end{cases}    (6)

where e represents each byte in the payloads, i is the current training iteration, T_max is the maximum number of training iterations, and ε is the maximum threshold value. We directly set bytes whose values fall below the threshold to zero. However, with a static threshold, the learning process of G gets lost, leading to terrible adversarial attack results (see the experimental results in Section 5): most of the bytes generated by a freshly initialized G are very small and would all be set to zero at the beginning. Hence we use ε · i / T_max to dynamically increase the threshold, so that G can gradually adjust its attack strategy to the constraint. More concretely, if a byte has a small value but some attack effectiveness, it is first set to zero by the threshold; G then continues to add perturbations to that byte or an adjacent byte to improve the adversarial effectiveness in this area. Finally, that byte or an adjacent byte is modified to compensate for the adversarial attack loss of the byte that was zeroed. It is worth noting that the whole adjustment process is performed automatically by the gradient descent algorithm once ε is set.

On the basis of the above work, the overall procedure for black-box attacks on malware detection is shown in Algorithm 1.

Algorithm 1: Black-box Attacks to Malware Detection
Input: training set S = {(x_0, y_0), ..., (x_{k-1}, y_{k-1})}; generator G(x; θ_G0); discriminator D(x; θ_D0); target black-box model f; maximum training iteration T_max; maximum threshold ε; weight β
Output: a well-trained generator G(x; θ_G)
for i = 0 → T_max − 1 do
    Sample m examples from S to get the training set for D, S_d = {(x_d0, y_d0), ..., (x_d(m−1), y_d(m−1))}
    for each x_di in S_d do: query f and get f(x_di)
    Use S_d to update θ_D with ∇_{θ_D} (E_{x∼x_mal} H(D(x), f(x)) + E_{x∼x_ben} H(D(x), f(x)))
    Sample m examples (with y_adv,i = −1) from S to get the training set for G, S_adv = {(x_adv0, y_adv0), ..., (x_adv(m−1), y_adv(m−1))}
    for each (x_adv,i, y_adv,i) in S_adv do:
        Generate adversarial payloads a_adv,i = G(x_adv,i)
        for each e in a_adv,i do: if |e| < ε · i / T_max then e ← 0
        x_adv,i ← [x_adv,i, a_adv,i]
    Calculate β = exp(E_{x∼p_{x_adv}}(D(x))) / (exp(E_{x∼p_{x_adv}}(D(x))) + exp(E_{a∼p_{a_adv}}(D(a))))
    Use S_adv to update θ_G with ∇_{θ_G} (−(1−β) E_{x∼p_{x_adv}}(D(x)) − β E_{a∼p_{a_adv}}(D(a)))
end for
return θ_G
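In code, Eq. (6) is a single masking step. The sketch below (with our own function name) applies the growing threshold to a batch of generated payloads; eps = 0.06 matches the value used in the ablation study (Table 7).

```python
import torch

def dynamic_threshold(payload: torch.Tensor, step: int, t_max: int,
                      eps: float = 0.06) -> torch.Tensor:
    """Eq. (6): zero out payload bytes whose magnitude falls below the
    current threshold eps * step / t_max, which grows linearly from 0 to
    eps over training so a freshly initialized generator is not wiped out.
    """
    threshold = eps * step / t_max
    return payload * (payload.abs() > threshold)
```

Because the comparison produces a 0/1 mask, gradients still flow through the surviving bytes, which is what lets the generator compensate for zeroed bytes in later iterations.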
In Section 5, we will show the details of our attack experiments.

4 EXPERIMENTAL SETUP
This section introduces the preparation of our attack experiments, including the datasets, the evaluation metric, and the target models that we choose and train.

4.1 Datasets and Evaluation Metrics
Malware and benign software data are collected from different sources for our adversarial attack experiments, as shown in Table 1. The malware samples are downloaded from VirusTotal (http://www.virustotal.com) and the Microsoft Malware Classification Challenge (Kaggle 2015) [22]. The benign software samples are downloaded via Chocolatey (https://chocolatey.org/), a package manager for Windows. The data are split into four datasets so that the binaries length distributions of malware and benign software are close within each dataset. Datasets 1, 2, and 3, with different maximum and mean lengths, are used for exploring the impact of binaries length on attacks. Datasets 2 and 4, with different sources but close length distributions, are used for evaluating the generalization of attack algorithms.

Table 1: Malware and benign software data are collected from different sources and split into four datasets according to their length distributions.
| Dataset | Class | Number | Max | Mean | Source |
| 1 | Malware | 3,436 | 93,986 | 51,715 | VirusTotal |
| 1 | Benign | 3,436 | 98,304 | 41,651 | Chocolatey |
| 2 | Malware | 5,000 | 195,584 | 80,707 | VirusTotal |
| 2 | Benign | 5,000 | 196,608 | 98,072 | Chocolatey |
| 3 | Malware | 10,000 | 394,128 | 126,276 | VirusTotal |
| 3 | Benign | 10,000 | 393,640 | 128,808 | Chocolatey |
| 4 | Malware | 3,000 | 196,189 | 117,812 | Kaggle 2015 |
| 4 | Benign | 3,000 | 195,320 | 92,526 | Chocolatey |

We randomly split each dataset into two parts: one (70%) for training the black-box model, the other (30%) for adversarial attacks. In this way, the data used for training the black-box model and the data used for adversarial attacks are disjoint.

To evaluate the performance of the adversarial attack methods, we use the attack success rate (ASR) metric:

ASR = \frac{\sum_{i=1}^{n} I(f(x_{mal,i}) = -1 \wedge f(x_{adv,i}) = 1)}{\sum_{j=1}^{n} I(f(x_{mal,j}) = -1)}    (7)

where I is the indicator function, equal to 1 if the expression is true and 0 otherwise. ASR represents the proportion of malware samples that are detected by the black-box model but evade it successfully after being crafted into adversarial samples.

4.2 Target Black-box Models
We choose the state-of-the-art malware detector MalConv [21] as our primary target black-box model. MalConv first embeds each byte of the input binaries into an 8-dimensional vector, then uses two convolution layers with different activation functions for classification. We train a MalConv detector with input size 2,000,000 for each dataset. Table 2 shows the test accuracy of each MalConv detector after training; the trained MalConv detectors perform similarly to those in [21].

In order to test the generalization of the adversarial attack methods, we also use four deep learning models with different structures as target models. To reduce the large dimensionality of the input binaries, we add CNN structures to each deep learning model. Each byte of the input binaries is embedded into an 8-dimensional vector in the same way as in MalConv. We train these four models on datasets 2 and 4; the detection accuracies, shown in Table 2, indicate that these four detectors also reach good classification accuracy.

Table 2: The detection accuracies of MalConv and the other deep learning models used as target black-box models. A: CNN-based model; B: CNN-LSTM-based model; C: CNN-GRU-based model; D: Parallel-CNN-based model.
| Dataset | MalConv | A | B | C | D |
| 1 | 96.40% | – | – | – | – |
| 2 | 96.42% | 94.94% | 95.99% | 95.30% | 94.70% |
| 3 | 97.22% | – | – | – | – |
| 4 | 95.55% | 95.02% | 95.27% | 95.24% | 95.30% |

5 EXPERIMENTAL RESULTS
In this section, we show the effectiveness of GAPGAN in adversarial attack experiments. We also compare it with other state-of-the-art attack methods under different defenses.
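Eq. (7) amounts to a few lines of counting; the sketch below assumes the black-box detector is exposed as a function returning −1 (malicious) or 1 (benign).

```python
def attack_success_rate(detector, malware_samples, adversarial_samples):
    """Eq. (7): among malware the detector flags as malicious (-1), the
    fraction whose adversarial counterpart it classifies as benign (+1)."""
    assert len(malware_samples) == len(adversarial_samples)
    detected = sum(detector(m) == -1 for m in malware_samples)
    evaded = sum(detector(m) == -1 and detector(a) == 1
                 for m, a in zip(malware_samples, adversarial_samples))
    return evaded / detected if detected else 0.0
```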
5.1 Black-box Attacks with GAPGAN
We first apply GAPGAN to attack the four trained MalConv detectors (presented in the previous section) with different lengths of adversarial payloads. The payloads rate denotes the ratio of the payload length to the length of the binaries used for detection. The attack performance on the different datasets, shown in Table 3, demonstrates that GAPGAN can perform effective black-box attacks against MalConv models. As can be seen from the results on datasets 2 and 4, the adversarial samples achieve a high attack success rate on data from different sources. Besides, adversarial binaries whose original length is shorter may have better attack effectiveness, because of the increase of the payloads rate. It is worth mentioning that the ASR of adversarial samples generated from dataset 1 reaches 100% with only a small proportion of payloads, i.e., 2.5% of the total length of the data used for detection.

Table 3: Attack success rate (ASR) of the adversarial samples generated by GAPGAN against MalConv models on different datasets. The payloads rate represents the ratio of the payload length to that of the binaries used for detection.
| Payloads Rate | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 |
| 1% | 64.66% | 6.28% | 2.15% | 4.13% |
| 2.5% | 100.00% | 36.10% | 18.14% | 30.99% |
| 5% | 100.00% | 77.78% | 43.27% | 53.49% |
| 10% | 100.00% | 98.21% | 72.89% | 76.88% |
| 20% | 100.00% | 100.00% | 88.95% | 87.41% |

We find that once the ASR has reached a high value, larger payloads improve it only slightly (e.g., the ASR is 98.21% when appending payloads at a rate of 10%, but improves by only 1.79% when appending payloads of twice that length), while the risk of being detected and the cost increase with the payload length. Meanwhile, to prove the effectiveness of the adversarial payloads generated by GAPGAN, we compare them with random payloads, as shown in Figure 2. We see that adversarial samples with payloads generated by GAPGAN have far better attack effectiveness than those with randomly generated payloads. It can also be seen that the ASR of random payloads is proportional to the payloads rate, while the ASR of adversarial payloads increases rapidly with the payloads rate and its growth slows down once it reaches a high value. We consider that there exists an optimum payloads rate for each dataset, i.e., the growth rate of the ASR declines quickly when larger payloads are appended.

[Figure 2: Comparison of the ASR of adversarial samples generated by GAPGAN and generated randomly against MalConv detectors on the four datasets, with payloads rates from 0 to 20%. Panels (a)-(d): attacks on datasets 1-4; x-axis: payloads rate (%); y-axis: attack success rate; curves: Random vs. GAPGAN.]

5.2 Comparison with State-of-the-art Attack Methods
We compare GAPGAN with other state-of-the-art adversarial attack methods for the malware detection task, i.e., the Opt. method based on gradient optimization [14], the AdvSeq method based on sequences of API calls [23], and the MalGAN method based on API calls and GANs [11]. The results are shown in Table 4. It can be seen that only the Opt. method performs attacks in the white-box setting. From the perspective of attack efficiency, once the attack models are trained, GAPGAN and MalGAN generate adversarial samples far faster than the other methods (AdvSeq is also based on sophisticated optimization processes, which are presumably not efficient). However, only GAPGAN performs efficient black-box attacks at the byte-level, which is more threatening in real-world scenarios.

Table 4: Comparison of the state-of-the-art adversarial attack methods for the malware detection task. Run time is the time needed to generate one adversarial sample in the attack process, evaluated by generating 3,000 adversarial samples with each method.
| | Opt. [14] | AdvSeq [23] | MalGAN [11] | GAPGAN |
| Black-box | | ✓ | ✓ | ✓ |
| Run time | >2h | – | 0.02s | 0.02s |
| Attack level | Bytes | API calls | API calls | Bytes |

In order to further explore the effectiveness of attack approaches against binaries-based detection, we compare GAPGAN with the Opt. method, i.e., the other byte-level attack method, and include adversarial samples with random payloads for comparison. As can be seen from Table 5, both attack methods achieve good attack performance against the different detectors. However, GAPGAN performs efficient black-box attacks, which is considered crucial for adversarial attacks in application.

Table 5: Comparison of the attack performance of Random, Opt. and GAPGAN against different detectors on datasets 2 and 4. Random: random payloads. The payloads rates are 10% in these experiments.
| Detector | Dataset 2 Random | Dataset 2 Opt. | Dataset 2 GAPGAN | Dataset 4 Random | Dataset 4 Opt. | Dataset 4 GAPGAN |
| MalConv | 60.21% | 99.87% | 98.21% | 57.52% | 68.34% | 76.88% |
| A | 57.84% | 90.41% | 76.04% | 17.10% | 85.09% | 51.31% |
| B | 44.04% | 93.32% | 99.35% | 46.50% | 77.24% | 68.67% |
| C | 64.25% | 92.74% | 84.40% | 55.72% | 78.17% | 64.96% |
| D | 70.47% | 97.23% | 99.93% | 9.03% | 74.49% | 87.80% |

5.3 Attack Performance Under Different Defense Methods
Many defense methods have been proposed to defend against various attacks. The most popular way to make a model robust to adversarial samples is adversarial training [7], which introduces adversarial perturbations into the training process to make the deep learning models tune their decision strategies. Another efficient defense method [26] randomly nullifies the input data to eliminate the attack effectiveness of adversarial samples. We compare the attack effectiveness of random payloads with that of adversarial samples generated by GAPGAN and Opt. under these defenses.

To simulate real-world scenarios, we assume that the attacker does not know any information about the defenses. In the experiments with the RND defense method, we randomly nullify 10% of the input data and test the attack success rate of the adversarial samples.
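For reference, the RND setting above reduces to a small masking helper; the function below is our own sketch of nullifying (zeroing) a random 10% of the input bytes before detection.

```python
import numpy as np

def rnd_defense(x: np.ndarray, nullify_rate: float = 0.10, seed=None) -> np.ndarray:
    """Random nullification defense [26]: set a random fraction of the
    input bytes to zero to disrupt adversarial perturbations."""
    rng = np.random.default_rng(seed)
    out = x.copy()
    idx = rng.choice(out.size, size=int(out.size * nullify_rate), replace=False)
    out.flat[idx] = 0
    return out
```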
Since the structures of the detectors contain an embedding layer, gradients cannot be transferred in the adversarial training defense method. Therefore, we propose to use substitute models, which distill the detectors, to generate training data with adversarial perturbations. The new training data are used to improve the robustness of the detectors, and the adversarial samples generated against the previous detectors are then evaluated on the retrained detectors. Table 6 shows the results of the attacks under defenses. In most cases, the attack performance of GAPGAN outperforms the Opt. method, especially under the defense of adversarial training. A possible explanation is that the Opt. method relies too heavily on the structures and gradient information of the target models. In addition, with the Opt. method, each byte of the payload is generated from the gradients of the current adversarial sample, so the attack effectiveness is greatly damaged when the connections between bytes are cut off, i.e., in the random nullification process of the RND defense. By comparison, GAPGAN considers the attack ability of the whole adversarial sample, making it more effective under defenses.

Table 6: Comparison of the attack performance of Random, Opt. and GAPGAN against different detectors under defenses. RND: random nullification data defense method; Adv.: adversarial training defense method. The payloads rates are 10% in these experiments.
| Defense | Detector | Dataset 2 Random | Dataset 2 Opt. | Dataset 2 GAPGAN | Dataset 4 Random | Dataset 4 Opt. | Dataset 4 GAPGAN |
| RND | MalConv | 24.64% | 51.23% | 63.69% | 49.59% | 41.25% | 75.73% |
| RND | A | 20.67% | 57.84% | 45.00% | 0.76% | 37.14% | 23.64% |
| RND | B | 0.00% | 62.29% | 87.47% | 5.79% | 37.82% | 41.07% |
| RND | C | 7.65% | 39.47% | 34.57% | 22.91% | 29.74% | 39.22% |
| RND | D | 9.52% | 43.58% | 92.35% | 3.06% | 54.41% | 71.09% |
| Adv. | MalConv | 23.87% | 29.78% | 57.04% | 13.10% | 22.17% | 30.46% |
| Adv. | A | 0.00% | 15.14% | 23.72% | 0.00% | 7.72% | 9.49% |
| Adv. | B | 0.00% | 27.17% | 39.17% | 0.00% | 9.38% | 15.82% |
| Adv. | C | 1.04% | 19.77% | 24.18% | 4.99% | 13.47% | 18.13% |
| Adv. | D | 0.00% | 31.65% | 41.73% | 0.00% | 17.97% | 27.60% |

5.4 Effectiveness of Dynamic Threshold and Automatic Weight Tuning
As explained in Section 3, we put forward a dynamic threshold strategy to limit the minimum value of the payloads and an automatic weight tuning mechanism to balance the attention of the generator between the payloads and the adversarial samples. We perform ablation studies to verify the effectiveness of these strategies under different parameter settings, i.e., without, static, and dynamic, as shown in Table 7. The results show that our dynamic threshold and automatic weight tuning strategies significantly improve the effectiveness of the adversarial samples. However, with a static threshold, most of the bytes generated by the generator are directly set to zero at the beginning of the experiments, which makes the generator lose the correct direction of the attacks and leads to poor results.

Table 7: Attack success rate of adversarial samples generated by GAPGAN with different ε and β settings. ε is set to 0.06, and β is set to 0.5 in the static case. W: without using the method; S: using a static parameter; D: using a dynamic parameter.
| β | ε | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 |
| W | W | 70.94% | 70.97% | 22.94% | 58.47% |
| W | D | 100.00% | 94.09% | 61.38% | 72.12% |
| D | W | 83.12% | 87.95% | 46.44% | 74.56% |
| S | D | 100.00% | 96.06% | 69.38% | 75.33% |
| D | S | 8.74% | 6.49% | 1.38% | 7.98% |
| D | D | 100.00% | 98.21% | 72.89% | 76.88% |

6 CONCLUSION
In this paper, we propose an adversarial attack framework, GAPGAN, that generates adversarial samples against binaries-based malware detection via GANs. In our model, we append adversarial payloads produced by the generator to the original malware binaries to craft adversarial samples without damaging their original functionality. The experiments show that GAPGAN can effectively attack the state-of-the-art detector MalConv as well as other deep learning models with different structures. The results also show that our model outperforms other state-of-the-art attack approaches under current defenses in efficiency and effectiveness.

GAPGAN is the first practical end-to-end black-box attack framework against malware detection, posing a threat to the next generation of popular detection technology, i.e., raw-binaries-based malware detection. While our work focuses on malware binaries, it can easily be extended to other fields, such as adversarial text or graph generation. This makes GAPGAN a promising attack framework for improving the robustness of defense methods for malware detection and the other tasks it can be applied to.

ACKNOWLEDGEMENTS
This work was supported in part by the Application Program of New Model of Intelligent Manufacturing under Grant No. 2016ZNZZ01, and in part by the Major Scientific Research Project of Zhejiang Lab under Grant No. 2018DG0ZX01.

REFERENCES
[1] Kenan E. Ak, Joo Hwee Lim, Jo Yew Tham, and Ashraf A. Kassim, 'Attribute manipulation generative adversarial networks for fashion images', ICCV, (2019).
[2] Hyrum S. Anderson, Anant Kharkar, Bobby Filar, and Phil Roth, 'Evading machine learning malware detection', Black Hat, (2017).
[3] Bingcai Chen, Zhongru Ren, Chao Yu, Iftikhar Hussain, and Jintao Liu, 'Adversarial examples for cnn-based malware detectors', IEEE Access, 7, 54360–54371, (2019).
[4] Scott E. Coull and Christopher Gardner, 'Activation analysis of a byte-based deep neural network for malware classification', in IEEE Security and Privacy Workshops, pp. 21–27, (2019).
[5] Chun-I Fan, Han-Wei Hsiao, Chun-Han Chou, and Yi-Fan Tseng, 'Malware detection systems based on API log data mining', in IEEE Computer Society Signature Conference on Computers, Software and Applications, pp. 255–260, (2015).
[6] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio, 'Generative adversarial nets', in NIPS, pp. 2672–2680, (2014).
[7] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy, 'Explaining and harnessing adversarial examples', in International Conference on Learning Representations, (2015).
[8] Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick D. McDaniel, 'Adversarial perturbations against deep neural networks for malware classification', arXiv:1606.04435, (2016).
[9] Xiang He, Sibei Yang, Guanbin Li, Haofeng Li, Huiyou Chang, and Yizhou Yu, 'Non-local context encoder: Robust biomedical image segmentation against adversarial attacks', in AAAI, pp. 8417–8424, (2019).
[10] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean, 'Distilling the knowledge in a neural network', arXiv preprint arXiv:1503.02531, (2015).
[11] Weiwei Hu and Ying Tan, 'Generating adversarial malware examples for black-box attacks based on GAN', arXiv:1702.05983, (2017).
[12] Youngjoon Ki, Eunjin Kim, and Huy Kang Kim, 'A novel approach to detect malware based on API call sequence analysis', International Journal of Distributed Sensor Networks, 11, 659101:1–659101:9, (2015).
[13] Jin-Young Kim, Seok-Jun Bu, and Sung-Bae Cho, 'Zero-day malware detection using transferred generative adversarial networks based on deep autoencoders', Inf. Sci., 460-461, 83–102, (2018).
[14] Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, and Fabio Roli, 'Adversarial malware binaries: Evading deep learning for malware detection in executables', in EUSIPCO, pp. 533–537, (2018).
[15] Felix Kreuk, Assi Barak, Shir Aviv-Reuven, Moran Baruch, Benny Pinkas, and Joseph Keshet, 'Deceiving end-to-end deep learning malware detectors using adversarial examples', arXiv preprint arXiv:1802.04528, (2018).
[16] Dongwook Lee, Junyoung Kim, Won-Jin Moon, and Jong Chul Ye, 'CollaGAN: Collaborative GAN for missing image data imputation', in CVPR, pp. 2487–2496, (2019).
[17] Juncheng Li, Frank R. Schmidt, and J. Zico Kolter, 'Adversarial camera stickers: A physical camera-based attack on deep learning systems', in The International Conference on Machine Learning, pp. 3896–3904, (2019).
[18] Shikun Liu, Edward Johns, and Andrew J. Davison, 'End-to-end multi-task learning with attention', in CVPR, pp. 1871–1880, (2019).
[19] Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song, 'Delving into transferable adversarial examples and black-box attacks', in International Conference on Learning Representations, (2017).
[20] Fabio Martinelli, Francesco Mercaldo, Andrea Saracino, and Corrado Aaron Visaggio, 'I find your behavior disturbing: Static and dynamic app behavioral analysis for detection of android malware', in International Conference on Privacy, Security and Trust, pp. 129–136, (2016).
[21] Edward Raff, Jon Barker, Jared Sylvester, Robert Brandon, Bryan Catanzaro, and Charles K. Nicholas, 'Malware detection by eating a whole EXE', in AAAI Workshops, pp. 268–276, (2018).
[22] Royi Ronen, Marian Radu, Corina Feuerstein, Elad Yom-Tov, and Mansour Ahmadi, 'Microsoft malware classification challenge', arXiv preprint arXiv:1802.10135, (2018).
[23] Ishai Rosenberg, Asaf Shabtai, Lior Rokach, and Yuval Elovici, 'Generic black-box end-to-end attack against state of the art API call based malware classifiers', in International Symposium on Research in Attacks, Intrusions and Defenses, pp. 490–510, (2018).
[24] Arindam Sharma, Pasquale Malacaria, and M. H. R. Khouzani, 'Malware detection using 1-dimensional convolutional neural networks', in IEEE European Symposium on Security and Privacy Workshops, pp. 247–256, (2019).
[25] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus, 'Intriguing properties of neural networks', arXiv preprint arXiv:1312.6199, (2013).
[26] Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, and C. Lee Giles, 'Adversary resistant deep neural networks with an application to malware detection', in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 1145–1153, (2017).
[27] Xingxing Wei, Siyuan Liang, Ning Chen, and Xiaochun Cao, 'Transferable adversarial attacks for image and video object detection', in IJCAI, pp. 954–960, (2019).
[28] Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song, 'Generating adversarial examples with adversarial networks', in IJCAI, pp. 3905–3911, (2018).
[29] Weilin Xu, Yanjun Qi, and David Evans, 'Automatically evading classifiers: A case study on PDF malware classifiers', in The Network and Distributed System Security Symposium, (2016).
[30] Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang, 'DM-GAN: Dynamic memory generative adversarial networks for text-to-image synthesis', in CVPR, pp. 5802–5810, (2019).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "-bNwqYcmpXx",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=-bNwqYcmpXx",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "dcqvb8ohZ_",
"year": null,
"venue": "CIKM 2018",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=dcqvb8ohZ_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Deep Graph Embedding for Ranking Optimization in E-commerce",
"authors": [
"Chen Chu",
"Zhao Li",
"Beibei Xin",
"Fengchao Peng",
"Chuanren Liu",
"Remo Rohs",
"Qiong Luo",
"Jingren Zhou"
],
"abstract": "Matching buyers with most suitable sellers providing relevant items (e.g., products) is essential for e-commerce platforms to guarantee customer experience. This matching process is usually achieved through modeling inter-group (buyer-seller) proximity by e-commerce ranking systems. However, current ranking systems often match buyers with sellers of various qualities, and the mismatch is detrimental to not only buyers' level of satisfaction but also the platforms' return on investment (ROI). In this paper, we address this problem by incorporating intra-group structural information (e.g., buyer-buyer proximity implied by buyer attributes) into the ranking systems. Specifically, we propose De ep Gr aph E mbe dding (DEGREE), a deep learning based method, to exploit both inter-group and intra-group proximities jointly for structural learning. With a sparse filtering technique, DEGREE can significantly improve the matching performance with computation resources less than that of alternative deep learning based methods. Experimental results demonstrate that DEGREE outperforms state-of-the-art graph embedding methods on real-world e-commence datasets. In particular, our solution boosts the average unit price in purchases during an online A/B test by up to 11.93%, leading to better operational efficiency and shopping experience.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "G4emSnbnqCl",
"year": null,
"venue": "Intelligent Tutoring Systems 2004",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=G4emSnbnqCl",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Workshop on Applications of Semantic Web Technologies for E-learning p",
"authors": [
"Lora Aroyo",
"Darina Dicheva",
"Peter Brusilovsky",
"Paloma Díaz",
"Vania Dimitrova",
"Erik Duval",
"Jim E. Greer",
"Tsukasa Hirashima",
"Heinz Ulrich Hoppe",
"Geert-Jan Houben",
"Mitsuru Ikeda",
"Judy Kay",
"Kinshuk",
"Erica Melis",
"Tanja Mitrovic",
"Ambjörn Naeve",
"Ossi Nykänen",
"Gilbert Paquette",
"Simos Retalis",
"Demetrios G. Sampson",
"Katherine M. Sinitsa",
"Amy Soller",
"Steffen Staab",
"Julita Vassileva",
"Felisa Verdejo",
"Gerd Wagner"
],
"abstract": "SW-EL’04 will focus on issues related to using concepts, ontologies and semantic web technologies to build e-learning applications. It follows the successful workshop on Concepts and Ontologies in Web-based Educational Systems, held in conjunctions with ICCE’2002 in Auckland, New Zealand. Due to the great interest, the 2004 edition of the workshop will be organized in three sessions held at three different conferences. The aim is to discuss the current problems in e-learning from different perspectives, including those of web-based intelligent tutoring systems and adaptive hypermedia courseware, and the implications of applying semantic web standards and technologies for solving them.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "vj14BMNbwXq",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=vj14BMNbwXq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer E1on (Part 2/2)",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "OfMdN8sK5v",
"year": null,
"venue": "EACL 2021",
"pdf_link": "https://aclanthology.org/2021.eacl-main.163.pdf",
"forum_link": "https://openreview.net/forum?id=OfMdN8sK5v",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Applying the Transformer to Character-level Transduction",
"authors": [
"Shijie Wu",
"Ryan Cotterell",
"Mans Hulden"
],
"abstract": "The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks. Yet for character-level transduction tasks, e.g. morphological inflection generation and historical text normalization, there are few works that outperform recurrent models using the transformer. In an empirical study, we uncover that, in contrast to recurrent sequence-to-sequence models, the batch size plays a crucial role in the performance of the transformer on character-level tasks, and we show that with a large enough batch size, the transformer does indeed outperform recurrent models. We also introduce a simple technique to handle feature-guided character-level transduction that further improves performance. With these insights, we achieve state-of-the-art performance on morphological inflection and historical text normalization. We also show that the transformer outperforms a strong baseline on two other character-level transduction tasks: grapheme-to-phoneme conversion and transliteration.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics , pages 1901–1907\nApril 19 - 23, 2021. ©2021 Association for Computational Linguistics1901Applying the Transformer to Character-level Transduction\nShijie WuZRyan CotterellQ;6Mans HuldenX\nZJohns Hopkins University6University of Cambridge\nQETH Z ¨urichXUniversity of Colorado Boulder\[email protected] [email protected] [email protected]\nAbstract\nThe transformer (Vaswani et al., 2017) has\nbeen shown to outperform recurrent neural\nnetwork-based sequence-to-sequence models\nin various word-level NLP tasks. Yet for\ncharacter-level transduction tasks, e.g. mor-\nphological inflection generation and histori-\ncal text normalization, there are few works\nthat outperform recurrent models using the\ntransformer. In an empirical study, we un-\ncover that, in contrast to recurrent sequence-\nto-sequence models, the batch size plays a\ncrucial role in the performance of the trans-\nformer on character-level tasks, and we show\nthat with a large enough batch size, the trans-\nformer does indeed outperform recurrent mod-\nels. We also introduce a simple technique\nto handle feature-guided character-level trans-\nduction that further improves performance.\nWith these insights, we achieve state-of-the-art\nperformance on morphological inflection and\nhistorical text normalization. We also show\nthat the transformer outperforms a strong base-\nline on two other character-level transduction\ntasks: grapheme-to-phoneme conversion and\ntransliteration.\n1 Introduction\nThe transformer (Vaswani et al., 2017) has become\na popular architecture for sequence-to-sequence\ntransduction in NLP. It has achieved state-of-the-\nart performance on a range of common word-level\ntransduction tasks: neural machine translation (Bar-\nrault et al., 2019), question answering (Devlin et al.,\n2019) and abstractive summarization (Dong et al.,\n2019). In addition, the transformer forms the back-\nbone of the widely-used BERT (Devlin et al., 2019).\nYet for character-level transduction tasks like mor-\nphological inflection, the dominant model has re-\nmained a recurrent neural network-based sequence-\nCode will be available at https://github.com/\nshijie-wu/neural-transducer .\n16 32 64 128 256 512\nBatch Size7678808284868890ACC\nWu and Cotterell (2019)\nWu and Cotterell (2019) (Our Eval)\nWu and Cotterell (2019) + LR Warmup\nVanilla Transformer\nFeature Invariant TransformerFigure 1: Development set accuracy for 5 languages\non morphological inflection with different batch sizes.\nWe evince our two primary contributions: (1) we set the\nnew state of the art morphological inflection using the\ntransformer and (2) we demonstrate the transformer’s\ndependence on the batch size .\nto-sequence model with attention (Cotterell et al.,\n2018). 
This is not for lack of effort—but rather, it is the case that the transformer has consistently underperformed in experiments on average (Tang et al., 2018b).[1] As anecdotal evidence of this, we note that in the 2019 SIGMORPHON shared task on cross-lingual transfer for morphological inflection, no participating system was based on the transformer (McCarthy et al., 2019).
Character-level transduction models are often trained with less data than their word-level counterparts: In contrast to machine translation, where millions of training samples are available, the 2018 SIGMORPHON shared task (Cotterell et al., 2018) high-resource setting only provides ~10k training examples per language. It is also not obvious that non-recurrent architectures such as the transformer should provide an advantage at many character-level tasks: For instance, Gehring et al. (2017) and Vaswani et al. (2017) suggest that transformers (and convolutional models in general) should be better at remembering long-range dependencies. In the case of morphology, none of these considerations seem relevant: inflecting a word (a) requires little capacity to model long-distance dependencies and is largely a monotonic transduction; (b) involves no semantic disambiguation, the tokens in question being letters; and (c) is not a task for which parallelization during training appears to help, since training time has never been an issue in morphology tasks.[2]
[Footnote 1: This claim is also based on the authors' personal communication with other researchers in morphology in the corridors of conferences and through email.]
[Footnote 2: Many successful CoNLL–SIGMORPHON shared task participants report training their models on laptop CPUs.]
In this work, we provide state-of-the-art numbers for morphological inflection and historical text normalization, a novel result in the literature. We also show the transformer outperforms a strong recurrent baseline on two other character-level tasks: grapheme-to-phoneme (g2p) conversion and transliteration. We find that a single hyperparameter, batch size, is largely responsible for the previous poor results. Despite having fewer parameters, the transformer outperforms the recurrent sequence-to-sequence baselines on all four tasks. We conduct a short error analysis on the task of morphological inflection to round out the paper.

2 The Transformer for Characters
[Figure 2: Handling of feature-guided character-level transduction with special position and type embeddings in the encoder. F denotes features while C denotes characters. We use morphological inflection as an example, inflecting smear into its past participle form, smeared.]
The Transformer. The transformer, originally described by Vaswani et al. (2017), is a self-attention-based encoder-decoder model. The encoder has N layers, consisting of a multi-head self-attention layer and a two-layer feed-forward layer with ReLU activation, both equipped with a skip connection. The decoder has a similar structure as the encoder except that, in each decoder layer between the self-attention layer and feed-forward layer, a multi-head attention layer attends to the output of the encoder. Layer normalization (Ba et al., 2016) is applied to the output of each skip connection.
Sinusoidal positional embeddings are used to incorporate positional information without the need for recurrence or convolution. Here, we describe two modifications we make to the transformer for character-level tasks.
A Smaller Transformer. As the dataset sizes in character-level transduction tasks are significantly smaller than in machine translation, we employ a smaller transformer with N = 4 encoder-decoder layers. We use 4 self-attention heads. The embedding size is d_model = 256 and the hidden size of the feed-forward layer is d_FF = 1024. In preliminary experiments, we found that using layer normalization before the self-attention and feed-forward layers performed slightly better than the original model. It is also the default setting of a popular implementation of the transformer (Vaswani et al., 2018). The transformer alone has around 7.37M parameters, excluding character embeddings and the linear mapping before the softmax layer. We decode the model left to right in a greedy fashion.
Feature Invariance. Some character-level transduction is guided by features. For example, in the case of morphological reinflection, the task requires a set of morphological attributes that control what form a citation form is inflected into (see Fig. 2 for an example). However, the order of the features is irrelevant. In a recurrent neural network, features are input in some predefined order as special characters and pre- or postpended to the input character sequence representing the citation form. The same is true for a vanilla transformer model, as shown on the left-hand side of Fig. 2. This leads to different relative distances between a character and a set of features.[3] To avoid such an inconsistency, we propose a simple remedy: we set the positional encoding of features to 0 and only start counting the positions for characters. Additionally, we add a special token to indicate whether a symbol is a word character or a feature. The right-hand side of Fig. 2 evinces how we have the same relative distance between characters and features.
[Footnote 3: While the features could be encoded with a binary vector followed by an MLP, this introduces a representation bottleneck for encoding features.]
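[Editor's note: the following is a minimal PyTorch-style sketch of these two modifications, written for illustration rather than taken from the authors' released code. The class name is our own, and the position table here is learned, whereas the paper uses sinusoidal embeddings.]

import torch
import torch.nn as nn

class FeatureInvariantEmbedding(nn.Module):
    # Token + type + position embeddings; all features share position 0.
    def __init__(self, vocab_size, d_model=256, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.typ = nn.Embedding(2, d_model)        # 0 = feature, 1 = character
        self.pos = nn.Embedding(max_len, d_model)  # learned here; the paper uses sinusoidal

    def forward(self, tokens, is_char):
        # tokens, is_char: (batch, seq) long tensors; is_char is a 0/1 mask.
        # Features get position 0; positions 1, 2, ... are counted over
        # characters only, so every character keeps the same relative
        # distance to the feature block regardless of the feature order.
        positions = torch.cumsum(is_char, dim=1) * is_char
        return self.tok(tokens) + self.typ(is_char) + self.pos(positions)

# The smaller transformer described above, using PyTorch's stock module
# (norm_first=True gives the pre-layer-norm variant the authors found better):
model = nn.Transformer(
    d_model=256, nhead=4,
    num_encoder_layers=4, num_decoder_layers=4,
    dim_feedforward=1024, norm_first=True,
)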
3 Empirical Findings
Tasks. We consider four character-level transduction tasks: morphological inflection, grapheme-to-phoneme conversion, transliteration, and historical text normalization. For morphological inflection, we use the 2017 SIGMORPHON shared task data (Cotterell et al., 2017) with 52 languages. The performance is evaluated by accuracy (ACC) and edit distance (Dist). For the g2p task, we use the unstressed CMUDict (Weide, 1998) and NETtalk (Sejnowski and Rosenberg, 1987) resources. We use the splits from Wu et al. (2018). We evaluate under word error rate (WER) and phoneme error rate (PER). For transliteration, we use the NEWS 2015 shared task data (Zhang et al., 2015).[4] For historical text normalization, we follow Bollmann (2019) and use datasets for Spanish (Sánchez-Martínez et al., 2013), Icelandic and Swedish (Pettersson et al., 2013), Slovene (Scherrer and Erjavec, 2013, 2016; Ljubešić et al., 2016), Hungarian and German (Pettersson, 2016).[5] We evaluate using accuracy (ACC) and the character error rate of incorrect predictions (CER_i).
[Footnote 4: We do not have access to the test set.]
[Footnote 5: We do not include English due to licensing issues.]
Optimization. We use Adam (Kingma and Ba, 2014) with a learning rate of 0.001 and an inverse square root learning rate scheduler (Vaswani et al., 2017) with 4k steps during the warm-up. We train the model for 20k gradient updates and save and evaluate the model every 400 gradient updates. We select the best model out of 50 checkpoints based on development set accuracy. The numbers of gradient updates and checkpoints are roughly the same as in Wu and Cotterell (2019), the single-model state of the art on the 2017 SIGMORPHON dataset. We use their model as a baseline model. For all experiments, we use a single predefined random seed.
[Figure 3: Distribution of incorrectly inflected forms in the test set of the inflection task over all 52 languages, grouped by desired output word length.]
3.1 A Controlled Hyperparameter Study
To demonstrate the importance of hyperparameter tuning for the transformer on character-level tasks, we perform a small controlled hyperparameter study. This is important since researchers had previously failed to achieve high-performing results with the transformer on character-level tasks. Here, we look at morphological inflection on the five languages in the 2017 SIGMORPHON dataset where submitted systems performed the worst: Latin, Faroese, French, Hungarian, and Norwegian (Nynorsk). We set the dropout to 0.3 and β2 of Adam to 0.999 (the default value), and do not use label smoothing. We do not tune any hyperparameters other than the following three.
The Importance of Batch Size. While recurrent models like Wu and Cotterell's use a batch size of 20, halving the learning rate when stuck and employing early stopping, we find that a less aggressive learning rate scheduler, allowing the model to train longer, outperforms these hyperparameters. Fig. 1 shows the significant impact of batch size on the transformer. The transformer performance increases steadily as the batch size is increased, similarly to what Popel and Bojar (2018) observe for machine translation. The transformer only outperforms the recurrent baseline when the batch size is at least 128, which is much larger than the batch sizes commonly used in recurrent models.[6] Note that the model of Wu and Cotterell has 8.66M parameters, 17% more than the transformer model. To get an apples-to-apples comparison, we apply the same learning rate scheduler to Wu and Cotterell; this does not yield similar improvements and underperforms with respect to the traditional learning rate scheduler. Our feature invariant transformer also outperforms the vanilla transformer model. We set the batch size to 400 for our main experiments. Note that a batch size of 400 is especially large (4% of the training data) considering the training size is only 10k.
[Footnote 6: It is also large in the context of character-level tasks, which typically have around 10k training examples. A batch size of 400 would imply approximately 4% of the training data in a single gradient update.]
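[Editor's note: for concreteness, the inverse square root schedule with a 4k-step warm-up described in the Optimization paragraph can be sketched as below. This is our own generic parameterization keyed to a peak learning rate; Vaswani et al. (2017) express the same shape via a d_model^-0.5 factor instead.]

def inverse_sqrt_lr(step, peak_lr=1e-3, warmup_steps=4000):
    # Linearly warm up to peak_lr over warmup_steps, then decay as 1/sqrt(step).
    step = max(step, 1)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (warmup_steps / step) ** 0.5

# Example: with peak_lr = 1e-3 and a 4k warm-up, the learning rate at step
# 16,000 is 1e-3 * sqrt(4000 / 16000) = 5e-4.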
Other Hyperparameters. Vaswani et al. (2017) apply label smoothing (Szegedy et al., 2016) of 0.1 to the transformer model and show that it hurts perplexity but improves BLEU scores for machine translation. Instead of the default β2 of 0.999 for Adam, Vaswani et al. (2017) use 0.98, and we find that both choices benefit character-level transduction tasks as well (see Tab. 1).

Table 1: Average development accuracy on morphological inflection with different LS and β2, which denote the label smoothing hyperparameter and the Adam optimizer hyperparameter, respectively.

LS    β2     Vanilla  Feature Invariant
0     0.999  89.34    89.80
0     0.98   89.62    89.92
0.1   0.999  89.48    90.02
0.1   0.98   89.98    90.28
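[Editor's note: a minimal sketch of label-smoothed cross-entropy in the standard formulation of Szegedy et al. (2016), i.e. the target distribution (1 - eps) * one_hot + eps * uniform. This is a generic implementation of ours, not the authors' code.]

import torch
import torch.nn.functional as F

def label_smoothed_nll(logits, target, eps=0.1):
    # logits: (..., num_classes); target: (...) long tensor of gold labels.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)  # cross-entropy against the uniform distribution
    return ((1.0 - eps) * nll + eps * uniform).mean()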
3.2 New State-of-the-Art Results
We train our feature invariant transformer on the four character-level tasks, exhibiting state-of-the-art results on morphological inflection and historical text normalization.
Morphological Inflection. As shown in Tab. 2, the feature invariant transformer produces state-of-the-art results on the 2017 SIGMORPHON shared tasks, improving upon ensemble-based systems by 0.27 points. We observe that as the dataset decreases in size, a model with a larger dropout value performs slightly better. A brief tally of phenomena that are difficult to learn for many machine learning models, categorized along typical linguistic dimensions (such as word-internal sound changes, vowel harmony, circumfixation, ablaut, and umlaut phenomena), fails to reveal any consistent pattern of advantage to the transformer model. In fact, errors seem to be randomly distributed, with an overall advantage for the transformer model. Curiously, errors grouped along the dimension of word length reveal that as word forms grow longer, the transformer advantage shrinks (Fig. 3).

Table 2: Average test performance on morphological inflection of the Transformer against models from the literature. * denotes model ensembling.

                                      ACC    Dist
Silfverberg et al. (2017)*            92.97  0.170
Wu et al. (2018)                      93.60  0.128
Wu and Cotterell (2019)               94.40  0.113
Wu and Cotterell (2019) (Our eval)    94.81  0.123
Makarov et al. (2017)*                95.12  0.100
Bergmanis et al. (2017)*              95.32  0.100
Transformer (Dropout = 0.3)           95.59  0.088
Transformer (Dropout = 0.1)           95.56  0.090

Historical Text Normalization. Tab. 3 shows that the transformer model with dropout of 0.1, as in the case of morphological inflection, improves upon the previous state of the art, although the model with a dropout of 0.3 yields a slightly better CER_i.

Table 3: Average test performance on historical text normalization of the Transformer against models from the literature. The superscript s denotes the subset of the dataset, as Flachs et al. (2019) only experiment with a subset of the languages.

                               ACC    CER_i  ACC^s  CER^s_i
Ljubešić et al. (2016)         91.78  0.392  90.37  0.360
Ljubešić et al. (2016) (LM)    91.56  0.399  89.93  0.368
Bollmann (2018)                91.27  0.381  89.73  0.350
Tang et al. (2018a)            91.67  0.389  90.32  0.358
Flachs et al. (2019)           -      -      90.06  -
Transformer (Dropout = 0.3)    91.30  0.340  89.99  0.330
Transformer (Dropout = 0.1)    91.85  0.352  90.61  0.334

G2P and Transliteration. Tab. 4 shows that the transformer outperforms previously published strong recurrent models on two tasks despite having fewer parameters. A dropout rate of 0.3 yields significantly better performance on the transliteration task, while a dropout rate of 0.1 is stronger on the g2p task. This shows that transformers can and do outperform recurrent transducers on common character-level tasks when properly tuned.

Table 4: Average test performance on grapheme-to-phoneme conversion and dev performance on transliteration of the Transformer against models from the literature.

                               WER    PER    ACC    MFS
Wu et al. (2018)               28.20  0.068  41.10  0.894
Wu and Cotterell (2019)        28.20  0.069  41.20  0.895
Transformer (Dropout = 0.3)    28.08  0.070  43.39  0.897
Transformer (Dropout = 0.1)    27.63  0.069  41.35  0.891

4 Related Work
Character-level transduction is largely dominated by attention-based LSTM sequence-to-sequence (Luong et al., 2015) models (Cotterell et al., 2018). Character-level transduction tasks usually involve input-output pairs that share large substrings, and alignments between these are often monotonic. Models that address the task tend to focus on exploiting such structural bias. Instead of learning the alignments, Aharoni and Goldberg (2017) use external monotonic alignments from the SIGMORPHON 2016 shared task baseline (Cotterell et al., 2016). Makarov et al. (2017) use this approach to win the CoNLL-SIGMORPHON 2017 shared task on morphological inflection (Cotterell et al., 2017). Wu et al. (2018) show that explicitly modeling alignment (hard attention) between source and target characters outperforms soft attention. Wu and Cotterell (2019) further show that enforcing monotonicity in a hard attention model improves performance.

5 Conclusion
Using a large batch size and feature invariant input allows the transformer to achieve strong performance on character-level tasks. However, it is unclear what linguistic errors the transformer makes compared to recurrent models on these tasks. Future work should analyze the errors in detail as Gorman et al. (2019) do for recurrent models. While Wu and Cotterell show that the monotonicity bias benefits character-level tasks, it is not evident how to enforce monotonicity on multi-headed self-attention. Future work should consider how to best incorporate monotonicity into the model, either by enforcing it strictly (Wu and Cotterell, 2019) or by pretraining the model to copy (Anastasopoulos and Neubig, 2019).

References
Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2004–2015, Vancouver, Canada. Association for Computational Linguistics.
Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological inflection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 984–996, Hong Kong, China. Association for Computational Linguistics.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics.
Toms Bergmanis, Katharina Kann, Hinrich Schütze, and Sharon Goldwater. 2017. Training data augmentation for low-resource morphological inflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 31–39, Vancouver. Association for Computational Linguistics.
Marcel Bollmann. 2018. Normalization of historical texts with neural network models. Ph.D. thesis, Bochum, Ruhr-Universität Bochum.
Marcel Bollmann. 2019. A large-scale comparison of historical text normalization systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3885–3898, Minneapolis, Minnesota. Association for Computational Linguistics.
Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D. McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL–SIGMORPHON 2018 shared task: Universal morphological reinflection. In Proceedings of the CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 1–27, Brussels. Association for Computational Linguistics.
Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 1–30, Vancouver. Association for Computational Linguistics.
Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task—Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10–22, Berlin, Germany. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pages 13042–13054.
Simon Flachs, Marcel Bollmann, and Anders Søgaard. 2019. Historical text normalization with delayed rewards. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1614–1619, Florence, Italy. Association for Computational Linguistics.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1243–1252. JMLR.
Kyle Gorman, Arya D. McCarthy, Ryan Cotterell, Ekaterina Vylomova, Miikka Silfverberg, and Magdalena Markowska. 2019. Weird inflects but OK: Making sense of morphological generation errors. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 140–151, Hong Kong, China. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Nikola Ljubešić, Katja Zupan, Darja Fišer, and Tomaž Erjavec. 2016. Normalising Slovene data: historical texts vs. user-generated content. In Proceedings of the 13th Conference on Natural Language Processing (KONVENS 2016), pages 146–155.
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics.
Peter Makarov, Tatiana Ruzsics, and Simon Clematide. 2017. Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 49–57, Vancouver. Association for Computational Linguistics.
Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sebastian J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229–244, Florence, Italy. Association for Computational Linguistics.
Eva Pettersson. 2016. Spelling normalisation and linguistic analysis of historical text for information extraction. Ph.D. thesis, Acta Universitatis Upsaliensis.
Eva Pettersson, Beáta Megyesi, and Jörg Tiedemann. 2013. An SMT approach to automatic annotation of historical text. In Proceedings of the Workshop on Computational Historical Linguistics at NODALIDA 2013, May 22–24, 2013, Oslo, Norway, NEALT Proceedings Series 18, 087, pages 54–69. Linköping University Electronic Press.
Martin Popel and Ondřej Bojar. 2018. Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 110(1):43–70.
Felipe Sánchez-Martínez, Isabel Martínez-Sempere, Xavier Ivars-Ribes, and Rafael C. Carrasco. 2013. An open diachronic corpus of historical Spanish: annotation criteria and automatic modernisation of spelling. arXiv preprint arXiv:1306.3692.
Yves Scherrer and Tomaž Erjavec. 2013. Modernizing historical Slovene words with character-based SMT. In BSNLP 2013 - 4th Biennial Workshop on Balto-Slavic Natural Language Processing.
Yves Scherrer and Tomaž Erjavec. 2016. Modernising historical Slovene words. Natural Language Engineering, 22(6):881–905.
Terrence J. Sejnowski and Charles R. Rosenberg. 1987. Parallel networks that learn to pronounce English text. Complex Systems, 1.
Miikka Silfverberg, Adam Wiemerslage, Ling Liu, and Lingshuang Jack Mao. 2017. Data augmentation for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 90–99, Vancouver. Association for Computational Linguistics.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826.
Gongbo Tang, Fabienne Cap, Eva Pettersson, and Joakim Nivre. 2018a. An evaluation of neural machine translation models on historical spelling normalization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1320–1331, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Gongbo Tang, Mathias Müller, Annette Rios, and Rico Sennrich. 2018b. Why self-attention? A targeted evaluation of neural machine translation architectures. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4263–4272, Brussels, Belgium. Association for Computational Linguistics.
Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, et al. 2018. Tensor2Tensor for neural machine translation. arXiv preprint arXiv:1803.07416.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
R. L. Weide. 1998. The Carnegie Mellon pronouncing dictionary.
Shijie Wu and Ryan Cotterell. 2019. Exact hard monotonic attention for character-level transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1530–1537, Florence, Italy. Association for Computational Linguistics.
Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4425–4438, Brussels, Belgium. Association for Computational Linguistics.
Min Zhang, Haizhou Li, Rafael E. Banchs, and A Kumaran. 2015. Whitepaper of NEWS 2015 shared task on machine transliteration. In Proceedings of the Fifth Named Entity Workshop, pages 1–9, Beijing, China. Association for Computational Linguistics.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xyTdVL2LaWs",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=xyTdVL2LaWs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer e1cd",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rkEb8vJPf",
"year": null,
"venue": null,
"pdf_link": "/pdf/dd9561d328ff052c9b11c75d1046a1a5f53097a4.pdf",
"forum_link": "https://openreview.net/forum?id=rkEb8vJPf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Representation Learning for Seismic Hawkes Processes",
"authors": [
"David Belanger",
"Yaniv Ovadia",
"Maxwell Bileschi",
"Brendan Meade"
],
"abstract": "Today, more than half a billion people live under the threat of devastating earthquakes. The 21st century has already seen earthquakes kill more than 800,000 people and cause more than 300 billion in damage. Despite these impacts and decades worth of research into the physics of seismic events, existing earthquake predictions are often too inaccurate to be useful for issuing actionable warnings. It is possible that deep learning could help close this gap, but as researchers we must proceed with care. First, there is a limited supply of historical data, and thus overfitting is a key concern. Second, it is important that models' predictions and parameters are interpretable, so that they can be used to generate and validate hypotheses about the underlying physical process. In response, we provide a case study of applying deep learning to forecasting seismic events in Southern California. We replace small components of a popular Hawkes process model for earthquake forecasting with black-box neural networks, with the goal of maintaining a similar level of interpretability as the original model. Using experiments on about three decades of earthquake hypocenter and magnitude estimates, we visualize our learned representations for earthquake events and discuss interpretability-accuracy tradeoffs. Our visualization may be useful to provide refinements to the Utsu/Ohmori law for the time-decay of aftershock productivity (Utsu, 1971).",
"keywords": [
"hawkes process",
"deep learning",
"earthquakes"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rJ-Bocb_bB",
"year": null,
"venue": "ECCV (1) 2008",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=rJ-Bocb_bB",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Joint Parametric and Non-parametric Curve Evolution for Medical Image Segmentation",
"authors": [
"Mahshid Farzinfar",
"Zhong Xue",
"Eam Khwang Teoh"
],
"abstract": "This paper proposes a new joint parametric and nonparametric curve evolution algorithm of the level set functions for medical image segmentation. Traditional level set algorithms employ non-parametric curve evolution for object matching. Although matching image boundaries accurately, they often suffer from local minima and generate incorrect segmentation of object shapes, especially for images with noise, occlusion and low contrast. On the other hand, statistical model-based segmentation methods allow parametric object shape variations subject to some shape prior constraints, and they are more robust in dealing with noise and low contrast. In this paper, we combine the advantages of both of these methods and jointly use parametric and non-parametric curve evolution in object matching. Our new joint curve evolution algorithm is as robust as and at the same time, yields more accurate segmentation results than the parametric methods using shape prior information. Comparative results on segmenting ventricle frontal horn and putamen shapes in MR brain images confirm both robustness and accuracy of the proposed joint curve evolution algorithm.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Cp_cdtbgZ5X",
"year": null,
"venue": "CoRR 2022",
"pdf_link": "http://arxiv.org/pdf/2209.05112v1",
"forum_link": "https://openreview.net/forum?id=Cp_cdtbgZ5X",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A challenge-based survey of e-recruitment recommendation systems",
"authors": [
"Yoosof Mashayekhi",
"Nan Li",
"Bo Kang",
"Jefrey Lijffijt",
"Tijl De Bie"
],
"abstract": "E-recruitment recommendation systems recommend jobs to job seekers and job seekers to recruiters. The recommendations are generated based on the suitability of the job seekers for the positions as well as the job seekers' and the recruiters' preferences. Therefore, e-recruitment recommendation systems could greatly impact job seekers' careers. Moreover, by affecting the hiring processes of the companies, e-recruitment recommendation systems play an important role in shaping the companies' competitive edge in the market. Hence, the domain of e-recruitment recommendation deserves specific attention. Existing surveys on this topic tend to discuss past studies from the algorithmic perspective, e.g., by categorizing them into collaborative filtering, content based, and hybrid methods. This survey, instead, takes a complementary, challenge-based approach, which we believe might be more practical to developers facing a concrete e-recruitment design task with a specific set of challenges, as well as to researchers looking for impactful research projects in this domain. We first identify the main challenges in the e-recruitment recommendation research. Next, we discuss how those challenges have been studied in the literature. Finally, we provide future research directions that we consider promising in the e-recruitment recommendation domain.",
"keywords": [],
"raw_extracted_content": "arXiv:2209.05112v1 [cs.IR] 12 Sep 2022Achallenge-based survey of e-recruitmentrecommendation systems\nYOOSOF MASHAYEKHI ,IDLAB-DepartmentofElectronicsandInformationSystems ( ELIS),GhentUniversity,\nBelgium\nNAN LI,IDLAB - Department ofElectronicsand Information Systems ( ELIS), Ghent University, Belgium\nBOKANG ,IDLAB - Department of Electronicsand Information Systems ( ELIS), Ghent University, Belgium\nJEFREYLIJFFIJT ,IDLAB - Department of Electronicsand InformationSystems ( ELIS), Ghent University, Belgium\nTIJLDEBIE ,IDLAB - Department of Electronicsand InformationSystems ( ELIS), Ghent University, Belgium\nE-recruitment recommendation systemsrecommend jobstojobseek ersandjobseekerstorecruiters.Therecommendations aregen-\neratedbasedonthesuitabilityofthejobseekersforthepositions aswellasthejobseekers’andtherecruiters’preferences.The refore,\ne-recruitment recommendation systemscould greatly impact jobs eekers’careers.Moreover,by affecting thehiringprocesses of th e\ncompanies,e-recruitmentrecommendationsystemsplayanimportant roleinshapingthecompanies’competitiveedgeinthemarket.\nHence, thedomain of e-recruitment recommendation deservesspecifi c attention. Existing surveys on this topic tend to discuss past\nstudies from the algorithmic perspective, e.g., by categorizing th em into collaborative filtering, content based, and hybrid methods.\nThis survey, instead, takes a complementary, challenge-based ap proach, which we believe might be more practical to developers\nfacing a concrete e-recruitment designtask witha specific setof chal lenges,as wellas to researcherslooking forimpactful research\nprojects in this domain. We first identifythe main challenges in the e-re cruitment recommendation research. Next, we discuss how\nthosechallengeshavebeenstudiedintheliterature.Finally,wep rovidefutureresearchdirections thatweconsiderpromising inthe\ne-recruitment recommendation domain.\nCCS Concepts: • Generaland reference →Surveys and overviews ; •Informationsystems →Recommender systems .\nAdditionalKeyWords andPhrases:Job recommendation, E-recruitment recommendation\n1 INTRODUCTION\nWith the ever-increasing use of the world wide web, many peop leseek jobs on e-recruitment platforms [ 106]. These\nplatforms, such as LinkedIn1, usually provide recommendations for job seekers to apply t o several jobs and for re-\ncruiterstoselect suitablejob seekers for their jobpositi ons[54,75].\nThe recommendation in e-recruitment is an important subfiel d of recommendation systems. Recommending the\nproperjobseekerstorecruiterscouldincreasetheefficienc yofthehiringprocess,andrecommendingtherightjobsto\njobseekerscouldhaveapositiveimpactonjobseekers’care erpaths;ontheotherhand,lowqualityrecommendations\nthat poorly match job seekers with vacancies do not only cost time and effort of both recruiters and job seekers but\nalsocouldhaveanegative impactonthelabormarket,compan ies’competitiveness,andpeople’slivesinthelongrun.\nHence, thedomainof recommendationine-recruitment requi res specificattention.\nIn this study,we review the papers in the past decade aboute- recruitment recommendation systems. 
Existing surveys [35, 49] on e-recruitment recommendation systems usually focus on categorizing papers based on their methods, such as collaborative filtering, content based, hybrid, etc. The range of challenges that these different methods address, on the other hand, has been less central to these prior surveys. Therefore, in this survey we focus on the challenges for e-recruitment recommendation systems and how those challenges have been studied in the literature.
We believe the challenge-based approach used in this survey is useful both for developers of e-recruitment recommendation systems and for researchers in the field. Indeed, developers will typically look for solutions to the practical challenges that naturally pose themselves in the design of their e-recruitment recommendation system, so in their design process the challenges will typically come before the possible algorithmic approaches. For researchers, our challenge-based approach may help in identifying the most impactful research problems of the domain and the proposed solution approaches to address them that have already been attempted. Moreover, open challenges and future research directions are also discussed to provide more insight for future research in this domain.

Terminology. Different entities could be recommended in e-recruitment recommendation systems. The e-recruitment recommendation systems could be categorized into three groups based on the entities being recommended: job recommendation, job seeker recommendation, and reciprocal recommendation. In the rest of the paper, we use the term e-recruitment recommendation to refer to all recommendation systems in this research area.
Unless otherwise stated, the terms user and item can refer to job seekers, job positions, or recruiters, depending on the context: users receive the recommended lists, and items are the entities recommended to users. Throughout this paper, the terms job, job posting, job position, vacancy, and opening are used interchangeably to refer to a job vacancy. The terms recruiter and employer are also used interchangeably to refer to the person responsible for a job position. CVs and resumes denote the textual content of job seekers. We refer to all features and textual content of the users (job seekers or job postings) by the term user profile. Since different terms are used for the job/job seeker recommendation in the literature, we also use phrases such as matching job seekers with job positions (e.g., [80, 141]), person-job fit (e.g., [81, 110]), and recommendation in e-recruitment (e.g., [30, 48]) to denote the same concept of recommendation in e-recruitment here.

Contributions. This survey will provide an overview of the literature in the past decade (from 2012 onwards) on e-recruitment recommendation systems.
It contains the following contributions:
• Underscoring the importance of a survey on this topic, we list and discuss some important specific characteristics of e-recruitment recommendation systems that make it clear why they require a dedicated approach.
• We identify and briefly discuss eight challenges that were frequently addressed by research papers covered in this survey, and where appropriate explain how they are the result of specific characteristics of e-recruitment recommendation systems.
• For each of these challenges, we discuss the papers that have specifically targeted it, and we briefly discuss their approaches.
• We provide future research directions and discuss the challenges that have been investigated less in recent years.
• We present a structured overview of the collected 123 papers in Table 1 in the Appendix. The available properties of each paper in Table 1 are the recommendation type based on the recommended entities (job, job seeker, reciprocal), the recommendation method type, and the challenges that the paper has addressed.
• We maintain a website (https://aida-ugent.github.io/e-recruitment-recsys-challenges/) containing the content of Table 1 along with paper metadata (e.g. venue, url, authors, etc.) and summaries of the selected papers. We hope this can further facilitate the future research in e-recruitment recommendation systems.

For the rest of this section, we first discuss more in detail how our survey complements the existing surveys in Section 1.1. Next, we describe how the papers were collected and filtered in Section 1.2. Finally, Section 1.3 presents the structure of this survey.

1.1 Differences with previous recent surveys
The two recent surveys on e-recruitment recommendation systems [35, 49] organized the literature differently from the present survey. The work by Freire and de Castro [49] focused on method types, data sources, and assessment methods. The work by de Ruijt and Bhulai [35] gave an in-depth discussion about the e-recruitment recommendation system methods, with a focus on categorizing hybrid and ensemble hybrid methods. Although de Ruijt and Bhulai [35] explored some aspects and challenges of e-recruitment recommendation systems, such as large scale, ethical, and reciprocal aspects, their discussion on those challenges and aspects is brief and limited.
Since the type of recommendation methods is well discussed in previous papers, this aspect is not the focus of the present study. Given the limitations of previous surveys, we focus on the challenges in e-recruitment recommendation systems and discuss the solutions that have been proposed for those challenges from a technical point of view. Our survey is valuable in that we emphasize the distinguishing nature of e-recruitment and organize the literature with respect to the special difficulties and challenges in e-recruitment recommendation.

1.2 Literature search methodology
We crawled data from dblp (https://dblp.org/) using ten keywords: {'job recommender', 'job recommendation', 'job matching', 'e-recruitment', 'e-recruiting', 'online recruitment', 'person-job fit', 'vacancy recommendation', 'candidate recommendation', 'occupation recommendation'}, and as a result, 515 papers were collected. We only kept papers published after (including) 2012, with at least five citations if published before (including) 2019. Papers that do not recommend actual jobs or job seekers (e.g., papers recommending a job type) were removed as well.
This approach resulted in 99 papers in total. We further collected 24 papers from industry leaders and known experts from top conferences and journals. In total, 123 papers are kept for further examination.

1.3 Structure of the survey
The structure of the rest of the paper is as follows. In Section 2, we discuss the properties that distinguish e-recruitment recommendation systems from other recommendation systems. Section 3 contains our findings, in which Section 3.1 gives a bird's eye view of all the challenges identified in this survey, Sections 3.2 to 3.9 address the different challenges respectively, and Section 3.10 briefly talks about the remaining papers not covered in the challenge sections. Finally, Section 4 concludes our findings and discusses the limitations of this survey, open challenges, and future directions.

2 SPECIFIC CHARACTERISTICS AND PROPERTIES OF E-RECRUITMENT RECOMMENDATION SYSTEMS
In this section, we discuss the differences between e-recruitment and traditional recommendation systems. Although many challenges and characteristics are common between an e-recruitment recommendation system and a traditional one, such as an e-commerce or a movie recommender, certain aspects set e-recruitment recommendation systems apart:
(1) One worker, one job (OWOJ): At a certain period of time, a person can only work at one or a few jobs, and also companies hire one or a few employees for a job posting [22]. Moreover, job seekers and job positions are mostly available for a limited time and become inactive after they are employed or filled. In contrast, in a traditional recommender, the same items can be recommended to many users, and users consume several items. The e-recruitment recommendation systems have to consider this aspect in the recommendation. First, the number of recommendations for each job/job seeker may have to be kept relatively small since only one or a few of them can succeed. Moreover, job seekers/jobs usually compete with each other for the same jobs/job seekers. Hence, the recommendation of a job at which others have a higher chance of success could be less interesting. This competition aspect should ideally be taken into consideration in generating the recommendations.
(2) Two-sided (TS): In traditional recommendation systems, the success of a recommendation usually depends on the action of one user. For example, in e-commerce a recommendation is successful if the user decides to buy a product. However, in e-recruitment recommendation systems, the ultimate success of a recommendation depends on whether it results in employment. The actions by one user, such as applying for a job position by a job seeker, could only show the interest of the job seeker in the job position, while the success of the recommendation also depends on the recruiter of the job posting who makes an offer for the job. Hence, e-recruitment recommendation systems have multiple stakeholders (e.g., job seekers and employers).
(3) Suitability as well as preference (SP): While users' preferences play an important role in all recommendation systems, e-recruitment recommendation systems recommend jobs/job seekers based on suitability and skills as well [60]. One way to define suitability and user preference is as follows. Suitability represents the degree of match between a job seeker and a job position based on, typically but not exclusively, the knowledge, skills, diplomas, and years of experience of the job seekers and the job position requirements. User preference, on the other hand, represents one's inclination towards certain items.
For example, a job seeker might be suitable for several positions, but prefer to work for a specific company for various reasons such as higher salary, social connections, etc. In addition, a recruiter often has to pick one job seeker among multiple equally suitable job seekers based on preferences such as social connections, personality, etc. Hence, the suitability of a job seeker for a job and their preferences will in general not be equal, which poses specific challenges to e-recruitment recommendation systems.
(4) Multi-faceted (MF): In e-recruitment recommendation systems, both suitability and preference are, in fact, dependent on many different facets with different data types. For a job seeker, their previous job history, diplomas, seniority, interests, skills, location, social fit to the job environment, etc. could be relevant for an e-recruitment recommendation system. For a job posting, its required skills, required diplomas, seniority, location, organizational culture, etc. might be available and could be used in an e-recruitment recommendation system. Hence, the nature of data available in the e-recruitment domain is usually multi-faceted and requires specific attention in designing e-recruitment recommendation systems.
(5) High-stakes (HS): E-recruitment is a high risk domain because it can have a long-term impact on people's careers and hence, their career fulfillment. Moreover, it plays an important role in shaping the companies' competitive edge in the market. E-recruitment is even defined as one of the high-risk domains according to the EU's AI Act (proposal) [32]. Hence, considering fairness and trustworthiness aspects is more essential in e-recruitment recommendation systems compared to the traditional ones.
(6) Short interaction history (SIH): In e-recruitment recommendation systems, job seekers only interact with the system while they are seeking a new job, and they will probably stop using it after they are employed. Moreover, new job positions appear and disappear frequently [58]. In contrast, in a traditional recommendation system users and items often have a long history within the system.

3 SURVEY STRUCTURED ACCORDING TO CHALLENGES FACED IN THE DEVELOPMENT OF E-RECRUITMENT RECOMMENDATION SYSTEMS
In this survey, we identify some challenges in e-recruitment recommendation systems that have been addressed by studies in recent years. Although there would be many other challenges in the e-recruitment recommendation domain, we focus on the most common ones here.
We first list the main challenges in e-recruitment recommendation systems and describe each of the challenges in Section 3.1. Next, we introduce the methods that have been proposed to deal with each of the challenges in Sections 3.2 to 3.9. Finally, we discuss the papers that are not included in the sections covering challenges in Section 3.10. Moreover, in each section, we provide a visual overview of the problems and solutions (Fig. 1 to Fig. 8). They contain the solutions that we observed in the literature. Of course, other solutions that have not yet been described in the literature may exist.

3.1 A preview of the challenges
1) Data quality: E-recruitment recommendation systems often have a plethora of data sources, including interactions and textual data from the job seekers (CVs) and job postings (job descriptions). There are many relevant facets in the available data (MF aspect 2.4), but with variable quality. Moreover, some facets, e.g. skills, might be implicit and need to be extracted from unstructured data. Some common issues about dealing with such data are:
a. Data cleaning and preprocessing.
Recommendation systems usually use features extracted from textual data, which is usually noisy. Hence, data cleaning and preprocessing are necessary and crucial for better feature extraction and downstream tasks.

b. Semantic gap. The textual data is usually written by different people, and different terms are often used to address the same concept. This semantic gap results in poor semantic matching.

c. Skill extraction. Although many facets might be implicit and need to be extracted with carefully designed methods, we focus on skills, which are the most important feature in matching job seekers with job postings. Using job seekers' skills and the job postings' required skills is necessary for increasing the performance of e-recruitment recommendation systems. Hence, skill extraction from the textual data is another challenging task in e-recruitment recommendation systems.

d. Multi-linguality. In some countries/platforms, job seekers' resumes and job descriptions are written in several languages. In such cases, e-recruitment recommendation systems should support multiple languages for the textual content.

e. Data sparsity. Many recommendation systems suffer from data sparsity issues, and e-recruitment recommendation is no exception (SIH aspect 2.6). The reason is that job seekers may only use the system a few times and then leave the platform forever after a successful job hunt; the same is true for vacant job positions: new jobs might appear on a daily basis but disappear quickly after receiving satisfying applications.

2) Heterogeneous data, and multiple interaction types and data sources: E-recruitment recommendation systems could use more data sources compared to many other kinds of recommendation systems, as they might have access to job seekers' previous work experiences, interviews, the textual content of their resumes/job descriptions, skills, and preferences (MF aspect 2.4). The availability of unstructured, semi-structured and structured data means e-recruitment recommendation systems have to deal with the heterogeneous nature of the data. In addition, there are also many interaction types between job seekers and job postings in these systems, e.g., view, click, apply, chat, favorite, like, and comment. Using different interaction types between job seekers and job postings could be both a challenge and an opportunity in the development of e-recruitment recommendation systems. Moreover, recommendation systems could also make use of other data sources besides job market related data, such as job seekers' and job postings' information in social networks, blogs, etc.

3) Cold start: The cold start problem in recommendation systems refers to the problem of recommending to new users or recommending new items with few or no interactions. This problem might be more acute for e-recruitment recommendation systems than traditional ones since new jobs tend to appear and disappear frequently (SIH aspect 2.6). Jobs usually disappear after a successful match, and new jobs with the same title are often posted as new items.
In contrast, products with the same name in traditional recommenders are usually treated as the same item, and only their availability changes over time (in cases such as movie recommenders, the product is always available). Using data other than interactions could often alleviate the cold start problem in recommendation systems. Hence, it is helpful to have the many facets available in the job seekers' and job postings' profiles (MF aspect 2.4). Also note that in e-recruitment recommendation terms, there are user (job seeker or job) cold start and item (job or job seeker) cold start problems. In job recommendation, user cold start refers to job seeker cold start and item cold start refers to job cold start, and it is the other way around in job seeker recommendation.

4) User preferences as well as suitability: To find the best matches between job seekers and vacancies, it is crucial to use the knowledge and skills of the job seekers and the requirements of the job positions. However, users' preferences are equally important for a personalized recommendation system (SP aspect 2.3). Moreover, users' preferences might change over time, which should be taken into consideration by the recommendation systems.

5) Interpretability and explainability: Providing explainable recommendations and designing interpretable models are important in e-recruitment recommendation systems (HS aspect 2.5). Job seekers could benefit from explanations of their recommendations since important career decisions will depend on their choices. Moreover, providing explainable results helps design user-friendly applications for job seekers and recruiters.

6) Specific objectives: E-recruitment recommendation systems usually have a multi-objective nature, since they need to satisfy multiple stakeholders, including job seekers, recruiters, and service providers (TS aspect 2.2). In addition, e-recruitment recommendation systems could have specific objectives, such as balancing the number of recommendations each job seeker/job posting receives, recommending items with a high chance of success given the competition (OWOJ aspect 2.1), or avoiding false positives to make sure that users are not bothered by too much spam.

7) Bias and fairness: Recommendation systems suffer from all kinds of well-known biases, some of which have raised societal and ethical concerns. Providing fair recommendations in e-recruitment is even more essential than in other domains since e-recruitment is a high-stakes domain (HS aspect 2.5). It is crucial to mitigate biases concerning job seekers, such as gender bias, as well as biases regarding the job postings, such as recency bias (recent job postings may be more popular).

8) Large scale: The ever-increasing amounts of data bring the pressing challenge of scalability to e-recruitment recommendation systems.
More specifically, large-scale data may cause issues in both the training and inference phases: in each phase, there could be issues with speed and storage/memory consumption.

3.2 Data quality

Since most e-recruitment recommendation systems use interactions as well as textual data (resumes and job descriptions) to model the user profile or to construct features, various data quality issues affect the quality of recommendations. Most issues in this section are about textual data quality since the facets available in e-recruitment (MF aspect 2.4) are sometimes hidden in free text. We briefly discuss different approaches for each data quality issue discussed in Section 3.1.1. An overview of this section, which includes the categories of the data quality issues and the corresponding solutions in the literature, is presented in Fig. 1.

Fig. 1. An overview of the data quality challenge: data cleaning and preprocessing (NLP [9, 11, 12, 23, 36–38, 43, 59, 62, 77, 82, 96, 103, 115, 116, 120, 128, 134]); semantic gap (ontology [59, 62, 63, 99, 124]); skill extraction (from text: NLP [44, 57, 62, 63, 96, 98, 126]; from skills: inferred skills [44, 57, 126], calibrated skills [123]); multi-linguality (multi-lingual language models [80, 99]); data sparsity (reducing the number of individual jobs/job seekers, e.g. by clustering [29, 40, 82]; densifying the interaction graph by content similarity [122]).

Data cleaning and preprocessing (Section 3.1.1.a). E-recruitment recommendation systems usually use textual content to acquire features for job seekers and job descriptions, which could further be used in recommendation methods. However, the textual contents are usually written by different people and are noisy. Therefore, data cleaning and preprocessing of textual data are crucial for providing high quality recommendations. Although most approaches using textual content have to do some data cleaning and preprocessing, we only discuss the works that have explicitly focused on NLP techniques to deal with such issues. The data cleaning and preprocessing usually involve common NLP techniques such as tokenization, removing stop words, stemming, and lemmatization [9, 11, 12, 23, 36–38, 43, 59, 62, 77, 82, 96, 103, 115, 116, 120, 128, 134].

Semantic gap (Section 3.1.1.b). Since the textual data is written by different people, e-recruitment recommendation systems suffer from a semantic gap between contents from different sources, such as resumes and job descriptions. Different terms might have been used to refer to the same concept. Moreover, the same term could have different meanings depending on the context. Although most papers that use language models or learn representations of textual data can alleviate the semantic gap to some degree, we only discuss the papers that explicitly focus on this issue. The most common approach employed in the literature to tackle the semantic gap is to map skills/concepts to the nodes in an ontology (by exploiting a language model, using Named Entity Recognition (NER), using Named Entity Disambiguation (NED), etc.) and to use the shared nodes to refer to the same skills/concepts [59, 62, 63, 99, 124].

Skill extraction (Section 3.1.1.c). E-recruitment recommendation systems mostly match job seekers with job postings based on their expertise and skills. Since job seekers' profiles and job descriptions are often available as free text with no structure, skill extraction from the textual data is important for some e-recruitment recommendation systems. Some papers have employed NLP techniques such as n-gram tokenization [96], NER [57, 63, 96, 98], part-of-speech tagging (PoS tagging) [57], using skill dictionaries or ontologies [44, 57, 62, 63, 96], or other techniques (e.g., using the context of a skill term, called skill headwords) [126] to extract skills from the text. Job seekers' and job postings' skills have also been expanded using skill similarities or relations provided by word embedding models (e.g., word2vec) [57, 126], and by domain-specific ontologies or skill taxonomies [44]. Given the skills extracted for job seekers and job postings by an in-house skill tagger at LinkedIn, Shi et al. [123] selected skills for job postings considering the market supply of the skills (enough job seekers having that skill) and also the importance of each skill in a job posting.
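As an illustration of the dictionary-based flavor of these approaches, the following is a minimal sketch (not taken from any of the cited systems) of n-gram matching against a small skill dictionary, with a synonym map that also narrows the semantic gap; the dictionary and synonym map are hypothetical examples.

```python
# Minimal sketch of dictionary-based skill extraction with synonym
# normalization. The skill dictionary and synonym map below are
# hypothetical; real systems use curated taxonomies or ontologies.
from typing import List

CANONICAL_SKILLS = {"python", "machine learning", "project management"}
SYNONYMS = {"ml": "machine learning", "py": "python"}  # variants -> canonical skills

def extract_skills(text: str, max_ngram: int = 3) -> List[str]:
    tokens = [t.strip(".,;:()") for t in text.lower().split()]
    found = set()
    # Scan all n-grams up to max_ngram and look them up in the dictionary.
    for n in range(1, max_ngram + 1):
        for i in range(len(tokens) - n + 1):
            ngram = " ".join(tokens[i:i + n])
            ngram = SYNONYMS.get(ngram, ngram)  # bridge the semantic gap via synonyms
            if ngram in CANONICAL_SKILLS:
                found.add(ngram)
    return sorted(found)

print(extract_skills("Experienced in Py and ML, with project management duties"))
# -> ['machine learning', 'project management', 'python']
```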
Multi-linguality (Section 3.1.1.d). Some e-recruitment recommendation systems are multi-lingual, i.e., the textual content of resumes and job descriptions could be in multiple languages. Moreover, matching resumes and job descriptions in different languages results in cross-linguality challenges. Such issues have been studied in [80, 99], where a multi-lingual language model was used to support multiple languages. Lavi et al. [80] designed a Siamese architecture to fine-tune multi-lingual BERT using the historical data of recruiters' interactions with candidates.

Data sparsity (Section 3.1.1.e). E-recruitment recommendation systems often suffer from data sparsity issues (SIH aspect 2.6) due to the fact that similar job positions are usually considered as separate entities. Moreover, job seekers often stop using the platform after being employed. Although most approaches that use content in the recommendation could alleviate the data sparsity issue to some extent (e.g. [13]), we only discuss the works that study data sparsity explicitly. One approach that has been studied to cope with the data sparsity issue is to reduce the number of distinct job positions by splitting a job position into a job title and a company name [82] or by clustering similar job positions [29, 40]. Another approach, designed by Shalaby et al. [122], is to densify the graph of jobs, which is created based on interactions, by adding content similarity links between the entities (job seekers and job positions). The recommendations are then generated using this graph of jobs.
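To make the clustering idea concrete, here is a minimal sketch, assuming scikit-learn is available, that groups near-duplicate job postings by clustering their TF-IDF vectors; the sample titles and the number of clusters are made up for illustration.

```python
# Minimal sketch: reduce the number of distinct job positions by
# clustering similar postings on their TF-IDF representations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

postings = [
    "senior python developer",      # hypothetical job titles
    "python developer (senior)",
    "junior data analyst",
    "data analyst entry level",
]

vectors = TfidfVectorizer().fit_transform(postings)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Interactions can now be aggregated per cluster instead of per posting,
# which densifies the interaction data.
for title, label in zip(postings, labels):
    print(label, title)
```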
3.3 Heterogeneous data, and multiple interaction types and data sources

E-recruitment recommendation systems could use the heterogeneous data of job seekers and job postings, including location, textual resume/job description, skills, etc. (MF aspect 2.4). Moreover, different types of behavioral data are available, and using such data is challenging in recommendation systems. In addition, job seekers' and job positions' data could be enriched by their information from external sources. We briefly discuss the papers dealing with these three aspects, which are also described in Section 3.1.2. An overview of this section is presented in Fig. 2.

Fig. 2. An overview of the heterogeneous data, and multiple interaction types and data sources challenge: heterogeneous data (computing similarity scores between fields [31, 42, 59, 90–92, 95, 115]; learning embeddings for each field [64, 65, 93, 141]); multiple interaction types (conversion to ratings [138]; weighting interaction types: multi-task [50], sample importance weighting [129]); external data sources (friends' features [27, 38]; others [20, 44, 45]).

Since resumes and job descriptions are among the most important data sources for e-recruitment, it is necessary to carefully use them as well as the behavioral data. Job seeker profiles, resumes, and job descriptions sometimes have several fields with different data types. Hence, the heterogeneous nature of the data should be considered in designing recommendation systems in e-recruitment. Many papers use features of different types in a recommendation algorithm (e.g., decision trees, deep neural networks, etc.) either directly or through some feature representation technique such as one-hot encoding, word embeddings, etc. (e.g., [97, 110]). However, some methods are explicitly designed to work with heterogeneous data, and we mostly focus on those papers. Some studies have combined the similarity scores between the same fields (e.g., education, work experience, etc.) of resumes and job postings [31, 42, 59, 90–92, 115] or between all fields in resumes and job postings [95]. Learning embeddings for each of the fields/data sources of job seeker profiles and job postings, and using the interactions of those embeddings to match job seekers with job postings, is another approach employed to deal with heterogeneous data [64, 65, 93, 141]. More specifically, Zhao et al. [141] provided recommendations based on the fused embeddings of job seekers and jobs, combining the embeddings learned from the textual content, a job-skill information graph, and geolocation data. In the deep neural networks proposed in [64, 65], the embeddings for the same fields/field types of resumes and job postings were learned by their inner interactions. In [64], a multi-head self-attention module was then applied to the embeddings of the different fields as the field outer interaction module. In [93], different embeddings are learned for different fields of job seekers by their interactions in the neural network. Finally, the learned embeddings were passed to a multi-layer perceptron to compute the matching score between a resume and a job posting [64, 65, 93].
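A minimal sketch of the first family of approaches (combining per-field similarity scores) follows; the fields, similarity functions, and weights are illustrative assumptions, not the setup of any specific cited paper.

```python
# Minimal sketch: match a resume and a job posting by combining
# per-field similarities with a weighted sum. Fields and weights
# are hypothetical.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def match_score(resume: dict, posting: dict) -> float:
    field_scores = {
        "skills": jaccard(resume["skills"], posting["skills"]),
        "education": 1.0 if resume["degree"] == posting["degree"] else 0.0,
        "location": 1.0 if resume["city"] == posting["city"] else 0.0,
    }
    weights = {"skills": 0.6, "education": 0.25, "location": 0.15}
    return sum(weights[f] * s for f, s in field_scores.items())

resume = {"skills": {"python", "sql"}, "degree": "msc", "city": "ghent"}
posting = {"skills": {"python", "spark"}, "degree": "msc", "city": "brussels"}
print(round(match_score(resume, posting), 3))  # 0.6*(1/3) + 0.25 + 0 = 0.45
```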
Moreover, there could be multiple types of interactions between a job seeker and a job position, such as click, apply, like, favorite, invite, interview, hire, etc., where some of them are initiated by the job seeker and some by recruiters. Zhang and Cheng [138] transformed the implicit feedback (click, bookmark, reply, and click) into ratings and proposed a two-stage ensemble method for generating the recommendations. Fu et al. [50] proposed a deep neural network to capture the dynamic preferences of the job seekers and recruiters by learning a multi-task objective over their behavioral data (e.g., click, apply, chat, invite, match). Volkovs et al. [129] proposed a content-based recommendation system treating different interaction types as positive with different weights for sampling, and used XGBoost to optimize the binary classification loss.

To find a better match between job seekers and vacancies, information other than skills, such as personality and traits, has also been found to be useful. Some studies have tried to use auxiliary information gathered from external data sources such as friends' features in social networks [27, 38] and personal websites [20, 44, 45] to build more comprehensive profiles and improve the recommendations.

3.4 Cold start

As discussed in Section 3.1.3, cold start in recommendation systems refers to the problem of recommending to new users or items with no or few interaction data. This problem could be more acute for e-recruitment recommendation systems because job opening positions are usually treated as distinct items even if they have the same job title and description, and hence those job openings would be treated as new items (SIH aspect 2.6). E-recruitment recommenders could suffer from both job seeker cold start and job cold start problems.

Using content to provide recommendations could alleviate the cold start problem, and in the e-recruitment domain many facets are often available for this purpose (MF aspect 2.4). Hence, papers with content-based approaches or methods that use content-based features could deal with the cold start problem to some extent. However, we only discuss the papers that explicitly address the cold start problem. These papers follow two general approaches: recommending using the interactions of similar jobs/job seekers, or predicting the recommendation score based on job seekers' and jobs' features. Some papers employ both approaches to deal with the cold start problem. An overview of this section, including the solutions proposed by recent studies for the cold start problem, is presented in Fig. 3.

Fig. 3. An overview of the cold start challenge: recommending based on the interactions of similar jobs/job seekers [15, 29, 58, 68, 82, 90, 91, 104, 121, 122, 138]; recommending based on the features of jobs and job seekers [15, 58, 60, 86, 119, 120, 122, 129, 135, 137].

Two approaches have been used in the literature that recommend based on the interactions of similar jobs/job seekers. First, to compute the matching scores between jobs and new job seekers, some studies find job seekers similar to the new ones based on content features and then use the known (e.g., previously interacted) matching scores between them and the jobs [29, 68, 90, 91, 104]. In [90, 91], jobs are recommended to new graduate students based on the job offers of similar graduates. In another study by Chen et al. [29], a context-aware multi-armed bandit was employed for generating job recommendations, where the job recommendation scores for new job seekers were computed based on the interaction history of similar job seekers. This method could also deal with job cold start in the case of job seeker recommendation due to the symmetric nature of the model architecture. Second, to compute the matching scores between new jobs and job seekers, some studies find jobs with similar content to the new ones and use the known (e.g., previously interacted) matching scores between them and the job seekers [15, 58, 82, 104, 121, 122, 138].
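The following is a minimal sketch of this second variant, assuming a hypothetical content-similarity function: the score between a job seeker and a new job is estimated from the seeker's known scores on content-wise similar jobs.

```python
# Minimal sketch: score a brand-new job for a job seeker by transferring
# the seeker's known scores from content-similar existing jobs.
# The similarity function and the data below are hypothetical placeholders.
def similarity(job_a: set, job_b: set) -> float:
    # Jaccard similarity on job keyword sets (stand-in for richer content models).
    return len(job_a & job_b) / len(job_a | job_b) if job_a | job_b else 0.0

known_scores = {"job1": 1.0, "job2": 0.0}          # seeker's observed interactions
job_content = {
    "job1": {"python", "backend"},
    "job2": {"sales", "travel"},
    "new_job": {"python", "cloud"},                # no interactions yet
}

sims = {j: similarity(job_content[j], job_content["new_job"]) for j in known_scores}
total = sum(sims.values())
estimate = sum(sims[j] * known_scores[j] for j in known_scores) / total if total else 0.0
print(round(estimate, 3))  # high because new_job resembles the liked job1
```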
Some studies predict the matching scores between job seekers and jobs using their features to deal with the cold start problem (e.g., using a machine learning method or a scoring function). The job categories that new job seekers are interested in are predicted using job seekers' textual content [122] or attributes [60] and are further exploited to provide job recommendations. Other papers have provided recommendations based on job seekers' and jobs' content, which tackles both the job seeker cold start and job cold start problems [15, 119, 120, 135, 137] (although many content-based methods could tackle the cold start problem with the same approach, here we only cite the papers that have explicitly addressed the cold start problem). Besides features extracted from job seekers' and jobs' content, several studies [58, 86, 119, 129] also extracted features for job seekers based on the jobs they have interacted with before. Hence, they can deal with the job cold start problem.

3.5 User preferences as well as suitability

Although considering user preferences is important in all recommendation systems, e-recruitment recommendation systems should also consider suitability in generating the recommendations, i.e., matching job seekers with job postings based on the similarity of their skills and requirements (SP aspect 2.3). Since matching based on the suitability of job seekers for job positions has been the main focus of e-recruitment recommendation systems, we discuss the studies focusing on capturing user preference. Suitability is usually captured by matching the requirements of a job position with the skills and other features of the job seekers, while preference is often captured by other factors in the profiles of job seekers and job postings, such as location, interests, etc., or by behavioral interactions. As discussed in Section 3.1.4, job seekers' preferences might change over time, and modeling these dynamic preferences is also a challenging task in e-recruitment recommendation. In this section, we first discuss the methods explicitly modeling user preferences, either based on explicit preferences in user profiles or using a preference model. Next, we present the approaches targeting the dynamic nature of user preferences. An overview of this section is presented in Fig. 4.

Fig. 4. An overview of the user preferences as well as suitability challenge: behavioral interactions, e.g., [79, 113, 122, 132, 136]; explicit preferences [61, 125]; preference model [12, 16–18, 60]; dynamic preferences (neural architectures, e.g. LSTM [50, 88, 104]; time-dependent features [88]; time-dependent loss function [88]).

Behavioral interactions between job seekers and job postings, such as click, apply, invite, etc., can show the user preferences to some extent. Hence, e-recruitment recommendation systems that use such behavioral interactions in their method are considering user preferences in generating recommendations (e.g., [79, 113, 122, 132, 136]).

Another way that user preferences are taken into consideration is by using explicit preferences specified in the user profile (e.g., interests, location, etc.) or in a dashboard. Gutiérrez et al. [61] designed a dashboard for job seekers to visualize and explore available vacancies based on their preferences. A fuzzy-based recommendation was proposed by Slama and Darmon [125] that matches job seekers with job postings based on the fuzzy preferences in their profiles. Although many studies use such features in the recommendation, we only discuss the papers that explicitly focus on user preferences.

User preferences are sometimes not obtained directly but rather through a preference model. Some studies learn such models from explicit feedback [12, 16–18]. Bills and Ng [18] proposed a matching model aimed at adults with autism. They asked both job seekers and employers some questions to form the preference vectors for both sides, and used them in the Gale-Shapley stable matching algorithm [51] to provide the recommendations. Another way that user preferences are modeled is by using implicit feedback and content. Gupta and Garg [60] designed preference matrices for different job seeker groups generated from historical data and used them in their hybrid recommender.
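For reference, here is a compact sketch of the classic Gale-Shapley algorithm [51] used in [18], in its textbook one-to-one form with toy preference lists; production matching would operate on learned preference vectors rather than hand-written rankings.

```python
# Minimal sketch of Gale-Shapley stable matching (one-to-one form).
# Job seekers "propose" to jobs in order of preference; each job keeps
# its best proposer so far. Preference lists are toy examples.
def gale_shapley(seeker_prefs: dict, job_prefs: dict) -> dict:
    free = list(seeker_prefs)                 # seekers not yet matched
    next_choice = {s: 0 for s in seeker_prefs}
    engaged = {}                              # job -> seeker
    rank = {j: {s: r for r, s in enumerate(p)} for j, p in job_prefs.items()}
    while free:
        s = free.pop(0)
        j = seeker_prefs[s][next_choice[s]]   # best job s has not proposed to yet
        next_choice[s] += 1
        if j not in engaged:
            engaged[j] = s
        elif rank[j][s] < rank[j][engaged[j]]:  # job prefers the new proposer
            free.append(engaged[j])
            engaged[j] = s
        else:
            free.append(s)
    return engaged

seekers = {"ann": ["dev", "qa"], "bob": ["dev", "qa"]}
jobs = {"dev": ["bob", "ann"], "qa": ["ann", "bob"]}
print(gale_shapley(seekers, jobs))  # {'dev': 'bob', 'qa': 'ann'}
```

The resulting matching is stable: no job seeker and job would both prefer each other over their assigned partners.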
To consider the dynamic nature of user preferences, the change in user preferences is usually captured through the interactions over time. Nigam et al. [104] employed a recurrent neural network (a bidirectional LSTM) with the attention mechanism to capture users' change of preferences over time. Liu et al. [88] proposed an ensemble recommendation system with three different recommenders. Observing the fact that users tend to re-interact with items, a time-reweighted linear ranking model was designed to compute the matching score of a job seeker and a job posting based on the frequency of their previous interactions. The time-dependent weights were learned by optimizing a smoothed hinge loss. Next, a temporal matrix factorization algorithm was designed by introducing a time-related loss term to consider the time of the interactions. Finally, an encoder-decoder model based on LSTM was employed to model the sequence of job seeker-job interactions. Fu et al. [50] proposed a person-job fit model to transform job seekers' and jobs' heterogeneous dynamic preferences (preferences based on different interactions such as click, apply, chat, invite, and match) into a unified preference space. First, job seekers and jobs were encoded using a hierarchical LSTM. Next, their dynamic preferences were captured through a Dynamic Multi-Key Value Memory Network. This network has a global key matrix for each interaction type (along with their attention weights) and a memory matrix for each job seeker's/job's preferences. Finally, to transfer the preferences from auxiliary behavior (interaction types other than match) to the matching task, the parameters were learned with a multi-task objective, which is the weighted sum of the losses for each interaction type.
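A minimal sketch of the time-reweighting idea (not the exact model of [88]): recent interactions between a job seeker and a posting contribute more to the matching score than old ones, here with a fixed exponential decay instead of learned weights.

```python
# Minimal sketch: time-reweighted interaction score. Recent interactions
# weigh more via exponential decay; in [88] the time weights are learned,
# here a fixed half-life is assumed for illustration.
import math

def time_weighted_score(interaction_days_ago: list, half_life: float = 30.0) -> float:
    decay = math.log(2) / half_life
    return sum(math.exp(-decay * d) for d in interaction_days_ago)

# A seeker who clicked a posting yesterday and today outranks one whose
# three clicks all happened months ago.
print(round(time_weighted_score([0, 1]), 3))        # ~1.977
print(round(time_weighted_score([90, 95, 100]), 3)) # ~0.336
```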
3.6 Interpretability and explainability

Interpretability often refers to model transparency and the ability to understand why and how the model generates its predictions. Explainability, on the other hand, often refers to the ability to explain the predictions in human terms, even for complex models. However, the two terms have often been used interchangeably, and we also use them interchangeably in this section. As described in Section 3.1.5, providing explanations for recommendations in e-recruitment is a challenging and important task since the recommendations affect people's future careers and explanations help them make more insightful decisions (HS aspect 2.5). In the rest of this section, we briefly discuss the different approaches proposed in the literature to achieve interpretability and explainability for e-recruitment recommendations, which include methods providing explainability in deep neural network models, interpretable machine learning methods, and explicit relations in data. An overview of the approaches that address interpretability and explainability is presented in Fig. 5.

Fig. 5. An overview of the interpretability and explainability challenge: deep neural networks (attention weights [81, 109, 110, 140]; explaining embedding dimensions [143]); other methods (interpretable machine learning methods [97, 98]; explicit relations in data [61, 99, 127]).

One way explainability is addressed in the deep neural models that use resumes and job descriptions for person-job fit prediction is to visualize the attention weights. The attention weights can show the importance of different words, sentences, or other parts of the resume/job description within that resume/job description [109, 110], and also their importance in matching with the words, sentences, or parts of the target job description/resume [81, 109, 110, 140]. Another way to address explainability in deep neural models was proposed by Zhu et al. [143]. For each dimension in the final representation of resumes and jobs resulting from the deep model, high-frequency words were gathered from other resumes and jobs that have high values for that dimension. Hence, a level of explainability was provided for each job posting or resume.

Another approach by which explainability is provided in the literature is applying interpretable machine learning methods, such as decision trees, to human-readable features [97, 98].

In other studies, explainability is provided using explicit relations in data. In [61], a dashboard was provided to view job seekers' affinity with the required skills of the jobs that are recommended. In [127], recommendations were generated using a knowledge graph together with a template for explainability, where the template was then completed using the nodes in the knowledge graph. Mentec et al. [99] provide explanations via the similarity of job seekers' and job postings' skills using a skill ontology.
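As a toy illustration of the interpretable-model route mentioned above, the following sketch, assuming scikit-learn, fits a small decision tree on hypothetical human-readable match features and prints the learned rules, which can double as an explanation.

```python
# Minimal sketch: an interpretable match classifier. The features
# (skill overlap, seniority gap, same city) and labels are made up.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [  # [skill_overlap, seniority_gap_years, same_city]
    [0.9, 0, 1], [0.8, 1, 0], [0.2, 3, 1], [0.1, 5, 0], [0.7, 0, 1], [0.3, 4, 0],
]
y = [1, 1, 0, 0, 1, 0]  # 1 = good match

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# The printed rules ("skill_overlap <= ...") are readable by recruiters
# and job seekers, unlike the internals of a deep matching model.
print(export_text(tree, feature_names=["skill_overlap", "seniority_gap", "same_city"]))
```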
3.7 Specific objectives

E-recruitment recommendation systems usually have to satisfy multiple stakeholders, such as employers, job seekers, and sometimes the recommendation platform, which benefits from matching job seekers with jobs (TS aspect 2.2). The platform's benefits are often subsumed by the job seekers' and employers' benefits, since job seekers' and employers' satisfaction also leads to more revenue for the recommendation platform. Hence, most studies try to improve the recommendations for job seekers and employers. In addition, some studies have considered specific objectives for e-recruitment recommendation systems (e.g., OWOJ aspect 2.1). We briefly discuss the papers dealing with such issues, which are also described in Section 3.1.6. An overview of this section is presented in Fig. 6.

Fig. 6. An overview of the specific objectives challenge: multiple stakeholders (reciprocal recommenders: recommending based on the labeled data of both sides used for training [13–15, 25, 50, 58, 64, 65, 72, 77, 80, 81, 84, 86, 89, 92–94, 98, 101, 109, 110, 117, 119, 129, 131, 135, 136, 140, 141, 143]; recommending based on the features of jobs and job seekers or some inference rules [8, 10, 18, 20, 27, 31, 42, 48, 62, 96, 111, 117, 125, 126]); OWOJ aspect 2.1 (stable matching [18]; job redistribution [22]); other objectives (specific objective functions [137]).

Since reciprocal recommenders recommend job seekers to job postings and vice versa, they usually consider the benefits of job seekers and employers at the same time. Some studies use historical interactions between job seekers and employers that show the interest of both sides for training. The labeled data for such methods usually includes interview and recruitment data [13–15, 25, 50, 58, 64, 65, 72, 80, 81, 84, 86, 89, 93, 98, 101, 109, 110, 119, 129, 131, 135, 136, 140, 141, 143], actions such as favorites or clicks by both job seekers and recruiters [92, 117], or manually annotated data [77, 94]. Other methods compute the matching degree of a job seeker and a job posting based on the similarity of their contents, skills, or other features, or by some inference rules [8, 10, 18, 20, 27, 31, 42, 48, 62, 63, 96, 111, 117, 125, 126], which can recommend jobs to job seekers and vice versa with the same approach.

Beyond the reciprocal nature of recommendation in e-recruitment, some studies have tried to consider the fact that in the job market, for a fixed period of time, each job seeker is hired for one (or a few) job positions and vice versa (OWOJ aspect 2.1). A stable matching algorithm was employed in [18] to find recommendations for job seekers and recruiters considering this aspect. Moreover, a job application redistribution system at LinkedIn was proposed in [22] to prevent job postings from receiving too many or too few applications. To achieve this goal, the job recommendation scores were penalized or boosted based on the number of applications predicted by a dynamic forecasting model.

Other objectives have also been investigated for e-recruitment recommendation systems. One such objective is to prevent job seekers from receiving spam. To address this issue, false positives were penalized harshly in the hybrid job recommendation approach proposed by Yang et al. [137].
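To make the redistribution idea concrete, here is a minimal sketch, loosely inspired by the penalize/boost scheme described for [22] but with an invented adjustment formula: postings forecast to be flooded with applications are demoted, and under-applied ones are promoted.

```python
# Minimal sketch: adjust job recommendation scores so that postings do
# not receive too many or too few applications. The adjustment formula
# and target are invented for illustration, not the LiJAR model [22].
def adjust(score: float, forecast_applications: int, target: int = 50) -> float:
    ratio = forecast_applications / target
    if ratio > 1.0:
        return score / ratio      # penalize over-subscribed postings
    return score * (2.0 - ratio)  # boost under-subscribed ones (up to 2x)

jobs = {"hot_job": (0.9, 200), "quiet_job": (0.6, 5)}  # (score, forecast)
ranked = sorted(jobs, key=lambda j: adjust(*jobs[j]), reverse=True)
print([(j, round(adjust(*jobs[j]), 3)) for j in ranked])
# quiet_job (boosted to 1.14) now outranks hot_job (demoted to 0.225)
```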
3.8 Bias and fairness

Problems related to bias and fairness in AI have gained more attention in recent years. Since e-recruitment affects people's career choices, it is crucial to consider the fairness aspects of the recommendations (HS aspect 2.5): e-recruitment is even defined as one of the high-risk domains according to the EU's AI Act (proposal) [32]. Realizing the limitations of purely algorithmic debiasing methods, some researchers have argued that mitigating bias and unfairness in e-recruitment deserves an interdisciplinary point of view involving legal and ethical considerations [112, 118]. Wang et al. [130] addressed the limitations of current debiasing technology by conducting an online user study showing that biased recommendations are preferred by job seekers, which indicates that human bias should be addressed from new perspectives or with new technology.

From a technical point of view, fairness concerns may exist on both sides [1], namely for job seekers and also for job postings, since recommendation in e-recruitment is multi-stakeholder. Some examples of such biases and fairness concerns are racial or gender discrimination against job seekers [85], popularity bias [2], selection bias [28], etc. Moreover, fairness concerns exist for both users and items in e-recruitment recommendation systems: e.g., job seekers with a certain sensitive attribute might not be recommended for specific jobs and also might not receive specific jobs in their recommendations. We briefly discuss the papers dealing with fairness issues, which are also described in Section 3.1.7. We first present the studies focusing on fairness for job seekers and then the papers addressing fairness issues for job postings. An overview of the approaches that address fairness issues in e-recruitment recommendation systems is presented in Fig. 7.

Fig. 7. An overview of the bias and fairness challenge: fairness for job seekers (reranking [53]; debiasing embeddings [70]); fairness for jobs (unbiased loss function [28]).

To provide fair recommendations concerning job seekers, Geyik et al. [53] proposed a fairness-aware framework for ranking job seekers, as used in search and job seeker recommendation. Four deterministic reranking algorithms were proposed to mitigate biased predictions towards any sensitive group. Islam et al. [70] addressed gender bias in job recommendation by proposing a neural fair collaborative filtering model (NFCF). Job seeker embeddings were pre-trained on non-e-recruitment recommendation data (e.g., movie recommendation) and then debiased with a technique similar to debiasing word vectors, so that the gender component is removed from each job seeker embedding. Next, the debiased job seeker embeddings were used in the fine-tuning stage for job recommendation to ensure that sensitive attributes do not affect the outputs of the system.

To provide fairness for job postings, Chen et al. [28] tackled the recency bias in job recommendation. They considered the recency bias as a type of selection bias imposed by the job seekers and designed an unbiased loss using inverse propensity weighting in a neural collaborative filtering model.
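A minimal sketch of the embedding-debiasing step in the style used by [70] (and word-vector debiasing before it): remove the component of each user embedding along an estimated sensitive-attribute direction. The vectors here are tiny made-up examples.

```python
# Minimal sketch: project the sensitive (e.g., gender) component out of
# user embeddings. The 3-d vectors are toy examples; real embeddings are
# learned and the bias direction is estimated from attribute groups.
import numpy as np

def debias(embeddings: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    g = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each embedding's projection onto the bias direction.
    return embeddings - np.outer(embeddings @ g, g)

users = np.array([[0.8, 0.1, 0.5], [0.2, 0.9, 0.4]])
gender_dir = np.array([1.0, -1.0, 0.0])  # hypothetical estimated direction

clean = debias(users, gender_dir)
# After debiasing, the embeddings carry no component along the direction.
print(np.round(clean @ (gender_dir / np.linalg.norm(gender_dir)), 6))  # ~[0, 0]
```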
3.9 Large scale

Real-world job recommendation systems have to deal with millions of job seekers and job postings. Hence, recommending at large scale needs to be considered in online job market platforms. We briefly discuss the papers dealing with the large scale issues described in Section 3.1.8, which involve reducing the execution time and the consumed storage/memory in the training and inference phases. An overview of the approaches that address large scale issues in e-recruitment recommendation systems is presented in Fig. 8.

Fig. 8. An overview of the large scale challenge: training phase (item-based methods [122]; scalable algorithms, e.g. parallel methods [139]); inference phase (two-stage methods [21, 141]); both phases (big data computations [23]; reducing the number of individual jobs/job seekers, e.g. by clustering [29, 40, 100]).

To deal with execution time and storage/memory consumption during the training phase, a study from CareerBuilder4 [122] created an item-based graph of jobs with edges representing job similarities based on behavioral and content-based signals. An item-based graph of jobs with different similarity scores was used rather than a user-based (job seeker based) or user-item (job-job seeker) graph for scalability. A subgraph of this job graph was selected by a job seeker's resume or past clicks, and the recommendations were generated by applying the PageRank algorithm to this subgraph. In a study at LinkedIn [139], a scalable algorithm (a parallel block-wise coordinate descent algorithm) was designed for learning the GLMix model to predict the user response.

4 https://www.careerbuilder.com/

To deal with the response time in the inference phase, a two-stage architecture is often used by the industry leaders, where the first stage selects a pool of candidates from a large number of items using a computationally inexpensive model, and the second stage reranks the results using a more expensive model. One example of such two-stage architectures was designed for recommendation at CareerBuilder [141]. The first stage selects hundreds of candidates from millions using FAISS [74] to find the nearest neighbors of an entity in the embedding space. The embeddings were calculated through three components: a deep neural network to learn from the textual data, a representation framework to learn from three graphs constructed from jobs and skills [33], and a geolocation embedding calculator [89]. The second stage reranks the candidates using a weighted linear combination of the first-stage scores and context-based scores. In [21], a candidate selection model, CasMoS, was proposed as the first stage in the two-stage recommendation framework at LinkedIn. CasMoS learns the first-stage candidate selection model using the Weighted AND (WAND) query operator [24].

From another perspective, to deal with scalability issues in both the training and inference phases, Boukari et al. [23] employed Apache Spark, a tool to process big data, to recommend jobs to job seekers using a content-based algorithm. Another approach to deal with big data and the large number of entities is to cluster jobs and/or job seekers [29, 40, 100].
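A minimal sketch of the two-stage pattern follows, with a brute-force NumPy nearest-neighbor search standing in for an ANN index such as FAISS and a deliberately trivial "expensive" reranker; all data is random for illustration.

```python
# Minimal sketch of a two-stage recommender: a cheap vector search
# shortlists candidates, then a more expensive scorer reranks only the
# shortlist. Brute-force NumPy stands in for an ANN index like FAISS.
import numpy as np

rng = np.random.default_rng(0)
job_embeddings = rng.normal(size=(100_000, 32))   # toy corpus
seeker = rng.normal(size=32)

# Stage 1: inexpensive retrieval, top-200 by dot product.
stage1_scores = job_embeddings @ seeker
candidates = np.argpartition(-stage1_scores, 200)[:200]

def expensive_score(job_id: int) -> float:
    # Placeholder for a heavy model (e.g., a deep cross-attention scorer);
    # here, the stage-1 score plus noise just to illustrate reranking.
    return stage1_scores[job_id] + rng.normal(scale=0.1)

# Stage 2: rerank only the 200 shortlisted jobs, return the top 10.
top10 = sorted(candidates, key=expensive_score, reverse=True)[:10]
print(top10)
```

The design point is that the heavy model only ever sees a few hundred items instead of the full corpus, which keeps inference latency bounded.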
3.10 Papers not included in previous sections

Some of the collected papers are not included in the previous sections because they did not directly address any of the challenges discussed in this survey [5–7, 19, 26, 30, 33, 34, 39, 41, 46, 47, 56, 66, 67, 69, 73, 75, 78, 83, 87, 102, 105, 107, 108, 114, 133, 142, 144]. However, some papers tackle a specific challenge in e-recruitment recommendation systems, such as dealing with missing features [73] or applying different recommendation strategies for different groups of job seekers [69, 73]. We did not discuss such challenges since either there were not many papers dealing with the same issues or these issues were considered to be of lesser practical significance compared to the challenges highlighted in the present survey. Practical challenges and lessons learned from the e-recruitment recommendation system at LinkedIn are also discussed in two talks [54, 76].

4 CONCLUSION

In this section we provide our final remarks. We first provide a summary of this survey in Section 4.1. Next, we discuss the limitations of this survey in Section 4.2. Finally, open challenges and future research directions for recommendation in e-recruitment are discussed in Section 4.3.

4.1 Summary

E-recruitment recommendation includes recommending jobs to job seekers and job seekers to jobs. We identified eight challenges that have been studied in the past decade for recommendation in e-recruitment. Since the available data for training an e-recruitment recommendation model includes the interactions between job seekers and job positions together with their features and textual contents, several studies have addressed data quality issues.

Job seekers' and jobs' data usually include textual content, location, categorical features, etc., which could also be enriched by external data sources. Moreover, there are many interaction types, such as click, apply, invite, chat, interview, etc., in e-recruitment platforms. Therefore, dealing with heterogeneous data, and multiple interaction types and data sources is another challenge in e-recruitment.

Since job positions with the same content are often represented as different entities in e-recruitment recommendation systems (different job entities with distinct IDs may have the same title/content), the cold start problem needs more attention in e-recruitment recommendation compared to traditional recommenders. The availability of many facets in the e-recruitment domain could help alleviate the cold start problem.

Traditional recommendation systems mainly consider user preferences for generating the recommendations, while e-recruitment recommendation systems have to match job seekers with jobs based on the job seekers' skills and the jobs' required skills as well. Hence, e-recruitment recommendation systems should consider user preferences as well as suitability.

Explainable recommendations in general help users make better decisions. Nonetheless, interpretability and explainability are even more important in e-recruitment recommendation systems since e-recruitment recommendation has a great influence on job seekers' future careers and also on the employers of companies.

Recommendation systems in a specific domain could have specific objectives. In e-recruitment, the goal is usually to satisfy multiple stakeholders, including job seekers, recruiters, and service providers. Moreover, e-recruitment recommendation systems should consider the fact that each job seeker can be employed for only one or a few job positions and vice versa, which introduces new objectives for recommendation systems.

Bias and fairness issues are challenging for most recommendation systems. In e-recruitment, it is even more critical to provide fair recommendations due to the possible high stakes involved for both job seekers and employers.

Finally, large scale issues cannot be ignored in designing real-world recommendation systems. Since e-recruitment recommendation systems usually have to provide services for thousands or millions of job seekers and job positions, they have to consider the large scale aspects of the recommendation system.

4.2 Limitations of the survey

We have selected and elaborated the main challenges in e-recruitment recommendation from our point of view, but there could be other challenges in this domain. For example, extracting features from textual data at different granularities could also be considered another challenge, albeit not specific to the e-recruitment domain. Identifying more challenges and categorizing papers based on their approaches to address them remains for the future.

Since e-recruitment recommendation could be seen as a reciprocal recommendation task (recommending jobs to job seekers and vice versa), reviewing the challenges in other reciprocal recommendation systems (e.g., online dating) could also be useful for designing e-recruitment recommendation systems. We omitted papers from other reciprocal recommendation domains to limit the scope of this survey.

4.3 Open challenges and future research directions

While there has been much useful work addressing certain aspects of e-recruitment recommendation systems, there are still open challenges in this domain that could be investigated in future research. Some of the challenges that we personally consider promising include:

• One worker, one job (OWOJ aspect 2.1). Since each job seeker can only be employed for one or a few jobs and a job can be assigned to one or a few candidates, balancing the recommendations in a way that job postings do not receive too many or too few applications is of great importance. Moreover, each job/job seeker should receive recommendations with a high chance of success. This would require the recommendation system to consider the relative probability of matching, that is, how likely one's recommended jobs would be successfully matched with other job seekers.
Although some aspects of these issues have been addressed in a few papers (see Section 3.7), this challenge still needs further investigation for more insights and new solutions.

• Career path recommendation. Some job seekers choose their next jobs in a way that helps them reach their dream jobs in the future. This problem has been addressed by a few career path recommendation systems, which recommend intermediate jobs to reach a final career goal [55]. This line of research could be investigated in future studies.

• Domain adaptation. Domain adaptation techniques can improve model performance with limited labeled data, but the application of such techniques in e-recruitment recommendation has not been well investigated, except in a few studies such as [14]. Methods for domain adaptation between different job sectors, languages, platforms, countries, etc. would be worth investigating to improve the performance of e-recruitment recommendation systems.

• Multi-linguality. Many platforms/countries have resumes and job postings in multiple languages. Hence, e-recruitment recommendation systems on those platforms/in those countries should support multiple languages and cross-matching of resumes and job postings in different languages. Although some papers have addressed this problem (see Section 3.2), further investigation is still needed to provide better support for multi-lingual platforms.

• Conversational. Conversational recommendation systems perform multi-turn dialogue with users to achieve recommendation-related goals [71]. Although conversational recommendation systems have become more popular in recent years [52], few studies have explored conversational settings in the e-recruitment domain [12, 16, 17, 99]. Conversational recommendation can elicit the current user's preferences, provide explanations, make use of explicit feedback, etc., which makes it valuable for e-recruitment and worthwhile for future studies [52].

• Specific job seekers. Some groups of job seekers may need special attention from e-recruitment recommendation systems. First, user interfaces need to be designed specifically for certain user groups to enhance their interactions with the system (e.g., for people with special needs). This aspect should also be considered for some groups of recruiters. Moreover, some groups of job seekers might be fit for specific jobs. For example, adults with autism are among the most underemployed demographics [18]. However, they have special skills to contribute to the workplace if applied to the right job [18]. Although there have been some job recommenders designed for specific job seekers such as students and new graduates [46, 75, 90–92, 107, 121, 142], the elderly [10], and people with special needs [18, 124], exploring the needs of more subgroups of job seekers could greatly benefit the e-recruitment field. More specifically, designing a taxonomy of different groups of job seekers with their characteristics and needs would be a good starting point, which could further encourage collecting data for designing recommendation methods that take the differences between different groups of job seekers into consideration.

• Fairness. Fair recommendation in e-recruitment is even more important than in other recommendation systems because people's career choices are influenced by their recommended jobs and the recommendations may also have a long-term impact on the labor market (HS aspect 2.5). Although there has been growing attention to fairness issues in general recommendation settings, not many papers specifically address these issues in e-recruitment recommendation systems (as shown in Section 3.8).
One reason could be that the fairness issues are more complicated than in other recommendation systems due to the reciprocal nature and multiple stakeholders involved in e-recruitment. Another reason might be that there are relatively few open datasets for this specific field, as elaborated below.

Another challenge in research on e-recruitment recommendation systems is that few public datasets are available. As far as we know, there are only two public datasets: the CareerBuilder 2012 dataset5 on Kaggle6 from the e-recruitment platform CareerBuilder7, and the Zhilian dataset8 from the Chinese e-recruitment platform Zhilian9. The two datasets for the RecSys challenges 2016 [3] and 2017 [4], provided by the e-recruitment platform Xing10, although used in some related studies, are not publicly available. Advances in e-recruitment recommendation systems from academic research depend on the availability of public datasets: more publicly available data could help to establish stronger benchmarks, and a larger variety of datasets could also facilitate new ideas in the field.

5 https://www.kaggle.com/c/job-recommendation
6 https://www.kaggle.com/
7 https://www.careerbuilder.com/
8 https://tianchi.aliyun.com/dataset/dataDetail?dataId=31623
9 https://www.zhaopin.com
10 https://www.xing.com

ACKNOWLEDGMENTS

The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) (ERC Grant Agreement no. 615517), and under the European Union's Horizon 2020 research and innovation programme (ERC Grant Agreement no. 963924), from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme, and from the FWO (project no. G091017N, G0F9816N, 3G042220).

REFERENCES

[1] Himan Abdollahpouri, Gediminas Adomavicius, Robin Burke, Ido Guy, Dietmar Jannach, Toshihiro Kamishima, Jan Krasnodebski, and Luiz Pizzato. 2020. Multistakeholder recommendation: Survey and research directions. User Modeling and User-Adapted Interaction 30, 1 (2020), 127–158.
[2] Himan Abdollahpouri, Masoud Mansoury, Robin Burke, and Bamshad Mobasher. 2020. Addressing the multistakeholder impact of popularity bias in recommendation through calibration. arXiv preprint arXiv:2007.12230 (2020).
[3] Fabian Abel, András Benczúr, Daniel Kohlsdorf, Martha Larson, and Róbert Pálovics. 2016. RecSys challenge 2016: Job recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems. 425–426.
[4] Fabian Abel, Yashar Deldjoo, Mehdi Elahi, and Daniel Kohlsdorf. 2017. RecSys challenge 2017: Offline and online evaluation. In Proceedings of the Eleventh ACM Conference on Recommender Systems. 372–373.
[5] Shaha Al-Otaibi and Mourad Ykhlef. 2017. Hybrid immunizing solution for job recommender system. Frontiers of Computer Science 11, 3 (2017), 511–527.
[6] Nikolaos D Almalis, George A Tsihrintzis, and Nikolaos Karagiannis. 2014. A content based approach for recommending personnel for job positions. In IISA 2014, The 5th International Conference on Information, Intelligence, Systems and Applications. IEEE, 45–49.
[7] Nikolaos D Almalis, George A Tsihrintzis, and Nikolaos Karagiannis. 2014. A content based approach for recommending personnel for job positions. In IISA 2014, The 5th International Conference on Information, Intelligence, Systems and Applications. IEEE, 45–49.
[8] Nikolaos D Almalis, George A Tsihrintzis, Nikolaos Karagiannis, and Aggeliki D Strati. 2015. FoDRA - A new content-based job recommendation algorithm for job seeking and recruiting. In 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA).
IEEE, 1–7.
[9] Honorio Apaza, Américo Ariel Rubin de Celis Vidal, and Josimar Edinson Chire Saire. 2021. Job recommendation based on curriculum vitae using text mining. In Future of Information and Communication Conference. Springer, 1051–1059.
[10] Shoma Arita, Atsushi Hiyama, and Michitaka Hirose. 2017. Gber: A social matching app which utilizes time, place, and skills of workers and jobs. In Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 127–130.
[11] Shivam Bansal, Aman Srivastava, and Anuja Arora. 2017. Topic modeling driven content based jobs recommendation engine for recruitment industry. Procedia Computer Science 122 (2017), 865–872.
[12] Vito Bellini, Giovanni Maria Biancofiore, Tommaso Di Noia, Eugenio Di Sciascio, Fedelucio Narducci, and Claudio Pomo. 2020. GUapp: a conversational agent for job recommendation for the Italian public administration. In 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS). IEEE, 1–7.
[13] Shuqing Bian, Xu Chen, Wayne Xin Zhao, Kun Zhou, Yupeng Hou, Yang Song, Tao Zhang, and Ji-Rong Wen. 2020. Learning to match jobs with resumes from sparse interaction data using multi-view co-teaching network. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 65–74.
[14] Shuqing Bian, Wayne Xin Zhao, Yang Song, Tao Zhang, and Ji-Rong Wen. 2019. Domain adaptation for person-job fit with transferable deep global match network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 4810–4820.
[15] Mattia Bianchi, Federico Cesaro, Filippo Ciceri, Mattia Dagrada, Alberto Gasparin, Daniele Grattarola, Ilyas Inajjar, Alberto Maria Metelli, and Leonardo Cella. 2017. Content-based approaches for cold-start job recommendations. In Proceedings of the Recommender Systems Challenge 2017. 1–5.
[16] Giovanni Maria Biancofiore, Tommaso Di Noia, Eugenio Di Sciascio, Fedelucio Narducci, and Paolo Pastore. 2021. GUapp: a knowledge-aware conversational agent for job recommendation. In Proceedings of the Joint KaRS & ComplexRec Workshop. CEUR-WS.
[17] Giovanni Maria Biancofiore, Tommaso Di Noia, Eugenio Di Sciascio, Fedelucio Narducci, and Paolo Pastore. 2021. GUapp: Enhancing job recommendations with knowledge graphs. In Proceedings of the 11th Italian Information Retrieval Workshop. CEUR-WS.
[18] Joseph Bills and Yiu-Kai Dennis Ng. 2021. Looking for jobs? Matching adults with autism with potential employers for job opportunities. In 25th International Database Engineering & Applications Symposium. 212–221.
[19] Ronie C Bituin, Ronielle B Antonio, and James A Esquivel. 2020. Harmonic means between TF-IDF and angle of similarity to identify prospective applicants in a recruitment setting. In 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence. 1–5.
[20] Jacob Bollinger, David Hardtke, and Ben Martin. 2012. Using social data for resume job matching. In Proceedings of the 2012 Workshop on Data-driven User Behavioral Modelling and Mining from Social Media. 27–30.
[21] Fedor Borisyuk, Krishnaram Kenthapadi, David Stein, and Bo Zhao. 2016.
CaSMoS: A framework for learning candidate selection models over structured queries and documents. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 441–450.
[22] Fedor Borisyuk, Liang Zhang, and Krishnaram Kenthapadi. 2017. LiJAR: A system for job application redistribution towards efficient career marketplace. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1397–1406.
[23] Shayma Boukari, Sondes Fayech, and Rim Faiz. 2020. Huntalent: A candidates recommendation system for automatic recruitment via LinkedIn. In 2020 Seventh International Conference on Social Networks Analysis, Management and Security (SNAMS). IEEE, 1–7.
[24] Andrei Z Broder, David Carmel, Michael Herscovici, Aya Soffer, and Jason Zien. 2003. Efficient query evaluation using a two-level retrieval process. In Proceedings of the Twelfth International Conference on Information and Knowledge Management. 426–434.
[25] Alan Cardoso, Fernando Mourão, and Leonardo Rocha. 2021. The matching scarcity problem: When recommenders do not connect the edges in recruitment services. Expert Systems with Applications 175 (2021), 114764.
[26] Tommaso Carpi, Marco Edemanti, Ervin Kamberoski, Elena Sacchi, Paolo Cremonesi, Roberto Pagano, and Massimo Quadrana. 2016. Multi-stack ensemble for job recommendation. In Proceedings of the Recommender Systems Challenge. 1–4.
[27] Sisay Chala and Madjid Fathi. 2017. Job seeker to vacancy matching using social network analysis. In 2017 IEEE International Conference on Industrial Technology (ICIT). IEEE, 1250–1255.
[28] Ruey-Cheng Chen, Qingyao Ai, Gaya Jayasinghe, and W Bruce Croft. 2019. Correcting for recency bias in job recommendation. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2185–2188.
[29] Wenbo Chen, Pan Zhou, Shaokang Dong, Shimin Gong, Menglan Hu, Kehao Wang, and Dapeng Wu. 2018. Tree-based contextual learning for online job or candidate recommendation with big data support in professional social networks. IEEE Access 6 (2018), 77725–77739.
[30] Oualid Chenni, Yanis Bouda, Hamid Benachour, and Chahnez Zakaria. 2015. A content-based recommendation approach using semantic user profile in e-recruitment. In International Conference on Theory and Practice of Natural Computing. Springer, 23–32.
[31] Bruno Coelho, Fernando Costa, and Gil M Gonçalves. 2015. Hyred: Hybrid job recommendation system. In 2015 12th International Joint Conference on e-Business and Telecommunications (ICETE), Vol. 2. IEEE, 29–38.
[32] Council of European Union. 2022. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206
[33] Vachik S Dave, Baichuan Zhang, Mohammad Al Hasan, Khalifeh AlJadda, and Mohammed Korayem. 2018. A combined representation learning approach for better job and skill recommendation. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management. 1997–2005.
[34] Toon De Pessemier, Kris Vanhecke, and Luc Martens. 2016. A scalable, high-performance algorithm for hybrid job recommendations. In Proceedings of the Recommender Systems Challenge. 1–4.
[35] Corné de Ruijt and Sandjai Bhulai. 2021. Job recommender systems: A review. arXiv preprint arXiv:2111.13576 (2021).
[36] Mamadou Diaby and Emmanuel Viennet. 2014.
[36] Mamadou Diaby and Emmanuel Viennet. 2014. Taxonomy-based job recommender systems on Facebook and LinkedIn profiles. In 2014 IEEE Eighth International Conference on Research Challenges in Information Science (RCIS). IEEE, 1–6.
[37] Mamadou Diaby, Emmanuel Viennet, and Tristan Launay. 2013. Toward the next generation of recruitment tools: an online social network-based job recommender system. In 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013). IEEE, 821–828.
[38] Mamadou Diaby, Emmanuel Viennet, and Tristan Launay. 2014. Exploration of methodologies to improve job recommender systems on social networks. Social Network Analysis and Mining 4, 1 (2014), 1–17.
[39] Giacomo Domeniconi, Gianluca Moro, Andrea Pagliarani, Karin Pasini, and Roberto Pasolini. 2016. Job recommendation from semantic similarity of linkedin users' skills. In International Conference on Pattern Recognition Applications and Methods, Vol. 2. SciTePress, 270–277.
[40] Shaokang Dong, Zijian Lei, Pan Zhou, Kaigui Bian, and Guanghui Liu. 2017. Job and candidate recommendation with big data support: a contextual online learning approach. In GLOBECOM 2017 - 2017 IEEE Global Communications Conference. IEEE, 1–7.
[41] Verena Eitle, Felix Peters, Andreas Welsch, and Peter Buxmann. 2021. The Impact of CV Recommender Systems on Procedural Justice in Recruiting: An Experiment in Candidate Selection. (2021).
[42] Ziad Elgammal, Abdullah Barmu, Hamza Hassan, Khaled Elgammal, Tansel Özyer, and Reda Alhajj. 2021. Matching Applicants with Positions for Better Allocation of Employees in the Job Market. In 2021 22nd International Arab Conference on Information Technology (ACIT). IEEE, 1–5.
[43] Ahmed Elsafty, Martin Riedl, and Chris Biemann. 2018. Document-based recommender system for job postings using dense representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers). 216–224.
[44] Evanthia Faliagka, Lazaros Iliadis, Ioannis Karydis, Maria Rigou, Spyros Sioutas, Athanasios Tsakalidis, and Giannis Tzimas. 2014. On-line consistent ranking on e-recruitment: seeking the truth behind a well-formed CV. Artificial Intelligence Review 42, 3 (2014), 515–528.
[45] Evanthia Faliagka, Athanasios Tsakalidis, and Giannis Tzimas. 2012. An integrated e-recruitment system for automated personality mining and applicant ranking. Internet Research (2012).
[46] Peini Feng, Charles Jiahao Jiang, Jiale Wang, Sunny Yeung, and Xijie Li. 2021. Job Recommendation System Based on Analytic Hierarchy Process and K-means Clustering. In 2021 The 13th International Conference on Computer Modeling and Simulation. 104–113.
[47] Francis C Fernández-Reyes and Suraj Shinde. 2019. CV Retrieval System based on job description matching using hybrid word embeddings. Computer Speech & Language 56 (2019), 73–79.
[48] Mauricio Noris Freire and Leandro Nunes de Castro. 2020. A Framework for e-Recruitment Recommender Systems. In International Conference on Artificial Intelligence and Soft Computing. Springer, 165–175.
[49] Mauricio Noris Freire and Leandro Nunes de Castro. 2021. e-Recruitment recommender systems: a systematic review. Knowledge and Information Systems 63, 1 (2021), 1–20.
[50] Bin Fu, Hongzhi Liu, Yao Zhu, Yang Song, Tao Zhang, and Zhonghai Wu. 2021. Beyond Matching: Modeling Two-Sided Multi-Behavioral Sequences for Dynamic Person-Job Fit. In International Conference on Database Systems for Advanced Applications. Springer, 359–375.
[51] David Gale and Lloyd S Shapley. 2013. College admissions and the stability of marriage. The American Mathematical Monthly 120, 5 (2013), 386–391.
[52] Chongming Gao, Wenqiang Lei, Xiangnan He, Maarten de Rijke, and Tat-Seng Chua. 2021. Advances and challenges in conversational recommender systems: A survey. AI Open 2 (2021), 100–126.
[53] Sahin Cem Geyik, Stuart Ambler, and Krishnaram Kenthapadi. 2019. Fairness-aware ranking in search & recommendation systems with application to LinkedIn talent search. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. 2221–2231.
[54] Sahin Cem Geyik, Qi Guo, Bo Hu, Cagri Ozcaglar, Ketan Thakkar, Xianren Wu, and Krishnaram Kenthapadi. 2018. Talent search and recommendation systems at LinkedIn: Practical challenges and lessons learned. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. 1353–1354.
[55] Aritra Ghosh, Beverly Woolf, Shlomo Zilberstein, and Andrew Lan. 2020. Skill-based Career Path Modeling and Recommendation. In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 1156–1165.
[56] Alfonso González-Briones, Alberto Rivas, Pablo Chamoso, Roberto Casado-Vara, and Juan Manuel Corchado. 2018. Case-based reasoning and agent based job offer recommender system. In The 13th International Conference on Soft Computing Models in Industrial and Environmental Applications. Springer, 21–33.
[57] Akshay Gugnani and Hemant Misra. 2020. Implicit skills extraction using document embedding and its use in job recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 13286–13293.
[58] Cheng Guo, Hongyu Lu, Shaoyun Shi, Bin Hao, Bin Liu, Min Zhang, Yiqun Liu, and Shaoping Ma. 2017. How integration helps on cold-start recommendations. In Proceedings of the Recommender Systems Challenge 2017. 1–6.
[59] Shiqiang Guo, Folami Alamudun, and Tracy Hammond. 2016. RésuMatcher: A personalized résumé-job matching system. Expert Systems with Applications 60 (2016), 169–182.
[60] Anika Gupta and Deepak Garg. 2014. Applying data mining techniques in job recommender system for considering candidate job preferences. In 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, 1458–1465.
[61] Francisco Gutiérrez, Sven Charleer, Robin De Croon, Nyi Nyi Htun, Gerd Goetschalckx, and Katrien Verbert. 2019. Explaining and exploring job recommendations: a user-driven approach for interacting with knowledge-based job recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems. 60–68.
[62] Amine Habous and El Habib Nfaoui. 2021. A fuzzy logic and ontology-based approach for improving the CV and job offer matching in recruitment process. International Journal of Metadata, Semantics and Ontologies 15, 2 (2021), 104–120.
[63] Claudia Hauff and Georgios Gousios. 2015. Matching GitHub developer profiles to job advertisements. In 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. IEEE, 362–366.
[64] Miao He, Dayong Shen, Tao Wang, Hua Zhao, Zhongshan Zhang, and Renjie He. 2021. Self-Attentional Multi-Field Features Representation and Interaction Learning for Person-Job Fit. IEEE Transactions on Computational Social Systems (2021).
[65] Miao He, Tao Wang, Yuanyuan Zhu, Yingguo Chen, Feng Yao, and Ning Wang. 2021. FINN: Feature Interaction Neural Network for Person-Job Fit. In 2021 7th International Conference on Big Data and Information Analytics (BigDIA). IEEE, 123–130.
[66] Bradford Heap, Alfred Krzywicki, Wayne Wobcke, Mike Bain, and Paul Compton. 2014. Combining career progression and profile matching in a job recommender system. In Pacific Rim International Conference on Artificial Intelligence. Springer, 396–408.
[67] Islam A Heggo and Nashwa Abdelbaki. 2018. Hybrid information filtering engine for personalized job recommender system. In International Conference on Advanced Machine Learning Technologies and Applications. Springer, 553–563.
[68] Wenxing Hong, Siting Zheng, and Huan Wang. 2013. Dynamic user profile-based job recommender system. In 2013 8th International Conference on Computer Science & Education. IEEE, 1499–1503.
[69] Wenxing Hong, Siting Zheng, Huan Wang, and Jianchao Shi. 2013. A job recommender system based on user clustering. J. Comput. 8, 8 (2013), 1960–1967.
[70] Rashidul Islam, Kamrun Naher Keya, Ziqian Zeng, Shimei Pan, and James Foulds. 2021. Debiasing career recommendations with neural fair collaborative filtering. In Proceedings of the Web Conference 2021. 3779–3790.
[71] Dietmar Jannach, Ahtsham Manzoor, Wanling Cai, and Li Chen. 2021. A survey on conversational recommender systems. ACM Computing Surveys (CSUR) 54, 5 (2021), 1–36.
[72] Junshu Jiang, Songyun Ye, Wei Wang, Jingran Xu, and Xiaosheng Luo. 2020. Learning effective representations for person-job fit by feature fusion. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2549–2556.
[73] Miao Jiang, Yi Fang, Huangming Xie, Jike Chong, and Meng Meng. 2019. User click prediction for personalized job recommendation. World Wide Web 22, 1 (2019), 325–345.
[74] Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data 7, 3 (2019), 535–547.
[75] Krishnaram Kenthapadi, Benjamin Le, and Ganesh Venkataraman. 2017. Personalized job recommendation system at linkedin: Practical challenges and lessons learned. In Proceedings of the eleventh ACM conference on recommender systems. 346–347.
[76] Krishnaram Kenthapadi, Benjamin Le, and Ganesh Venkataraman. 2017. Personalized job recommendation system at linkedin: Practical challenges and lessons learned. In Proceedings of the eleventh ACM conference on recommender systems. 346–347.
[77] Aparup Khatua and Wolfgang Nejdl. 2020. Matching recruiters and job seekers on twitter. In 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 266–269.
[78] Emanuel Lacic, Markus Reiter-Haas, Tomislav Duricic, Valentin Slawicek, and Elisabeth Lex. 2019. Should we embed? A study on the online performance of utilizing embeddings for real-time job recommendations. In Proceedings of the 13th ACM Conference on Recommender Systems. 496–500.
[79] Emanuel Lacic, Markus Reiter-Haas, Dominik Kowald, Manoj Reddy Dareddy, Junghoo Cho, and Elisabeth Lex. 2020. Using autoencoders for session-based job recommendations. User Modeling and User-Adapted Interaction 30, 4 (2020), 617–658.
[80] Dor Lavi, Volodymyr Medentsiy, and David Graus. 2021. conSultantBERT: Fine-tuned Siamese Sentence-BERT for Matching Jobs and Job Seekers. arXiv preprint arXiv:2109.06501 (2021).
[81] Ran Le, Wenpeng Hu, Yang Song, Tao Zhang, Dongyan Zhao, and Rui Yan. 2019. Towards effective and interpretable person-job fitting. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1883–1892.
[82] Yeon-Chang Lee, Jiwon Hong, and Sang-Wook Kim. 2016. Job recommendation in askstory: experiences, methods, and evaluation. In Proceedings of the 31st Annual ACM Symposium on Applied Computing. 780–786.
[83] Vasily Leksin and Andrey Ostapets. 2016. Job recommendation based on factorization machine and topic modelling. In Proceedings of the Recommender Systems Challenge. 1–4.
[84] Changmao Li, Elaine Fisher, Rebecca Thomas, Steve Pittard, Vicki Hertzberg, and Jinho D Choi. 2020. Competence-level prediction and resume & job description matching using context-aware transformer models. arXiv preprint arXiv:2011.02998 (2020).
[85] Yunqi Li, Hanxiong Chen, Shuyuan Xu, Yingqiang Ge, Juntao Tan, Shuchang Liu, and Yongfeng Zhang. 2022. Fairness in Recommendation: A Survey. arXiv preprint arXiv:2205.13619 (2022).
[86] Jianxun Lian, Fuzheng Zhang, Min Hou, Hongwei Wang, Xing Xie, and Guangzhong Sun. 2017. Practical lessons for job recommendations in the cold-start scenario. In Proceedings of the Recommender Systems Challenge 2017. 1–6.
[87] Yiou Lin, Hang Lei, Prince Clement Addo, and Xiaoyu Li. 2016. Machine learned resume-job matching solution. arXiv preprint arXiv:1607.07657 (2016).
[88] Kuan Liu, Xing Shi, Anoop Kumar, Linhong Zhu, and Prem Natarajan. 2016. Temporal learning and sequence modeling for a job recommender system. In Proceedings of the Recommender Systems Challenge. 1–4.
[89] Mengshu Liu, Jingya Wang, Kareem Abdelfatah, and Mohammed Korayem. 2019. Tripartite vector representations for better job recommendation. arXiv preprint arXiv:1907.12379 (2019).
[90] Rui Liu, Yuanxin Ouyang, Wenge Rong, Xin Song, Cui Tang, and Zhang Xiong. 2016. Rating prediction based job recommendation service for college students. In International conference on computational science and its applications. Springer, 453–467.
[91] Rui Liu, Wenge Rong, Yuanxin Ouyang, and Zhang Xiong. 2017. A hierarchical similarity based job recommendation service framework for university students. Frontiers of Computer Science 11, 5 (2017), 912–922.
[92] Yao Lu, Sandy El Helou, and Denis Gillet. 2013. A recommender system for job seeking and recruiting website. In Proceedings of the 22nd International Conference on World Wide Web. 963–966.
[93] Yong Luo, Huaizheng Zhang, Yonggang Wen, and Xinwen Zhang. 2019. Resumegan: an optimized deep representation learning framework for talent-job fit via adversarial learning. In Proceedings of the 28th ACM international conference on information and knowledge management. 1101–1110.
[94] Saket Maheshwary and Hemant Misra. 2018. Matching resumes to jobs via deep siamese network. In Companion Proceedings of the The Web Conference 2018. 87–88.
[95] Emmanuel Malherbe, Mamadou Diaby, Mario Cataldi, Emmanuel Viennet, and Marie-Aude Aufaure. 2014. Field selection for job categorization and recommendation to social network users. In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014). IEEE, 588–595.
[96] Mohammed Maree, Aseel B Kmail, and Mohammed Belkhatir. 2019. Analysis and shortcomings of e-recruitment systems: Towards a semantics-based approach addressing knowledge incompleteness and limited domain coverage. Journal of Information Science 45, 6 (2019), 713–735.
[97] Jorge Martinez-Gil, Bernhard Freudenthaler, and Thomas Natschläger. 2018. Recommendation of job offers using random forests and support vector machines. In Proceedings of the EDBT/ICDT joint conference.
[98] Mohamed Amine Menacer, Fatma Ben Hamda, Ghada Mighri, Sabeur Ben Hamidene, and Maxime Cariou. 2021. An interpretable person-job fitting approach based on classification and ranking. In Proceedings of The Fourth International Conference on Natural Language and Speech Processing (ICNLSP 2021). 130–138.
[99] François Mentec, Zoltán Miklós, Sébastien Hervieu, and Thierry Roger. 2021. Conversational recommendations for job recruiters. In Knowledge-aware and Conversational Recommender Systems.
[100] D Mhamdi, Reda Moulouki, Mohammed Yassine El Ghoumari, M Azzouazi, and L Moussaid. 2020. Job recommendation based on job profile clustering and job seeker behavior. Procedia Computer Science 175 (2020), 695–699.
[101] Tsunenori Mine, Tomoyuki Kakuta, and Akira Ono. 2013. Reciprocal recommendation for job matching with bidirectional feedback. In 2013 Second IIAI International Conference on Advanced Applied Informatics. IEEE, 39–44.
[102] Sonu K Mishra and Manoj Reddy. 2016. A bottom-up approach to job recommendation system. In Proceedings of the Recommender Systems Challenge. 1–4.
[103] Ala Mughaid, Ibrahim Obeidat, Bilal Hawashin, Shadi AlZu'bi, and Darah Aqel. 2019. A smart geo-location job recommender system based on social media posts. In 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS). IEEE, 505–510.
[104] Amber Nigam, Aakash Roy, Hartaran Singh, and Harsimran Waila. 2019. Job recommendation through progression of job selection. In 2019 IEEE 6th International Conference on Cloud Computing and Intelligence Systems (CCIS). IEEE, 212–216.
[105] Andrzej Pacuk, Piotr Sankowski, Karol Wegrzycki, Adam Witkowski, and Piotr Wygocki. 2016. RecSys Challenge 2016: Job recommendations based on preselection of offers and gradient boosting. In Proceedings of the Recommender Systems Challenge. 1–4.
[106] Ioannis Paparrizos, B Barla Cambazoglu, and Aristides Gionis. 2011. Machine learned job recommendation. In Proceedings of the fifth ACM Conference on Recommender Systems. 325–328.
[107] Bharat Patel, Varun Kakuste, and Magdalini Eirinaki. 2017. CaPaR: a career path recommendation framework. In 2017 IEEE Third International Conference on Big Data Computing Service and Applications (BigDataService). IEEE, 23–30.
[108] Mirko Polato and Fabio Aiolli. 2016. A preliminary study on a recommender system for the job recommendation challenge. In Proceedings of the Recommender Systems Challenge. 1–4.
[109] Chuan Qin, Hengshu Zhu, Tong Xu, Chen Zhu, Liang Jiang, Enhong Chen, and Hui Xiong. 2018. Enhancing person-job fit for talent recruitment: An ability-aware neural network approach. In The 41st international ACM SIGIR conference on research & development in information retrieval. 25–34.
[110] Chuan Qin, Hengshu Zhu, Tong Xu, Chen Zhu, Chao Ma, Enhong Chen, and Hui Xiong. 2020. An enhanced neural network approach to person-job fit in talent recruitment. ACM Transactions on Information Systems (TOIS) 38, 2 (2020), 1–33.
[111] Gábor Rácz, Attila Sali, and Klaus-Dieter Schewe. 2016. Semantic matching strategies for job recruitment: A comparison of new and known approaches. In FoIKS. Springer, 149–168.
[112] Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy. 2020. Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 conference on fairness, accountability, and transparency. 469–481.
[113] Michael Reusens, Wilfried Lemahieu, Bart Baesens, and Luc Sels. 2017. A note on explicit versus implicit information for job recommendation. Decision Support Systems 98 (2017), 26–35.
[114] Alberto Rivas, Pablo Chamoso, Alfonso González-Briones, Roberto Casado-Vara, and Juan Manuel Corchado. 2019. Hybrid job offer recommender system in a social network. Expert Systems 36, 4 (2019), e12416.
[115] Leah G Rodriguez and Enrico P Chavez. 2019. Feature selection for job matching application using profile matching model. In 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS). IEEE, 263–266.
[116] Pradeep Kumar Roy, Sarabjeet Singh Chowdhary, and Rocky Bhatia. 2020. A Machine Learning approach for automation of Resume Recommendation system. Procedia Computer Science 167 (2020), 2318–2327.
[117] Oscar M Salazar, Juan C Jaramillo, Demetrio A Ovalle, and Jaime A Guzmán. 2015. A case-based multi-agent and recommendation environment to improve the e-recruitment process. In International Conference on Practical Applications of Agents and Multi-Agent Systems. Springer, 389–397.
[118] Javier Sánchez-Monedero, Lina Dencik, and Lilian Edwards. 2020. What does it mean to 'solve' the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the 2020 conference on fairness, accountability, and transparency. 458–468.
[119] Masahiro Sato, Koki Nagatani, and Takuji Tahara. 2017. Exploring an optimal online model for new job recommendation: Solution for recsys challenge 2017. In Proceedings of the Recommender Systems Challenge 2017. 1–5.
[120] Thomas Schmitt, Philippe Caillou, and Michele Sebag. 2016. Matching jobs and resumes: a deep collaborative filtering task. In GCAI 2016 - 2nd Global Conference on Artificial Intelligence, Vol. 41.
[121] Thomas Schmitt, François Gonard, Philippe Caillou, and Michèle Sebag. 2017. Language modelling for collaborative filtering: Application to job applicant matching. In 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 1226–1233.
[122] Walid Shalaby, BahaaEddin AlAila, Mohammed Korayem, Layla Pournajaf, Khalifeh AlJadda, Shannon Quinn, and Wlodek Zadrozny. 2017. Help me find a job: A graph-based approach for job recommendation at scale. In 2017 IEEE international conference on big data (big data). IEEE, 1544–1553.
[123] Baoxu Shi, Jaewon Yang, Feng Guo, and Qi He. 2020. Salience and market-aware skill extraction for job targeting. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2871–2879.
[124] Saman Shishehchi and Seyed Yashar Banihashem. 2019. Jrdp: a job recommender system based on ontology for disabled people. International Journal of Technology and Human Interaction (IJTHI) 15, 1 (2019), 85–99.
[125] Olfa Slama and Patrice Darmon. 2021. A Novel Personalized Preference-based Approach for Job/Candidate Recommendation. In International Conference on Research Challenges in Information Science. Springer, 418–434.
[126] Ellery Smith, Andreas Weiler, and Martin Braschler. 2021. Skill Extraction for Domain-Specific Text Retrieval in a Job-Matching Platform. In International Conference of the Cross-Language Evaluation Forum for European Languages. Springer, 116–128.
[127] Chirayu Upadhyay, Hasan Abu-Rasheed, Christian Weber, and Madjid Fathi. 2021. Explainable Job-Posting Recommendations Using Knowledge Graphs and Named Entity Recognition. In 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 3291–3296.
[128] Jorge Carlos Valverde-Rebaza, Ricardo Puma, Paul Bustios, and Nathalia C Silva. 2018. Job Recommendation Based on Job Seeker Skills: An Empirical Study. In Text2Story@ECIR. 47–51.
[129] Maksims Volkovs, Guang Wei Yu, and Tomi Poutanen. 2017. Content-based neighbor models for cold start in recommender systems. In Proceedings of the Recommender Systems Challenge 2017. 1–6.
[130] Clarice Wang, Kathryn Wang, Andrew Bian, Rashidul Islam, Kamrun Naher Keya, James Foulds, and Shimei Pan. 2022. Do Humans Prefer Debiased AI Algorithms? A Case Study in Career Recommendation. In 27th International Conference on Intelligent User Interfaces. 134–147.
[131] Xiaowei Wang, Zhenhong Jiang, and Lingxi Peng. 2021. A Deep-Learning-Inspired Person-Job Matching Model Based on Sentence Vectors and Subject-Term Graphs. Complexity 2021 (2021).
[132] Yusen Wang, Kaize Shi, and Zhendong Niu. 2020. A Session-based Job Recommendation System Combining Area Knowledge and Interest Graph Neural Networks. In SEKE. 489–492.
[133] Wenming Xiao, Xiao Xu, Kang Liang, Junkang Mao, and Jun Wang. 2016. Job recommendation with hawkes process: an effective solution for recsys challenge 2016. In Proceedings of the recommender systems challenge. 1–4.
[134] Peng Xu and Denilson Barbosa. 2018. Matching résumés to job descriptions with stacked models. In Canadian Conference on Artificial Intelligence. Springer, 304–309.
[135] Murat Yagci and Fikret Gurgen. 2017. A ranker ensemble for multi-objective job recommendation in an item cold start setting. In Proceedings of the Recommender Systems Challenge 2017. 1–4.
[136] Rui Yan, Ran Le, Yang Song, Tao Zhang, Xiangliang Zhang, and Dongyan Zhao. 2019. Interview choice reveals your preference on the market: To improve job-resume matching through profiling memories. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 914–922.
[137] Shuo Yang, Mohammed Korayem, Khalifeh AlJadda, Trey Grainger, and Sriraam Natarajan. 2017. Combining content-based and collaborative filtering for job recommendation system: A cost-sensitive Statistical Relational Learning approach. Knowledge-Based Systems 136 (2017), 37–45.
[138] Chenrui Zhang and Xueqi Cheng. 2016. An ensemble method for job recommender systems. In Proceedings of the Recommender Systems Challenge. 1–4.
[139] XianXing Zhang, Yitong Zhou, Yiming Ma, Bee-Chung Chen, Liang Zhang, and Deepak Agarwal. 2016. Glmix: Generalized linear mixed models for large-scale response prediction. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 363–372.
[140] Yunchong Zhang, Baisong Liu, Jiangbo Qian, Jiangcheng Qin, Xueyuan Zhang, and Xueyong Jiang. 2021. An Explainable Person-Job Fit Model Incorporating Structured Information. In 2021 IEEE International Conference on Big Data (Big Data). IEEE, 3571–3579.
[141] Jing Zhao, Jingya Wang, Madhav Sigdel, Bopeng Zhang, Phuong Hoang, Mengshu Liu, and Mohammed Korayem. 2021. Embedding-based Recommender System for Job to Candidate Matching on Scale. arXiv preprint arXiv:2107.00221 (2021).
[142] Tianhua Zhao, Cheng Wuyu, and Chen Zhixiang. 2021. Summer Job Selection Model Based on Job Matching and Comprehensive Evaluation Algorithm. In 2021 2nd International Conference on Artificial Intelligence and Information Systems. 1–5.
[143] Chen Zhu, Hengshu Zhu, Hui Xiong, Chao Ma, Fang Xie, Pengliang Ding, and Pan Li. 2018. Person-job fit: Adapting the right talent for the right job with joint representation learning. ACM Transactions on Management Information Systems (TMIS) 9, 3 (2018), 1–17.
[144] Dávid Zibriczky. 2016. A combination of simple models by forward predictor selection for job recommendation. In Proceedings of the Recommender Systems Challenge. 1–4.
A SUPPLEMENTARY MATERIALS
Table 1 gives an overview of all the papers that have been collected with the literature search methodology in Section 1.2.
Table 1. An overview of e-recruitment recommendation systems is presented. Regarding the recommended entities, although some papers could be reciprocal in design, we did not report them as reciprocal since they did not claim to be reciprocal and they also only experimented with the job or job seeker recommendation task. The methods cover a broad range of content based (CB), collaborative filtering (CF), knowledge based (KB), and hybrid/other methods. Some papers focus on preprocessing, postprocessing or re-ranking, and do not mention the recommendation method type in detail. Hence, we also do not report the recommendation method type for those papers. The papers are sorted based on their publication year.
Columns: Paper | Year | Recommended entities (Job, Job seeker, Reciprocal) | Method (CB, CF, KB, Hybrid/Other) | Challenge (3.2 Data quality; 3.3 Heterogeneous data, multiple interaction types and data sources; 3.4 Cold start; 3.5 User preferences as well as suitability; 3.6 Interpretability and explainability; 3.7 Specific objectives; 3.8 Bias and fairness; 3.9 Large scale).
[Table body omitted: one row per paper, from [20] (2012) through [70] (2021), marking with filled/empty circles the recommended entities, the method type, and the challenges addressed; the full matrix is available in the original PDF and on the companion website.]
",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "8fyAgnSkGq",
"year": null,
"venue": null,
"pdf_link": "https://arxiv.org/pdf/2209.05112.pdf",
"forum_link": "https://openreview.net/forum?id=8fyAgnSkGq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A challenge-based survey of e-recruitment recommendation systems",
"authors": [
"yoosof mashayekhi",
"Nan Li",
"Bo Kang",
"Jefrey Lijffijt",
"Tijl De Bie"
],
"abstract": "E-recruitment recommendation systems recommend jobs to job seekers and job seekers to recruiters. The recommendations are generated based on the suitability of the job seekers for the positions as well as the job seekers' and the recruiters' preferences. Therefore, e-recruitment recommendation systems could greatly impact job seekers' careers. Moreover, by affecting the hiring processes of the companies, e-recruitment recommendation systems play an important role in shaping the companies' competitive edge in the market. Hence, the domain of e-recruitment recommendation deserves specific attention. Existing surveys on this topic tend to discuss past studies from the algorithmic perspective, e.g., by categorizing them into collaborative filtering, content based, and hybrid methods. This survey, instead, takes a complementary, challenge-based approach, which we believe might be more practical to developers facing a concrete e-recruitment design task with a specific set of challenges, as well as to researchers looking for impactful research projects in this domain. We first identify the main challenges in the e-recruitment recommendation research. Next, we discuss how those challenges have been studied in the literature. Finally, we provide future research directions that we consider promising in the e-recruitment recommendation domain.",
"keywords": [],
"raw_extracted_content": "arXiv:2209.05112v2 [cs.IR] 20 Oct 2023Achallenge-based survey of e-recruitmentrecommendation systems\nYOOSOF MASHAYEKHI ,IDLAB-DepartmentofElectronicsandInformationSystems ( ELIS),GhentUniversity,\nBelgium\nNAN LI,IDLAB - Department ofElectronicsand Information Systems ( ELIS), Ghent University, Belgium\nBOKANG ,IDLAB - Department of Electronicsand Information Systems ( ELIS), Ghent University, Belgium\nJEFREYLIJFFIJT ,IDLAB - Department of Electronicsand InformationSystems ( ELIS), Ghent University, Belgium\nTIJLDEBIE ,IDLAB - Department of Electronicsand InformationSystems ( ELIS), Ghent University, Belgium\nE-recruitment recommendation systemsrecommend jobstojobseek ersandjobseekerstorecruiters.Therecommendations aregen-\neratedbasedonthesuitabilityofthejobseekersforthepositions aswellasthejobseekers’andtherecruiters’preferences.The refore,\ne-recruitment recommendation systemscould greatly impact jobs eekers’careers.Moreover,by affecting thehiringprocesses of th e\ncompanies,e-recruitmentrecommendationsystemsplayanimportant roleinshapingthecompanies’competitiveedgeinthemarket.\nHence, thedomain of e-recruitment recommendation deservesspecifi c attention. Existing surveys on this topic tend to discuss past\nstudies from the algorithmic perspective, e.g., by categorizing th em into collaborative filtering, content based, and hybrid methods.\nThis survey, instead, takes a complementary, challenge-based ap proach, which we believe might be more practical to developers\nfacing a concrete e-recruitment designtask witha specific setof chal lenges,as wellas to researcherslooking forimpactful research\nprojects in this domain. We first identifythe main challenges in the e-re cruitment recommendation research. Next, we discuss how\nthosechallengeshavebeenstudiedintheliterature.Finally,wep rovidefutureresearchdirections thatweconsiderpromising inthe\ne-recruitment recommendation domain.\nCCS Concepts: • Generaland reference →Surveys and overviews ; •Informationsystems →Recommender systems .\nAdditionalKeyWords andPhrases:Job recommendation, E-recruitment recommendation\n1 INTRODUCTION\nWith the ever-increasing use of the world wide web, many peop leseek jobs on e-recruitment platforms [ 106]. These\nplatforms, such as LinkedIn1, usually provide recommendations for job seekers to apply t o several jobs and for re-\ncruiterstoselect suitablejob seekers for their jobpositi ons[54,75].\nThe recommendation in e-recruitment is an important subfiel d of recommendation systems. Recommending the\nproperjobseekerstorecruiterscouldincreasetheefficienc yofthehiringprocess,andrecommendingtherightjobsto\njobseekerscouldhaveapositiveimpactonjobseekers’care erpaths;ontheotherhand,lowqualityrecommendations\nthat poorly match job seekers with vacancies do not only cost time and effort of both recruiters and job seekers but\nalsocouldhaveanegative impactonthelabormarket,compan ies’competitiveness,andpeople’slivesinthelongrun.\nHence, thedomainof recommendationine-recruitment requi res specificattention.\nIn this study,we review the papers in the past decade aboute- recruitment recommendation systems. 
Existing surveys [35, 49] on e-recruitment recommendation systems usually focus on categorizing papers based on their methods, such as collaborative filtering, content based, hybrid, etc. The range of challenges that these different methods address, on the other hand, has been less central to these prior surveys. Therefore, in this survey we focus on the challenges for e-recruitment recommendation systems and how those challenges have been studied in the literature.
We believe the challenge-based approach used in this survey is useful both for developers of e-recruitment recommendation systems and for researchers in the field. Indeed, developers will typically look for solutions to the practical challenges that naturally pose themselves in the design of their e-recruitment recommendation system, so in their design process the challenges will typically come before the possible algorithmic approaches. For researchers, our challenge-based approach may help in identifying the most impactful research problems of the domain and the proposed solution approaches to address them that have already been attempted. Moreover, open challenges and future research directions are also discussed to provide more insight for future research in this domain.
Terminology. Different entities could be recommended in e-recruitment recommendation systems. E-recruitment recommendation systems can be categorized into three groups based on the entities being recommended: job recommendation, job seeker recommendation, and reciprocal recommendation. In the rest of the paper, we use the term e-recruitment recommendation to refer to all recommendation systems in this research area.
Unless otherwise stated, the terms user and item can refer to job seekers, job positions, or recruiters, depending on the context: users receive the recommended lists, and items are the entities recommended to users. Throughout this paper, the terms job, job posting, job position, vacancy, and opening are used interchangeably to refer to a job vacancy. The terms recruiter and employer are also used interchangeably to refer to the person responsible for a job position. CVs and resumes denote the textual content of job seekers. We refer to all features and textual content of the users (job seekers or job postings) by the term user profile. Since different terms are used for job/job seeker recommendation in the literature, we also use phrases such as matching job seekers with job positions (e.g., [80, 141]), person-job fit (e.g., [81, 110]), and recommendation in e-recruitment (e.g., [30, 48]) to denote the same concept of recommendation in e-recruitment here.
Contributions. This survey provides an overview of the literature of the past decade (from 2012 onwards) on e-recruitment recommendation systems.
It contains the following contributions:
• Underscoring the importance of a survey on this topic, we list and discuss some important specific characteristics of e-recruitment recommendation systems that make it clear why they require a dedicated approach.
• We identify and briefly discuss eight challenges that were frequently addressed by the research papers covered in this survey, and where appropriate explain how they are the result of specific characteristics of e-recruitment recommendation systems.
• For each of these challenges, we discuss the papers that have specifically targeted it, and we briefly discuss their approaches.
• We provide future research directions and discuss the challenges that have been investigated less in recent years.
• We present a structured overview of the collected 123 papers in Table 1 in the Appendix. The available properties of each paper in Table 1 are the recommendation type based on the recommended entities (job, job seeker, reciprocal), the recommendation method type, and the challenges that the paper has addressed.
• We maintain a website (https://aida-ugent.github.io/e-recruitment-recsys-challenges/) containing the content of Table 1 along with paper metadata (e.g., venue, url, authors, etc.) and summaries of the selected papers. We hope this can further facilitate future research in e-recruitment recommendation systems.
For the rest of this section, we first discuss more in detail how our survey complements the existing surveys in Section 1.1. Next, we describe how the papers were collected and filtered in Section 1.2. Finally, Section 1.3 presents the structure of this survey.
1.1 Differences with previous recent surveys
The two recent surveys on e-recruitment recommendation systems [35, 49] organized the literature differently from the present survey. The work by Freire and de Castro [49] focused on method types, data sources, and assessment methods. The work by de Ruijt and Bhulai [35] gave an in-depth discussion of e-recruitment recommendation system methods with a focus on categorizing hybrid and ensemble hybrid methods. Although de Ruijt and Bhulai [35] explored some aspects and challenges of e-recruitment recommendation systems, such as large scale, ethical, and reciprocal aspects, their discussion of those challenges and aspects is brief and limited.
Since the types of recommendation methods are well discussed in previous papers, this aspect is not the focus of the present study. Given the limitations of previous surveys, we focus on the challenges in e-recruitment recommendation systems and discuss the solutions that have been proposed for those challenges from a technical point of view. Our survey is valuable in that we emphasize the distinguishing nature of e-recruitment and organize the literature with respect to the special difficulties and challenges in e-recruitment recommendation.
1.2 Literature search methodology
We crawled data from dblp (https://dblp.org/) using ten keywords: {'job recommender', 'job recommendation', 'job matching', 'e-recruitment', 'e-recruiting', 'online recruitment', 'person-job fit', 'vacancy recommendation', 'candidate recommendation', 'occupation recommendation'}, and as a result 515 papers were collected. We only kept papers published after (including) 2012, with at least five citations if published before (including) 2019. Papers that do not recommend actual jobs or job seekers (e.g., papers recommending a job type) were removed as well. This approach resulted in 99 papers in total. We further collected 24 papers from industry leaders and known experts from top conferences and journals. In total, 123 papers are kept for further examination. The sketch below illustrates this selection rule.
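The following minimal Python sketch applies the year and citation filters just described to candidate records. It is not the authors' actual pipeline: the record fields and toy data are illustrative assumptions, and the citation counts would have to come from an external source, since dblp does not provide them.

```python
# Minimal sketch of the paper-selection rule from Section 1.2.
# Record layout and toy data are illustrative assumptions only.

def keep(paper: dict) -> bool:
    '''Inclusion rule: published in 2012 or later; papers from 2019 or
    earlier need at least five citations; and the paper must recommend
    actual jobs or job seekers (not, e.g., a job type).'''
    if paper['year'] < 2012:
        return False
    if paper['year'] <= 2019 and paper['citations'] < 5:
        return False
    return paper['recommends_actual_jobs_or_seekers']

papers = [  # toy records for illustration only
    {'title': 'A', 'year': 2015, 'citations': 12, 'recommends_actual_jobs_or_seekers': True},
    {'title': 'B', 'year': 2018, 'citations': 2,  'recommends_actual_jobs_or_seekers': True},
    {'title': 'C', 'year': 2021, 'citations': 0,  'recommends_actual_jobs_or_seekers': True},
]

selected = [p for p in papers if keep(p)]
print([p['title'] for p in selected])  # -> ['A', 'C']
```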
1.3 Structure of the survey
The structure of the rest of the paper is as follows. In Section 2, we discuss the properties that distinguish e-recruitment recommendation systems from other recommendation systems. Section 3 contains our findings, in which Section 3.1 gives a bird's eye view of all the challenges identified in this survey, Sections 3.2 to 3.9 address the different challenges respectively, and Section 3.10 briefly talks about the remaining papers not covered in the challenge sections. Finally, Section 4 concludes our findings and discusses the limitations of this survey, open challenges and future directions.
2 SPECIFIC CHARACTERISTICS AND PROPERTIES OF E-RECRUITMENT RECOMMENDATION SYSTEMS
In this section, we discuss the differences between e-recruitment and traditional recommendation systems. Although many challenges and characteristics are common between an e-recruitment recommendation system and a traditional one, such as an e-commerce or a movie recommender, certain aspects set e-recruitment recommendation systems apart:
(1) One worker, one job (OWOJ): At a certain period of time, a person can only work at one or a few jobs, and companies also hire one or a few employees for a job posting [22]. Moreover, job seekers and job positions are mostly available for a limited time and become inactive after they are employed or filled. In contrast, in a traditional recommender, the same items can be recommended to many users, and users consume several items. E-recruitment recommendation systems have to consider this aspect in the recommendation. First, the number of recommendations for each job/job seeker may have to be kept relatively small, since only one or a few of them can succeed. Moreover, job seekers/jobs usually compete with each other for the same jobs/job seekers. Hence, the recommendation of a job at which others have a higher chance of success could be less interesting. This competition aspect should ideally be taken into consideration in generating the recommendations.
(2) Two-sided (TS): In traditional recommendation systems, the success of a recommendation usually depends on the action of one user. For example, in e-commerce a recommendation is successful if the user decides to buy a product. However, in e-recruitment recommendation systems, the ultimate success of a recommendation depends on whether it results in employment. The actions by one user, such as applying for a job position by a job seeker, could only show the interest of the job seeker in the job position, while the success of the recommendation also depends on the recruiter of the job posting who makes an offer for the job. Hence, e-recruitment recommendation systems have multiple stakeholders (e.g., job seekers and employers).
One way to define suitability and user preference is as follows. Suitability represents the degree of match between a job seeker and a job position based on, typically but not exclusively, the knowledge, skills, diplomas, and years of experience of the job seeker and the job position requirements. User preference, on the other hand, represents one's inclination towards certain items. For example, a job seeker might be suitable for several positions, but prefer to work for a specific company for various reasons such as a higher salary, social connections, etc. In addition, a recruiter often has to pick one job seeker among multiple equally suitable job seekers based on preferences such as social connections, personality, etc. Hence, the suitability of a job seeker for a job and their preferences will in general not be equal, which poses specific challenges to e-recruitment recommendation systems.
(4) Multi-faceted (MF): In e-recruitment recommendation systems, both suitability and preference are, in fact, dependent on many different facets with different data types. For a job seeker, their previous job history, diplomas, seniority, interests, skills, location, social fit to the job environment, etc., could be relevant for an e-recruitment recommendation system. For a job posting, its required skills, required diplomas, seniority, location, organizational culture, etc., might be available and could be used in an e-recruitment recommendation system. Hence, the nature of the data available in the e-recruitment domain is usually multi-faceted and requires specific attention in designing e-recruitment recommendation systems.
(5) High-stakes (HS): E-recruitment is a high-risk domain because it can have a long-term impact on people's careers and hence their career fulfillment. Moreover, it plays an important role in shaping companies' competitive edge in the market. E-recruitment is even defined as one of the high-risk domains according to the EU's AI Act (proposal) [32]. Hence, considering fairness and trustworthiness aspects is more essential in e-recruitment recommendation systems compared to traditional ones.
(6) Short interaction history (SIH): In e-recruitment recommendation systems, job seekers only interact with the system while they are seeking a new job, and they will probably stop using it after they are employed. Moreover, new job positions appear and disappear frequently [58]. In contrast, in a traditional recommendation system users and items often have a long history within the system.

3 SURVEY STRUCTURED ACCORDING TO CHALLENGES FACED IN THE DEVELOPMENT OF E-RECRUITMENT RECOMMENDATION SYSTEMS
In this survey, we identify challenges in e-recruitment recommendation systems that have been addressed by studies in recent years. Although there are many other challenges in the e-recruitment recommendation domain, we focus on the most common ones here.
We first list the main challenges in e-recruitment recommendation systems and describe each of them in Section 3.1. Next, we introduce the methods that have been proposed to deal with each of the challenges in Sections 3.2 to 3.9. Finally, we discuss the papers that are not included in the sections covering challenges in Section 3.10. Moreover, in each section, we provide a visual overview of the problems and solutions (Fig. 1 to Fig. 8). They contain the solutions that we observed in the literature.
Of course, other solutions that have not yet been described in the literature may exist.

3.1 A preview of the challenges
1) Data quality: E-recruitment recommendation systems often have a plethora of data sources, including interactions and textual data from the job seekers (CVs) and job postings (job descriptions). There are many relevant facets in the available data (MF aspect 2.4), but with variable quality. Moreover, some facets, e.g., skills, might be implicit and need to be extracted from unstructured data. Some common issues in dealing with such data are:
a. Data cleaning and preprocessing. Recommendation systems usually use features extracted from textual data, which is usually noisy. Hence, data cleaning and preprocessing are necessary and crucial for better feature extraction and downstream tasks.
b. Semantic gap. The textual data is usually written by different people, and different terms are often used to address the same concept. This semantic gap results in poor semantic matching.
c. Skill extraction. Although many facets might be implicit and need to be extracted with carefully designed methods, we focus on skills, which are the most important feature in matching job seekers with job postings. Using job seekers' skills and the job postings' required skills is necessary for increasing the performance of e-recruitment recommendation systems. Hence, skill extraction from the textual data is another challenging task in e-recruitment recommendation systems.
d. Multi-linguality. In some countries/platforms, job seekers' resumes and job descriptions are written in several languages. In such cases, e-recruitment recommendation systems should support multiple languages for the textual content.
e. Data sparsity. Many recommendation systems suffer from data sparsity issues, and e-recruitment recommendation is no exception (SIH aspect 2.6). The reason is that job seekers may only use the system a few times and then leave the platform forever after a successful job hunt; the same is true for vacant job positions: new jobs might appear on a daily basis but disappear quickly after receiving satisfying applications.
2) Heterogeneous data, and multiple interaction types and data sources: E-recruitment recommendation systems could use more data sources compared to many other kinds of recommendation systems, as they might have access to job seekers' previous work experiences, interviews, the textual content of their resumes/job descriptions, skills, and preferences (MF aspect 2.4). The availability of unstructured, semi-structured, and structured data means that e-recruitment recommendation systems have to deal with the heterogeneous nature of the data. In addition, there are many interaction types between job seekers and job postings in the recommendation systems, e.g., view, click, apply, chat, favorite, like, and comment. Using different interaction types between job seekers and job postings could be both a challenge and an opportunity in the development of e-recruitment recommendation systems.
Moreover, recommendation systems could also make use of other data sources besides job market related data, such as job seekers' and job postings' information in social networks, blogs, etc.
3) Cold start: The cold start problem in recommendation systems refers to the problem of recommending to new users or recommending new items with few or no interactions. This problem might be more acute for e-recruitment recommendation systems than for traditional ones, since new jobs tend to appear and disappear frequently (SIH aspect 2.6).
The jobs usually disappear after a successful match, and new jobs with the same title are often posted as new items. In contrast, products with the same name in traditional recommenders are usually treated as the same item, and only their availability changes over time (in cases such as movie recommenders, the product is always available).
Using data other than interactions could often alleviate the cold start problem in recommendation systems. Hence, it is helpful to have the many facets available in the job seekers' and job postings' profiles (MF aspect 2.4).
Also note that, in e-recruitment recommendation system terms, there are user (job seeker or job) cold start and item (job or job seeker) cold start problems. In job recommendation, user cold start refers to job seeker cold start and item cold start refers to job cold start, and it is the other way around in job seeker recommendation.
4) User preferences as well as suitability: To find the best matches between job seekers and vacancies, it is crucial to use the knowledge and skills of the job seekers and the requirements of the job positions. However, users' preferences are equally important for a personalized recommendation system (SP aspect 2.3). Moreover, users' preferences might change over time, which should be taken into consideration by the recommendation systems.
5) Interpretability and explainability: Providing explainable recommendations and designing interpretable models are important in e-recruitment recommendation systems (HS aspect 2.5). Job seekers could benefit from explanations of their recommendations since important career decisions will depend on their choices. Moreover, providing explainable results helps design user-friendly applications for job seekers and recruiters.
6) Specific objectives: E-recruitment recommendation systems usually have a multi-objective nature, since they need to satisfy multiple stakeholders, including job seekers, recruiters, and service providers (TS aspect 2.2). In addition, e-recruitment recommendation systems could have specific objectives, such as balancing the number of recommendations each job seeker/job posting receives, recommending items with a high chance of success given the competition (OWOJ aspect 2.1), or avoiding false positives to make sure that users are not bothered by too much spam.
7) Bias and fairness: Recommendation systems suffer from all kinds of well-known biases, some of which have raised societal and ethical concerns. Providing fair recommendations in e-recruitment is even more essential than in other domains since e-recruitment is high-stakes (HS aspect 2.5). It is crucial to mitigate biases regarding job seekers, such as gender bias, as well as biases regarding the job postings, such as recency bias (recent job postings may be more popular).
8) Large scale: The ever-increasing amounts of data bring the pressing challenge of scalability to e-recruitment recommendation systems. More specifically, large scale data may cause issues in both the training and inference phases: in each phase, there could be issues with speed and storage/memory consumption.

3.2 Data quality
Since most e-recruitment recommendation systems use interactions as well as textual data (resumes and job descriptions) to model the user profile or to construct features, various data quality issues affect the quality of recommendations. Most issues in this section are about textual data quality, since the facets available in e-recruitment (MF aspect 2.4) are sometimes hidden in free text. We briefly discuss different approaches for each data quality issue discussed in Section 3.1.1. An overview of this section, which includes the categories of the data quality issues and the corresponding solutions in the literature, is presented in Fig. 1.

Fig. 1. An overview of the data quality challenge: data cleaning and preprocessing (NLP techniques), semantic gap (ontologies), skill extraction (from text; from skills), multi-linguality (multi-lingual language models), and data sparsity (reducing the number of individual jobs/job seekers, e.g., by clustering; densifying the interaction graph by content similarity).

Data cleaning and preprocessing (Section 3.1.1.a). E-recruitment recommendation systems usually use textual content to acquire features for job seekers and job descriptions, which could further be used in recommendation methods. However, the textual contents are usually written by different people and are noisy. Therefore, data cleaning and data preprocessing for textual data are crucial for providing high quality recommendations.
Although most approaches using textual content have to do some data cleaning and preprocessing, we only discuss the works that have explicitly focused on NLP techniques to deal with such issues. The data cleaning and preprocessing usually involve common NLP techniques such as tokenization, removing stop words, stemming, and lemmatization [9, 11, 12, 23, 36-38, 43, 59, 62, 77, 82, 96, 103, 115, 116, 120, 128, 134].

Semantic gap (Section 3.1.1.b). Since the textual data is written by different people, e-recruitment recommendation systems suffer from a semantic gap between contents from different sources, such as resumes and job descriptions. Different terms might have been used to refer to the same concept. Moreover, the same term could have different meanings depending on the context.
Although most papers that use language models or learn representations of textual data can alleviate the semantic gap to some degree, we only discuss the papers that explicitly focus on this issue. The most common approach employed in the literature to tackle the semantic gap is to map skills/concepts to the nodes in an ontology (by exploiting a language model, using Named Entity Recognition (NER), using Named Entity Disambiguation (NED), etc.) and to use the shared nodes to refer to the same skills/concepts [59, 62, 63, 99, 124].
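As a toy illustration of this ontology-mapping idea, the sketch below normalizes different surface forms onto shared concept nodes before computing overlap. The synonym table is a stand-in for a real skill ontology plus an NER/NED pipeline; all terms are invented for illustration.

```python
# Toy illustration of closing the semantic gap by mapping different surface
# forms to shared ontology nodes. A real system would use a skill ontology
# plus NER/NED; the synonym table below is purely illustrative.

ONTOLOGY = {
    "js": "javascript", "ecmascript": "javascript",
    "ml": "machine learning", "statistical learning": "machine learning",
}

def normalize(term: str) -> str:
    term = term.lower().strip()
    return ONTOLOGY.get(term, term)

def concept_overlap(resume_terms, vacancy_terms) -> float:
    """Jaccard overlap after mapping both sides onto shared concepts."""
    r = {normalize(t) for t in resume_terms}
    v = {normalize(t) for t in vacancy_terms}
    return len(r & v) / len(r | v) if r | v else 0.0

print(concept_overlap(["JS", "ML"], ["JavaScript", "Machine Learning"]))  # 1.0
```

Without the normalization step, the two sides above would share no terms at all, which is exactly the poor matching behaviour the semantic gap causes.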
Skill extraction (Section 3.1.1.c). E-recruitment recommendation systems mostly match job seekers with job postings based on their expertise and skills. Since job seekers' profiles and job descriptions are often available as free text with no structure, skill extraction from the textual data is important for some e-recruitment recommendation systems. Some papers have employed NLP techniques such as n-gram tokenization [96], NER [57, 63, 96, 98], part-of-speech tagging (PoS tagging) [57], using skill dictionaries or ontologies [44, 57, 62, 63, 96], or other techniques (e.g., using the context of a skill term, called skill headwords) [126] to extract skills from the text. Job seekers' and job postings' skills have also been expanded using skill similarities or relations provided by word embedding models (e.g., word2vec) [57, 126], and by domain specific ontologies or skill taxonomies [44]. Given the skills extracted for job seekers and job postings by an in-house skill tagger at LinkedIn, Shi et al. [123] selected skills for job postings considering the market supply of the skills (enough job seekers having a skill) and also the importance of each skill in a job posting.

Multi-linguality (Section 3.1.1.d). Some e-recruitment recommendation systems are multi-lingual, i.e., the textual content of resumes and job descriptions could be in multiple languages. Moreover, matching resumes and job descriptions with different languages results in cross-linguality challenges. Such issues have been studied in [80, 99], where a multi-lingual language model was used to support multiple languages. Lavi et al. [80] designed a Siamese architecture to fine-tune the multi-lingual BERT using the historical data of recruiters' interactions with candidates.

Data sparsity (Section 3.1.1.e). E-recruitment recommendation systems often suffer from data sparsity issues (SIH aspect 2.6) due to the fact that similar job positions are usually considered as separate entities. Moreover, job seekers often stop using the platform after being employed. Although most approaches that use content in the recommendation could alleviate the data sparsity issue to some extent (e.g., [13]), we only discuss the works that study data sparsity explicitly.
One approach that has been studied to cope with the data sparsity issue is to reduce the number of distinct job positions by splitting a job position into a job title and a company name [82] or by clustering similar job positions [29, 40]. Another approach, designed by Shalaby et al. [122], is to densify the graph of jobs, which is created based on interactions, by adding content similarity links between the entities (job seekers and job positions). The recommendations are then generated using this graph of jobs.
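The following is a minimal sketch of that densification step, in the spirit of the approach of Shalaby et al. [122]: content-similarity edges are added next to the behavioural edges so that sparsely interacted jobs stay connected. The Jaccard similarity over bag-of-words content and the 0.5 threshold are illustrative choices, not values from the paper.

```python
# Minimal sketch of densifying a sparse interaction graph with content-
# similarity edges. The similarity function and threshold are illustrative.

from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Behavioural edges (e.g., co-clicked jobs) plus bag-of-words job content.
edges = {("job1", "job2")}
content = {
    "job1": {"python", "backend", "api"},
    "job2": {"python", "django", "api"},
    "job3": {"python", "backend", "django"},   # no interactions yet
}

# Add a content edge whenever two jobs are similar enough.
for u, v in combinations(content, 2):
    if (u, v) not in edges and jaccard(content[u], content[v]) >= 0.5:
        edges.add((u, v))

print(sorted(edges))  # job3 is now connected despite having no interactions
```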
3.3 Heterogeneous data, and multiple interaction types and data sources
E-recruitment recommendation systems could use the heterogeneous data of job seekers and job postings, including location, textual resume/job description, skills, etc. (MF aspect 2.4). Moreover, different types of behavioral data are available, and using such data is challenging in recommendation systems. In addition, job seekers' and job positions' data could be enriched with their information from external sources. We briefly discuss the papers dealing with these three aspects, which are also described in Section 3.1.2. An overview of this section is presented in Fig. 2.

Fig. 2. An overview of the heterogeneous data, and multiple interaction types and data sources challenge: heterogeneous data (computing similarity scores between fields; learning embeddings for each field), multiple interaction types (conversion to ratings; weighting interaction types via multi-task learning or sample importance weighting), and external data sources (friends' features; others).

Since resumes and job descriptions are among the most important data sources for e-recruitment, it is necessary to carefully use them as well as the behavioral data. Job seeker profiles, resumes, and job descriptions sometimes have several fields with different data types. Hence, the heterogeneous nature of the data should be considered in designing recommendation systems in e-recruitment.
Many papers use features with different types in a recommendation algorithm (e.g., decision trees, deep neural networks, etc.) either directly or through feature representation techniques such as one-hot encoding, word embeddings, etc. (e.g., [97, 110]). However, some methods are explicitly designed to work with heterogeneous data, and we mostly focus on those papers. Some studies have combined the similarity scores between the same fields (e.g., education, work experience, etc.) of resumes and job postings [31, 42, 59, 90-92, 115] or between all fields in resumes and job postings [95]. Learning embeddings for each of the fields/data sources of job seeker profiles and job postings, and using the interactions of those embeddings to match job seekers with job postings, is another approach employed to deal with heterogeneous data [64, 65, 93, 141]. More specifically, Zhao et al. [141] provided recommendations based on the fused embeddings of job seekers and jobs, combining the embeddings learned from the textual content, the job-skill information graph, and geolocation data. In the deep neural networks proposed in [64, 65], the embeddings for the same fields/field types of resumes and job postings were learned by their inner interactions. In [64], a multi-head self-attention module was then applied to the embeddings for different fields as the field outer interaction module. In [93], different embeddings are learned for different fields of job seekers by their interactions in the neural network. Finally, the learned embeddings were passed to a multi-layer perceptron to compute the matching score between a resume and a job posting [64, 65, 93].
Moreover, there could be multiple types of interactions between a job seeker and a job position, such as click, apply, like, favorite, invite, interview, hire, etc., where some of them are initiated by the job seeker and some by recruiters. Zhang and Cheng [138] transformed the implicit feedback (such as click, bookmark, and reply) into ratings and proposed a two-stage ensemble method for generating the recommendations. Fu et al. [50] proposed a deep neural network to capture the dynamic preferences of the job seekers and recruiters by learning a multi-task objective over their behavioral data (e.g., click, apply, chat, invite, match). Volkovs et al. [129] proposed a content-based recommendation system considering different interaction types as positive with different weights for sampling, and used XGBoost to optimize the binary classification loss.
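As a minimal sketch of how such heterogeneous feedback can be converted into weighted implicit ratings, in the spirit of the conversion and weighting strategies above, each interaction type can contribute with its own weight. The specific weights below are assumptions for illustration, not values reported in the cited papers.

```python
# Minimal sketch of turning heterogeneous feedback into weighted implicit
# ratings. The weights are illustrative assumptions.

INTERACTION_WEIGHTS = {"view": 0.2, "click": 0.5, "apply": 1.0, "interview": 2.0}

def implicit_rating(events: list) -> float:
    """Aggregate all events between one job seeker and one job into a score."""
    return sum(INTERACTION_WEIGHTS.get(e, 0.0) for e in events)

log = {("alice", "job42"): ["view", "click", "apply"],
       ("bob", "job42"): ["view"]}
ratings = {pair: implicit_rating(ev) for pair, ev in log.items()}
print(ratings)  # {('alice', 'job42'): 1.7, ('bob', 'job42'): 0.2}
```

The resulting scores can then feed any rating-based recommender; learning the weights jointly with the model (as in the multi-task and sample-weighting approaches above) is the more principled alternative to fixing them by hand.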
To find a better match between job seekers and vacancies, information other than skills, such as personality and traits, has also been found to be useful. Some studies have tried to use auxiliary information gathered from external data sources, such as friends' features in social networks [27, 38] and personal websites [20, 44, 45], to build more comprehensive profiles and improve the recommendations.

3.4 Cold start
As discussed in Section 3.1.3, cold start in recommendation systems refers to the problem of recommending to new users or items with no or few interaction data. This problem could be more acute for e-recruitment recommendation systems because job opening positions are usually treated as distinct items even if they have the same job title and description, and hence those job openings would be treated as new items (SIH aspect 2.6). E-recruitment recommenders could suffer from both job seeker cold start and job cold start problems.
Using content to provide recommendations could alleviate the cold start problem. In the e-recruitment domain, many facets are often available for this purpose (MF aspect 2.4). Hence, papers with content based approaches or methods that use features based on the content could deal with the cold start problem to some extent. However, we only discuss the papers that explicitly address the cold start problem. The papers dealing with cold start follow two general approaches: recommending using the interactions of similar jobs/job seekers, or predicting the recommendation score based on job seekers' and jobs' features. Some papers also employ both approaches to deal with the cold start problem. An overview of this section, including the solutions proposed by recent studies for the cold start problem, is presented in Fig. 3.

Fig. 3. An overview of the cold start challenge: recommending based on the interactions of similar jobs/job seekers, and recommending based on the features of jobs and job seekers.

Two approaches have been used in the literature that recommend based on the interactions of similar jobs/job seekers. First, to compute the matching scores between jobs and new job seekers, some studies find similar job seekers to the new ones based on content features and then use the known (e.g., previously interacted) matching scores between them and the jobs [29, 68, 90, 91, 104]. In [90, 91], jobs are recommended to new graduate students based on the job offers of similar graduates. In another study by Chen et al. [29], a context-aware multi-arm bandit was employed for generating job recommendations, where the job recommendation scores for new job seekers were computed based on the interaction history of similar job seekers. This method could also deal with job cold start in the case of job seeker recommendation due to the symmetric nature of their model architecture. Second, to compute the matching scores between new jobs and job seekers, some studies find jobs with similar content to the new ones and use the known (e.g., previously interacted) matching scores between them and the job seekers [15, 58, 82, 104, 121, 122, 138].
Some studies predict the matching scores between job seekers and jobs using their features to deal with the cold start problem (e.g., using a machine learning method or a scoring function). The job categories that new job seekers are interested in are predicted using job seekers' textual content [122] or attributes [60] and are further exploited to provide job recommendations. Other papers have provided recommendations based on job seekers' and jobs' content, which tackles both the job seeker cold start and job cold start problems [15, 119, 120, 135, 137] (although many content-based methods could tackle the cold start problem with the same approach, here we only cite the papers that have explicitly addressed the cold start problem). Besides features extracted from job seekers' and jobs' content, several studies [58, 86, 119, 129] also extracted features for job seekers based on the jobs they have interacted with before; hence, they can deal with the job cold start problem.
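The following is a minimal sketch of the first, neighbour-based strategy described above, applied to a brand-new job: its score for a user is borrowed from the user's known scores on content-similar jobs. The embeddings, known scores, and the choice of cosine similarity are toy assumptions for illustration.

```python
# Minimal sketch of neighbour-based cold-start scoring: a new job (content
# only, no interactions) inherits a similarity-weighted average of the user's
# known scores on the most similar existing jobs. All data is toy data.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

job_vecs = {"old1": np.array([1.0, 0.0, 1.0]),
            "old2": np.array([0.0, 1.0, 0.0])}
user_scores = {"old1": 1.0, "old2": 0.0}   # known matching scores for one user

def cold_start_score(new_vec, k: int = 2) -> float:
    """Similarity-weighted average of the user's scores on the k nearest jobs."""
    sims = sorted(((cosine(new_vec, v), j) for j, v in job_vecs.items()),
                  reverse=True)[:k]
    total = sum(s for s, _ in sims)
    return sum(s * user_scores[j] for s, j in sims) / total if total else 0.0

new_job = np.array([1.0, 0.1, 0.9])        # unseen posting, content only
print(round(cold_start_score(new_job), 3))  # close to the score of 'old1'
```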
3.5 User preferences as well as suitability
Although considering user preferences is important in all recommendation systems, e-recruitment recommendation systems should also consider suitability in generating the recommendations, i.e., matching job seekers with job postings based on the similarity of their skills and requirements (SP aspect 2.3). Since matching based on the suitability of job seekers for job positions has been the main focus of e-recruitment recommendation systems, we discuss the studies focusing on capturing user preference. Suitability is usually captured by matching the requirements of a job position with the skills and other features of the job seekers, while preference is often captured by other factors in the profiles of job seekers and job postings, such as location, interests, etc., or by behavioral interactions. As discussed in Section 3.1.4, job seekers' preferences might change over time, and modeling these dynamic preferences is also a challenging task in e-recruitment recommendation. In this section, we first discuss the methods explicitly modeling user preferences, either based on explicit preferences in user profiles or using a preference model. Next, we present the approaches targeting the dynamic nature of user preferences. An overview of this section is presented in Fig. 4.

Fig. 4. An overview of the user preferences as well as suitability challenge: behavioral interactions, explicit preferences, preference models, and dynamic preferences (neural architectures such as LSTMs, time-dependent features, and time-dependent loss functions).

Behavioral interactions between job seekers and job postings, such as click, apply, invite, etc., can show the user preferences to some extent. Hence, e-recruitment recommendation systems that use such behavioral interactions in their method are considering user preferences in generating recommendations (e.g., [79, 113, 122, 132, 136]).
Another way that user preferences are taken into consideration in recommendation is by using explicit preferences specified in the user profile (e.g., interests, location, etc.) or in a dashboard. Gutiérrez et al. [61] designed a dashboard for job seekers to visualize and explore available vacancies based on their preferences. A fuzzy-based recommendation was proposed by Slama and Darmon [125] that matches job seekers with job postings based on the fuzzy preferences in their profiles. Although many studies use such features in the recommendation, we only discuss the papers that explicitly focus on user preferences.
User preferences are sometimes not obtained directly but rather through a preference model. Some studies learn such models from explicit feedback [12, 16-18]. Bills and Ng [18] proposed a matching model aimed at adults with autism. They asked both job seekers and employers some questions to form the preference vectors for both sides and used them in the Gale-Shapley stable matching algorithm [51] to provide the recommendations. Another way that user preferences are modeled is by using implicit feedback and content. Gupta and Garg [60] designed preference matrices for different job seeker groups generated from historical data and used them in their hybrid recommender.
To consider the dynamic nature of user preferences, the change in user preferences is usually captured through the interactions over time. Nigam et al. [104] employed a recurrent neural network (a bidirectional LSTM) with the attention mechanism to capture users' change of preferences over time. Liu et al. [88] proposed an ensemble recommendation system with three different recommenders. Observing the fact that users tend to re-interact with items, a time-reweighted linear ranking model was designed to compute the matching score of a job seeker and a job posting based on the frequency of their previous interactions. The time-dependent weights were learned by optimizing a smoothed hinge loss. Next, a temporal matrix factorization algorithm was designed by introducing a time-related loss term to consider the time of the interactions. Finally, an encoder-decoder model based on LSTMs was employed to model the sequence of job seeker-job interactions. Fu et al. [50] proposed a person-job fit model to transform job seekers' and jobs' heterogeneous dynamic preferences (preferences based on different interactions such as click, apply, chat, invite, and match) into a unified preference space. First, job seekers and jobs were encoded using a hierarchical LSTM. Next, their dynamic preferences were captured through a Dynamic Multi-Key Value Memory Network. This network has a global key matrix for each interaction type (along with their attention weights) and a memory matrix for each job seeker's/job's preferences. Finally, to transfer the preferences from auxiliary behavior (interaction types other than match) to the matching task, the parameters were learned with a multi-task objective, which is the weighted sum of the losses for each interaction type.
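A minimal sketch of the time-reweighting idea behind such dynamic-preference models is shown below: older interactions are discounted so that the preference estimate tracks recent behaviour. The exponential decay and the 14-day half-life are assumed for illustration and are not taken from the cited models, which learn their time weights from data.

```python
# Minimal sketch of time-reweighting past interactions so that recent
# behaviour dominates the preference estimate. Decay and half-life are
# illustrative assumptions.

import math

def decay_weight(age_days: float, half_life_days: float = 14.0) -> float:
    """Exponential decay: an interaction loses half its weight every half-life."""
    return math.pow(0.5, age_days / half_life_days)

# (job_category, age of the interaction in days) for one job seeker.
history = [("data engineering", 60), ("data engineering", 45),
           ("machine learning", 3), ("machine learning", 1)]

preference = {}
for category, age in history:
    preference[category] = preference.get(category, 0.0) + decay_weight(age)

print(max(preference, key=preference.get))  # 'machine learning'
```

Even though the seeker has more total interactions with data engineering, the recent machine learning activity dominates, which is exactly the behaviour the time-aware models above are designed to capture.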
3.6 Interpretability and explainability
Interpretability often refers to model transparency and the ability to understand why and how the model generates the predictions. On the other hand, explainability often refers to the ability to explain the predictions in human terms, even for complex models. However, interpretability and explainability have often been used interchangeably, and we also use the two terms interchangeably in this section. As described in Section 3.1.5, providing explanations for recommendations in e-recruitment is a challenging and important task, since the recommendations affect people's future careers and explanations help them make more insightful decisions (HS aspect 2.5). In the rest of this section, we briefly discuss the different approaches proposed in the literature to achieve interpretability and explainability for e-recruitment recommendations, which include methods providing explainability in deep neural network models, interpretable machine learning methods, and explicit relations in data. An overview of the approaches that address interpretability and explainability is presented in Fig. 5.

Fig. 5. An overview of the interpretability and explainability challenge: deep neural networks (attention weights; explaining embedding dimensions) and other methods (interpretable machine learning methods; explicit relations in data).

One way explainability is addressed in the deep neural models that use resumes and job descriptions for person-job fit prediction is to visualize the attention weights. The attention weights could show the importance of different words, sentences, or any part of the resume/job description within the resume/job description [109, 110] and also their importance in matching with the target job description/resume words, sentences, or any part of it [81, 109, 110, 140]. Another way to address explainability in deep neural models is proposed by Zhu et al. [143]. For each dimension in the final representation of resumes and jobs resulting from the deep model, high-frequency words were gathered from other resumes and jobs that have high values for that dimension. Hence, a level of explainability was provided for each job posting or resume.
Another approach by which explainability is provided in the literature is by applying interpretable machine learning methods, such as decision trees, to human-readable features [97, 98].
In other studies, explainability is provided using explicit relations in data. In [61], a dashboard was provided to view the job seekers' affinity with the required skills for the jobs that are recommended. In [127], recommendations were generated using a knowledge graph together with a template for explainability, where the template was then completed using the nodes in the knowledge graph. Mentec et al. [99] provide explanations based on the similarity of job seekers' and job postings' skills using a skill ontology.
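As a toy illustration of the attention-weight approach discussed above: after scoring a resume against a job representation, the normalized weights indicate which resume terms drove the match. In the cited works these weights are learned inside deep person-job-fit models; here they come from simple dot products over hypothetical embeddings.

```python
# Toy illustration of attention-based explainability: normalized attention
# weights over resume terms hint at which terms drove the match. Embeddings
# are hypothetical stand-ins for what a trained model would produce.

import numpy as np

resume_terms = {"python": np.array([1.0, 0.2]), "violin": np.array([0.0, 1.0])}
job_vec = np.array([0.9, 0.1])   # pooled job-description representation

scores = np.array([vec @ job_vec for vec in resume_terms.values()])
weights = np.exp(scores) / np.exp(scores).sum()   # softmax over resume terms

for term, w in zip(resume_terms, weights):
    print(f"{term}: {w:.2f}")   # 'python' receives most of the attention
```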
3.7 Specific objectives
E-recruitment recommendation systems usually should satisfy multiple stakeholders, such as employers, job seekers, and sometimes the recommendation platform, which benefits from matching job seekers with jobs (TS aspect 2.2). The platforms' benefits are often subsumed in the job seekers' and employers' benefits, since job seekers' and employers' satisfaction also leads to more revenue for the recommendation platform. Hence, most studies try to improve the recommendations for job seekers and employers. In addition, some studies have considered specific objectives for e-recruitment recommendation systems (e.g., the OWOJ aspect 2.1). We briefly discuss the papers dealing with such issues, which are also described in Section 3.1.6. An overview of this section is presented in Fig. 6.

Fig. 6. An overview of the specific objectives challenge: multiple stakeholders (reciprocal recommenders based on labeled data of both sides, or based on the features of jobs and job seekers or inference rules), the OWOJ aspect (stable matching; job redistribution), and other objectives (specific objective functions).

Since reciprocal recommenders recommend job seekers to job postings and vice versa, they usually consider the benefits of job seekers and employers at the same time. Some studies use historical interactions between job seekers and employers that show the interests of both sides for training. The labeled data for such methods usually includes interview and recruitment data [13-15, 25, 50, 58, 64, 65, 72, 80, 81, 84, 86, 89, 93, 98, 101, 109, 110, 119, 129, 131, 135, 136, 140, 141, 143], actions such as favorite or click data by both job seekers and recruiters [92, 117], or manually annotated data [77, 94]. On the other hand, some methods compute the matching degree of a job seeker and a job posting based on the similarity of their contents, skills, or other features, or by some inference rules [8, 10, 18, 20, 27, 31, 42, 48, 62, 63, 96, 111, 117, 125, 126], and can recommend jobs to job seekers and vice versa with this approach.
Other than the reciprocal nature of recommendation in e-recruitment, some studies have tried to consider the fact that in the job market, for a fixed period of time, each job seeker is hired for one (or a few) job position(s) and vice versa (OWOJ aspect 2.1). A stable matching algorithm was employed in [18] to find recommendations for job seekers and recruiters considering this aspect. Moreover, a job application redistribution system at LinkedIn was proposed in [22] to prevent job postings from receiving too many or too few applications. To achieve this goal, the job recommendation scores were penalized or boosted based on the predicted number of applications using a dynamic forecasting model.
Other objectives have also been investigated for e-recruitment recommendation systems. One such objective for e-recruitment systems is to prevent job seekers from receiving spam. To address this issue, false positives were penalized harshly in the hybrid job recommendation proposed by Yang et al. [137].
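Since [18] relies on the Gale-Shapley algorithm [51], a minimal self-contained sketch of one-to-one stable matching between job seekers and jobs may be helpful. The preference lists below are toy data; a real system would derive them from learned preference vectors, and would handle incomplete lists and unequal set sizes.

```python
# Minimal sketch of one-to-one stable matching with Gale-Shapley [51],
# as used in [18] for the OWOJ aspect. Toy, complete preference lists.

def gale_shapley(seeker_prefs: dict, job_prefs: dict) -> dict:
    """Job seekers propose; returns a stable job -> seeker assignment."""
    free = list(seeker_prefs)                      # seekers not yet matched
    next_choice = {s: 0 for s in seeker_prefs}     # next job to propose to
    match = {}                                     # job -> seeker
    rank = {j: {s: r for r, s in enumerate(p)} for j, p in job_prefs.items()}
    while free:
        s = free.pop(0)
        j = seeker_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if j not in match:
            match[j] = s
        elif rank[j][s] < rank[j][match[j]]:       # job prefers the newcomer
            free.append(match[j])
            match[j] = s
        else:
            free.append(s)
    return match

seekers = {"ann": ["dev", "qa"], "bob": ["dev", "qa"]}
jobs = {"dev": ["bob", "ann"], "qa": ["ann", "bob"]}
print(gale_shapley(seekers, jobs))  # {'dev': 'bob', 'qa': 'ann'}
```

The output is stable: no job seeker and job would both prefer each other over their assigned partners, which operationalizes the one-worker-one-job constraint.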
3.8 Bias and fairness
The problems related to bias and fairness in AI have gained more attention in recent years. Since e-recruitment affects people's career choices, it is crucial to consider the fairness aspects of the recommendations (HS aspect 2.5): e-recruitment is even defined as one of the high-risk domains according to the EU's AI Act (proposal) [32]. Realizing the limitation of pure algorithmic debiasing methods, some researchers have argued that mitigating bias and unfairness in e-recruitment deserves an interdisciplinary point of view involving legal and ethical considerations [112, 118]. Wang et al. [130] addressed the limitation of current debiasing technology by conducting an online user study showing that biased recommendations are preferred by job seekers, which indicates that human bias should be addressed from new perspectives or with new technology.
From a technical point of view, fairness concerns may exist on both sides [1], namely for job seekers and also for job postings, since recommendation in e-recruitment is multi-stakeholder. Some examples of such biases and fairness concerns are job seekers' racial or gender discrimination [85], popularity bias [2], selection bias [28], etc. Moreover, fairness concerns exist for both users and items in e-recruitment recommendation systems; e.g., job seekers with a certain sensitive attribute might not be recommended for specific jobs and also might not receive specific jobs in their recommendations. We briefly discuss the papers dealing with fairness issues, which are also described in Section 3.1.7. We first present the studies focusing on fairness for job seekers and then the papers addressing fairness issues for job postings. An overview of the approaches that address fairness issues in e-recruitment recommendation systems is presented in Fig. 7.

Fig. 7. An overview of the bias and fairness challenge: fairness for job seekers (reranking; debiasing embeddings) and fairness for jobs (unbiased loss functions).

To provide fair recommendations concerning job seekers, Geyik et al. [53] proposed a fairness-aware framework for ranking job seekers as used in search and in recommending job seekers. Four deterministic reranking algorithms were proposed to mitigate biased prediction towards any sensitive group. Islam et al. [70] addressed the gender bias in job recommendation by proposing a neural fair collaborative filtering model (NFCF). Job seeker embeddings were pre-trained from non-e-recruitment recommendation data (e.g., movie recommendation) and then debiased with a technique similar to that used for debiasing word vectors, so that the gender component is removed from each job seeker embedding. Next, the debiased job seeker embeddings were used in the fine-tuning stage for job recommendation to ensure that sensitive attributes do not affect the outputs of the system.
To provide fairness for job postings, Chen et al. [28] tackled the recency bias in job recommendation. They considered the recency bias as a type of selection bias imposed by the job seekers and designed an unbiased loss using inverse propensity weighting in a neural collaborative filtering model.
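A minimal sketch of the inverse-propensity-weighting idea in the spirit of [28] is shown below: interactions with heavily exposed (recent) jobs are down-weighted in the loss. The propensity function, its parameters, and the samples are illustrative assumptions rather than the actual model of the paper, which estimates propensities from data.

```python
# Minimal sketch of an IPW-corrected loss for recency bias: each sample's
# binary cross-entropy is divided by its (assumed) exposure propensity.

import math

def propensity(age_days: float) -> float:
    """Assumed exposure probability: fresh postings are seen far more often."""
    return max(0.05, math.exp(-age_days / 10.0))

def ipw_loss(samples) -> float:
    """Weighted binary cross-entropy; each sample is (label, prediction, job_age)."""
    total = 0.0
    for y, p, age in samples:
        bce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += bce / propensity(age)            # inverse propensity weight
    return total / len(samples)

samples = [(1, 0.8, 1.0),    # click on a fresh, heavily exposed job
           (1, 0.6, 30.0)]   # a click on an old job counts for much more
print(round(ipw_loss(samples), 3))
```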
3.9 Large scale
Real-world job recommendation systems have to deal with millions of job seekers and job postings. Hence, recommending at large scale needs to be considered in online job market platforms. We briefly discuss the papers dealing with the large scale issues described in Section 3.1.8, which include reducing the execution time and the consumed storage/memory in the training and inference phases. An overview of the approaches that address large scale issues in e-recruitment recommendation systems is presented in Fig. 8.

Fig. 8. An overview of the large scale challenge: training phase (item-based methods; scalable algorithms such as parallel methods), inference phase (two-stage methods), and both phases (big data computations; reducing the number of individual jobs/job seekers, e.g., by clustering).

To deal with the execution time and consumed storage/memory issues during the training phase, a study from CareerBuilder (https://www.careerbuilder.com/) [122] created an item-based graph of jobs with edges representing job similarities based on behavioral and content-based signals. An item-based graph of jobs with different similarity scores was used rather than a user-based (job seeker based) or user-item (job-job seeker) graph for scalability. A subgraph of this job graph was selected by a job seeker's resume or past clicks, and the recommendations were generated by applying the PageRank algorithm to this subgraph. In a study at LinkedIn [139], a scalable algorithm (a parallel block-wise coordinate descent algorithm) was designed for learning the GLMix model to predict the user response.
To deal with the response time in the inference phase, a two-stage architecture is often used by industry leaders, where the first stage selects a pool of candidates from a large number of items using a computationally inexpensive model, and the second stage reranks the results using a more expensive model. One example of such two-stage architectures was designed for recommendation at CareerBuilder [141]. The first stage was designed to select hundreds of candidates from millions using FAISS [74] to find the nearest neighbors of an entity in the embedding space. The embeddings were calculated through three components: a deep neural network to learn from the textual data, a representation framework to learn from three graphs constructed from jobs and skills [33], and a geolocation embedding calculator [89]. The second stage was designed to rerank the candidates using a weighted linear combination of the first stage scores and context-based scores. In [21], a candidate selection model, CaSMoS, was proposed as the first stage in the two-stage recommendation framework at LinkedIn. CaSMoS is the framework that learns the first stage model, candidate selection, using the Weighted AND (WAND) query operator [24].
From another perspective, to deal with scalability issues both in the training and inference phases, Boukari et al. [23] employed Apache Spark, a tool to process big data, to recommend jobs to job seekers using a content-based algorithm. Another proposed approach to deal with big data and the large number of entities is to cluster jobs and/or job seekers [29, 40, 100].
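A minimal sketch of such a two-stage architecture follows: an inexpensive retrieval step shortlists candidates, and only the shortlist is reranked by a costlier scorer. Plain NumPy inner-product search stands in for an ANN index such as FAISS, and the context feature in the second stage is hypothetical.

```python
# Minimal sketch of a two-stage recommender: cheap retrieval over the full
# index, expensive reranking over the shortlist only. All data is synthetic.

import numpy as np

rng = np.random.default_rng(0)
job_vecs = rng.normal(size=(10_000, 32)).astype(np.float32)   # toy job index
user_vec = rng.normal(size=32).astype(np.float32)

# Stage 1: retrieve the top-100 jobs by inner product. At production scale an
# ANN library such as FAISS would replace this brute-force search.
scores = job_vecs @ user_vec
shortlist = np.argpartition(-scores, 100)[:100]

# Stage 2: rerank the shortlist with a more expensive scorer. As a stand-in,
# we combine the retrieval score with a hypothetical context feature.
context_boost = rng.random(len(shortlist))        # e.g., location match
final = scores[shortlist] + 0.5 * context_boost
top10 = shortlist[np.argsort(-final)[:10]]
print(top10)
```

The design pays the expensive scoring cost only for the shortlist, which is what makes the second stage affordable at millions of items.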
3.10 Papers not included in previous sections
Some of the collected papers are not included in the previous sections because they did not directly address any of the challenges discussed in this survey [5-7, 19, 26, 30, 33, 34, 39, 41, 46, 47, 56, 66, 67, 69, 73, 75, 78, 83, 87, 102, 105, 107, 108, 114, 133, 142, 144]. However, some papers tackle a specific challenge in e-recruitment recommendation systems, such as dealing with missing features [73] or applying different recommendation strategies for different groups of job seekers [69, 73]. We did not discuss such challenges in these papers since either there were not many papers dealing with the same issues or these issues were considered to be of lesser practical significance compared to the challenges highlighted in the present survey. Practical challenges and lessons learned from the e-recruitment recommendation system at LinkedIn are also discussed in two talks [54, 76].

4 CONCLUSION
In this section we provide our final remarks. We first provide a summary of this survey in Section 4.1. Next, we discuss the limitations of this survey in Section 4.2. Finally, open challenges and future research directions of recommendation in e-recruitment are discussed in Section 4.3.

4.1 Summary
E-recruitment recommendation includes recommending jobs to job seekers and job seekers to jobs. We identified eight challenges that have been studied in the past decade for recommendation in e-recruitment. Since the available data for training an e-recruitment recommendation model includes the interactions between job seekers and job positions together with their features and textual contents, several studies have addressed data quality issues.
Job seekers' and jobs' data usually include textual content, location, categorical features, etc., which could also be enriched with external data sources. Moreover, there are many interaction types, such as click, apply, invite, chat, interview, etc., in e-recruitment platforms. Therefore, dealing with heterogeneous data, and multiple interaction types and data sources is another challenge in e-recruitment.
Since job positions with the same content are often represented as different entities in e-recruitment recommendation systems (different job entities with distinct IDs may have the same title/content), the cold start problem needs more attention in e-recruitment recommendation compared to traditional recommenders. The availability of many facets in the e-recruitment domain could help alleviate the cold start problem.
Traditional recommendation systems mainly consider user preferences for generating the recommendations, while e-recruitment recommendation systems have to match job seekers with jobs based on the job seekers' skills and the jobs' required skills as well. Hence, e-recruitment recommendation systems should consider user preferences as well as suitability.
Explainable recommendations in general help users make better decisions. Nonetheless, interpretability and explainability are even more important in e-recruitment recommendation systems, since e-recruitment recommendation has a great influence on job seekers' future careers and also on the employers of companies.
Recommendation systems in a specific domain could have specific objectives. In e-recruitment, the goal is usually to satisfy multiple stakeholders, including job seekers, recruiters, and service providers. Moreover, e-recruitment recommendation systems should consider the fact that each job seeker could be employed for one or a few job positions and vice versa, which can introduce new objectives for recommendation systems.
Bias and fairness issues are challenging for most recommendation systems. In e-recruitment, it is even more critical to provide fair recommendations due to the possible high stakes involved for both job seekers and employers.
Finally, large scale issues cannot be ignored in designing real-world recommendation systems. Since e-recruitment recommendation systems usually have to provide services for thousands or millions of job seekers and job positions, they have to consider the large scale aspects of the recommendation system.

4.2 Limitations of the survey
We have selected and elaborated the main challenges in e-recruitment recommendation from our point of view, but there could be other challenges in this domain. For example, extracting features from textual data with different granularity could also be considered as another challenge, albeit not specific to the e-recruitment domain. Identifying more challenges and categorizing papers based on their approaches to address them remains for the future.
Since e-recruitment recommendation could be a reciprocal recommendation task (recommending jobs to job seekers and vice versa), reviewing the challenges in other reciprocal recommendation systems (e.g., online dating) could also be useful for designing e-recruitment recommendation systems. We omitted papers from other reciprocal recommendation domains to limit the scope of this survey.

4.3 Open challenges and future research directions
While there has been much useful work addressing certain aspects of e-recruitment recommendation systems, there are still some open challenges in this domain that could be investigated in future research works.
Some of the challenges that we personally consider promising include:
• One worker, one job (OWOJ aspect 2.1). Since each job seeker can only be employed for one or a few jobs and a job can be assigned to one or a few candidates, balancing the recommendations in a way that job postings do not receive too many or too few applications is of great importance. Moreover, each job/job seeker should receive recommendations with a high chance of success. This would require the recommendation system to consider the relative probability of matching, that is, how likely one's recommended jobs would be successfully matched with other job seekers. Although some aspects of these issues have been addressed in a few papers (see Section 3.7), this challenge still needs further investigation for more insights and new solutions.
• Career path recommendation. Some job seekers choose their next jobs in a way that helps them reach their dream jobs in the future. This problem has been addressed by a few career path recommendation systems, which recommend intermediate jobs to reach the final career goal [55]. This line of research could be investigated in future studies.
• Domain adaptation. Domain adaptation techniques can improve model performance with limited labeled data, but the application of such techniques in e-recruitment recommendation has not been well investigated, except in a few studies such as [14]. Methods for domain adaptation between different job sectors, languages, platforms, countries, etc., would be worth investigating to improve the performance of e-recruitment recommendation systems.
• Multi-linguality. Many platforms/countries have resumes and job postings in multiple languages. Hence, e-recruitment recommendation systems on those platforms and in those countries should support multiple languages and cross-matching of resumes and job postings with different languages. Although some papers have addressed this problem (see Section 3.2), further investigation is still needed to provide better support for multi-lingual platforms.
• Conversational. Conversational recommendation systems perform multi-turn dialogue with users to achieve recommendation related goals [71]. Although conversational recommendation systems have become more popular in recent years [52], few studies have explored conversational settings in the e-recruitment domain [12, 16, 17, 99]. Conversational recommendation can elicit the current user's preference, provide explanations, make use of explicit feedback, etc., which makes it valuable to e-recruitment and worthwhile for future studies [52].
• Specific job seekers. Some groups of job seekers may need special attention from e-recruitment recommendation systems. First, user interfaces need to be designed specifically for certain user groups to enhance their interactions with the system (e.g., for people with special needs). This aspect should also be considered for some groups of recruiters. Moreover, some groups of job seekers might be fit for some specific jobs. For example, adults with autism are among the most underemployed demographics [18]. However, they have special skills to contribute to the workplace if applied to the right job [18].
Although there have been some job recommenders designed for specific job seekers, such as students and new graduates [46, 75, 90-92, 107, 121, 142], the elderly [10], and people with special needs [18, 124], exploring the needs of more subgroups of job seekers could greatly benefit the e-recruitment field. More specifically, designing a taxonomy of different groups of job seekers with their characteristics and needs would be a good starting point, which could further encourage collecting data for designing recommendation methods that can take the differences between different groups of job seekers into consideration.
• Fairness. Fair recommendation in e-recruitment is even more important than in other recommendation systems because people's career choices are influenced by their recommended jobs and the recommendations may also have a long-term impact on the labor market (HS aspect 2.5). Although there has been growing attention to fairness issues in general recommendation settings, not many papers specifically address these issues in e-recruitment recommendation systems (as shown in Section 3.8). One reason could be that the fairness issues are more complicated than in other recommendation systems due to the reciprocal nature of, and multiple stakeholders involved in, e-recruitment. Another reason might be that there are relatively few open datasets for this specific field, as elaborated below.

Another challenge in research on e-recruitment recommendation systems is that few public datasets are available. As far as we know, there are only two public datasets: the CareerBuilder 2012 dataset (https://www.kaggle.com/c/job-recommendation) on Kaggle (https://www.kaggle.com/) from the e-recruitment platform CareerBuilder (https://www.careerbuilder.com/), and the Zhilian dataset (https://tianchi.aliyun.com/dataset/dataDetail?dataId=31623) from the Chinese e-recruitment platform Zhilian (https://www.zhaopin.com). The two datasets for the RecSys challenges 2016 [3] and 2017 [4], provided by the e-recruitment platform Xing (https://www.xing.com), although used in some related studies, are not publicly available. Advances in e-recruitment recommendation systems from academic research depend on the availability of public datasets: more publicly available data could help to establish stronger benchmarks, and larger datasets of more variety could also facilitate new ideas in the field.

ACKNOWLEDGMENTS
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) (ERC Grant Agreement no. 615517) and under the European Union's Horizon 2020 research and innovation programme (ERC Grant Agreement no. 963924), from the Special Research Fund (BOF) of Ghent University (BOF20/IBF/117), from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme, and from the FWO (project no. G0F9816N, 3G042220).

REFERENCES
[1] Himan Abdollahpouri, Gediminas Adomavicius, Robin Burke, Ido Guy, Dietmar Jannach, Toshihiro Kamishima, Jan Krasnodebski, and Luiz Pizzato. 2020. Multistakeholder recommendation: Survey and research directions. User Modeling and User-Adapted Interaction 30, 1 (2020), 127-158.
[2] Himan Abdollahpouri, Masoud Mansoury, Robin Burke, and Bamshad Mobasher. 2020. Addressing the multistakeholder impact of popularity bias in recommendation through calibration. arXiv preprint arXiv:2007.12230 (2020).
[3] Fabian Abel, András Benczúr, Daniel Kohlsdorf, Martha Larson, and Róbert Pálovics. 2016. RecSys challenge 2016: Job recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems. 425-426.
[4] Fabian Abel, Yashar Deldjoo, Mehdi Elahi, and Daniel Kohlsdorf. 2017.
RecSys challenge 2017: Offline and online evaluation. In Proceedings of the Eleventh ACM Conference on Recommender Systems. 372-373.
[5] Shaha Al-Otaibi and Mourad Ykhlef. 2017. Hybrid immunizing solution for job recommender system. Frontiers of Computer Science 11, 3 (2017), 511-527.
[6] Nikolaos D Almalis, George A Tsihrintzis, and Nikolaos Karagiannis. 2014. A content based approach for recommending personnel for job positions. In IISA 2014, The 5th International Conference on Information, Intelligence, Systems and Applications. IEEE, 45-49.
[7] Nikolaos D Almalis, George A Tsihrintzis, and Nikolaos Karagiannis. 2014. A content based approach for recommending personnel for job positions. In IISA 2014, The 5th International Conference on Information, Intelligence, Systems and Applications. IEEE, 45-49.
[8] Nikolaos D Almalis, George A Tsihrintzis, Nikolaos Karagiannis, and Aggeliki D Strati. 2015. FoDRA - A new content-based job recommendation algorithm for job seeking and recruiting. In 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA). IEEE, 1-7.
[9] Honorio Apaza, Américo Ariel Rubin de Celis Vidal, and Josimar Edinson Chire Saire. 2021. Job Recommendation Based on Curriculum Vitae Using Text Mining. In Future of Information and Communication Conference. Springer, 1051-1059.
[10] Shoma Arita, Atsushi Hiyama, and Michitaka Hirose. 2017. Gber: A social matching app which utilizes time, place, and skills of workers and jobs. In Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 127-130.
[11] Shivam Bansal, Aman Srivastava, and Anuja Arora. 2017. Topic modeling driven content based jobs recommendation engine for recruitment industry. Procedia Computer Science 122 (2017), 865-872.
[12] Vito Bellini, Giovanni Maria Biancofiore, Tommaso Di Noia, Eugenio Di Sciascio, Fedelucio Narducci, and Claudio Pomo. 2020. GUapp: a conversational agent for job recommendation for the Italian public administration. In 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS). IEEE, 1-7.
[13] Shuqing Bian, Xu Chen, Wayne Xin Zhao, Kun Zhou, Yupeng Hou, Yang Song, Tao Zhang, and Ji-Rong Wen. 2020. Learning to match jobs with resumes from sparse interaction data using multi-view co-teaching network. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 65-74.
[14] Shuqing Bian, Wayne Xin Zhao, Yang Song, Tao Zhang, and Ji-Rong Wen. 2019. Domain adaptation for person-job fit with transferable deep global match network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 4810-4820.
[15] Mattia Bianchi, Federico Cesaro, Filippo Ciceri, Mattia Dagrada, Alberto Gasparin, Daniele Grattarola, Ilyas Inajjar, Alberto Maria Metelli, and Leonardo Cella. 2017. Content-based approaches for cold-start job recommendations. In Proceedings of the Recommender Systems Challenge 2017. 1-5.
[16] Giovanni Maria Biancofiore, Tommaso Di Noia, Eugenio Di Sciascio, Fedelucio Narducci, and Paolo Pastore. 2021. GUapp: a knowledge-aware conversational agent for job recommendation. In Proceedings of the Joint KaRS & ComplexRec Workshop.
CEUR-WS.
[17] Giovanni Maria Biancofiore, Tommaso Di Noia, Eugenio Di Sciascio, Fedelucio Narducci, and Paolo Pastore. 2021. GUapp: Enhancing Job Recommendations with Knowledge Graphs. In Proceedings of the 11th Italian Information Retrieval Workshop. CEUR-WS.
[18] Joseph Bills and Yiu-kai Dennis Ng. 2021. Looking for Jobs? Matching Adults with Autism with Potential Employers for Job Opportunities. In 25th International Database Engineering & Applications Symposium. 212-221.
[19] Ronie C Bituin, Ronielle B Antonio, and James A Esquivel. 2020. Harmonic Means between TF-IDF and Angle of Similarity to Identify Prospective Applicants in a Recruitment Setting. In 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence. 1-5.
[20] Jacob Bollinger, David Hardtke, and Ben Martin. 2012. Using social data for resume job matching. In Proceedings of the 2012 Workshop on Data-driven User Behavioral Modelling and Mining from Social Media. 27-30.
[21] Fedor Borisyuk, Krishnaram Kenthapadi, David Stein, and Bo Zhao. 2016. CaSMoS: A framework for learning candidate selection models over structured queries and documents. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 441-450.
[22] Fedor Borisyuk, Liang Zhang, and Krishnaram Kenthapadi. 2017. LiJAR: A system for job application redistribution towards efficient career marketplace. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1397-1406.
[23] Shayma Boukari, Sondes Fayech, and Rim Faiz. 2020. Huntalent: A candidates recommendation system for automatic recruitment via LinkedIn. In 2020 Seventh International Conference on Social Networks Analysis, Management and Security (SNAMS). IEEE, 1-7.
[24] Andrei Z Broder, David Carmel, Michael Herscovici, Aya Soffer, and Jason Zien. 2003. Efficient query evaluation using a two-level retrieval process. In Proceedings of the Twelfth International Conference on Information and Knowledge Management. 426-434.
[25] Alan Cardoso, Fernando Mourão, and Leonardo Rocha. 2021. The matching scarcity problem: When recommenders do not connect the edges in recruitment services. Expert Systems with Applications 175 (2021), 114764.
[26] Tommaso Carpi, Marco Edemanti, Ervin Kamberoski, Elena Sacchi, Paolo Cremonesi, Roberto Pagano, and Massimo Quadrana. 2016. Multi-stack ensemble for job recommendation. In Proceedings of the Recommender Systems Challenge. 1-4.
[27] Sisay Chala and Madjid Fathi. 2017. Job seeker to vacancy matching using social network analysis. In 2017 IEEE International Conference on Industrial Technology (ICIT). IEEE, 1250-1255.
[28] Ruey-Cheng Chen, Qingyao Ai, Gaya Jayasinghe, and W Bruce Croft. 2019. Correcting for recency bias in job recommendation. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2185-2188.
[29] Wenbo Chen, Pan Zhou, Shaokang Dong, Shimin Gong, Menglan Hu, Kehao Wang, and Dapeng Wu. 2018. Tree-based contextual learning for online job or candidate recommendation with big data support in professional social networks. IEEE Access 6 (2018), 77725-77739.
[30] Oualid Chenni, Yanis Bouda, Hamid Benachour, and Chahnez Zakaria. 2015. A content-based recommendation approach using semantic user profile in e-recruitment. In International Conference on Theory and Practice of Natural Computing. Springer, 23-32.
[31] Bruno Coelho, Fernando Costa, and Gil M Gonçalves. 2015.
Hyred: hybrid job recommendation system. In 2015 12th International Joint Conference on e-Business and Telecommunications (ICETE), Vol. 2. IEEE, 29–38.
[32] Council of European Union. 2022. Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS 2014. https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206
[33] Vachik S Dave, Baichuan Zhang, Mohammad Al Hasan, Khalifeh AlJadda, and Mohammed Korayem. 2018. A combined representation learning approach for better job and skill recommendation. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management. 1997–2005.
[34] Toon De Pessemier, Kris Vanhecke, and Luc Martens. 2016. A scalable, high-performance algorithm for hybrid job recommendations. In Proceedings of the Recommender Systems Challenge. 1–4.
[35] Corné de Ruijt and Sandjai Bhulai. 2021. Job recommender systems: A review. arXiv preprint arXiv:2111.13576 (2021).
[36] Mamadou Diaby and Emmanuel Viennet. 2014. Taxonomy-based job recommender systems on Facebook and LinkedIn profiles. In 2014 IEEE Eighth International Conference on Research Challenges in Information Science (RCIS). IEEE, 1–6.
[37] Mamadou Diaby, Emmanuel Viennet, and Tristan Launay. 2013. Toward the next generation of recruitment tools: an online social network-based job recommender system. In 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013). IEEE, 821–828.
[38] Mamadou Diaby, Emmanuel Viennet, and Tristan Launay. 2014. Exploration of methodologies to improve job recommender systems on social networks. Social Network Analysis and Mining 4, 1 (2014), 1–17.
[39] Giacomo Domeniconi, Gianluca Moro, Andrea Pagliarani, Karin Pasini, and Roberto Pasolini. 2016. Job recommendation from semantic similarity of LinkedIn users' skills. In International Conference on Pattern Recognition Applications and Methods, Vol. 2. SciTePress, 270–277.
[40] Shaokang Dong, Zijian Lei, Pan Zhou, Kaigui Bian, and Guanghui Liu. 2017. Job and candidate recommendation with big data support: a contextual online learning approach. In GLOBECOM 2017 - 2017 IEEE Global Communications Conference. IEEE, 1–7.
[41] Verena Eitle, Felix Peters, Andreas Welsch, and Peter Buxmann. 2021. The Impact of CV Recommender Systems on Procedural Justice in Recruiting: An Experiment in Candidate Selection. (2021).
[42] Ziad Elgammal, Abdullah Barmu, Hamza Hassan, Khaled Elgammal, Tansel Özyer, and Reda Alhajj. 2021. Matching Applicants with Positions for Better Allocation of Employees in the Job Market. In 2021 22nd International Arab Conference on Information Technology (ACIT). IEEE, 1–5.
[43] Ahmed Elsafty, Martin Riedl, and Chris Biemann. 2018. Document-based recommender system for job postings using dense representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers). 216–224.
[44] Evanthia Faliagka, Lazaros Iliadis, Ioannis Karydis, Maria Rigou, Spyros Sioutas, Athanasios Tsakalidis, and Giannis Tzimas. 2014. On-line consistent ranking on e-recruitment: seeking the truth behind a well-formed CV. Artificial Intelligence Review 42, 3 (2014), 515–528.
[45] Evanthia Faliagka, Athanasios Tsakalidis, and Giannis Tzimas. 2012.
An integrated e-recruitment system for automated personality mining and applicant ranking. Internet Research (2012).
[46] Peini Feng, Charles Jiahao Jiang, Jiale Wang, Sunny Yeung, and Xijie Li. 2021. Job Recommendation System Based on Analytic Hierarchy Process and K-means Clustering. In 2021 The 13th International Conference on Computer Modeling and Simulation. 104–113.
[47] Francis C Fernández-Reyes and Suraj Shinde. 2019. CV Retrieval System based on job description matching using hybrid word embeddings. Computer Speech & Language 56 (2019), 73–79.
[48] Mauricio Noris Freire and Leandro Nunes de Castro. 2020. A Framework for e-Recruitment Recommender Systems. In International Conference on Artificial Intelligence and Soft Computing. Springer, 165–175.
[49] Mauricio Noris Freire and Leandro Nunes de Castro. 2021. e-Recruitment recommender systems: a systematic review. Knowledge and Information Systems 63, 1 (2021), 1–20.
[50] Bin Fu, Hongzhi Liu, Yao Zhu, Yang Song, Tao Zhang, and Zhonghai Wu. 2021. Beyond Matching: Modeling Two-Sided Multi-Behavioral Sequences for Dynamic Person-Job Fit. In International Conference on Database Systems for Advanced Applications. Springer, 359–375.
[51] David Gale and Lloyd S Shapley. 2013. College admissions and the stability of marriage. The American Mathematical Monthly 120, 5 (2013), 386–391.
[52] Chongming Gao, Wenqiang Lei, Xiangnan He, Maarten de Rijke, and Tat-Seng Chua. 2021. Advances and challenges in conversational recommender systems: A survey. AI Open 2 (2021), 100–126.
[53] Sahin Cem Geyik, Stuart Ambler, and Krishnaram Kenthapadi. 2019. Fairness-aware ranking in search & recommendation systems with application to LinkedIn talent search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2221–2231.
[54] Sahin Cem Geyik, Qi Guo, Bo Hu, Cagri Ozcaglar, Ketan Thakkar, Xianren Wu, and Krishnaram Kenthapadi. 2018. Talent search and recommendation systems at LinkedIn: Practical challenges and lessons learned. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. 1353–1354.
[55] Aritra Ghosh, Beverly Woolf, Shlomo Zilberstein, and Andrew Lan. 2020. Skill-based Career Path Modeling and Recommendation. In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 1156–1165.
[56] Alfonso González-Briones, Alberto Rivas, Pablo Chamoso, Roberto Casado-Vara, and Juan Manuel Corchado. 2018. Case-based reasoning and agent based job offer recommender system. In The 13th International Conference on Soft Computing Models in Industrial and Environmental Applications. Springer, 21–33.
[57] Akshay Gugnani and Hemant Misra. 2020. Implicit skills extraction using document embedding and its use in job recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 13286–13293.
[58] Cheng Guo, Hongyu Lu, Shaoyun Shi, Bin Hao, Bin Liu, Min Zhang, Yiqun Liu, and Shaoping Ma. 2017. How integration helps on cold-start recommendations. In Proceedings of the Recommender Systems Challenge 2017. 1–6.
[59] Shiqiang Guo, Folami Alamudun, and Tracy Hammond. 2016. RésuMatcher: A personalized résumé-job matching system. Expert Systems with Applications 60 (2016), 169–182.
[60] Anika Gupta and Deepak Garg. 2014. Applying data mining techniques in job recommender system for considering candidate job preferences. In 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, 1458–1465.
[61] Francisco Gutiérrez, Sven Charleer, Robin De Croon, Nyi Nyi Htun, Gerd Goetschalckx, and Katrien Verbert.
2019. Explaining and exploring job recommendations: a user-driven approach for interacting with knowledge-based job recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems. 60–68.
[62] Amine Habous and El Habib Nfaoui. 2021. A fuzzy logic and ontology-based approach for improving the CV and job offer matching in recruitment process. International Journal of Metadata, Semantics and Ontologies 15, 2 (2021), 104–120.
[63] Claudia Hauff and Georgios Gousios. 2015. Matching GitHub developer profiles to job advertisements. In 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories. IEEE, 362–366.
[64] Miao He, Dayong Shen, Tao Wang, Hua Zhao, Zhongshan Zhang, and Renjie He. 2021. Self-Attentional Multi-Field Features Representation and Interaction Learning for Person-Job Fit. IEEE Transactions on Computational Social Systems (2021).
[65] Miao He, Tao Wang, Yuanyuan Zhu, Yingguo Chen, Feng Yao, and Ning Wang. 2021. FINN: Feature Interaction Neural Network for Person-Job Fit. In 2021 7th International Conference on Big Data and Information Analytics (BigDIA). IEEE, 123–130.
[66] Bradford Heap, Alfred Krzywicki, Wayne Wobcke, Mike Bain, and Paul Compton. 2014. Combining career progression and profile matching in a job recommender system. In Pacific Rim International Conference on Artificial Intelligence. Springer, 396–408.
[67] Islam A Heggo and Nashwa Abdelbaki. 2018. Hybrid information filtering engine for personalized job recommender system. In International Conference on Advanced Machine Learning Technologies and Applications. Springer, 553–563.
[68] Wenxing Hong, Siting Zheng, and Huan Wang. 2013. Dynamic user profile-based job recommender system. In 2013 8th International Conference on Computer Science & Education. IEEE, 1499–1503.
[69] Wenxing Hong, Siting Zheng, Huan Wang, and Jianchao Shi. 2013. A job recommender system based on user clustering. J. Comput. 8, 8 (2013), 1960–1967.
[70] Rashidul Islam, Kamrun Naher Keya, Ziqian Zeng, Shimei Pan, and James Foulds. 2021. Debiasing career recommendations with neural fair collaborative filtering. In Proceedings of the Web Conference 2021. 3779–3790.
[71] Dietmar Jannach, Ahtsham Manzoor, Wanling Cai, and Li Chen. 2021. A survey on conversational recommender systems. ACM Computing Surveys (CSUR) 54, 5 (2021), 1–36.
[72] Junshu Jiang, Songyun Ye, Wei Wang, Jingran Xu, and Xiaosheng Luo. 2020. Learning effective representations for person-job fit by feature fusion. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2549–2556.
[73] Miao Jiang, Yi Fang, Huangming Xie, Jike Chong, and Meng Meng. 2019. User click prediction for personalized job recommendation. World Wide Web 22, 1 (2019), 325–345.
[74] Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data 7, 3 (2019), 535–547.
[75] Krishnaram Kenthapadi, Benjamin Le, and Ganesh Venkataraman. 2017. Personalized job recommendation system at LinkedIn: Practical challenges and lessons learned. In Proceedings of the Eleventh ACM Conference on Recommender Systems. 346–347.
[76] Krishnaram Kenthapadi, Benjamin Le, and Ganesh Venkataraman. 2017. Personalized job recommendation system at LinkedIn: Practical challenges and lessons learned. In Proceedings of the Eleventh ACM Conference on Recommender Systems. 346–347.
[77] Aparup Khatua and Wolfgang Nejdl. 2020.
Matching recruiters and job seekers on Twitter. In 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 266–269.
[78] Emanuel Lacic, Markus Reiter-Haas, Tomislav Duricic, Valentin Slawicek, and Elisabeth Lex. 2019. Should we embed? A study on the online performance of utilizing embeddings for real-time job recommendations. In Proceedings of the 13th ACM Conference on Recommender Systems. 496–500.
[79] Emanuel Lacic, Markus Reiter-Haas, Dominik Kowald, Manoj Reddy Dareddy, Junghoo Cho, and Elisabeth Lex. 2020. Using autoencoders for session-based job recommendations. User Modeling and User-Adapted Interaction 30, 4 (2020), 617–658.
[80] Dor Lavi, Volodymyr Medentsiy, and David Graus. 2021. conSultantBERT: Fine-tuned Siamese Sentence-BERT for Matching Jobs and Job Seekers. arXiv preprint arXiv:2109.06501 (2021).
[81] Ran Le, Wenpeng Hu, Yang Song, Tao Zhang, Dongyan Zhao, and Rui Yan. 2019. Towards effective and interpretable person-job fitting. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1883–1892.
[82] Yeon-Chang Lee, Jiwon Hong, and Sang-Wook Kim. 2016. Job recommendation in AskStory: experiences, methods, and evaluation. In Proceedings of the 31st Annual ACM Symposium on Applied Computing. 780–786.
[83] Vasily Leksin and Andrey Ostapets. 2016. Job recommendation based on factorization machine and topic modelling. In Proceedings of the Recommender Systems Challenge. 1–4.
[84] Changmao Li, Elaine Fisher, Rebecca Thomas, Steve Pittard, Vicki Hertzberg, and Jinho D Choi. 2020. Competence-level prediction and resume & job description matching using context-aware transformer models. arXiv preprint arXiv:2011.02998 (2020).
[85] Yunqi Li, Hanxiong Chen, Shuyuan Xu, Yingqiang Ge, Juntao Tan, Shuchang Liu, and Yongfeng Zhang. 2022. Fairness in Recommendation: A Survey. arXiv preprint arXiv:2205.13619 (2022).
[86] Jianxun Lian, Fuzheng Zhang, Min Hou, Hongwei Wang, Xing Xie, and Guangzhong Sun. 2017. Practical lessons for job recommendations in the cold-start scenario. In Proceedings of the Recommender Systems Challenge 2017. 1–6.
[87] Yiou Lin, Hang Lei, Prince Clement Addo, and Xiaoyu Li. 2016. Machine learned resume-job matching solution. arXiv preprint arXiv:1607.07657 (2016).
[88] Kuan Liu, Xing Shi, Anoop Kumar, Linhong Zhu, and Prem Natarajan. 2016. Temporal learning and sequence modeling for a job recommender system. In Proceedings of the Recommender Systems Challenge. 1–4.
[89] Mengshu Liu, Jingya Wang, Kareem Abdelfatah, and Mohammed Korayem. 2019. Tripartite vector representations for better job recommendation. arXiv preprint arXiv:1907.12379 (2019).
[90] Rui Liu, Yuanxin Ouyang, Wenge Rong, Xin Song, Cui Tang, and Zhang Xiong. 2016. Rating prediction based job recommendation service for college students. In International Conference on Computational Science and Its Applications. Springer, 453–467.
[91] Rui Liu, Wenge Rong, Yuanxin Ouyang, and Zhang Xiong. 2017. A hierarchical similarity based job recommendation service framework for university students. Frontiers of Computer Science 11, 5 (2017), 912–922.
[92] Yao Lu, Sandy El Helou, and Denis Gillet. 2013. A recommender system for job seeking and recruiting website. In Proceedings of the 22nd International Conference on World Wide Web. 963–966.
[93] Yong Luo, Huaizheng Zhang, Yonggang Wen, and Xinwen Zhang. 2019.
ResumeGAN: an optimized deep representation learning framework for talent-job fit via adversarial learning. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1101–1110.
[94] Saket Maheshwary and Hemant Misra. 2018. Matching resumes to jobs via deep Siamese network. In Companion Proceedings of the The Web Conference 2018. 87–88.
[95] Emmanuel Malherbe, Mamadou Diaby, Mario Cataldi, Emmanuel Viennet, and Marie-Aude Aufaure. 2014. Field selection for job categorization and recommendation to social network users. In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014). IEEE, 588–595.
[96] Mohammed Maree, Aseel B Kmail, and Mohammed Belkhatir. 2019. Analysis and shortcomings of e-recruitment systems: Towards a semantics-based approach addressing knowledge incompleteness and limited domain coverage. Journal of Information Science 45, 6 (2019), 713–735.
[97] Jorge Martinez-Gil, Bernhard Freudenthaler, and Thomas Natschläger. 2018. Recommendation of job offers using random forests and support vector machines. In Proceedings of the EDBT/ICDT Joint Conference.
[98] Mohamed Amine Menacer, Fatma Ben Hamda, Ghada Mighri, Sabeur Ben Hamidene, and Maxime Cariou. 2021. An interpretable person-job fitting approach based on classification and ranking. In Proceedings of The Fourth International Conference on Natural Language and Speech Processing (ICNLSP 2021). 130–138.
[99] François Mentec, Zoltán Miklós, Sébastien Hervieu, and Thierry Roger. 2021. Conversational recommendations for job recruiters. In Knowledge-aware and Conversational Recommender Systems.
[100] D Mhamdi, Reda Moulouki, Mohammed Yassine El Ghoumari, M Azzouazi, and L Moussaid. 2020. Job recommendation based on job profile clustering and job seeker behavior. Procedia Computer Science 175 (2020), 695–699.
[101] Tsunenori Mine, Tomoyuki Kakuta, and Akira Ono. 2013. Reciprocal recommendation for job matching with bidirectional feedback. In 2013 Second IIAI International Conference on Advanced Applied Informatics. IEEE, 39–44.
[102] Sonu K Mishra and Manoj Reddy. 2016. A bottom-up approach to job recommendation system. In Proceedings of the Recommender Systems Challenge. 1–4.
[103] Ala Mughaid, Ibrahim Obeidat, Bilal Hawashin, Shadi AlZu'bi, and Darah Aqel. 2019. A smart geo-location job recommender system based on social media posts. In 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS). IEEE, 505–510.
[104] Amber Nigam, Aakash Roy, Hartaran Singh, and Harsimran Waila. 2019. Job recommendation through progression of job selection. In 2019 IEEE 6th International Conference on Cloud Computing and Intelligence Systems (CCIS). IEEE, 212–216.
[105] Andrzej Pacuk, Piotr Sankowski, Karol Wegrzycki, Adam Witkowski, and Piotr Wygocki. 2016. RecSys Challenge 2016: Job recommendations based on preselection of offers and gradient boosting. In Proceedings of the Recommender Systems Challenge. 1–4.
[106] Ioannis Paparrizos, B Barla Cambazoglu, and Aristides Gionis. 2011. Machine learned job recommendation. In Proceedings of the Fifth ACM Conference on Recommender Systems. 325–328.
[107] Bharat Patel, Varun Kakuste, and Magdalini Eirinaki. 2017. CaPaR: a career path recommendation framework. In 2017 IEEE Third International Conference on Big Data Computing Service and Applications (BigDataService). IEEE, 23–30.
[108] Mirko Polato and Fabio Aiolli. 2016. A preliminary study on a recommender system for the job recommendation challenge.
In Proceedings of the Recommender Systems Challenge. 1–4.
[109] Chuan Qin, Hengshu Zhu, Tong Xu, Chen Zhu, Liang Jiang, Enhong Chen, and Hui Xiong. 2018. Enhancing person-job fit for talent recruitment: An ability-aware neural network approach. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. 25–34.
[110] Chuan Qin, Hengshu Zhu, Tong Xu, Chen Zhu, Chao Ma, Enhong Chen, and Hui Xiong. 2020. An enhanced neural network approach to person-job fit in talent recruitment. ACM Transactions on Information Systems (TOIS) 38, 2 (2020), 1–33.
[111] Gábor Rácz, Attila Sali, and Klaus-Dieter Schewe. 2016. Semantic matching strategies for job recruitment: A comparison of new and known approaches. In FoIKS. Springer, 149–168.
[112] Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy. 2020. Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 469–481.
[113] Michael Reusens, Wilfried Lemahieu, Bart Baesens, and Luc Sels. 2017. A note on explicit versus implicit information for job recommendation. Decision Support Systems 98 (2017), 26–35.
[114] Alberto Rivas, Pablo Chamoso, Alfonso González-Briones, Roberto Casado-Vara, and Juan Manuel Corchado. 2019. Hybrid job offer recommender system in a social network. Expert Systems 36, 4 (2019), e12416.
[115] Leah G Rodriguez and Enrico P Chavez. 2019. Feature selection for job matching application using profile matching model. In 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS). IEEE, 263–266.
[116] Pradeep Kumar Roy, Sarabjeet Singh Chowdhary, and Rocky Bhatia. 2020. A Machine Learning approach for automation of Resume Recommendation system. Procedia Computer Science 167 (2020), 2318–2327.
[117] Oscar M Salazar, Juan C Jaramillo, Demetrio A Ovalle, and Jaime A Guzmán. 2015. A case-based multi-agent and recommendation environment to improve the e-recruitment process. In International Conference on Practical Applications of Agents and Multi-Agent Systems. Springer, 389–397.
[118] Javier Sánchez-Monedero, Lina Dencik, and Lilian Edwards. 2020. What does it mean to 'solve' the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 458–468.
[119] Masahiro Sato, Koki Nagatani, and Takuji Tahara. 2017. Exploring an optimal online model for new job recommendation: Solution for RecSys Challenge 2017. In Proceedings of the Recommender Systems Challenge 2017. 1–5.
[120] Thomas Schmitt, Philippe Caillou, and Michele Sebag. 2016. Matching jobs and resumes: a deep collaborative filtering task. In GCAI 2016 - 2nd Global Conference on Artificial Intelligence, Vol. 41.
[121] Thomas Schmitt, François Gonard, Philippe Caillou, and Michèle Sebag. 2017. Language modelling for collaborative filtering: Application to job applicant matching. In 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 1226–1233.
[122] Walid Shalaby, Bahaa Eddin Al Aila, Mohammed Korayem, Layla Pournajaf, Khalifeh AlJadda, Shannon Quinn, and Wlodek Zadrozny. 2017. Help me find a job: A graph-based approach for job recommendation at scale. In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 1544–1553.
[123] Baoxu Shi, Jaewon Yang, Feng Guo, and Qi He. 2020.
Salience and market-aware skill extraction for job targeting. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2871–2879.
[124] Saman Shishehchi and Seyed Yashar Banihashem. 2019. JRDP: a job recommender system based on ontology for disabled people. International Journal of Technology and Human Interaction (IJTHI) 15, 1 (2019), 85–99.
[125] Olfa Slama and Patrice Darmon. 2021. A Novel Personalized Preference-based Approach for Job/Candidate Recommendation. In International Conference on Research Challenges in Information Science. Springer, 418–434.
[126] Ellery Smith, Andreas Weiler, and Martin Braschler. 2021. Skill Extraction for Domain-Specific Text Retrieval in a Job-Matching Platform. In International Conference of the Cross-Language Evaluation Forum for European Languages. Springer, 116–128.
[127] Chirayu Upadhyay, Hasan Abu-Rasheed, Christian Weber, and Madjid Fathi. 2021. Explainable Job-Posting Recommendations Using Knowledge Graphs and Named Entity Recognition. In 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 3291–3296.
[128] Jorge Carlos Valverde-Rebaza, Ricardo Puma, Paul Bustios, and Nathalia C Silva. 2018. Job Recommendation Based on Job Seeker Skills: An Empirical Study. In Text2Story@ECIR. 47–51.
[129] Maksims Volkovs, Guang Wei Yu, and Tomi Poutanen. 2017. Content-based neighbor models for cold start in recommender systems. In Proceedings of the Recommender Systems Challenge 2017. 1–6.
[130] Clarice Wang, Kathryn Wang, Andrew Bian, Rashidul Islam, Kamrun Naher Keya, James Foulds, and Shimei Pan. 2022. Do Humans Prefer Debiased AI Algorithms? A Case Study in Career Recommendation. In 27th International Conference on Intelligent User Interfaces. 134–147.
[131] Xiaowei Wang, Zhenhong Jiang, and Lingxi Peng. 2021. A Deep-Learning-Inspired Person-Job Matching Model Based on Sentence Vectors and Subject-Term Graphs. Complexity 2021 (2021).
[132] Yusen Wang, Kaize Shi, and Zhendong Niu. 2020. A Session-based Job Recommendation System Combining Area Knowledge and Interest Graph Neural Networks. In SEKE. 489–492.
[133] Wenming Xiao, Xiao Xu, Kang Liang, Junkang Mao, and Jun Wang. 2016. Job recommendation with Hawkes process: an effective solution for RecSys Challenge 2016. In Proceedings of the Recommender Systems Challenge. 1–4.
[134] Peng Xu and Denilson Barbosa. 2018. Matching résumés to job descriptions with stacked models. In Canadian Conference on Artificial Intelligence. Springer, 304–309.
[135] Murat Yagci and Fikret Gurgen. 2017. A ranker ensemble for multi-objective job recommendation in an item cold start setting. In Proceedings of the Recommender Systems Challenge 2017. 1–4.
[136] Rui Yan, Ran Le, Yang Song, Tao Zhang, Xiangliang Zhang, and Dongyan Zhao. 2019. Interview choice reveals your preference on the market: To improve job-resume matching through profiling memories. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 914–922.
[137] Shuo Yang, Mohammed Korayem, Khalifeh AlJadda, Trey Grainger, and Sriraam Natarajan. 2017. Combining content-based and collaborative filtering for job recommendation system: A cost-sensitive Statistical Relational Learning approach. Knowledge-Based Systems 136 (2017), 37–45.
[138] Chenrui Zhang and Xueqi Cheng. 2016. An ensemble method for job recommender systems. In Proceedings of the Recommender Systems Challenge. 1–4.
[139] XianXing Zhang, Yitong Zhou, Yiming Ma, Bee-Chung Chen, Liang Zhang, and Deepak Agarwal. 2016. GLMix: Generalized linear mixed models for large-scale response prediction.
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 363–372.
[140] Yunchong Zhang, Baisong Liu, Jiangbo Qian, Jiangcheng Qin, Xueyuan Zhang, and Xueyong Jiang. 2021. An Explainable Person-Job Fit Model Incorporating Structured Information. In 2021 IEEE International Conference on Big Data (Big Data). IEEE, 3571–3579.
[141] Jing Zhao, Jingya Wang, Madhav Sigdel, Bopeng Zhang, Phuong Hoang, Mengshu Liu, and Mohammed Korayem. 2021. Embedding-based Recommender System for Job to Candidate Matching on Scale. arXiv preprint arXiv:2107.00221 (2021).
[142] Tianhua Zhao, Cheng Wuyu, and Chen Zhixiang. 2021. Summer Job Selection Model Based on Job Matching and Comprehensive Evaluation Algorithm. In 2021 2nd International Conference on Artificial Intelligence and Information Systems. 1–5.
[143] Chen Zhu, Hengshu Zhu, Hui Xiong, Chao Ma, Fang Xie, Pengliang Ding, and Pan Li. 2018. Person-job fit: Adapting the right talent for the right job with joint representation learning. ACM Transactions on Management Information Systems (TMIS) 9, 3 (2018), 1–17.
[144] Dávid Zibriczky. 2016. A combination of simple models by forward predictor selection for job recommendation. In Proceedings of the Recommender Systems Challenge. 1–4.

A SUPPLEMENTARY MATERIALS

Table 1 gives an overview of all the papers that have been collected with the literature search methodology in Section 1.2.

Table 1. An overview of e-recruitment recommendation systems is presented. Regarding the recommended entities, although some papers could be reciprocal in design, we did not report them as reciprocal since they did not claim to be reciprocal and also they only experimented with the job or job seeker recommendation task. The methods cover a broad range of content based (CB), collaborative filtering (CF), knowledge based (KB), and hybrid/other methods. Some papers focus on preprocessing, postprocessing or re-ranking, and do not mention the recommendation method type in detail.
Hence, we also do not report the recommendation method type for those papers. The papers are sorted based on their publication year.

Columns, in order: Paper | Year | Recommended entities (Job, Job seeker, Reciprocal) | Method (CB, CF, KB, Hybrid/Other) | Challenge (3.2 Data quality; 3.3 Heterogeneous data, multiple interaction types and data sources; 3.4 Cold start; 3.5 User preferences as well as suitability; 3.6 Interpretability and explainability; 3.7 Specific objectives; 3.8 Bias and fairness; 3.9 Large scale). A filled circle (●) marks that the column applies to the paper; an open circle (○) marks that it does not.

[20] 2012 | ○○● | ○○○● | ○●○○○●○○
[45] 2012 | ○●○ | ●○○○ | ○●○○○○○○
[69] 2013 | ●○○ | ○○○● | ○○○○○○○○
[101] 2013 | ○○● | ○○○● | ○○○○○●○○
[92] 2013 | ○○● | ○○○● | ○●○○○●○○
[37] 2013 | ●○○ | ●○○○ | ●○○○○○○○
[68] 2013 | ●○○ | ○○○● | ○○●○○○○○
[60] 2014 | ●○○ | ○○○● | ○○●●○○○○
[38] 2014 | ●○○ | ○○○● | ●●○○○○○○
[36] 2014 | ●○○ | ○○○● | ●○○○○○○○
[95] 2014 | ●○○ | ●○○○ | ○●○○○○○○
[66] 2014 | ●○○ | ●○○○ | ○○○○○○○○
[44] 2014 | ○●○ | ○○○● | ●●○○○○○○
[7] 2014 | ○●○ | ●○○○ | ○○○○○○○○
[6] 2014 | ○●○ | ●○○○ | ○○○○○○○○
[8] 2015 | ○○● | ●○○○ | ○○○○○●○○
[31] 2015 | ○○● | ○○○● | ○●○○○●○○
[63] 2015 | ○○● | ○○●○ | ●○○○○●○○
[117] 2015 | ○○● | ○○○● | ○○○○○●○○
[30] 2015 | ●○○ | ●○○○ | ○○○○○○○○
[88] 2016 | ●○○ | ○○○● | ○○○●○○○○
[138] 2016 | ●○○ | ○○○● | ○●●○○○○○
[90] 2016 | ●○○ | ○○○● | ○●●○○○○○
[39] 2016 | ●○○ | ●○○○ | ○○○○○○○○
[26] 2016 | ●○○ | ○○○● | ○○○○○○○○
[83] 2016 | ●○○ | ○○○● | ○○○○○○○○
[102] 2016 | ●○○ | ○○○● | ○○○○○○○○
[105] 2016 | ●○○ | ○○○● | ○○○○○○○○
[34] 2016 | ●○○ | ○○○● | ○○○○○○○○
[108] 2016 | ●○○ | ○○○● | ○○○○○○○○
[133] 2016 | ●○○ | ○○○● | ○○○○○○○○
[144] 2016 | ●○○ | ○○○● | ○○○○○○○○
[82] 2016 | ●○○ | ○○○● | ●○●○○○○○
[59] 2016 | ●○○ | ○○○● | ●●○○○○○○
[111] 2016 | ○○● | ○○○● | ○○○○○●○○
[120] 2016 | ●○○ | ○○○● | ●○●○○○○○
[87] 2016 | ●○○ | ○○○● | ○○○○○○○○
[21] 2016 | ●○○ | ●○○○ | ○○○○○○○●
[139] 2016 | ●○○ | ○○○● | ○○○○○○○●
[5] 2017 | ●○○ | ○○○● | ○○○○○○○○
[122] 2017 | ●○○ | ○○○● | ●○●○○○○●
[113] 2017 | ●○○ | ○●○○ | ○○○○○○○○
[91] 2017 | ●○○ | ○○○● | ○●●○○○○○
[137] 2017 | ●○○ | ○○○● | ○○●○○●○○
[11] 2017 | ●○○ | ●○○○ | ●○○○○○○○
[75] 2017 | ●○○ | ●○○○ | ○○○○○○○○
[10] 2017 | ○○● | ●○○○ | ○○○○○●○○
[27] 2017 | ○○● | ●○○○ | ○●○○○●○○
[121] 2017 | ●○○ | ○○○● | ○○●○○○○○
[40] 2017 | ●○○ | ○○○● | ●○○○○○○●
[135] 2017 | ○○● | ○○○● | ○○●○○●○○
[119] 2017 | ○○● | ●○○○ | ○○●○○●○○
[107] 2017 | ●○○ | ○○○● | ○○○○○○○○
[58] 2017 | ○○● | ●○○○ | ○○●○○●○○
[86] 2017 | ○○● | ●○○○ | ○○●○○●○○
[15] 2017 | ○○● | ○○○● | ○○●○○●○○
[129] 2017 | ○○● | ○○○● | ○●●○○●○○
[22] 2017 | ●○○ | ○○○○ | ○○○○○●○○
[67] 2018 | ●○○ | ○○○● | ○○○○○○○○
[43] 2018 | ●○○ | ●○○○ | ●○○○○○○○
[56] 2018 | ●○○ | ○○○● | ○○○○○○○○
[128] 2018 | ●○○ | ●○○○ | ●○○○○○○○
[33] 2018 | ●○○ | ○○○● | ○○○○○○○○
[97] 2018 | ●○○ | ●○○○ | ○○○○●○○○
[134] 2018 | ○●○ | ○○○● | ●○○○○○○○
[94] 2018 | ○○● | ●○○○ | ○○○○○●○○
[143] 2018 | ○○● | ●○○○ | ○○○○●●○○
[109] 2018 | ○○● | ●○○○ | ○○○○●●○○
[29] 2018 | ●●○ | ○○○● | ●○●○○○○●
[114] 2019 | ●○○ | ○○○● | ○○○○○○○○
[124] 2019 | ●○○ | ○○●○ | ●○○○○○○○
[103] 2019 | ●○○ | ●○○○ | ●○○○○○○○
[104] 2019 | ●○○ | ○○○● | ○○●●○○○○
[61] 2019 | ●○○ | ○○●○ | ○○○●●○○○
[73] 2019 | ●○○ | ○○○● | ○○○○○○○○
[28] 2019 | ●○○ | ○●○○ | ○○○○○○●○
[89] 2019 | ○○● | ○○○● | ○○○○○●○○
[78] 2019 | ●○○ | ●○○○ | ○○○○○○○○
[115] 2019 | ○●○ | ●○○○ | ●●○○○○○○
[47] 2019 | ○●○ | ●○○○ | ○○○○○○○○
[136] 2019 | ○○● | ○○○● | ○○○○○●○○
[96] 2019 | ○○● | ○○●○ | ●○○○○●○○
[81] 2019 | ○○● | ○○○● | ○○○○●●○○
[14] 2019 | ○○● | ●○○○ | ○○○○○●○○
[93] 2019 | ○○○ | ●○○○ | ○●○○○●○○
[53] 2019 | ○●○ | ○○○○ | ○○○○○○●○
[100] 2020 | ●○○ | ○○○● | ○○○○○○○●
[79] 2020 | ●○○ | ○○○● | ○○○○○○○○
[57] 2020 | ●○○ | ●○○○ | ●○○○○○○○
[12] 2020 | ●○○ | ●○○○ | ●○○●○○○○
[132] 2020 | ●○○ | ○○○● | ○○○○○○○○
[77] 2020 | ○○● | ●○○○ | ●○○○○●○○
[84] 2020 | ○○● | ●○○○ | ○○○○○●○○
[19] 2020 | ○●○ | ●○○○ | ○○○○○○○○
[48] 2020 | ○○● | ○○○● | ○○○○○●○○
[110] 2020 | ○○● | ○○○● | ○○○○●●○○
[72] 2020 | ○○● | ●○○○ | ○○○○○●○○
[23] 2020 | ○●○ | ●○○○ | ●○○○○○○●
[116] 2020 | ○●○ | ●○○○ | ●○○○○○○○
[123] 2020 | ●○○ | ●○○○ | ●○○○○○○○
[13] 2020 | ○○● | ○○○● | ○○○○○●○○
[46] 2021 | ●○○ | ○○○● | ○○○○○○○○
[17] 2021 | ●○○ | ○○○● | ○○○●○○○○
[16] 2021 | ●○○ | ○○○● | ○○○●○○○○
[99] 2021 | ○●○ | ○○○● | ●○○○●○○○
[127] 2021 | ●○○ | ○○○● | ○○○○●○○○
[142] 2021 | ●○○ | ●○○○ | ○○○○○○○○
[18] 2021 | ○○● | ○○○● | ○○○●○●○○
[80] 2021 | ○○● | ●○○○ | ●○○○○●○○
[131] 2021 | ○○● | ○○○● | ○○○○○●○○
[62] 2021 | ○○● | ○○○● | ●○○○○●○○
[42] 2021 | ○○● | ●○○○ | ○●○○○●○○
[126] 2021 | ○○● | ●○○○ | ●○○○○●○○
[50] 2021 | ○○● | ○○○● | ○●○●○●○○
[141] 2021 | ○○● | ○○○● | ○●○○○●○●
[140] 2021 | ○○● | ●○○○ | ○○○○●●○○
[98] 2021 | ○○● | ●○○○ | ●○○○●●○○
[125] 2021 | ○○● | ●○○○ | ○○○●○○○○
[9] 2021 | ●○○ | ●○○○ | ●○○○○○○○
[64] 2021 | ○○● | ●○○○ | ○●○○○●○○
[65] 2021 | ○○● | ●○○○ | ○●○○○●○○
[25] 2021 | ○○● | ○○○○ | ○○○○○●○○
[70] 2021 | ●○○ | ○●○○ | ○●○○○○●○",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "JqTBoBKvSzL",
"year": null,
"venue": "CoRR 2023",
"pdf_link": "http://arxiv.org/pdf/2309.15140v1",
"forum_link": "https://openreview.net/forum?id=JqTBoBKvSzL",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Review on AI Algorithms for Energy Management in E-Mobility Services",
"authors": [
"Sen Yan",
"Maqsood Hussain Shah",
"Ji Li",
"Noel E. O'Connor",
"Mingming Liu"
],
"abstract": "E-mobility, or electric mobility, has emerged as a pivotal solution to address pressing environmental and sustainability concerns in the transportation sector. The depletion of fossil fuels, escalating greenhouse gas emissions, and the imperative to combat climate change underscore the significance of transitioning to electric vehicles (EVs). This paper seeks to explore the potential of artificial intelligence (AI) in addressing various challenges related to effective energy management in e-mobility systems (EMS). These challenges encompass critical factors such as range anxiety, charge rate optimization, and the longevity of energy storage in EVs. By analyzing existing literature, we delve into the role that AI can play in tackling these challenges and enabling efficient energy management in EMS. Our objectives are twofold: to provide an overview of the current state-of-the-art in this research domain and propose effective avenues for future investigations. Through this analysis, we aim to contribute to the advancement of sustainable and efficient e-mobility solutions, shaping a greener and more sustainable future for transportation.",
"keywords": [],
"raw_extracted_content": "A Review on AI Algorithms for Energy\nManagement in E-Mobility Services\nSen Yan, Maqsood Hussain Shah, Ji Li, Noel O’Connor and Mingming Liu\nAbstract —E-mobility, or electric mobility, has emerged as\na pivotal solution to address pressing environmental and sus-\ntainability concerns in the transportation sector. The depletion\nof fossil fuels, escalating greenhouse gas emissions, and the\nimperative to combat climate change underscore the significance\nof transitioning to electric vehicles (EVs). This paper seeks to\nexplore the potential of artificial intelligence (AI) in addressing\nvarious challenges related to effective energy management in\ne-mobility systems (EMS). These challenges encompass critical\nfactors such as range anxiety, charge rate optimization, and\nthe longevity of energy storage in EVs. By analyzing existing\nliterature, we delve into the role that AI can play in tackling these\nchallenges and enabling efficient energy management in EMS.\nOur objectives are twofold: to provide an overview of the current\nstate-of-the-art in this research domain and propose effective\navenues for future investigations. Through this analysis, we aim\nto contribute to the advancement of sustainable and efficient e-\nmobility solutions, shaping a greener and more sustainable future\nfor transportation.\nIndex Terms —Electric Mobility, Energy Management, Energy\nConsumption Estimation, Artificial Intelligent, Machine Learning\nLIST OF ABBREVIATIONS & A CRONYMS\nAI Artificial Intelligence\nANN Artificial Neural Networks\nCNN Convolutional Neural Networks\nDDPG Deep Deterministic Policy Gradient\nDL Deep Learning\nDNN Deep Neural Networks\nDT Decision Tree\nEMS Electric Mobility Service\nEV Electric Vehicle\nHEV Hybrid Electric Vehicle\nkNN k-Nearest Neighbor\nLGBM Light Gradient Boosting Machine\nLR Linear Regression\nLSTM Long Short-Term Memory\nML Machine Learning\nMLR Multiple Linear Regression\nPHEV Plug-in Hybrid Electric Vehicle\nPMP Pontryagin’s Minimum Principle\nRF Random Forest\nS. Yan ( [email protected] ), N. O’Connor ( [email protected] ) and\nM. Liu ( [email protected] ) are with the School of Electronic Engineering\nand SFI Insight Centre for Data Analytics at Dublin City University, Ireland.\nM. H. Shah ( [email protected] ) is with the Nanjing University of\nAeronautics and Astronautics, China. J. Li ( [email protected] ) is with the\nDepartment of Mechanical Engineering, University of Birmingham, UK. This\nwork is supported by Science Foundation Ireland under Grant No. 21/FFP-\nP/10266 andSFI/12/RC/2289 P2.RL Reinforcement Learning\nRNN Recurrent Neural Networks\nSoC State of Charge\nSVM Support Vector Machine\nSVR Support Vector Regression\nXGB eXtreme Gradient Boosting\nI. I NTRODUCTION\nElectric Mobility Service (EMS) refers to the use of electric-\npowered vehicles, including E-bikes, E-scooters, Hybrid Elec-\ntric Vehicle (HEV), and Plug-in Hybrid Electric Vehicle\n(PHEV), for transportation needs. EMS has rapidly trans-\nformed the transportation landscape, offering sustainable alter-\nnatives to traditional combustion engine vehicles. These Elec-\ntric Vehicle (EV)s not only address environmental concerns\nbut also contribute to the development of an interconnected\ntransportation ecosystem, advancing intelligent transportation\nsystems (ITS). 
By embracing EMS, we promote a future where interconnected vehicles, advanced data analytics, and smart infrastructure combine to create a safer, more efficient, and sustainable transportation network. Energy management is crucial in EMS to ensure the efficient operation of electric vehicles and their charging infrastructure. It involves controlling and optimizing energy flow to meet specific requirements. Three key concerns in EMS energy management include ensuring a reliable range (often referred to as range anxiety), optimizing charging rates, and maximizing energy storage lifespan. Achieving this requires coordinating electrical energy resources like charging stations, renewable energy sources, and energy storage systems to facilitate electric vehicle charging.
Effective energy management is crucial for multiple reasons. One important aspect is ensuring the availability of charging infrastructure to meet the rising demand for electric vehicle charging [1]. As the number of EVs continues to grow, the charging load on the power grid can become substantial. Therefore, meticulous management is essential to prevent overloading the grid and potential blackouts. Another key benefit of energy management is optimizing the utilization of energy resources, minimizing wastage, and maximizing the efficiency of the charging process. This not only helps reduce operational costs but also enhances the overall sustainability of EMS systems. Additionally, energy management facilitates grid integration and empowers EVs to contribute to the grid by providing ancillary services or participating in vehicle-to-grid systems, thus strengthening the grid's stability and responsiveness.
Artificial Intelligence (AI) technologies offer a transformative solution to the limitations of traditional energy management techniques in EMS [2]. Conventional methods, which are primarily based on predetermined charging schedules and basic load balancing algorithms, struggle to meet the dynamic optimization requirements and growing complexity of modern EMS [3]. In contrast, AI leverages advanced algorithms and real-time data analysis to optimize charging strategies intelligently. By adapting to changing conditions, utilizing predictive modeling, and employing multi-objective optimization, AI enables more efficient and effective energy management in EMS, addressing the demand for optimal charging solutions.
The potential of AI in transforming energy management for EMS lies in its computational techniques, including Machine Learning (ML) and Deep Learning (DL). AI algorithms and data-driven approaches enable intelligent systems to adapt to varying conditions, optimize charging operations, predict user behavior, and manage energy resources in real time. AI facilitates dynamic load balancing, efficient energy allocation, and demand-response strategies, resulting in improved charging infrastructure utilization, reduced energy costs, and enhanced grid integration. This paper comprehensively reviews AI technologies and techniques for energy management in EMS, covering energy consumption modeling, estimation, and prediction. It also discusses current challenges and proposes a research roadmap for future advancements. By assessing the state of AI-based energy management, this paper contributes to the development of effective and sustainable EMS solutions.
The paper is organized as follows.
Section II presents the methodology used in our paper, proposes the research questions we plan to investigate, and summarizes and compares other existing surveys. Section III provides an overview of conventional energy management systems, discussing their advantages and limitations. Section IV focuses on AI approaches for energy management, delving into the current state of affairs in this domain. Section V provides some discussion and introduces challenges to AI-based energy management methods. Finally, Section VI offers a brief conclusion summarizing the entire paper and presents future research directions.

II. REVIEW METHODOLOGY
The literature survey process was executed meticulously in five distinct phases, namely planning, literature research, reporting and interpretation of findings, and the synthesis of challenges and potential research directions for the future. This section provides a comprehensive account of the pivotal research questions to be explored and expounds upon the systematic methodology employed in conducting the literature search.

A. Research Questions
This paper aims at answering the following questions in relation to the application of AI methods in EMS:
1) What are the existing AI technologies and techniques used for energy management in EMS?
2) How are AI-based approaches employed in energy consumption modeling, estimation, and prediction in EMS?
3) What are the current challenges and limitations of AI methods in energy management for EMS?
4) What is the future research roadmap for advancements in AI-based energy management for EMS?
5) How does the use and focus of AI approaches vary among different EMS?

B. Literature Retrieval
We conducted a systematic search of peer-reviewed research publications to collect studies that employed AI approaches to address issues related to energy management in EMS. Our screening process involved a thorough review of the literature to identify papers that addressed the structural challenges of EMS energy management and utilized AI methods. We utilized reputable online databases, including Google Scholar, ACM Digital Library, Springer, MDPI, IEEE, and Science Direct, which index a wide range of computer science and technology research, to ensure comprehensive coverage of relevant studies.
The literature search process was conducted using a set of specific keywords, including "energy management", "electric mobility service", "machine learning", "EV", "e-bike", "e-scooter" and "energy consumption prediction". Only research papers written in English were included in the search. As a result of this comprehensive search, a total of approximately 30 papers were retrieved for review. Among these papers, 1 of them specifically focused on E-scooters [4], 1 paper focused on E-bikes [5], and the remaining papers centered on EVs. The results show that few existing survey papers in the literature focused on the applications of AI methods for energy management in E-micromobility systems, such as E-bikes and E-scooters. All selected papers for review are relevant in our context, which highlights the applicability of AI-based approaches in dealing with energy consumption prediction problems in different aspects.

C. Existing Survey
In this section, we provide a comprehensive summary and comparison of existing surveys pertaining to energy management, e.g., the estimation of battery State of Charge (SoC), in EMS.
It is evident that the field commonly accepts the use of three main categories of estimation approaches: electrochemical models, equivalent circuit models, and data-driven models. However, in recent years (2019 to 2022), there has been a notable emphasis on "data-driven methods" (such as AI approaches) and "connected environments" in the future-direction sections of these surveys. This highlights the growing importance and attention given by researchers to these areas in energy management systems.
Given this context, our work primarily aims to summarize the various modeling, estimation, and prediction approaches utilized in this domain. The objective is to provide readers with a concise understanding of the available models or algorithms and offer suggestions for their appropriate selection based on different cases and scenarios. By offering this overview, we aim to assist researchers and practitioners in making informed decisions regarding the most suitable approaches for their specific energy management requirements within EMS.

III. CONVENTIONAL APPROACHES
Based on the literature, conventional energy management methods for EVs (HEVs or PHEVs) can be classified into two main categories: rule-based and optimization-based. A brief summary of the advantages and limitations of conventional methods is provided in Table II, and Fig. 1 shows the hierarchical categorization of classical energy management systems.
Rule-based methods have been widely employed in early HEVs due to their simplicity and feasibility [12]. These methods focus on coordinating the operation of the internal combustion engine to improve fuel economy and emission performance by transferring the working points of the engine from low- to high-efficiency zones [12]. Deterministic rule-based methods utilize heuristics, intuition, and mathematical models to develop control rules based on prior knowledge of the driving cycle [13]. Fuzzy rule-based EMS, on the other hand, incorporates fuzzy logic control to enhance adaptability and robustness [14].
Optimization-based methods are categorized as global and real-time optimization methods. Various global optimization methods have been employed, including dynamic programming, Pontryagin's Minimum Principle (PMP), Evolutionary Algorithms, and Game Theory. Dynamic programming breaks down the decision process into discrete steps and has been used to solve the optimization problem of multi-step decision processes [15]. PMP finds optimal control signals for time-varying non-linear systems subject to constraints [16]. Evolutionary Algorithms encompass swarm-based algorithms such as Particle Swarm Optimization and Genetic Algorithms [17]. Game Theory treats the energy management problem as a game among decision-makers [18].
Real-time optimization methods aim at minimizing energy consumption dynamically and include methods such as the Equivalent Consumption Minimization Strategy (ECMS) and Model Predictive Control. ECMS converts electric energy into equivalent fuel consumption, allowing for compromise optimization of the vehicle's dynamic performance, fuel economy, and emission performance [19]. Model Predictive Control utilizes a prediction horizon and rolling optimization to determine optimal control actions in real time [20].
As the complexity of energy management systems continues to rise, conventional approaches are being surpassed by more advanced AI methods, offering enhanced energy management capabilities.
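To make the rule-based idea concrete, the following is a minimal illustrative sketch of a deterministic rule-based power split for a hybrid powertrain. It is not drawn from any of the cited systems; the SoC window and power thresholds are hypothetical placeholders:

def rule_based_power_split(p_demand_kw: float, soc: float) -> dict:
    """Toy deterministic rule-based controller for an HEV power split.

    Thresholds are illustrative placeholders, not values from the
    literature reviewed above.
    """
    SOC_MIN, SOC_MAX = 0.3, 0.8      # assumed charge-sustaining SoC window
    P_ENGINE_EFF_KW = 25.0           # demand above which the engine is assumed efficient
    P_CHARGE_KW = 5.0                # recharge power drawn from the engine

    if soc < SOC_MIN:
        # Battery depleted: engine covers demand and recharges the pack.
        return {"engine_kw": p_demand_kw + P_CHARGE_KW, "battery_kw": -P_CHARGE_KW}
    if soc > SOC_MAX or p_demand_kw < P_ENGINE_EFF_KW:
        # Battery full or low demand: drive electric-only, keeping the
        # engine out of its low-efficiency region.
        return {"engine_kw": 0.0, "battery_kw": p_demand_kw}
    # Otherwise run the engine near its efficient point; battery supplies the rest.
    engine_kw = min(p_demand_kw, P_ENGINE_EFF_KW)
    return {"engine_kw": engine_kw, "battery_kw": p_demand_kw - engine_kw}

print(rule_based_power_split(p_demand_kw=30.0, soc=0.55))

Fixed thresholds of this kind are precisely what limits adaptability, which is the gap the optimization-based methods above, and the AI methods discussed next, try to close.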
IV. ARTIFICIAL INTELLIGENCE APPROACHES

In the realm of energy management for EMS, AI-based approaches emerge as markedly superior to conventional methods [2]. This distinction arises from the inherent attributes of AI algorithms: dynamic adaptability, data-driven precision, and the capacity for continuous learning. Consequently, these algorithms can process substantial amounts of real-time data and promptly adapt to changing circumstances, leading to effective energy consumption optimization [21]. In the following, we review AI strategies deployed for EMS energy management from two vantage points: traditional ML methods and DL methods.

A. Traditional Machine Learning Methods

ML methods leverage the inherent patterns present in the data to facilitate learning and adaptation, thereby enabling accurate predictions on previously unseen data. Various ML approaches, including Linear Regression (LR) [22], Multiple Linear Regression (MLR) [23]–[25], Support Vector Machines (SVM) or Support Vector Regression (SVR) [25]–[27], Decision Trees (DT) [25], [26], Random Forests (RF) [4], [26], [28], eXtreme Gradient Boosting (XGB) [1], [23], the Light Gradient Boosting Machine (LGBM) [23], k-Nearest Neighbors (kNN) [4], [26], [28], and Artificial Neural Networks (ANN) [23], [25]–[27], have been widely employed to address the challenges of energy consumption modeling and prediction for EMS. A brief summary of the advantages and limitations of traditional ML methods is provided in Table III.

TABLE III. SUMMARY OF TRADITIONAL ML METHODS IN EMS ENERGY MANAGEMENT.

| Method | Advantages | Limitations |
|---|---|---|
| LR & MLR | Ease of understanding and interpretability; high computational efficiency; capability to handle multiple features; excellent performance on small datasets | Limitation to linear relationships; assumption of data normality; sensitivity to outliers; issue of multicollinearity |
| SVM & SVR | Ability to handle linear/non-linear classification problems; effectiveness even with high feature dimensions | Complexity of computation for large-scale data; need for parameter tuning; sensitivity to missing data |
| DT & RF | Ease of understanding and interpretation; ability to handle numerical and categorical data; minimal need for data preprocessing | Propensity of DT to overfit; requirement for substantial computational resources for RF |
| XGB | High accuracy; prevention of computational resource waste; built-in handling of missing values; regularization parameters to prevent overfitting | Potential for long training times; sensitivity to parameter selection; potentially extensive time for parameter tuning |
| LGBM | Fast training speed; low memory usage; high accuracy; capability to handle large-scale data | Sensitivity to parameter selection; potentially extensive time for parameter tuning |
| kNN | Simplicity and ease of understanding; insensitivity to outliers; no assumptions on the input data | Large computational requirement; need for substantial memory; poor performance on imbalanced samples |
| ANN | Capability to handle complex non-linear relationships; strong ability to process large-scale and high-dimensional data | Need for large amounts of training data; long training times; poor interpretability of the model |

These ML algorithms and models have been applied individually and compared in various case studies. For example, MLR, DT, SVM, and several neural-network-based models were applied to data collected from electric buses in [25]. Furthermore, in [23], the authors employed MLR, ANN, XGB, and LGBM to predict EV energy consumption using a dataset collected in Japan; XGB and LGBM outperformed the other selected algorithms, achieving a lower mean absolute error.

On the other hand, combining ML models can improve performance. For instance, by combining DT, RF, and kNN, the authors of [26] designed a new method named Ensemble Stacked Generalization to predict the energy consumption of EVs and evaluated its performance on the same Japanese dataset. Despite longer running times, the proposed method outperformed the baselines (i.e., DT, RF, and kNN). The authors therefore concluded that adopting stacking techniques can enhance the accuracy of predictive models for EV energy consumption.
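As a hedged illustration of the comparison-and-stacking workflow described above, the sketch below trains several of the listed regressors on synthetic trip features and compares their mean absolute error. The data, feature choices, and target function are invented stand-ins, not the Japanese dataset used in [23] and [26]; only scikit-learn estimators are used.

```python
# Sketch of the model-comparison and stacking workflow on synthetic
# data (NOT the EV dataset from [23]/[26]); features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Assumed trip features: distance (km), mean speed (km/h), ambient T (C).
X = rng.uniform([1, 10, -10], [80, 110, 35], size=(2000, 3))
# Toy energy-consumption target (kWh): mildly nonlinear plus noise.
y = (0.15 * X[:, 0] + 0.002 * X[:, 1] ** 1.5
     + 0.05 * np.abs(X[:, 2]) + rng.normal(0, 0.5, 2000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("dt", DecisionTreeRegressor(max_depth=8)),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=7))]
models = dict(base)
models["mlr"] = LinearRegression()
# Stacked generalization over the three base learners, as in [26].
models["stack"] = StackingRegressor(estimators=base,
                                    final_estimator=LinearRegression())

for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name:6s} MAE = {mae:.3f} kWh")
```

On real data, [26] reports the same qualitative trade-off: the stacked model buys lower error at the cost of longer running time.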
B. Deep Learning Methods

DL methods, a subset of ML, can extract intricate patterns from transportation data. In contrast to conventional ML models such as RF and SVM, DL models leverage neural network architectures with multiple hidden layers to capture intricate relationships within big traffic data. These DL models excel at learning high-level representations of the data, surpassing the limitations of human-designed features [29].

To illustrate the significance of DL techniques, consider a representative scenario involving dynamic range optimization in electric fleet management. Traditional methods for estimating the remaining operational range in EV fleet management often rely on simplistic rules that struggle to accommodate real-world factors like traffic patterns and atmospheric conditions [30]. In contrast, DL methods offer a more advanced solution, leveraging their capacity to assimilate diverse features such as GPS data, weather conditions, and driver behaviors. This synthesis of data enables real-time adaptation of EV range predictions [31]. Moreover, DL can not only predict range more accurately and optimize it by suggesting efficient routes and driving modes, but also personalize estimates for each driver, continuously improving prediction performance as data is collected.
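As a minimal sketch of this feature-fusion idea, the example below fuses invented GPS-, weather-, and driver-derived features into one vector and regresses the remaining range with a small neural network; the features, the toy ground-truth range function, and all parameter values are illustrative assumptions, not taken from [30] or [31].

```python
# Sketch of feature fusion for range prediction on synthetic data.
# All features and the ground-truth function are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5000
soc = rng.uniform(0.1, 1.0, n)            # battery state of charge
speed = rng.uniform(20, 120, n)           # GPS-derived mean speed (km/h)
temp = rng.uniform(-10, 35, n)            # weather: ambient temp (C)
aggression = rng.uniform(0, 1, n)         # driver-behavior score
X = np.column_stack([soc, speed, temp, aggression])

# Toy ground truth: a nominal 300 km range scaled by SoC and penalized
# by high speed, cold weather, and aggressive driving, plus noise.
range_km = (300 * soc
            * (1 - 0.002 * (speed - 60).clip(0))
            * (1 - 0.004 * (15 - temp).clip(0))
            * (1 - 0.2 * aggression)) + rng.normal(0, 5, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=500, random_state=0))
model.fit(X, range_km)
# Same SoC, two different contexts -> two different range estimates.
print(model.predict([[0.6, 100, -5, 0.9],
                     [0.6, 50, 20, 0.1]]))
```

The point of the sketch is the interface, one fused feature vector per prediction, rather than the particular network; a production system would add route, traffic, and historical per-driver features to the same vector.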
To provide specific examples of DL models from the literature: the researchers in [32] employed Deep Neural Network (DNN) models of various depths to estimate the SoC of EV batteries, using open-circuit voltage measurements at different ambient temperatures as input variables. Moreover, Recurrent Neural Network (RNN) models, including the Long Short-Term Memory (LSTM) variant, have found extensive use in EMS due to their aptitude for capturing temporal dependencies in data. As presented in [33], an LSTM was applied to predict multiple targets, e.g., voltage, temperature, and SoC, from time-series data, showing precise online prediction and robustness.

Furthermore, Convolutional Neural Network (CNN) models have found application in similar research by converting time-series data into image representations. In particular, [34] used the Gramian Angular Field approach to convert time-series data into images, which were then fed into a CNN model to estimate EV energy consumption. The CNN model's performance was evaluated against baseline models such as ANN and MLR.

In the context of reinforcement learning (RL), [6] provided a comprehensive overview of various RL methods. Two notable methods are the Deep Deterministic Policy Gradient (DDPG) and a novel approach combining deep RL with the PMP method. In [35], the authors introduced a DDPG-based car-following model designed for connected and autonomous EVs, aimed at mitigating the traffic fluctuations caused by human drivers, known as stop-and-go traffic waves, while optimizing electrical energy consumption. Moreover, [21] developed an energy management system that integrates deep RL with the PMP algorithm, demonstrating substantial performance improvements over traditional PMP-based energy management systems.

A brief summary of the advantages and limitations of DL methods is provided in Table IV below.

TABLE IV. SUMMARY OF DL METHODS IN EMS ENERGY MANAGEMENT.

| Method | Advantages | Limitations |
|---|---|---|
| DNN | Capability to handle complex non-linear relationships; strength in processing high-dimensional unstructured data; feature extraction and learning through hidden layers | Requirement for large amounts of training data; lower interpretability than some simpler ML models; high model complexity, requiring long training time |
| LSTM | Capability to handle long sequence data; resolution of the gradient issue in RNNs | Requirement for large amounts of training data; high model complexity, requiring long training time |
| CNN | Suitability for handling high-dimensional input data; ability to automatically detect important features | Requirement for large amounts of training data; risk of overfitting; requirement for approaches to convert time series to images |
| DDPG | Ability to handle high-dimensional and continuous action spaces; capability to directly learn a policy; capability to stabilize the learning process | Requirement for large amounts of data and training time; sensitivity to noise and outliers |
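To ground the LSTM pattern from [33] in code, here is a minimal PyTorch sketch that maps a window of past battery measurements to next-step predictions for several targets (voltage, temperature, SoC). The layer sizes, window length, and random stand-in data are illustrative assumptions, not the architecture from the cited paper.

```python
# Minimal PyTorch sketch of multi-target battery prediction with an
# LSTM, in the spirit of [33]. Shapes and sizes are assumptions.
import torch
import torch.nn as nn

class BatteryLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=64, n_targets=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from last time step

model = BatteryLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One toy training step on random data standing in for logged series:
# 32 windows of 50 time steps, each with 3 measured channels.
x = torch.randn(32, 50, 3)
y = torch.randn(32, 3)                     # next-step targets
loss = loss_fn(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```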
V. DISCUSSION & CHALLENGES

The future of energy consumption modeling, estimation, and prediction in e-mobility presents a multitude of challenges that demand careful attention and innovative solutions. These challenges span several dimensions: data availability, model complexity, real-time prediction capabilities, integration with renewable energy sources, and the management of uncertainty and risk factors.

A. Data Availability

The availability of high-quality, real-world data plays a pivotal role in e-mobility energy consumption modeling. On the one hand, achieving convergence of either an ML model or a control policy is a protracted undertaking that requires data from numerous iterative simulations [36], [37]. On the other hand, many researchers have predominantly employed small-scale datasets derived from conventional standardized driving cycles or constrained real-world driving scenarios to train predictive models, which potentially compromises the precision of these models when applied to authentic real-world driving contexts [37], [38]. It is therefore imperative to acquire diverse and comprehensive datasets that cover a wide range of driving conditions, vehicle types (addressing the extreme imbalance among E-bike, E-scooter, and EV systems), and user behaviors, so as to ensure the accuracy and reliability of the models and predictions. Efforts should be made to collect large-scale datasets with high granularity, including vehicle parameters (e.g., battery capacity and efficiency), driving patterns (e.g., speed profiles and acceleration patterns), and environmental factors (e.g., temperature and road conditions). Collaboration among researchers, industry partners, and policymakers can help overcome data accessibility and privacy concerns, allowing for the development of robust models.

B. Model Complexity & Interpretability

With the increasing complexity of e-mobility systems, the models employed for energy consumption estimation and prediction must be able to capture the dynamic nature and intricate interactions within the system. These models must be designed to be scalable, capable of handling large-scale deployments of EVs, and adaptable to diverse vehicle types, environmental factors, and user preferences.

Advanced AI methodologies, including DL and RL, offer avenues for creating intricate models adept at capturing the nuanced interdependencies and nonlinear dynamics inherent to the system. Nevertheless, the elevated intricacy of these models will likely result in computational requirements that surpass the capabilities of the electronic control unit embedded within an operational vehicle powertrain, particularly when the model is used as an online controller [36], [37], [39].

Moreover, the interpretability of intricate models is relatively low compared to physics-based methods, because data-driven methods rely on black-box models whose internal details are not known [37], [40]. Exploring hybrid models that merge physics-based and data-driven techniques therefore becomes relevant. Such hybrids integrate fundamental EV principles with data-driven methods, capturing real-world intricacies; by balancing accuracy and efficiency, they offer the potential for valuable insights.
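As a sketch of such a hybrid, the example below pairs a textbook longitudinal power balance (the physics part) with a boosted-tree model fitted only to the residual between measurement and physics. The vehicle parameters and the synthetic "measured" signal are illustrative assumptions, not data from the cited studies.

```python
# Hybrid physics + data-driven sketch: a simple longitudinal power
# balance supplies a baseline, and a learned model fits the residual.
# Vehicle parameters and the toy data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

G, RHO = 9.81, 1.2                       # gravity (m/s^2), air density
MASS, CD_A, CRR = 1800.0, 0.6, 0.01      # assumed vehicle parameters

def physics_power_kw(v_ms, accel, grade):
    """Tractive power from a textbook longitudinal force balance."""
    force = (MASS * accel                        # inertia
             + 0.5 * RHO * CD_A * v_ms ** 2      # aerodynamic drag
             + MASS * G * (CRR + grade))         # rolling + gradient
    return force * v_ms / 1000.0

rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(0, 35, 4000),   # speed (m/s)
                     rng.normal(0, 0.5, 4000),   # acceleration (m/s^2)
                     rng.normal(0, 0.03, 4000)]) # road grade
baseline = physics_power_kw(X[:, 0], X[:, 1], X[:, 2])
# Toy "measured" power: physics plus an unmodeled auxiliary load.
measured = (baseline + 1.5 + 0.8 * np.sin(X[:, 0] / 5)
            + rng.normal(0, 0.3, 4000))

residual_model = GradientBoostingRegressor().fit(X, measured - baseline)
hybrid = baseline + residual_model.predict(X)
print("residual RMSE:", np.sqrt(np.mean((measured - hybrid) ** 2)))
```

The physics term keeps the prediction interpretable and roughly right even off-distribution, while the learned residual absorbs effects the force balance ignores, which is exactly the accuracy-versus-interpretability compromise discussed above.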
C. Real-time Prediction

Real-time prediction of energy consumption is crucial for optimizing charging and discharging strategies, managing grid integration, and providing accurate range estimates to EV users. However, achieving real-time predictions while accounting for dynamic factors such as traffic conditions, weather, and user behavior poses a significant challenge.

Real-time prediction models can leverage techniques such as online learning, adaptive control, and model-based reinforcement learning. These approaches enable continuous learning from new data and allow dynamic adaptation to changing conditions. Integration with real-time data sources, such as traffic information, weather forecasts, and vehicle-to-grid communication, can further enhance the accuracy of real-time predictions.

D. Integration with Renewable Energy Sources

Integrating e-mobility systems with renewable energy sources adds complexity to energy modeling and prediction. The intermittent nature of renewable energy and the need to balance supply and demand require precise predictions and optimization. Models must factor in the availability and variability of sources such as solar and wind power, alongside energy consumption patterns. Techniques such as probabilistic forecasting, optimization algorithms, and energy management systems can be employed to optimize renewable energy use, minimize grid strain, and reduce carbon emissions.

E. Uncertainty & Risk Management

Uncertainties tied to factors such as user behavior, charging infrastructure, and battery wear and tear present hurdles to accurately estimating and predicting energy consumption in e-mobility systems [37].

To tackle these uncertainties and their associated risks, we can employ probabilistic models, uncertainty quantification techniques, and risk analysis frameworks. Among these approaches, AI, and particularly RL, stands out as a suitable solution. RL algorithms enable adaptive decision-making from real-time feedback, empowering e-mobility systems to optimize their energy management. For instance, RL can model user behavior to find optimal charging schedules based on preferences and past patterns, address battery wear by optimizing charging and discharging profiles, and make better use of charging infrastructure by allocating resources intelligently. By incorporating RL models, e-mobility systems can effectively handle uncertainties, boost operational efficiency, and ensure reliable performance.
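As a toy illustration of the RL idea just described, the sketch below uses tabular Q-learning to learn when to charge over an eight-hour horizon under a made-up tariff, with a penalty for missing a target SoC. The discretization, prices, learning rates, and reward shaping are all illustrative assumptions, far simpler than the deep RL methods cited earlier.

```python
# Tabular Q-learning sketch for an overnight charging schedule.
# Tariff, horizon, SoC grid, and rewards are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
HOURS, SOC_LEVELS = 8, 11                # horizon, SoC grid (0%..100%)
price = np.array([5, 4, 2, 1, 1, 2, 4, 6], dtype=float)  # toy tariff
Q = np.zeros((HOURS, SOC_LEVELS, 2))     # actions: 0 = idle, 1 = charge

for episode in range(5000):
    soc = 2                              # start each night at 20% SoC
    for t in range(HOURS):
        # epsilon-greedy action selection
        if rng.random() < 0.1:
            a = int(rng.integers(2))
        else:
            a = int(Q[t, soc].argmax())
        nxt = min(soc + a, SOC_LEVELS - 1)
        r = -price[t] * a                # pay the tariff when charging
        if t == HOURS - 1 and nxt < 7:   # missed the 70% target
            r -= 50.0
        target = r if t == HOURS - 1 else r + Q[t + 1, nxt].max()
        Q[t, soc, a] += 0.1 * (target - Q[t, soc, a])
        soc = nxt

# Greedy rollout of the learned policy from 20% SoC.
soc, plan = 2, []
for t in range(HOURS):
    a = int(Q[t, soc].argmax())
    plan.append(a)
    soc = min(soc + a, SOC_LEVELS - 1)
print("charge plan:", plan)  # typically charges in the cheapest hours
```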
In summary, future challenges in e-mobility energy management include data availability and quality, model complexity and scalability, real-time prediction, renewable energy integration, and uncertainty management. Addressing these challenges demands interdisciplinary collaboration, advanced machine learning techniques, solid data infrastructure, and policy support to foster the development of dependable and efficient e-mobility systems.

VI. CONCLUSION & FUTURE DIRECTIONS

In this survey paper, we examined the methods for modeling, estimating, and predicting energy consumption in electric mobility. We categorized these approaches into two main groups, conventional methods and AI-based algorithms, and synthesized insights from relevant surveys in this domain. Our analysis reveals significant progress in understanding energy consumption dynamics. Conventional methods provide foundational insights, traditional machine learning algorithms excel at capturing patterns to make accurate predictions, and deep learning algorithms excel at addressing intricate, non-linear dynamics. However, we also identify several challenges in this research area, including but not limited to the acquisition of diverse, high-quality datasets, model complexity, real-time prediction, the integration of renewable energy sources, and the effective management of uncertainty.

For future work, we recommend further exploration of data-driven methods and real-time data integration to boost accuracy and performance. Additionally, the limited research on micro-mobility modes such as E-bikes and E-scooters highlights the need for a thorough investigation of this domain.

ACKNOWLEDGMENT

This work has emanated from research supported in part by Science Foundation Ireland under Grant Numbers 21/FFP-P/10266 and SFI/12/RC/2289_P2 (Insight SFI Research Centre for Data Analytics), co-funded by the European Regional Development Fund, in collaboration with the SFI Insight Centre for Data Analytics at Dublin City University.

REFERENCES

[1] J. Zhang, Z. Wang, P. Liu, and Z. Zhang, “Energy consumption analysis and prediction of electric vehicles based on real-world driving data,” Applied Energy, vol. 275, p. 115408, Oct. 2020. [Online]. Available: https://doi.org/10.1016/j.apenergy.2020.115408

[2] C. Yang, M. Zha, W. Wang, K. Liu, and C. Xiang, “Efficient energy management strategy for hybrid electric vehicles/plug-in hybrid electric vehicles: review and recent advances under intelligent transportation system,” IET Intelligent Transport Systems, vol. 14, no. 7, pp. 702–711, May 2020. [Online]. Available: https://doi.org/10.1049/iet-its.2019.0606
[3] M. Nigro, M. Ferrara, R. D. Vincentis, C. Liberto, and G. Valenti, “Data driven approaches for sustainable development of e-mobility in urban areas,” Energies, vol. 14, no. 13, p. 3949, Jul. 2021. [Online]. Available: https://doi.org/10.3390/en14133949

[4] H. İnaç, Y. E. Ayözen, A. Atalan, and C. Ç. Dönmez, “Estimation of postal service delivery time and energy cost with e-scooter by machine learning algorithms,” Applied Sciences, vol. 12, no. 23, p. 12266, Nov. 2022. [Online]. Available: https://doi.org/10.3390/app122312266

[5] E. Burani, G. Cabri, and M. Leoncini, “An algorithm to predict e-bike power consumption based on planned routes,” Electronics, vol. 11, no. 7, p. 1105, Mar. 2022. [Online]. Available: https://doi.org/10.3390/electronics11071105

[6] X. Hu, T. Liu, X. Qi, and M. Barth, “Reinforcement learning for hybrid and plug-in hybrid electric vehicle energy management: Recent advances and prospects,” IEEE Industrial Electronics Magazine, vol. 13, no. 3, pp. 16–25, Sep. 2019. [Online]. Available: https://doi.org/10.1109/mie.2019.2913015

[7] F. Zhang, X. Hu, R. Langari, and D. Cao, “Energy management strategies of connected HEVs and PHEVs: Recent progress and outlook,” Progress in Energy and Combustion Science, vol. 73, pp. 235–256, Jul. 2019. [Online]. Available: https://doi.org/10.1016/j.pecs.2019.04.002

[8] Y. Chen, G. Wu, R. Sun, A. Dubey, A. Laszka, and P. Pugliese, “A review and outlook on energy consumption estimation models for electric vehicles,” SAE International Journal of Sustainable Transportation, Energy, Environment, & Policy, vol. 2, no. 1, Mar. 2021. [Online]. Available: https://doi.org/10.4271/13-02-01-0005

[9] Z. Wang, G. Feng, D. Zhen, F. Gu, and A. Ball, “A review on online state of charge and state of health estimation for lithium-ion batteries in electric vehicles,” Energy Reports, vol. 7, pp. 5141–5161, Nov. 2021. [Online]. Available: https://doi.org/10.1016/j.egyr.2021.08.113

[10] M. Adaikkappan and N. Sathiyamoorthy, “Modeling, state of charge estimation, and charging of lithium-ion battery in electric vehicle: A review,” International Journal of Energy Research, vol. 46, no. 3, pp. 2141–2165, Oct. 2021. [Online]. Available: https://doi.org/10.1002/er.7339

[11] W. Liu, T. Placke, and K. Chau, “Overview of batteries and battery management for electric vehicles,” Energy Reports, vol. 8, pp. 4058–4084, Nov. 2022. [Online]. Available: https://doi.org/10.1016/j.egyr.2022.03.016

[12] S. A. Anbaran, N. R. N. Idris, M. Jannati, M. J. Aziz, and I. Alsofyani, “Rule-based supervisory control of split-parallel hybrid electric vehicle,” in 2014 IEEE Conference on Energy Conversion (CENCON). IEEE, Oct. 2014. [Online]. Available: https://doi.org/10.1109/cencon.2014.6967468

[13] F. R. Salmasi, “Control strategies for hybrid electric vehicles: Evolution, classification, comparison, and future trends,” IEEE Transactions on Vehicular Technology, vol. 56, no. 5, pp. 2393–2404, Sep. 2007. [Online]. Available: https://doi.org/10.1109/tvt.2007.899933

[14] X. Wang, L. Li, K. He, and C. Liu, “Dual-loop self-learning fuzzy control for AMT gear engagement: Design and experiment,” IEEE Transactions on Fuzzy Systems, vol. 26, no. 4, pp. 1813–1822, Aug. 2018. [Online]. Available: https://doi.org/10.1109/tfuzz.2017.2779102

[15] C.-C. Lin, H. Peng, J. Grizzle, and J.-M. Kang, “Power management strategy for a parallel hybrid electric truck,” IEEE Transactions on Control Systems Technology, vol. 11, no. 6, pp. 839–849, Nov. 2003. [Online]. Available: https://doi.org/10.1109/tcst.2003.815606
[16] L. Xu, M. Ouyang, J. Li, F. Yang, L. Lu, and J. Hua, “Application of Pontryagin's minimal principle to the energy management strategy of plug-in fuel cell electric vehicles,” International Journal of Hydrogen Energy, vol. 38, no. 24, pp. 10104–10115, Aug. 2013. [Online]. Available: https://doi.org/10.1016/j.ijhydene.2013.05.125

[17] C. Yang, Y. Shi, L. Li, and X. Wang, “Efficient mode transition control for parallel hybrid electric vehicle with adaptive dual-loop control framework,” IEEE Transactions on Vehicular Technology, vol. 69, no. 2, pp. 1519–1532, Feb. 2020. [Online]. Available: https://doi.org/10.1109/tvt.2019.2962509

[18] C. Dextreit, F. Assadian, I. V. Kolmanovsky, J. Mahtani, and K. Burnham, “Hybrid electric vehicle energy management using game theory,” in SAE Technical Paper Series. SAE International, Apr. 2008. [Online]. Available: https://doi.org/10.4271/2008-01-1317

[19] B. Škugor, J. Deur, M. Cipek, and D. Pavković, “Design of a power-split hybrid electric vehicle control system utilizing a rule-based controller and an equivalent consumption minimization strategy,” Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, vol. 228, no. 6, pp. 631–648, Jan. 2014. [Online]. Available: https://doi.org/10.1177/0954407013517220

[20] C. Yang, S. You, W. Wang, L. Li, and C. Xiang, “A stochastic predictive energy management strategy for plug-in hybrid electric vehicles based on fast rolling optimization,” IEEE Transactions on Industrial Electronics, vol. 67, no. 11, pp. 9659–9670, Nov. 2020. [Online]. Available: https://doi.org/10.1109/tie.2019.2955398

[21] H. Hu, W.-W. Yuan, M. Su, and K. Ou, “Optimizing fuel economy and durability of hybrid fuel cell electric vehicles using deep reinforcement learning-based energy management systems,” Energy Conversion and Management, vol. 291, p. 117288, Sep. 2023. [Online]. Available: https://doi.org/10.1016/j.enconman.2023.117288

[22] H. Mediouni, A. Ezzouhri, Z. Charouh, K. E. Harouri, S. E. Hani, and M. Ghogho, “Energy consumption prediction and analysis for electric vehicles: A hybrid approach,” Energies, vol. 15, no. 17, p. 6490, Sep. 2022. [Online]. Available: https://doi.org/10.3390/en15176490

[23] I. Ullah, K. Liu, T. Yamamoto, R. E. A. Mamlook, and A. Jamal, “A comparative performance of machine learning algorithm to predict electric vehicles energy consumption: A path towards sustainability,” Energy & Environment, vol. 33, no. 8, pp. 1583–1612, Oct. 2021. [Online]. Available: https://doi.org/10.1177/0958305x211044998

[24] F. C. López and R. Á. Fernández, “Predictive model for energy consumption of battery electric vehicle with consideration of self-uncertainty route factors,” Journal of Cleaner Production, vol. 276, p. 124188, Dec. 2020. [Online]. Available: https://doi.org/10.1016/j.jclepro.2020.124188

[25] H. Abdelaty, A. Al-Obaidi, M. Mohamed, and H. E. Farag, “Machine learning prediction models for battery-electric bus energy consumption in transit,” Transportation Research Part D: Transport and Environment, vol. 96, p. 102868, Jul. 2021. [Online]. Available: https://doi.org/10.1016/j.trd.2021.102868

[26] I. Ullah, K. Liu, T. Yamamoto, M. Zahid, and A. Jamal, “Electric vehicle energy consumption prediction using stacked generalization: an ensemble learning approach,” International Journal of Green Energy, vol. 18, no. 9, pp. 896–909, Feb. 2021. [Online]. Available: https://doi.org/10.1080/15435075.2021.1881902
[27] M. Ragone, V. Yurkiv, A. Ramasubramanian, B. Kashir, and F. Mashayek, “Data driven estimation of electric vehicle battery state-of-charge informed by automotive simulations and multi-physics modeling,” Journal of Power Sources, vol. 483, p. 229108, Jan. 2021. [Online]. Available: https://doi.org/10.1016/j.jpowsour.2020.229108

[28] P. Li, Y. Zhang, Y. Zhang, Y. Zhang, and K. Zhang, “Prediction of electric bus energy consumption with stochastic speed profile generation modelling and data driven method based on real-world big data,” Applied Energy, vol. 298, p. 117204, Sep. 2021. [Online]. Available: https://doi.org/10.1016/j.apenergy.2021.117204

[29] S. Gadri, S. O. Mehieddine, K. Herizi, and S. Chabira, “An efficient system to predict customers' satisfaction on touristic services using ML and DL approaches,” in 2021 22nd International Arab Conference on Information Technology (ACIT). IEEE, Dec. 2021. [Online]. Available: https://doi.org/10.1109/acit53391.2021.9677167

[30] J. P. Trovão, P. G. Pereirinha, H. M. Jorge, and C. H. Antunes, “A multi-level energy management system for multi-source electric vehicles – an integrated rule-based meta-heuristic approach,” Applied Energy, vol. 105, pp. 304–318, 2013. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0306261913000081

[31] B. Zheng, L. Ming, Q. Hu, Z. Lü, G. Liu, and X. Zhou, “Supply-demand-aware deep reinforcement learning for dynamic fleet management,” ACM Trans. Intell. Syst. Technol., vol. 13, no. 3, Jan. 2022. [Online]. Available: https://doi.org/10.1145/3467979

[32] D. N. T. How, M. A. Hannan, M. S. H. Lipu, K. S. M. Sahari, P. J. Ker, and K. M. Muttaqi, “State-of-charge estimation of li-ion battery in electric vehicles: A deep neural network approach,” IEEE Transactions on Industry Applications, vol. 56, no. 5, pp. 5565–5574, Sep. 2020. [Online]. Available: https://doi.org/10.1109/tia.2020.3004294

[33] J. Hong, Z. Wang, W. Chen, and Y. Yao, “Synchronous multi-parameter prediction of battery systems on electric vehicles using long short-term memory networks,” Applied Energy, vol. 254, p. 113648, Nov. 2019. [Online]. Available: https://doi.org/10.1016/j.apenergy.2019.113648

[34] S. Modi, J. Bhattacharya, and P. Basak, “Estimation of energy consumption of electric vehicles using deep convolutional neural network to reduce driver's range anxiety,” ISA Transactions, vol. 98, pp. 454–470, Mar. 2020. [Online]. Available: https://doi.org/10.1016/j.isatra.2019.08.055

[35] X. Qu, Y. Yu, M. Zhou, C.-T. Lin, and X. Wang, “Jointly dampening traffic oscillations and improving energy consumption with electric, connected and automated vehicles: A reinforcement learning based approach,” Applied Energy, vol. 257, p. 114030, Jan. 2020. [Online]. Available: https://doi.org/10.1016/j.apenergy.2019.114030

[36] H. Lee and S. W. Cha, “Energy management strategy of fuel cell electric vehicles using model-based reinforcement learning with data-driven model update,” IEEE Access, vol. 9, pp. 59244–59254, 2021. [Online]. Available: https://doi.org/10.1109/access.2021.3072903

[37] M. H. Lipu, M. Hannan, A. Hussain, A. Ayob, M. H. Saad, T. F. Karim, and D. N. How, “Data-driven state of charge estimation of lithium-ion batteries: Algorithms, implementation factors, limitations and future trends,” Journal of Cleaner Production, vol. 277, p. 124110, Dec. 2020. [Online]. Available: https://doi.org/10.1016/j.jclepro.2020.124110
[38] X. Tang, T. Jia, X. Hu, Y. Huang, Z. Deng, and H. Pu, “Naturalistic data-driven predictive energy management for plug-in hybrid electric vehicles,” IEEE Transactions on Transportation Electrification, vol. 7, no. 2, pp. 497–508, Jun. 2021. [Online]. Available: https://doi.org/10.1109/tte.2020.3025352

[39] H. Sun, Z. Fu, F. Tao, L. Zhu, and P. Si, “Data-driven reinforcement-learning-based hierarchical energy management strategy for fuel cell/battery/ultracapacitor hybrid electric vehicles,” Journal of Power Sources, vol. 455, p. 227964, Apr. 2020. [Online]. Available: https://doi.org/10.1016/j.jpowsour.2020.227964

[40] Z. Deng, X. Hu, X. Lin, Y. Che, L. Xu, and W. Guo, “Data-driven state of charge estimation for lithium-ion battery packs based on gaussian process regression,” Energy, vol. 205, p. 118000, Aug. 2020. [Online]. Available: https://doi.org/10.1016/j.energy.2020.118000",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "gcelwH4xUW",
"year": null,
"venue": "SC 2016",
"pdf_link": "https://ieeexplore.ieee.org/iel7/7875333/7876994/07877157.pdf",
"forum_link": "https://openreview.net/forum?id=gcelwH4xUW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Performance analysis, design considerations, and applications of extreme-scale in situ infrastructures",
"authors": [
"Utkarsh Ayachit",
"Andrew C. Bauer",
"Earl P. N. Duque",
"Greg Eisenhauer",
"Nicola J. Ferrier",
"Junmin Gu",
"Kenneth E. Jansen",
"Burlen Loring",
"Zarija Lukic",
"Suresh Menon",
"Dmitriy Morozov",
"Patrick O'Leary",
"Reetesh Ranjan",
"Michel E. Rasquin",
"Christopher P. Stone",
"Venkatram Vishwanath",
"Gunther H. Weber",
"Brad Whitlock",
"Matthew Wolf",
"K. John Wu",
"E. Wes Bethel"
],
"abstract": "A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. This paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Rj-B-7HYvkF",
"year": null,
"venue": "PLoS Comput. Biol. 2020",
"pdf_link": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1007659&type=printable",
"forum_link": "https://openreview.net/forum?id=Rj-B-7HYvkF",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Achieving stable dynamics in neural circuits",
"authors": [
"Leo Kozachkov",
"Mikael Lundqvist",
"Jean-Jacques E. Slotine",
"Earl K. Miller"
],
"abstract": "Author summary Stability is essential for any complex system including, and perhaps especially, the brain. The brain’s neural networks are highly dynamic and noisy. Activity fluctuates from moment to moment and can be highly variable. Yet it is critical that these networks reach a consistent state (or sequence of states) for their computations to make sense. Failures in stability have consequences ranging from mild (e.g incorrect decisions) to severe (disease states). In this paper we use tools from control theory and dynamical systems theory to find mechanisms which produce stability in recurrent neural networks (RNNs). We show that a kind of “unlearning” (inhibitory Hebbian and excitatory anti-Hebbian plasticity), balance of excitation and inhibition, and sparse anatomical connectivity all lead to stability. Crucially, we focus on the stability of neural trajectories. This is different from traditional studies of stability of fixed points or planes. We do not assess what trajectories our networks will follow but, rather, when these trajectories will all converge towards each other to achieve stability.",
"keywords": [],
"raw_extracted_content": "ZK[KF ZIN FZ\\OIS K\nFmstovtzr €tklwo nyzkytm€ tzzou~kw mt~mutt€\nSo{ R{�kmsv{� OJ\n4.5.6�.Ttvkow S�zn}�t�� OJ\n4.5.7�.Qokz/Qkm}�o� [w{�tzo4.6�.Kk~w\nR1Ttwwo~ OJ\n4.5�*\n4\\so Wtm{�o~ Oz��t���o q{~Sok~ztzr ’Toy{~�. Tk��kms ��o��� Oz��t���o {q\\omsz{w{ r�*TO\\+. Ikyl~tnro .\nTk��kms ��o���. ]zt�on [�k�o� {qFyo~tmk. 5Jo|k~�y oz�{qG~ktz ’I{rzt�t�o [mtozmo�. Tk��kms ��o���\nOz��t���o {q\\omsz{w{ r�*TO\\+. Ikyl~tnro .Tk��kms ��o���. ]zt�on [�k�o� {qFyo~tmk. 6U{zwtzok~ [���oy�\nSkl{~k�{~� .Tk��kms� �o��� Oz��t���o {q\\omsz{w{r �*TO\\+. Ikyl ~tnro. Tk��kms�� o���. ]zt�on [�k�o� {q\nFyo~tmk. 7Jo|k~�yoz� {qW��ms {w{r�. [�{mvs{wy ]zt�o~� t��.[�{mvs{wy .[�onoz\n�SRkznTS�sk~o qt~��k��s{~ �st| {z�st��{~v1 QQ[kznKRT k~ou{tz� �ozt{~ |~tzmt|kw tz�o��trk�{~ �{z�st�\n�{~v1\n*ovytwwo~E yt�1on�\nFl€t~kmt\n\\so l~ktz m{z�t��� {qykz� tz�o~m{zzom�on zo��{~v� �t�s �tyo/�k~�tzr .|k~�tkww� k��{z{y{��\nkm�t�t��1 \\so~o k~oy�w�t|wo �{�~mo� {qz{t�o kzn�k~tk�t{z �o�km�t�t�� sk��{o�oz��kww� m{z/\n�o~ro �{k��klwo. ~o|~{n�mtlwo ��k�o *{~�o}�ozmo {q��k�o�+ q{~t��m{y|��k�t{z� �{ykvo\n�oz�o1 _ok||~{kmson �st�|~{lwoy q~{y km{z�~{w/�so{~� |o~�|om�t�o l�k||w�tzr m{z�~km/\n�t{zkzkw��t� �{~om�~~oz� zo�~kw zo��{~v�1 \\st� kww{�on ���{qtznyomskzt�y� q{~kmsto�tzr\n��kltwt�� tzy�w�t|wo m{zzom�on zo��{~v� �t�s lt{w{rtmkww� ~okwt��tm n�zkytm�. tzmw�ntzr ��zk|/\n�tm|wk��tmt�� kzn�tyo/�k~�tzr tz|���1 \\so�o yomskzt�y� tzmw�non tzstlt�{~� Nolltkz |wk��tm/\nt��.o�mt�k�{~� kz�t/Nolltkz |wk��tmt��. ��zk|�tm �|k~�t�� kzno�mt�k�{~�/tzstlt�{~ �lkwkzmo1\nV�~qtzntzr� �son wtrs� {zs{� ��klwo m{y|��k�t{z� ytrs� lokmsto�on no�|t�o lt{w{rtmkw\nm{y|wo�t��1 I~�mtkww�. {�~kzkw��t� t�z{�wtyt�on �{kzkw��tzr �so��kltwt�� {qqt�on ro{yo�~tm\n{luom�� tz��k�o �|kmo *o1r |{tz��. wtzo�. |wkzo�+. l��~k�so~ �so��kltwt�� {q��k�o �~kuom�{~to�\n�stms yk� lom{y|wo� kzn�tyo/�k~�tzr1\nF��s{~ ��yyk~�\n[�kltwt�Þ t€o€€oz�tkw q{~kzÞm{y|woð €Þ€�oy tzmw�ntzr. kzn|o~sk|€ o€|omtkwwÞ. �sol~ktz1\n\\so l~ktz)€ zo�~kw zo�Ð{~v€ k~ostrswÞ nÞzkytm kznz{t€Þ1 Fm�t�t�Þ qw�m��k�o€ q~{y y{yoz�\n�{y{yoz� kznmkzlostrswÞ �k~tklwo1 bo�t�t€m~t�tmkw �sk� �so€o zo�Ð{~v€ ~okms km{z€t€�oz�\n€�k�o *{~€o}�ozmo {q€�k�o€+ q{~�sot~ m{y|��k�t{z€ �{ykvo €oz€o1 Lktw�~o€ tz€�kltwt�Þ sk�o\nm{z€o}�ozmo€ ~kzrtzr q~{y ytwn *o1rtzm{~~om� nomt€t{z€+ �{€o�o~o *nt€ok€o €�k�o€+1 Oz�st€\n|k|o~ Ðo�€o�{{w€ q~{y m{z�~{w �so{~Þ kznnÞzkytmkw €Þ€�oy€ �so{~Þ �{qtzn yomskzt€y€\nÐstms |~{n�mo €�kltwt�Þ tz~om�~~oz� zo�~kw zo�Ð{~v€ *ZUU€+1 _o€s{Ð �sk� kvtzn {q\n��zwok~ztzr� *tzstlt�{~Þ Nolltkz kznoðmt�k�{~Þ kz�t/Nolltkz |wk€�tmt�Þ+. lkwkzmo {qoðmt�k/\n�t{z kzntzstlt�t{z. kzn€|k~€o kzk�{ytmkw m{zzom�t�t�Þ kwwwokn �{€�kltwt�Þ1 I~�mtkwwÞ. Ðo\nq{m�€ {z�so€�kltwt�Þ {qzo�~kwtrajectories1 \\st€ t€ntqqo~oz� q~{y �~knt�t{zkw €��nto€ {q€�kltw/\nt�Þ{qqtðon |{tz�€ {~|wkzo€1 _on{z{�k€€o€€what �~kuom�{~to€ {�~zo�Ð{~v€ Ðtwwq{ww{Ð\nl��. ~k�so~. Ðsoz �so€o �~kuom�{~to€ Ðtwwkwwm{z�o~ro �{Ðk~n€ okms {�so~ �{kmsto�o €�kltwt�Þ1\nPLOS COMP UTATIONAL BIOLOGY\nWSV[ I{y|��k�t{zk wGt{w{r� �s��|�>2 2n{t1{~r243146 ;42u{�~zkw1| mlt1433;:9= F�r��� ;.5353 4249k4444444444\nk4444444444\nk4444444444\nk4444444444\nk4444444444\nOPEN ACCESS\nIt�k�t{z> R{�kmsv{ �S.S�zn}�t�� T.[w{�tzo Q/Q.\nTtwwo~ KR*5353+ Fmsto�tzr ��klwo n�zkytm� tz\nzo�~kw mt~m�t��1 WS{[ I{y|�� Gt{w4:*<+>\no433;:9=1 s��|�>22n {t1{~r243146;42u {�~zkw1\n|mlt1433;:9=\nKnt�{~> Fn~tkz TNkt�s. 
Q{sz� N{|vtz� ]zt�o~�t��.\n]UO\\KJ [\\F\\K[\nZomot�on> Qkz�k~� 47.5353\nFmmo|�on> Q�zo 5;.5353\nW�lwt�son> F�r��� ;.5353\nWoo~ Zo�to� Nt��{~�> WSV[ ~om{rzt�o ��so\nlozoqt�� {q�~kz�|k~ ozm� tz�so|oo~ ~o�to�\n|~{mo��? �so~oq{~o. �oozklwo �so|�lwtmk�t{z {q\nkww{q�som{z�oz� {q|oo~ ~o�to� kznk��s{~\n~o�|{z�o �kw{zr�tno qtzkw. |�lwt�son k~�tmwo�1 \\so\nont�{~tkw st��{~� {q�st�k~�tmwo t�k�ktwklwo so~o>\ns��|�>22n{t1{ ~r243146;42u{ �~zkw1|mlt 1433;:9=\nI{|�~trs�> ©5353 R{�kmsv{� o�kw1\\st�t�kz{|oz\nkmmo�� k~�tmwo nt��~tl��on �zno~ �so�o~y� {q�so\nI~ok�t�o I{yy{z� F��~tl��t{z Stmoz�o. �stms\n|o~yt�� �z~o��~tm�o n��o.nt��~tl� �t{z.kzn\n~o|~{n�m�t{z tzkz�yont�y. |~{�tnon �so{~trtzkw\nk��s{~ kzn�{�~mo k~om~ont�on1\nJk�k F�ktwkltwt� �[�k�oyoz�> Fwwno�ktwon |~{{q� {q\nyktz ~o��w�� k~oq{�zn tz�sok||oznt�1 [ty�wk�t{z�\n*Ltr� 5kzn6+�o~o |o~q{~yon tzW��s{z1 I{no �{\n~o|~{n�mo �soqtr�~o� t�k�ktwklwo k�ds��|�>22rt�s �l1\nm{y2v{�wo {2��klwoin�zk ytm�f1 U�yo~tm kw\nOz�~{n�m�t{z\nGosk�t{~ oyo~ro€ q~{y m{y|woð zo�~kw nÞzkytm€ �zq{wntzr {�o~ �tyo tzy�w�t/k~ok l~ktz zo�/\nÐ{~v€1 K�oz tz�trs�wÞ m{z�~{wwon oð|o~tyoz�kw €o��tzr€. �so€o zo�~kw nÞzkytm€ {q�oz �k~Þ\nlo�Ðooz tnoz�tmkw �~tkw€ d4.5f1 \\st€ mkzlon�o�{k�k~to�Þ {qqkm�{~€ tzmw�ntzr �k~tkltwt�Þ tz\nyoyl~kzo |{�oz�tkw€. tz|��€. |wk€�tm mskzro€ n�o�{~omoz� oð|o~tozmo kzn €{{z1bo�. tz€|t�o\n{q�so€o qw�m��k�t{z€. l~ktz zo�Ð{~v€ y�€� kmsto�o m{y|��k�t{zkw €�kltwt�Þ> no€|t�o lotzr\n�vz{mvon k~{�zn� lÞ|wk€�tmt�Þ kzn z{t€o. �solosk�t{~kw {��|�� {q�sol~ktz {z�Ð{oð|o~t/\nyoz�kwwÞ tnoz�tmkw �~tkw€ zoon€ �{lo€tytwk~1 N{Ð t€�st€€�kltwt�Þ kmsto�onD\n[�kltwt�Þ sk€|wkÞon kmoz�~kw ~{wo tzm{y|��k�t{zkw zo�~{€mtozmo €tzmo �so4=<3)€. Ðt�s �so\nkn�oz� {qy{now€ {qk€€{mtk�t�o yoy{~Þ �sk� €�{~on zo�~kw km�t�k�t{z |k��o~z€ k€€�klwo |{tz�\nk��~km�{~€ d6˘;f. kw�s{�rs ~o€ok~mso~€ Ðo~o �stzvtzr kl{�� �sol~ktz)€ €�kltwt�Þ €tzmo k€ok~wÞ k€\n�so4=93)€ d<f1\\so �k€� yku{~t�Þ {q�st€Ð{~v t€m{zmo~zon Ðt�s �so€�kltwt�Þ {qkm�t�t�Þ k~{�zn\n|{tz�€. wtzo€. {~|wkzo€ tzzo�~kw €�k�o €|kmo d=.43f1 N{Ðo�o~. ~omoz� zo�~{|sÞ€t{w{rtmkw €��nto€\nsk�o ~o�okwon �sk� tzykzÞ mk€o€. €tzrwo/�~tkw zo�~kw km�t�t�Þ t€strswÞ nÞzkytm. kzn �so~oq{~o\n|{�oz�tkwwÞ tzm{z€t€�oz� Ðt�s k€�k�tm k��~km�{~ �toÐ|{tz� d4.44f1 I{z€o}�oz�wÞ. �so~o sk€looz k\nz�ylo~ {q~omoz� €��nto€Œl{�s m{y|��k�t{zkw kzn oð|o~tyoz�kwŒÐstms q{m�€ y{~o l~{knwÞ\n{z�so€�kltwt�Þ {qzo�~kwtrajectories d45.46f. Ðstms ykÞ lom{y|woð kzn �tyo/�k~Þtzr1\n_stwo �so€o €��nto€ |~{�tno ty|{~�kz� oy|t~tmkw ~o€�w�€ kzn tz��t�t{z€. �soÞ n{z{�{qqo~\nkzkwÞ�tmkw tz€trs� tz�{ yomskzt€y€ q{~kmsto�tzr €�klwo �~kuom�{~to€ tz~om�~~oz� zo�~kw zo�/\nÐ{~v€1 U{~ n{�soÞ {qqo~ tz€trs�€ tz�{ kmsto�tzr €�ms €�kltwt�Þ tz|wk€�tm *{~y�w�t/y{nkw+ zo�/\nÐ{~v€1 No~o Ðoq{m�€ {zqtzntzr m{znt�t{z€ �sk� r�k~kz�oo €�klwo �~kuom�{~to€ tz~om�~~oz�\nzo�~kw zo�Ð{~v€ kzn �s�€ €son wtrs� {z�{ s{Ð €�klwo �~kuom�{~to€ ytrs� lokmsto�oninvivo1\n\\{n{€{.Ðo�€on m{z�~km�t{z kzkwÞ€t€. km{zmo|� no�ow{|on tzm{z�~{w �so{~Þ d47f1 ]zwtvo k\nmsk{�tm €Þ€�oy Ðso~o |o~��~lk�t{z€ kzn nt€�{~�t{z€ mkzloky|wtqton {�o~ �tyo. �so|{|�wk�t{z\nkm�t�t�Þ {qkm{z�~km�tzr zo�Ð{~v Ðtwwm{z�o~ro �{Ðk~n€ �so€kyo �~kuom�{~Þ. �s�€ kmsto�tzr €�k/\nlwonÞzkytm€ *Ltr 4+1Vzo ÐkÞ �{�zno~€�kzn m{z�~km�t{z t€�{~o|~o€oz� �so€�k�o {qkzo�Ð{~v\nk�krt�oz �tyo k€k|{tz� tz�sozo�Ð{~v)€ j€�k�o/€|kmo). 
q{~tz€�kzmo �so€|kmo €|kzzon lÞ�so\n|{€€tlwo qt~tzr ~k�o€ {qkww�sozo�Ð{~v€) zo�~{z€1 \\st€ €�k�o/€|kmo sk€�so€kyo z�ylo~ {q\nntyoz€t{z€ k€�soz�ylo~ {q�zt�€ntz�sozo�Ð{~v1 F|k~�tm�wk~ |k��o~z {qzo�~kw qt~tzr ~k�o€\nm{~~o€|{zn€ �{k|{tz� tz�st€€�k�o/€|kmo1 \\st€ |{tz� y{�o€ tz�sonntyoz€t{z€ k€�soqt~tzr\n~k�o€ mskzro kzn �~kmo€ {��k�~kuom�{~Þ {�o~ �tyo1\nOzkm{z�~km�tzr zo�Ð{~v. kww€�ms �~kuom�{~to€ m{z�o~ro1 \\so€o m{z�~km�tzr nÞzkytm€ sk�o\n|~o�t{�€wÞ looz �€on tz€o�o~kw k||wtmk�t{z€. tzmw�ntzr zo�~kw zo�Ð{~v€ Ðt�s Ðtzzo~ �kvo kww\nnÞzkytm€ d49.4:f. tzky{now {qkm�t{z/€owom�t{z tz�solk€kw rkzrwtk d4;f. kzn �{oð|wktz s{Ð\nzo�~kw €Þzms~{ztþk�t{z mkz|~{�om� q~{y z{t€o d4<f1 No~o. Ðotz€�okn oð|w{~o s{Ð m{z�~km�t{z\nmkzlokmsto�on rozo~kwwÞ tzy{~o m{y|woð ~om�~~oz� zo�~kw zo�Ð{~v€ *ZUU€+ tzmw�ntzr �s{€o\nÐt�s |wk€�tm Ðotrs�€1 _o�€on ZUU€ �sk� ~omot�on k~lt�~k~Þ �tyo/�k~Þtzr tz|��€ kzn skn€Þz/\nk|€o€ �sk� mskzron {zlt{w{rtmkwwÞ ~owo�kz� �tyo€mkwo€ d4=˘54f1 V�~ kzkwÞ€t€ ~o�okw€ €o�o~kw\nz{�ow mwk€€o€ {qyomskzt€y€ �sk� |~{n�mon m{z�~km�t{z tzmw�ntzr tzstlt�{~Þ Nolltkz |wk€�tm/\nt�Þ.oðmt�k�{~Þ kz�t/Nolltkz |wk€�tmt�Þ. oðmt�k�{~Þ/tzstlt�{~Þ lkwkzmo. kzn €|k~€o m{zzom�t�t�Þ1\nL{~�soqt~€� �Ð{|k~�€ {q�soZo€�w�€ €om�t{z. Ðoq{m�€ {zm{z�~km�t{z {qboth zo�~kw km�t�t�Þ\nand m{y|{zoz�€ {q�soÐotrs� yk�~tð *Ltr 4+1L{~�so~oyktztzr |k~�€ {q�soZo€�w�€ €om�t{z.\nÐos{wn �soÐotrs�€ qtðon *t1o�soÞ lom{yo |k~kyo�o~€. z{��k~tklwo€+ kzn q{m�€ {zm{z�~km�t{z\n{qzo�~kw km�t�t�Þ kw{zo1\nZo��w��\n\\so yktz �{{w Ðo�€on �{msk~km�o~tþo m{z�~km�t{z Ðk€�sow{rk~t�sytm z{~y *kw€{ vz{Ðz k€k\nyk�~tð yok€�~o+1 \\so q{~ykw noqtzt�t{z {q�sow{rk~t�sytm z{~y t€k€q{ww{Ѐ *q~{y d55f\nPLOS COMP UTATIONAL BIOLOGYFmsto�tzr ��klwo zo�~kw n�zkytm�\nWSV[ I{y|��k�t{zk wGt{w{r� �s��|�>2 2n{t1{~r243146 ;42u{�~zkw1| mlt1433;:9= F�r��� ;.5353 5249tz�or~k�t{z �k�|o~q{~yon ��tzr �notz�. kz{|oz/\n�{�~mo m{wwom�t{z {qz�yo~tmkw kwr{~t�sy� q{~\n|o~q{~ytzr tz�or~k�t{z� {q��{msk�� tm{~ntzk~�\nntqqo~oz�t kwo}�k�t{z�1\nL�zntzr> \\st��{~v �k���||{~�on l�UOTN\nZ6;TN3< ;35;. \\soTO\\Wtm{�o~ Oz��t���o\nOzz{�k�t{ zL�zn. VUZ T]ZO U33347/4:/ 4/5<65.\nkzn[�ont�s Zo�ok~ms I{�zmtw [�k~�tzr M~kz�\n534</374=;1 \\soq�zno~� sknz{~{wotz���n�\nno�trz. nk�km{wwom�t{z kznkzkw��t�. nomt�t{z �{\n|�lwt�s. {~|~o|k~k�t{z {q�soykz��m~t|�1\nI{y|o�tzr tz�o~o��� >\\sok��s{~� sk�o nomwk~on\n�sk�z{m{y|o�tzr tz�o~o��� o�t��1\n€om�t{z 51515+> wo�Flokyk�~tð tzCn×nkznz�zilokztzn�mon yk�~tð z{~y {zCn×n1\\soz �so\nm{~~o€|{zntzr w{rk~t�sytm z{~y t€�soq�zm�t{z w
�BCn�n% Rnoqtzon lÞ\nw
Alim\n�%6zI�Azi\u00007\n�\nOz�so€kyo ÐkÞ �sk� ntqqo~oz� �om�{~ z{~y€ tzn�mo ntqqo~oz� yk�~tð z{~y€. ntqqo~oz� �om�{~\nz{~y€ kw€{ tzn�mo ntqqo~oz� w{rk~t�sytm z{~y€1 \\Ð{ ty|{~�kz� w{rk~t�sytm z{~y€ Ðstms Ðo\n�€o�s~{�rs{�� �so|k|o~ k~o�s{€o tzn�mon lÞ�so�om�{~ 4/z{~y kzn �so�om�{~ 5/z{~y>\nw7A
max\njajjdn\ni=jyaijy&’\nw9A
�maxA�A\n9��\n_so~o λmaxnoz{�o€ �sowk~ro€� otroz�kw�o1 \\{€��nÞ �som{z�~km�t{z |~{|o~�to€ {qZUU€. Ðo\nk||wton �sow{rk~t�sytm z{~y �{�soZUU)€Jacobians1 \\so Qkm{ltkz {qknÞzkytmkw €Þ€�oy t€k\nyk�~tð o€€oz�tkwwÞ no€m~tltzr �sow{mkw j�~kqqtm wkЀ) {qzok~lÞ �~kuom�{~to€ {q�so€Þ€�oy tzt�€\n€�k�o €|kmo1 T{~o q{~ykwwÞ. t�t€�soyk�~tð {q|k~�tkw no~t�k�t�o€ no€m~tltzr s{Ð kmskzro tz\nkzÞ€Þ€�oy �k~tklwo ty|km�€ �sorateofchange {qo�o~Þ {�so~ �k~tklwo tz�so€Þ€�oy1 O�Ðk€\n€s{Ðz tzd47f �sk� tq�sow{rk~t�sytm z{~y {q�soQkm{ltkz t€zork�t�o �soz kwwzok~lÞ �~kuom�{/\n~to€k~oq�zzowon �{Ðk~n€ {zokz{�so~ *€oo [4F \\oð� [om�t{z 415q{~�omsztmkw ~o�toÐ+1 \\st€. tz\n��~z. ty|wto€ �sk�all�~kuom�{~to€ k~oq�zzowon �{Ðk~n€ {zokz{�so~ k�~k�o mkwwon �socontraction\nrate1 \\so m{z�~km�t{z ~k�o kzn �sow{rk~t�sytm z{~y k~o~owk�on k€q{ww{Ѐ> �soykðty�y �kw�o\nk��ktzon lÞ�sokl€{w��o �kw�o {qw{rk~t�sytm z{~y {q�soQkm{ltkz kw{zr �sozo�Ð{~v)€\nLtr41Ik~�{{z noy{z €�~k�tzr �som{z�~km�t{z |~{|o~�Þ1 Ozkzo�Ð{~v Ðt�sNzo�~kw �zt�€ kznSnÞzkytm €Þzk|�t m\nÐotrs�€. �sozo�Ð{~v km�t�t�Þ mkzlono€m~tlon k�~kuom�{~Þ {�o~ �tyo tzkz*N-S+/ntyoz€ t{zkw €|kmo1 Ozkm{z�~km�tzr\n€Þ€�oy kww€�ms �~kuom�{~t o€Ðtwwm{z�o~ro oð|{zoz�tkw wÞinsomemetric �{Ðk~n€ okms {�so~ {�o~ �tyo. ~ork~nwo€€ {q\ntzt�tkw m{znt�t{z€1 Oz{�so~ Ð{~n€. �sont€�kzmo lo�Ðooz kzÞ�Ð{�~kuom�{~t o€€s~tzv€ �{þo~{Œ|{�oz �tkwwÞ kq�o~ �~kz€to z�\nnt�o~rozm o*k€€s{Ðz+1\ns��|�>22n {t1{~r243146;42u {�~zkw1|m lt1433;:9=1 r334\nPLOS COMP UTATIONAL BIOLOGYFmsto�tzr ��klwo zo�~kw n�zkytm�\nWSV[ I{y|��k�t{zk wGt{w{r� �s��|�>2 2n{t1{~r243146 ;42u{�~zkw1| mlt1433;:9= F�r��� ;.5353 6249\n�~kuom�{~Þis�som{z�~km�t{z ~k�o1 Oz{�so~ Ð{~n€. tq�sow{rk~t�sytm z{~y {q�soQkm{ltkz t€\n�||o~ l{�znon lÞ€{yo zork�t�o z�ylo~−c.Ðso~ocF3.�soz �som{z�~km�t{z ~k�o t€\n€ty|wÞc1\nOy|{~�kz�wÞ. �sokl{�o no€m~t|�t{z mkzlorozo~kwtþon �{ntqqo~oz�metrics1 Fyo�~tm t€k\n€Þyyo�~tm. |{€t�t�o noqtzt�o yk�~tð Ðstms rozo~kwtþo€ �soz{�t{z {qK�mwtnokz nt€�kzmo1 K�o~Þ\ntz�o~�tlwo m{{~ntzk�o �~kz€q{~yk�t{z ÞBθðÞtown€ kyo�~tm TBθ\\θ1\\{€oo�st€. m{z€tno~ �so\n€}�k~on z{~y {qzÞz5ByTyBðTθTθðBðTTð1 \\s�€. �soz{~y {qÞt€~owk�on �{�soz{~y {qð\n�s~{�rs �soyo�~tm T1Oq{zomkzqtzn yo�~tm tzÐstms �sozo�Ð{~v t€m{z�~km�tzrŒtz �so\n€oz€o �sk� t�€Qkm{ltkz sk€zork�t�o w{rk~t�sytm z{~y˘�st€ ty|wto€ m{z�~km�t{z q{~allm{{~nt/\nzk�o €Þ€�oy€1 \\st€ ykvo€ m{z�~km�t{z kzkwÞ€t€ �€oq�w q{~kzkwÞþtzr €Þ€�oy€ Ðso~o oð|{zoz�tkw\nm{z�o~rozmo {q�~kuom�{~to€ t€|~omonon lÞ�~kz€toz� nt�o~rozmo *Ltr 4+k€tz~omoz� y{now€ {q\ny{�{~ m{~�oð d56.57f1 Oz�st€mk€o. t�t€�€�kwwÞ |{€€tlwo �{qtzn km{{~ntzk�o €Þ€�oy tzÐstms �so\nm{z�o~rozmo {q�~kuom�{~to€ t€j|�~o)1 L{~oðky|wo. wtzok~ €�klwo €Þ€�oy€ Ðo~o ~omoz�wÞ �€on tz\n�soy{�{~ m{z�~{w wt�o~k��~o �{qtzn tzt�tkw m{znt�t{z€ Ðstms |~{n�mo �soy{€� ozo~ro�tm zo�~kw\n~o€|{z€o d56f \\soÞ k~oj|�~owÞ) m{z�~km�tzr tzkyo�~tm noqtzon lÞ�sootroz�om�{~€ {q�so\nÐotrs� yk�~tð *€oo Kðky|wo 914tzd47f+ l���~kz€toz�wÞ nt�o~rtzr tz�sotnoz�t�Þ yo�~tm *t1oT\nBO+1U{�o �sk� �sotnoz�t�Þ yo�~tm m{~~o€|{zn€ �{θBO.Ðstms t€€ty|wÞ �so{~trtzkw. �z�~kz€/\nq{~yon m{{~ntzk�o €Þ€�oy1\nOzstlt�{~Þ solltkz |wk€�tmt�Þ ’oðmt�k�{~Þ kz�t/Nolltkz |wk€�tmt�Þ |~{n�mo\nm{z�~km�t{z\nO�t€vz{Ðz �sk� mo~�ktz q{~y€ {q€Þzk|�tm |wk€�tmt�Þ mkz}�tmvwÞ wokn �{oð�~oyo tz€�kltwt�to€ tq\nwoq��zmsomvon d=.59f1 \\s�€. 
�so€kyo qok��~o �sk� mkzktnwok~ztzr mkzkw€{ Þtown msk{�tm zo�~kw\nnÞzkytm€ tqz{�~or�wk�on1 O�t€z{�vz{Ðz s{Ð �sol~ktz ~o€{w�o€ �st€ntwoyyk1 Fr~{Ðtzr\nl{nÞ {qo�tnozmoŒl{�s oð|o~tyoz�kw kzn m{y|��k�t{zkwŒ€�rro€�€ �sk� tzstlt�{~Þ |wk€�tmt�Þ\n*�sk� t€.�so€�~ozr�soztzr {qtzstlt�{~Þ €Þzk|€o€+ mkz€�kltwtþo zo�~kw nÞzkytm€ Ðstwo €ty�w�k/\nzo{�€wÞ kww{Ðtzr q{~wok~ztzr2�~ktztzr tzzo�~kw mt~m�t�€ d5:˘5<f1 GÞ�€tzr �soQkm{ltkz kzkwÞ/\n€t€{��wtzon kl{�o. Ðoq{�zn �sk� tzstlt�{~Þ Nolltkz €Þzk|�tm |wk€�tmt�Þ *k€Ðoww k€oðmt�k�{~Þ\nkz�t/Nolltkz |wk€�tmt�Þ+ tznoon wokn€ �{€�klwo nÞzkytm€ tzzo�~kw mt~m�t�€1 [|omtqtmkwwÞ. Ðom{z/\n€tno~on zo�~kw zo�Ð{~v€ {q�soq{ww{Ðtzr m{yy{z q{~y>\nlxih
xidN\nj7Wijxjui
t
4\nÐso~o �so�o~yxinoz{�o€ �sojkm�t�k�t{z) {qzo�~{zik€kq�zm�t{z {q�tyo1 No~o Ðoq{ww{Ð\n{�so~ k��s{~€ d56f kzn tz�o~|~o�xik€�sodeviation q~{y �solk€owtzo qt~tzr ~k�o {qzo�~{zi1\nU{�o �sk� �st€tz�o~|~o�k�t{z k€€�yo€ �sk� �solk€owtzo qt~tzr ~k�o€ k~o|{€t�t�o˘�s�€ kww{Ðtzr\nq{~ð�{lozork�t�oŒkzn wk~ro oz{�rs €{�sk�baseline -xF31\\so �o~yWijnoz{�o€ �so\nÐotrs� lo�Ðooz zo�~{z€ikznj�so�o~yh*xi+mk|��~o€ �sonÞzkytm€ zo�~{ziÐ{�wn sk�o tz\n�sokl€ozmo {q€Þzk|�tm tz|��. tzmw�ntzr €owq/qoonlkmv �o~y€ k~t€tzr q~{y �sontkr{zkw owoyoz�€\n{q�soÐotrs� yk�~tðŒtz {�so~ Ð{~n€. �sonÞzkytm€ zo�~{ziÐ{�wn sk�o tqq{~kwwikznj.WtuB\n31\\so �o~y lotzr €�yyon ~o|~o€oz�€ �soÐotrs�on m{z�~tl��t{z {qkww�sozo�~{z€ tz�sozo�/\nÐ{~v {z�sokm�t�t�Þ {qzo�~{zi1LtzkwwÞ. �so�o~yui*t+~o|~o€oz�€ oð�o~zkw tz|�� tz�{ zo�~{zi1\n_ontnz{�m{z€�~ktz �sotz|��€ tz�{ �soZUU *oðmo|� �sk� �soÞ Ðo~o z{�tzqtzt�o+ kzn Ðo\nntnz{�€|omtqÞ �so|k~�tm�wk~ q{~y {qh*xi+oðmo|� �sk� t�€s{�wn lokwokv �o~y *t1osk€kzork/\n�t�ono~t�k�t�o q{~kwwx.€oo[4F \\oð� [om�t{z 51517. o1rh*xi+B−xi+1L�~�so~y{~o. Ðoykno z{\nk€€�y|�t{z€ ~ork~ntzr �so~owk�t�o �tyo€mkwo€ {q€Þzk|�tm kzn zo�~kw km�t�t�Þ1 [Þzk|�tm nÞzky/\ntm€Ðo~o �~ok�on {zkzo}�kw q{{�tzr k€zo�~kw nÞzkytm€1 _om{z€tno~on €Þzk|�tm |wk€�tmt�Þ {q\nPLOS COMP UTATIONAL BIOLOGYFmsto�tzr ��klwo zo�~kw n�zkytm�\nWSV[ I{y|��k�t{zk wGt{w{r� �s��|�>2 2n{t1{~r243146 ;42u{�~zkw1| mlt1433;:9= F�r��� ;.5353 7249\n�soq{ww{Ðtzr m{~~owk�t{zkw q{~y d5=f>\nlWij\u0000kijxixj\u0000q
tWij
5\nÐso~o �so�o~ykijF3t€�sowok~ztzr ~k�o q{~okms €Þzk|€o kzn γ*t+F3t€knomkÞ qkm�{~ q{~\nokms €Þzk|€o1 L{~�omsztmkw ~ok€{z€ {��wtzon tz�sok||ozntð *[4F \\oð� [om�t{z 6+.Ðo\n~o€�~tm�on R.�soyk�~tð m{z�ktztzr �sowok~ztzr ~k�o€kij.�{lo|{€t�t�o €oyt/noqtzt�o. €Þyyo�/\n~tm.kzn sk�o |{€t�t�o oz�~to€1 F|k~�tm�wk~ oðky|wo {qR€k�t€qÞtzr �so€o m{z€�~ktz�€ t€�{sk�o\n�sowok~ztzr ~k�o€ {qkww€Þzk|€o€ �{loo}�kw *t1o1kijBkF3+1\nGoq{~o Ðo€s{Ð �sk� *5+wokn€ �{{�o~kww €Þzk|�tm kzn zo�~kw m{z�~km�t{z. t�)€�€oq�w �{€|ozn\n€{yo �tyo tz�o~|~o�tzr �st€|wk€�tmt�Þ1 [tzmoWijmkzlo|{€t�t�o {~zork�t�o *m{~~o€|{zntzr �{\noðmt�k�{~Þ kzn tzstlt�{~Þ €Þzk|€o€. ~o€|om�t�owÞ+. kznxixjmkzlo|{€t�t�o {~zork�t�o *m{~~o/\n€|{zntzr �{m{~~owk�on kzn kz�tm{~~owk�on zo�~{z€. ~o€|om�t�owÞ+. �so~o k~oq{�~ mk€o€ �{m{z/\n€tno~1 _o€�yyk~tþo �so€o mk€o€ tz\\klwo 4kzn nt€m�€€ �soy tzno�ktw€ low{Ð1 GÞNolltkz\n|wk€�tmt�Þ Ðo~oqo~ �{�sotzm~ok€o {q€Þzk|�tm oqqtmtozmÞ lo�Ðooz m{~~owk�on zo�~{z€ d63f1 Oz�so\nm{z�oð� {q€ty|wo zo�~kw zo�Ð{~v€ Ðt�s €mkwk~ Ðotrs�€. k€Ðom{z€tno~ so~o. oqqtmtozmÞ ~oqo~€ �{\n�sokl€{w��o �kw�o �w�{qkÐotrs�1 \\s�€. q{~oðmt�k�{~Þ €Þzk|€o€. *5+tzqkm�no€m~tlo€anti/Nol/\nltkz |wk€�tmt�Þ. lomk�€o �so|{€t�t�o €Þzk|�tm Ðotrs� lom{yo€ wo€€|{€t�t�o *kzn �s�€ wo€€oqqt/\nmtoz�+ lo�Ðooz m{~~owk�on zo�~{z€ kzn y{~o |{€t�t�o *�s�€ y{~o oqqtmtoz�+ q{~kz�tm{~~owk�on\nzo�~{z€1 L{~tzstlt�{~Þ €Þzk|€o€. *5+no€m~tlo€ Nolltkz |wk€�tmt�Þ lomk�€o �sont~om�t{z {q€Þz/\nk|�tm Ðotrs� mskzro t€zork�t�o lo�Ðooz m{~~owk�on zo�~{z€. kzn �s�€ �so€Þzk|€o lom{yo€\nmore oqqtmtoz� d64.65f. Ðstwo q{~kz�tm{~~owk�on zo�~{z€ �sont~om�t{z {q€Þzk|�tm Ðotrs� mskzro\nt€|{€t�t�o. kzn �s�€ �so€Þzk|€o lom{yo€ wo€€oqqtmtoz�1 Wwk€�tmt�Þ {q�st€q{~y |~{n�mon m{z/\n�~km�tzr zo�~kw kzn €Þzk|�tm nÞzkytm€ ~ork~nwo€€ {q�sotzt�tkw �kw�o€ {q�soÐotrs�€ kzn zo�~kw\nkm�t�t�Þ *Ltr€ 5kzn 6+1\\so lwkmv �~kmo {qLtr6F€s{Ѐ �sk� �st€t€z{�€ty|wÞ n�o�{�soÐotrs�€\nnomkÞtzr �{31\\s�€. �st€|wk€�tmt�Þ t€z{�{zwÞ m{z�~km�t{z |~o€o~�tzr. t�t€m{z�~km�tzr ensuring1\nL�~�so~y{~o. Ðo€s{Ðon �sk� �sozo�Ð{~v t€m{z�~km�tzr tzkz{z/tnoz�t�Þ yo�~tm *Ðstms Ðo\nno~t�o q~{y �so€Þ€�oy |k~kyo�o~€ tzR+.{|oztzr �|�so|{€€tltwt�Þ {q�~kz€toz� nt�o~roz�\nnÞzkytm€ tz�sotnoz�t�Þ yo�~tm. k€€ooz tz�soy{nowwtzr {qy{�{~ nÞzkytm€ d56f1\n\\{oð|wktz s{Ð tzstlt�{~Þ Nolltkz |wk€�tmt�Þ kzn oðmt�k�{~Þ kz�t/Nolltkz |wk€�tmt�Þ Ð{~v\n�{|~{n�mo m{z�~km�t{z km~{€€ kÐs{wo zo�Ð{~v. Ðozoonon �{nokw Ðt�s �sozo�Ð{~v tzks{wt€/\n�tmqk€st{z. z{�lÞkzkwÞþtzr �sonÞzkytm€ {q€tzrwo zo�~{z€1 \\{n{€{.Ðom{zmo|��kwtþon\nZUU€ Ðt�s nÞzkytm €Þzk|€o€ k€k€tzrwo €Þ€�oy q{~yon lÞm{yltztzr �Ð{€�l€Þ€�oy€. kzo�/\n~kw€�l€Þ€�oy kzn k€Þzk|�tm €�l€Þ€�oy1 _o€s{Ðon �sk� �sokl{�o |wk€�tmt�Þ ~�wo won�sozo�~kw\nkzn €Þzk|�tm €�l€Þ€�oy€ �{lotzno|oznoz�wÞ m{z�~km�tzr1 \\s�€ m{z�~km�t{z kzkwÞ€t€ {q�so\n{�o~kww €Þ€�oy �soz l{twon n{Ðz �{oðkytztzr �sotz�o~km�t{z€ lo�Ðooz �so€o €�l€Þ€�oy€ d66f1\n_oq{�zn �sk� �st€|wk€�tmt�Þ Ð{~v€ wtvokztz�o~qkmo lo�Ðooz �so€o €Þ€�oy€1 O�|~{n�mo€ �Ð{\nnt€�tzm� oqqom�€ �sk� |�€s zo�Ð{~v€ �{Ðk~n m{z�~km�t{z1 Lt~€�. 
t�ykvo€ �so€Þzk|�tm Ðotrs�\nyk�~tð €Þyyo�~tm *Ltr 6F.~on�~kmo+1 \\st€ yokz€ �sk� �soÐotrs� lo�Ðooz zo�~{zi�{jt€�so\n€kyo k€j�{i1_o€s{Ðon �st€lÞ�€tzr �soqkm��sk� o�o~Þ yk�~tð mkzloÐ~t��oz k€�so€�y {qk\n\\klwo 41[�yyk~Þ {q�sooqqom� {q�so|wk€�tmt�Þ no€m~tlon tzK}*5+{zoðmt�k�{~ Þkzn tzstlt�{~Þ q{~m{~~owk�on {~\nkz�tm{~~owk �on|~okzn |{€� €Þzk|�tm zo�~{z€1\nI{~~owk�on Uo�~{z€ Fz�tm{~ ~owk�on Uo�~{z€\nxixjF3 xixjD3\nKðmt�k�{ ~Þ[Þzk|€o So€€ Kqqtmtoz � T{~o Kqqtmtoz �\nwF3 Δ�w�D3 Δ�w�F3\nOzstlt�{~Þ [Þzk|€o T{~o Kqqtmtoz� So€€ Kqqtmtoz �\nwD3 Δ�w�F3 Δ�w�D3\ns��|�>22n {t1{~r243146;42u {�~zkw1|m lt1433;:9=1 �334\nPLOS COMP UTATIONAL BIOLOGYFmsto�tzr ��klwo zo�~kw n�zkytm�\nWSV[ I{y|��k�t{zk wGt{w{r� �s��|�>2 2n{t1{~r243146 ;42u{�~zkw1| mlt1433;:9= F�r��� ;.5353 9249\n|�~owÞ €Þyyo�~tm yk�~tð kzn k|�~owÞ kz�t/€Þyyo�~tm yk�~tð1 Fzkz�t/€Þyyo�~tm yk�~tð t€\n{zoÐso~o �soijowoyoz� t€�sozork�t�o {q�sojiowoyoz� *i1e1WijB−Wji+kzn kww�sontkr{zkw\nowoyoz�€ k~oþo~{1 _o�soz €s{Ðon �sk� kz�t/Nolltkz |wk€�tmt�Þ €s~tzv€ �sokz�t/€Þyyo�~tm\n|k~� {q�soÐotrs� yk�~tð �{þo~{. ty|wÞtzr �sk� �soÐotrs� yk�~tð lom{yo€ €Þyyo�~tm1 \\so\n€Þyyo�~Þ {q�soÐotrs� yk�~tð jmkzmow€ {��) {qq/ntkr{zkw€ tz�soQkm{ltkz yk�~tð *€oo [4F \\oð�\n[om�t{z 6+{q�so{�o~kww zo�~kw/€Þzk|�tm €Þ€�oy1 S{{€owÞ €|okvtzr. {qq/ntkr{zkw �o~y€ tz�so\nQkm{ltkz ~o|~o€oz� |{�oz�tkwwÞ no€�kltwtþtzr m~{€€/�kwv lo�Ðooz �so�Ð{€�l€Þ€�oy€1 L�~�so~/\ny{~o. kz�t/Nolltkz |wk€�tmt�Þ ykvo€ �soÐotrs� yk�~tð zork�t�o €oyt/noqtzt�o1 \\st€ yokz€ �sk�\nkwwt�€otroz�kw�o€ k~owo€€�skz {~o}�kw �{þo~{ *Ltr 6+1\n[|k~€o m{zzom�t�t�Þ |�€so€ zo�Ð{~v€ �{Ðk~n m{z�~km�t{z\n[Þzk|�tm m{zzom�t�t�Þ tz�sol~ktz t€oð�~k{~ntzk~twÞ €|k~€o1 \\so kn�w� s�ykz l~ktz m{z�ktz€ k�\nwok€� 4344zo�~{z€ Þo�okms zo�~{z q{~y€ kzn ~omot�o€ {zk�o~kro {zwÞ 436−437€Þzk|�tm m{z/\nzom�t{z€ d67f1 Oq�sol~ktz)€ zo�~{z€ Ðo~o kww/�{/kww m{zzom�on �st€z�ylo~ Ð{�wn lo{z�so\n{~no~ {q4344€Þzk|�tm m{zzom�t{z€ |o~zo�~{z *7677�7677\n7677�synapticconnections\nneurons+1K�oz tzw{mkw |k�mso€\n{qm{~�oð. €�ms k€Ðoy{now so~o. m{zzom�t�t�Þ t€qk~q~{y kww/�{/kww? m{~�tmkw mt~m�t�€ k~o€|k~€o\nd69f1 V�~ kzkwÞ€o€ ~o�okwon �sk� €|k~€o m{zzom�t�t�Þ sow|€ |~{n�mo rw{lkw zo�Ð{~v m{z�~km�t{z\nq{~ykzÞ �Þ|o€ {q€Þzk|�tm |wk€�tmt�Þ1\n\\{kmm{�z� q{~�so|{€€tltwt�Þ �sk� €{yo €Þzk|€o€ ykÞ sk�o y�ms €w{Ðo~ |wk€�tmt�Þ �skz {�s/\no~€*kzn mkz�s�€ lo�~ok�on k€€Þzk|€o€ Ðt�s qtðon ky|wt��no+. Ðoykno knt€�tzm�t{z lo�Ðooz\n�so�{�kw z�ylo~ {q€Þzk|€o€ kzn �so�{�kw z�ylo~ {qplastic €Þzk|€o€1 \\so€o |wk€�tm €Þzk|€o€\n�soz mskzron {zk€tytwk~ �tyo/€mkwo k€�sozo�~kw qt~tzr ~k�o€1 GÞzo�~kw nÞzkytm€. Ðoyokz\nLtr51I{z�~km�tzr nÞzkytm€ {qzo�~kw kzn €Þzk|�tm km�t�t�Þ1 K�mwtnokz nt€�kzmo€ lo�Ðooz €Þzk|�tm kzn zo�~kw\n�~kuom�{~t o€noy{z€�~k�o oð|{zoz�tkw €s~tzvkro {�o~ �tyo1 \\so �{|~{Ð {q|kzow€ €s{Ѐ �sokm�t�k�t{z {qk~kzn{ywÞ\n€owom�on zo�~kw �zt� *lwkmv+ kzn €Þzk|€o *lw�o+ km~{€€ �Ð{€ty�wk�t{z€ *n{��on kzn €{wtn wtzo+1 \\so l{��{y ~{Ð €s{Ѐ\n�sok�o~kro K�mwtnokz nt€�kzmo tz€�k�o €|kmo q{~�soÐs{wo |{|�wk�t{z km~{€€ €ty�wk�t{ z€Ðt�s nt€�tzm�. ~kzn{ytþ on\n€�k~�tzr m{znt�t{z€1 Soq�y{€ �Wkzow> [ty�wk�t {z€{qkm{z�~km�tzr €Þ€�oy Ðso~o {zwÞ €�k~�tzr m{znt�t{z€ ntqqo~ {�o~\n€ty�wk�t{ z€1Ioz�o~ Wkzow> �so€kyo k€tzSoq�y{€ �l��Ðt�s kzknnt�t{zkw ~kzn{y |�w€o |o~��~lk�t{ ztz{zo{q�so�Ð{\n€ty�wk�t{ z€tzntmk�on lÞk~onlkmvr~{�zn €skntzr1 Ztrs�y{€� Wkzow> €kyo k€tzIoz�o~ Wkzow l��Ðt�s knnt�t{z kw\n€�€�ktzon z{t€o. 
We analyzed RNNs with the structure:

    ẋ_i = -h_i(x_i) + Σ_{j=1}^{N} W_ij r(x_j) + u_i(t)    (3)

where h_i(x_i) is a nonlinear leak term (see S1 Text Section 2.2.4), and r(x_j) is a nonlinear activation function. The RNNs analyzed in this section are identical to those analyzed in the previous section, with the exception of the r terms, which we constrained to be linear. Under the assumption that the plastic synapses have a 'forgetting term', we show in the appendix (S1 Text Section 4) that if the following equation is satisfied for every neuron, then the overall network is contracting:

    p_i (g_max w_max + α_i r_max) < β_i    (4)

where p_i denotes the total number of afferent synapses into neuron i and α_i denotes the fraction of afferent plastic synapses into neuron i. The term w_max refers to the maximum possible absolute efficiency of any single synapse. That is, w_max = max_{i,j} |w_ij|. Similarly, the term r_max refers to the maximum possible absolute value of r. That is, r_max = max_{i,t} |r_i(t)|. The term β_i denotes the contraction rate of the i-th isolated neuron. That is, β_i = max_t (∂h_i/∂x_i).
Recall from the introduction that the contraction rate measures how quickly the trajectories of a contracting system reconverge after perturbation. Finally, g_max refers to the maximum gain of any neuron in the network. That is, g_max = max_{i,t} |∂r_i/∂x_i|. Note that because β_i is a positive number by assumption, it is always possible to decrease p_i to the point where (4) is satisfied.

Fig 3. The anti-Hebbian plasticity pushes the weight matrix toward symmetry. (Left) Plotted are the spectral norms (largest singular value) of the overall weight matrix as well as the anti-symmetric part of that matrix. Since every square matrix can be uniquely decomposed as the sum of a symmetric and anti-symmetric component (0.5*(W + W') and 0.5*(W - W'), respectively), the teal curve decaying to zero implies that the matrix becomes symmetric. The black trace shows the spectral norm of the overall weight matrix. If this quantity does not decay to zero, it implies that not all the weights have decayed to zero. On the right, we plot the largest eigenvalue of the symmetric part of W. A prerequisite for overall contraction of the network is that this quantity be less than or equal to the 'leak-rate' of the individual neurons. The dotted line shows our theoretical upper bound for this quantity, and the solid line shows the actual value taken from a simulation (see Methods).
https://doi.org/10.1371/journal.pcbi.1007659.g003

Of course, it is possible that the only value of p_i that satisfies (4) is the trivial solution p_i = 0, which corresponds to removing all interconnections between neurons. Since these neurons are assumed to be contracting in isolation, the network is trivially contracting. However, if the term inside the parentheses of (4) is small enough, or β_i is large enough, intermediate values of p_i can be found which satisfy the inequality. Because increasing the sparsity of a network corresponds to decreasing p_i, we may conclude that increasing the sparsity of connections pushes the system in the direction of contraction. Note that (4) also implies that the faster the individual neurons are contracting (i.e. the larger β_i is), the denser you can connect them with other neurons while still preserving overall contraction.
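Condition (4) is cheap to evaluate for a given connectivity pattern. Below is a small sketch (ours; the inequality form follows our reconstruction of (4) above, and every constant is an illustrative placeholder, not a value from the paper) that checks the per-neuron bound for a sparse random graph:

```python
import numpy as np

rng = np.random.default_rng(2)
N, p_connect = 200, 0.05                 # 5% connection probability
A = rng.random((N, N)) < p_connect       # hypothetical adjacency pattern
p = A.sum(axis=1)                        # p_i: afferent synapses per neuron

alpha = np.full(N, 0.05)                 # fraction of plastic afferents (assumed)
g_max, w_max, r_max = 1.0, 0.02, 0.5     # placeholder bounds on gain/weight/rate
beta = 1.0                               # placeholder isolated contraction rate

lhs = p * (g_max * w_max + alpha * r_max)
print("fraction of neurons satisfying (4):", np.mean(lhs < beta))
```

Raising `p_connect` (denser wiring) or `w_max` pushes neurons out of the bound, which is exactly the qualitative message of the text: sparser connections and faster-leaking neurons make contraction easier to guarantee.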
Up to now we have focused our analysis on the case where synaptic weights vary on a timescale comparable to neurons, and must therefore be factored into the stability analysis. For the next two sections, we'll apply contraction analysis to neural networks in the case where the weights may be regarded as fixed relative to the neural dynamics (i.e. there is a separation of timescales).

E-I balance leads to contraction in static RNNs

Apart from making connections sparse, one way to ensure contraction is to make synaptic weights small. This can be seen for the case with static synapses by setting α_i = 0 in the section above, where w_max now has to be small to ensure contraction. Intuitively, this is because very small weights mean that neurons cannot exert much influence on one another. If the neurons are stable before interconnection, they will remain so. Since strong synaptic weights are commonly observed in the brain, we were more interested in studying when contraction can arise irrespective of weight amplitude. Negative and positive synaptic currents are approximately balanced in biology [36-38]. We reasoned that such balance might allow much larger weight amplitudes while still preserving contraction, since most of the impact of such synapses cancels and the net effect is small. This was indeed the case. To show this, we studied the same RNN as in the section above, while assuming additionally that the weights are static. In particular, we show in the appendix (S1 Text Section 5) that contraction can be assessed by studying the eigenvalues of the symmetric part of W (i.e. (W + W^T)/2).

Before we discuss the above result in detail, it is useful here to quickly review some facts about the stability of nonlinear systems as compared to the stability of linear systems. In particular, the fact that the eigenvalues of W are only informative for assessing contraction in regions where the dynamics may be regarded as linear. This is because in linear time-invariant (LTI) systems (i.e. ẋ = Ax) stability is completely characterized in terms of the eigenvalues of A. However, this is not true for nonlinear systems, even those of the linear time-varying form ẋ = A(t)x. To see this, consider the following counter-example (from [39], section 4.2.2):

    [ẋ; ẏ] = [ -1  e^{2t} ; 0  -1 ] [x; y]    (5)

The eigenvalues of A(t) are (-1, -1) for all time; however, one can verify by direct evaluation that the solution of this system satisfies y = y(0)e^{-t} and ẋ = -x + y(0)e^{t}, which is unstable along x. However, it can be shown straightforwardly that if the eigenvalues of the symmetric part of A(t) are all negative, then the system is stable [39]. This fact underlies our analysis, and highlights the reason why the eigenvalues of the symmetric part of W are important for stability.
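The counter-example is worth checking numerically. The sketch below (ours; step size and horizon are illustrative) integrates (5) with forward Euler and then evaluates the symmetric-part test, which correctly refuses to certify the system even though the raw eigenvalues are constant and negative:

```python
import numpy as np

dt, T = 1e-3, 5.0
x, y = 1.0, 1.0
for k in range(int(T / dt)):
    t = k * dt
    dx = -x + np.exp(2 * t) * y   # A(t) = [[-1, e^{2t}], [0, -1]]
    dy = -y
    x, y = x + dt * dx, y + dt * dy
print("x(T):", x)   # grows roughly like e^T despite eigenvalues (-1, -1)

# The symmetric part tells the real story: its largest eigenvalue becomes
# positive for large t, so the (A + A^T)/2 test flags the instability.
A = np.array([[-1.0, np.exp(2 * T)], [0.0, -1.0]])
print("max eig of symmetric part:", np.linalg.eigvalsh(0.5 * (A + A.T)).max())
```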
Returning to our results, we show that if excitatory to inhibitory connections are of equal amplitude (and opposite sign) as inhibitory to excitatory connections, they will not interfere negatively with stability, regardless of amplitude (see S1 Text Section 5). This is because connections between inhibitory and excitatory units will be in the off-diagonal of the overall weight matrix and get cancelled out when computing the symmetric part. As an intuitive example, consider a two-neuron circuit made of one excitatory neuron and one inhibitory neuron connected recurrently (as in [40], Fig 1A). Assume that the overall weight matrix has the following structure:

    W = [ w  -w ; w  -w ]

When taking the symmetric part of this matrix, the off-diagonal elements cancel out, leaving only the diagonal elements to consider. Since the eigenvalues of a diagonal matrix are simply its diagonal elements, we can conclude that if the excitatory and inhibitory subpopulations are independently contracting (w is less than the contraction rate of an isolated neuron), then overall contraction is guaranteed. It is straightforward to generalize this simple two-neuron example to circuits achieving E-I balance through interacting populations (see S1 Text Section 5). It is also straightforward to generalize to the case where E-I and I-E connections do not cancel out exactly neuron by neuron, but rather cancel out in a statistical sense where the mean amplitudes are matched. Another way to view this E-I balance is in the framework of combinations of contracting systems (Fig 4). It is known that combining independently contracting systems in negative feedback preserves contraction [14]. We show that E-I balance actually translates to this negative feedback and thus can preserve contraction.

Fig 4. Cartoon illustrating the combination properties of contracting systems. A) Two isolated, contracting systems. The Jacobian of the overall system is block diagonal, with all zeros on the off-diagonal, corresponding to the fact that the systems are not connected. B) If one of the systems is connected to the other in a feedforward manner, the overall Jacobian is changed by the presence of non-zero terms on the bottom left block, corresponding to the connections going from the 'top' system to the 'bottom' system. This Jacobian may not be negative definite. However, it is known that a coordinate change exists which will make it negative definite. Thus, hierarchically connected contracting systems are contracting. C) If the systems are reciprocally connected, the system may lose its contracting properties (for example in the case of positive feedback). However, it is known that if the feedforward connections (blue) are 'equal and opposite' to the feedback connections (green) then the overall system is contracting. We use this property in the main text to prove that inhibitory Hebbian plasticity and excitatory anti-Hebbian plasticity lead to contracting neural circuits.
https://doi.org/10.1371/journal.pcbi.1007659.g004
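The two-neuron cancellation can be seen in two lines of numpy. In this sketch (ours; the amplitude w = 10 is deliberately large to make the point) the E-I cross terms vanish from the symmetric part, leaving only the diagonal self-terms to constrain:

```python
import numpy as np

w = 10.0                                  # large E-I amplitude, chosen arbitrarily
W = np.array([[w, -w], [w, -w]])          # E->E, I->E / E->I, I->I
sym = 0.5 * (W + W.T)
print(sym)                                # diagonal: [[w, 0], [0, -w]]
print(np.linalg.eigvalsh(sym))            # eigenvalues (-w, w): the +/- w cross
                                          # terms contribute nothing
```

Only the diagonal entry w (the excitatory self-connection) must now be dominated by the neuron's leak rate; the balanced cross-connections are stability-neutral no matter how strong they are.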
Relation to other models with fading memory

As can be seen in Fig 2, contracting systems have 'fading memories'. This means that past events will affect the current state, but that the impact of a transient perturbation gradually decays over time. Consider the transient input in Fig 2 (red panel) presented on only one of the two trials to the network. Because the input is only present on one trial and not the other, we call it a perturbation. When this perturbation occurs, the trajectories of the two trials become separated. However, after the disturbance is removed, the distance between the network's trajectories starts shrinking back to zero again. Thus, the network does not hold on to the memory of the perturbation indefinitely; the memory fades away. A similar property has been used in Echo State Networks (ESNs) and liquid state machines (LSMs) to perform useful brain-inspired computations [41,42]. These networks are an alternative to classical attractor models in which neural computations are performed by entering stable states rather than by 'fading memories' of external inputs [43].

While there are several distinctions between the networks described above and ESNs (e.g. ESNs are typically discrete time dynamical systems, rather than continuous), we show in the appendix (S1 Text Section 6.1) that they are a special case of the networks considered here. We show this for ESNs as opposed to LSMs because LSMs are typically implemented on integrate and fire neurons which, because of the spike reset, have a sharp discontinuity in their dynamics, making them unamenable to contraction analysis. By highlighting the link between contraction and ESNs, we demonstrate that the contracting neural networks considered here are in principle capable of performing useful and interesting neural computations. In other words, the strong stability properties of contracting neural networks do not automatically prohibit them from doing interesting computations. By working within the framework of contraction analysis we were able to study networks both with dynamic synapses and non-identity metrics, a much broader model space than allowed by the standard ESN framework.
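A discrete-time echo-state check makes the connection tangible. This sketch (ours; the network size and the 0.9 scaling are illustrative conventions, not taken from the paper) scales the recurrent weights so their spectral norm is below 1, which with a 1-Lipschitz nonlinearity makes the update a contraction, and verifies that two different initial states driven by the same input converge:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
W = rng.normal(size=(N, N))
W *= 0.9 / np.linalg.svd(W, compute_uv=False)[0]   # ||W||_2 = 0.9 < 1
W_in = rng.normal(size=N)

def update(x, u):
    return np.tanh(W @ x + W_in * u)   # tanh is 1-Lipschitz

x1, x2 = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
for t in range(200):
    u = np.sin(0.1 * t)
    x1, x2 = update(x1, u), update(x2, u)
print(np.linalg.norm(x1 - x2))   # ~0: the echo state / fading memory property
```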
Discussion

We studied a fundamental question in neuroscience: how do neural circuits maintain stable dynamics in the presence of disturbance, noisy inputs and plastic change? We approached this problem from the perspective of dynamical systems theory, in light of the recent successes of understanding neural circuits as dynamical systems [44]. We focused on contracting dynamical systems, which are as yet largely unexplored in neuroscience, as a solution to the problem outlined above. We did so for three reasons:

1. Contracting systems can be input-driven. This is important because neural circuits are typically bombarded with time-varying inputs, either from the environment or from other brain areas. Previous stability analyses have focused primarily on the stability of RNNs without time-varying input. These analyses are most insightful in situations where the input into a circuit can be approximated as either absent or constant. However, naturalistic stimuli tend to be highly time-varying and complex [45].

2. Contracting systems are robust to noise and disturbances. Perturbations to a contracting system are forgotten at the rate of the contraction, and noise therefore does not stack up over time. Importantly, the rate of forgetting (i.e. the contraction rate) does not change with the size of the perturbation. Thus dynamic stability can co-exist with high trial-to-trial variability in contracting neural networks, as observed in biology.

3. Contracting systems can be combined with one another in ways that preserve contraction (Fig 4). This is not true of most dynamical systems, which can easily 'blow up' when connected in feedback with one another [8]. This combination property is important as it is increasingly clear that cognitive functions such as working memory or attention are distributed in multiple cortical and sub-cortical regions [46,47]. In particular, prefrontal cortex has been suggested as a hub that can reconfigure the cortical effective network based on task demands [48]. Brain networks must therefore be able to effectively reconfigure themselves on a fast time-scale without loss of stability. Most attempts in modelling cognition, for instance working memory, tend to utilize single and often autonomous networks. Contracting networks display a combination of input-driven and autonomous dynamics, and thus have key features necessary for combining modules into flexible and distributed networks.

To understand what mechanisms lead to contraction in neural circuits, we applied contraction analysis to RNNs. For RNNs with static weights, we found that the well-known Echo State Networks are a special case of a contracting network. Since realistic synapses are complex dynamical systems in their own right, we went one step further and asked when neural circuits with dynamic synapses would be contracting. We found that inhibitory Hebbian plasticity as well as excitatory anti-Hebbian plasticity and synaptic sparsity all lead to contraction in a broad class of RNNs.

Inhibitory plasticity has recently been the focus of many experimental and computational studies due to its stabilizing nature as well as its capacity for facilitating nontrivial computations in neural circuits [27,28,49]. It is known to give rise to excitatory-inhibitory balance and has been implicated as the mechanism behind many experimental findings such as sparse firing rates in cortex [28]. Similarly, anti-Hebbian plasticity exists across many brain areas and species, such as salamander and rabbit retina [31], rat hippocampus [50,51], electric fish electrosensory lobe [52] and mouse prefrontal cortex [53]. Anti-Hebbian dynamics can give rise to sparse neural codes which decrease correlations between neural activity and increase overall stimulus representation in the network [54]. Because of this on-line decorrelation property, anti-Hebbian plasticity has also been implicated in predictive coding [31,52]. Our findings suggest that it also increases the stability of networks.
For more general forms of synaptic dynamics, we showed that synaptic sparsity pushes RNNs towards being contracting. This aligns well with the experimental observation that synaptic connectivity is typically extremely sparse in the brain. Our results suggest that sparsity may be one factor pushing the brain towards dynamical stability. It is therefore interesting that synapses are regulated by homeostatic processes where synapses neighboring an upregulated synapse are immediately downregulated [55]. On the same note, we also observed that balancing the connections between excitatory and inhibitory populations leads to contraction. Balance between excitatory and inhibitory synaptic inputs is often observed in biology [36-38], and could thus serve contractive stability purposes. Related computational work on spiking networks has suggested that balanced synaptic currents lead to fast response properties, efficient coding, increased robustness of function, and can support complex dynamics related to movements [21,56-58].

A main advantage of our approach is that it provides provable certificates of global contractive stability for nonlinear, time-varying RNNs with synaptic plasticity. This distinguishes it from previous works where, while very interesting and useful, stability is experimentally observed but not proven [12]. In some cases [23,24], linear stability around the origin is proven (which implies that there is a contraction region around the origin) but the size of this region is neither established nor sought after. Indeed, one future direction we are pursuing is the question of: given an RNN, can one provide a certificate of contractive stability in a region D? An answer to this question would shed light on the stability properties of known RNN models in the literature (e.g. trained RNNs, biologically-detailed spiking models, etc.).

Experimental neuroscience is moving in the direction of studying many interacting neural circuits simultaneously. This is fueled by the expanding capabilities of recording multiple areas simultaneously in vivo and studying their interactions. This increases the need for multi-modal cognitive models. We therefore anticipate that the presented work can provide a useful foundation for how cognition in noisy and distributed computational networks can be understood.

Materials and methods

In the interest of space and cohesion, we've placed all the detailed proofs of main results into the appendix. The appendix was written to be self-contained, and thus also contains additional definitions of mathematical objects used throughout the text. Simulations (Figs 2 and 3) were performed in Python. Code to reproduce the figures is available at [https://github.com/kozleo/stable_dynamics]. Numerical integration was performed using sdeint, an open-source collection of numerical algorithms for integrating stochastic ordinary differential equations.

Fig 2 details: All parameters and time constants in Eqs (1) and (2) were set to one. The integration stepsize, dt, was set to 1e-2. Initial conditions for both neural and synaptic activation were drawn uniformly between -1 and 1. Inputs into the network were generated by drawing N frequencies uniformly between dt and 100dt, phases between 0 and 2π, amplitudes between 0 and 20, and generating an N x Time vector of sinusoids with the above parameters. The perturbation of the network was achieved by adding a vector of all 10s (i.e. an additive vector input into the network, with each element of the vector equal to 10) to the above input on one of the trials for 100 time steps in the middle of the simulation. The noise was generated by driving each neural unit with an independent Wiener process (sigma = .2).

Fig 3 details: The weight matrix used was the same as in Fig 2, leftmost panel (without perturbation, without noise).
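The stated input protocol is straightforward to reconstruct. The sketch below (ours, mirroring the Fig 2 details above; it is a plain-numpy reconstruction, not the authors' sdeint-based code) builds the sinusoid inputs, the mid-run pulse, and Euler-Maruyama-style Wiener increments:

```python
import numpy as np

rng = np.random.default_rng(4)
N, steps, dt, sigma = 50, 5000, 1e-2, 0.2

# N sinusoids: frequencies in [dt, 100*dt], phases in [0, 2*pi], amps in [0, 20].
freq = rng.uniform(dt, 100 * dt, N)
phase = rng.uniform(0, 2 * np.pi, N)
amp = rng.uniform(0, 20, N)
t = np.arange(steps) * dt
inputs = amp[:, None] * np.sin(freq[:, None] * t[None, :] + phase[:, None])  # N x Time

# Perturbation: a vector of all 10s for 100 steps in the middle of the run.
inputs[:, steps // 2 : steps // 2 + 100] += 10.0

# Independent Wiener increments per unit, sigma = 0.2.
noise = sigma * np.sqrt(dt) * rng.standard_normal((N, steps))
print(inputs.shape, noise.shape)
```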
Supporting information

S1 Text. The supplementary appendix file contains extensive mathematical proofs of the results stated above. We kept the appendix self-contained by restating the basic results of contraction analysis and linear algebra which we used often in our proofs. (PDF)

Acknowledgments

We thank Pawel Herman for comments on an earlier version of this manuscript. We thank Michael Happ and all members of the Miller Lab for helpful discussions and suggestions.

Author Contributions

Conceptualization: Leo Kozachkov, Mikael Lundqvist, Jean-Jacques Slotine, Earl K. Miller.
Formal analysis: Leo Kozachkov, Mikael Lundqvist.
Project administration: Earl K. Miller.
Software: Leo Kozachkov, Mikael Lundqvist.
Supervision: Jean-Jacques Slotine, Earl K. Miller.
Validation: Jean-Jacques Slotine, Earl K. Miller.
Writing – original draft: Leo Kozachkov, Earl K. Miller.
Writing – review & editing: Jean-Jacques Slotine, Earl K. Miller.
References

1. Lundqvist M, Rose J, Herman P, Brincat SL, Buschman TJ, Miller EK. Gamma and Beta Bursts Underlie Working Memory. Neuron. 2016; 90:152-163. https://doi.org/10.1016/j.neuron.2016.02.028 PMID: 26996084
2. Churchland MM, Yu BM, Cunningham JP, Sugrue LP, Cohen MR, Corrado GS, et al. Stimulus onset quenches neural variability: A widespread cortical phenomenon. Nat Neurosci. 2010; 13:369-378. https://doi.org/10.1038/nn.2501 PMID: 20173745
3. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci. 1982; 79:2554-2558. https://doi.org/10.1073/pnas.79.8.2554 PMID: 6953413
4. Hirsch MW. Convergent activation dynamics in continuous time networks. Neural Networks. 1989. pp. 331-349. https://doi.org/10.1016/0893-6080(89)90018-X
5. Cohen MA, Grossberg S. Absolute Stability of Global Pattern Formation and Parallel Memory Storage by Competitive Neural Networks. IEEE Trans Syst Man Cybern. 1983; SMC-13:815-826. https://doi.org/10.1109/TSMC.1983.6313075
6. Lundqvist M, Herman P, Lansner A. Theta and gamma power increases and alpha/beta power decreases with memory load in an attractor network model. J Cogn Neurosci. 2011; 23:3008-3020. https://doi.org/10.1162/jocn_a_00029 PMID: 21452933
7. Lansner A, Ekeberg O. Reliability and Speed of Recall in an Associative Network. IEEE Trans Pattern Anal Mach Intell. 1985; PAMI-7:490-498. https://doi.org/10.1109/tpami.1985.4767688 PMID: 21869287
8. Ashby W. Design for a brain: The origin of adaptive behaviour. Chapman & Hall Ltd; 1952.
9. Dayan P, Abbott LF. Theoretical Neuroscience. Computational Neuroscience. The MIT Press. 2005. https://doi.org/10.1016/j.neuron.2008.10.019 PMID: 18995824
10. Zhang H, Wang Z, Liu D. A comprehensive review of stability analysis of continuous-time recurrent neural networks. IEEE Trans Neural Networks Learn Syst. 2014; 25:1229-1262. https://doi.org/10.1109/TNNLS.2014.2317880
11. Spaak E, Watanabe K, Funahashi S, Stokes MG. Stable and dynamic coding for working memory in primate prefrontal cortex. J Neurosci. 2017; 37:6503-6516. https://doi.org/10.1523/JNEUROSCI.3364-16.2017 PMID: 28559375
12. Laje R, Buonomano DV. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat Neurosci. 2013; 16:925-933. https://doi.org/10.1038/nn.3405 PMID: 23708144
13. Chaisangmongkon W, Swaminathan SK, Freedman DJ, Wang XJ. Computing by Robust Transience: How the Fronto-Parietal Network Performs Sequential, Category-Based Decisions. Neuron. 2017; 93:1504-1517.e4. https://doi.org/10.1016/j.neuron.2017.03.002 PMID: 28334612
14. Lohmiller W, Slotine J-JE. On Contraction Analysis for Non-linear Systems. Automatica. 1998; 34:683-696. https://doi.org/10.1016/S0005-1098(98)00019-3
15. Rutishauser U, Douglas RJ, Slotine J-J. Collective stability of networks of winner-take-all circuits. 2018 [cited 31 Oct 2019].
16. Rutishauser U, Slotine J-J, Douglas R. Computation in Dynamically Bounded Asymmetric Systems. PLoS Comput Biol. 2015; 11:e1004039. https://doi.org/10.1371/journal.pcbi.1004039 PMID: 25617645
17. Girard B, Tabareau N, Pham QC, Berthoz A, Slotine J-J. Where neuroscience and dynamic system theory meet autonomous robotics: A contracting basal ganglia model for action selection. Neural Networks. 2008; 21:628-641. https://doi.org/10.1016/j.neunet.2008.03.009 PMID: 18495422
18. Tabareau N, Slotine JJ, Pham QC. How synchronization protects from noise. PLoS Comput Biol. 2010; 6:1-9. https://doi.org/10.1371/journal.pcbi.1000637 PMID: 20090826
19. Orhan AE, Ma WJ. A diverse range of factors affect the nature of neural representations underlying short-term memory. Nat Neurosci. 2019; 22:275-283. https://doi.org/10.1038/s41593-018-0314-y PMID: 30664767
20. Mongillo G, Barak O, Tsodyks M. Synaptic Theory of Working Memory. Science. 2008; 319:1543. https://doi.org/10.1126/science.1150769 PMID: 18339943
21. Lundqvist M, Compte A, Lansner A. Bistable, Irregular Firing and Population Oscillations in a Modular Attractor Memory Network. Morrison A, editor. PLoS Comput Biol. 2010; 6:e1000803. https://doi.org/10.1371/journal.pcbi.1000803 PMID: 20532199
22. Vidyasagar M. Nonlinear systems analysis. 2002.
23. Hennequin G, Vogels TP, Gerstner W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron. 2014; 82:1394-1406. https://doi.org/10.1016/j.neuron.2014.04.045 PMID: 24945778
24. Stroud JP, Porter MA, Hennequin G, Vogels TP. Motor primitives in space and time via targeted gain modulation in cortical networks. Nat Neurosci. 2018; 21:1774-1783. https://doi.org/10.1038/s41593-018-0276-0 PMID: 30482949
25. Zenke F, Gerstner W, Ganguli S. The temporal paradox of Hebbian learning and homeostatic plasticity. Current Opinion in Neurobiology. 2017. pp. 166-176. https://doi.org/10.1016/j.conb.2017.03.015 PMID: 28431369
26. Vogels TP, Froemke RC, Doyon N, Gilson M, Haas JS, Liu R, et al. Inhibitory synaptic plasticity: Spike timing-dependence and putative network function. Frontiers in Neural Circuits. 2013. https://doi.org/10.3389/fncir.2013.00119 PMID: 23882186
27. Hennequin G, Agnes EJ, Vogels TP. Inhibitory Plasticity: Balance, Control, and Codependence. Annu Rev Neurosci. 2017; 40:557-579. https://doi.org/10.1146/annurev-neuro-072116-031005 PMID: 28598717
28. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science. 2011; 334:1569-1573. https://doi.org/10.1126/science.1211095 PMID: 22075724
29. Gerstner W, Kistler WM. Mathematical formulations of Hebbian learning. Biol Cybern. 2002; 87:404-415. https://doi.org/10.1007/s00422-002-0353-y PMID: 12461630
30. Gerstner W, Kistler WM. Mathematical formulations of Hebbian learning. Biol Cybern. 2002; 87:404-415. https://doi.org/10.1007/s00422-002-0353-y PMID: 12461630
31. Hosoya T, Baccus SA, Meister M. Dynamic predictive coding by the retina. Nature. 2005; 436:71. Available: https://doi.org/10.1038/nature03689 PMID: 16001064
32. Gerstner W, Kistler WM. Mathematical formulations of Hebbian learning. Biol Cybern. 2002; 87:404-415. https://doi.org/10.1007/s00422-002-0353-y PMID: 12461630
33. Slotine JJE. Modular stability tools for distributed computation and control. Int J Adapt Control Signal Process. 2003; 17:397-416. https://doi.org/10.1002/acs.754
34. Kandel ER, Schwartz JH, Jessell TM, Siegelbaum S, Hudspeth AJ. Principles of neural science. McGraw-Hill New York; 2000.
35. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005; 3:0507-0519. https://doi.org/10.1371/journal.pbio.0030068 PMID: 15737062
36. Mariño J, Schummers J, Lyon DC, Schwabe L, Beck O, Wiesing P, et al. Invariant computations in local cortical networks with balanced excitation and inhibition. Nat Neurosci. 2005; 8:194-201. https://doi.org/10.1038/nn1391 PMID: 15665876
37. Wehr M, Zador AM. Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex. Nature. 2003; 426:442-446. https://doi.org/10.1038/nature02116 PMID: 14647382
38. Shu Y, Hasenstaub A, McCormick DA. Turning on and off recurrent balanced cortical activity. Nature. 2003; 423:288-293. https://doi.org/10.1038/nature01616 PMID: 12748642
39. Slotine J-JE, Li W. Applied nonlinear control. Prentice Hall Englewood Cliffs, NJ; 1991.
40. Murphy BK, Miller KD. Balanced Amplification: A New Mechanism of Selective Amplification of Neural Activity Patterns. Neuron. 2009; 61:635-648. https://doi.org/10.1016/j.neuron.2009.02.005 PMID: 19249282
41. Jaeger H. The echo state approach to analysing and training recurrent neural networks - with an erratum note. Bonn, Ger Ger Natl Res Cent Inf Technol GMD Tech Rep. 2001; 148:13.
42. Pascanu R, Jaeger H. A Neurodynamical Model for Working Memory. www.reservoir-computing.org/organic
43. Buonomano DV, Maass W. State-dependent computations: spatiotemporal processing in cortical networks. 2009 [cited 11 Mar 2019]. https://doi.org/10.1038/nrn2558 PMID: 19145235
44. Sussillo D. Neural circuits as computational dynamical systems. Curr Opin Neurobiol. 2014; 25:156-163. https://doi.org/10.1016/j.conb.2014.01.008 PMID: 24509098
45. van Steveninck RRdR, Lewen GD, Strong SP, Koberle R, Bialek W. Reproducibility and Variability in Neural Spike Trains. 1997; 275.
46. Chatham CH, Badre D. Multiple gates on working memory. Curr Opin Behav Sci. 2015; 1:23-31. https://doi.org/10.1016/j.cobeha.2014.08.001 PMID: 26719851
47. Halassa MM, Kastner S. Thalamic functions in distributed cognitive control. Nat Neurosci. 2017; 20:1669-1679. https://doi.org/10.1038/s41593-017-0020-1 PMID: 29184210
48. Miller EK, Cohen JD. An Integrative Theory of Prefrontal Cortex Function. Annu Rev Neurosci. 2001; 24:167-202. https://doi.org/10.1146/annurev.neuro.24.1.167 PMID: 11283309
49. Vogels TP, Froemke RC, Doyon N, Gilson M, Haas JS, Liu R, et al. Inhibitory synaptic plasticity: Spike timing-dependence and putative network function. Front Neural Circuits. 2013; 7:1-11.
50. Lisman J. A mechanism for the Hebb and the anti-Hebb processes underlying learning and memory. Proc Natl Acad Sci. 1989; 86:9574-9578. https://doi.org/10.1073/pnas.86.23.9574 PMID: 2556718
51. Kullmann DM, Lamsa KP. Long-term synaptic plasticity in hippocampal interneurons. Nat Rev Neurosci. 2007; 8:687-699. https://doi.org/10.1038/nrn2207 PMID: 17704811
52. Enikolopov AG, Abbott LF, Sawtell NB. Internally Generated Predictions Enhance Neural and Behavioral Detection of Sensory Stimuli in an Electric Fish. 2018 [cited 1 Mar 2019]. https://doi.org/10.1016/j.neuron.2018.06.006 PMID: 30001507
53. Ruan H, Saur T, Yao W-D. Dopamine-enabled anti-Hebbian timing-dependent plasticity in prefrontal circuitry. Front Neural Circuits. 2014; 8:38. https://doi.org/10.3389/fncir.2014.00038 PMID: 24795571
54. Földiák P. Forming sparse representations by local anti-Hebbian learning. Biol Cybern. 1990; 64:165-170. https://doi.org/10.1007/BF02331346 PMID: 2291903
55. El-Boustani S, Ip JPK, Breton-Provencher V, Knott GW, Okuno H, Bito H, et al. Locally coordinated synaptic plasticity of visual cortex neurons in vivo. Science. 2018; 360:1349-1354. https://doi.org/10.1126/science.aao0862 PMID: 29930137
56. Denève S, Machens CK. Efficient codes and balanced networks. Nat Neurosci. 2016; 19:375-382. https://doi.org/10.1038/nn.4243 PMID: 26906504
57. Hennequin G, Vogels TP, Gerstner W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron. 2014; 82:1394-1406. https://doi.org/10.1016/j.neuron.2014.04.045 PMID: 24945778
58. Brunel N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. J Comput Neurosci. 2000. Available: https://web.stanford.edu/group/brainsinsilicon/documents/BrunelSparselyConnectedNets.pdf",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "AYPt3wsSyzA",
"year": null,
"venue": "SMART@EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-smart.6.pdf",
"forum_link": "https://openreview.net/forum?id=AYPt3wsSyzA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning to translate: a statistical and computational analysis",
"authors": [
"Marco Turchi",
"Tijl De Bie",
"Nello Cristianini"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Learning to Translate: a statistical and computational analysis \nMarco Turchi, Tijl De Bie, Nello Cristianini \nUniversity of Bristol (UK)Department of Engineering Mathematics \nMarco Turchi-Univ. of Bristol Learni ng to Translate 2Outline \n/square6Motivation \n/square6Introduction \n/square6ExperimentalBSetup \n/square6Experiments \n/square6ConclusionBandBConsiderations \nMarco Turchi-Univ. of Bristol Learni ng to Translate 3Motivation \n/square6ABbeliefBinBSMTBisBthatB“moreBdataBB→ betterBtransla tion”.\n/square6But:\n/boxshadowdwnhowBmuchBparallelBtextBdoBweBneedBtoBobtainBaccepta bleBB\ntranslation? \n/boxshadowdwnDoBweBhaveBaBconstantBincreaseBinBperformanceBwhenB addingB\nmoreBdata? \n/boxshadowdwnIfBweBhaveBanBexhaustiveBamountBofBparallelBdata,Bc anBtheBSMTB\nmodelBbeBaBlimitation? \n/boxshadowdwnCanBweBfindBtheBcurrentBlimitationBofBtheBSMTBappro ach? \n/square6SomeBhelpfulBfacts:\n/boxshadowdwndataBavailabilityB(Europarl,BHansard,BUNBcorpus,BWe b,B…);\n/boxshadowdwnrecentBadvancesBinBsoftwareB(Moses,B…);\n/boxshadowdwncomputingBpowerB(HPCBcluster,BcloudBcomputing,B…).\nMarco Turchi-Univ. of Bristol Learni ng to Translate 4Motivation \n/square6Extensive study ofBaBPhraseBbasedBSMTBsystemBusingBMoses,B\nEuroparlandBaBHPCBcluster.\n/square6TryBtoBanswerBtheBpreviousBquestionsBbyBextrapolati ngBtheB performance \nof the system under different conditions :\n/boxshadowdwnconstantlyBincreasingBtheBtraining;\n/boxshadowdwnchangingBtheBsystemBparameters;\n/boxshadowdwnaddingBnoiseBtoBtheBsystemBparameters;\n/boxshadowdwn…\n/square6InvestigateBtheB potentials and limitations ofBtheBcurrentBtechnologyB\nanalysingBaBSTMBsystemBasBaBLearningBSystem.\n/square6Explore newBaspectsBofBaBSMTBsystemBunderBaBmachineBlearnin gBpointB\nofBview.\n/square6Confirm someBpreviousBresultsBinBSMTBfield.\n/square6SuggestBsomeBpossibleB research directions .\nMarco Turchi-Univ. of Bristol Learni ng to Translate 5Introduction \n/square6PerformanceBofBaBlearningBsystemBisBresultBofB(atBleast)BtwoBeffects:\n/boxshadowdwnrepresentationBpowerBofBtheBhypothesisBclass:\nhowBwellBtheBsystemBcanBapproximateBtheBtargetBbeha viour;\n/boxshadowdwnstatisticalBeffects:\nhowBwellBtheBsystemBcanBestimateBtheBbestBelementBo fB\ntheBhypothesisBclass.\nMarco Turchi-Univ. of Bristol Learni ng to Translate 6Introduction \n/square6TheyBinteract,BwithBrichestBclassesBbeingBbetterBapproximatorsofBtheBtargetBbehaviour,BbutBrequiringBmoreBtrainingBdataBtoBidentifyBtheBbestBhypothesis.\n/square6InBSMT,BlearningBtaskBisBcomplicatedBbyBtheBfactBthatBtheBprobabilityBofBencounteringBnewBwordsBorBexpressionsBneverBvanishes.\nMarco Turchi-Univ. of Bristol Learni ng to Translate 7Introduction \n/square6TheseBobservationsBleadBusBtoBanalyze:\n/boxshadowdwnlearningBandBunlearningBcurves;\n/boxshadowdwnflexibilityBofBtheBrepresentationBclass;\n/boxshadowdwnstabilityBofBtheBmodel;\n/square6Experiments:\n1.roleBofBtrainingBsetBsizeBonBperformanceBonBnewBsen tences;\n2.roleBofBtrainingBsetBsizeBonBperformanceBonBknownBs entences;\n3.roleBofBphraseBlengthBinBtranslationBtable;\n4.modelBperturbation:BanalysisBandBunlearningBcurves.\nMarco Turchi-Univ. 
Experimental Setup
• Software
  – Moses.
  – Giza++: IBM model 1, 2, 3, and 4 with number of iterations for model 1 equal to 5, model 2 equal to 0, model 3 and 4 equal to 3.
  – SRILM: n-gram order equal to 3 and the Kneser-Ney smoothing algorithm.
  – Mert: 100 the number of nbest target sentences for each development sentence.
  – Training, development and test set sentences are tokenized and lowercased.
  – Maximum number of tokens for each sentence in the training pair is 50.
  – TMs were limited to a phrase-length of 7 words. LMs were limited to 3.

Experimental Setup
• Data
  – Europarl Release v3 Spanish-English corpus.
  – Training set: 1,259,914 pairs.
  – Test and Development sets: 2,000 pairs each.
• Evaluation Scores
  – BLEU, NIST, Meteor, TER, Unigram Recall, Unigram Precision, FMean, F1, Penalty and Fragmentation.
  – BLEU is used as evaluation score after we observed its high correlation to the other scores on the corpus.

Experimental Setup
• Hardware
  – University of Bristol cluster machine, http://www.acrc.bris.ac.uk/acrc/hpc.htm
  – 96 nodes, each with two dual-core opteron processors.
  – 8 Gb of RAM memory per node (2 Gb per core).
  – SilverStorm Infiniband high-speed connectivity throughout for parallel code message passing.
  – General Parallel File System (GPFS).
  – Storage 11 terabytes.
  – Torque v2.1.6p17 as the Resource Manager.
  – Maui v3.2.6p16 as the scheduler.

Role of training set size on performance on unknown sentences
• Analyse how performance is affected by training set size, by creating learning curves.
• Create subsets of the complete corpus by sub-sampling sentences from a uniform distribution, with and without replacement;
  – with replacement: analyse the performance on different training sets of the same size and the effects of the optimization phase;
  – without replacement: study the SMT learning curves in the Linear-Linear and Linear-Log scales.

Role of training set size on performance on unknown sentences
• Create subsets of the complete corpus by sub-sampling sentences from a uniform distribution, with replacement (a sketch of this protocol follows below).
• Ten random subsets for each of the 20 chosen sizes (each size 5%, 10%, etc. of the complete corpus).
• For each subset, a new instance of Moses has been created.
• Development and test sets contain 2,000 pairs each.
• The experiments have been run for the models with and without the optimization step.

Role of training set size on performance on unknown sentences
1. Small error bars.
2. Benefits of optimization phase.
3. Curves affected by the Birthday paradox.
(learning-curve figure)
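A minimal sketch of the with-replacement sub-sampling protocol described above (ours; `corpus` stands for the list of training sentence pairs and all names are illustrative):

```python
import random

def subsets_with_replacement(corpus, n_sizes=20, n_repeats=10, seed=0):
    """Yield (size, subset) pairs: 10 random subsets at each of 20 sizes."""
    rng = random.Random(seed)
    total = len(corpus)
    for step in range(1, n_sizes + 1):
        size = int(total * step / n_sizes)     # 5%, 10%, ..., 100% of the corpus
        for _ in range(n_repeats):
            yield size, [corpus[rng.randrange(total)] for _ in range(size)]
```

Sampling with replacement duplicates some pairs and omits others, which is the 'Birthday paradox' effect the slides mention: even at 100% nominal size, a with-replacement sample covers only part of the distinct corpus.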
Role of training set size on performance on unknown sentences
• The whole training set is split in 20 blocks containing 5% of the data, without replacement.
• Each increment of the training set size is a concatenation of a new block of data with the previous.
• Five random splits have been done of the whole training set.
• Each split produces a learning curve.
• A region of confidence is created between the learning curve with best performance and the learning curve with worst performance.

Role of training set size on performance on unknown sentences
• Learning Curve region in Linear-Linear Scale.
1. Addition of massive amounts of data results in smaller improvements.
(figure)

Role of training set size on performance on unknown sentences
• Learning Curve region in Linear-Log Scale.
1. Logarithmic behaviour can not be excluded.
2. Learning curve is "logarithm at best".
(figure)

Role of training set size on performance on known sentences
• Experiment much like the one described above.
• Key difference: the test set was selected randomly from the training set (2,000 pairs after cleaning phase).
• An upper bound on the performance achievable by this architecture if access to ideal data was not an issue.
• Performance on translating training sentences is not due to simple memorization of the entire sentence.
• "Human Translation" identifies the curve obtained using the reference sentences as target sentences.
(learning-curve figure)

Role of training set size on performance on known sentences
• Fit a line to the test-on-test-set learning curves in the linear-log scale using least squares (a sketch of this computation follows below).
• The approximated learning curve will reach the test-on-training learning curve with "only" 10^15 sentence pairs. It means:
  – 10^9 times the Europarl dataset;
  – 3*10^9 years of proceedings of the European Parliament;
  – the Indexed Web contains at least 27.1 billion pages (Saturday, 09 May, 2009) by http://www.worldwidewebsize.com. If we assume that each page has 10 sentences, it would not be enough.

Role of training set size on performance on known sentences
• If the right information has been seen, the system can reconstruct the sentences rather accurately.
• The system can represent internally a good model of translation.
• It seems unlikely that good performance will ever be inferred by increasing the size of training datasets in realistic amounts.
• The process with which we learn the necessary tables representing the knowledge of the system is responsible for the performance limitations.
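The extrapolation argument is a one-line least-squares fit. In this sketch (ours; the BLEU numbers are made-up placeholders purely to show the computation, not the slides' measurements) we fit BLEU against log10 of training size and solve for the size at which the line reaches the test-on-training ceiling:

```python
import numpy as np

sizes = np.array([63_000, 126_000, 252_000, 504_000, 1_008_000])
bleu_test = np.array([0.24, 0.26, 0.28, 0.30, 0.32])   # hypothetical curve
bleu_train_ceiling = 0.60                               # hypothetical ceiling

slope, intercept = np.polyfit(np.log10(sizes), bleu_test, 1)
needed = 10 ** ((bleu_train_ceiling - intercept) / slope)
print(f"extrapolated sentence pairs needed: {needed:.3g}")
```

Because the fit is linear in log(size), closing even a modest BLEU gap requires multiplying the data by orders of magnitude, which is how the slides arrive at the 10^15 figure.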
Role of phrase length in translation table
• The gap between performances on training and on test sets is typically affected by model selection choices.
• Choice of the phrase length is crucial in the selection of the right model.
• Ten random subsets of the complete corpus containing 629,957 pairs of sentences have been created.
• For each subset, ten instances of the SMT have been created.
• Each instance has been trained using a different phrase length, from 1 to 10.
• Each model has been tested on the test set, 2,000 sentences, and on a random subset of 2,000 sentences from the training set.
(figure)

Role of phrase length in translation table
• In both the learning curves there is a big improvement moving from the word-by-word translation, phrase length equal 1, to the phrase based model.
• No significant advantages seem to be present when phrase length is bigger than 4 in the "test on test set" learning curve.
• The rise of the phrase length improves the performance of the system when it has been tested on sentences sampled from the training set.
• Phrase length changes the dimension of the translation tables, but the system continues to prefer short phrases to long ones during the decoding phase.

Model perturbation: analysis and unlearning curves
• The training step results in various forms of knowledge: translation table, language model and lambda parameters from the optimization.
• The internal models learnt by the system are lists of phrases, with probabilities associated to them.
• In order to simulate the effect of inaccurate estimation of the statistical parameters, two different experiments have been run:
  – a percentage of noise has been added to each probability in the LM and TM (Adding Noise);
  – noise has been added in the form of wrong associations between numerical and textual parts of LM and TM (Randomization of Parameters).

Model perturbation: analysis and unlearning curves
• Two models trained with 62,995 and 629,957 pairs of sentences have been used.
• Different values of percentage of noise between 0 and 1 have been used.
• The noisy probability is obtained as p' = min(1, p + ν), where ν = rand(−p×k, +p×k) with percentage of noise k ∈ [0, 1].
• If this quantity is bigger than one it has been approximated to one.
• For each model, for each value of k, ten experiments have been run.
• Optimization step has not been run.
(figure; a sketch of this perturbation follows below)
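The 'Adding Noise' perturbation translates directly into code. This sketch implements the formula from the slide (the toy phrase table is ours, for illustration only):

```python
import random

def add_noise(p, k, rng=random):
    """p' = min(1, p + v), v ~ Uniform(-p*k, +p*k), noise percentage k in [0, 1]."""
    v = rng.uniform(-p * k, p * k)
    return min(1.0, p + v)

# Example: perturb every probability in a toy phrase table at 30% noise.
phrase_table = {("la", "casa"): 0.7, ("casa", "house"): 0.9}
noisy = {pair: add_noise(p, k=0.3) for pair, p in phrase_table.items()}
print(noisy)
```

Note that p + v ≥ p(1 − k) ≥ 0 for k ≤ 1, so only the upper clipping at 1 is needed, matching the slide's remark.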
Model perturbation: analysis and unlearning curves (Randomization of Parameters)
• We define:
  – Numerical Swap: given two entries of LM or TM, the numerical parts are swapped.
  – Words Swap: given two entries of TM, the target language phrases are swapped.
• Percentage of noise represents a certain number of swaps.
• Three different sets of experiments have been run:
  – Words Swap of TM;
  – Numerical Swap of LM;
  – Numerical Swap of TM.

Model perturbation: analysis and unlearning curves (Randomization of Parameters)
Words Swap in TM; Numerical Swap in TM and LM (figures)

Model perturbation: analysis and unlearning curves
• Adding Noise: the gentle decline of the unlearning curve suggests that fine tuning of parameters does not seem to control the performance.
• Randomisation of Parameters: more aggressive noise produces a more significant decline in performance. LM is less affected than TM. In Word Swap experiments a more rapid decline should be expected, but high redundancy in the TM prevents it.

Conclusion/Considerations
• Our experiments suggest that:
  – the current bottleneck is the lack of sufficient data, not the function class used for the representation of translation systems.
  – Adding more i.i.d. data does not appear to be a practical way to significantly improve performance.
  – The perturbation analysis suggests that improved statistical principles are unlikely to make a big difference either.
  – More than the accurate estimation of parameters, it is the compilation of the translation tables that drives the performance of the system.
  – Model selection choices, phrase length, will not make a big difference.
  – Since it is unlikely that sufficient data will be available by simply sampling a distribution, one needs to address a few possible ways to transfer large amounts of knowledge into the system.

Conclusion/Considerations
• A research programme naturally follows from our analysis:
  – an effort to identify or produce datasets on demand.
• It breaks the traditional i.i.d. assumptions on the origin of data.
• It would also require an effective way to do confidence estimation on translations, to identify those instances where there is low confidence in the output.
  – Introduction of significant domain knowledge in the form of linguistic rules, to dramatically reduce the amount of data needed to essentially reconstruct them by using statistics.
• The barrier to improving performance is a direct consequence of Zipf's law and the frequency of phrases in text.
  – The impossibility of the algorithm to deal with unknown phrases, and their non-vanishing frequency in natural corpora, conspire to create a fundamental limitation.

Learning to Translate: a statistical and computational analysis
Marco Turchi, Tijl De Bie, Nello Cristianini
University of Bristol (UK), Department of Engineering Mathematics",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "zhCsH4VGFJy",
"year": null,
"venue": "SMART@EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-smart.11.pdf",
"forum_link": "https://openreview.net/forum?id=zhCsH4VGFJy",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Closing remarks",
"authors": [
"Nicola Cancedda"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Nicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nStatistical Multilingual Analysis for\nRetrieval and Translation\nNicola Cancedda\nMay 2009\n\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nDemos\n•Platforms for showcasing developed\ntools:\n\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nProject Website\n•Project presentation and deliverables\n–http://www.smart-project.eu\n\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nThanks to:\n•UPC, for hosting and very friendly and effective\nsupport\n•David Farwell and the conference organising\ncommittee, for allowing us to “piggyback” EAMT\n•Our invited speaker Jesús Giménez\n•PASCAL 2, for sponsoring this event\n•Xavier Carreras, Nello Cristianini, Marco Turchi\nAll of you for showing up!",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "cupywo8atRq",
"year": null,
"venue": "SMART@EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-smart.1.pdf",
"forum_link": "https://openreview.net/forum?id=cupywo8atRq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Introduction",
"authors": [
"Nicola Cancedda"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Nicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nStatistical Multilingual Analysis for\nRetrieval and Translation\nNicola Cancedda\nMay 2009\nWelcome!\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nThe SMART project\n••Information Society TechnologiesInformation Society Technologies\nProgrammeProgramme\n••Sixth Framework Programme,Sixth Framework Programme,\n“Specific Target Research Project”“Specific Target Research Project”\n(STReP)(STReP)\n••Start date: October 1, 2006Start date: October 1, 2006\n••Duration: 3 yearsDuration: 3 years\n•35-40 Researchers involved overall\n\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nThe SMART Consortium\n\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nThe SMART Consortium\n\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nMotivation\n•Almost half of the citizens of the EU do not speak a second language\n•Expanding global demand of tools for automatic translation and cross-\nlanguage retrieval, clustering and categorization\n•Statistical approaches mainstream in research, but still suffering from\nshortcomings, preventing their diffusion, e.g.:\n–Relatively low fluency/gr ammaticality of output\n–Model training still a somewhat arcane craft\n–Difficult to use in new domains\n–Trained once for all, do not lear n from constant user feedback\n•Many recent advances in Machine Learning are only starting to hit the\nfield\nSMART is an attempt to propose some original solutions\nusing the methods of Statistical LearningFrom the proposal\n(2005)\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\n•This workshop is supported by the\nPASCAL 2 EC Network of Excellence\n–Machine Learning, Statistics, Optimization,\nand applications thereof\n–http:// www.pascal-network.org ,\nhttp://videolectures.net/pascal\n\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nSome (presented) highlights\n•Two new SMT systems\n–Sinuhe (U. Helsinki)\n•See talk by Matti Kääriäinen\n–MMR (U. Southampton)\n•See talk by Sandor Szedmak\n•On-line adaptation of translation models\n–Adaptive extension (U. Milan) of NRC’s PORTAGE\n•See talk by Nicolò Cesa-Bianchi\n•A method for discriminative learning of phrase\ntables\n–See talk by Zhuoran Wang (UCL)\nNicola Cancedda SMART workshop, Barcelona, Spain May 13, 2009\nSome (presented) highlights\n•An extensive analysis of an SMT system as a\nlearning system\n–See talk by Marco Turchi (U. 
• New methods for sentence-level confidence estimation
  – See talk by Lucia Specia (Xerox)
• A study on detecting and exploiting translation direction
  – See talk by Cyril Goutte (NRC)
• Scale-up of methods derived from Canonical Correlation Analysis
  – See talk by Blaz Fortuna (Jozef Stefan Institute)

and also...
• New discriminatively trained Language Models for SMT
• New lexicon adaptation methods for CLIR
• A regression-based approach to translation
• Methods for extracting bilingual lexica from the Wikipedia
• A prototype-demo for adaptive Computer-Aided Translation
• A prototype-demo for CLIR and MT of the Wikipedia
...check out our website!

Project Website
• Project presentation and deliverables
  – http://www.smart-project.eu

But first of all...

Invited Speaker
• Jesús Giménez, Universitat Politècnica de Catalunya
  – SVM tools for sequential labelling problems
  – Semantic Role labeling
  – Empirical Machine Translation
  – IQMT, a framework for combining multiple MT metrics
  – Today speaking on: "Empirical Machine Translation and its Evaluation"

Backup slides

WP7 - Dissemination and Exploitation
• Platforms for showcasing developed tools:",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Jin4ThoAzRQ",
"year": null,
"venue": "SMART@EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-smart.5.pdf",
"forum_link": "https://openreview.net/forum?id=Jin4ThoAzRQ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Large scale maximum margin regression based, structural learning approach to phrase translations",
"authors": [
"Sándor Szedmák",
"Esther Galbrun",
"Craig Saunders",
"Yizhao Ni"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Large scale, maximum margin regression\nbased, structural learning approach to phrase\ntranslations\nSandor Szedmak1\njoint work with\nEsther Galbrun43Craig Saunders2, Yizhao Ni1\n1University of Southampton2XRCE XEROX3University of Helsinki4INSA of\nRouen\nEAMT 2009 Barcelona\nOutline\nBase problem of the phrase translation\nWord features\nLearning problem\nAlignment of words\nExample\nPerformance comparison\nMain Components of the translator system\nIPhrase translator - the main topic of this presentation.\nIA well known system: GIZA++\nIAdditional postprocessing tools, e.g. in Moses\nIDecoder, which can fit better to the phrase dictionary\ngenerated by maximum margin learning procedure.\nThe base learning problem of phrase translation\nIA phrase implies a binary classification of the words of a\nsentence;\nIthe words within the phrase are the positive cases,\nIthe remaining part gives the negative ones.\nIThe translation can be interpreted as a propagation of the\nclasses of a source sentence into the corresponding target\nsentence.\nIIt might be interpreted either as an inductive or a\ntransductive learning problem.\nThe learning schema\nclass source words predicted class target words\n\u0000 Je ?(+ ;\u0000) I\n\u0000 vous ?(+ ;\u0000) would\n\u0000 demands ?(+ ;\u0000) therefore\n\u0000 donc ?(+ ;\u0000) once\n\u0000 \u0012a ?(+ ;\u0000) more\n\u0000 nouveau ?(+ ;\u0000) ask\n\u0000 de ?(+ ;\u0000) you\n\u0000 faire ?(+ ;\u0000) to\n\u0000 le ?(+ ;\u0000) ensure\n\u0000 n\u0013ecessaire ?(+ ;\u0000) that\n\u0000 pour ?(+ ;\u0000) we\n+ que ?(+ ;\u0000) get\n+ nous ?(+ ;\u0000) a\n+ puissions ?(+ ;\u0000) Dutch\n+ disposer ?(+ ;\u0000) channel\n\u0000 d0?(+ ;\u0000) as\n\u0000 une ?(+ ;\u0000) well\n\u0000 cha~ine\n\u0000 n\u0013eerlandaise\nComputational difficulties\nIIf the sentence length in words is 30 and the maximum\nlength allowed of non-gapped phrases is 5 then 140\nbinary classification problems have to be solved!\nIDoes any acceptable efficient joint approximation schema\nexist at all?\nA learning approach\nITheSupport Vector Machine(SVM) has proved to be a\nhighly accurate learning tool, but it is able to deal only with\nbinary outputs.\nIThe learning framework of the SVM can be extended to\npredict arbitrary vector represented outputs with no\nadditional cost, we will call it Maximum Margin\nRegression(MMR) in the sequel.\nIThe details are discussed when the concrete learning\nproblem is unfolded.\nIMATLAB source code of the solver and a demo for\nmulticlass classification in MMR is freely available on the\nweb.\nThe skeleton of the phrase translation\nSentence-wise word relations, the building blocks:\nIglobal relationships between word pairs,\nIlocal relations,\nIinference between global and local relations,\nEstimating phrases, ICollect those sequences of source\nand target words which have the highest\naccumulated word-wise relations.\nA projection rule of the sentences\nIMapping words Mapping phrases\nP1,R 1;\nP2,R 2;\nP1\\P 2,R 1\\R 2;\nP1[P 2,R 1[R 2;\nP1nP2,R 1nR 2;\nP2nP1,R 2nR 1:\nIIntersections mapped into corresponding intersections of\nthe subsets of words those we might consider as phrases.\nObviously it can be achieved only approximately!\nGlobal versus local relations of words\nIInterference of global and local relations:\nIStrong global: Frequent co-occurrences,\nIStrong local: adjacent(or almost adjacent) positions\nIGlobally weak Globally strong\nLocally\nweakHigh confidence Likely\nNo relation No relation\nLocally\nstrongLikely? 
A learning approach
- The Support Vector Machine (SVM) has proved to be a highly accurate learning tool, but it is able to deal only with binary outputs.
- The learning framework of the SVM can be extended to predict arbitrary vector-represented outputs with no additional cost; we will call it Maximum Margin Regression (MMR) in the sequel.
- The details are discussed when the concrete learning problem is unfolded.
- MATLAB source code of the solver and a demo for multiclass classification in MMR are freely available on the web.

The skeleton of the phrase translation
Sentence-wise word relations, the building blocks:
- global relationships between word pairs,
- local relations,
- inference between global and local relations.
Estimating phrases: collect those sequences of source and target words which have the highest accumulated word-wise relations.

A projection rule of the sentences
Mapping words corresponds to mapping phrases:
$P_1 \leftrightarrow R_1$; $P_2 \leftrightarrow R_2$; $P_1 \cap P_2 \leftrightarrow R_1 \cap R_2$; $P_1 \cup P_2 \leftrightarrow R_1 \cup R_2$; $P_1 \setminus P_2 \leftrightarrow R_1 \setminus R_2$; $P_2 \setminus P_1 \leftrightarrow R_2 \setminus R_1$.
Intersections are mapped into corresponding intersections of the subsets of words, which we might consider as phrases. Obviously it can be achieved only approximately!

Global versus local relations of words
Interference of global and local relations:
- Strong global: frequent co-occurrences,
- Strong local: adjacent (or almost adjacent) positions.

                   Globally weak          Globally strong
  Locally weak     High confidence:       Likely:
                   no relation            no relation
  Locally strong   Likely(?):             High confidence:
                   there is a relation    there is a relation

The "?" cell: the case of rare words!

Sentence-wise word distances
- The distances measure the co-occurrences of words and their relative positions within the sentences.
- A co-occurrence with high distance is down-scaled.
- Within a language: $d_S(w_1, w_2) = \min_{i_1 \in I(w_1),\, i_2 \in I(w_2)} \left| \frac{i_1}{n_S} - \frac{i_2}{n_S} \right|$
- Between two languages: $d_{S_s,S_t}(w_1, w_2) = \min_{i_1 \in I(w_1),\, i_2 \in I(w_2)} \left| \frac{i_1}{n_{S_s}} - \frac{i_2}{n_{S_t}} \right|$

Sentence-wise word similarities
- Linear: $s_S(w_1,w_2) = 1 - d_S(w_1,w_2)$ and $s_{S_s,S_t}(w_1,w_2) = 1 - d_{S_s,S_t}(w_1,w_2)$
- Gaussian: $s_S(w_1,w_2) = e^{-d_S^2(w_1,w_2)/\sigma}$ and $s_{S_s,S_t}(w_1,w_2) = e^{-d_{S_s,S_t}^2(w_1,w_2)/\sigma}$
- Logistic: $s_S(w_1,w_2) = \frac{1}{4s}\,\mathrm{sech}^2\!\left(\frac{d_S(w_1,w_2)}{2s}\right)$ and $s_{S_s,S_t}(w_1,w_2) = \frac{1}{4s}\,\mathrm{sech}^2\!\left(\frac{d_{S_s,S_t}(w_1,w_2)}{2s}\right)$, where $\mathrm{sech}(z) = \frac{1}{\cosh(z)} = \frac{2}{e^z + e^{-z}}$

Global (training-set relative) similarity
- Within a language: $s(w_1,w_2) = \frac{\sum_{S \in S(w_1) \cap S(w_2)} s_S(w_1,w_2)}{|S(w_1) \cup S(w_2)|}$
- Between two languages: $s(w_1,w_2) = \frac{\sum_{S \in S_s(w_1) \cap S_t(w_2)} s_{S_s,S_t}(w_1,w_2)}{|S_s(w_1) \cup S_t(w_2)|}$
$S(w)$ is the index set of the sentences containing word $w$ in the training set.
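The distance and similarity definitions above translate directly into code. A rough sketch under the stated definitions (function names and the 0-based indexing convention are my assumptions, not the original implementation):

```python
# Sketch of the sentence-wise distance and the three similarity
# variants from the slides. I(w) is the set of positions of word w;
# n_S is the sentence length. Illustrative only.
import math

def occurrences(sentence, word):
    return [i for i, w in enumerate(sentence) if w == word]

def distance(sent_a, w1, sent_b, w2):
    """min over |i1/n_a - i2/n_b|; pass the same sentence twice for the
    monolingual case, a source/target pair for the bilingual one."""
    na, nb = len(sent_a), len(sent_b)
    return min(abs(i1 / na - i2 / nb)
               for i1 in occurrences(sent_a, w1)
               for i2 in occurrences(sent_b, w2))

def sim_linear(d):
    return 1.0 - d

def sim_gaussian(d, sigma=0.1):
    return math.exp(-d * d / sigma)

def sim_logistic(d, s=0.1):
    sech = 2.0 / (math.exp(d / (2 * s)) + math.exp(-d / (2 * s)))
    return sech * sech / (4 * s)
```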
Word features, local relations
Word features with respect to a sentence pair (source-target) are expressed as a concatenated vector of the similarities between the word and the words of the source and the target sentences.
- Source words: $\phi_{S_s,S_t}(w_s) = (s(w_s,w_{s_1}),\dots,s(w_s,w_{s_{n_{S_s}}}),\; s(w_s,w_{t_1}),\dots,s(w_s,w_{t_{n_{S_t}}}))$, i.e. relations to the source followed by relations to the target.
- Target words: $\phi_{S_s,S_t}(w_t) = (s(w_t,w_{s_1}),\dots,s(w_t,w_{s_{n_{S_s}}}),\; s(w_t,w_{t_1}),\dots,s(w_t,w_{t_{n_{S_t}}}))$

Feature = Language model + Translation model
$\phi_{S_s,S_t}(S_s,S_t) = \begin{bmatrix} SS & ST \\ TS & TT \end{bmatrix}$
- SS: relationship between source items,
- TT: relationship between target items,
- ST (TS): relationship between source and target items.

Word positions
- The position feature of a word should express the uncertainty arising from the varying grammatical relations.
- This uncertainty can be captured by a probability density function with an expected value localized in the real position of the word in a given concrete sentence.
- $\psi_S(w) = f(\cdot \mid i_w, \Theta)$, where $f$ is a suitable density function, e.g. Gaussian; $i_w$ is the position of the word in sentence $S$; $\Theta$ is a scale parameter, e.g. variance.
[Figure: a density over word positions, centred at the word's actual position.]

Learning problem
- The densities are the representation of the assumed-to-be-correct positions, inferred with the features as representation of the relations of the words.
- We predict: word relations <=> expected position of the words within a sentence.

Optimization problem
Optimization framework, Maximum Margin Regression (MMR):
$\min \frac{1}{2}\|W\|^2_{\text{Frobenius}} + C \sum_{s=1}^{n_{S_s}} \xi_s$
w.r.t. $W$ (linear operator) and $\xi$ (loss),
s.t. $\langle \psi_{S_s}(w_s),\, W \phi_{S_s,S_t}(w_s) \rangle \ge 1 - \xi_s$ for $w_s \in S_s$, and $\xi \ge 0$,
where $\psi_{S_s}(w_s)$ encodes the possible word position and $\phi_{S_s,S_t}(w_s)$ the word relations.
The optimum has the form $W = \sum_{w_s \in S_s} \alpha_{w_s} \psi_{S_s}(w_s) \phi_{S_s,S_t}(w_s)'$.

High level, margin based word similarity measure
Sentence-relative similarity predicted between all pairs of source and target words:
- source => target: $R_W(w_s, w_t) = \langle \psi_{S_s}(w_s), W \phi_{S_s,S_t}(w_t) \rangle = \sum_{w_r \in S_s} \alpha_{w_r} \kappa_\psi(w_s, w_r) \kappa_\phi(w_r, w_t)$
- target => source: $R'_W(w_t, w_s) = \langle W' \psi_{S_s}(w_s), \phi_{S_s,S_t}(w_t) \rangle = \sum_{w_r \in S_s} \alpha_{w_r} \kappa_\psi(w_s, w_r) \kappa_\phi(w_r, w_t)$
where $\kappa_\psi(w_s, w_r) = \langle \psi_{S_s}(w_s), \psi_{S_s}(w_r) \rangle$ and $\kappa_\phi(w_r, w_t) = \langle \phi_{S_s,S_t}(w_r), \phi_{S_s,S_t}(w_t) \rangle$.

Word alignment
- A source word is aligned to those target words which maximize the relations, and a target word is aligned to those source words which maximize the relations:
$\hat{w}_s(w_t) \in \arg\max_{w \in S_s} R_W(w, w_t)$; $\hat{w}_t(w_s) \in \arg\max_{w \in S_t} R'_W(w, w_s)$.
- The words can be aligned to more than one word in ambiguous cases!

Alignment of four views
- $W$ computed on the source words only, target labels predicted: $\hat{w}_s(w_t) \in \arg\max_{w \in S_s} R_{W_s}(w, w_t)$; $\hat{w}_t(w_s) \in \arg\max_{w \in S_t} R'_{W_s}(w, w_s)$.
- $W$ computed on the target words only, source labels predicted: $\hat{w}_s(w_t) \in \arg\max_{w \in S_s} R_{W_t}(w, w_t)$; $\hat{w}_t(w_s) \in \arg\max_{w \in S_t} R'_{W_t}(w, w_s)$.

Example sentences

  source words   index   target words   index
  Je             0       I              0
  vous           1       would          1
  demands        2       therefore      2
  donc           3       once           3
  à              4       more           4
  nouveau        5       ask            5
  de             6       you            6
  faire          7       to             7
  le             8       ensure         8
  nécessaire     9       that           9
  pour           10      we             10
  que            11      get            11
  nous           12      a              12
  puissions      13      Dutch          13
  disposer       14      channel        14
  d'             15      as             15
  une            16      well           16
  chaîne         17
  néerlandaise   18

Features, as they look like
[Figure: heat maps of the feature values to the source and to the target sentence.]

Word relations
[Figure: heat maps of the predicted word relations, source => target and target => source, over the word indices above.]

Alignment, four views
The relations between words can be reduced to the row and column maximums (they might not be unique). They can express edges between words in a word graph.

Training on source, source words => target words:
  source index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
  target index: 0 6 5 2 3 3 7 8 8 8  8  9 10 10 15 15 12 14 13
Training on target, source words => target words:
  source index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
  target index: 0 6 5 2 7 3 7 7 7 8  7  9 10 11 13 12 12 14 14
Training on source, target words => source words:
  target index: 0 1 2 3 4 5 6 7 8  9 10 11 12 13 14 15 16
  source index: 0 0 3 5 6 2 1 6 7 11 12  9 16 18 17 15  9
Training on target, target words => source words:
  target index: 0 1 2 3 4 5 6 7 8  9 10 11 12 13 14 15 16
  source index: 0 3 3 5 2 2 1 6 7 11 12 15 16 18 17 18 14

Alignment

  source words   aligned target words (occurrences)
  Je             I(4)
  vous           you(4)
  demands        ask(4), more(1)
  donc           therefore(4), would(1)
  à              to(1), once(1)
  nouveau        once(4)
  de             to(4), more(1)
  faire          ensure(3), to(1)
  le             to(1), ensure(1)
  nécessaire     ensure(2), get(1), well(1)
  pour           to(1), ensure(1)
  que            that(5)
  nous           we(4)
  puissions      get(1), we(1)
  disposer       Dutch(1), as(1), well(1)
  d'             as(2), get(1), a(1)
  une            a(4)
  chaîne         channel(4)
  néerlandaise   Dutch(3), channel(1), as(1)
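The alignment rule reduces to an argmax over the dual expansion of R_W. A small sketch (the kernels and dual coefficients are passed in; names are placeholders, not the original code):

```python
# Sketch of aligning a target word to the source word(s) maximising
# R_W(ws, wt) = sum_{wr in Ss} alpha[wr] * k_psi(ws, wr) * k_phi(wr, wt).
# Ties yield multiple alignments, as noted on the slide.

def relation(ws, wt, source_words, alpha, k_psi, k_phi):
    return sum(alpha[wr] * k_psi(ws, wr) * k_phi(wr, wt)
               for wr in source_words)

def align(wt, source_words, alpha, k_psi, k_phi):
    scores = {ws: relation(ws, wt, source_words, alpha, k_psi, k_phi)
              for ws in source_words}
    best = max(scores.values())
    return [ws for ws, s in scores.items() if s == best]
```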
Phrase prediction
[Figure: the source and target word-index table again, with predicted phrase pairs highlighted.]
- Collect the target words most related to the words of a given source phrase.
- A target word has edges going into this source phrase and into its complement with respect to the sentence.
- Consider the former as positive edges and the latter ones as negative ones.
- If the sum of scores on the positive edges is greater than on the negative ones, then the word belongs to the translation of the source phrase, where the score is equal to $R_W(w_s, w_t) = \langle \psi_{S_s}(w_s), W \phi_{S_s,S_t}(w_t) \rangle$.
- Gaps can be allowed or prohibited on both sides. No gap dependency!
- The phrase score is just the sum of the scores of the words within it in the current implementation.

Offline versus online, parallel processing
- The update of the phrase table works in an online fashion; all new sentences are processed incrementally.
- Computation of the features, optimization and phrase prediction can be evaluated in parallel in a multiprocessor system.

MMBT versus GIZA
[Figure: recall versus precision curves for MMBT, Moses and Sinuhe pruning.]
[Figure: tuning recall and precision via bi-phrase score thresholds, GIZA versus MMBT.]

Current state
- On a desktop machine (CPU: Intel 2.1 GHz), ~5 sentences per second can be trained, assuming the average length of the Europarl sentences.
- The memory requirement is ~8 GB for a 1 million sentence training corpus, which can be reduced to half at the expense of speed.
- Accuracies with the decoder being developed in parallel, which currently translates 50 sentences/second if the phrase dictionary is stored on a memory disk:

  Languages        Bleu    Nist    Training size/Test size
  French-English   0.2642  7.6713  1.3 million/10000

- The prototype is written in pure Python code.

This is the End ...
Thanks!",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "9S_pNXw6YOWs",
"year": null,
"venue": "SMART@EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-smart.10.pdf",
"forum_link": "https://openreview.net/forum?id=9S_pNXw6YOWs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Sentence-level confidence estimation for MT",
"authors": [
"Lucia Specia",
"Nicola Cancedda",
"Marc Dymetman",
"Craig Saunders",
"Marco Turchi",
"Nello Cristianini",
"Zhuoran Wang",
"John Shawe-Taylor"
],
"abstract": "Lucia Specia, Nicola Cancedda, Marc Dymetman, Craig Saunders, Marco Turchi, Nello Cristianini, Zhuoran Wang, John Shawe-Taylor. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Sentence-Level Confidence\nEstimation for MT\nLucia Specia\nNicola Cancedda, Marc Dymetm an, Craig Saun ders - Xerox\nMarco Turchi, Nello Cristianini – Univ. Bristol\nZhuoran Wang, John Shawe-Taylor – UCL\n\nOutline\n The task of Confidence Estimation for Machine Translation\n Approach\nFeatures\nAlgorithms\nMethod\n Experiments\nScenario 1 : providing score to the end-user that is as close as possible\nto a human score\nScenario 2 : filtering out bad translations for professional translators\nThe task of CE for MT\n Goal : given the output of a Machine Translation (MT)\nsystem for a given input, provide an estimate of its quality .\n Motivation : assessing the quality of translations is\n Time consuming :\nLos investigadores dicen gripe porcina tiene «pleno potencial\npandémico\", difundiendo rápidamente entre las personas yes probable que vaya mundial en los próximos seis a nueve\nmeses, con uno de cada tres de la población mundial\ninfectada.\n Not possible - if user does not know the source language:\nशोधकतार्ओं सूअर Ýलू पूणर् पैÛडेिमक की\nक्षमता है, लोगɉ को त×काल और वैिƳक अगले\nछह से नौ महीनɉ मɅ, एक तीन दुिनया की आबादी\n....\nThe task of CE for MT\nUses :\n– Filter out “bad” translations to avoid professional\ntranslators wasting time reading / post-editing them.\n– Make end-users aware of the translation quality.Is it worth providing this translation as \nsuggestion to the professional translator?\nShould this translation be highlighted as \n“suspect” to the reader?\nDifferent from MT evaluation (BLEU, NIST , etc):\nreference translations are NOT available\nUnit: word, phrase or sentence\nEmbedded to SMT system (word or phrase probabilities)\nor dedicated layer (machine learning problem)\nTraditional approach :\nBinary problem : distinguish between good and bad translations\nTraining data : data automatically annotated with\nNIST/BLEU or manually a nnotated (e.g. 1-5 scores)General approach\nDifferent from MT evaluation (BLEU, NIST , etc):\nreference translations are NOT available\nUnit: word, phrase or sentence\nEmbedded to SMT system (word or phrase probabilities)\nor dedicated layer (machine learning problem)\nTraditional approach :\nBinary problem : distinguish between good and bad translations\nContinuous score\nTraining data : data automatically annotated with\nNIST/BLEU or manually annotat ed (e.g. 
1-5 scores), from several MT systems, text domains and language pairs.

Method
1. Identify and extract information sources.
2. Refine the set of information sources to keep only the relevant ones (increase performance).
3. Learn a model to produce quality scores (regression algorithm).
4. Apply the model to predict quality scores for new translations.

Features
- Resource- and language-independent features.
- Black-box (77): from the input and translation sentences, monolingual or parallel corpora, e.g.:
  - Source and target sentence lengths and their ratios
  - Language model and other statistics in the corpus
  - Shallow syntax checking (target, and target against source)
  - Average number of possible translations per source word (SMT)
- Practical scenario: useful when it is not possible to have access to internal features of the MT systems (commercial systems, e.g.). Provides a way to perform the task of CE across different MT systems, which may use different frameworks.

Features
- Glass-box (54): depend on some aspect of the translation process, e.g.:
  - Language model (target) using the n-best list, word/phrase-based
  - Proximity with other hypotheses in the n-best list
  - MT base model features
  - Distortion count, gap count, (compositional) bi-phrase probability
  - Search nodes in the graph (aborted, pruned)
  - Proportion of unknown words in the source
- Richer scenario: when it is possible to have access to internal features of the MT systems.

Learning methods
- Feature selection: Partial Least Squares (PLS)
- Regression: PLS, SVM

Partial Least Squares Regression
- Projects the original data onto a different space of latent variables (or "components").
- Used to find the fundamental relations between two matrices (input X and Y response variables): tries to find the multidimensional direction in the X space that explains the maximum variance direction in the Y space.
- Provides as a by-product an ordering of the original features according to their importance.
- Particularly indicated when the features in X are strongly correlated (multicollinearity), the case in our datasets.
- Ordinary multiple regression problem:
$Y = X B_w + F$
where $B_w$ is the regression matrix, computed directly using an optimal number of components, and $F$ is the residual matrix. When X is standardized, an element of $B_w$ with large absolute value indicates an important X-variable.

Feature Selection with PLS
Method:
1. Compute the $B_w$ matrix on some training data for different numbers of components (all possible).
2. Sort the absolute values of the $b_w$-coefficients. This produces a list of features from the most important to the least important ($L_b$). Done for each i-th training subsample, this yields several lists $L_b(i)$; the final list L is obtained by picking the most "voted" feature at each rank (the mode) over lists such as (10 35 ... 7 66), (10 9 ... 56 44), (9 35 56 66 ...), giving L = {66, 56, ..., 35, 10}.
3. Select the top n features (and the number of components) on a validation set: add features one by one, analyze the learning curves to verify the prediction error, and select the top n features and the number of components that minimize the prediction error.
4. Produce predictions using these n features on a test set (PLS for regression).
5. Evaluate predictions using appropriate metrics: Root Mean Square Error (RMSPE) over all subsamples,
$\mathrm{RMSPE} = \sqrt{\frac{1}{N} \sum_{j=1}^{N} (y_j - \hat{y}_j)^2}$
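As a rough illustration of the feature-ranking step, here is a sketch using scikit-learn's PLSRegression in place of the talk's own implementation (so exact numbers would differ):

```python
# Sketch of PLS-based feature ranking: fit PLS for a given number of
# components, then order features by |b_w|. Illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rank_features(X, y, n_components):
    pls = PLSRegression(n_components=n_components)  # standardizes X by default
    pls.fit(X, y)
    b = np.abs(pls.coef_).ravel()     # |b_w| per feature
    return np.argsort(-b)             # L_b: most to least important

# Repeating this over several training subsamples gives the lists
# L_b(i); the final list L keeps the most "voted" feature per rank.
```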
Experiments - scenario 1
- WMT-2008 Europarl English-Spanish dev and test data.
- 4K translations: SMT systems trained on 1.4M parallel sentences: Matrax, Portage, Sinuhe and MMR (P-ES-1, P-ES-2, P-ES-3 and P-ES-4).
- Quality score: 1-4 (4: fit for purpose; 3: a little post-editing needed; 2: editing quicker than retranslation; 1: requires complete retranslation).
- Features: Matrax (131), others 77 black-box.

CE x best features (Pearson's correlation):
[Figure: bar chart of Pearson's correlation with human scores for CE, aborted nodes, SMT score, ratio scores, LM target, LM source, bi-phrase probability, TM, sentence length, BAD 117 and BAD 76.]

CE score x MT metrics (Pearson's correlation):
[Figure: bar chart of Pearson's correlation with human scores for BLEU-4, BLEU-2, NIST, TER, Meteor exact, Meteor porter and CE.]

CE score x MT metrics: Pearson's correlation across datasets (MT systems, language pairs and text domains):
[Figure: correlations across datasets.]

Experiments - scenario 2
Filter out bad translations (1-2) for professional translators.
- Average human scores in the top n translations:
[Figure: average scores in the top 100/200/300/500 translations as ranked by Human, CE, aborted nodes, SMT score, ratio scores, LM target, LM source, bi-phrase probability, TM, sentence length, BAD 117 and BAD 76.]
- Number of good (bad) translations in the top (bottom) n translations:
[Figure: counts of 3-4 scores in the top 100/200/500 and of 1-2 scores in the bottom 100/200/500, for Human, CE, aborted nodes and length.]

Scenario 2: a better way to use the CE score for filtering out bad translations
- Use a technique to dynamically identify the threshold in the continuous score for bad/good translations:
  - Controls the balance between precision and recall based on some expected confidence level.
  - Higher confidence level, higher precision.
  - In our scenario precision is more relevant: guarantee that sentences selected as 'good' are indeed good, i.e. minimize false positives.
- Technique: conformal prediction [Vovk, Gammerman & Shafer 2005]; Inductive Confidence Machine (ICM)
[Papadopoulos et al. 2002].

Inductive Confidence Machines
Given a pre-defined confidence level $1-\delta$, predict a region $\Gamma = \{y : p(y) > \delta\}$:
- Split a training set $\{(x_1,y_1),\dots,(x_l,y_l)\}$ of $l$ examples into a proper training set $\{(x_1,y_1),\dots,(x_m,y_m)\}$ with $m < l$ elements and a calibration set $\{(x_{m+1},y_{m+1}),\dots,(x_l,y_l)\}$ with $k := l - m$ elements.
- Train a standard regression model on the proper training set.
- Apply it to the calibration set, and define a strangeness measure: $\alpha_i := \Delta(\hat{y}_{m+i}, y_{m+i})$, $i = 1,\dots,k$.
- The p-value associated with a potential label $y_{k+1}$ is:
$p(y_{k+1}) = \frac{\#\{i = 1,\dots,k+1 : \alpha_i \ge \alpha_{k+1}\}}{k+1}$

Inductive Confidence Machines
Establishing the expected precision:
- Confidence level $1-\delta$: expected precision.
- Search for a confidence threshold $\rho$ of the regression predictions: a certain percent (e.g. 90%) of the predictions that are $\ge \rho$ will have their true scores $\ge \tau$.
- For a fixed $\rho$, only consider those examples with $y^* = f(x^*) \ge \rho$.
- Strangeness measure: $\alpha_i := \mathrm{sgn}(\tau - y_{m+i})\,(\hat{y}_{m+i} - \rho)$
- Binary search:
  1. $\rho^- := \min(\hat{y})$, $\rho^+ := \max(\hat{y})$;
  2. $S := \{\hat{y}_i \mid \rho^- \le \hat{y}_i \le \rho^+\}$;
  3. $\rho := \mathrm{median}(S)$, $L := \{\hat{y}_i \mid \hat{y}_i < \rho\}$, $U := \{\hat{y}_i \mid \hat{y}_i \ge \rho\}$;
  4. if $\rho = \rho^-$ or $\rho = \rho^+$ then return $\rho$;
  5. else if the confidence criterion holds at $\rho$ then $\rho^+ := \rho$;
  6. else $\rho^- := \rho$;
  7. go to 2.

[Figure: predicted vs. true scores with ICM confidence regions, two panels.]

Results:
- PLS x PLS+ICM (confidence levels = 95% and 90%).
- PLS+ICM x SVM multi-class and SVM binary (precision, confidence level = 90%).
[Figure: precision comparisons for PLS, PLS+ICM and the SVM variants.]

Discussion
- Results considered to be satisfactory: the error would yield uncertainty in the boundaries between two adjacent categories in the 1-4 datasets.
- Results for estimating a given type of score are similar across different systems and language pairs.
- Results correlate better with human scores than those of metrics using reference translations. Also true for models trained on different datasets.
- Using ICM to threshold good/bad translations is better than using a pre-defined threshold or using classifiers to estimate 1-4 or good/bad scores.
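A rough sketch of the ICM-style threshold search described above, replacing the binary search with a simple scan over candidate thresholds (the interface is my assumption, not the original code):

```python
# Find the smallest rho such that, among calibration examples with
# prediction >= rho, at least (1 - delta) have true score >= tau.
def find_threshold(y_hat, y_true, tau, delta):
    for rho in sorted(set(y_hat)):
        kept = [yt for yh, yt in zip(y_hat, y_true) if yh >= rho]
        if kept and sum(yt >= tau for yt in kept) / len(kept) >= 1 - delta:
            return rho
    return None   # no threshold reaches the requested precision
```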
On-going work
Further investigate uses for the most relevant features:
1. The most relevant features are not those usually considered in SMT models. We plan to investigate whether they could be useful to improve translation quality, e.g.:
- To complement existing features in SMT models.
- To rerank n-best lists produced by SMT systems, which could make use of the features that are not local to single hypotheses.
2. Automatic metrics such as NIST aim at simulating how humans evaluate translations. We plan to investigate our findings with human annotation for MT evaluation, e.g.:
- To provide additional features to reference-based metrics based on ML algorithms, like Albrecht and Hwa (2007).
- To provide a score to be combined with other MT evaluation metrics, like ULC (Gimenez and Marquez, 2008).
- To provide a new evaluation metric in itself, with some function to optimize the correlation with human annotations, without the need for reference translations.

Thanks!
Lucia Specia
[email protected]

The source…
"Researchers say swine flu has "full pandemic potential", spreading readily between people and is likely to go global in the next six to nine months, with one in three of the world's population infected."
BBC News, 12/05/09

Validity of the P-value
- Valid p-value: $P\{p(y) \le \delta\} \le \delta$.
- Assume that the calibration data and an arbitrary new test example are i.i.d. drawn according to a fixed distribution D.
- The active calibration examples ($\hat{y}^*_{m+i} \ge \rho$) are drawn i.i.d. from the conditional distribution $D^* := D\{(x,y) \mid f(x) \ge \rho\}$.
- Assume the current data sequence is produced by generating an unordered set and assigning a permutation to it. The probability of the test example being selected as the last one is $k^*!/(k^*+1)! = 1/(k^*+1)$.
- $p(y^*_{k^*+1}) \le \delta$ if and only if $\alpha(\hat{y}^*_{k^*+1}, y_{k^*+1})$ is among the $\lfloor \delta(k^*+1) \rfloor$ largest $\alpha$s, hence
$P\{p(y^*_{k^*+1}) \le \delta\} = \frac{\lfloor \delta(k^*+1) \rfloor}{k^*+1} \le \delta$",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Yn-biwgjZG",
"year": null,
"venue": "EAMT Workshop 1993",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Yn-biwgjZG",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Relating Parallel Monolingual Lexicon Fragments for Translation Purposes",
"authors": [
"Ulrich Heid"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PRVhh4bzB3Y",
"year": null,
"venue": "EAMT Workshop 1993",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=PRVhh4bzB3Y",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Memory-Based Lexical Acquisition and Processing",
"authors": [
"Walter Daelemans"
],
"abstract": "Current approaches to computational lexicology in language technology are knowledge-based (competence-oriented) and try to abstract away from specific formalisms, domains, and applications. This results in severe complexity, acquisition and reusability bottlenecks. As an alternative, we propose a particular performance-oriented approach to Natural Language Processing based on automatic memory-based learning of linguistic (lexical) tasks. The consequences of the approach for computational lexicology are discussed, and the application of the approach on a number of lexical acquisition and disambiguation tasks in phonology, morphology and syntax is described.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "T4Lm7JxcobZ",
"year": null,
"venue": "SMART@EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-smart.7.pdf",
"forum_link": "https://openreview.net/forum?id=T4Lm7JxcobZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Improving SMT by learning translation direction",
"authors": [
"Cyril Goutte",
"David Kurokawa",
"Pierre Isabelle"
],
"abstract": "Cyril Goutte, David Kurokawa, Pierre Isabelle. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Improving SMT by learning translation direction\nCyril Goutte, David Kurokawa, Pierre Isabelle\nInteractive Language Technologies group\nInstitute for Information Technology\nNational Research Council\nApril 2008 SMART workshop, Barcelona 2009 Cyril Goutte\nSMART workshop, Barcelona 2009 / 1\nMotivation\nWe address two questions:\n1. Is there a difference between original and (human-) translated text and can\nwe detect it reliably?\n2. If so, can we use that to improve Machine Translation quality?\nCyril Goutte\nSMART workshop, Barcelona 2009 / 2\nMotivation\nWe address two questions:\n1. Is there a difference between original and (human-) translated text and can\nwe detect it reliably?\n2. If so, can we use that to improve Machine Translation quality?\nOur answers:\n1. Y es: on the Canadian Hansard, we get 90+% accuracy.\n2. Y es: on French-English, we obtain up to 0.6 BLEU point increase.\nCyril Goutte\nSMART workshop, Barcelona 2009 / 3\nProblem setting\nTranslations often have a “feel” of the original language: Translationese .\nIf translationese is real, it may be possible to detect it!\nEarlier studies:\nIBaroni&Bernardini (2006): detect original vs. translation is a monolingual\nItalian corpus, with accuracy up to 87%.\nIvan Halteren (2008) : detect source language in multi-parallel corpus and\nidentify source language markers.\nBoth show that various aspects of translationese are detectable.\nWe experiment on a large bilingual corpus (Hansard) and investigate how\ndetecting translation direction may impact Machine Translation quality.\nCyril Goutte\nSMART workshop, Barcelona 2009 / 4\nIndex\n1 Motivation and setting .1\n\u000e2Data .4\n3 Detecting Translation Direction .8\n4 Exploiting Translation Direction in SMT .14\n5 Discussion .20\nCyril Goutte\nSMART workshop, Barcelona 2009 / 5\nData: The Hansard corpus\nBilingual (En-Fr) transcripts of the sessions of the Canadian parliament.\nMost of 35th to 39th parliaments, covering 1996-2007.\n1. Tagged with information on original language (French or English).\n2. High quality translation: Reference material in Canada.\n3. 
Large amount of data: 4.5M sentences, 165M words.

              fo         eo          mx
  words (fr)  14,648K    72,054K     86,702K
  words (en)  13,002K    64,899K     77,901K
  sentences   902,349    3,668,389   4,570,738
  blocks      40,538     42,750      83,288

Data: The Hansard corpus (II)
Corpus issues:
- Slightly inconsistent tagging, e.g. both sides claim to be original: puts overall tagging reliability into question.
- Missing text/alignment, e.g. valid English but no translation: seems to be a retrieval issue.
- Imbalance at the word/sentence level: 80% originally English.
- There may be lexical/contextual hints: Quebec MPs tend to speak French, western Canada MPs are almost only anglophones.

Corpus (pre)processing
- Tokenized (NRC in-house tokenizer)
- Lowercased
- Sentence-aligned (NRC implementation of Gale & Church, 1991)
We consider two levels of granularity:
- Sentence-level: individual sentences;
- Block-level: maximal consecutive sequence with the same original language.
Block-level is balanced, sentence-level is imbalanced 4:1 (eo:fo).
Tagged using the freely available "Tree Tagger" (Schmid, 1994).
=> 4 representations: 1) word, 2) lemma, 3) POS and 4) mixed n-grams.
"Mixed": POS for content words, surface form for grammatical words.

Detecting translation direction
Support Vector Machines trained with T. Joachims' SVM-Perf.
Test various conditions:
1. Block-level (83K examples) or sentence-level (1.8M examples, balanced).
2. Features: word, lemma, POS, mixed ... n-gram frequencies.
3. N-gram length: 1...3 for word/lemma, 1...5 for POS/mixed.
4. Monolingual (English or French) or bilingual text.
Sentence-level: test fewer feature/n-gram combinations (because of computational cost). All results obtained from 10-fold cross-validation. Results reported in F-score (approximately accuracy in this case).

Block-level Performance
[Figure: F-score (%) vs. n-gram size (1-5) for word, lemma, mixed and POS features with tf-idf, English detection.]
- Similar performance on French, +1-2% for bilingual, same general shape.
- tf-idf: small but consistent improvement.
- Optimal: word/lemma bigram, POS/mixed trigram.
- Word bigram: F = 90%. Mixed trigram: F = 86%.
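A sketch of such a detection setup in scikit-learn, standing in for the SVM-Perf pipeline of the talk (the feature settings mirror the best block-level configuration; this is not the original code):

```python
# Word unigram+bigram tf-idf features fed to a linear SVM, for
# classifying blocks as originally English (eo) or French (fo).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

detector = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
# blocks: list of strings; labels: "eo" or "fo"
# detector.fit(blocks, labels)
# predictions = detector.predict(new_blocks)
```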
Influence of block length
[Figure: accuracy vs. block length in words (equal bins, from 3 to 2638), for word/lemma/POS/mixed features and 1- to 3-gram sizes, English detection.]
- Large range in block length (3-73887 words!).
- Up to 99% accuracy for large blocks; much better than random for short blocks.
- word > lemma > mixed.

Sentence-level Performance
[Figure: F-score vs. n-gram size (1-5) for word, lemma, mixed and POS features, French and English; 1.8M examples (balanced); some missing conditions (computational cost).]
- F = 77%.

Analysis of the most important bigrams in English (eo = original, fo = translation). Most important = relatively more frequent.

  eo                 fo
  couple of          of the
  alliance )         mr .
  a couple           , the
  do that            in the
  , canadian         to the
  the record         , i
  forward to         . the
  , cpc              ) :
  cpc )              speaker ,
  of us              . i
  this country       : mr
  this particular    , and
  many of            . speaker
  canadian alliance  bq )
  across the         , bq
  out there          hon .
  the things         that the
  for that           on the

- "A couple of": no equivalent in French.
- Canadian Alliance, CPC, NDP: mostly western, mostly anglophone parties.
- BQ (Bloc Québécois): French-speaking.
- The French translation overuses articles and prepositions (because French does), and "Mr. Speaker"!

Impact on Statistical Machine Translation
Typical SMT system training:
- Gather as many English-French aligned sentences as possible.
- Preprocess + split data.
- Estimate parameters in either direction (en->fr and fr->en).
- The original translation direction is not considered at all!
=> We use French originals and English translations to train an en->fr system ("reverse" translation??).
We know SMT is very sensitive to genre/topic... Does the difference between original and translation matter? If so, by how much?

Impact on Statistical Machine Translation
We analyze the impact of translation direction on MT by investigating:
1. Do we get better performance by sending original text to an MT system trained only on original text?
2. Detecting translation direction and sending text to the "right" MT system:
[Diagram: English input -> classifier (orig./trans.) -> routed to the (eo) en->fr or (fo) en->fr system -> French output.]

Impact of Original Language
System trained on eo, fo or mx; tested on the eo/fo part of the test set, or all (mx). BLEU scores:

          mx test set       fo test set       eo test set
  Train   fr->en  en->fr    fr->en  en->fr    fr->en  en->fr
  mx      36.2    37.1      36.1    37.3      36.1    36.9
  fo      31.2    30.8      36.2    36.5      30.5    30.1
  eo      36.6    37.8      33.7    36.0      36.8    38.0

- The eo system does (much) better on the eo test, with 80% of the training data.
- The eo system also does better on mx data (the test is 88% eo data vs. 80% in train).
- The fo system does much worse on mx and eo data, but about the same as mx on the fo data, with only 20% of the training data!
=> Idea: detect the source language using the classifier, then use the right MT system ("Mixture of Experts").
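The routing itself is trivial once the classifier exists; a sketch (function names are placeholders for the two trained decoders, not the actual systems):

```python
# "Mixture of experts": classify the English input as original (eo) or
# translated (fo), then send it to the matching en->fr system.
def translate(text, detector, mt_eo, mt_fo):
    label = detector.predict([text])[0]
    return mt_eo(text) if label == "eo" else mt_fo(text)
```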
Impact of Automatic Detection
The top part is more or less identical to the previous table. Full test set BLEU:

         fr->en   en->fr
  mx     36.86    37.78
  fo     32.00    31.85
  eo     37.20    38.23
  SVM    37.44    38.35
  ref    37.46    38.35

- ref: using reference source language information, we gain a consistent ~0.6 BLEU points.
- SVM: using the SVM prediction, the gain is similar.
- Smaller gain over the eo system (due to having 88% eo data in the test set).
=> Detecting original vs. translation provides a small-ish but consistent improvement in translation performance.
=> Not worth looking for a better classifier (for that task). Other uses of translation direction detection?

Discussion
How general are these results? Will it generalize to:
1. Detection on other English-French data?
2. Training a classifier on another corpus?
3. Another language pair?
4. Other settings: source vs. translations from different languages.
Mixture of experts: could use additional input-specific information. Mother tongue? Gender?

To Conclude...
- Can we tell the difference between an original and a translated document? -> Yes.
- To what level of accuracy? -> Up to 90+% accuracy on blocks, 77% on single sentences.
- Is translation direction useful for machine translation? -> Yes!
- Is the classification performance sufficient? -> Indistinguishable from reference labels...

Cyril Goutte",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "QqpD4PDErK6",
"year": null,
"venue": "EACL 2021",
"pdf_link": "https://aclanthology.org/2021.eacl-main.3.pdf",
"forum_link": "https://openreview.net/forum?id=QqpD4PDErK6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Disambiguatory Signals are Stronger in Word-initial Positions",
"authors": [
"Tiago Pimentel",
"Ryan Cotterell",
"Brian Roark"
],
"abstract": "Psycholinguistic studies of human word processing and lexical access provide ample evidence of the preferred nature of word-initial versus word-final segments, e.g., in terms of attention paid by listeners (greater) or the likelihood of reduction by speakers (lower). This has led to the conjecture—as in Wedel et al. (2019b), but common elsewhere—that languages have evolved to provide more information earlier in words than later. Information-theoretic methods to establish such tendencies in lexicons have suffered from several methodological shortcomings that leave open the question of whether this high word-initial informativeness is actually a property of the lexicon or simply an artefact of the incremental nature of recognition. In this paper, we point out the confounds in existing methods for comparing the informativeness of segments early in the word versus later in the word, and present several new measures that avoid these confounds. When controlling for these confounds, we still find evidence across hundreds of languages that indeed there is a cross-linguistic tendency to front-load information in words.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics , pages 31–41\nApril 19 - 23, 2021. ©2021 Association for Computational Linguistics31Disambiguatory Signals are Stronger in Word-initial Positions\nTiago PimentelDRyan CotterellD;Q\nDUniversity of CambridgeQETH Z ¨urich@Google\[email protected] ,[email protected] ,[email protected] Roark@\nAbstract\nPsycholinguistic studies of human word pro-\ncessing and lexical access provide ample ev-\nidence of the preferred nature of word-initial\nversus word-final segments, e.g., in terms of\nattention paid by listeners (greater) or the\nlikelihood of reduction by speakers (lower).\nThis has led to the conjecture—as in Wedel\net al. (2019b), but common elsewhere—that\nlanguages have evolved to provide more infor-\nmation earlier in words than later. Information-\ntheoretic methods to establish such tendencies\nin lexicons have suffered from several method-\nological shortcomings that leave open the ques-\ntion of whether this high word-initial informa-\ntiveness is actually a property of the lexicon\nor simply an artefact of the incremental nature\nof recognition. In this paper, we point out the\nconfounds in existing methods for comparing\nthe informativeness of segments early in the\nword versus later in the word, and present sev-\neral new measures that avoid these confounds.\nWhen controlling for these confounds, we still\nfind evidence across hundreds of languages\nthat indeed there is a cross-linguistic tendency\nto front-load information in words.1\n1 Introduction\nThe psycholinguistic study of human lexical access\nis largely concerned with the incremental process-\ning of words—whereby, as individual sub-lexical\nunits (e.g., phones) are perceived, listeners up-\ndate their expectations of the word being spoken.\nOne common tenet of such studies is that the dis-\nambiguatory signal contributed by units early in\nthe word is stronger than that contributed later—\ni.e.disambiguatory signals are front-loaded in\nwords . This intuition is derived from ample indi-\nrect evidence that the beginnings of words are more\nimportant for humans during word processing—\nincluding, e.g., evidence of increased attention to\nword beginnings (Nooteboom, 1981, inter alia ) or\n1Our code is available at https://github.com/\ntpimentelms/frontload-disambiguation .\n0 2 4 6 8\nPosition from Start24Forward\ncelex\nwikipedia\nnortheuralex\n8\n 6\n 4\n 2\n 0\nPosition from End24Backward\ncelex\nwikipedia\nnortheuralexFigure 1: Forward and Backward Surprisals with LSTM\nmodel from Pimentel et al. (2020). The bottom plot has been\nflipped horizontally such that it visually corresponds to the\nnormal string direction.\nevidence of increased levels of phonological reduc-\ntion in word endings (van Son and Pols, 2003b).\nTo analyse this front-loading effect, researchers\nhave investigated the information provided by seg-\nments in words. van Son and Pols (2003a,b)\nshowed that, in Dutch, a segment’s position in a\nword is a very strong predictor of its conditional sur-\nprisal, with later segments being more predictable\nthan earlier ones—a result which we show to arise\ndirectly from its definition in x3.3.1. Recently King\nand Wedel (2020) and Pimentel et al. 
(2020) confirmed the effect on many more languages.

Their analysis, however, presents an inherent confound between the amount of conditional information available to a model and the surprisal of the subsequent segment—see Fig. 1 for results illustrating this. Using the LSTM training recipes from Pimentel et al. (2020),[2] we calculated the conditional surprisal at each segment position within the words across all languages in three datasets.[3] The top half of Fig. 1 shows that, indeed, positions earlier in the string have higher surprisal than positions later in the string, supporting the thesis of higher informativity earlier in words. The bottom half shows that modelling the strings right-to-left instead of left-to-right reverses the resulting effect.

[2] https://github.com/tpimentelms/phonotactic-complexity
[3] See §3 and §5 for specifics on training and data. Each segment corresponds to a single phone in CELEX and NorthEuraLex, and to a single grapheme in Wikipedia.

This decouples conditional surprisal from the disambiguatory strength. To expose this decoupling, consider an artificial language where every word contains a copy of its first half, e.g., foofoo, barbar, foobarfoobar, etc. The first and second halves of these words have identical disambiguatory strength; they are the same so one could disambiguate the word as easily from its second half as from the first. In contrast, conditional surprisal would be nearly zero for the second halves of words because the second half is perfectly predictable from the first half.

In natural languages, measuring conditional entropy in a left-to-right fashion inherently forces a reduction of conditional entropy in later segments because of a language's phonotactic constraints. However, the disambiguatory strength of later segments is not inherently less than that of earlier segments. For instance, in a language like Turkish, which has vowel harmony, knowledge of any of the vowels in a word will provide information about the word's other vowels in a similar way. As such, knowledge of vowels towards the front of a word is as disambiguating as of vowels towards its end.

The contributions of this paper are threefold. First, we document and demonstrate the shortcomings of existing methods for measuring the informativeness of individual segments in context, including the confound with the amount of conditional information discussed above. Second, we introduce three surprisal-based measures that control for this confound and enable comparison of word-initial versus -final positions in this respect: unigram, position-specific and cloze surprisal (see §3). Finally, we find robust evidence across many languages of stronger disambiguatory signals in word-initial than word-final positions. Out of a total of 151 languages analysed across three separate collections, 82 of them present a higher cloze surprisal in word beginnings than in endings—with similar patterns arising with the other two measures.

2 Background and Related Work
Psycholinguistic evidence. Lexical access has long been a topic of interest for psycholinguists, leading to many distinct models being proposed for this process (Morton, 1969; Marcus, 1981; Marslen-Wilson, 1987).
Far earlier, though, Bagley (1900) had already demonstrated that earlier segments in words were more important for word recognition than later segments; specifically, they found that, when exposed to words with word-initial or word-final consonant deletions, listeners found the word-initial deletions more disruptive. Fay and Cutler (1977) showed mispronunciations are more likely in word endings, while Bruner and O'Dowd (1958) showed that recognizing written words with flipped initial characters was harder than with word-final ones—demonstrating that the initial part of the word was more "useful" for readers. More recently, Wedel et al. (2019a) found evidence in support of Houlihan (1975), showing neutralizing rules tend to target word endings more significantly than beginnings in both suffixing and prefixing languages.

Nooteboom (1981) investigated the ease of recovering lexical items from either word beginnings or endings, finding that people had an easier time recovering words from their beginnings. For this, he examined words for which the first and second halves each completely identified them in a large Dutch dictionary—controlling for both segments' length and uniqueness. Later on, though, Nooteboom and van der Vlugt (1988) showed this difference vanishes when priming people with the length of the word—proposing the difference comes not from how informative segments were, but from the difficulty in time-aligning later segments in mental lexicons. Connine et al. (1993) also found no difference in priming effects with non-words that differed from real words in either word-initial or medial positions, suggesting initial positions have no special status in word recognition.

Psycholinguistic evidence is key to understanding how lexical access works in human language processing, and can help us understand why lexicons may evolve to provide more disambiguatory signals earlier in words.[4] Given the incremental nature of human lexical processing, however, such evidence cannot provide direct evidence of the nature of the lexicon uninfluenced by incrementality.

[4] Note that there are many possible reasons why the effects we demonstrate in this paper may arise, from the demands of lexical access to constraints on articulation. We provide no evidence for any of the possible explanations, evolutionary or otherwise, just methods for measuring the effect.

Computational evidence. To the best of our knowledge, van Son and Pols (2003b,a) were the first to use computational methods coupled with an information-theoretic definition of informativeness to investigate this question. They showed that segments in the beginning of words carry most of a word's information, as measured by their contextual surprisal using a plug-in tree-structured probabilistic estimator. Although assessing a less-biased sample of words than Nooteboom (1981),[5] this
Thus this comparison amounts\nto asking if later positions in longer and infrequent\nwords have lower surprisal than earlier positions in\nall (frequent or infrequent) words. We analyse this\nconfounding factor in x6.\nWedel et al. (2019b) and King and Wedel (2020)\napplied a methodology similar to that of van Son\nand Pols (2003a) to show, for many diverse lan-\nguages, that more frequent words contain less in-\nformative segments in word initial positions, while\nless frequent types carry more informative ones.\nThey further showed that segments in later word\npositions were less informative (given the previ-\nous ones) than average in rarer words. While con-\ntrolling for length, King and Wedel (2020) also\ncompared words’ forward and backward unique-\nness points—nodes in a trie from which only one\nleaf node can be reached, i.e., where the word is\nuniquely identified—showing they happened ear-\nlier in forward strings.\nWhile these studies provide evidence from more\ndiverse sets of languages, they follow van Son and\nPols (2003a) in studying closed lexicons.6As we\nshow inx3.3.1, the use of probabilistic trie models\non a closed lexicon yields a trivial effect of higher\ninformativity at word initial positions. Furthermore,\nsuch studies cannot account for out-of-vocabulary\nwords (e.g., nonce, proper name or otherwise un-\nknown words) or derivational morphology, which\nare key parts of lexical recognition. Lexical access\n5Nooteboom (1981) looked at words completely identi-\nfiable by both their first and second halves in a large Dutch\ndictionary—this resulted in a study with only 14 words.\n6The closed lexicon assumption is incorporated implicitly\nin the probabilistic trie models used by van Son and Pols\n(2003a,b) and King and Wedel (2020)—i.e. they assign zero\nprobability to any form not in their training sets—and in the\nuniqueness point analysis of King and Wedel (2020).is also somewhat robust to segmental misorder-\ning (Toscano et al., 2013) and sounds later in a\nword help determine the perception of earlier ones\n(Gwilliams et al., 2018). In contrast, a trie over a\nclosed lexicon is deterministic. Beyond this, Luce\n(1986) showed in a corpus study that the proba-\nbility of a word type being uniquely identifiable\nbefore its last segment was only 41%—and 19% of\ntypes were identified only by the end of word, be-\ning proper prefixes of other words, such as catand\ncats. They conclude that uniqueness point statistics\nmay only be useful for long word analysis.\nIn Pimentel et al. (2020), we analysed several\nlanguages’ phonotactic distributions, focusing on\npresenting a trade-off between phonotactic entropy\nand word length across languages. As a control\nexperiment we analysed the correlation between\na segment’s surprisal and its word position across\n106 languages. 
We did not control for word length and did not run per-language experiments, though—so we could have just been capturing the effect that later positions will mostly be present in languages with longer words (which, as we find, have lower information on average).[7]

While this last work avoids many of the issues raised earlier in this section, it fails to control the key confound mentioned earlier: it relies on left-to-right conditional probabilities to calculate surprisal. Thus segments early in the word have less conditional information and hence are generally of lower probability—a trivial effect that does not indicate a segment's disambiguatory signal strength.

[7] We note this issue only applies to the control experiment, and has no bearing on the key findings of that paper.

3 Measures of Disambiguatory Strength

3.1 A Lexicon Generating Distribution
In this work, instead of the lexicon itself, we investigate the probability distribution from which it is sampled. The distribution is unobserved, but we can get glimpses of it via the sampled lexicon:

$\{w^{(n)}\}_{n=1}^{N} \sim p(w) = \prod_{t=1}^{|w|} p(w_t \mid w_{<t})$  (1)

The distribution $p(w)$ is defined over the entire space of possible phonological wordforms $w \in \Sigma^*$, where $\Sigma$ is a language-specific alphabet and the operator $*$ indicates its Kleene closure.[8] This distribution should assign high probability to likely wordforms (attested or not) and low probability to unlikely ones. Using Chomsky and Halle's (1965) classic example from English, brick (attested) and blick (unattested) would have high probability, whereas *bnick (unattested) would have a low probability.

[8] We pad all strings with the end-of-word (EOW) symbol. For simplicity, we assume the alphabet includes EOW throughout the rest of the paper.

3.2 Entropy and Conditional Entropy
Shannon's entropy is a measure of how much information a random variable contains. Consider a segment $w_t$ at word position $t$, which is a value of the random variable $W_t$. The average information (surprisal) relayed per segment is:

$H(W_t) \equiv \sum_{w_t \in \Sigma} p(w_t) \log \frac{1}{p(w_t)}$  (2)

A random variable is maximally entropic if it is a uniform distribution, in which case $H(W_t) = \log(|\Sigma|)$. Conditional entropy measures how much information the knowledge of a variable conveys, given some previous knowledge. The average information transmitted per segment, given the previous ones in a word, is

$H(W_t \mid W_{<t}) \equiv \sum_{w_{\le t} \in \Sigma^*} p(w_{\le t}) \log \frac{1}{p(w_t \mid w_{<t})}$  (3)

where $w_{\le t} = w_{<t} \circ w_t$. We note the conditional entropy is always smaller than or equal to the entropy, i.e. $H(W_t \mid W_{<t}) \le H(W_t)$.

3.3 Plug-in Estimators, Context Size, and Disambiguatory Strength
Our criticism of previous work investigating the disambiguatory strength of word-initial vs. word-final segments can be mainly divided in two parts: (i) the use of maximum likelihood plug-in estimators of the conditional entropy, by e.g. van Son and Pols (2003b); (ii) the use of left-to-right conditional entropy in itself, by all previous information-theoretic work in this vein.
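For a finite lexicon with known probabilities, eqs. (2) and (3) can be computed directly. A sketch (my own helper, padding with the EOW symbol as in footnote 8 so every position is defined; the paper itself estimates these quantities with neural models instead):

```python
# Plug-in entropy H(W_t) and conditional entropy H(W_t | W_{<t}) of the
# segment at position t, given an explicit distribution over words.
import math
from collections import defaultdict

EOW = "#"   # end-of-word padding symbol

def positional_entropies(lexicon, t):
    """lexicon: {word: prob}, probabilities summing to 1."""
    marginal = defaultdict(float)   # p(w_t)
    prefix = defaultdict(float)     # p(w_{<t})
    joint = defaultdict(float)      # p(w_{<=t})
    for w, p in lexicon.items():
        w = w + EOW * max(0, t + 1 - len(w))
        marginal[w[t]] += p
        prefix[w[:t]] += p
        joint[w[:t + 1]] += p
    h = -sum(p * math.log2(p) for p in marginal.values())
    h_cond = -sum(p * math.log2(p / prefix[u[:-1]])   # p(w_t | w_{<t})
                  for u, p in joint.items())
    return h, h_cond
```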
3.3.1 A Critique of van Son and Pols (2003b)
We present a reductio ad absurdum which shows that van Son and Pols's (2003b) method will lead to the conclusion that word-initial segments are more informative even if all segments were equally entropic and sampled independently—a nonsensical finding. Accordingly, assume the probability distribution $p(w_t \mid w_{<t})$, from which each segment in a word is sampled, was independent, e.g. define

$\hat{p}(w) = \prod_{t=1}^{|w|} \hat{p}(w_t \mid w_{<t}) = \prod_{t=1}^{|w|} \hat{p}(w_t)$  (4)

Assume now that a large, but finite, lexicon is sampled from it: $\{\hat{w}^{(n)}\}_{n=1}^{N} \sim \hat{p}(w)$. Further consider modelling this sampled lexicon with a probabilistic trie structure, similarly to what was done by van Son and Pols (2003a,b),[9] i.e.

$q_{\mathrm{trie}}(w_t \mid w_{<t}) = \frac{\mathrm{count}(w_t, w_{t-1}, \dots, w_0)}{\mathrm{count}(w_{t-1}, \dots, w_0)}$  (5)

where $w_0$ is the beginning-of-word symbol. Such a model uses all $N$ words to approximate the distribution of the first segment—i.e. $\mathrm{count}(w_0) = N$. Yet after $t-1$ segments, an exponentially smaller sample is used to capture the distribution—i.e. $\mathbb{E}[\mathrm{count}(w_{t-1}, \dots, w_0)] = N / |\Sigma|^{t-1}$. Using this model as a plug-in estimator of the entropy will lead to negatively biased estimates, where the error is approximately (Basharin, 1959):

$H(W_t \mid W_{t-1}) - \mathbb{E}[\hat{H}] \approx \frac{(|\Sigma| - 1) \log e}{\mathrm{count}(w_{t-1}, \dots, w_0)} \approx \frac{|\Sigma|^{t-1} (|\Sigma| - 1) \log e}{N}$  (6)

where $\hat{H}$ is a plug-in estimate of the entropy. The error grows exponentially in $t$ due to the $|\Sigma|^{t-1}$ factor. However, by assumption, $H(W_t \mid W_{t-1})$ is constant—we have equally entropic and independent segments. Thus, the only way for this difference to increase is for the second term to decrease as a function of $t$. It follows that the estimated cross-entropies decrease as a function of $t$ due to a methodological technicality. Indeed, in the extreme case, every position after a word's uniqueness point would be estimated to have zero entropy. Thus, van Son and Pols's (2003a) method only reveals a trivial effect.

[9] This is in fact a simplification of van Son and Pols's (2003a) model, which in practice uses Katz smoothing.

3.3.2 Conditional Entropy and Context Size
As previously mentioned, the conditional entropy measures how much information the knowledge of a variable conveys, given some previous information, and it is always smaller than or equal to the entropy. For this reason, relying on left-to-right conditional entropies to estimate the strength of disambiguatory signals yields straightforward results; the availability of larger conditioning contexts in a word's final segments will naturally reduce its conditional entropy. This will negatively skew the estimated informativeness of the later parts of a word.

$H(W_t) \ge H(W_t \mid W_{t-1}) \ge H(W_t \mid W_{<t})$  (7)

This effect can also be easily demonstrated by the symmetrical nature of mutual information (MI), where the MI is defined as:

$\mathrm{MI}(W_t; W_{t-1}) = H(W_t) - H(W_t \mid W_{t-1}) = H(W_{t-1}) - H(W_{t-1} \mid W_t) = \mathrm{MI}(W_{t-1}; W_t)$  (8)

If we assume both segments had the same unconditional entropy, i.e. $H(W_t) = H(W_{t-1})$, then using left-to-right conditional entropies would suggest the later segment was less informative, while right-to-left conditioning would imply the opposite. Nonetheless, both their contextual and uncontextual disambiguatory strength would in fact be the same, if we estimated it with equal-sized contexts:

$H(W_t) = H(W_{t-1}) \implies H(W_t \mid W_{t-1}) = H(W_{t-1} \mid W_t)$  (9)
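The bias argument is easy to reproduce empirically. A sketch that samples an i.i.d. uniform lexicon, where the true conditional entropy is constant (log2 8 = 3 bits), and applies the trie estimator of eq. (5):

```python
# Plug-in conditional entropy under the q_trie counts of eq. (5).
import math
import random
from collections import Counter

def trie_conditional_entropy(words, t):
    prefix = Counter(w[:t] for w in words if len(w) > t)
    joint = Counter(w[:t + 1] for w in words if len(w) > t)
    n = sum(joint.values())
    return -sum(c / n * math.log2(c / prefix[u[:-1]])
                for u, c in joint.items())

random.seed(0)
sigma = "abcdefgh"   # |Sigma| = 8; segments uniform and independent
lexicon = ["".join(random.choice(sigma) for _ in range(6))
           for _ in range(5000)]
for t in range(6):
    print(t, round(trie_conditional_entropy(lexicon, t), 3))
# True H(W_t | W_{t-1}) is 3 bits at every t, yet the estimates decay
# with t as the per-prefix sample size collapses.
```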
3.4 Cross-Entropy and Entropy
As mentioned above, the distribution $p(w)$ is not directly observable. We can, however, approximate it using character-level language models $p_\theta(w)$. We are interested in the entropy of variable $W_t$; as a proxy we measure its cross-entropy

$H_\theta(W_t) \equiv \sum_{w_t \in \Sigma} p(w_t) \underbrace{\log \frac{1}{p_\theta(w_t)}}_{\text{surprisal}}$  (10)

where the surprisal is the information provided by a single segment instance $w_t$. The cross-entropy is an upper bound on the entropy, i.e. $H(W_t) \le H_\theta(W_t)$, with their difference being the Kullback–Leibler (KL) divergence between both distributions. Since the KL-divergence is always positive, this upper bound holds. Furthermore, the closer $p_\theta$ is to the true distribution $p$, the smaller the divergence is, and the tighter this bound. As such, the better our model is at estimating the true distribution, the better our estimates of the entropy will be.

Calculating eq. (10) still requires knowledge of the true $p$. We overcome this limitation by empirically estimating it on a held-out part of the lexicon:

$H_\theta(W_t) \approx \frac{1}{N} \sum_{n=1}^{N} \log \frac{1}{p_\theta(w_t^{(n)})}$  (11)

3.5 Earlier vs. Later Word Entropy
For the remainder of this work, we will discuss information in terms of surprisal, since the entropy is its expected value. We analyse the distribution of disambiguatory information across word positions via three distinct measures—all of which control for the amount of conditioning per position:

- Unigram Surprisal $H_\theta(W_t)$: the surprisal of individual segments.
- Cloze Surprisal $H_\theta(W_t \mid W_{\neq t})$: the surprisal of a segment given all others in the same word.
- Position-Specific Surprisal $H_\theta(W_t \mid T = t, |W|)$: the surprisal of individual segments given their position in the wordform and the word's length.

The unigram surprisal captures the information provided by each segment when considering no context, while the cloze surprisal represents the information provided by a segment when one already knows the rest of the word. The position-specific surprisal represents a midway between both, conditioning each segment only on its position and the word's length—being inspired by Nooteboom and van der Vlugt's (1988) experiments. These three measures of information control for the context size considered at each position, being thus better for an investigation of disambiguatory strength.

We used a unigram model (see §4) to estimate the unigram surprisal, and transformers (Vaswani et al., 2017) for cloze and position-specific surprisals. We also use the LSTM (Long Short-Term Memory; Hochreiter and Schmidhuber, 1997) model from Pimentel et al. (2020) for two other entropy measures which do not control for the amount of conditional information:

- Forward Surprisal $H_\theta(W_t \mid W_{<t})$: the surprisal of a segment given the previous ones.
- Backward Surprisal $H_\theta(W_t \mid W_{>t})$: the surprisal of a segment given the future ones.

We include the beginning- and end-of-word symbols in the forward and backward surprisal analysis, respectively, following previous work (Wedel et al., 2019b; Pimentel et al., 2020; King and Wedel, 2020). However, we ignore them in the unigram, position-specific and cloze surprisal analyses. Position-specific and cloze surprisal are given information about word length, hence these symbols are unambiguously predictable. We analyse the impact of these symbols in §6.
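Eq. (11) amounts to averaging held-out surprisals. A sketch against a generic model interface (the model.prob method is my assumption, standing in for whichever conditioning a given measure uses):

```python
# Empirical cross-entropy at position t: mean of -log2 p_theta over a
# held-out set of words. `model.prob(word, t)` is a placeholder for
# p_theta under the chosen conditioning (none, cloze, forward, ...).
import math

def positional_cross_entropy(model, heldout, t):
    s = [-math.log2(model.prob(w, t)) for w in heldout if len(w) > t]
    return sum(s) / len(s)
```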
4 Character-Level Language Models

In this paper, we make use of character-level language models to model the probability distributions p_θ and approximate the relevant cross-entropies.

Unigram. This might be the simplest language model still in use in Natural Language Processing. We use its Laplace-smoothed variant

  p_θ(w_t) = (count(w_t) + 1) / (Σ_{c′ ∈ Σ} count(c′) + |Σ|)    (12)

LSTM. This architecture is the state-of-the-art for character-level language modelling (Melis et al., 2020). Given a sequence of segments w ∈ Σ*, we use one-hot lookup embeddings to transform each of them into a vector z_t ∈ R^d. We then feed these vectors into a k-layer LSTM

  h_t = LSTM(z_{t−1}, h_{t−1})    (13)

where h ∈ R^d, h_0 is a vector with all zeros and w_0 is the beginning-of-word symbol. We then linearly transform these vectors before feeding them into a softmax non-linearity to obtain the distribution

  p_θ(w_t | w_{<t}) = softmax(W h_t + b)    (14)

In this equation, W ∈ R^{|Σ|×d} is a weight matrix and b ∈ R^{|Σ|} a bias vector.

Backward LSTM. To get the backward surprisals we use models with the same architecture, but reverse all strings before feeding them to the models. As such, we get the similar equations

  h_t = LSTM(z_{t+1}, h_{t+1})    (15)
  p_θ(w_t | w_{>t}) = softmax(W h_t + b)    (16)

Transformer. Transformers allow a segment to be conditioned on both future and previous symbols. Our implementation starts similar to the LSTM one, getting embedding vectors z_t for each segment in the string w ∈ Σ*, except that we replace segment w_t with a MASK symbol. We then feed these vectors through k multi-headed self-attention layers, as defined by Vaswani et al. (2017). Finally, the representations from the last layer are linearly transformed and fed into a softmax

  p_θ(w_t | w_{≠t}) = softmax(W h_t + b)    (17)

Position-Specific Transformer. To get position-specific surprisal values, we again use a transformer architecture, but instead of replacing a single segment with a MASK symbol, we replace all of them. This is equivalent to conditioning each segment's distribution on its position and the word length—i.e., estimating p_θ(w_t | t, |w|).
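A minimal sketch of the Laplace-smoothed unigram model in eq. (12) (an illustration rather than the paper's implementation; restricting Σ to the alphabet observed in training is a simplifying assumption made here):

```python
from collections import Counter

def train_unigram(lexicon):
    """Add-one smoothed segment unigram, eq. (12)."""
    counts = Counter(ch for word in lexicon for ch in word)
    sigma = sorted(counts)                       # observed alphabet
    total = sum(counts.values()) + len(sigma)    # add-one normaliser
    return {c: (counts[c] + 1) / total for c in sigma}

probs = train_unigram(["cat", "cab", "bat"])
print(probs["a"])   # (3 + 1) / (9 + 4) for this toy lexicon
```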
5 Data

In order to estimate redundancy and informativeness of segments we use three different datasets, each with its own pros and cons. We focus on types instead of tokens—i.e., the datasets consist of lexicons—for a few different reasons. First, it is easier to get reliable samples of types than tokens for a language, especially low-resource ones. Second, it is a well-known result that token frequency correlates with both word length (Zipf, 1949) and phonotactic probability (Mahowald et al., 2018; Meylan and Griffiths, 2017), so that would be a strong confound in the results. Third, morphology is more easily modeled at the type level than at the token level (Goldwater et al., 2011).¹⁰

CELEX (Baayen et al., 2015) allows us to experiment exclusively on monomorphemic words, but covers only three closely related languages. It contains both morphological and phonetic annotations for a large number of words in English, Dutch and German. We follow Dautriche et al. (2017) in using only words labeled as monomorphemic in our study, leaving us with 4,810 words in German, 6,206 words in English and 7,045 words in Dutch.

NorthEuraLex (Dellert et al., 2019) spans 107 languages from 21 language families in a unified IPA format. This database is composed of concept-aligned word lists for these languages, containing 1016 concepts, each of them translated in most languages. However, most of these languages are from Eurasia, hence the collection lacks the typological diversity we would ideally like.

Wikipedia allows us to investigate a broader and more diverse set of languages, but has no phonetic information (only graphemes) and lexicons extracted from it may be "contaminated" with foreign words. We fetch the Wikipedia for a set of 41 diverse languages,¹¹ and tokenise their text using language-specific tokenisers from spaCy (Honnibal and Montani, 2017). When a language-specific tokeniser was not available, we used a multilingual one. We then filtered all non-word tokens—by removing the ones with any symbol not in the language's scripts—and kept only the 10,000 most frequent types in each language.

¹⁰ For each of the analysed datasets, we use 80% of the word types for training, with the rest being equally split between development and test sets; only test set surprisal and cross-entropies are used in our analysis.
¹¹ These languages were: af, ak, ar, bg, bn, chr, de, el, en, es, et, eu, fa, fi, ga, gn, haw, he, hi, hu, id, is, it, kn, lt, mr, no, nv, pl, pt, ru, sn, sw, ta, te, th, tl, tr, tt, ur, zu.

6 Experiments and Results

Forward Surprisal. We first replicate the results from van Son and Pols (2003a,b), Wedel et al. (2019b), and Pimentel et al. (2020), which show that surprisal decreases as the word's position advances. On average, forward surprisal, i.e. H_θ(W_t | W_{<t}), could decrease for two reasons: (i) words indeed front-load disambiguatory signals; or (ii) the trivial fact that conditioning reduces entropy. For each word, we first get the forward surprisal for each segment in it. We then group surprisal values in two groups: word initial (when they are in the first half of the word) and final (when in the second half), ignoring mid positions in words with uneven lengths; we average these initial and final surprisals per word, getting a single value of each per word. This way we compare earlier vs. later word positions while ignoring any length effect—words of all lengths will possess segments in both groups. For each analysed language, we then use permutation tests (permuting word initial and final surprisals) to evaluate if one group is statistically larger than the other—using 100,000 permutations. All but one language in the three analysed datasets had significantly larger surprisal in word initial positions¹²—the exception being Abkhaz in NorthEuraLex. These results can be seen in Tab. 1 and in Fig. 2 (left).

¹² All statistical significance results in this work have been corrected for multiple tests with Benjamini and Hochberg (1995) corrections and use a confidence value of p < 0.01.

[Figure 2: Word initial vs. final surprisals with: (left) Forward; (right) Backward.]
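The paired permutation test used throughout this section admits a compact sketch (again illustrative, not the paper's exact test code; it assumes one mean initial and one mean final surprisal per word):

```python
import random

def permutation_test(initial, final, n_perm=100_000, seed=0):
    """One-sided p-value for 'initial surprisal > final surprisal'."""
    rng = random.Random(seed)
    n = len(initial)
    observed = sum(i - f for i, f in zip(initial, final)) / n
    hits = 0
    for _ in range(n_perm):
        # Randomly swap the initial/final labels within each word.
        diff = sum((i - f) if rng.random() < 0.5 else (f - i)
                   for i, f in zip(initial, final))
        if diff / n >= observed:
            hits += 1
    return hits / n_perm
```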
Backward Surprisal. If the result for forward surprisal is largely due to the amount of conditional information, then reversing the strings should lead to a roughly opposite effect. With this in mind, for each language, we again bin surprisals in word initial vs. final position, but now we evaluate languages using backward surprisal, i.e., H_θ(W_t | W_{>t}).¹³ When using backward surprisal, many of the analysed languages have significantly higher surprisals in word final positions (see Tab. 1 and the right graph in Fig. 2). However, 11 languages in the NorthEuraLex dataset still have higher word initial surprisals, suggesting that initial positions in these languages are indeed largely more informative than final ones.¹⁴ There does seem to be a large effect of the amount of conditional information and also some lexical effect of front-loading disambiguatory signals; however, it is difficult to determine if there are cross-linguistic tendencies with these measures.

¹³ We note that King and Wedel (2020) also used backward surprisal, although with a different objective in mind. In one of their experiments, they presented aggregate results of a comparison between the forward and backward surprisal.
¹⁴ We also ran the same experiments with a probabilistic trie model like the ones used in van Son and Pols (2003b) and Wedel et al. (2019b), which showed an even stronger result reversal when using backward surprisal.

Unigram Surprisal. To control for the conditioning aspect of the question—do words front-load their disambiguatory signals?—we can look at unigram surprisal H_θ(W_t). This value tells us how uncommon the segments that appear in a certain position are, when analysed in isolation from the rest of the word—uncommon segments are more informative and provide a stronger signal for disambiguation. In NorthEuraLex, 71 of the languages have significantly higher informativity in word beginnings than in endings—nonetheless, one language (Kildin Saami) has higher surprisals in word endings. In CELEX, Dutch and German have higher surprisals in initial positions, but English does not. And in Wikipedia, all languages but Hebrew and Bengali have higher surprisal in initial positions—with Bengali having higher surprisal in word endings. This experiment suggests that indeed most languages are biased towards providing stronger disambiguatory signals in word beginnings, even when we control for the amount of conditional information. Nonetheless, this is not a universal characteristic which all languages share, and two analysed languages even had a statistically significant inverse effect.

Dataset        # Languages   Forward   Backward   Unigram   Position-Specific   Cloze
CELEX                3          3/0       0/3        2/0           2/1            2/1
NorthEuraLex       107        106/0      11/31      71/1          24/4           45/1
Wikipedia           41         41/0       0/39      39/1          31/1           35/2

Table 1: Number of languages in the analysed datasets with significantly larger surprisals in initial/final positions.

Position-Specific Surprisal. While cloze surprisal makes explicit the non-redundant informativity a segment conveys, unigram surprisal analyses the same segments in isolation. Position-specific surprisal provides a midway analysis, incorporating the position as some previously-specified knowledge, but not conditioning on the other segments in the word. The position-specific surprisal is inspired by Nooteboom and van der Vlugt's (1988) experiments, which prime individuals on word length and position. As can be seen in Tab. 1, position-specific surprisal again seems to favour initial positions over final, but only slightly. Interestingly, most languages present no significant difference and some show the inverse effect (i.e. higher surprisal in final positions).
Position-specific Unigram models. To better understand the differences between the unigram and position-specific surprisal results, we trained position-specific unigram models—which count each segment's frequency per position—and then calculated their Kullback–Leibler (KL) divergence per position with the traditional unigram

  KL(p(w_t | t) || p(w_t)) = Σ_{w_t ∈ Σ} p(w_t | t) log ( p(w_t | t) / p(w_t) )    (18)

We compare these KL divergences and find that, for all but four languages, the KL is largest in either the first or second segment positions.¹⁵ This suggests that one of the reasons for higher unigram surprisal in initial positions is that the first two segments usually differ from the rest of the positions, potentially serving as markers for word segmentation.

¹⁵ We use Laplacian smoothing in the position-specific unigrams and constrain the analysis to positions which appear in at least 75% of the analysed words in that language.
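A minimal sketch of eq. (18) with the Laplacian smoothing mentioned in the footnote (illustrative only; the 75%-coverage filter is omitted here for brevity):

```python
import math
from collections import Counter

def positional_kl(lexicon, t):
    """KL(p(w_t | t) || p(w_t)) in bits, with add-one smoothing."""
    alphabet = sorted({ch for w in lexicon for ch in w})
    uni = Counter(ch for w in lexicon for ch in w)   # traditional unigram
    pos = Counter(w[t] for w in lexicon if t < len(w))  # position-specific
    n_uni = sum(uni.values()) + len(alphabet)
    n_pos = sum(pos.values()) + len(alphabet)
    kl = 0.0
    for c in alphabet:
        p_t = (pos[c] + 1) / n_pos    # p(w_t | t)
        p = (uni[c] + 1) / n_uni      # p(w_t)
        kl += p_t * math.log2(p_t / p)
    return kl
```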
[Figure 3: Word initial vs. final cloze surprisals.]

Cloze Surprisal. When we condition a segment on all others in the same word, we measure how much uncertainty is left about that individual segment when considering everything else, or, in other words, how much information is passed only by that segment non-redundantly. Word initial surprisal is higher in most analysed languages (see Tab. 1). Nonetheless, two languages in Wikipedia, Thai and Bengali, have significantly higher surprisal in their final segments—while English in CELEX and Hungarian in NorthEuraLex also present this same inverse effect. Front-loading disambiguatory information, thus, is not established to be the linguistic universal it is believed to be, with only roughly half the analysed languages showing this property when we control for morphology (CELEX and NorthEuraLex). Fig. 3 plots the results for all languages analysed.

When we compare these results, we find an interesting pattern. Morphology seems to reduce non-redundant (cloze) information later in the words—while only half of the languages had significant surprisals in CELEX (which consists of monomorphemic words) and NorthEuraLex (base forms), most languages were significant in Wikipedia. Furthermore, English and Hungarian had significantly higher surprisals in word endings in CELEX and NorthEuraLex, while showing the opposite trend in Wikipedia—this is consistent with the fact that suffix morphemes are present in more types than word roots are, so morphology would make word endings less surprising.

Measure             EOW    Non-EOW
Forward             1.14     3.55
Backward            0.89     3.61
Unigram             2.75     4.90
Position-specific   0.00     4.36
Cloze               0.00     3.23

Table 2: Average surprisal (in bits) of EOW vs. non-EOW segments averaged over all datasets.

Length as a Confounding Effect. We evaluate the impact of length as a confounding effect on previous methodologies. As mentioned in §2, by directly analysing surprisal–position pairs (as opposed to binning word initial vs. final positions), previous work confounds position and word length—i.e., only long words will have later word positions. In this study, we analyse forward surprisal–length pairs; instead of pairing a segment's surprisal with its position, we pair it with its word length. We then get the slope formed by a linear regression between these pairs of values and test for its significance per language by using a permutation test, in which we shuffle surprisal–length values. On the three datasets, all languages have statistically significant negative slopes, meaning long words have smaller surprisals on average than shorter ones.¹⁶ A caveat, though, is that now we are confounding position into our length analysis. Constraining our analysis only to the first two segments in each word, we still find the same effect—though now one language (Hebrew) in Wikipedia and seven in NorthEuraLex are not significant. We can thus conclude that longer words have smaller surprisal values than shorter ones, even when controlling for the same word positions. This implies that directly using surprisal–position pairs for such an analysis is not ideal.

¹⁶ King and Wedel (2020) indeed present a similar correlation in their Figure 2.

The Effect of End of Word in Surprisal. The end-of-word (EOW) symbol is a special "segment" which symbolises the end of a string. It is necessary when modelling the probability distribution over strings w ∈ Σ*, to guarantee that the overall distribution sums to 1. Nonetheless, it is expected to behave in a different way from other segments. If a speaker wants to reduce their production effort, although changing from one phone to another may help, the most efficient way is usually just ending the string earlier. Furthermore, since all realisable strings must eventually end, it will be present in all words, making it a very frequent symbol—in fact, Tab. 2 shows its average surprisal is much lower than that of other segments. As such, it is only natural it should be analysed on its own, separately from other segments. Through the same logic, other segments should also be analysed separately from EOW—or else, lower word final surprisals may be due to this symbol alone. As such, we analyse the surprisal of LSTM "language models" without the EOW symbol here.¹⁷

                       With EOW                     Without EOW
Measure       Initial   Final   Diff (%)    Initial   Final   Diff (%)
Forward        3.85     2.65      31.1       3.83     3.00      21.6
Backward       3.02     3.40     -11.3       3.63     3.39       6.7
Unigram          -        -         -        4.85     4.40       9.3
Position         -        -         -        4.36     4.17       4.3
Cloze            -        -         -        3.26     2.81      13.9

Table 3: Average surprisal per segment in word initial and final positions with and without EOW symbols.

Unsurprisingly, Tab. 3 shows the difference between word initial and final positions is considerably reduced when we remove the EOW symbol from the forward surprisal analysis. Surprisingly, we see that when we remove the beginning-of-word symbol from the backward surprisal analysis, instead of a larger word final surprisal, we get a larger word initial value—even though we are still conditioning the models right-to-left. This result further supports the hypothesis that the disambiguatory signals are on average stronger in word initial positions.

¹⁷ To be more precise, we actually ignore the beginning-of-word symbol when estimating backward surprisal.
7 Conclusions

In this work, we analysed the distribution of disambiguatory information in word positions. We present an in-depth critique of previous work, showing several confounding effects in their analysis. We then proposed the use of three new methods which corrected for these biases—namely unigram, position-specific and cloze surprisal. These models controlled for the amount of conditional information across word positions, allowing for an unbiased analysis of the lexicon. Using these models we show that the lexicons of most languages indeed front-load their disambiguatory signals. This effect, though, is not universal, and the difference in disambiguatory information between word initial and final positions is much lower than previously estimated—ranging from 4% to 14%, depending on the used metric, instead of 31%.

References

R. H. Baayen, R. Piepenbrock, and L. Gulikers. 2015. CELEX2 LDC96L14.
William Chandler Bagley. 1900. The apperception of the spoken sentence: A study in the psychology of language. The American Journal of Psychology, 12(1):80–130.
Georgij P. Basharin. 1959. On a statistical estimate for the entropy of a sequence of independent random variables. Theory of Probability & Its Applications, 4(3):333–336.
Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300.
Jerome S. Bruner and Donald O'Dowd. 1958. A note on the informativeness of parts of words. Language and Speech, 1(2):98–101.
Noam Chomsky and Morris Halle. 1965. Some controversial questions in phonological theory. Journal of Linguistics, 1(2):97–138.
Cynthia M. Connine, Dawn G. Blasko, and Debra Titone. 1993. Do the beginnings of spoken words have a special status in auditory word recognition? Journal of Memory and Language, 32(2):193–210.
Isabelle Dautriche, Kyle Mahowald, Edward Gibson, Anne Christophe, and Steven T. Piantadosi. 2017. Words cluster phonetically beyond phonotactic regularities. Cognition, 163:128–145.
Johannes Dellert, Thora Daneyko, Alla Münch, Alina Ladygina, Armin Buch, Natalie Clarius, Ilja Grigorjew, Mohamed Balabel, Hizniye Isabella Boga, Zalina Baysarova, et al. 2019. NorthEuraLex: A wide-coverage lexical database of Northern Eurasia. Language Resources and Evaluation, pages 1–29.
David Fay and Anne Cutler. 1977. Malapropisms and the structure of the mental lexicon. Linguistic Inquiry, 8(3):505–520.
Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2011. Producing power-law distributions and damping word frequencies with two-stage language models. Journal of Machine Learning Research, 12(Jul):2335–2382.
Laura Gwilliams, Tal Linzen, David Poeppel, and Alec Marantz. 2018. In spoken word recognition, the future predicts the past. Journal of Neuroscience, 38(35):7585–7599.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing.
Kathleen Houlihan. 1975. The Role of Word Boundary in Phonological Processes. Ph.D. thesis, University of Texas at Austin.
Adam King and Andrew Wedel. 2020. Greater early disambiguating information for less-probable words: The lexicon is shaped by incremental processing. Open Mind, pages 1–12.
Paul A. Luce. 1986. A computational analysis of uniqueness points in auditory word recognition. Perception & Psychophysics, 39(3):155–158.
Kyle Mahowald, Isabelle Dautriche, Edward Gibson, and Steven T. Piantadosi. 2018. Word forms are structured for efficient use. Cognitive Science, 42(8):3116–3134.
Stephen Michael Marcus. 1981. ERIS—context sensitive coding in speech perception. Journal of Phonetics, 9(2):197–220.
William D. Marslen-Wilson. 1987. Functional parallelism in spoken word-recognition. Cognition, 25(1-2):71–102.
Gábor Melis, Tomáš Kočiský, and Phil Blunsom. 2020. Mogrifier LSTM. In International Conference on Learning Representations.
Stephan C. Meylan and Thomas L. Griffiths. 2017. Word forms—not just their lengths—are optimized for efficient communication. arXiv preprint arXiv:1703.01694.
John Morton. 1969. Interaction of information in word recognition. Psychological Review, 76(2):165.
S. G. Nooteboom and M. J. van der Vlugt. 1988. A search for a word-beginning superiority effect. The Journal of the Acoustical Society of America, 84(6):2018–2032.
Sieb G. Nooteboom. 1981. Lexical retrieval from fragments of spoken words: Beginnings vs endings. Journal of Phonetics, 9(4):407–424.
Tiago Pimentel, Brian Roark, and Ryan Cotterell. 2020. Phonotactic complexity and its trade-offs. Transactions of the Association for Computational Linguistics, 8:1–18.
Rob J. J. H. van Son and Louis C. W. Pols. 2003a. Information structure and efficiency in speech production. In Eighth European Conference on Speech Communication and Technology.
Rob J. J. H. van Son and Louis C. W. Pols. 2003b. How efficient is speech? In Proceedings of the Institute of Phonetic Sciences, volume 25, pages 171–184.
Joseph C. Toscano, Nathaniel D. Anderson, and Bob McMurray. 2013. Reconsidering the role of temporal order in spoken word recognition. Psychonomic Bulletin & Review, 20(5):981–987.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
Andrew Wedel, Adam Ussishkin, and Adam King. 2019a. Crosslinguistic evidence for a strong statistical universal: Phonological neutralization targets word-ends over beginnings. Language, 95(4):e428–e446.
Andrew Wedel, Adam Ussishkin, and Adam King. 2019b. Incremental word processing influences the evolution of phonotactic patterns. Folia Linguistica, 40(1):231–248.
George Kingsley Zipf. 1949. Human Behavior and the Principle of Least Effort. Addison-Wesley Press.
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "47GfMSuUTlu",
"year": null,
"venue": "COMPAY 2021",
"pdf_link": "/pdf/e6d9bf6053e17052d597cccdb5303bfcfe79fa7f.pdf",
"forum_link": "https://openreview.net/forum?id=47GfMSuUTlu",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Novel Cell Map Representation for Weakly Supervised Prediction of ER and PR Status from H&E WSIs",
"authors": [
"Hammam Alghamdi",
"Navid Alemi Koohbanani",
"Nasir Rajpoot",
"SHAN E AHMED RAZA"
],
"abstract": "Digital pathology opens new pathways for computational algorithms to play a significant role in the prognosis, diagnosis, and analysis of cancer. However, handling large whole slide images (WSIs) is a vital challenge that these algorithms encounter. In this paper, we propose a novel technique that creates a compressed representation of histology images. This representation is composed of cellular maps and compresses the WSIs while keeping relevant information at hand including the spatial relationships between cells. The compression technique is used to predict the status of ER and PR expressions from H&E images. Our results show that the proposed compression technique can improve the prediction performance by 11-26%.",
"keywords": [
"Computational Pathology",
"ER/PR prediction",
"Compressed representations",
"Breast cancer"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "9zVhkSh5F3",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=9zVhkSh5F3",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "_WQ6XkVP23f",
"year": null,
"venue": "NeurIPS 2022 Accept",
"pdf_link": "/pdf/e2d0d1a35403ba7331dcea11035514de5ed46d1a.pdf",
"forum_link": "https://openreview.net/forum?id=_WQ6XkVP23f",
"arxiv_id": null,
"doi": null
}
|
{
"title": "PALBERT: Teaching ALBERT to Ponder",
"authors": [
"Nikita Balagansky",
"Daniil Gavrilov"
],
"abstract": "Currently, pre-trained models can be considered the default choice for a wide range of NLP tasks. Despite their SoTA results, there is practical evidence that these models may require a different number of computing layers for different input sequences, since evaluating all layers leads to overconfidence in wrong predictions (namely overthinking). This problem can potentially be solved by implementing adaptive computation time approaches, which were first designed to improve inference speed. Recently proposed PonderNet may be a promising solution for performing an early exit by treating the exit layer's index as a latent variable. However, the originally proposed exit criterion, relying on sampling from trained posterior distribution on the probability of exiting from the $i$-th layer, introduces major variance in exit layer indices, significantly reducing the resulting model's performance. In this paper, we propose improving PonderNet with a novel deterministic Q-exit criterion and a revisited model architecture. We adapted the proposed mechanism to ALBERT and RoBERTa and compared it with recent methods for performing an early exit. We observed that the proposed changes can be considered significant improvements on the original PonderNet architecture and outperform PABEE on a wide range of GLUE tasks. In addition, we also performed an in-depth ablation study of the proposed architecture to further understand Lambda layers and their performance.",
"keywords": [
"Early exit",
"ALBERT",
"GLUE"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LRLaUdtO3Or",
"year": null,
"venue": "CoRR 2020",
"pdf_link": "http://arxiv.org/pdf/2008.13333v2",
"forum_link": "https://openreview.net/forum?id=LRLaUdtO3Or",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Algorithms for Solving High Dimensional PDEs: From Nonlinear Monte Carlo to Machine Learning",
"authors": [
"Weinan E",
"Jiequn Han",
"Arnulf Jentzen"
],
"abstract": "In recent years, tremendous progress has been made on numerical algorithms for solving partial differential equations (PDEs) in a very high dimension, using ideas from either nonlinear (multilevel) Monte Carlo or deep learning. They are potentially free of the curse of dimensionality for many different applications and have been proven to be so in the case of some nonlinear Monte Carlo methods for nonlinear parabolic PDEs. In this paper, we review these numerical and theoretical advances. In addition to algorithms based on stochastic reformulations of the original problem, such as the multilevel Picard iteration and the Deep BSDE method, we also discuss algorithms based on the more traditional Ritz, Galerkin, and least square formulations. We hope to demonstrate to the reader that studying PDEs as well as control and variational problems in very high dimensions might very well be among the most promising new directions in mathematics and scientific computing in the near future.",
"keywords": [],
"raw_extracted_content": "Algorithms for Solving High Dimensional PDEs: From\nNonlinear Monte Carlo to Machine Learning\nWeinan E1,2, Jiequn Han1, and Arnulf Jentzen3\n1Department of Mathematics, Princeton University\n2Program in Applied and Computational Mathematics, Princeton University\n3Faculty of Mathematics and Computer Science, University of M unster\nSeptember 14, 2020\nAbstract\nIn recent years, tremendous progress has been made on numerical algorithms\nfor solving partial di\u000berential equations (PDEs) in a very high dimension, using\nideas from either nonlinear (multilevel) Monte Carlo or deep learning. They are\npotentially free of the curse of dimensionality for many di\u000berent applications and\nhave been proven to be so in the case of some nonlinear Monte Carlo methods for\nnonlinear parabolic PDEs.\nIn this paper, we review these numerical and theoretical advances. In addition\nto algorithms based on stochastic reformulations of the original problem, such as\nthe multilevel Picard iteration and the Deep BSDE method, we also discuss algo-\nrithms based on the more traditional Ritz, Galerkin, and least square formulations.\nWe hope to demonstrate to the reader that studying PDEs as well as control and\nvariational problems in very high dimensions might very well be among the most\npromising new directions in mathematics and scienti\fc computing in the near future.\nContents\n1 Introduction 2\n1.1 A brief introduction of supervised learning . . . . . . . . . . . . . . . . . . 4\n1.2 A brief introduction to multilevel Picard approximation methods . . . . . . 6\n2 General remarks about algorithms for solving PDEs in high dimensions 8\n3 The Deep BSDE method 11\n3.1 PDEs and BSDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11\n3.2 The Deep BSDE Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12\n3.3 Some numerical examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 13\n3.4 Analysis of the Deep BSDE method . . . . . . . . . . . . . . . . . . . . . . 15\n1arXiv:2008.13333v2 [math.NA] 11 Sep 2020\n4 Control problems in high dimensions 16\n5 Ritz, Galerkin, and least squares 19\n5.1 The Deep Ritz method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19\n5.2 The least square formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 21\n5.3 The Galerkin formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 21\n6 Multilevel Picard approximation methods for nonlinear PDEs 22\n7 Mathematical results for neural network approximations for PDEs 25\n8 Conclusion 29\n1 Introduction\nThe mathematical models for many problems around us are in the form of partial di\u000ber-\nential equations (PDEs) in high dimensions. Notable examples include:\n\u000fThe Hamilton-Jacobi-Bellman (HJB) equation in control theory\n@u\n@t+H(x;rxu) = 0: (1)\nHere the dimensionality is the dimensionality of the state space of the original\ncontrol problem. If the original control problem is described by a PDE, then the\ncorresponding HJB equation is formulated in in\fnite dimensional space.\n\u000fThe Black-Scholes equations for pricing \fnancial derivatives\n@u\n@t+1\n2\u001b2dX\ni=1x2\ni@2u\n@x2\ni+rhrxu;xi\u0000ru+f(u) = 0: (2)\nHere the dimensionality is the number of underlying \fnancial assets. 
Nonlinear terms f(u) may result when default risks, transaction costs, or other factors are taken into account.

• The many-electron Schrödinger equation in quantum mechanics

  i ∂u/∂t = Δ_x u + V(x) u    (3)

Here the dimensionality is roughly three times the number of electrons in the considered quantum mechanical system.

Solving these PDEs has been a notoriously difficult problem in scientific computing and computational science, due to the well-known curse of dimensionality (CoD): the computational complexity grows exponentially as a function of the dimensionality of the problem [13]. In fact, for this reason, traditional numerical algorithms such as finite difference and finite element methods have been limited to dealing with problems in a rather low dimension. The use of sparse grids can extend the applicability to, say, around 10 dimensions. But beyond that, there seemed to be little hope except for special PDEs.

We are interested in PDEs and algorithms that do not suffer from CoD, i.e., algorithms whose computational complexity scales algebraically with the problem dimension. More precisely, to reach some error tolerance ε > 0, the computational cost should be no more than

  c(d, ε) ∼ C d^α ε^{−β}    (4)

where C, α, β ≥ 0 are absolute, dimension-independent constants. In particular, they do not depend on the dimension d ∈ N = {1, 2, 3, …}. We are also interested in PDEs for which such algorithms do exist. In fact, we are interested in developing a theory of high dimensional PDEs based on the complexity with which the solutions can be approximated using particular schemes, such as neural network models. We believe that such a theory should be part of the foundation for a theoretical understanding of high dimensional control theory, reinforcement learning, and a host of other problems that will become important research topics in mathematics.

The golden standard for high dimensional problems is the approximation of high dimensional integrals. Let g: [0,1]^d → R be a Lebesgue square integrable function defined on the set X = [0,1]^d and let

  I(g) = ∫_X g(x) dx    (5)

Typical grid-based quadrature rules, such as the trapezoidal rule and Simpson's rule, all suffer from CoD. The one algorithm that does not suffer from CoD is the Monte Carlo algorithm, which works as follows. Let x_j, j ∈ {1, 2, …, n}, be independent, continuous uniformly distributed random samples on X and let

  I_n(g) = (1/n) Σ_{j=1}^{n} g(x_j)    (6)

Then a simple calculation gives us

  E[ |I(g) − I_n(g)|² ] = Var(g)/n  and  Var(g) = ∫_X |g(x)|² dx − [ ∫_X g(x) dx ]²    (7)

The O(1/√n) rate is independent of d ∈ N. To reduce the error to a tolerance of ε > 0, the number of samples needed must be of order ε^{−2}. This is a situation with β = 2 in (4).

The value of α is more subtle. We need to examine specific classes of examples in different dimensions. For example, one can ask about the value of α for the Ising or Heisenberg model in statistical physics. At the moment, results in this direction are still quite sparse.
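As a quick illustration of (6)–(7) (not taken from the paper), the following sketch estimates a toy d = 100 integral by Monte Carlo; the integrand is an arbitrary assumption, and the error bar scales like √(Var(g)/n) regardless of d:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
g = lambda x: np.cos(x.sum(axis=-1))        # toy integrand on [0, 1]^d

for n in [10**3, 10**4, 10**5]:
    x = rng.random((n, d))                  # n uniform samples in [0, 1]^d
    samples = g(x)
    estimate = samples.mean()               # I_n(g), eq. (6)
    error_bar = samples.std() / np.sqrt(n)  # ~ sqrt(Var(g)/n), eq. (7)
    print(n, estimate, error_bar)
```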
At the moment, results in this direction are still\nquite sparse.\nAlgorithms and results similar to those of the approximative computation of integrals\nin (5){(7) above have been developed in the case of linear PDEs of the Kolmogorov type\nbut, for a very long time, little progress was made in developing algorithms with quanti-\ntatively similar computational complexity for high dimensional nonlinear PDEs and this\nhas impeded advances in several \felds such as optimal control and quantum mechanics.\n3\nThings have changed dramatically in the last few years with the appearance of so-called\nfull history recursive multilevel Picard approximation methods [37, 38, 89] and a host of\nmachine learning-based algorithms for high dimensional PDEs beginning with the Deep\nBSDE method [36, 70]. Full history recursive multilevel Picard approximation methods\n(in the following we abbreviate full history recursive multilevel Picard by MLP) are some\nrecursive nonlinear variants of the classical Monte-Carlo approximation methods and, in\nthat sense, MLP approximation methods are nonlinear Monte-Carlo approximation meth-\nods. For every arbitrarily small \u000e2(0;1) it has been shown that MLP approximation\nmethods achieve (4) with \u000b= 1 and\f= 2 +\u000efor a wide class of nonlinear (parabolic)\nequations (see Section 6 below for details). Although a complete theory is still lacking,\nthe Deep BSDE method has demonstrated very robust performance in practice for a range\nof problems and has been extended in many di\u000berent ways. These developments will be\nreviewed below.\nAlong with the work on developing algorithms, there has been some e\u000bort to develop\nan open-source platform where codes, review papers, and other information can be shared.\nThe interested reader should consult: http://deeppde.org .\nBefore launching into a more detailed discussion, it is useful to review brie\ry the two\nmain ideas that we focus on in this paper: machine learning approximation methods (see\nSection 1.1 below) and MLP approximation methods (see Section 1.2 below).\n1.1 A brief introduction of supervised learning\nThe basic problem in supervised learning is as follows: Given a natural number n2N\nand a sequence ( xj;yj) = (xj;f\u0003(xj)),j2f1;2;:::;ng, of pairs of input-output data, we\nwant to recover the target function f\u0003as accurately as possible. We will assume that the\ninput data xj,j2f1;2;:::;ng, is sampled from the probability distribution \u0016onRd.\nStep 1. Choose a hypothesis space . This is a set of trial functions Hmwhere\nm2Nis a natural number that is strongly related to the dimensionality of Hm. One\nmight choose piecewise polynomials or wavelets. In modern machine learning the most\npopular choice is neural network functions. Two-layer neural network functions (one input\nlayer, one output layer that usually does not count, and one hidden layer) take the form\nfm(x;\u0012) =1\nmmX\nj=1aj\u001b(hwj;xi) (8)\nwhere\u001b:R!Ris a \fxed scalar nonlinear function and where \u0012= (aj;wj)j2f1;2;:::;mg\nare the parameters to be optimized (or trained). A popular example for the nonlinear\nfunction\u001b:R!Ris the recti\fer function (sometimes also referred to as ReLU (recti\fed\nlinear unit) activation function in which case we have for all z2Rthat\u001b(z) = maxfz;0g.\nRoughly speaking, deep neural network (DNN) functions are obtained if one composes\ntwo-layer neural network functions several times. One important class of DNN models are\nresidual neural networks (ResNet). 
They closely resemble discretized ordinary di\u000berential\n4\nequations and take the form\nzl+1=zl+1\nLMMX\nj=1aj;l\u001b(hzl;wj;li); z 0=Vx; f L(x;\u0012) =h\u000b;zLi (9)\nforl2f0;1;:::;L\u00001gwhereL;M2N. Here the parameters are \u0012= (\u000b;V; (aj;l)j;l;(wj;l)j;l).\nResNets are the model of choice for truly deep neural network models.\nStep 2. Choose a loss function . The primary consideration for the choice of the\nloss function is to \ft the data. Therefore one most obvious choice is the L2loss:\n^Rn(f) =1\nnnX\nj=1jf(xj)\u0000yjj2=1\nnnX\nj=1jf(xj)\u0000f\u0003(xj)j2: (10)\nThis is also called the \\empirical risk\". Sometimes we also add regularization terms.\nStep 3. Choose an optimization algorithm . The most popular optimization\nalgorithms in machine learning are di\u000berent versions of the gradient descent (GD) algo-\nrithm, or its stochastic analog, the stochastic gradient descent (SGD) algorithm. Assume\nthat the objective function we aim to minimize is of the form\nF(\u0012) =E\u0018\u0018\u0017\u0002\nl(\u0012;\u0018)\u0003\n(11)\n(\u0017could be an empirical distribution on a \fnite set). The simplest form of SGD iteration\ntakes the form\n\u0012k+1=\u0012k\u0000\u0011rl(\u0012k;\u0018k); (12)\nfork2N0where\u0018k,k2N0=f0;1;2;:::g, is a sequence of i.i.d. random variables\nsampled from the distribution \u0017and\u0011is the learning rate which might also change during\nthe course of the iteration. In contrast GD takes the form\n\u0012k+1=\u0012k\u0000\u0011rE\u0018\u0018\u0017\u0002\nl(\u0012k;\u0018)\u0003\n: (13)\nObviously this form of SGD can be adapted to loss functions of the form (10) which can\nalso be regarded as an expectation. The DNN-SGD paradigm is the heart of modern\nmachine learning.\nHigh dimensional stochastic control problems. One of the earliest applications of\ndeep learning to problems in scienti\fc computing is for the stochastic control problem [67].\nThis example was chosen because of its close resemblance to the DNN-SGD paradigm in\ndeep learning. From an abstract viewpoint, DNN can be viewed as a (discrete) dynamical\nsystem, of which ResNet is a good example. SGD is a natural consequence when applying\nGD to stochastic optimization problems, in which the objective function is an expectation.\nConsider the stochastic dynamical system:\nst+1=st+bt(st;at) +\u0018t+1: (14)\n5\nHerest;at;\u0018tare respectively the state, the control, and the noise at time t. We assume\nthat the objective function for the control problem is of the form:\nmin\nfatgT\u00001\nt=0EhT\u00001X\nt=0ct(st;at(st)) +cT(sT)js0i\n; (15)\nwhereTis the time horizon and at=at(st) are the feedback controls. One can see the\nclose analogy between the stochastic control problem and the DNN-SGD paradigm: (14)\nplays the role of (9) and (15) is in the form of a stochastic optimization problem. In this\nanalogy, the role of the training data is played by the noise f\u0018tg.\nTo develop an algorithm for this problem, one can approximate the feedback control\nfunctionatby any machine learning model, in particular some neural network model:\nat(st)\u0019at(stj\u0012t); (16)\nwhere\u0012tis the parameter to be trained at time t. The loss function can be de\fned by\nL(f\u0012tg) =E\"T\u00001X\nt=0ct(st;at(stj\u0012t)) +cT(sT)#\n; (17)\nwhich can be optimized using SGD over di\u000berent random samples of \u0018t,t2f1;2;:::;Tg.\nAn example of energy storage is shown in Figure 1. 
It is an allocation problem, with the\nobjective being optimizing total revenue from multiple storage devices and a renewable\nwind energy source while satisfying stochastic demand. More details of the problem can\nbe found in [67, 95].\n0 10000 20000 30000 40000 50000\niteration0.60.70.80.91.01.1reward relative to the case n=50number of devices\nn=30\nn=40\nn=50\nFigure 1: Relative reward to the case of number of devices n= 50 (with controls satisfying\nconstraints strictly). The space of control function is Rn+2!R3nforn2f30;40;50g,\nwith multiple equality and inequality constrains. The shaded area depicts the mean \u0006\nthe standard deviation over \fve di\u000berent random seeds. Reprinted from [67].\n1.2 A brief introduction to multilevel Picard approximation meth-\nods\nDespite the great performance of deep learning-based approximation schemes in various\nnumerical simulations, until today, there is no rigorous mathematical analysis in the\n6\nscienti\fc literature which proves (or disproves) the conjecture that there exists a deep\nlearning-based approximation method which overcomes the curse of dimensionality in the\nnumerical approximation of PDEs. However, for MLP approximation methods it has been\nproven in the scienti\fc literature that such approximation methods do overcome the curse\nof dimensionality in the numerical approximation of semilinear PDEs with general time\nhorizons and, in particular, the convergence results for MLP approximation methods (see\n[9, 37, 89, 90, 7, 53, 6, 86, 91, 87] and Section 6 below for details) have revealed that\nsemilinear PDEs with general time horizons can be approximated without the curse of\ndimensionality.\nLet us brie\ry illustrate this in the case of semilinear heat PDEs with bounded initial\nvalue functions. Let T;c2(0;1), letf:R!Rbe Lipschitz continuous, for every\nd2Nletud2C1;2([0;T]\u0002Rd;R), and assume for every d2N,t2[0;T],x2Rdthat\njud(t;x)j\u0014cand\n(@\n@tud)(t;x) = (\u0001xud)(t;x) +f(ud(t;x)): (18)\nIn the linear case where f\u00110 vanishes, it is known for a long time that classical Monte-\nCarlo approximation methods can approximate ud(T;0)2Rwith\u000b= 1 and\f= 2\nin (4). In the general nonlinear case, classical Monte-Carlo approximation methods are\nnot applicable anymore but it has recently been shown in the scienti\fc literature (see\nHutzenthaler et al. [89] and Theorem 3 below) that for every arbitrarily small \u000e2(0;1]\nit holds that MLP approximation methods can approximate ud(T;0)2Rin the general\nnonlinear case with \u000b= 1 and\f= 2 +\u000ein (4). The convergence results for MLP\napproximation methods in the scienti\fc literature have thus revealed that semilinear heat\nPDEs can, up to an arbitrarily small polynomial order of convergence, been solved with\nthe same computational complexity as linear heat PDEs.\nIn the following we brie\ry sketch some of the main ideas in the derivation of MLP\napproximation schemes. One of the key ideas in the derivation and the convergence\nanalysis of the MLP approximation scheme is to rewrite the PDE in (18) as a stochastic\n\fxed point equation. More formally, we note that (18) ensures that for all t2[0;T],\nx2Rdit holds that\nu(t;x) =E\u0014\nu\u0000\n0;x+p\n2Wt\u0001\n+Zt\n0f\u0000\nu(s;x+p\n2Wt\u0000s)\u0001\nds\u0015\n: (19)\nwhereW: [0;T]\u0002\n!Rdis a standard Brownian motion on a probability space (\n ;F;P)\n(cf., e.g., Beck et al. [8, Theorem 1.1]). 
Now we can also write (19) as the \fxed point\nequationu= \b(u) where \b is the self-mapping on the set of all bounded functions in\nC([0;T]\u0002Rd;R) which is described through the right hand side of (19). Using \b one\ncan de\fne Picard iterates un,n2N0, through the recursion that for all n2Nit holds\nthatu0= 0 and un= \b(un\u00001). In the next step we note that a telescoping sum argument\nshows that for all n2Nit holds that\nun=u1+n\u00001X\nk=1[uk+1\u0000uk] = \b(0) +n\u00001X\nl=1[\b(uk)\u0000\b(uk\u00001)]: (20)\nMLP approximations are then derived by approximating the expectations in (20) within\nthe \fxed point mapping \b by means of Monte-Carlo approximations with di\u000berent levels\nof accuracy.\n7\nThe procedure in (20) is inspired by multilevel Monte Carlo (MLMC) approximations\nin the scienti\fc literature (see Heinrich [74], Heinrich & Sindambiwe [76], Giles [51] and,\ne.g., [75, 52] and the references mentioned therein). There are, however, also several\ndi\u000berences when comparing MLP approximations to \\classical\" MLMC approximations.\nIn particular, we note that MLP approximations are full history recursive in the sense that\nfor alln2Nwe have that computations of realizations of MLP approximations in the n-th\niterate require realizations for MLP approximations in the 1st, 2nd, :::, (n\u00001)-th iterate\nthe analysis of MLP approximations (see (74) in Theorem 3 below for details). Taking this\ninto account, the convergence analysis for MLP approximations in the scienti\fc literature\nturns out to be more much subtle, and we refer to Section 6 below for a sketch of the\nsome of the main ideas for the complexity analysis of MLP approximations and also for\nreferences to research articles on MLP approximations in the scienti\fc literature.\nIn the comparison between classical linear Monte-Carlo methods and MLP approxima-\ntion methods in (18) above we have restricted ourselves just for simplicity to the problem\nof approximating semilinear heat PDEs with bounded solutions at the space-time point\nt=T,x= 0 and similar results hold in the case of much more general classes of PDEs\nand more general approximation problems. Until today, MLP approximation schemes are\nthe only approximation schemes in the scienti\fc literature for which it has been proved\nthat they do overcome the curse of dimensionality in the numerical approximation of\nsemilinear PDEs with general time horizons. We refer to Section 6 for further details\nand, in particular, for a comprehensive literature overview on research articles on MLP\napproximation methods.\n2 General remarks about algorithms for solving PDEs\nin high dimensions\nWe begin with a brief overview of the algorithms that have been developed for high\ndimensional PDEs.\nSpecial classes of PDEs: The representative special classes of PDEs which we review\nwithin this subsection are second-order linear parabolic PDEs of the Kolmogorov type\nand \frst-order Hamilton-Jacobi equations.\nConsider the linear parabolic PDE with a terminal condition speci\fed at t=Tde-\nscribed by\n@u\n@t+1\n2Tr\u0000\n\u001b\u001bT(Hessxu)\u0001\n+hrxu;\u0016i+f= 0; u(T;\u0001) =g(\u0001) (21)\nOur objective is to compute u(0;\u0001). 
For this purpose, we consider the di\u000busion process\ndescribed by the stochastic di\u000berential equation (SDE)\ndXt=\u0016(t;Xt)dt+\u001b(t;Xt)Wt: (22)\nThe solution to the PDE in (21) can be expressed as an expectation in the sense that\nu(0;x) =E\u0014\ng(XT) +ZT\n0f(s;Xs)ds\f\f\f\fX0=x\u0015\n: (23)\n8\nThis is the classical Feynman-Kac formula [98, 119]. Using this formula, one can readily\nevaluateu(0;\u0001) using Monte Carlo without su\u000bering from CoD.\nIn a similar spirit, solutions of the Hamilton-Jacobi equations can also be expressed\nusing the Hopf formula. Consider the PDE\n@u\n@t+H(ru) = 0; u (x;0) =g(x): (24)\nAssume that His convex. Then we have the Hopf formula:\nu(x;t) = inf\ny\u0012\ng(y) +tH\u0003\u0010x\u0000y\nt\u0011\u0013\n: (25)\nwhereH\u0003is the Legendre transform of H(see Evans [43]). The right hand side of the\nabove equation can be computed using some optimization algorithms. A particularly\nattractive algorithm along these lines was developed in Darbon & Osher [32].\nControl problems: The \frst application of deep learning to solving scienti\fc computing\nproblems in high dimensions was in the area of stochastic control. In 2016, Han and E [67]\ndeveloped a deep neural network-based algorithm for stochastic control problems. The\nreason for choosing stochastic control was its very close analogy with the setup of deep\nlearning, as we will see later (see Section 4 below). Deep learning-based algorithms for\ndeterministic control problems were \frst developed in [117].\nSchr odinger equation for spins and electrons: In an in\ruential paper, Carleo and\nTroyer developed an algorithm for solving the Schr odinger equation for spins using the\nrestricted Boltzmann machine (RBM) as the trial function. The variational Monte Carlo\n(VMC) approach was used for ground-state calculations. To solve the dynamic equation,\nthe least square approach was used, i.e., the total integral of the square of the residual\nwas used as the loss function [22].\nFor many-electron Schr odinger equation, the story is quite di\u000berent. The con\fguration\nspace is now continuous, instead of being discrete. In addition, the wave function should\nsatisfy the anti-symmetry constraint. This has proven to be a di\u000ecult issue in solving\nthe Schr odinger equation. In [73], Han, Zhang, and E developed a deep neural network-\nbased methodology for computing the ground states. Compared with traditional VMC, a\npermutation-symmetric neural network-based ansatz is used for the Jastrow factor. The\nanti-symmetric part was treated in a rather simpli\fed fashion. This has been improved\nin the work [112, 122, 81].\nDespite these progresses, it is fair to say that we are still at an early stage for developing\nneural network-based algorithms for the many-body Schr odinger equation. Since the\nissues for solving the Schr odinger equation are quite specialized, we will not discuss them\nfurther in this review.\nNonlinear parabolic PDEs: The \frst class of algorithms developed for general non-\nlinear parabolic PDEs with general time horizons in really high dimensions ( d\u001540)\nis the multilevel Picard method [37, 38, 89]. At this moment, this is also the only al-\ngorithm for which a relatively complete theory has been established (see Section 6 be-\nlow). Among other things, this theory o\u000bers a proof that the MLP method overcomes\n9\nCoD. 
Shortly after, E, Han, and Jentzen developed the deep neural network-based Deep\nBSDE method , by making use of the connection between nonlinear parabolic equations\nand backward stochastic di\u000berential equations (BSDE) [36, 70]. This was the \frst sys-\ntematic application of deep learning to solving general high dimensional PDEs. Later,\nSirignano and Spiliopoulos developed an alternative deep learning-based algorithm using\nthe least squares approach [132], extending the work of Carleo and Troyer [22] to gen-\neral PDEs. Such deep learning-based approximation methods for PDEs have also been\nextended in di\u000berent ways and to other parabolic and even elliptic problems; see, e.g.,\n[5, 10, 126, 4, 12, 85, 25, 78, 73, 94, 72].\nSome special semilinear parabolic PDEs can be formulated in terms of branching pro-\ncesses. One such example is the Fisher-KPP (Kolmogorov-Petrovski-Piscounov) equation\n[48, 103, 115]. For such PDEs, Monte Carlo methods can be developed, and such Monte\nCarlo approximation algorithms overcome the CoD (see [133, 142, 77, 80, 26, 139, 79]) in\nthe speci\fc situation where the time horizon and/or the PDE nonlinearity is su\u000eciently\nsmall.\nVariational problems: It is fairly straightforward to construct neural network-based\nalgorithms for solving variational problems. One way of doing this is simply to use the\nRitz formulation. The \\Deep Ritz method\", to be discussed below, is such an example;\nsee E & Yu [41] and also Khoo et al. [99]. It is natural to ask whether one can develop a\nsimilar Galerkin formulation, i.e., using a weak form of the PDE. In fact, [41] was written\nas a preparation for developing what would be a \\Deep Galerkin method\". However,\nformulating robust neural network algorithms using a weak form has proven to be quite\nproblematic. The di\u000eculty seems to be analogous to the ones encountered in generative\nadversarial networks (GAN); cf. Goodfellow et al. [62]. For some progress in this direction\nwe refer to the article Zhang et al. [143]. It should also be noted that even though the\ndeep learning-based methodology proposed in Sirignano & Spiliopoulos [132] was named\nas a \\Deep Galerkin method\", the methodology in [132] is somehow based on a least\nsquare formulation rather than a Galerkin formulation.\nParametric PDEs: One of the earliest applications of deep learning to PDEs is in\nthe study of parametric PDEs. In [100], Khoo, Lu, and Ying developed a methodology\nfor solving low dimensional PDEs with random coe\u000ecients in which the neural network\nmodels are used to parametrize the random coe\u000ecients. Recently the neural networks\nare also applied to solve low-dimensional stochastic PDEs [144]. This is a promising\ndirection thought it will not be covered in this review. Another closely related area is\nsolving inverse problems governed by PDEs, which is intrinsically high dimensional as well.\nRecent works [127, 45, 46, 101, 27] have demonstrated the advantages of approximating\nthe forward and inverse maps with carefully designed neural networks.\nGame theory A stochastic game describes the behavior of a population of interactive\nagents among which everyone makes his/her optimal decision in a common environment.\nMany scenarios in \fnance, economics, management science, and engineering can be for-\nmulated as stochastic games. With a \fnite number of agents, the Markovian Nash equi-\nlibrium of a game is determined by a coupled system of parabolic PDEs. To solve these\nproblems, Han et al. 
extend the Deep BSDE method in [68, 69] with the idea of \fctitious\n10\nplay. In a di\u000berent direction, with an in\fnite number of agents and no common noise,\none can use the mean-\feld game theory developed in [107, 108, 109, 84, 83] to reduce the\ncharacterization of the Nash equilibrium to two coupled equations: a Hamilton-Jacobi-\nBellman equation and a Fokker-Planck equation. Neural network-based algorithms have\nbeen developed in [23, 24, 131, 111] to solve these equations.\nBesides the literature mentioned above, certain deep learning-based approximation\nmethods for PDEs have been proposed (see, e.g., [16, 34, 47, 49, 63, 92, 113, 114, 118,\n124, 125, 21, 145]) and various numerical simulations for such methods suggest that deep\nneural network approximations might have the capacity to indeed solve high dimensional\nproblems e\u000eciently. Actually, the attempts of applying neural networks for solving PDEs\ncan be dated back to the 90s (cf., e.g., [110, 136, 106, 96]), nevertheless, with a focus\non low-dimensional PDEs. Apart from neural networks, there are also other attempts in\nliterature in solving high dimensional PDEs with limited success (see, e.g., [3, 137, 146,\n19, 57, 33, 14, 56, 20, 50, 55, 44, 28, 31, 29, 30, 105, 123, 66, 58, 60, 59, 141, 140, 130, 18]).\nThis review will be focused on nonlinear parabolic PDEs and related control problems.\nThere are two main reasons for choosing these topics. The \frst is that these classes of\nproblems are fairly general and have general interest. The second is that the study of\nhigh dimensional problems is in better shape for these classes of problems, compared to\nothers (e.g., the Schr odinger equation discussed above).\nWe should also mention that the heart of reinforcement learning is solving approx-\nimately the Bellman equation, even though reinforcement learning algorithms are not\nalways formulated that way. The dimensionality in these problems is often very high.\nThis is another topic that will not be covered in this review.\n3 The Deep BSDE method\nThe Deep BSDE method was the \frst deep learning-based numerical algorithm for solving\ngeneral nonlinear parabolic PDEs in high dimensions [36, 70]. It begins by reformulating\nthe PDE as a stochastic optimization problem. This is done with the help of BSDEs,\nhence the name \\Deep BSDE method\". As a by-product, the Deep BSDE method is also\nan e\u000ecient algorithm for solving high dimensional BSDEs.\n3.1 PDEs and BSDEs\nConsider the semilinear parabolic PDE\n@u\n@t+1\n2Tr\u0000\n\u001b\u001bT(Hessxu)\u0001\n+hru;\u0016i+f\u0000\nt;x;u;\u001bT(rxu)\u0001\n= 0; u(T;x) =g(x):(26)\nIn the same way as in Section 2 above, we consider the di\u000busion process\nXt=\u0018+Zt\n0\u0016(s;Xs)ds+Zt\n0\u001b(s;Xs)dWs: (27)\n11\nUsing It^ o's lemma, we obtain that\nu(t;Xt)\u0000u(0;X0)\n=\u0000Zt\n0f\u0000\ns;Xs;u(s;Xs);[\u001b(s;Xs)]T(rxu)(s;Xs)\u0001\nds\n+Zt\n0[ru(s;Xs)]T\u001b(s;Xs)dWs:(28)\nTo proceed further, we recall the notion of backward stochastic di\u000berential equations\n(BSDEs)8\n>>><\n>>>:Xt=\u0018+Zt\n0\u0016(s;Xs)ds+Zt\n0\u001b(s;Xs)dWs;\nYt=g(XT) +ZT\ntf(s;Xs;Ys;Zs)ds\u0000ZT\nt(Zs)TdWs(29)\n(30)\nintroduced by Pardoux and Peng [120]. It was shown in [120, 121] that there is an up-\nto-equivalence unique adapted stochastic process ( Xt;Yt;Zt),t2[0;T], with values in\nRd\u0002R\u0002Rdthat satis\fes the pair of stochastic equations in (29){(30) above.\nThe connection between the BSDE in (29){(30) and the PDE in (26) is as follows\n[120, 121]. 
Let $u\colon[0,T]\times\mathbb{R}^d\to\mathbb{R}$ be a solution of the PDE in (26) and define

$$Y_t = u(t, X_t) \quad\text{and}\quad Z_t = [\sigma(t, X_t)]^T(\nabla_x u)(t, X_t). \tag{31}$$

Then $(Y_t, Z_t)$, $t\in[0,T]$, is a solution of the BSDE in (29)–(30). With this connection in mind, one can formulate the PDE problem as the following variational problem:

$$\inf_{Y_0,\, \{Z_t\}_{0\le t\le T}} \mathbb{E}\big[\,|g(X_T) - Y_T|^2\,\big], \tag{32}$$
$$\text{s.t.}\quad X_t = \xi + \int_0^t \mu(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s, \tag{33}$$
$$Y_t = Y_0 - \int_0^t f(s, X_s, Y_s, Z_s)\,ds + \int_0^t (Z_s)^T dW_s. \tag{34}$$

The minimizer of this variational problem is the solution to the PDE, and vice versa.

3.2 The Deep BSDE method

A key idea of the Deep BSDE method is to approximate the unknown functions $X_0\mapsto u(0, X_0)$ and $X_t\mapsto[\sigma(t, X_t)]^T(\nabla_x u)(t, X_t)$ by feedforward neural networks $\psi$ and $\phi$. To that end, we work with the variational formulation described above and discretize time, say using the Euler scheme on a grid $0 = t_0 < t_1 < \dots < t_N = T$:

$$\inf_{\psi_0,\, \{\phi_n\}_{n=0}^{N-1}} \mathbb{E}\,|g(X_T) - Y_T|^2, \tag{35}$$
$$\text{s.t.}\quad X_0 = \xi, \quad Y_0 = \psi_0(\xi), \tag{36}$$
$$X_{t_{n+1}} = X_{t_n} + \mu(t_n, X_{t_n})\,\Delta t + \sigma(t_n, X_{t_n})\,\Delta W_n, \tag{37}$$
$$Z_{t_n} = \phi_n(X_{t_n}), \tag{38}$$
$$Y_{t_{n+1}} = Y_{t_n} - f(t_n, X_{t_n}, Y_{t_n}, Z_{t_n})\,\Delta t + (Z_{t_n})^T\Delta W_n. \tag{39}$$

At each time slice $t_n$ we associate a subnetwork. We can stack all these subnetworks together to form a deep composite neural network. This network takes the paths $\{X_{t_n}\}_{0\le n\le N}$ and $\{W_{t_n}\}_{0\le n\le N}$ as the input data and gives the final output, denoted by $\hat u(\{X_{t_n}\}_{0\le n\le N}, \{W_{t_n}\}_{0\le n\le N})$, as an approximation to $u(t_N, X_{t_N})$.

The error in matching the given terminal condition defines the loss function

$$l(\theta) = \mathbb{E}\Big[\big|g(X_{t_N}) - \hat u\big(\{X_{t_n}\}_{0\le n\le N}, \{W_{t_n}\}_{0\le n\le N}\big)\big|^2\Big]. \tag{40}$$

Figure 2: Network architecture for solving parabolic PDEs. Each column corresponds to a subnetwork at time $t = t_n$. The whole network has $(H+1)(N-1)$ layers in total that involve free parameters to be optimized simultaneously. Reprinted from [70].

From the viewpoint of machine learning, this neural network model has several interesting features.

1. It does not require us to generate training data beforehand. The paths $\{W_{t_n}\}_{0\le n\le N}$ play the role of the data and they are generated on the fly. For this reason, one can think of this model as a model with an infinite amount of data.

2. For the same reason, it is very natural to use stochastic gradient descent (SGD) to train the network.

3. The network has a very natural "residual neural network" structure embedded in the stochastic difference equations. For example:

$$u(t_{n+1}, X_{t_{n+1}}) - u(t_n, X_{t_n}) \approx -f\big(t_n, X_{t_n}, u(t_n, X_{t_n}), \phi_n(X_{t_n})\big)\,\Delta t_n + (\phi_n(X_{t_n}))^T\Delta W_n. \tag{41}$$

3.3 Some numerical examples

Next we examine the effectiveness of the algorithms described above. We will discuss two examples: the first is a canonical benchmark problem, the linear-quadratic Gaussian control problem (LQG); the second is a nonlinear Black-Scholes model. We use the simplest implementation of Deep BSDE: each subnetwork has 3 layers, with 1 input layer ($d$-dimensional), 2 hidden layers (both $(d+10)$-dimensional), and a $d$-dimensional output. We choose the rectifier function (ReLU) as the activation function and optimize with the Adam method [102].
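To make the scheme (35)–(40) concrete, the following is a minimal PyTorch sketch of the Deep BSDE method, specialized to the LQG example of the next subsection. The terminal condition $g(x) = \ln(\tfrac{1}{2}(1+\|x\|^2))$ is the choice made in [70]; all hyper-parameter values here are illustrative assumptions, not the exact settings used for the reported results.

```python
import torch
import torch.nn as nn

d, N, T, lam, batch = 100, 20, 1.0, 1.0, 64   # illustrative hyper-parameters
dt = T / N

def g(x):
    # terminal condition used for the LQG example in [70]: g(x) = ln((1 + |x|^2)/2)
    return torch.log(0.5 * (1.0 + (x ** 2).sum(dim=1, keepdim=True)))

def f(y, z):
    # nonlinearity of the HJB equation (44): with sigma = sqrt(2) I we have
    # z = sqrt(2) * grad u, hence -lam * |grad u|^2 = -(lam / 2) * |z|^2
    return -0.5 * lam * (z ** 2).sum(dim=1, keepdim=True)

def subnet():
    # one subnetwork phi_n in (38): input dimension d, two hidden layers of width d + 10
    return nn.Sequential(nn.Linear(d, d + 10), nn.ReLU(),
                         nn.Linear(d + 10, d + 10), nn.ReLU(),
                         nn.Linear(d + 10, d))

class DeepBSDE(nn.Module):
    def __init__(self):
        super().__init__()
        self.y0 = nn.Parameter(torch.zeros(1))      # trainable Y_0 = u(0, xi)
        self.z0 = nn.Parameter(torch.zeros(1, d))   # trainable Z_0
        self.phi = nn.ModuleList(subnet() for _ in range(N - 1))

    def forward(self, m):
        x = torch.zeros(m, d)                       # xi = (0, ..., 0)
        y = self.y0.expand(m, 1)
        z = self.z0.expand(m, d)
        for n in range(N):
            dw = dt ** 0.5 * torch.randn(m, d)      # Brownian increments, generated on the fly
            y = y - f(y, z) * dt + (z * dw).sum(dim=1, keepdim=True)   # (39)
            x = x + 2.0 ** 0.5 * dw                 # (37) with mu = 0, sigma = sqrt(2) I
            if n < N - 1:
                z = self.phi[n](x)                  # (38): Z at the next grid point
        return x, y

model = DeepBSDE()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    x, y = model(batch)
    loss = ((g(x) - y) ** 2).mean()                 # empirical version of the loss (40)
    opt.zero_grad(); loss.backward(); opt.step()
print(model.y0.item())                              # approximation of u(0, (0, ..., 0))
```

Note how the sketch reflects the features listed above: fresh Brownian paths are drawn in every forward pass, and each pass through the loop is one "residual block" of the composite network.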
We will report the mean and the standard deviation of the relative error from 5 independent runs with different random seeds.

LQG (linear-quadratic Gaussian)

Consider the stochastic dynamic model in 100 dimensions:

$$dX_t = 2\sqrt{\lambda}\, m_t\, dt + \sqrt{2}\, dW_t, \tag{42}$$

with cost functional

$$J(\{m_t\}_{0\le t\le T}) = \mathbb{E}\Big[\int_0^T \|m_t\|_2^2\, dt + g(X_T)\Big]. \tag{43}$$

The associated HJB equation is given by

$$\frac{\partial u}{\partial t} + \Delta u - \lambda\|\nabla u\|_2^2 = 0. \tag{44}$$

The solution to this HJB equation can be expressed as

$$u(t, x) = -\frac{1}{\lambda}\ln\Big(\mathbb{E}\big[\exp\big(-\lambda g(x + \sqrt{2}\, W_{T-t})\big)\big]\Big). \tag{45}$$

This formula can be evaluated directly using Monte Carlo. Therefore this problem serves as a good model for validating algorithms. The results from the Deep BSDE method are shown in Figure 3. We see that the accuracy of the trained solution improves along the training curve before it saturates.

Figure 3: Left: relative error of the Deep BSDE method for $u(t{=}0, x{=}(0,\dots,0))$ when $\lambda = 1$, which achieves a 0.17% relative error in a runtime of 330 seconds. Right: optimal cost $u(t{=}0, x{=}(0,\dots,0))$ for different values of $\lambda$. The shaded area depicts the mean ± the standard deviation over five different random seeds. Reprinted from [70].
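Since (45) admits a direct Monte Carlo evaluation, reference values of the kind used for validation can be reproduced in a few lines. A sketch, under the same assumed terminal condition $g$ as in the Deep BSDE sketch above:

```python
import torch

d, T, lam, M = 100, 1.0, 1.0, 100_000   # illustrative sample size M

def g(x):
    return torch.log(0.5 * (1.0 + (x ** 2).sum(dim=1)))

# formula (45) at t = 0, x = (0, ..., 0): u = -(1/lam) * log E[exp(-lam * g(sqrt(2) W_T))]
W = T ** 0.5 * torch.randn(M, d)
u0 = -torch.log(torch.exp(-lam * g(2.0 ** 0.5 * W)).mean()) / lam
print(u0.item())
```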
Black-Scholes equation with default risk

The pricing model for financial derivatives should take into account the whole basket of underlyings, which results in high dimensional PDEs. In addition, the classical Black-Scholes model can and should be augmented by some important factors in real markets, including the effect of default, transaction costs, uncertainties in the model parameters, etc. Taking these effects into account leads to nonlinear Black-Scholes type models.

We study a particular case of the recursive valuation model with default risk [35, 15]. The underlying asset price moves as a geometric Brownian motion, and the possible default is modeled by the first jump time of a Poisson process. The claim value is modeled by the nonlinear Black-Scholes model with

$$f\big(t, x, u(t, x), \sigma^T(t, x)\nabla u(t, x)\big) = -(1-\delta)\, Q(u(t, x))\, u(t, x) - R\, u(t, x), \tag{46}$$

where $Q$ is some nonlinear function. We will consider the fair price of a European claim based on 100 underlying assets, conditional on no default having occurred yet. This leads to a problem with $d = 100$. Figure 4 presents the results of Deep BSDE and multilevel Picard for this nonlinear Black-Scholes equation with $d = 100$. Reported in the figure is the approximate solution at $t = 0$, $x = (100,\dots,100)$. For this problem we cannot find the "exact solution". Therefore we use the results of the two different methods to calibrate each other.

Figure 4: Approximation of $u(t{=}0, x{=}(100,\dots,100))$ as a function of the number of iteration steps. The Deep BSDE method achieves a relative error of size 0.46% in a runtime of 617 seconds. The shaded area depicts the mean ± the standard deviation over five different random seeds. Reprinted from [70].

3.4 Analysis of the Deep BSDE method

There is not yet a complete theory for the analysis of the Deep BSDE method. We will review the existing results that have been obtained so far. Here, instead of bounding the cost required for reducing the error to a certain tolerance $\varepsilon$, we bound the error associated with certain hyper-parameters, such as the time step size $\Delta t$ and the size of the neural network models. The basic strategy is to reduce the problem to bounding the generalization error for supervised learning [40]. In order to do that, we need to do the following: (1) estimate the error in the time discretization; (2) prove that the functions that need to be approximated by neural networks belong to the right function class, and bound their norms in that function class; (3) adapt the analysis for supervised learning problems to the current setting. For two-layer neural network models, the relevant function class is the Barron space [39]. At this point, only step (1) has been accomplished.

Theorem 1 (A posteriori estimates [71]). Under some assumptions, there exists a constant $C$, independent of $h$, $d$, and $m$, such that for sufficiently small $h$,

$$\sup_{t\in[0,T]}\big(\mathbb{E}|X_t - \hat X_t^\pi|^2 + \mathbb{E}|Y_t - \hat Y_t^\pi|^2\big) + \int_0^T \mathbb{E}|Z_t - \hat Z_t^\pi|^2\, dt \le C\big[h + \mathbb{E}|g(X_T^\pi) - Y_T^\pi|^2\big], \tag{47}$$

where $\hat X_t^\pi = X_{t_i}^\pi$, $\hat Y_t^\pi = Y_{t_i}^\pi$, $\hat Z_t^\pi = Z_{t_i}^\pi$ for $t\in[t_i, t_{i+1})$.

Theorem 2 (Upper bound of the optimal loss [71]). Under some assumptions, there exists a constant $C$, independent of $h$, $d$, and $m$, such that for sufficiently small $h$,

$$\mathbb{E}|g(X_T^\pi) - Y_T^\pi|^2 \le C\Big\{h + \mathbb{E}|Y_0 - \mu_0^\pi(\xi)|^2 + \sum_{i=0}^{N-1}\mathbb{E}\big|\mathbb{E}[\tilde Z_{t_i}\mid X_{t_i}^\pi, Y_{t_i}^\pi] - \phi_i^\pi(X_{t_i}^\pi, Y_{t_i}^\pi)\big|^2\, h\Big\}, \tag{48}$$

where $\tilde Z_{t_i} = h^{-1}\mathbb{E}[\int_{t_i}^{t_{i+1}} Z_t\, dt \mid \mathcal{F}_{t_i}]$. If $b$ and $\sigma$ are independent of $Y$, the term $\mathbb{E}[\tilde Z_{t_i}\mid X_{t_i}^\pi, Y_{t_i}^\pi]$ can be replaced with $\mathbb{E}[\tilde Z_{t_i}\mid X_{t_i}^\pi]$.

4 Control problems in high dimensions

Optimal control is one of the areas where high dimensional problems are often encountered. In fact, the term "curse of dimensionality" was first coined by Richard Bellman in the context of dynamic programming for control problems [13]. Regarding CoD, there is an important difference between open- and closed-loop controls that we now explain. Consider the optimal control problem with a finite horizon $T$:

$$\min_u\; g(x(T)) + \int_0^T L(t, x(t), u(t))\, dt, \qquad \text{subject to}\quad \dot x(t) = f(t, x, u), \quad x(0) = x_0. \tag{49}$$

Here $x\colon[0,T]\to\mathcal{X}\subseteq\mathbb{R}^d$ is the state, $u\colon[0,T]\to\mathcal{U}\subseteq\mathbb{R}^m$ is the control, $g\colon\mathcal{X}\to\mathbb{R}$ is the terminal cost, and $L\colon[0,T]\times\mathcal{X}\times\mathcal{U}\to\mathbb{R}$ is the running cost. For fixed $x_0$, the problem above can be thought of as a two-point boundary value problem over the time interval $[0,T]$, and the optimal control can be sought in the form

$$u = u^*(t, x^*(t)), \tag{50}$$

where $x^*$ denotes the optimal path. We refer to [128] for a review of the numerical algorithms for solving this kind of two-point boundary value problem. In this case, CoD is not really an issue, since the dimensionality of the independent variable is just 1. Controls of the form (50) are called open-loop controls. In this case, the optimal control is only known along the optimal path. Once the system deviates from the optimal path, one has to either recompute the optimal control or force the system back to the optimal path. In many applications, one prefers a closed-loop control, or feedback control,

$$u = u^*(t, x), \tag{51}$$

where the optimal control is known at every point in the state space. Closed-loop controls are functions of the state variable, and this is where the CoD problem arises.
To characterize open- and closed-loop controls, let

$$\tilde H(t, x, \lambda, u) := L(t, x, u) + \lambda^T f(t, x, u) \tag{52}$$

be the extended Hamiltonian associated with this control problem, and define

$$u^*(t, x, \lambda) = \arg\min_{u\in\mathcal{U}} \tilde H(t, x, \lambda, u). \tag{53}$$

Here $\lambda$ is the co-state variable. An important result is that the solution to the optimal control problem satisfies Pontryagin's minimum principle:

$$\dot x(t) = \frac{\partial \tilde H}{\partial \lambda} = f(t, x, u^*(t, x, \lambda)), \qquad \dot\lambda(t) = -\frac{\partial \tilde H}{\partial x}(t, x, \lambda, u^*(t, x, \lambda)), \qquad \dot v(t) = -L(t, x, u^*(t, x, \lambda)), \tag{54}$$

with the boundary conditions

$$x(0) = x_0, \qquad \lambda(T) = \nabla g(x(T)), \qquad v(T) = g(x(T)). \tag{55}$$

Denote by $V$ the value function of the control problem:

$$V(t, x) := \inf_{u\in\mathcal{U}}\Big\{g(y(T)) + \int_t^T L(\tau, y, u)\, d\tau\Big\}, \tag{56}$$

subject to $\dot y(\tau) = f(\tau, y, u)$ and $y(t) = x$. Define the Hamiltonian

$$H^*(t, x, \lambda) := \tilde H(t, x, \lambda, u^*). \tag{57}$$

The HJB equation can be written as

$$V_t(t, x) + H^*(t, x, V_x) = 0, \tag{58}$$

with the terminal condition $V(T, x) = g(x)$. The co-state and the closed-loop optimal control are given in terms of the value function by

$$\lambda(t) = \nabla_x V(t, x(t)), \tag{59}$$
$$u^*(t, x) = \arg\min_{u\in\mathcal{U}} \tilde H(t, x, \nabla_x V, u). \tag{60}$$

To obtain an accurate approximation to the closed-loop control, we need to solve the control problem for a large set of initial conditions, if not all. The formulation (49) is for a single initial condition. To extend it to all initial conditions, we consider instead the problem

$$\min_u\; \mathbb{E}_{x_0\sim\mu}\Big(g(x(T)) + \int_0^T L(t, x(t), u(t, x(t)))\, dt\Big), \tag{61}$$

subject to $\dot x(t) = f(t, x(t), u(t, x(t)))$, $x(0) = x_0$. Here the optimization is over all possible policy functions $u$. One question that naturally arises is how we should choose the distribution $\mu$ for the initial condition. Clearly we are only interested in states whose value functions are not very large. Therefore one possible choice is the Gibbs distribution for the value function:

$$\mu = \frac{1}{Z} e^{-\beta V}, \tag{62}$$

where $Z$ is a normalization factor and $\beta$ is a positive hyper-parameter.

Unlike the stochastic case, for which the training data is obtained on the fly, here one needs to address the issue of data generation explicitly. The following strategy was proposed in [117, 97]:

• The two-point boundary value problem (54)–(55) is solved to obtain the training data.
• A neural network model is trained for the value function.

In practice, (54)–(55) is not an easy problem to solve, and it is important to look for a small yet representative training dataset. The following ideas were proposed and tested in [117, 97].

The first is called "warm start". The basic idea is to choose initializations for the iterative algorithms for (54)–(55) that help guarantee convergence. For example, one can start with small values of $T$, in which case the convergence of the iterative algorithms is much less of an issue. One can use simple extrapolations of these solutions on longer time intervals as initializations and obtain converged solutions on longer intervals. This process can be continued. In addition, once a reasonable approximation of the policy and value functions is obtained, one can use it to help initialize the two-point boundary value problem.

The second is to explore adaptive sampling, which has been explored in a similar context in [147]. As for all adaptive algorithms, the key issue is an error indicator: the larger the error, the more data are needed. [147] uses the variance of the predictions from an ensemble of similar machine learning models as the error indicator. A sophisticated error indicator that makes use of the variance of the gradient of the loss function was proposed in [117]. Another idea is to simply use the magnitude of the gradient of the value function as the error indicator. A minimal sketch of the ensemble-variance indicator is given below.
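The following hedged sketch scores candidate initial states by the disagreement of an ensemble of value-function models and keeps the highest-scoring ones for new data generation. The ensemble members, the candidate distribution, and all sizes are placeholders: in practice the models would be trained on the data generated so far from the two-point boundary value problem (54)–(55).

```python
import torch
import torch.nn as nn

d, K = 6, 5   # state dimension and ensemble size (illustrative)

# placeholder ensemble of value-function models; assumed already trained in practice
ensemble = [nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, 1))
            for _ in range(K)]

def error_indicator(x):
    # variance across the ensemble predictions, as in [147]
    preds = torch.stack([net(x) for net in ensemble])   # shape (K, n, 1)
    return preds.var(dim=0).squeeze(-1)                 # shape (n,)

candidates = torch.randn(1000, d)                       # candidate initial states
scores = error_indicator(candidates)
new_x0 = candidates[scores.topk(50).indices]            # states to feed to new BVP solves
```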
5 Ritz, Galerkin, and least squares

The Ritz, Galerkin, and least squares formulations are among the most commonly used frameworks for designing numerical algorithms for PDEs. The Ritz formulation is based on a variational principle. The Galerkin formulation is based on the weak formulation of a PDE, which involves both trial and test functions. The least squares formulation is a very general approach for turning a PDE problem into a variational problem by minimizing the squared residual of the PDE. It has the advantage of being general and straightforward to think about. However, in classical numerical analysis it is often the least preferred, since the numerical problem obtained this way tends to be worse conditioned than the ones obtained using the Ritz or Galerkin formulation. Designing machine learning-based algorithms using the Ritz and least squares formulations is rather straightforward: since there is a variational principle behind both formulations, one can simply replace the space of trial functions in these variational principles by the hypothesis space of a machine learning model. Since machine learning is also a fundamentally optimization-based approach, the integration of machine learning with variational methods for PDEs is quite seamless. Indeed, these were among the first set of ideas proposed for machine learning-based numerical algorithms for PDEs [22, 41, 132]. For the same reason, designing machine learning-based algorithms using the Galerkin formulation is a different matter, since Galerkin is not an optimization-based approach. Rather, it is based on a weak formulation using test functions. The machine learning model closest to the Galerkin formulation is the Wasserstein GAN (WGAN) [1, 2]: in WGAN, the discriminator plays the role of the test function and the generator plays the role of the trial function.

5.1 The Deep Ritz method

The Deep Ritz method was proposed in [41]. Consider the variational problem [43]

$$\min_{u\in H} I(u), \tag{63}$$

where

$$I(u) = \int_\Omega\Big(\frac{1}{2}|\nabla u(x)|^2 - f(x)u(x)\Big)\, dx \tag{64}$$

and $H$ is the set of admissible (trial) functions, here represented by $u$; $f$ is a given function, representing external forcing to the system under consideration. It is understood that boundary conditions are incorporated into the definition of $H$. The Deep Ritz method consists of the following components:

1. a deep neural network-based representation of the trial function;
2. a numerical quadrature rule for the functional;
3. an algorithm for solving the final optimization problem.

Each component is relatively straightforward. One can take the usual neural network models to represent the trial function. In high dimensions one needs an effective Monte Carlo algorithm to discretize the integral in (64). The interplay between the discretization of the integral and the discretization of the trial function using neural network models is an interesting issue that requires further attention. Finally, SGD can be used naturally, similar to the situation in Deep BSDE: the integral in the functional (64) plays the role of the expectation in Deep BSDE. A minimal sketch of these components is given below.
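The sketch minimizes a Monte Carlo estimate of (64) on the unit cube, handling the boundary condition with the penalty approach of (66) introduced below. The problem data (d = 2, f = 1), the penalty weight, the network width, and the boundary sampling scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

d, beta, m = 2, 500.0, 512          # dimension, boundary penalty (cf. (66)), batch size
act = lambda z: torch.relu(z) ** 3  # cubic rectifier; see the remark on activations below

class Net(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.l1 = nn.Linear(d, width)
        self.l2 = nn.Linear(width, width)
        self.l3 = nn.Linear(width, 1)
    def forward(self, x):
        return self.l3(act(self.l2(act(self.l1(x)))))

u = Net()
f = lambda x: torch.ones(x.shape[0], 1)   # forcing term of the Poisson problem (assumed)
opt = torch.optim.Adam(u.parameters(), lr=1e-3)
for step in range(5000):
    x = torch.rand(m, d, requires_grad=True)   # Monte Carlo quadrature points in (0, 1)^d
    grad = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    bulk = (0.5 * (grad ** 2).sum(dim=1, keepdim=True) - f(x) * u(x)).mean()   # (64)
    xb = torch.rand(m, d)                       # random points pushed to the boundary
    xb[torch.arange(m), torch.randint(0, d, (m,))] = torch.randint(0, 2, (m,)).float()
    loss = bulk + beta * (u(xb) ** 2).mean()    # penalized functional, cf. (66)
    opt.zero_grad(); loss.backward(); opt.step()
```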
One notable issue is the choice of the activation function. ReLU activation does not perform well due to the discontinuity in its derivative. It has been observed that the activation function $\sigma_3(z) = \max(z, 0)^3$ performs much better than ReLU. More careful study is needed on this issue as well. One feature of the Deep Ritz method that potentially makes it interesting even for low dimensional problems is that it is mesh-free and naturally adaptive. To examine this, we consider the well-known crack problem: computing the displacement around a crack. To this end, we consider the Poisson equation

$$-\Delta u(x) = 1, \quad x\in\Omega, \qquad u(x) = 0, \quad x\in\partial\Omega, \tag{65}$$

where $\Omega = (-1,1)\times(-1,1)\setminus[0,1)\times\{0\}$. The solution to this problem suffers from the well-known "corner singularity" caused by the nature of the domain [134]. A simple asymptotic analysis shows that at the origin the solution behaves as $u(x) = u(r,\theta)\sim r^{1/2}\sin\frac{\theta}{2}$ [134]. Models of this type have been extensively used to help develop and test adaptive finite element methods. Here the essential boundary condition causes some problems. The simplest idea is to use a penalty method and consider the modified functional

$$I(u) = \int_\Omega\Big(\frac{1}{2}|\nabla_x u(x)|^2 - f(x)u(x)\Big)\, dx + \beta\int_{\partial\Omega} u(x)^2\, ds. \tag{66}$$

An acceptable choice is $\beta = 500$. The results from the Deep Ritz method with 811 parameters in the neural network model and from the finite difference method with $\Delta x_1 = \Delta x_2 = 0.1$ (1,681 degrees of freedom) are shown in Figure 5. More quantitative comparisons can be found in [41]. Of course, adaptive numerical methods are very well developed for solving problems with corner singularities, and even more general singular problems. Nevertheless, this example shows that Deep Ritz is potentially a naturally adaptive algorithm. There are also a number of problems that need to be addressed in future work:

1. The variational problem that results from Deep Ritz is usually not convex, even when the original problem is.
2. At the present time, there are no consistent conclusions about the convergence rate.
3. The treatment of the essential boundary condition is not as simple as in traditional methods.

Some analysis of the Deep Ritz method has been carried out in [116].

Figure 5: Solutions computed by two different methods. On the left is Deep Ritz with 811 parameters. On the right is the solution of the finite difference method on a uniform grid with 1,681 parameters. Reprinted from [41].

5.2 The least squares formulation

The least squares approach was used in [22] for solving the dynamic Schrödinger equation and was subsequently developed more systematically in [132] (although [132] referred to it as a Galerkin method). The basic idea is very simple: solving the PDE

$$\mathcal{L}u = f \tag{67}$$

over a domain $\Omega$ in $\mathbb{R}^d$ can be formulated equivalently as solving the variational problem for the functional

$$J(u) = \int_\Omega \|\mathcal{L}u - f\|^2\, \mu(dx), \tag{68}$$

where $\mu$ is a suitably chosen probability distribution on $\Omega$; $\mu$ should be non-degenerate and readily sampled. With this, the least squares formulation looks very similar to the Ritz formulation, with $J$ replacing the functional $I$.
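A hedged sketch of this least squares formulation for the one-dimensional toy problem $-u'' = f$ on $(0,1)$, with $\mu$ the uniform distribution. The problem data and network are illustrative assumptions; the homogeneous boundary condition is built into the ansatz rather than penalized.

```python
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
u = lambda x: x * (1.0 - x) * net(x)                      # ansatz enforcing u(0) = u(1) = 0
f = lambda x: math.pi ** 2 * torch.sin(math.pi * x)       # exact solution sin(pi x) (assumed)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)            # samples from mu = Uniform(0, 1)
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    loss = ((-uxx - f(x)) ** 2).mean()                    # Monte Carlo estimate of J(u) in (68)
    opt.zero_grad(); loss.backward(); opt.step()
```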
5.3 The Galerkin formulation

The starting point of the Galerkin approximation is the weak form of (67):

$$a(u, \phi) = (\mathcal{L}u, \phi) = (f, \phi), \qquad u\in H_1,\ \phi\in H_2, \tag{69}$$

where $H_1$ and $H_2$ are the trial and test function spaces respectively, $\phi$ is an arbitrary test function in $H_2$, and $(\cdot,\cdot)$ is the standard $L^2$ inner product for functions. Usually some integration by parts is applied. For example, if $\mathcal{L} = -\Delta$, then up to boundary terms one has

$$a(u, \phi) = (\nabla u, \nabla\phi). \tag{70}$$

Therefore this formulation only involves first order derivatives. The most important feature of the Galerkin formulation is that it involves the test function. In this spirit, the Wasserstein GAN can also be regarded as an example of the Galerkin formulation. Given a set of data $\{x_j,\ j = 1, 2, \dots, n\}$ in $\mathbb{R}^d$ and a reference probability distribution $\nu^*$ on $\mathbb{R}^{d_0}$, we look for the mapping $G$ (the generator) from $\mathbb{R}^{d_0}$ to $\mathbb{R}^d$ such that [2]

$$\int_{\mathbb{R}^{d_0}} \phi(G(z))\,\nu^*(dz) = \frac{1}{n}\sum_{j=1}^n \phi(x_j) \tag{71}$$

for all Lipschitz functions $\phi$. The test function $\phi$ is called the discriminator in this context. As for GANs, the most obvious reformulation of (69) is a min-max problem:

$$\min_{u\in H_1}\ \max_{\|\phi\|_{H_2}\le 1}\ \big(a(u, \phi) - (f, \phi)\big)^2. \tag{72}$$

Unfortunately, this formulation is not easy to work with. The problems encountered are similar to those in WGAN. Despite this, some very encouraging progress has been made, and we refer to [143] for the details.
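For orientation only, here is a hedged sketch of the naive min-max formulation (72) for the same toy problem as above, using the integrated-by-parts form (70) and alternating gradient steps; the norm constraint on $\phi$ is replaced by a crude normalization, and, as discussed above, this adversarial training is known to be delicate, with robust variants being the subject of [143].

```python
import math
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
p_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
u = lambda x: x * (1.0 - x) * u_net(x)      # trial function, zero on the boundary
phi = lambda x: x * (1.0 - x) * p_net(x)    # test function (discriminator), zero on the boundary
f = lambda x: math.pi ** 2 * torch.sin(math.pi * x)   # assumed forcing, exact solution sin(pi x)
opt_u = torch.optim.Adam(u_net.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(p_net.parameters(), lr=1e-3)

def objective():
    x = torch.rand(256, 1, requires_grad=True)
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    px = torch.autograd.grad(phi(x).sum(), x, create_graph=True)[0]
    resid = (ux * px - f(x) * phi(x)).mean()          # a(u, phi) - (f, phi), cf. (70)
    norm = (phi(x) ** 2).mean().sqrt() + 1e-6         # crude surrogate for ||phi|| <= 1
    return (resid / norm) ** 2                        # objective in (72)

for step in range(5000):
    opt_p.zero_grad(); (-objective()).backward(); opt_p.step()   # ascent in the test function
    opt_u.zero_grad(); objective().backward(); opt_u.step()      # descent in the trial function
```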
6 Multilevel Picard approximation methods for nonlinear PDEs

In the articles E et al. [37] and Hutzenthaler et al. [89], so-called fully history recursive multilevel Picard approximation methods have been introduced and analyzed (in the following we abbreviate fully history recursive multilevel Picard by MLP). The error analysis in the original article Hutzenthaler et al. [89] is restricted to semilinear heat PDEs with Lipschitz nonlinearities. By now there are, however, a series of further articles on such MLP approximation methods in the scientific literature (see [90, 7, 53, 6, 9, 86, 91, 38, 87]) which analyze, extend, or generalize the MLP approximation methods proposed in [37, 89] to larger classes of PDE problems, such as semilinear Black-Scholes PDEs (see [90, 9]), semilinear heat PDEs with gradient dependent nonlinearities (see [86, 91]), semilinear elliptic PDE problems (see [6]), semilinear heat PDEs with non-Lipschitz continuous nonlinearities (see [7, 9]), and semilinear second-order PDEs with varying coefficient functions (see [90, 87]).

In the remainder of this section we sketch the main ideas of MLP approximation methods and, to keep the presentation as easy as possible, we restrict ourselves to semilinear heat PDEs with Lipschitz continuous nonlinearities and bounded initial values. The next result, Theorem 3 below, provides a complexity analysis for MLP approximations in the case of semilinear heat PDEs with Lipschitz continuous nonlinearities. Theorem 3 is strongly based on Hutzenthaler et al. [89, Theorem 1.1] and Beck et al. [7, Theorem 1.1].

Theorem 3. Let $T\in(0,\infty)$, $\Theta = \bigcup_{n\in\mathbb{N}}\mathbb{Z}^n$, let $f\colon\mathbb{R}\to\mathbb{R}$ be Lipschitz continuous, for every $d\in\mathbb{N}$ let $u_d\in C^{1,2}([0,T]\times\mathbb{R}^d,\mathbb{R})$ be at most polynomially growing, assume for every $d\in\mathbb{N}$, $t\in[0,T]$, $x\in\mathbb{R}^d$ that

$$\big(\tfrac{\partial}{\partial t}u_d\big)(t,x) = (\Delta_x u_d)(t,x) + f(u_d(t,x)), \tag{73}$$

let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space, let $\mathcal{R}^\theta\colon\Omega\to[0,1]$, $\theta\in\Theta$, be independent $\mathcal{U}_{[0,1]}$-distributed random variables, let $W^{d,\theta}\colon[0,T]\times\Omega\to\mathbb{R}^d$, $d\in\mathbb{N}$, $\theta\in\Theta$, be independent standard Brownian motions, assume that $(\mathcal{R}^\theta)_{\theta\in\Theta}$ and $(W^{d,\theta})_{(d,\theta)\in\mathbb{N}\times\Theta}$ are independent, for every $d\in\mathbb{N}$, $s\in[0,T]$, $t\in[s,T]$, $x\in\mathbb{R}^d$, $\theta\in\Theta$ let $X^{d,\theta}_{s,t,x}\colon\Omega\to\mathbb{R}^d$ satisfy $X^{d,\theta}_{s,t,x} = x + \sqrt{2}\,(W^{d,\theta}_t - W^{d,\theta}_s)$, let $U^{d,\theta}_{n,M}\colon[0,T]\times\mathbb{R}^d\times\Omega\to\mathbb{R}$, $d,M\in\mathbb{N}$, $n\in\mathbb{N}_0$, $\theta\in\Theta$, satisfy for every $d,M\in\mathbb{N}$, $n\in\mathbb{N}_0$, $\theta\in\Theta$, $t\in[0,T]$, $x\in\mathbb{R}^d$ that

$$U^{d,\theta}_{n,M}(t,x) = \sum_{k=1}^{n-1}\frac{t}{M^{n-k}}\Bigg[\sum_{m=1}^{M^{n-k}}\Big(f\Big(U^{d,(\theta,k,m)}_{k,M}\big(t\mathcal{R}^{(\theta,k,m)}, X^{d,(\theta,k,m)}_{t\mathcal{R}^{(\theta,k,m)},t,x}\big)\Big) - f\Big(U^{d,(\theta,-k,m)}_{k-1,M}\big(t\mathcal{R}^{(\theta,k,m)}, X^{d,(\theta,k,m)}_{t\mathcal{R}^{(\theta,k,m)},t,x}\big)\Big)\Big)\Bigg] + \frac{\mathbb{1}_{\mathbb{N}}(n)}{M^n}\Bigg[\sum_{m=1}^{M^n}\Big(u_d\big(0, X^{d,(\theta,0,-m)}_{0,t,x}\big) + t f(0)\Big)\Bigg], \tag{74}$$

and for every $d,M\in\mathbb{N}$, $n\in\mathbb{N}_0$ let $\mathcal{C}_{d,n,M}\in\mathbb{N}_0$ be the number of function evaluations of $f$ and $u_d(0,\cdot)$ and the number of realizations of scalar random variables which are used to compute one realization of $U^{d,0}_{n,M}(T,0)\colon\Omega\to\mathbb{R}$ (cf. [87, Corollary 4.4] for a precise definition). Then there exist $N\colon(0,1]\to\mathbb{N}$ and $c\in\mathbb{R}$ such that for all $d\in\mathbb{N}$, $\varepsilon\in(0,1]$ it holds that $\mathcal{C}_{d,N_\varepsilon,N_\varepsilon} \le c\,d^c\,\varepsilon^{-3}$ and $\big(\mathbb{E}\big[|U^{d,0}_{N_\varepsilon,N_\varepsilon}(T,0) - u_d(T,0)|^2\big]\big)^{1/2} \le \varepsilon$.

In the following we add some comments on the statement of Theorem 3 above, and we thereby also provide explanations for some of the mathematical objects which appear in Theorem 3.

Theorem 3 provides a complexity analysis for MLP approximations in the case of semilinear heat PDEs with Lipschitz continuous nonlinearities. In (74) in Theorem 3 the employed MLP approximations are specified. The MLP approximations in (74) aim to approximate the solutions of the PDEs in (73). The strictly positive real number $T\in(0,\infty)$ in Theorem 3 describes the time horizon of the PDEs in (73). The function $f\colon\mathbb{R}\to\mathbb{R}$ in Theorem 3 describes the nonlinearity of the PDEs in (73). For simplicity, we restrict ourselves in Theorem 3 in this article to Lipschitz continuous nonlinearities which depend only on the solution of the PDE, but not on the time variable $t\in[0,T]$, not on the space variable $x\in\mathbb{R}^d$, and also not on the derivatives of the PDE solution. In the more general MLP analyses in the scientific literature (cf., e.g., [89, 90, 7, 53, 6, 9, 86, 87]), the nonlinearity of the PDE is allowed to depend on the time variable $t\in[0,T]$, on the space variable $x\in\mathbb{R}^d$, on the PDE solution, and also on the derivatives of the PDE solution (see [86]), and the nonlinearity of the PDE may also fail to be Lipschitz continuous (see [7, 9]).

The functions $u_d\colon[0,T]\times\mathbb{R}^d\to\mathbb{R}$, $d\in\mathbb{N}$, in Theorem 3 describe the exact solutions of the PDEs in (73).
The linear differential operator on the right hand side of the PDE in (73) is just the Laplacian, and Theorem 3 thus only applies to semilinear heat PDEs of the form (73), but the MLP analyses in the scientific literature also apply to PDEs with much more general second-order differential operators (cf., e.g., [87, 90]).

The approximation algorithm in (74) is a Monte Carlo algorithm in the sense that it employs Monte Carlo averages based on many independent identically distributed (i.i.d.) random variables. In the case of plain-vanilla standard Monte Carlo algorithms for linear PDEs, the employed i.i.d. random variables are often indexed by the set of all natural numbers $\mathbb{N} = \{1, 2, 3, \dots\}$, with one random variable for each natural number $n\in\mathbb{N}$. The approximation algorithm in (74) is, in a sense, a nonlinear Monte Carlo algorithm, and in the case of such a nonlinear Monte Carlo algorithm the situation becomes more complicated: roughly speaking, we need more i.i.d. random variables and therefore, roughly speaking, also a larger index set (we remark that $\mathbb{N}$ is a proper subset of $\Theta = \bigcup_{n\in\mathbb{N}}\mathbb{Z}^n$ in the sense that $\mathbb{N}\subsetneq\Theta$, although the two sets have, of course, the same cardinality). More precisely, in the case of the MLP approximation algorithm in (74), we employ the set $\Theta = \bigcup_{n\in\mathbb{N}}\mathbb{Z}^n$ in Theorem 3 as the index set to introduce sufficiently many i.i.d. random variables. In particular, in Theorem 3 we use the family $\mathcal{R}^\theta\colon\Omega\to[0,1]$, $\theta\in\Theta$, of independent, on $[0,1]$ continuous uniformly distributed random variables and the family $W^{d,\theta}\colon[0,T]\times\Omega\to\mathbb{R}^d$, $d\in\mathbb{N}$, $\theta\in\Theta$, of independent standard Brownian motions as random input sources for the MLP approximation algorithm in (74).

The natural numbers $\mathcal{C}_{d,n,M}\in\mathbb{N}_0$, $d,M\in\mathbb{N}$, $n\in\mathbb{N}_0$, in Theorem 3 above aim to measure the computational cost of the MLP approximation algorithm in (74). Theorem 3 shows that there exist a function $N\colon(0,1]\to\mathbb{N}$ and a real number $c\in\mathbb{R}$ such that for all $d\in\mathbb{N}$, $\varepsilon\in(0,1]$ the $L^2$-approximation error $(\mathbb{E}[|U^{d,0}_{N_\varepsilon,N_\varepsilon}(T,0) - u_d(T,0)|^2])^{1/2}$ between the MLP approximation $U^{d,0}_{N_\varepsilon,N_\varepsilon}(T,0)$ and the exact solution $u_d$ of the PDE at the space-time point $t = T$, $x = 0$ is smaller than or equal to the prescribed approximation accuracy $\varepsilon$, with the computational cost $\mathcal{C}_{d,N_\varepsilon,N_\varepsilon}$ of the MLP approximations being smaller than or equal to $c\,d^c\,\varepsilon^{-3}$. The computational cost of the MLP approximation algorithm thus grows at most polynomially in the PDE dimension $d\in\mathbb{N}$ and at most cubically in the reciprocal $\varepsilon^{-1}$ of the prescribed approximation accuracy $\varepsilon\in(0,1]$.

The more general MLP approximation results [7, 89, 90, 53, 86, 87] in the scientific literature improve this statement in several ways. First, the main approximation results in the above named references allow the numerical approximation of the PDE solution not only at the space point $x = 0$ but at much more general space points. Second, the main approximation results in the above named references provide explicit error constants and explicit exponents in dependence on the constants in the assumptions on the involved functions. For instance, if the initial conditions of the PDEs under consideration are bounded functions, then Hutzenthaler et al. [89, Theorem 1.1] and Beck et al. [7, Theorem 1.1] even prove that the computational cost of the employed MLP approximation scheme grows at most linearly in the PDE dimension.
Finally, most of the MLP approximation results in the scientific literature also prove that the computational cost of the considered MLP approximation scheme grows, up to an arbitrarily small real number in the exponent, at most quadratically (instead of cubically as in Theorem 3 above) in the reciprocal $\varepsilon^{-1}$ of the prescribed approximation accuracy $\varepsilon\in(0,1]$.

It should also be noted that MLP approximation schemes overcome the curse of dimensionality not only in the numerical approximation of parabolic PDEs but also in the case of elliptic PDEs with Lipschitz continuous nonlinearities (see Beck et al. [6]). Encouraging numerical simulations for MLP approximation schemes in the case of semilinear Black-Scholes PDEs, systems of semilinear PDEs, Allen-Cahn PDEs, and sine-Gordon type PDEs can be found in Becker et al. [9] (see also E et al. [38]).

In this article we do not provide a detailed proof of Theorem 3; instead we refer, e.g., to Hutzenthaler et al. [89, 87] for a detailed proof. In the following we briefly outline some of the main ideas of the proof. The derivation, and thereby also the mathematical analysis, of the MLP approximation schemes is, roughly speaking, based on the following three steps.

(I) First, we reformulate the PDE under consideration (or, more generally, the computational problem under consideration) as a suitable stochastic fixed point equation whose unique fixed point is the solution of the PDE under consideration.

(II) Second, we approximate the unique fixed point of the stochastic fixed point equation by means of fixed point iterations according to the Banach fixed point theorem (which are referred to as Picard iterations in the context of temporal integral fixed point equations).

(III) Third, we recursively approximate the resulting fixed point iterations by means of suitable multilevel Monte Carlo approximations, with the resulting Monte Carlo approximations being full history recursive.

A key idea in the above derivation of the MLP approximation scheme is that the fixed point iterations often converge exceedingly quickly, that is, with factorial convergence speed, to the unique fixed point of the stochastic fixed point equation under consideration, while efficient multilevel Monte Carlo approximations ensure that the computational cost of the considered MLP approximation scheme grows not significantly faster than factorially. These facts made it possible to prove that MLP approximation schemes overcome the curse of dimensionality in the numerical approximation of a large class of semilinear PDEs. A minimal implementation sketch of the recursion (74) is given below.
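The following NumPy sketch realizes the recursion (74) for the semilinear heat PDE (73). The nonlinearity $f$ and the bounded initial condition $u_d(0,\cdot)$ are illustrative assumptions, and the cost grows rapidly in $n$, so only small values of $n = M$ are practical in this plain form.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 10, 0.5
f = np.tanh                                    # Lipschitz nonlinearity (assumed)
u0 = lambda x: 1.0 / (1.0 + np.sum(x ** 2))    # bounded initial value u_d(0, .) (assumed)

def mlp(n, M, t, x):
    """One realization of the MLP approximation U_{n,M}(t, x) in (74)."""
    if n == 0:
        return 0.0
    # Monte Carlo term for the initial condition: X_{0,t,x} = x + sqrt(2) W_t
    val = np.mean([u0(x + np.sqrt(2.0 * t) * rng.standard_normal(d))
                   for _ in range(M ** n)]) + t * f(0.0)
    for k in range(1, n):                      # multilevel correction terms
        acc = 0.0
        for _ in range(M ** (n - k)):
            r = t * rng.uniform()              # r = t R with R ~ U[0, 1]
            y = x + np.sqrt(2.0 * (t - r)) * rng.standard_normal(d)
            acc += f(mlp(k, M, r, y)) - f(mlp(k - 1, M, r, y))
        val += t * acc / M ** (n - k)
    return val

print(mlp(3, 3, T, np.zeros(d)))               # approximation of u_d(T, 0)
```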
Despite the great performance of deep learning-based approximation schemes in various numerical simulations, until today, MLP approximation schemes are, to the best of our knowledge, the only approximation schemes in the scientific literature for which it has been proven that they do indeed overcome the curse of dimensionality in the numerical approximation of semilinear PDEs with general time horizons.

7 Mathematical results for neural network approximations for PDEs

Until today, there is no complete rigorous mathematical analysis which proves (or disproves) the conjecture that there exists a deep learning-based approximation method which overcomes the curse of dimensionality in the numerical approximation of PDEs. However, there are now a few mathematical results in the scientific literature (see, e.g., [17, 42, 61, 64, 65, 88, 93, 104, 129, 82]) which prove that deep neural networks have the capacity to approximate solutions of PDEs without the curse of dimensionality.

In particular, in the article Grohs et al. [64] it has been proved that there exist neural networks which approximate solutions of linear Black-Scholes PDEs with the number of parameters of the neural networks growing at most polynomially in both the reciprocal $1/\varepsilon$ of the prescribed approximation accuracy $\varepsilon\in(0,1)$ and the PDE dimension $d\in\mathbb{N}$. The articles [17, 42, 61, 65, 93, 104, 129, 82], in particular, extend the results in the article Grohs et al. [64] to more general linear PDEs, and the article Hutzenthaler et al. [88] extends the results in the article Grohs et al. [64] to nonlinear heat PDEs with Lipschitz continuous nonlinearities. To better explain the results in the article Hutzenthaler et al. [88], we now present in the following result, Theorem 4 below, a special case of Hutzenthaler et al. [88, Theorem 1.1].

Theorem 4. Let $\rho\colon(\bigcup_{d\in\mathbb{N}}\mathbb{R}^d)\to(\bigcup_{d\in\mathbb{N}}\mathbb{R}^d)$ satisfy for all $d\in\mathbb{N}$, $x = (x_1,\dots,x_d)\in\mathbb{R}^d$ that $\rho(x) = (\max\{x_1, 0\},\dots,\max\{x_d, 0\})$, let $\mathbf{N} = \bigcup_{L\in\mathbb{N}}\bigcup_{l_0,l_1,\dots,l_L\in\mathbb{N}}(\times_{k=1}^L(\mathbb{R}^{l_k\times l_{k-1}}\times\mathbb{R}^{l_k}))$, let $\mathcal{R}\colon\mathbf{N}\to(\bigcup_{k,l\in\mathbb{N}}C(\mathbb{R}^k,\mathbb{R}^l))$ and $\mathcal{P}\colon\mathbf{N}\to\mathbb{N}$ satisfy for all $L\in\mathbb{N}$, $l_0,l_1,\dots,l_L\in\mathbb{N}$, $\Phi = ((W_1,B_1),(W_2,B_2),\dots,(W_L,B_L))\in(\times_{k=1}^L(\mathbb{R}^{l_k\times l_{k-1}}\times\mathbb{R}^{l_k}))$, $x_0\in\mathbb{R}^{l_0}$, $x_1\in\mathbb{R}^{l_1}$, ..., $x_L\in\mathbb{R}^{l_L}$ with $\forall\,k\in\{1,2,\dots,L-1\}\colon x_k = \rho(W_k x_{k-1} + B_k)$ that $\mathcal{R}(\Phi)\in C(\mathbb{R}^{l_0},\mathbb{R}^{l_L})$, $(\mathcal{R}(\Phi))(x_0) = W_L x_{L-1} + B_L$, and $\mathcal{P}(\Phi) = \sum_{k=1}^L l_k(l_{k-1}+1)$, let $T,\kappa\in(0,\infty)$, $(g_{d,\varepsilon})_{(d,\varepsilon)\in\mathbb{N}\times(0,1]}\subseteq\mathbf{N}$, let $f\colon\mathbb{R}\to\mathbb{R}$ be Lipschitz continuous, let $u_d\in C^{1,2}([0,T]\times\mathbb{R}^d,\mathbb{R})$, $d\in\mathbb{N}$, and assume for all $d\in\mathbb{N}$, $x = (x_1,\dots,x_d)\in\mathbb{R}^d$, $\varepsilon\in(0,1]$, $t\in[0,T]$ that $\mathcal{R}(g_{d,\varepsilon})\in C(\mathbb{R}^d,\mathbb{R})$, $\varepsilon|u_d(t,x)| + |u_d(0,x) - (\mathcal{R}(g_{d,\varepsilon}))(x)| \le \varepsilon\kappa d^\kappa(1 + \sum_{i=1}^d|x_i|^\kappa)$, $\mathcal{P}(g_{d,\varepsilon}) \le \kappa d^\kappa\varepsilon^{-\kappa}$, and

$$\big(\tfrac{\partial}{\partial t}u_d\big)(t,x) = (\Delta_x u_d)(t,x) + f(u_d(t,x)). \tag{75}$$

Then there exist $(u_{d,\varepsilon})_{(d,\varepsilon)\in\mathbb{N}\times(0,1]}\subseteq\mathbf{N}$ and $c\in\mathbb{R}$ such that for all $d\in\mathbb{N}$, $\varepsilon\in(0,1]$ it holds that $\mathcal{R}(u_{d,\varepsilon})\in C(\mathbb{R}^d,\mathbb{R})$, $\mathcal{P}(u_{d,\varepsilon}) \le c\,d^c\,\varepsilon^{-c}$, and

$$\bigg[\int_{[0,1]^d}|u_d(T,x) - (\mathcal{R}(u_{d,\varepsilon}))(x)|^2\,dx\bigg]^{1/2} \le \varepsilon. \tag{76}$$

Theorem 4 is an immediate consequence of Hutzenthaler et al. [88, Theorem 1.1]. In the following we add some comments on the mathematical objects appearing in Theorem 4 above and, thereby, we also add some explanatory comments on the statement of Theorem 4.

Theorem 4 is a DNN approximation result with the activation functions in the DNNs being multidimensional rectifier functions, described by the function $\rho\colon(\bigcup_{d\in\mathbb{N}}\mathbb{R}^d)\to(\bigcup_{d\in\mathbb{N}}\mathbb{R}^d)$ in Theorem 4 above. The set $\mathbf{N}$ in Theorem 4 represents the set of all neural networks. The function $\mathcal{R}\colon\mathbf{N}\to(\bigcup_{k,l\in\mathbb{N}}C(\mathbb{R}^k,\mathbb{R}^l))$ maps neural networks to their realization functions, in the sense that for every $\Phi\in\mathbf{N}$ it holds that $\mathcal{R}(\Phi)\in(\bigcup_{k,l\in\mathbb{N}}C(\mathbb{R}^k,\mathbb{R}^l))$ is the realization function associated with the neural network $\Phi$. The function $\mathcal{P}\colon\mathbf{N}\to\mathbb{N}$ counts the number of parameters of the neural networks, in the sense that for every $\Phi\in\mathbf{N}$ it holds that $\mathcal{P}(\Phi)\in\mathbb{N}$ represents the number of real numbers which are used to uniquely describe the neural network $\Phi$.

Theorem 4 demonstrates that the solutions of the PDEs in (75) can be approximated by DNNs without the curse of dimensionality.
The real number $T\in(0,\infty)$ in Theorem 4 describes the time horizon of the PDEs in (75) above. The function $f\colon\mathbb{R}\to\mathbb{R}$ in Theorem 4 describes the nonlinearity in the PDEs in (75). It is assumed to be Lipschitz continuous in the sense that there exists $L\in\mathbb{R}$ such that for all $x,y\in\mathbb{R}$ it holds that $|f(x) - f(y)| \le L|x - y|$.

The real number $\kappa\in(0,\infty)$ in Theorem 4 is used to formulate the regularity and approximation assumptions which we impose in Theorem 4. In particular, we assume in Theorem 4 that the initial value functions of the PDEs in (75) can be approximated by DNNs without the curse of dimensionality. In Theorem 4 this approximation assumption is formulated by means of the family $(g_{d,\varepsilon})_{(d,\varepsilon)\in\mathbb{N}\times(0,1]}\subseteq\mathbf{N}$ of neural networks. More formally, in Theorem 4 we assume that there exist neural networks $g_{d,\varepsilon}\in\mathbf{N}$, $d\in\mathbb{N}$, $\varepsilon\in(0,1]$, which approximate the initial value functions $\mathbb{R}^d\ni x\mapsto u_d(0,x)\in\mathbb{R}$, $d\in\mathbb{N}$, without the curse of dimensionality. In particular, we observe that the assumption in Theorem 4 that for all $d\in\mathbb{N}$, $x = (x_1,\dots,x_d)\in\mathbb{R}^d$, $\varepsilon\in(0,1]$, $t\in[0,T]$ it holds that $\varepsilon|u_d(t,x)| + |u_d(0,x) - (\mathcal{R}(g_{d,\varepsilon}))(x)| \le \varepsilon\kappa d^\kappa(1 + \sum_{i=1}^d|x_i|^\kappa)$ ensures that for all $d\in\mathbb{N}$, $x = (x_1,\dots,x_d)\in\mathbb{R}^d$, $\varepsilon\in(0,1]$ it holds that $|u_d(0,x) - (\mathcal{R}(g_{d,\varepsilon}))(x)| \le \varepsilon\kappa d^\kappa(1 + \sum_{i=1}^d|x_i|^\kappa)$, and this condition, in turn, ensures that for all $d\in\mathbb{N}$, $x\in\mathbb{R}^d$ it holds that $(\mathcal{R}(g_{d,\varepsilon}))(x)$ converges to $u_d(0,x)$ as $\varepsilon$ converges to 0.

Moreover, we observe that the assumption in Theorem 4 that for all $d\in\mathbb{N}$, $\varepsilon\in(0,1]$ it holds that $\mathcal{P}(g_{d,\varepsilon}) \le \kappa d^\kappa\varepsilon^{-\kappa}$ ensures that the number of parameters of the neural networks $g_{d,\varepsilon}\in\mathbf{N}$, $d\in\mathbb{N}$, $\varepsilon\in(0,1]$, grows at most polynomially in both the reciprocal $\varepsilon^{-1}$ of the prescribed approximation precision $\varepsilon\in(0,1]$ and the PDE dimension $d\in\mathbb{N}$.

Furthermore, we note that the assumption in Theorem 4 that for all $d\in\mathbb{N}$, $x = (x_1,\dots,x_d)\in\mathbb{R}^d$, $\varepsilon\in(0,1]$, $t\in[0,T]$ it holds that $\varepsilon|u_d(t,x)| + |u_d(0,x) - (\mathcal{R}(g_{d,\varepsilon}))(x)| \le \varepsilon\kappa d^\kappa(1 + \sum_{i=1}^d|x_i|^\kappa)$ implies that for all $d\in\mathbb{N}$, $x = (x_1,\dots,x_d)\in\mathbb{R}^d$, $t\in[0,T]$ it holds that $|u_d(t,x)| \le \kappa d^\kappa(1 + \sum_{i=1}^d|x_i|^\kappa)$, and this condition, in turn, ensures that the solutions of the PDEs in (75) grow at most polynomially in both the space variable $x\in\mathbb{R}^d$ and the PDE dimension $d\in\mathbb{N}$. The condition that for all $d\in\mathbb{N}$, $x = (x_1,\dots,x_d)\in\mathbb{R}^d$, $t\in[0,T]$ it holds that $|u_d(t,x)| \le \kappa d^\kappa(1 + \sum_{i=1}^d|x_i|^\kappa)$ also ensures that the solutions of the PDEs in (75) are uniquely determined by their initial value functions $\mathbb{R}^d\ni x\mapsto u_d(0,x)\in\mathbb{R}$, $d\in\mathbb{N}$ (cf., e.g., Beck et al. [8, Theorem 1.1]).

Roughly speaking, the conclusion of Theorem 4 asserts that there exist neural networks $u_{d,\varepsilon}\in\mathbf{N}$, $d\in\mathbb{N}$, $\varepsilon\in(0,1]$, such that for all $d\in\mathbb{N}$, $\varepsilon\in(0,1]$ the $L^2$-approximation error $[\int_{[0,1]^d}|u_d(T,x) - (\mathcal{R}(u_{d,\varepsilon}))(x)|^2\,dx]^{1/2}$ between the exact solution $u_d(T,\cdot)$ of the PDE and its neural network approximation $\mathcal{R}(u_{d,\varepsilon})$ is smaller than or equal to the prescribed approximation accuracy $\varepsilon$, while the numbers $\mathcal{P}(u_{d,\varepsilon})$, $d\in\mathbb{N}$, $\varepsilon\in(0,1]$, of parameters of the approximating neural networks grow at most polynomially in both the PDE dimension $d\in\mathbb{N}$ and the reciprocal $\varepsilon^{-1}$ of the prescribed approximation accuracy $\varepsilon$. We note that Theorem 4 above is a neural network approximation result for the solutions of the PDEs in (75) at the final time $T$ on the $d$-dimensional hypercube $[0,1]^d$, but the more general neural network approximation result in Hutzenthaler et al.
[88, Theorem 4.1] also provides neural network approximations for solutions of PDEs on more general space regions.

In the next step we add some words on the strategy of the proofs of Theorem 4 above and of Theorem 1.1 in Hutzenthaler et al. [88], respectively. Even though Theorem 4 above and Theorem 1.1 in Hutzenthaler et al. [88] are purely deterministic neural network approximation results, their proofs are strongly based on probabilistic arguments on a suitable artificial probability space. In particular, the proofs employ the fact in the following elementary lemma.

Lemma 1. Let $\varepsilon\in(0,\infty)$, let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space, and let $E\colon\Omega\to\mathbb{R}$ be a random variable with $(\mathbb{E}[|E|^2])^{1/2}\le\varepsilon$. Then there exists $\omega\in\Omega$ such that $|E(\omega)|\le\varepsilon$.

The elementary statement in Lemma 1 follows, e.g., from Grohs et al. [64, Proposition 3.3]. Lemma 1 is employed in the proofs of Theorem 4 above and of Theorem 1.1 in Hutzenthaler et al. [88] to construct an appropriate random realization with the desired approximation properties on a suitable artificial probability space. To make this more concrete, the proofs consist, roughly speaking, of the following four steps (cf., e.g., [93, Section 1]):

(I) First, appropriate random neural networks are constructed on a suitable artificial probability space. These neural networks are random in the sense that their weights and biases are random variables instead of deterministic real numbers. The random neural networks are constructed with the aim of appropriately approximating the solutions of the PDEs in (75).

(II) Second, it is proved that the realization functions of these random neural networks are, in a suitable root mean square sense, close to the solutions of the PDEs in (75) at the final time $T$.

(III) Third, it is proved that the numbers of parameters of these random neural networks grow at most polynomially in both the reciprocal $\varepsilon^{-1}$ of the prescribed approximation accuracy $\varepsilon\in(0,1]$ and the PDE dimension $d\in\mathbb{N}$. Here the approximation accuracy is measured in a suitable root mean square sense according to item (II) above.

(IV) Fourth, Lemma 1 is applied to suitable error random variables, which describe certain $L^2$-errors between the realization functions of the constructed random neural networks (see item (II) above) and the exact solutions of the PDEs in (75) at the final time $T$ (cf. (76) in Theorem 4 above), to obtain the existence of a realization on the artificial probability space such that the error random variables evaluated at this realization are smaller than or equal to the prescribed approximation accuracy $\varepsilon$. Combining the existence of such a realization with item (III) above then completes the proofs of Theorem 4 above and of Theorem 1.1 in Hutzenthaler et al. [88], respectively.

Let us also add a few comments on the way the appropriate random neural networks in item (I) above are designed and on the way the statements sketched in items (II)–(III) above are proved. The main tool for items (I)–(III) above are the MLP approximations (cf. Section 6 above).
More formally, the random neural networks in item (I) above are designed so that their realization functions coincide with suitable MLP approximations; the statement in item (II) above is then proved by employing suitable root mean square error estimates for MLP approximations (cf. Hutzenthaler et al. [89, Theorem 3.5] and Theorem 3 above), and the statement in item (III) above is then proved by employing suitable cost estimates for neural networks and MLP approximations (cf. Hutzenthaler et al. [88, Sections 3.2–3.3]).

8 Conclusion

The progress reviewed here has opened up a host of new possibilities, both in theory and in applications. In applications, it has proved effective in finance, such as in the pricing of financial derivatives [138, 10, 12, 11] and credit valuation adjustment [54]. It also opens up new possibilities in control theory, an area that has long been hindered by the curse of dimensionality problem. In fact, it is likely that control theory will be among the areas most impacted by the kind of ideas reviewed here.

Another interesting new problem is the mathematical study of high dimensional PDEs. The fact that we can compute their solutions rather efficiently even in very high dimensions means that the complexity of these solutions should not be very high. Can we quantify this in some way? In low dimensions, a huge amount of effort has gone into studying the regularity of solutions of PDEs. It seems that regularity is not the most important issue in high dimensions. Rather, it is the complexity that is more relevant. It would be very interesting to develop a complexity-based PDE theory in high dimensions. It is worth mentioning that in low dimensions, regularity is also a measure of complexity: the efficiency of approximating a target function by a certain approximation scheme, say piecewise polynomial approximation, is often measured by the regularity of the target function.

An interesting topic untouched in this review is reinforcement learning. Formulated in the language we use here, reinforcement learning is all about solving the Bellman equation for the underlying Markov decision process [135]. However, in contrast to the ideas reviewed here, which make heavy use of the underlying model, reinforcement learning makes minimal use of the specifics of the model. At this moment, it is still quite unclear what the relative merits are between the ideas reviewed here and those of reinforcement learning. This is undoubtedly an interesting area for further work.

Acknowledgement

The third author acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure.

References

[1] Arjovsky, M., and Bottou, L. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862 (2017).

[2] Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning (International Convention Centre, Sydney, Australia, 06–11 Aug 2017), D. Precup and Y. W. Teh, Eds., vol. 70 of Proceedings of Machine Learning Research, PMLR, pp. 214–223.

[3] Bally, V., Pagès, G., et al. A quantization algorithm for solving multidimensional discrete-time optimal stopping problems. Bernoulli 9, 6 (2003), 1003–1049.

[4] Beck, C., Becker, S., Cheridito, P., Jentzen, A., and Neufeld, A.
Deep splitting method for parabolic PDEs. arXiv:1907.03452 (2019).

[5] Beck, C., E, W., and Jentzen, A. Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations. Journal of Nonlinear Science 29, 4 (2019), 1563–1619.

[6] Beck, C., Gonon, L., and Jentzen, A. Overcoming the curse of dimensionality in the numerical approximation of high-dimensional semilinear elliptic partial differential equations. arXiv preprint arXiv:2003.00596 (2020).

[7] Beck, C., Hornung, F., Hutzenthaler, M., Jentzen, A., and Kruse, T. Overcoming the curse of dimensionality in the numerical approximation of Allen-Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations. arXiv preprint arXiv:1907.06729 (2019).

[8] Beck, C., Hutzenthaler, M., and Jentzen, A. On nonlinear Feynman-Kac formulas for viscosity solutions of semilinear parabolic partial differential equations. arXiv preprint arXiv:2004.03389 (2020).

[9] Becker, S., Braunwarth, R., Hutzenthaler, M., Jentzen, A., and von Wurstemberger, P. Numerical simulations for full history recursive multilevel Picard approximations for systems of high-dimensional partial differential equations. arXiv preprint arXiv:2005.10206 (2020).

[10] Becker, S., Cheridito, P., and Jentzen, A. Deep optimal stopping. Journal of Machine Learning Research 20, 74 (2019), 1–25.

[11] Becker, S., Cheridito, P., and Jentzen, A. Pricing and hedging American-style options with deep learning. Journal of Risk and Financial Management 13, 7 (2020), 158.

[12] Becker, S., Cheridito, P., Jentzen, A., and Welti, T. Solving high-dimensional optimal stopping problems using deep learning. Minor revision requested from European Journal of Applied Mathematics, arXiv:1908.01602 (2019).

[13] Bellman, R. E. Dynamic Programming. Princeton University Press, 1957.

[14] Bender, C., and Denk, R. A forward scheme for backward SDEs. Stochastic Processes and their Applications 117, 12 (2007), 1793–1812.

[15] Bender, C., Schweizer, N., and Zhuo, J. A primal–dual algorithm for BSDEs. Mathematical Finance 27, 3 (2017), 866–901.

[16] Berg, J., and Nyström, K. A unified deep artificial neural network approach to partial differential equations in complex geometries. Neurocomputing 317 (2018), 28–41.

[17] Berner, J., Grohs, P., and Jentzen, A. Analysis of the generalization error: Empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations. arXiv preprint arXiv:1809.03062 (2018).

[18] Billaud-Friess, M., Macherey, A., Nouy, A., and Prieur, C. Stochastic methods for solving high-dimensional partial differential equations. In International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (2018), Springer, pp. 125–141.

[19] Bouchard, B., and Touzi, N. Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations. Stochastic Processes and their Applications 111, 2 (2004), 175–206.

[20] Briand, P., Labart, C., et al. Simulation of BSDEs by Wiener chaos expansion. The Annals of Applied Probability 24, 3 (2014), 1129–1171.

[21] Cai, Z., and Liu, J. Approximating quantum many-body wave functions using artificial neural networks.
Physical Review B 97, 3 (2018), 035116.

[22] Carleo, G., and Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 355, 6325 (2017), 602–606.

[23] Carmona, R., and Laurière, M. Convergence analysis of machine learning algorithms for the numerical solution of mean field control and games: I–the ergodic case. arXiv preprint arXiv:1907.05980 (2019).

[24] Carmona, R., and Laurière, M. Convergence analysis of machine learning algorithms for the numerical solution of mean field control and games: II–the finite horizon case. arXiv preprint arXiv:1908.01613 (2019).

[25] Chan-Wai-Nam, Q., Mikael, J., and Warin, X. Machine learning for semilinear PDEs. Journal of Scientific Computing 79, 3 (2019), 1667–1712.

[26] Chang, D., Liu, H., and Xiong, J. A branching particle system approximation for a class of FBSDEs. Probability, Uncertainty and Quantitative Risk 1, 1 (2016), 1–34.

[27] Chen, Y., Lu, L., Karniadakis, G. E., and Dal Negro, L. Physics-informed neural networks for inverse problems in nano-optics and metamaterials. Optics Express 28, 8 (2020), 11618–11633.

[28] Crisan, D., and Manolarakis, K. Probabilistic methods for semilinear partial differential equations. Applications to finance. ESAIM: Mathematical Modelling and Numerical Analysis 44, 5 (2010), 1107–1133.

[29] Crisan, D., and Manolarakis, K. Solving backward stochastic differential equations using the cubature method: application to nonlinear pricing. SIAM Journal on Financial Mathematics 3, 1 (2012), 534–571.

[30] Crisan, D., Manolarakis, K., et al. Second order discretization of backward SDEs and simulation with the cubature method. The Annals of Applied Probability 24, 2 (2014), 652–678.

[31] Crisan, D., Manolarakis, K., and Touzi, N. On the Monte Carlo simulation of BSDEs: An improvement on the Malliavin weights. Stochastic Processes and their Applications 120, 7 (2010), 1133–1158.

[32] Darbon, J., and Osher, S. Algorithms for overcoming the curse of dimensionality for certain Hamilton–Jacobi equations arising in control theory and elsewhere. Research in the Mathematical Sciences 3, 1 (2016), 19.

[33] Delarue, F., Menozzi, S., et al. A forward–backward stochastic algorithm for quasi-linear PDEs. The Annals of Applied Probability 16, 1 (2006), 140–184.

[34] Dockhorn, T. A discussion on solving partial differential equations using neural networks. arXiv preprint arXiv:1904.07200 (2019).

[35] Duffie, D., Schroder, M., and Skiadas, C. Recursive valuation of defaultable securities and the timing of resolution of uncertainty. The Annals of Applied Probability 6, 4 (1996), 1075–1090.

[36] E, W., Han, J., and Jentzen, A. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Communications in Mathematics and Statistics 5, 4 (2017), 349–380.

[37] E, W., Hutzenthaler, M., Jentzen, A., and Kruse, T. Multilevel Picard iterations for solving smooth semilinear parabolic heat equations. arXiv preprint arXiv:1607.03295 (2016).

[38] E, W., Hutzenthaler, M., Jentzen, A., and Kruse, T. On multilevel Picard numerical approximations for high-dimensional nonlinear parabolic partial differential equations and high-dimensional nonlinear backward stochastic differential equations. Journal of Scientific Computing 79, 3 (2019), 1534–1571.

[39] E, W., Ma, C., and Wu, L.
Barron spaces and the compositional function spaces for neural network models. arXiv preprint arXiv:1906.08039 (2019).

[40] E, W., Ma, C., and Wu, L. Machine learning from a continuous viewpoint. arXiv preprint arXiv:1912.12777 (2019).

[41] E, W., and Yu, B. The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics 6, 1 (2018), 1–12.

[42] Elbrächter, D., Grohs, P., Jentzen, A., and Schwab, C. DNN expression rate analysis of high-dimensional PDEs: Application to option pricing. arXiv preprint arXiv:1809.07669 (2018).

[43] Evans, L. C. Partial Differential Equations, vol. 19. American Mathematical Society, 2010.

[44] Fahim, A., Touzi, N., Warin, X., et al. A probabilistic numerical method for fully nonlinear parabolic PDEs. The Annals of Applied Probability 21, 4 (2011), 1322–1364.

[45] Fan, Y., and Ying, L. Solving inverse wave scattering with deep learning. arXiv preprint arXiv:1911.13202 (2019).

[46] Fan, Y., and Ying, L. Solving electrical impedance tomography with deep learning. Journal of Computational Physics 404 (2020), 109119.

[47] Farahmand, A.-m., Nabi, S., and Nikovski, D. N. Deep reinforcement learning for partial differential equation control. In 2017 American Control Conference (ACC) (2017), IEEE, pp. 3120–3127.

[48] Fisher, R. A. The wave of advance of advantageous genes. Annals of Eugenics 7, 4 (1937), 355–369.

[49] Fujii, M., Takahashi, A., and Takahashi, M. Asymptotic expansion as prior knowledge in deep learning method for high dimensional BSDEs. Asia-Pacific Financial Markets 26, 3 (2019), 391–408.

[50] Geiss, C., and Labart, C. Simulation of BSDEs with jumps by Wiener chaos expansion. Stochastic Processes and their Applications 126, 7 (2016), 2123–2162.

[51] Giles, M. B. Multilevel Monte Carlo path simulation. Operations Research 56, 3 (2008), 607–617.

[52] Giles, M. B. Multilevel Monte Carlo methods. Acta Numerica 24 (2015), 259.

[53] Giles, M. B., Jentzen, A., and Welti, T. Generalised multilevel Picard approximations. arXiv preprint arXiv:1911.03188 (2019).

[54] Gnoatto, A., Picarelli, A., and Reisinger, C. Deep xVA solver–a neural network based counterparty credit risk management framework. arXiv preprint arXiv:2005.02633 (2020).

[55] Gobet, E., and Labart, C. Solving BSDE with adaptive control variate. SIAM Journal on Numerical Analysis 48, 1 (2010), 257–277.

[56] Gobet, E., and Lemor, J.-P. Numerical simulation of BSDEs using empirical regression methods: theory and practice. arXiv preprint arXiv:0806.4447 (2008).

[57] Gobet, E., Lemor, J.-P., Warin, X., et al. A regression-based Monte Carlo method to solve backward stochastic differential equations. The Annals of Applied Probability 15, 3 (2005), 2172–2202.

[58] Gobet, E., López-Salas, J. G., Turkedjiev, P., and Vázquez, C. Stratified regression Monte-Carlo scheme for semilinear PDEs and BSDEs with large scale parallelization on GPUs. SIAM Journal on Scientific Computing 38, 6 (2016), C652–C677.

[59] Gobet, E., and Turkedjiev, P. Linear regression MDP scheme for discrete backward stochastic differential equations under general conditions. Mathematics of Computation 85, 299 (2016), 1359–1391.

[60] Gobet, E., Turkedjiev, P., et al. Approximation of backward stochastic differential equations using Malliavin weights and least-squares regression.
Bernoulli\n22, 1 (2016), 530{562.\n[61]Gonon, L., Grohs, P., Jentzen, A., Kofler, D., and \u0014Si\u0014ska, D. Uniform\nerror estimates for arti\fcial neural network approximations for heat equations. arXiv\npreprint arXiv:1911.09647 (2019).\n[62]Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley,\nD., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In\nAdvances in Neural Information Processing Systems (2014), pp. 2672{2680.\n[63]Gouden \u0012ege, L., Molent, A., and Zanette, A. Variance reduction applied to\nmachine learning for pricing Bermudan/American options in high dimension. arXiv\npreprint arXiv:1903.11275 (2019).\n[64]Grohs, P., Hornung, F., Jentzen, A., and Von Wurstemberger, P. A\nproof that arti\fcial neural networks overcome the curse of dimensionality in the nu-\nmerical approximation of Black-Scholes partial di\u000berential equations. arXiv preprint\narXiv:1809.02362 (2018).\n[65]Grohs, P., Jentzen, A., and Salimova, D. Deep neural network approxima-\ntions for Monte Carlo algorithms. arXiv preprint arXiv:1908.10828 (2019).\n[66]Guo, W., Zhang, J., and Zhuo, J. A monotone scheme for high-dimensional\nfully nonlinear PDEs. The Annals of Applied Probability 25 , 3 (2015), 1540{1580.\n[67]Han, J., and E, W. Deep learning approximation for stochastic control problems.\nDeep Reinforcement Learning Workshop, NIPS (2016).\n[68]Han, J., and Hu, R. Deep \fctitious play for \fnding Markovian Nash equilibrium\nin multi-agent games. In Proceedings of The First Mathematical and Scienti\fc\nMachine Learning Conference (MSML) (2020), vol. 107, pp. 221{245.\n[69]Han, J., Hu, R., and Long, J. Convergence of deep \fctitious play for stochastic\ndi\u000berential games. arXiv preprint arXiv:2008.05519 (2020).\n34\n[70]Han, J., Jentzen, A., and E, W. Solving high-dimensional partial di\u000berential\nequations using deep learning. Proceedings of the National Academy of Sciences\n115, 34 (2018), 8505{8510.\n[71]Han, J., and Long, J. Convergence of the deep BSDE method for coupled\nFBSDEs. Probability, Uncertainty and Quantitative Risk 5 , 1 (2020), 1{33.\n[72]Han, J., Lu, J., and Zhou, M. Solving high-dimensional eigenvalue problems\nusing deep neural networks: A di\u000busion Monte Carlo like approach. Journal of\nComputational Physics (2020).\n[73]Han, J., Zhang, L., and E, W. Solving many-electron Schr odinger equation\nusing deep neural networks. Journal of Computational Physics 399 (2019), 108929.\n[74]Heinrich, S. Monte Carlo complexity of global solution of integral equations.\nJournal of Complexity 14 , 2 (1998), 151{175.\n[75]Heinrich, S. Multilevel Monte Carlo methods. In International Conference on\nLarge-Scale Scienti\fc Computing (2001), Springer, pp. 58{67.\n[76]Heinrich, S., and Sindambiwe, E. Monte Carlo complexity of parametric inte-\ngration. Journal of Complexity 15 , 3 (1999), 317{341.\n[77]Henry-Labord \u0012ere, P. Counterparty risk valuation: a marked branching di\u000busion\napproach. arXiv:1203.2369 (2012).\n[78]Henry-Labord \u0012ere, P. Deep Primal-Dual Algorithm for BSDEs: Applications of\nMachine Learning to CVA and IM. Available at SSRN 3071506 (2017).\n[79]Henry-Labordere, P., Oudjane, N., Tan, X., Touzi, N., Warin, X.,\net al. Branching di\u000busion representation of semilinear PDEs and Monte Carlo\napproximation. In Annales de l'Institut Henri Poincar\u0013 e, Probabilit\u0013 es et Statistiques\n(2019), vol. 55, Institut Henri Poincar\u0013 e, pp. 184{210.\n[80]Henry-Labord \u0012ere, P., Tan, X., and Touzi, N. 
A numerical algorithm for a\nclass of BSDEs via the branching process. Stochastic Processes and their Applica-\ntions 124 , 2 (2014), 1112{1140.\n[81]Hermann, J., Sch atzle, Z., and No \u0013e, F. Deep neural network solution of the\nelectronic Schr odinger equation. arXiv preprint arXiv:1909.08423 (2019).\n[82]Hornung, F., Jentzen, A., and Salimova, D. Space-time deep neural network\napproximations for high-dimensional partial di\u000berential equations. arXiv preprint\narXiv:2006.02199 (2020).\n[83]Huang, M., Caines, P. E., and Malham \u0013e, R. P. Large-population cost-\ncoupled LQG problems with nonuniform agents: individual-mass behavior and de-\ncentralized \u000f-Nash equilibria. IEEE Transactions on Automatic Control 52 , 9 (2007),\n1560{1571.\n35\n[84]Huang, M., Malham \u0013e, R. P., and Caines, P. E. Large population stochas-\ntic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty\nequivalence principle. Communications in Information and Systems 6 , 3 (2006),\n221{252.\n[85]Hur\u0013e, C., Pham, H., and Warin, X. Some machine learning schemes for high-\ndimensional nonlinear PDEs. arXiv:1902.01599 (2019).\n[86]Hutzenthaler, M., Jentzen, A., and Kruse, T. Overcoming the curse\nof dimensionality in the numerical approximation of parabolic partial di\u000berential\nequations with gradient-dependent nonlinearities. arXiv preprint arXiv:1912.02571\n(2019).\n[87]Hutzenthaler, M., Jentzen, A., Kruse, T., and Nguyen, T. A. Mul-\ntilevel Picard approximations for high-dimensional semilinear second-order PDEs\nwith Lipschitz nonlinearities. arXiv preprint arXiv:2009.02484 (2020).\n[88]Hutzenthaler, M., Jentzen, A., Kruse, T., and Nguyen, T. A. A proof\nthat recti\fed deep neural networks overcome the curse of dimensionality in the nu-\nmerical approximation of semilinear heat equations. SN Partial Di\u000berential Equa-\ntions and Applications 1 (2020), 1{34.\n[89]Hutzenthaler, M., Jentzen, A., Kruse, T., Nguyen, T. A., and von\nWurstemberger, P. Overcoming the curse of dimensionality in the numerical\napproximation of semilinear parabolic partial di\u000berential equations. arXiv preprint\narXiv:1807.01212, accepted in Proceedings of Royal Society A (2020).\n[90]Hutzenthaler, M., Jentzen, A., and von Wurstemberger, P. Overcom-\ning the curse of dimensionality in the approximative pricing of \fnancial derivatives\nwith default risks. Electronic Journal of Probability (2019).\n[91]Hutzenthaler, M., and Kruse, T. Multilevel Picard approximations of high-\ndimensional semilinear parabolic di\u000berential equations with gradient-dependent\nnonlinearities. SIAM Journal on Numerical Analysis 58 , 2 (2020), 929{961.\n[92]Jacquier, A. J., and Oumgari, M. Deep PPDEs for rough local stochastic\nvolatility. arXiv preprint arXiv:1906.02551 (2019).\n[93]Jentzen, A., Salimova, D., and Welti, T. A proof that deep arti\fcial neural\nnetworks overcome the curse of dimensionality in the numerical approximation of\nKolmogorov partial di\u000berential equations with constant di\u000busion and nonlinear drift\ncoe\u000ecients. arXiv preprint arXiv:1809.07321 (2018).\n[94]Ji, S., Peng, S., Peng, Y., and Zhang, X. Three algorithms for solving high-\ndimensional fully-coupled FBSDEs through deep learning. IEEE Intelligent Systems\n(2020).\n36\n[95]Jiang, D. R., and Powell, W. B. An approximate dynamic programming\nalgorithm for monotone value functions. Operations Research 63 , 6 (2015), 1489{\n1511.\n[96]Jianyu, L., Siwei, L., Yingjian, Q., and Yaping, H. 
Numerical solution\nof elliptic partial di\u000berential equation using radial basis function neural networks.\nNeural Networks 16 , 5-6 (2003), 729{734.\n[97]Kang, W., Gong, Q., and Nakamura-Zimmerer, T. Algorithms of data de-\nvelopment for deep learning and feedback design. arXiv preprint arXiv:1912.00492\n(2019).\n[98]Karatzas, I., and Shreve, S. E. Brownian Motion and Stochastic Calculus .\nSpringer New York, 1998.\n[99]Khoo, Y., Lu, J., and Ying, L. Solving for high-dimensional committor func-\ntions using arti\fcial neural networks. Research in the Mathematical Sciences 6 , 1\n(2019), 1.\n[100] Khoo, Y., Lu, J., and Ying, L. Solving parametric PDE problems with arti\fcial\nneural networks. European Journal of Applied Mathematics (2020), 115.\n[101] Khoo, Y., and Ying, L. SwitchNet: a neural network model for forward and\ninverse scattering problems. SIAM Journal on Scienti\fc Computing 41 , 5 (2019),\nA3182{A3201.\n[102] Kingma, D., and Ba, J. Adam: a method for stochastic optimization. In Proceed-\nings of the International Conference on Learning Representations (ICLR) (2015).\n[103] Kolmogorov, A., Petrovskii, I., and Piscunov, N. A study of the equation\nof di\u000busion with increase in the quantity of matter, and its application to a biological\nproblem. Moscow University Bulletin of Mathematics , 1 (1937), 1{26.\n[104] Kutyniok, G., Petersen, P., Raslan, M., and Schneider, R. A the-\noretical analysis of deep neural networks and parametric PDEs. arXiv preprint\narXiv:1904.00377 (2019).\n[105] Labart, C., and Lelong, J. A parallel algorithm for solving BSDEs. Monte\nCarlo Methods and Applications 19 , 1 (2013), 11{39.\n[106] Lagaris, I. E., Likas, A., and Fotiadis, D. I. Arti\fcial neural networks for\nsolving ordinary and partial di\u000berential equations. IEEE Transactions on Neural\nNetworks 9 , 5 (1998), 987{1000.\n[107] Lasry, J.-M., and Lions, P.-L. Jeux champ moyen. I. Le cas stationnaire. C.\nR. Math. Acad. Sci. Paris 9 (2006), 619{625.\n[108] Lasry, J.-M., and Lions, P.-L. Jeux champ moyen. II. Horizon \fni et contrle\noptimal. C. R. Math. Acad. Sci. Paris 10 (2006), 679{684.\n37\n[109] Lasry, J.-M., and Lions, P.-L. Mean \feld games. Japanese Journal of Mathe-\nmatics 2 (2007), 229{260.\n[110] Lee, H., and Kang, I. S. Neural algorithm for solving di\u000berential equations.\nJournal of Computational Physics 91 , 1 (1990), 110{131.\n[111] Lin, A. T., Fung, S. W., Li, W., Nurbekyan, L., and Osher, S. J. APAC-\nNet: Alternating the population and agent control via two neural networks to solve\nhigh-dimensional stochastic mean \feld games. arXiv preprint arXiv:2002.10113\n(2020).\n[112] Luo, D., and Clark, B. K. Back\row transformations via neural networks\nfor quantum many-body wave functions. Physical Review Letters 122 , 22 (2019),\n226401.\n[113] Lye, K. O., Mishra, S., and Ray, D. Deep learning observables in computa-\ntional \ruid dynamics. Journal of Computational Physics (2020), 109339.\n[114] Magill, M., Qureshi, F., and de Haan, H. Neural networks trained to solve\ndi\u000berential equations learn general representations. In Advances in Neural Informa-\ntion Processing Systems (2018), pp. 4071{4081.\n[115] McKean, H. P. Application of Brownian motion to the equation of Kolmogorov-\nPetrovskii-Piskunov. Communications on Pure and Applied Mathematics 28 , 3\n(1975), 323{331.\n[116] Muller, J., and Zeinhofer, M. Deep Ritz revisited. arXiv preprint\narXiv:1912.03937 (2019).\n[117] Nakamura-Zimmerer, T., Gong, Q., and Kang, W. 
Adaptive deep learn-\ning for high dimensional Hamilton-Jacobi-Bellman equations. arXiv preprint\narXiv:1907.05317 (2019).\n[118] Nusken, N., and Richter, L. Solving high-dimensional Hamilton-Jacobi-\nBellman PDEs using neural networks: perspectives from the theory of controlled\ndi\u000busions and measures on path space. arXiv preprint arXiv:2005.05409 (2020).\n[119] Oksendal, B. Stochastic Di\u000berential Equations: An Introduction with Applica-\ntions . Springer Science & Business Media, 2013.\n[120] Pardoux, \u0013E., and Peng, S. Backward stochastic di\u000berential equations and\nquasilinear parabolic partial di\u000berential equations. In Stochastic partial di\u000berential\nequations and their applications (Charlotte, NC, 1991) , vol. 176 of Lecture Notes in\nControl and Inform. Sci. Springer, Berlin, 1992, pp. 200{217.\n[121] Pardoux, \u0013E., and Tang, S. Forward-backward stochastic di\u000berential equations\nand quasilinear parabolic PDEs. Probab. Theory Related Fields 114 , 2 (1999), 123{\n150.\n38\n[122] Pfau, D., Spencer, J. S., Matthews, A. G. d. G., and Foulkes, W. Ab-\ninitio solution of the many-electron Schr odinger equation with deep neural networks.\narXiv preprint arXiv:1909.02487 (2019).\n[123] Pham, H. Feynman-Kac representation of fully nonlinear PDEs and applications.\nActa Mathematica Vietnamica 40 , 2 (2015), 255{269.\n[124] Pham, H., Pham, H., and Warin, X. Neural networks-based backward scheme\nfor fully nonlinear PDEs. arXiv preprint arXiv:1908.00412 (2019).\n[125] Raissi, M. Deep hidden physics models: Deep learning of nonlinear partial di\u000ber-\nential equations. The Journal of Machine Learning Research 19 , 1 (2018), 932{955.\n[126] Raissi, M. Forward-backward stochastic neural networks: Deep learning of high-\ndimensional partial di\u000berential equations. arXiv preprint arXiv:1804.07010 (2018).\n[127] Raissi, M., Perdikaris, P., and Karniadakis, G. E. Physics-informed neural\nnetworks: A deep learning framework for solving forward and inverse problems\ninvolving nonlinear partial di\u000berential equations. Journal of Computational Physics\n378(2019), 686{707.\n[128] Rao, A. V. A survey of numerical methods for optimal control. Advances in the\nAstronautical Sciences 135 , 1 (2009), 497{528.\n[129] Reisinger, C., and Zhang, Y. Recti\fed deep neural networks overcome the curse\nof dimensionality for nonsmooth value functions in zero-sum games of nonlinear sti\u000b\nsystems. arXiv preprint arXiv:1903.06652 (2019).\n[130] Ruszczynski, A., and Yao, J. A dual method for evaluation of dynamic risk in\ndi\u000busion processes. arXiv preprint arXiv:1701.06234 (2017).\n[131] Ruthotto, L., Osher, S. J., Li, W., Nurbekyan, L., and Fung, S. W. A\nmachine learning framework for solving high-dimensional mean \feld game and mean\n\feld control problems. Proceedings of the National Academy of Sciences (2020).\n[132] Sirignano, J., and Spiliopoulos, K. DGM: A deep learning algorithm for\nsolving partial di\u000berential equations. Journal of Computational Physics 375 (2018),\n1339{1364.\n[133] Skorokhod, A. V. Branching di\u000busion processes. Theory of Probability & Its\nApplications 9 , 3 (1964), 445{449.\n[134] Strang, G., and Fix, G. J. An Analysis of the Finite Element Method . Prentice-\nHall, 1973.\n[135] Sutton, R. S., and Barto, A. G. Reinforcement Learning: An Introduction .\nMIT press, 2018.\n39\n[136] Uchiyama, T., and Sonehara, N. Solving inverse problems in nonlinear PDEs\nby recurrent neural networks. 
In IEEE International Conference on Neural Networks\n(1993), IEEE, pp. 99{102.\n[137] Von Petersdorff, T., and Schwab, C. Numerical solution of parabolic equa-\ntions in high dimensions. ESAIM: Mathematical Modelling and Numerical Analysis\n38, 1 (2004), 93{127.\n[138] Wang, H., Chen, H., Sudjianto, A., Liu, R., and Shen, Q. Deep learning-\nbased BSDE solver for LIBOR market model with application to bermudan swaption\npricing and hedging. arXiv preprint arXiv:1807.06622 (2018).\n[139] Warin, X. Variations on branching methods for non linear PDEs. arXiv preprint\narXiv:1701.07660 (2017).\n[140] Warin, X. Monte Carlo for high-dimensional degenerated semi linear and full non\nlinear PDEs. arXiv preprint arXiv:1805.05078 (2018).\n[141] Warin, X. Nesting Monte Carlo for high-dimensional non-linear PDEs. Monte\nCarlo Methods and Applications 24 , 4 (2018), 225{247.\n[142] Watanabe, S. On the branching process for Brownian particles with an absorbing\nboundary. Journal of Mathematics of Kyoto University 4 , 2 (1965), 385{398.\n[143] Zang, Y., Bao, G., Ye, X., and Zhou, H. Weak adversarial networks for\nhigh-dimensional partial di\u000berential equations. Journal of Computational Physics\n(2020), 109409.\n[144] Zhang, D., Guo, L., and Karniadakis, G. E. Learning in modal space:\nSolving time-dependent stochastic PDEs using physics-informed neural networks.\nSIAM Journal on Scienti\fc Computing 42 , 2 (2020), A639{A665.\n[145] Zhang, D., Lu, L., Guo, L., and Karniadakis, G. E. Quantifying total\nuncertainty in physics-informed neural networks for solving forward and inverse\nstochastic problems. Journal of Computational Physics 397 (2019), 108850.\n[146] Zhang, J. A numerical scheme for BSDEs. The Annals of Applied Probability 14 ,\n1 (2004), 459{488.\n[147] Zhang, Y., Wang, H., Chen, W., Zeng, J., Zhang, L., Wang, H., and E,\nW. DP-GEN: A concurrent learning platform for the generation of reliable deep\nlearning based potential energy models. Computer Physics Communications (2020),\n107206.\n40",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "j8ydZSCclm",
"year": null,
"venue": "INFOCOM 2017",
"pdf_link": "https://ieeexplore.ieee.org/iel7/8049192/8056940/08056947.pdf",
"forum_link": "https://openreview.net/forum?id=j8ydZSCclm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "CoCloud: Enabling efficient cross-cloud file collaboration based on inefficient web APIs",
"authors": [
"Jinlong E",
"Yong Cui",
"Peng Wang",
"Zhenhua Li",
"Chaokun Zhang"
],
"abstract": "Cloud storage services such as Dropbox have been widely used for file collaboration among multiple users. However, this desirable functionality is yet restricted to the “walled-garden” of each service. At present, the only effective approach to cross-cloud file collaboration seems to be using web APIs, whose performance is known to be highly unstable and unpredictable. Now that using inefficient web APIs is inevitable, in this paper we attempt to achieve sound user-perceived performance for cross-cloud file collaboration. This attempt is enabled by two key observations from real-world measurements. First, for each cloud, we are always able to deploy one or several nearby (client) proxies which can efficiently access the web APIs. Second, during file collaboration, significant similarity exists among different versions of a file. This can be exploited to substantially reduce inter-proxy traffic and thus shorten the data sync time. Guided by the observations, we design and implement an open-source prototype system called CoCloud. Currently, it supports file collaboration among four popular cloud storage services in the US and China. Its performance is well acceptable to users under representative workloads, even approaching or exceeding intra-cloud performance in many cases.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "qW5rqleEbm",
"year": null,
"venue": "IEEE Trans. Parallel Distributed Syst. 2018",
"pdf_link": "https://ieeexplore.ieee.org/iel7/71/8172504/08030092.pdf",
"forum_link": "https://openreview.net/forum?id=qW5rqleEbm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "CoCloud: Enabling Efficient Cross-Cloud File Collaboration Based on Inefficient Web APIs",
"authors": [
"Jinlong E",
"Yong Cui",
"Peng Wang",
"Zhenhua Li",
"Chaokun Zhang"
],
"abstract": "Cloud storage services such as Dropbox have been widely used for file collaboration among multiple users. However, this desirable functionality is yet restricted to the “walled-garden” of each service. At present, the only feasible approach to cross-cloud file collaboration seems to be using web APIs, whose performance is known to be highly unstable and unpredictable. Now that using inefficient web APIs is inevitable, in this paper we attempt to achieve sound <italic xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">user-perceived</i> performance for cross-cloud file collaboration. This attempt is enabled by two key observations from real-world measurements. First, for each cloud, we are always able to deploy one or several nearby (client) proxies which can efficiently access the web APIs. Second, during file collaboration, significant similarity exists among different versions of a file. This can be exploited to substantially reduce inter-proxy traffic and thus shorten the data sync time. Guided by the observations, we design and implement an open-source prototype system called CoCloud. Currently, it supports file collaboration among four popular cloud storage services in the US and China. Its performance is well acceptable to users under representative workloads, even approaching or exceeding that of intra-cloud collaboration in many cases.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "bVep54ueFa",
"year": null,
"venue": "INFOCOM 2023",
"pdf_link": "https://ieeexplore.ieee.org/iel7/10228851/10228852/10228926.pdf",
"forum_link": "https://openreview.net/forum?id=bVep54ueFa",
"arxiv_id": null,
"doi": null
}
|
{
"title": "WiseCam: Wisely Tuning Wireless Pan-Tilt Cameras for Cost-Effective Moving Object Tracking",
"authors": [
"Jinlong E",
"Lin He",
"Zhenhua Li",
"Yunhao Liu"
],
"abstract": "With desired functionality of moving object tracking, wireless pan-tilt cameras are able to play critical roles in a growing diversity of surveillance environments. However, today's pan-tilt cameras oftentimes underperform when tracking frequently moving objects like humans – they are prone to lose sight of objects and bring about excessive mechanical rotations that are especially detrimental to those energy-constrained outdoor scenarios. The ineffectiveness and high cost of state-of-the-art tracking approaches are rooted in their adherence to the industry's simplicity principle, which leads to their stateless nature, performing gimbal rotations based only on the latest object detection. To address the issues, we design and implement WiseCam that wisely tunes the pan-tilt cameras to minimize mechanical rotation costs while maintaining long-term object tracking. We examine the performance of WiseCam by experiments on two types of pan-tilt cameras with different motors. Results show that WiseCam significantly outperforms the state-of-the-art tracking approaches on both tracking duration and power consumption.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "COAjdXYAgT",
"year": null,
"venue": "ICDE 2021",
"pdf_link": "https://ieeexplore.ieee.org/iel7/9458599/9458600/09458618.pdf",
"forum_link": "https://openreview.net/forum?id=COAjdXYAgT",
"arxiv_id": null,
"doi": null
}
|
{
"title": "CrowdAtlas: Estimating Crowd Distribution within the Urban Rail Transit System",
"authors": [
"Jinlong E",
"Mo Li",
"Jianqiang Huang"
],
"abstract": "While the urban rail transit systems are playing an increasingly important role in meeting the transportation demands of people, the precise awareness of how the human crowd is distributed within the urban rail transit system is highly necessary, which serves to a range of important applications including emergency response, transit recommendation, commercial valuation, etc. Most urban rail transit systems are closed systems where once entered the travelers are free to move around all stations that are connected into the system and are difficult to track. In this paper, we attempt to estimate the crowd distribution within the urban rail transit system based only on the entrance and exit records of all the rail riders. Specifically, we study Singapore MRT (Mass Rapid Transit) as a vehicle and leverage the tap-in and tap-out records of the EZ-Link transit cards to estimate the crowd distribution. Guided by a key observation that the passenger inflows and arrival flows at various MRT stations are spatio-temporally correlated due to behavioral consistence of MRT riders, we design and implement a machine learning based solution, CrowdAtlas, that accurately estimates the crowd distribution within the MRT system. Our trace-driven performance evaluation demonstrates the effectiveness of CrowdAtlas.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ochdLcRQYu",
"year": null,
"venue": "CoRR 2018",
"pdf_link": "http://arxiv.org/pdf/1807.01083v1",
"forum_link": "https://openreview.net/forum?id=ochdLcRQYu",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Mean-Field Optimal Control Formulation of Deep Learning",
"authors": [
"Weinan E",
"Jiequn Han",
"Qianxiao Li"
],
"abstract": "Recent work linking deep neural networks and dynamical systems opened up new avenues to analyze deep learning. In particular, it is observed that new insights can be obtained by recasting deep learning as an optimal control problem on difference or differential equations. However, the mathematical aspects of such a formulation have not been systematically explored. This paper introduces the mathematical formulation of the population risk minimization problem in deep learning as a mean-field optimal control problem. Mirroring the development of classical optimal control, we state and prove optimality conditions of both the Hamilton-Jacobi-Bellman type and the Pontryagin type. These mean-field results reflect the probabilistic nature of the learning problem. In addition, by appealing to the mean-field Pontryagin's maximum principle, we establish some quantitative relationships between population and empirical learning problems. This serves to establish a mathematical foundation for investigating the algorithmic and theoretical connections between optimal control and deep learning.",
"keywords": [],
"raw_extracted_content": "arXiv:1807.01083v1 [math.OC] 3 Jul 2018Noname manuscript No.\n(will be inserted by the editor)\nA Mean-Field Optimal Control Formulation of Deep\nLearning\nWeinan E, Jiequn Han, Qianxiao Li\nAbstract Recent work linking deep neural networks and dynamical syst ems\nopened up new avenues to analyze deep learning. In particula r, it is observed\nthat new insights can be obtained by recasting deep learning as an optimal\ncontrol problem on difference or differential equations. How ever, the mathe-\nmatical aspects of such a formulation have not been systemat ically explored.\nThis paper introduces the mathematical formulation of the p opulation risk\nminimization problem in deep learning as a mean-field optima l control prob-\nlem. Mirroring the development of classical optimal contro l, we state and prove\noptimality conditions of both the Hamilton-Jacobi-Bellma n type and the Pon-\ntryagin type. These mean-field results reflect the probabili stic nature of the\nlearning problem. In addition, by appealing to the mean-fiel d Pontryagin’s\nmaximum principle, we establish some quantitative relatio nships between pop-\nulation and empirical learning problems. This serves to est ablish a mathemat-\nical foundation for investigating the algorithmic and theo retical connections\nbetween optimal control and deep learning.\n1 Introduction\nDeep learning [1,2,3] has become a primary tool in many moder n machine\nlearning tasks, such as image classification and segmentati on. Consequently,\nthere is a pressing need to provide a solid mathematical fram ework to analyze\nvarious aspects of deep neural networks. The recent line of w ork on linking\nWeinan E\nPrinceton University, Princeton, NJ 08544, USA,\nBeijing Institute of Big Data Research and Peking Universit y, Beijing, China 100871\nJiequn Han\nPrinceton University, Princeton, NJ 08544, USA\nQianxiao Li\nInstitute of High Performance Computing, Agency for Scienc e, Technology and Research.\n1 Fusionopolis Way, Connexis North, Singapore 138632\n2 Weinan E, Jiequn Han, Qianxiao Li\ndynamical systems, optimal control and deep learning has su ggested such a\ncandidate [4,5,6,7,8,9,10,11,12,13]. In this view, ResNe t [14] can be regarded\nas a time-discretization of a continuous-time dynamical sy stem. Learning (usu-\nally in the empirical risk minimization form) is then recast as an optimal con-\ntrol problem, from which novel algorithms [5,6] and network structures [7,\n8,9,10] can be designed. An attractive feature of this appro ach is that, the\ncompositional structure, which is widely considered the es sence of deep neural\nnetworks is explicitly taken into account in the time-evolu tion of the dynamical\nsystems.\nWhile most prior work on the dynamical systems viewpoint of d eep learning\nhave focused on algorithms and network structures, this pap er aims to study\nthe fundamental mathematical aspects of the formulation. I ndeed, we show\nthat the most general formulation of the population risk min imization problem\ncan be regarded as a mean-field optimal control problem , in the sense that the\noptimal control parameters (or equivalently, the trainabl e weights) depend on\nthe population distribution of input-target pairs. Our tas k is then to analyze\nthe mathematical properties of this mean-field control prob lem. 
Mirroring the development of classical optimal control, we will proceed in two parallel, but inter-connected, ways, namely the dynamic programming formalism and the maximum principle formalism.

The paper is organized as follows. We discuss related work in Sec. 2 and introduce the basic mean-field optimal control formulation of deep learning in Sec. 3. In Sec. 4, following the classical dynamic programming approach [15], we introduce and study the properties of a value function for the mean-field control problem whose state space is an appropriate Wasserstein space of probability measures. By defining an appropriate notion of derivative with respect to probability measures, we show that the value function is related to solutions of an infinite-dimensional Hamilton-Jacobi-Bellman (HJB) partial differential equation. With the concept of viscosity solutions [16], we show in Sec. 5 that the HJB equation admits a unique viscosity solution and completely characterize the optimal loss function and the optimal control policy of the mean-field control problem. This establishes a concrete link between the learning problem viewed as a variational problem and the Hamilton-Jacobi-Bellman equation that is associated with the variational problem. It should be noted that the essential ideas in the proofs of Secs. 4 and 5 are not new, but we present our simplified treatment for this particular setting.

Next, in Sec. 6, we develop the more local theory based on the Pontryagin maximum principle (PMP) [17]. We state and prove a mean-field version of the classical PMP that provides necessary conditions for optimal controls. Further, we study situations when the mean-field PMP admits a unique solution, which then implies that it is also sufficient for optimality, provided an optimal solution exists. We will see in Sec. 7 that, compared with the HJB approach, this further requires the time horizon of the learning problem to be small enough. Finally, in Sec. 8 we study the relationship between the population risk minimization problem (cast as a mean-field control problem and characterized by a mean-field PMP) and its empirical risk minimization counterpart (cast as a classical control problem and characterized by a classical, sampled PMP). We prove that under appropriate conditions, for every stable solution of the mean-field PMP, with high probability there exist close-by solutions of the sampled PMP, and the latter converge in probability to the former, with explicit error estimates on both the distance between the solutions and the distance between their loss function values. This provides a type of a priori error estimate that has implications on the generalization ability of neural networks, which is an important and active area of machine learning research. Note that it is not the purpose of this paper to prove the sharpest estimates under the most general conditions; we have thus taken the most convenient but reasonable assumptions, and the results presented could be sharpened with more technical details. In each section from Sec. 4 to Sec. 8, we first present the mathematical results, and then discuss the related implications in deep learning. Furthermore, in this work we shall focus our analysis on the continuous idealization of deep residual networks, but we believe that much of the analysis presented also carries over to the discrete domain (i.e. discrete layers).
2 Related work

The connection between back-propagation and optimal control of dynamical systems has been known since the earlier works on control and deep learning [18,19,20]. Recently, the dynamical systems approach to deep learning was proposed in [4] and explored in the direction of training algorithms based on the PMP and the method of successive approximations [5,6]. In another vein, there are also studies on the continuum limit of neural networks [11,12] and on designing network architectures for deep learning [7,8,9,10] based on dynamical systems and differential equations. Instead of analyzing algorithms or architectures, the present paper focuses on the mathematical aspects of the control formulation itself, and develops a mean-field theory that characterizes the optimality conditions and value functions using both PDE (HJB) and ODE (PMP) approaches. The overarching goal is to develop the mathematical foundations of the optimal control formulation of deep learning.

In the control theory literature, mean-field optimal control is an active area of research. Many works on mean-field games [21,22,23,24], the control of McKean-Vlasov systems [25,26,27], and the control of Cucker-Smale systems [28,29,30] focus on deriving the limiting partial differential equations that characterize the optimal control as the number of agents goes to infinity. This is akin to the theory of the propagation of chaos [31]. Meanwhile, there are also works discussing the stochastic maximum principle for stochastic differential equations of mean-field type [32,33,34]. The present paper differs from all previous works in two aspects. First, in the context of continuous-time deep learning, the problem differs from these previous control formulations in that the source of randomness is the coupled input-target pair (the latter determines the terminal loss function, which can now be regarded as a random function). On the other hand, a simplifying feature in our case is that the dynamics, given the input-target pair, are otherwise deterministic. Second, the dynamics of each random realization are independent of the distribution law of the population, and are coupled only through the shared control parameters. This is to be contrasted with optimal control of McKean-Vlasov dynamics [34,26,27] or mean-field games [21,22,23,24], where the population law directly enters the dynamical equations (and not just through the shared control). Thus, in this sense our dynamical equations are much simpler to analyze. Consequently, although some of our results can be deduced from more general mean-field analysis in the control literature, here we will present simplified derivations tailored to our setting. Note also that there are neural network structures (e.g. batch-normalization) that can be considered to have explicit mean-field dynamics, and we defer this discussion to Sec. 9.

3 From ResNets to mean-field optimal control

Let us now present the optimal control formulation of deep learning as introduced in [4,5,6]. In the simplest form, the feed-forward propagation in a T-layer residual network can be represented by the difference equations

x_{t+1} = x_t + f(x_t, θ_t),  t = 0, ..., T−1,  (1)

where x_0 is the input (image, time-series, etc.) and x_T is the final output. The final output is then compared with some target y_0 corresponding to x_0 via some loss function. The goal of learning is to tune the trainable parameters θ_0, ..., θ_{T−1} such that x_T is close to y_0.
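The correspondence in (1) is easy to exercise numerically. The following minimal sketch (Python with numpy; the tanh layer, the width d, and the explicit step size h are illustrative assumptions, not anything prescribed here) implements a toy residual forward pass as an explicit Euler discretization of the dynamics:

import numpy as np

def f(x, theta):
    # One layer's "velocity field": here f(x, theta) = tanh(W x + b).
    W, b = theta
    return np.tanh(W @ x + b)

def resnet_forward(x0, thetas, h=1.0):
    # Eq. (1) with an explicit step size h: h = 1 is the plain ResNet update,
    # while h -> 0 with T*h fixed approaches the ODE idealization below.
    x = x0
    for theta in thetas:
        x = x + h * f(x, theta)
    return x

rng = np.random.default_rng(0)
d, T = 4, 10
thetas = [(0.1 * rng.standard_normal((d, d)), np.zeros(d)) for _ in range(T)]
x_T = resnet_forward(rng.standard_normal(d), thetas)

Setting h = 1 recovers the standard ResNet update; shrinking h while growing the number of layers approaches the continuous-time idealization introduced next.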
The only change in the continuous-time idealization of deep residual learning, which we will subsequently focus on, is that instead of the difference equation (1), the forward dynamics is now a differential equation. Let us now introduce this formulation more precisely.

Let (Ω, F, P) be a fixed and sufficiently rich probability space so that all subsequently required random variables can be constructed. Suppose x_0 ∈ R^d and y_0 ∈ R^l are random variables jointly distributed according to µ_0 := P_{(x_0, y_0)} (hereafter, for each random variable X we denote its distribution or law by P_X). This represents the distribution of the input-target pairs, which we assume can be embedded in Euclidean spaces. Consider a set of admissible controls or training weights Θ ⊆ R^m. In typical deep learning, Θ is taken as the whole space R^m, but here we consider the more general case where Θ can be constrained. Fix T > 0 (network “depth”) and let f (feed-forward dynamics), Φ (terminal loss function) and L (regularizer) be functions

f: R^d × Θ → R^d,  Φ: R^d × R^l → R,  L: R^d × Θ → R.

We define the state dynamics as the ordinary differential equation (ODE)

ẋ_t = f(x_t, θ_t)  (2)

with initial condition equal to the random variable x_0. Thus, this is a stochastic ODE, whose only source of randomness is the initial condition. Consider the set of essentially bounded measurable controls L^∞([0,T], Θ). To improve clarity, we will reserve bold-faced letters for path-space quantities. For example, θ ≡ {θ_t : 0 ≤ t ≤ T}. In contrast, variables/functions taking values in finite-dimensional Euclidean spaces are not bold-faced.

The population risk minimization problem in deep learning can hence be posed as the following mean-field optimal control problem

inf_{θ ∈ L^∞([0,T],Θ)} J(θ) := E_{µ_0} [ Φ(x_T, y_0) + ∫_0^T L(x_t, θ_t) dt ],  subject to (2).  (3)

The term “mean-field” highlights the fact that θ is shared by a whole population of input-target pairs, and the optimal control must depend on the law of the input-target random variables. Strictly speaking, the law of x does not enter the forward equations explicitly (unlike, e.g., McKean-Vlasov control [34]), and hence our forward dynamics are not explicitly in mean-field form. Nevertheless, we will use the term “mean-field” to emphasize the dependence of the control on the population distribution.

In contrast, if we were to perform empirical risk minimization, as is often the case in practice (and is the case analyzed by previous work on algorithms [5,6]), we would first draw i.i.d. samples {x_0^i, y_0^i}_{i=1}^N ∼ µ_0 and pose the sampled optimal control problem

inf_{θ ∈ L^∞([0,T],Θ)} J_N(θ) := (1/N) Σ_{i=1}^N [ Φ(x_T^i, y_0^i) + ∫_0^T L(x_t^i, θ_t) dt ],  subject to ẋ_t^i = f(x_t^i, θ_t), i = 1, ..., N.  (4)

Thus, the solutions of sampled optimal control problems are typically random variables. We now focus our analysis on the mean-field problem (3) and only later, in Sec. 8, relate it with the sampled problem (4).
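For concreteness, the sampled objective J_N in (4) can be evaluated by coupling a forward Euler integration of (2) with a Monte Carlo average. The sketch below is only an illustration: the quadratic terminal loss, the weight-decay regularizer, and the piecewise-constant control path are hypothetical choices. Note that every sample shares the same control path, which is exactly the mean-field coupling discussed above.

import numpy as np

def f(x, theta):
    W, b = theta
    return np.tanh(W @ x + b)

def empirical_risk(samples, theta_path, h, Phi, L):
    # J_N of (4): integrate the dynamics by forward Euler for each sampled
    # pair (x0, y0), accumulate the running regularizer, add the terminal loss.
    total = 0.0
    for x0, y0 in samples:
        x, running = x0.copy(), 0.0
        for theta in theta_path:     # theta_path is a discretization of t -> theta_t
            running += h * L(x, theta)
            x = x + h * f(x, theta)
        total += Phi(x, y0) + running
    return total / len(samples)

rng = np.random.default_rng(1)
d, l, N, steps, h = 4, 1, 32, 20, 0.1
samples = [(rng.standard_normal(d), rng.standard_normal(l)) for _ in range(N)]
theta_path = [(0.1 * rng.standard_normal((d, d)), np.zeros(d))] * steps
Phi = lambda x, y: 0.5 * float(np.sum((x[:1] - y) ** 2))   # read out first coordinate
L = lambda x, theta: 1e-3 * float(np.sum(theta[0] ** 2))   # weight decay
print(empirical_risk(samples, theta_path, h, Phi, L))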
Additional Notation

Throughout this paper, we always use w to denote the concatenated (d+l)-dimensional variable (x, y), where x ∈ R^d and y ∈ R^l. Correspondingly, f̄(w, θ) := (f(x, θ), 0) is the extended (d+l)-dimensional feed-forward function, L̄(w, θ) := L(x, θ) is the extended (d+l)-dimensional regularization loss, and Φ̄(w) := Φ(x, y) still denotes the terminal loss function. We denote by x · y the inner product of two Euclidean vectors x and y with the same dimension. The Euclidean norm is denoted by ‖·‖ and the absolute value by |·|. Gradient operators on Euclidean spaces are denoted by ∇ with subscripts indicating the variable with which the derivative is taken. In contrast, we use D to represent the Fréchet derivative on Banach spaces. Namely, if x ∈ U and F: U → V is a mapping between two Banach spaces (U, ‖·‖_U) and (V, ‖·‖_V), then DF(x) is defined as the linear operator DF(x): U → V s.t.

r(x, y) := ‖F(x+y) − F(x) − DF(x)y‖_V / ‖y‖_U → 0,  as ‖y‖_U → 0.  (5)

For a matrix A, we use the symbol A ⪯ 0 to mean that A is negative semi-definite.

Let the Banach space L^∞([0,T], E) be the set of essentially bounded measurable functions from [0,T] to E, where E is a subset of a Euclidean space with the usual Lebesgue measure. The norm is ‖x‖_{L^∞([0,T],E)} = ess sup_{t ∈ [0,T]} ‖x(t)‖, and we shall write for brevity ‖·‖_{L^∞} in place of ‖·‖_{L^∞([0,T],E)}. In this paper, E is often either Θ or R^d, and the path-space variables we consider, such as the controls θ, will mostly be defined in this space.

As this paper introduces a mean-field optimal control approach, we also need some notation for the random variables and their distributions. We use the shorthand L²(Ω, R^{d+l}) for L²((Ω, F, P), R^{d+l}), the set of R^{d+l}-valued square integrable random variables. We equip this Hilbert space with the norm ‖X‖_{L²} := (E‖X‖²)^{1/2} for X ∈ L²(Ω, R^{d+l}). We denote by P₂(R^{d+l}) the set of square integrable probability measures on the Euclidean space R^{d+l}. Note that X ∈ L²(Ω, R^{d+l}) if and only if P_X ∈ P₂(R^{d+l}). The space P₂(R^{d+l}) is regarded as a metric space equipped with the 2-Wasserstein distance

W₂(µ, ν) := inf { ( ∫_{R^{d+l} × R^{d+l}} ‖w − z‖² π(dw, dz) )^{1/2} | π ∈ P₂(R^{d+l} × R^{d+l}) with marginals µ and ν }
          = inf { ‖X − Y‖_{L²} | X, Y ∈ L²(Ω, R^{d+l}) with P_X = µ, P_Y = ν }.

For µ ∈ P₂(R^{d+l}), we also define ‖µ‖_{L²} := ( ∫_{R^{d+l}} ‖w‖² µ(dw) )^{1/2}.

Given a measurable function ψ: R^{d+l} → R^q that is square integrable with respect to µ, we use the notation

⟨ψ(·), µ⟩ := ∫_{R^{d+l}} ψ(w) µ(dw).

Now, we introduce some notation for the dynamical evolution of probabilities. Given ξ ∈ L²(Ω, R^{d+l}) and a control process θ ∈ L^∞([0,T], Θ), we consider the following dynamical system for t ≤ s ≤ T:

W^{t,ξ,θ}_s = ξ + ∫_t^s f̄(W^{t,ξ,θ}_r, θ_r) dr.

Note that W^{t,ξ,θ}_s is always square integrable given that f̄(w, θ) is Lipschitz continuous with respect to w. Let µ = P_ξ ∈ P₂(R^{d+l}); we denote the law of W^{t,ξ,θ}_s for simplicity by

P^{t,µ,θ}_s := P_{W^{t,ξ,θ}_s}.

This is valid since the law of W^{t,ξ,θ}_s should only depend on the law of ξ and not on the random variable itself.
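Since W₂ drives all of the continuity estimates in Sec. 4, it may help to note that between two empirical measures with the same number of atoms, the infimum over couplings in the definition above reduces to a minimum-cost assignment, which can be solved exactly. A small sketch (Python, assuming scipy is available):

import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_empirical(X, Y):
    # Exact W2 between two empirical measures with the same number of atoms:
    # the infimum over couplings in the definition reduces to an assignment.
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # squared costs
    i, j = linear_sum_assignment(C)
    return float(np.sqrt(C[i, j].mean()))

rng = np.random.default_rng(2)
mu_atoms = rng.standard_normal((128, 3))
nu_atoms = 0.5 + rng.standard_normal((128, 3))
print(w2_empirical(mu_atoms, nu_atoms))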
This notation also allows us to write down the flow or semi-group property of the dynamical system as

P^{t,µ,θ}_s = P^{t̂, P^{t,µ,θ}_{t̂}, θ}_s,  (6)

for all 0 ≤ t ≤ t̂ ≤ s ≤ T, µ ∈ P₂(R^{d+l}), θ ∈ L^∞([0,T], Θ).

Finally, throughout the results and proofs, we will use K or C with subscripts as names for generic constants, whose values may change from line to line when there is no need for them to be distinct. In general, these constants may implicitly depend on T and the ambient dimensions d, m, but for brevity we omit this dependence in the rest of the paper.

4 Mean-field dynamic programming principle and HJB equation

We begin our analysis of (3) by employing the dynamic programming principle and the Hamilton-Jacobi-Bellman formalism. In this approach, the key idea is to define a value function that corresponds to the optimal loss of the control problem (3), but under a general starting time and starting state. One can then derive a partial differential equation (Hamilton-Jacobi-Bellman equation, or HJB equation) to be satisfied by such a value function, which characterizes both the optimal loss function value and the optimal control policy of the original control problem. Compared to the classical optimal control case corresponding to empirical risk minimization in learning, here the value function's state argument is no longer a finite-dimensional vector, but an infinite-dimensional object corresponding to the joint distribution of the input-target pair. We shall interpret it as an element of a suitable Wasserstein space. The detailed mathematical definition of this value function and its basic properties are discussed in Subsec. 4.1.

In the finite-dimensional case, the HJB equation is a classical partial differential equation. In contrast, since the state variables we are dealing with are probability measures rather than Euclidean vectors, we need a concept of derivative with respect to a probability measure, as introduced by Lions in his course at Collège de France [35]. We give a brief introduction of this concept in Subsec. 4.2 and refer readers to the lecture notes [36] for more details. We then present the resulting infinite-dimensional HJB equation in Subsec. 4.3.

Throughout this section and the next section (Sec. 5), we assume

(A1) f, L, Φ are bounded; f, L, Φ are Lipschitz continuous with respect to x, and the Lipschitz constants of f and L are independent of θ.
(A2) µ_0 ∈ P₂(R^{d+l}).

4.1 Value function and its properties

Adopting the viewpoint of taking probability measures µ ∈ P₂(R^{d+l}) as state variables, we can define a time-dependent objective functional

J(t, µ, θ) := E_{(x_t, y_0) ∼ µ} [ Φ(x_T, y_0) + ∫_t^T L(x_s, θ_s) ds ]  (subject to (2))
           = ⟨Φ̄(·), P^{t,µ,θ}_T⟩ + ∫_t^T ⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ ds.  (7)

The second line in the above is just a rewriting of the first line based on the notation introduced earlier. Here, we abuse the notation J in (3) for the new objective functional, which now has the additional arguments t, µ. Of course, J(θ) in (3) corresponds to J(0, µ_0, θ) in (7).

The value function v*(t, µ) is defined as a real-valued function on [0,T] × P₂(R^{d+l}) through

v*(t, µ) = inf_{θ ∈ L^∞([0,T],Θ)} J(t, µ, θ).  (8)
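The proofs below lean repeatedly on the flow property (6). Because the dynamics are deterministic given the control, (6) can be checked at the particle level: pushing an empirical approximation of µ through the discretized dynamics in one go, or in two stages, yields the same ensemble. A minimal sketch (the dimensions and the tanh drift are illustrative assumptions):

import numpy as np

def push(P, thetas, h):
    # Push an empirical measure (rows = particles in R^{d+l}, here d = 3, l = 1)
    # through w' = fbar(w, theta); the y-block stays fixed, as in fbar = (f, 0).
    for W in thetas:
        X, Y = P[:, :3], P[:, 3:]
        P = np.hstack([X + h * np.tanh(X @ W), Y])
    return P

rng = np.random.default_rng(3)
P0 = rng.standard_normal((256, 4))
thetas = [0.1 * rng.standard_normal((3, 3)) for _ in range(8)]
# Flow property (6): pushing 0 -> 8 steps equals pushing 0 -> 3 then 3 -> 8.
A = push(P0, thetas, h=0.1)
B = push(push(P0, thetas[:3], h=0.1), thetas[3:], h=0.1)
assert np.allclose(A, B)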
If we assume θ* attains the infimum in (3), then by definition J(θ*) = v*(0, µ_0).

The following proposition shows the continuity of the value function.

Proposition 1. The function (t, µ) ↦ J(t, µ, θ) is Lipschitz continuous on [0,T] × P₂(R^{d+l}), uniformly with respect to θ ∈ L^∞([0,T], Θ), and the value function v*(t, µ) is Lipschitz continuous on [0,T] × P₂(R^{d+l}).

Proof. We first establish some elementary estimates based on the assumptions. We suppose

⟨L̄(·, θ), µ⟩ ≤ C.  (9)

Let X, Y ∈ L²(Ω, R^{d+l}) be such that P_X = µ, P_Y = µ̂; the Lipschitz continuity of L̄ gives us

|⟨L̄(·, θ), µ⟩ − ⟨L̄(·, θ), µ̂⟩| = |E[L̄(X, θ) − L̄(Y, θ)]| ≤ K_L ‖X − Y‖_{L²}.

Note that in the preceding inequality the left side does not depend on the choice of X, Y while the right side does. Hence we can take the infimum over all the joint choices of X, Y to get

|⟨L̄(·, θ), µ⟩ − ⟨L̄(·, θ), µ̂⟩| ≤ K_L × inf { ‖X − Y‖_{L²} | X, Y ∈ L²(Ω, R^{d+l}) with P_X = µ, P_Y = µ̂ } ≤ K_L W₂(µ, µ̂).  (10)

The same argument applied to Φ̄ gives us

|⟨Φ̄(·), µ⟩ − ⟨Φ̄(·), µ̂⟩| ≤ K_L W₂(µ, µ̂).  (11)

For the deterministic ODE

dw^θ_t/dt = f̄(w^θ_t, θ_t),  w^θ_0 = w_0,

define the induced flow map as h(t, w_0, θ) := w^θ_t. Using Gronwall's inequality with the boundedness and Lipschitz continuity of f̄, we know

‖h(t, w, θ) − h(t, ŵ, θ)‖ ≤ K_L ‖w − ŵ‖,  ‖h(t, w, θ) − h(t̂, w, θ)‖ ≤ K_L |t − t̂|.

Therefore we use the definition of the Wasserstein distance to obtain

W₂(P^{t,µ,θ}_s, P^{t,µ̂,θ}_s) = inf { ‖X − Y‖_{L²} | X, Y ∈ L²(Ω, R^{d+l}) with P_X = P^{t,µ,θ}_s, P_Y = P^{t,µ̂,θ}_s }
  = inf { ‖h(s−t, X, θ) − h(s−t, Y, θ)‖_{L²} | X, Y ∈ L²(Ω, R^{d+l}) with P_X = µ, P_Y = µ̂ }
  ≤ inf { K_L ‖X − Y‖_{L²} | X, Y ∈ L²(Ω, R^{d+l}) with P_X = µ, P_Y = µ̂ }
  = K_L W₂(µ, µ̂)  (12)

and similarly

W₂(P^{t,µ,θ}_s, µ) ≤ K_L |s − t|.  (13)

The flow property (6) and estimates (12), (13) together give us

W₂(P^{t,µ,θ}_s, P^{t̂,µ̂,θ}_s) = W₂(P^{t̂, P^{t,µ,θ}_{t̂}, θ}_s, P^{t̂,µ̂,θ}_s) ≤ K_L W₂(P^{t,µ,θ}_{t̂}, µ̂) ≤ K_L (|t − t̂| + W₂(µ, µ̂)).  (14)

Now for all 0 ≤ t ≤ t̂ ≤ T, µ, µ̂ ∈ P₂(R^{d+l}), θ ∈ L^∞([0,T], Θ), we employ (9), (10), (11), and (14) to obtain

|J(t, µ, θ) − J(t̂, µ̂, θ)|
  ≤ ∫_t^{t̂} |⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩| ds + ∫_{t̂}^T |⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ − ⟨L̄(·, θ_s), P^{t̂,µ̂,θ}_s⟩| ds + |⟨Φ̄(·), P^{t,µ,θ}_T⟩ − ⟨Φ̄(·), P^{t̂,µ̂,θ}_T⟩|
  ≤ C |t̂ − t| + K_L sup_{t̂ ≤ s ≤ T} W₂(P^{t,µ,θ}_s, P^{t̂,µ̂,θ}_s)
  ≤ K_L (|t − t̂| + W₂(µ, µ̂)),

which gives us the desired Lipschitz continuity property.

Finally, combining the fact that

|v*(t, µ) − v*(t̂, µ̂)| ≤ sup_{θ ∈ L^∞([0,T],Θ)} |J(t, µ, θ) − J(t̂, µ̂, θ)|,  ∀ t, t̂ ∈ [0,T], µ, µ̂ ∈ P₂(R^{d+l}),

with the fact that J(t, µ, θ) is Lipschitz continuous at (t, µ) ∈ [0,T] × P₂(R^{d+l}), uniformly with respect to θ ∈ L^∞([0,T], Θ), we deduce that the value function v*(t, µ) is Lipschitz continuous on [0,T] × P₂(R^{d+l}).
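Estimate (13) is also easy to observe numerically: propagating a particle approximation of µ and measuring the empirical W₂ back to the initial ensemble shows roughly linear growth in the elapsed time, in line with W₂(P^{t,µ,θ}_s, µ) ≤ K_L|s − t|. A sketch under the same illustrative assumptions as before (tanh drift, scipy assignment for the empirical W₂):

import numpy as np
from scipy.optimize import linear_sum_assignment

def w2(X, Y):
    # Exact empirical W2 for equal-size atom sets via min-cost assignment.
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    i, j = linear_sum_assignment(C)
    return float(np.sqrt(C[i, j].mean()))

rng = np.random.default_rng(4)
P = rng.standard_normal((200, 2))
P0, h = P.copy(), 0.05
for k in range(1, 9):
    P = P + h * np.tanh(P)           # one Euler step of the flow
    print(k * h, w2(P, P0))          # grows roughly linearly in s - t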
The important observation we now make is that the value function satisfies a recursive relation. This is known as the dynamic programming principle, which forms the basis of deriving the Hamilton-Jacobi-Bellman equation. Intuitively, the dynamic programming principle states that for any optimal trajectory, starting from any intermediate state in the trajectory, the remaining trajectory must again be optimal, starting from that time and state. We now state and prove this intuitive statement precisely.

Proposition 2 (Dynamic programming principle). For all 0 ≤ t ≤ t̂ ≤ T, µ ∈ P₂(R^{d+l}), we have

v*(t, µ) = inf_{θ ∈ L^∞([0,T],Θ)} [ ∫_t^{t̂} ⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ ds + v*(t̂, P^{t,µ,θ}_{t̂}) ].  (15)

Proof. The proof is elementary, as in the context of deterministic control problems. We provide it as follows for completeness.

1). Given fixed t, t̂, µ and any θ¹ ∈ L^∞([0,T], Θ), we consider the probability measure P^{t,µ,θ¹}_{t̂}. Fix ε > 0; by the definition of the value function (8) we can pick θ² ∈ L^∞([0,T], Θ) satisfying

v*(t̂, P^{t,µ,θ¹}_{t̂}) + ε ≥ ⟨Φ̄(·), P^{t̂, P^{t,µ,θ¹}_{t̂}, θ²}_T⟩ + ∫_{t̂}^T ⟨L̄(·, θ²_s), P^{t̂, P^{t,µ,θ¹}_{t̂}, θ²}_s⟩ ds.  (16)

Now consider the control process θ̂ defined as

θ̂_s = 1_{{s < t̂}} θ¹_s + 1_{{s ≥ t̂}} θ²_s.

Thus we can use (16) and the flow property (6) to deduce

v*(t, µ) ≤ ∫_t^T ⟨L̄(·, θ̂_s), P^{t,µ,θ̂}_s⟩ ds + ⟨Φ̄(·), P^{t,µ,θ̂}_T⟩
  = ∫_t^{t̂} ⟨L̄(·, θ̂_s), P^{t,µ,θ̂}_s⟩ ds + ∫_{t̂}^T ⟨L̄(·, θ̂_s), P^{t,µ,θ̂}_s⟩ ds + ⟨Φ̄(·), P^{t,µ,θ̂}_T⟩
  = ∫_t^{t̂} ⟨L̄(·, θ̂_s), P^{t,µ,θ̂}_s⟩ ds + ∫_{t̂}^T ⟨L̄(·, θ²_s), P^{t̂, P^{t,µ,θ¹}_{t̂}, θ²}_s⟩ ds + ⟨Φ̄(·), P^{t̂, P^{t,µ,θ¹}_{t̂}, θ²}_T⟩
  ≤ ∫_t^{t̂} ⟨L̄(·, θ̂_s), P^{t,µ,θ̂}_s⟩ ds + v*(t̂, P^{t,µ,θ¹}_{t̂}) + ε
  = ∫_t^{t̂} ⟨L̄(·, θ¹_s), P^{t,µ,θ¹}_s⟩ ds + v*(t̂, P^{t,µ,θ¹}_{t̂}) + ε.

As θ¹ and ε are both arbitrary, we have

v*(t, µ) ≤ inf_{θ ∈ L^∞([0,T],Θ)} [ ∫_t^{t̂} ⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ ds + v*(t̂, P^{t,µ,θ}_{t̂}) ].
2). Fix ε > 0 again and choose, by the definition of the value function, θ³ ∈ L^∞([0,T], Θ) such that

v*(t, µ) + ε ≥ ∫_t^T ⟨L̄(·, θ³_s), P^{t,µ,θ³}_s⟩ ds + ⟨Φ̄(·), P^{t,µ,θ³}_T⟩.

Using the flow property (6) and the definition of the value function again gives us the estimate

v*(t, µ) + ε ≥ ∫_t^{t̂} ⟨L̄(·, θ³_s), P^{t,µ,θ³}_s⟩ ds + ∫_{t̂}^T ⟨L̄(·, θ³_s), P^{t̂, P^{t,µ,θ³}_{t̂}, θ³}_s⟩ ds + ⟨Φ̄(·), P^{t̂, P^{t,µ,θ³}_{t̂}, θ³}_T⟩
  ≥ ∫_t^{t̂} ⟨L̄(·, θ³_s), P^{t,µ,θ³}_s⟩ ds + v*(t̂, P^{t,µ,θ³}_{t̂})
  ≥ inf_{θ ∈ L^∞([0,T],Θ)} [ ∫_t^{t̂} ⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ ds + v*(t̂, P^{t,µ,θ}_{t̂}) ].

Hence we deduce

v*(t, µ) ≥ inf_{θ ∈ L^∞([0,T],Θ)} [ ∫_t^{t̂} ⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ ds + v*(t̂, P^{t,µ,θ}_{t̂}) ].

Combining the inequalities in the two parts completes the proof.

4.2 Derivative and Chain Rule in Wasserstein Space

In classical finite-dimensional optimal control, the HJB equation can be formally derived from the dynamic programming principle by a Taylor expansion of the value function with respect to the state vector. However, in the current formulation, the state is now a probability measure. To derive the corresponding HJB equation in this setting, it is essential to define a notion of derivative of the value function with respect to a probability measure. The basic idea to achieve this is to take probability measures on R^{d+l} as laws of R^{d+l}-valued random variables on the probability space (Ω, F, P) and then use the corresponding Banach space of random variables to define derivatives. This approach is more extensively outlined in [36].

Concretely, let us take any function u: P₂(R^{d+l}) → R. We now lift it into its “extension” U, a function defined on L²(Ω, R^{d+l}) by

U(X) = u(P_X),  ∀ X ∈ L²(Ω, R^{d+l}).  (17)

We say u is C¹(P₂(R^{d+l})) if the lifted function U is Fréchet differentiable with continuous derivatives. Since we can identify L²(Ω, R^{d+l}) with its dual space, if the Fréchet derivative DU(X) exists, by Riesz' theorem one can view it as an element of L²(Ω, R^{d+l}):

DU(X)(Y) = E[DU(X) · Y],  ∀ Y ∈ L²(Ω, R^{d+l}).

The important result one can prove is that the law of DU(X) does not depend on X but only on the law of X. Accordingly we have the representation

DU(X) = ∂_µ u(P_X)(X),

for some function ∂_µ u(P_X): R^{d+l} → R^{d+l}, which is called the derivative of u at µ = P_X. Moreover, we know ∂_µ u(µ) is square integrable with respect to µ.

We next need a chain rule defined on P₂(R^{d+l}). Consider the dynamical system

W_t = ξ + ∫_0^t f̄(W_s) ds,  ξ ∈ L²(Ω, R^{d+l}),

and u ∈ C¹(P₂(R^{d+l})). Then, for all t ∈ [0,T], we have

u(P_{W_t}) = u(P_{W_0}) + ∫_0^t ⟨∂_µ u(P_{W_s})(·) · f̄(·), P_{W_s}⟩ ds,  (18)

or equivalently its lifted version

U(W_t) = U(W_0) + ∫_0^t E[DU(W_s) · f̄(W_s)] ds.  (19)
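The chain rule (18)-(19) can be sanity-checked by Monte Carlo for a function whose lifted derivative is known in closed form. For u(µ) = ∫ ‖w‖² µ(dw) one has U(X) = E‖X‖² and DU(X) = 2X, so ∂_µ u(µ)(w) = 2w; the sketch below (with an illustrative tanh drift) compares the two sides of (18) along a particle flow:

import numpy as np

rng = np.random.default_rng(5)
fbar = lambda w: np.tanh(w)                            # illustrative drift
u = lambda P: float(np.mean(np.sum(P ** 2, axis=1)))   # u(mu) = E ||W||^2 by MC

h, steps = 0.01, 200
P = rng.standard_normal((5000, 4))
rhs = u(P)
for _ in range(steps):
    # Right side of (18): <∂_mu u(.) · fbar(.), P_{W_s}> with ∂_mu u(mu)(w) = 2w.
    rhs += h * float(np.mean(np.sum(2.0 * P * fbar(P), axis=1)))
    P = P + h * fbar(P)                                # Euler step per particle
print(u(P), rhs)   # the two sides of (18) agree up to O(h) and MC error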
4.3 HJB equation in Wasserstein Space

Guided by the dynamic programming principle (15) and formula (18), we are ready to formally derive the associated HJB equation as follows. Let t̂ = t + δt with δt small. By performing a formal Taylor series expansion of (15), we have

0 = inf_{θ ∈ L^∞([0,T],Θ)} [ v*(t + δt, P^{t,µ,θ}_{t+δt}) − v*(t, µ) + ∫_t^{t+δt} ⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ ds ]
  ≈ inf_{θ ∈ L^∞([0,T],Θ)} [ ∂_t v(t, µ) δt + ∫_t^{t+δt} ⟨∂_µ v(t, µ)(·) · f̄(·, θ_s) + L̄(·, θ_s), µ⟩ ds ]
  ≈ δt inf_{θ ∈ L^∞([0,T],Θ)} [ ∂_t v(t, µ) + ⟨∂_µ v(t, µ)(·) · f̄(·, θ) + L̄(·, θ), µ⟩ ].

Passing to the limit δt → 0, we obtain the following HJB equation

∂v/∂t + inf_{θ ∈ Θ} ⟨∂_µ v(t, µ)(·) · f̄(·, θ) + L̄(·, θ), µ⟩ = 0  on [0,T) × P₂(R^{d+l}),
v(T, µ) = ⟨Φ̄(·), µ⟩  on P₂(R^{d+l}),  (20)

which the value function should satisfy. The rest of this section and the next section serve to establish the precise link between equation (20) and the value function (8). We now prove a verification result, which essentially says that if we have a smooth enough solution of the HJB equation (20), then this solution must be the value function. Moreover, the HJB equation allows us to identify the optimal control policy.

Proposition 3. Let v be a function in C^{1,1}([0,T] × P₂(R^{d+l})). If v is a solution to (20) and there exists θ†(t, µ), a mapping (t, µ) ↦ Θ attaining the infimum in (20), then v(t, µ) = v*(t, µ), and θ† is an optimal feedback control policy, i.e. θ = θ* is a solution of (3), where θ*_t := θ†(t, P_{w*_t}) with P_{w*_0} = µ_0 and dw*_t/dt = f̄(w*_t, θ*_t).

Proof. Given any control process θ, one can apply formula (18) between s = t and s = T, with explicit t dependence, and obtain

v(T, P^{t,µ,θ}_T) = v(t, µ) + ∫_t^T ∂v/∂t (s, P^{t,µ,θ}_s) + ⟨∂_µ v(s, P^{t,µ,θ}_s)(·) · f̄(·, θ_s), P^{t,µ,θ}_s⟩ ds.

Equivalently, we have

v(t, µ) = v(T, P^{t,µ,θ}_T) − ∫_t^T ∂v/∂t (s, P^{t,µ,θ}_s) + ⟨∂_µ v(s, P^{t,µ,θ}_s)(·) · f̄(·, θ_s), P^{t,µ,θ}_s⟩ ds
  ≤ v(T, P^{t,µ,θ}_T) + ∫_t^T ⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ ds
  = ⟨Φ̄(·), P^{t,µ,θ}_T⟩ + ∫_t^T ⟨L̄(·, θ_s), P^{t,µ,θ}_s⟩ ds
  = J(t, µ, θ),

where the first inequality comes from the infimum condition in (20). Since the control process is arbitrary, we have

v(t, µ) ≤ v*(t, µ).  (21)

Replacing the arbitrary control process with θ*, where θ*_s = θ†(s, P^{t,µ,θ*}_s) is given by the optimal feedback control, and repeating the above argument, noting that the inequality becomes an equality since the infimum is attained, we have

v(t, µ) = J(t, µ, θ*) ≥ v*(t, µ).  (22)

Therefore we obtain v(t, µ) = v*(t, µ), and θ† defines an optimal feedback control policy.

Prop. 3 is an important statement that links smooth solutions of the HJB equation with solutions of the mean-field optimal control problem, and hence the population risk minimization problem in deep learning. Furthermore, by taking the infimum in (20), it allows us to identify an optimal control policy θ†: [0,T] × P₂(R^{d+l}) → Θ. This is in general a stronger characterization of the solution of the learning problem. In particular, it is of feedback, or closed-loop, form. On the other hand, an open-loop solution can be obtained from the closed-loop control policy by sequentially setting θ*_t = θ†(t, P_{w*_t}), where w*_t is the solution of the feed-forward ODE with θ = θ* up to time t. Note that in usual deep learning, open-loop type solutions are obtained during training and used in inference: during inference the trained weights are fixed and do not depend on the distribution of the inputs encountered. In contrast, controls obtained from closed-loop control policies are actively adjusted according to the distribution encountered. In this sense, the ability to generate an optimal control policy in the form of state-based feedback is an important feature of the dynamic programming approach.
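To make the feedback picture concrete, the sketch below reads off a closed-loop control by minimizing the integrand of (20) over a finite grid of controls, with µ represented by particles. The derivative ∂_µ v is not actually available here; the quadratic stand-in, the scalar control, and the grid are all hypothetical, and the point is only the shape of the computation:

import numpy as np

def feedback_theta(P, dmu_v, theta_grid, fbar, Lbar):
    # theta†(t, mu): minimize <∂_mu v(t, mu)(.) · fbar(., theta) + Lbar(., theta), mu>
    # over a finite control grid, with mu represented by the particle rows of P.
    scores = [np.mean(np.sum(dmu_v(P) * fbar(P, th), axis=1) + Lbar(P, th))
              for th in theta_grid]
    return theta_grid[int(np.argmin(scores))]

rng = np.random.default_rng(6)
dmu_v = lambda P: 2.0 * P                        # stand-in for a solved ∂_mu v
fbar = lambda P, th: th * np.tanh(P)             # illustrative scalar control
Lbar = lambda P, th: 0.01 * th ** 2 * np.ones(len(P))
grid = np.linspace(-1.0, 1.0, 21)

P, h = rng.standard_normal((1000, 2)), 0.05
for _ in range(40):                              # closed-loop rollout
    th = feedback_theta(P, dmu_v, grid, fbar, Lbar)
    P = P + h * fbar(P, th)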
However, we should note that there is a price to pay for obtaining such a feedback control: the HJB equation is in general difficult to solve numerically. We shall return to this point at the end of Sec. 5.

The limitation of Prop. 3 is that it assumes the value function $v^*(t,\mu)$ to be continuously differentiable, which is often not the case. In order to formulate a complete characterization, we would also like to deduce the statement in the other direction: a solution to (3) should also solve the PDE (20) in an appropriate sense. In the next section, we achieve this by giving a more flexible characterization of the value function as the viscosity solution of the HJB equation.

5 Viscosity solution of HJB equation

5.1 The concept of viscosity solutions

In general, one cannot expect to have smooth solutions to the HJB equation (20). Therefore we need to extend the classical concept of PDE solutions to a type of weak solutions. As in the analysis of classical Hamilton–Jacobi equations, we shall introduce a notion of viscosity solution for the HJB equation in the Wasserstein space of probability measures. The key idea is again the lifting identification between measures and random variables: we work in the Hilbert space $L^2(\Omega,\mathbb R^{d+l})$ instead of the Wasserstein space $\mathcal P_2(\mathbb R^{d+l})$, and can then use the tools developed for viscosity solutions in Hilbert spaces. The techniques presented below have been employed in the study of well-posedness for general Hamilton–Jacobi equations in Banach spaces; see e.g. [37,38,39].

For convenience, we define the Hamiltonian $H(\xi,P):L^2(\Omega,\mathbb R^{d+l})\times L^2(\Omega,\mathbb R^{d+l})\to\mathbb R$ as
\[ H(\xi,P):=\inf_{\theta\in\Theta}\mathbb E[P\cdot\bar f(\xi,\theta)+\bar L(\xi,\theta)]. \tag{23} \]
Then the "lifted" Bellman equation of (20), with $V(t,\xi)=v(t,\mathbb P_\xi)$, can be written down as follows, except that the state space is enlarged to $L^2(\Omega,\mathbb R^{d+l})$:
\[
\begin{cases}
\dfrac{\partial V}{\partial t}+H(\xi,DV(t,\xi))=0, &\text{on }[0,T)\times L^2(\Omega,\mathbb R^{d+l}),\\[2pt]
V(T,\xi)=\mathbb E[\bar\Phi(\xi)], &\text{on }L^2(\Omega,\mathbb R^{d+l}).
\end{cases}\tag{24}
\]

Definition 1. We say that a bounded, uniformly continuous function $u:[0,T]\times\mathcal P_2(\mathbb R^{d+l})\to\mathbb R$ is a viscosity (sub-, super-) solution to (20) if the lifted function $U:[0,T]\times L^2(\Omega,\mathbb R^{d+l})\to\mathbb R$ defined by $U(t,\xi)=u(t,\mathbb P_\xi)$ is a viscosity (sub-, super-) solution to the lifted Bellman equation (24), that is:

(i) $U(T,\xi)\le\mathbb E[\bar\Phi(\xi)]$, and for any test function $\psi\in C^{1,1}([0,T]\times L^2(\Omega,\mathbb R^{d+l}))$ such that the map $U-\psi$ has a local maximum at $(t_0,\xi_0)\in[0,T)\times L^2(\Omega,\mathbb R^{d+l})$, one has
\[ \partial_t\psi(t_0,\xi_0)+H(\xi_0,D\psi(t_0,\xi_0))\ge0. \tag{25} \]
(ii) $U(T,\xi)\ge\mathbb E[\bar\Phi(\xi)]$, and for any test function $\psi\in C^{1,1}([0,T]\times L^2(\Omega,\mathbb R^{d+l}))$ such that the map $U-\psi$ has a local minimum at $(t_0,\xi_0)\in[0,T)\times L^2(\Omega,\mathbb R^{d+l})$, one has
\[ \partial_t\psi(t_0,\xi_0)+H(\xi_0,D\psi(t_0,\xi_0))\le0. \tag{26} \]

5.2 Existence and uniqueness of viscosity solution

The main purpose of introducing the concept of viscosity solutions is that, in the viscosity sense, the HJB equation is well-posed and the value function is the unique solution of the HJB equation. We show this in Thms. 1 and 2.

Theorem 1. The value function $v^*(t,\mu)$ defined in (8) is a viscosity solution to the HJB equation (20).

Before proving Thm. 1, we first introduce a useful lemma regarding the continuity of $H(\xi,P)$.
Lemma 1. The Hamiltonian $H(\xi,P)$ defined in (23) satisfies the following continuity conditions:
\[ |H(\xi,P)-H(\xi,Q)|\le K_L\|P-Q\|_{L^2}, \tag{27} \]
\[ |H(\xi,P)-H(\zeta,P)|\le K_L(1+\|P\|_{L^2})\|\xi-\zeta\|_{L^2}. \tag{28} \]

Proof. For simplicity we define
\[ \hat H(\xi,P;\theta):=\mathbb E[P\cdot\bar f(\xi,\theta)+\bar L(\xi,\theta)]. \]
The boundedness and Lipschitz continuity of $\bar f$ and $\bar L$ give us
\[ |\hat H(\xi,P;\theta)-\hat H(\xi,Q;\theta)|\le K_L\|P-Q\|_{L^2}, \tag{29} \]
\[ |\hat H(\xi,P;\theta)-\hat H(\zeta,P;\theta)|\le K_L(1+\|P\|_{L^2})\|\xi-\zeta\|_{L^2}. \tag{30} \]
By definition we know $H(\xi,P)=\inf_{\theta\in\Theta}\hat H(\xi,P;\theta)$. Let $\theta_n$ nearly attain the infimum for the pair $(\xi,Q)$, i.e.
\[ \hat H(\xi,Q;\theta_n)-H(\xi,Q)\le1/n. \]
Then
\[
\begin{aligned}
H(\xi,P)-H(\xi,Q)&=\big(H(\xi,P)-\hat H(\xi,P;\theta_n)\big)+\big(\hat H(\xi,P;\theta_n)-\hat H(\xi,Q;\theta_n)\big)+\big(\hat H(\xi,Q;\theta_n)-H(\xi,Q)\big)\\
&\le|\hat H(\xi,P;\theta_n)-\hat H(\xi,Q;\theta_n)|+1/n\\
&\le K_L\|P-Q\|_{L^2}+1/n.
\end{aligned}
\]
Taking $n\to\infty$, we have $H(\xi,P)-H(\xi,Q)\le K_L\|P-Q\|_{L^2}$. A similar computation shows $H(\xi,Q)-H(\xi,P)\le K_L\|P-Q\|_{L^2}$, and we prove (27). (28) can be proved in a similar way, based on condition (30).

Proof of Thm. 1. We lift the value function $v^*(t,\mu)$ to $[0,T]\times L^2(\Omega,\mathbb R^{d+l})$ and denote it by $V^*(t,\xi)$. Note that convergence $\xi_n\to\xi$ in $L^2(\Omega,\mathbb R^{d+l})$ implies convergence $\mathbb P_{\xi_n}\to\mathbb P_\xi$ in $\mathcal P_2(\mathbb R^{d+l})$; thus Prop. 1 guarantees that $V^*(t,\xi)$ is continuous on $[0,T]\times L^2(\Omega,\mathbb R^{d+l})$. By definition, $V^*(t,\xi)$ is bounded and $V^*(T,\xi)=\mathbb E[\bar\Phi(\xi)]$. It remains to show the viscosity sub- and supersolution properties of $V^*(t,\xi)$. To proceed, we note that $V^*(t,\xi)$ also inherits the dynamic programming principle from $v^*(t,\mu)$ (cf. Prop. 2), which can be represented as
\[ V^*(t,\xi)=\inf_{\theta\in L^\infty([0,T],\Theta)}\Big[\int_t^{\hat t}\mathbb E[\bar L(W^{t,\xi,\theta}_s,\theta_s)]\,ds+V^*(\hat t,W^{t,\xi,\theta}_{\hat t})\Big]. \tag{31} \]

1. Subsolution property. Suppose $\psi$ is a test function in $C^{1,1}([0,T]\times L^2(\Omega,\mathbb R^{d+l}))$ and $V^*-\psi$ has a local maximum at $(t_0,\xi_0)\in[0,T)\times L^2(\Omega,\mathbb R^{d+l})$, which means
\[ (V^*-\psi)(t,\xi)\le(V^*-\psi)(t_0,\xi_0)\quad\text{for all }(t,\xi)\text{ satisfying }|t-t_0|+\|\xi-\xi_0\|_{L^2}<\delta. \]
Let $\theta_0$ be an arbitrary element of $\Theta$ and define a control process $\theta^0\in L^\infty([0,T],\Theta)$ such that $\theta^0_s\equiv\theta_0$, $s\in[t_0,T]$. Let $h\in(0,T-t_0)$ be small enough that $|s-t_0|+\|W^{t_0,\xi_0,\theta^0}_s-\xi_0\|_{L^2}<\delta$ for all $s\in[t_0,t_0+h]$; this is possible by an argument similar to the one in the proof of Prop. 1. From the dynamic programming principle (31), we have
\[ V^*(t_0,\xi_0)\le\int_{t_0}^{t_0+h}\mathbb E[\bar L(W^{t_0,\xi_0,\theta^0}_s,\theta^0_s)]\,ds+V^*(t_0+h,W^{t_0,\xi_0,\theta^0}_{t_0+h}). \]
Using the local maximality condition and the chain rule (19), we have the inequality
\[
\begin{aligned}
0&\le V^*(t_0+h,W^{t_0,\xi_0,\theta^0}_{t_0+h})-V^*(t_0,\xi_0)+\int_{t_0}^{t_0+h}\mathbb E[\bar L(W^{t_0,\xi_0,\theta^0}_s,\theta^0_s)]\,ds\\
&\le\psi(t_0+h,W^{t_0,\xi_0,\theta^0}_{t_0+h})-\psi(t_0,\xi_0)+\int_{t_0}^{t_0+h}\mathbb E[\bar L(W^{t_0,\xi_0,\theta^0}_s,\theta^0_s)]\,ds\\
&=\int_{t_0}^{t_0+h}\partial_t\psi(s,W^{t_0,\xi_0,\theta^0}_s)+\mathbb E[D\psi(s,W^{t_0,\xi_0,\theta^0}_s)\cdot\bar f(W^{t_0,\xi_0,\theta^0}_s,\theta^0_s)]\,ds
+\int_{t_0}^{t_0+h}\mathbb E[\bar L(W^{t_0,\xi_0,\theta^0}_s,\theta^0_s)]\,ds.
\end{aligned}\tag{32}
\]
Since $W^{t_0,\xi_0,\theta^0}_s$ is continuous in time, in the sense of the $L^2$-metric of $L^2(\Omega,\mathbb R^{d+l})$, the quantity
\[ \partial_t\psi(s,W^{t_0,\xi_0,\theta^0}_s)+\mathbb E[D\psi(s,W^{t_0,\xi_0,\theta^0}_s)\cdot\bar f(W^{t_0,\xi_0,\theta^0}_s,\theta^0_s)+\bar L(W^{t_0,\xi_0,\theta^0}_s,\theta^0_s)] \]
is also continuous in time. Dividing the inequality (32) by $h$ and taking the limit $h\to0$, we obtain
\[ 0\le\partial_t\psi(t_0,\xi_0)+\mathbb E[D\psi(t_0,\xi_0)\cdot\bar f(\xi_0,\theta_0)+\bar L(\xi_0,\theta_0)]. \]
Since $\theta_0$ is arbitrary in $\Theta$, taking the infimum over $\theta_0$ yields the desired subsolution property (25).

2. Supersolution property.
Suppose $\psi$ is a test function in $C^{1,1}([0,T]\times L^2(\Omega,\mathbb R^{d+l}))$ and $V^*-\psi$ has a local minimum at $(t_0,\xi_0)\in[0,T)\times L^2(\Omega,\mathbb R^{d+l})$, which means
\[ (V^*-\psi)(t,\xi)\ge(V^*-\psi)(t_0,\xi_0)\quad\text{for all }(t,\xi)\text{ satisfying }|t-t_0|+\|\xi-\xi_0\|_{L^2}<\delta_1. \]
Given an arbitrary $\varepsilon>0$, since Lemma 1 tells us that $H$ is continuous, there exists $\delta_2>0$ such that
\[ \big|\partial_t\psi(t,\xi)+H(\xi,D\psi(t,\xi))-\partial_t\psi(t_0,\xi_0)-H(\xi_0,D\psi(t_0,\xi_0))\big|<\varepsilon \]
for all $(t,\xi)$ satisfying $|t-t_0|+\|\xi-\xi_0\|_{L^2}<\delta_2$. Again, as argued in the proof of Prop. 1, we can choose $h\in(0,T-t_0)$ small enough that $|s-t_0|+\|W^{t_0,\xi_0,\theta}_s-\xi_0\|_{L^2}<\min\{\delta_1,\delta_2\}$ for all $s\in[t_0,t_0+h]$ and all $\theta\in L^\infty([0,T],\Theta)$. From the dynamic programming principle (31), there exists $\theta^h$ such that
\[ V^*(t_0,\xi_0)+\varepsilon h\ge\int_{t_0}^{t_0+h}\mathbb E[\bar L(W^{t_0,\xi_0,\theta^h}_s,\theta^h_s)]\,ds+V^*(t_0+h,W^{t_0,\xi_0,\theta^h}_{t_0+h}). \]
Again using the local minimality condition, the chain rule (19), and the definition of $H$, we have the inequality
\[
\begin{aligned}
\varepsilon h&\ge V^*(t_0+h,W^{t_0,\xi_0,\theta^h}_{t_0+h})-V^*(t_0,\xi_0)+\int_{t_0}^{t_0+h}\mathbb E[\bar L(W^{t_0,\xi_0,\theta^h}_s,\theta^h_s)]\,ds\\
&\ge\psi(t_0+h,W^{t_0,\xi_0,\theta^h}_{t_0+h})-\psi(t_0,\xi_0)+\int_{t_0}^{t_0+h}\mathbb E[\bar L(W^{t_0,\xi_0,\theta^h}_s,\theta^h_s)]\,ds\\
&=\int_{t_0}^{t_0+h}\partial_t\psi(s,W^{t_0,\xi_0,\theta^h}_s)+\mathbb E[D\psi(s,W^{t_0,\xi_0,\theta^h}_s)\cdot\bar f(W^{t_0,\xi_0,\theta^h}_s,\theta^h_s)]\,ds
+\int_{t_0}^{t_0+h}\mathbb E[\bar L(W^{t_0,\xi_0,\theta^h}_s,\theta^h_s)]\,ds\\
&\ge\int_{t_0}^{t_0+h}\partial_t\psi(s,W^{t_0,\xi_0,\theta^h}_s)+H\big(W^{t_0,\xi_0,\theta^h}_s,D\psi(s,W^{t_0,\xi_0,\theta^h}_s)\big)\,ds\\
&\ge h\big(\partial_t\psi(t_0,\xi_0)+H(\xi_0,D\psi(t_0,\xi_0))-\varepsilon\big).
\end{aligned}\tag{33}
\]
Dividing the inequality (33) by $h$ and then taking the limit $\varepsilon\to0$, we obtain the desired supersolution property (26).

Theorem 1 incidentally establishes the existence of viscosity solutions to the HJB equation, which we can identify as the value function of the mean-field control problem. We show below that this solution is in fact unique.

Theorem 2. Let $u_1$ and $u_2$ be two functions defined on $[0,T]\times\mathcal P_2(\mathbb R^{d+l})$ such that $u_1$ and $u_2$ are a viscosity subsolution and supersolution to (20), respectively. Then $u_1\le u_2$. Consequently, the value function $v^*(t,\mu)$ defined in (8) is the unique viscosity solution to the HJB equation (20).

Proof. The final assertion of the theorem follows immediately from Thm. 1. As before, we consider the lifted versions $U_1(t,\xi)=u_1(t,\mathbb P_\xi)$, $U_2(t,\xi)=u_2(t,\mathbb P_\xi)$ on $[0,T]\times L^2(\Omega,\mathbb R^{d+l})$. By definition, $U_1$ and $U_2$ are a subsolution and supersolution to (24), respectively, and both are bounded and uniformly continuous. We denote their moduli of continuity by $\omega_1,\omega_2$, which satisfy
\[ |U_i(t,\xi)-U_i(s,\zeta)|\le\omega_i(|t-s|+\|\xi-\zeta\|_{L^2}),\qquad i=1,2, \]
for all $0\le t\le s\le T$, $\xi,\zeta\in L^2(\Omega,\mathbb R^{d+l})$, with $\omega_i(r)\to0$ as $r\to0^+$. To prove $U_1\le U_2$, we assume
\[ \delta:=\sup_{[0,T]\times L^2(\Omega,\mathbb R^{d+l})}U_1(t,\xi)-U_2(t,\xi)>0, \tag{34} \]
and proceed in five steps below to derive a contradiction.

1). Let $\sigma,\varepsilon\in(0,1)$ and construct the auxiliary function
\[ G(t,s,\xi,\zeta)=U_1(t,\xi)-U_2(s,\zeta)+\sigma(t+s)-\varepsilon\big(\|\xi\|^2_{L^2}+\|\zeta\|^2_{L^2}\big)-\frac1{\varepsilon^2}\big((t-s)^2+\|\xi-\zeta\|^2_{L^2}\big) \tag{35} \]
for $t,s\in[0,T]$, $\xi,\zeta\in L^2(\Omega,\mathbb R^{d+l})$. By Stegall's theorem [40] there exist $\eta_t,\eta_s\in\mathbb R$ and $\eta_\xi,\eta_\zeta\in L^2(\Omega,\mathbb R^{d+l})$ with $|\eta_t|,|\eta_s|,\|\eta_\xi\|_{L^2},\|\eta_\zeta\|_{L^2}\le\varepsilon$ such that the function with linear perturbation
\[ \tilde G(t,s,\xi,\zeta):=G(t,s,\xi,\zeta)-\eta_t t-\eta_s s-\mathbb E[\eta_\xi\cdot\xi]-\mathbb E[\eta_\zeta\cdot\zeta] \tag{36} \]
has a maximum over $[0,T]\times[0,T]\times L^2(\Omega,\mathbb R^{d+l})\times L^2(\Omega,\mathbb R^{d+l})$ at $(t_0,s_0,\xi_0,\zeta_0)$.

2). Since $\tilde G(0,0,0,0)\le\tilde G(t_0,s_0,\xi_0,\zeta_0)$ and $U_1,U_2$ are bounded, after a rearrangement of terms we have
\[
\begin{aligned}
\varepsilon\big(\|\xi_0\|^2_{L^2}+\|\zeta_0\|^2_{L^2}\big)
&\le C+\sigma(t_0+s_0)-\frac1{\varepsilon^2}\big((t_0-s_0)^2+\|\xi_0-\zeta_0\|^2_{L^2}\big)-\eta_t t_0-\eta_s s_0-\mathbb E[\eta_\xi\cdot\xi_0]-\mathbb E[\eta_\zeta\cdot\zeta_0]\\
&\le C-\mathbb E[\eta_\xi\cdot\xi_0]-\mathbb E[\eta_\zeta\cdot\zeta_0]\\
&\le C+\sqrt2\,\varepsilon\big(\|\xi_0\|^2_{L^2}+\|\zeta_0\|^2_{L^2}\big)^{1/2}.
\end{aligned}\tag{37}
\]
Here and in the following, $C$ denotes a generic positive constant whose value may change from line to line but is always independent of $\varepsilon$ and $\sigma$. Solving the quadratic inequality above, we get
\[ \big(\|\xi_0\|^2_{L^2}+\|\zeta_0\|^2_{L^2}\big)^{1/2}\le C(1+\varepsilon^{-1/2}). \tag{38} \]
Now, arguing in the same way as in (37) and further combining with (37), we have
\[ \frac1{\varepsilon^2}\big((t_0-s_0)^2+\|\xi_0-\zeta_0\|^2_{L^2}\big)\le C-\mathbb E[\eta_\xi\cdot\xi_0]-\mathbb E[\eta_\zeta\cdot\zeta_0]\le C+\sqrt2\,\varepsilon\big(\|\xi_0\|^2_{L^2}+\|\zeta_0\|^2_{L^2}\big)^{1/2}\le C, \]
or equivalently
\[ |t_0-s_0|+\|\xi_0-\zeta_0\|_{L^2}\le C\varepsilon. \tag{39} \]

3). Eq. (39) allows us to further sharpen the estimate of $(t-s)^2+\|\xi-\zeta\|^2_{L^2}$. Specifically, since $\tilde G(t_0,t_0,\xi_0,\xi_0)\le\tilde G(t_0,s_0,\xi_0,\zeta_0)$, we have
\[ \eta_s(s_0-t_0)+\mathbb E[\eta_\zeta\cdot(\zeta_0-\xi_0)]\le U_2(t_0,\xi_0)-U_2(s_0,\zeta_0)+\sigma(s_0-t_0)+\varepsilon\big(\|\xi_0\|^2_{L^2}-\|\zeta_0\|^2_{L^2}\big)-\frac1{\varepsilon^2}\big((t_0-s_0)^2+\|\xi_0-\zeta_0\|^2_{L^2}\big). \]
Rearranging the above inequality and using the estimates (38), (39) and the uniform continuity of $U_2$, we obtain
\[
\begin{aligned}
\frac1{\varepsilon^2}\big((t_0-s_0)^2+\|\xi_0-\zeta_0\|^2_{L^2}\big)
&\le\omega_2(|t_0-s_0|+\|\xi_0-\zeta_0\|_{L^2})+C(|t_0-s_0|+\|\xi_0-\zeta_0\|_{L^2})+\varepsilon\|\xi_0+\zeta_0\|_{L^2}\|\xi_0-\zeta_0\|_{L^2}\\
&\le\omega_2(|t_0-s_0|+\|\xi_0-\zeta_0\|_{L^2})+C(|t_0-s_0|+\|\xi_0-\zeta_0\|_{L^2})\\
&\le\omega_2(C\varepsilon)+C\varepsilon.
\end{aligned}
\]
By the property of the modulus of continuity, we conclude
\[ |t_0-s_0|+\|\xi_0-\zeta_0\|_{L^2}=o(\varepsilon). \tag{40} \]

4). From the definitions of $\tilde G$ and $\delta$, we can choose $\varepsilon$ so small that
\[ \sup_{[0,T]\times L^2(\Omega,\mathbb R^{d+l})}\tilde G(t,t,\xi,\xi)\ge\frac\delta2. \]
Using the estimates (38), (40), we can furthermore choose $\sigma,\varepsilon$ small enough that
\[ U_1(t_0,\xi_0)-U_2(s_0,\zeta_0)\ge\tilde G(t_0,s_0,\xi_0,\zeta_0)-C\sigma-C\varepsilon\ge\sup_{[0,T]\times L^2(\Omega,\mathbb R^{d+l})}\tilde G(t,t,\xi,\xi)-\frac\delta4\ge\frac\delta4. \]
Noting the terminal condition $U_1(T,\xi)\le U_2(T,\xi)$, we are ready to estimate $|T-t_0|$ through
\[
\begin{aligned}
\frac\delta4&\le U_1(t_0,\xi_0)-U_2(s_0,\zeta_0)\\
&\le U_1(t_0,\xi_0)-U_1(T,\xi_0)+U_1(T,\xi_0)-U_2(T,\xi_0)+U_2(T,\xi_0)-U_2(t_0,\xi_0)+U_2(t_0,\xi_0)-U_2(s_0,\zeta_0)\\
&\le\omega_1(|T-t_0|)+\omega_2(|T-t_0|)+\omega_2(|t_0-s_0|+\|\xi_0-\zeta_0\|_{L^2})\\
&=\omega_1(|T-t_0|)+\omega_2(|T-t_0|)+\omega_2(o(\varepsilon)).
\end{aligned}
\]
Therefore, when $\varepsilon$ is small enough, we have
\[ \omega_1(|T-t_0|)+\omega_2(|T-t_0|)\ge\frac\delta8, \]
which implies $|T-t_0|\ge\lambda>0$ for some positive constant $\lambda$, provided $\sigma,\varepsilon$ are small enough. The same argument also gives $|T-s_0|\ge\lambda>0$.

5). The finite distances between $t_0,s_0$ and $T$ finally allow us to employ the viscosity property. The map $(t,\xi)\mapsto\tilde G(t,s_0,\xi,\zeta_0)$ has a maximum at $(t_0,\xi_0)$, i.e. $U_1-\psi$ has a maximum at $(t_0,\xi_0)$ for
\[ \psi(t,\xi):=U_2(s_0,\zeta_0)-\sigma(t+s_0)+\varepsilon\big(\|\xi\|^2_{L^2}+\|\zeta_0\|^2_{L^2}\big)+\frac1{\varepsilon^2}\big((t-s_0)^2+\|\xi-\zeta_0\|^2_{L^2}\big)+\eta_t t+\eta_s s_0+\mathbb E[\eta_\xi\cdot\xi]+\mathbb E[\eta_\zeta\cdot\zeta_0]. \]
Since $U_1$ is a viscosity subsolution, using the subsolution property (25) we have
\[ -\sigma+\frac{2(t_0-s_0)}{\varepsilon^2}+\eta_t+H\Big(\xi_0,\ 2\varepsilon\xi_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}+\eta_\xi\Big)\ge0. \tag{41} \]
In the same way, the map $(s,\zeta)\mapsto-\tilde G(t_0,s,\xi_0,\zeta)$ has a minimum at $(s_0,\zeta_0)$, i.e. $U_2-\psi$ has a minimum at $(s_0,\zeta_0)$ for
\[ \psi(s,\zeta):=U_1(t_0,\xi_0)+\sigma(t_0+s)-\varepsilon\big(\|\xi_0\|^2_{L^2}+\|\zeta\|^2_{L^2}\big)-\frac1{\varepsilon^2}\big((t_0-s)^2+\|\xi_0-\zeta\|^2_{L^2}\big)-\eta_t t_0-\eta_s s-\mathbb E[\eta_\xi\cdot\xi_0]-\mathbb E[\eta_\zeta\cdot\zeta]. \]
Since $U_2$ is a viscosity supersolution, using the supersolution property (26) we have
\[ \sigma+\frac{2(t_0-s_0)}{\varepsilon^2}-\eta_s+H\Big(\zeta_0,\ -2\varepsilon\zeta_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}-\eta_\zeta\Big)\le0. \tag{42} \]
Computing the difference of the two inequalities (41), (42) gives
\[ -2\sigma+\eta_t+\eta_s+H\Big(\xi_0,2\varepsilon\xi_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}+\eta_\xi\Big)-H\Big(\zeta_0,-2\varepsilon\zeta_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}-\eta_\zeta\Big)\ge0. \]
Using the estimates (38), (40) and Lemma 1, we have
\[
\begin{aligned}
2\sigma&\le\eta_t+\eta_s+H\Big(\zeta_0,-2\varepsilon\zeta_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}-\eta_\zeta\Big)-H\Big(\xi_0,2\varepsilon\xi_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}+\eta_\xi\Big)\\
&\le2\varepsilon+\Big|H\Big(\zeta_0,-2\varepsilon\zeta_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}-\eta_\zeta\Big)-H\Big(\zeta_0,2\varepsilon\xi_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}+\eta_\xi\Big)\Big|\\
&\qquad+\Big|H\Big(\zeta_0,2\varepsilon\xi_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}+\eta_\xi\Big)-H\Big(\xi_0,2\varepsilon\xi_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}+\eta_\xi\Big)\Big|\\
&\le2\varepsilon+K_L\|2\varepsilon\xi_0+2\varepsilon\zeta_0+\eta_\xi+\eta_\zeta\|_{L^2}+K_L\Big(1+\Big\|2\varepsilon\xi_0+\frac{2(\xi_0-\zeta_0)}{\varepsilon^2}+\eta_\xi\Big\|_{L^2}\Big)\|\xi_0-\zeta_0\|_{L^2}\\
&\le o(1)\quad(\varepsilon\to0^+).
\end{aligned}
\]
Taking the limit therefore yields the contradiction $0<\sigma\le0$, which completes the proof.

Theorems 1 and 2 establish the well-posedness, in the viscosity sense, of the HJB equation and identify the value function of the mean-field optimal control problem as the unique solution of the HJB equation. Moreover, the HJB equation provides us (through solving the infimum in (20) after solving for the value function) with an optimal control policy, from which we can synthesize an optimal control as the solution of our learning problem. In this sense, the HJB equation gives us a necessary and sufficient condition for optimality of the learning problem (3). This demonstrates an essential observation from the mean-field optimal control viewpoint of deep learning: the population risk minimization problem of deep learning can be viewed as a variational problem, whose solution can be characterized by a suitably defined Hamilton–Jacobi–Bellman equation. This very much parallels classical calculus of variations.

It is worth noting that the HJB equation is a global characterization of the value function, in the sense that it must in principle be solved over the entire space $\mathcal P_2(\mathbb R^{d+l})$ of input-target distributions. Of course, we would not expect this to be feasible in practice for any non-trivial machine learning problem. However, if we can solve it locally around some trajectories generated by the initial condition $\mu_0\in\mathcal P_2(\mathbb R^{d+l})$, then we would expect the obtained feedback control policy to apply to nearby input-label distributions as well. This may give a principled way to perform transfer or one-shot learning [41,42,43].

Finally, observe that if the infimum defining the Hamiltonian in (23) is attained by a unique minimizer $\theta^*\in\Theta$ for any $\xi\in L^2(\Omega,\mathbb R^{d+l})$ and $P\in L^2(\Omega,\mathbb R^{d+l})$, then the uniqueness of the value function immediately implies the uniqueness of the open-loop optimal control, which is sometimes a desired property of the population risk minimization problem. The following example gives such an instance.

Example 1. Consider a specific type of residual network, where $f(x,\theta)=\theta\sigma(x)$ and $L(x,\theta)\propto\|\theta\|^2$. Here $\theta\in\mathbb R^{d\times d}$ is a matrix and $\sigma$ is a smooth and bounded non-linearity, e.g. tanh or sigmoid. This is similar to conventional residual neural networks except that the order of the affine transformation and the non-linearity is swapped. In this case, the Hamiltonian defined in (23) admits a unique minimizer $\theta^*$ for any $\xi\in L^2(\Omega,\mathbb R^{d+l})$ and $P\in L^2(\Omega,\mathbb R^{d+l})$.
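To see why the minimizer in Example 1 is unique, write $L(x,\theta)=\lambda\|\theta\|_F^2$ with $\lambda>0$: the map $\theta\mapsto\mathbb E[P_x\cdot\theta\sigma(x)]+\lambda\|\theta\|_F^2$ is a strictly convex quadratic in the entries of $\theta$, whose unique stationary point is $\theta^*=-\tfrac1{2\lambda}\mathbb E[P_x\,\sigma(x)^{\mathsf T}]$, where $P_x$ denotes the first $d$ components of $P$. The following sketch (ours; the function name and sample-based setup are illustrative assumptions) estimates this minimizer from samples:

```python
import numpy as np

def hamiltonian_minimizer(P, X, lam):
    """Unique minimizer of theta -> E[P . (theta sigma(x))] + lam ||theta||_F^2
    for the layer f(x, theta) = theta sigma(x) of Example 1, estimated from
    samples.  Setting the gradient E[P sigma(x)^T] + 2 lam theta to zero gives
        theta* = -E[P sigma(x)^T] / (2 lam).

    P, X : (N, d) arrays of samples of the costate and state.
    """
    sigma = np.tanh                               # a bounded nonlinearity, as in Example 1
    outer = P[:, :, None] * sigma(X)[:, None, :]  # per-sample outer product P sigma(x)^T
    return -outer.mean(axis=0) / (2.0 * lam)
```

Strict convexity in $\theta$ guarantees that this stationary point is the unique minimizer, which is exactly the property invoked in the text.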
6 Mean-field Pontryagin's maximum principle

As discussed in the earlier sections, the HJB equation provides us with a complete characterization of the optimality conditions for the population risk minimization problem (3). However, it has the disadvantage of being global in $\mathcal P_2(\mathbb R^{d+l})$ (or, in its lifted version, in $L^2(\Omega,\mathbb R^{d+l})$), and hence difficult to handle in practice. The natural question is whether we can have a local characterization of optimality, local meaning that the optimality condition need not depend on the whole space of input-label distributions. In this section, we provide such a characterization by proving a mean-field version of the celebrated Pontryagin maximum principle (PMP) [44]. Although seemingly disparate at first, we will discuss in Subsec. 6.1 that the maximum principle approach is intimately connected with the dynamic programming approach introduced earlier.

In classical optimal control, such a local characterization is given in the form of the Pontryagin maximum principle, where forward and backward Hamiltonian dynamics are coupled through a maximization condition. In the present formulation, a common control parameter is shared by all input-target pair values $(x_0,y_0)$ that can occur under the distribution $\mu_0$. Thus, one expects that a maximum principle should hold in the average sense. Let us state and prove such a maximum principle below. We modify the assumptions (A1), (A2) to:

(A1′) The function $f$ is bounded; $f,L$ are continuous in $\theta$; and $f,L,\Phi$ are continuously differentiable with respect to $x$.
(A2′) The distribution $\mu_0$ has bounded support in $\mathbb R^d\times\mathbb R^l$, i.e. there exists $M>0$ such that $\mu_0(\{(x,y)\in\mathbb R^d\times\mathbb R^l:\|x\|+\|y\|\le M\})=1$.

Theorem 3 (Mean-field PMP). Let (A1′), (A2′) be satisfied and let $\theta^*\in L^\infty([0,T],\Theta)$ be a solution of (3) in the sense that $J(\theta^*)$ attains the infimum. Then there exist absolutely continuous stochastic processes $x^*,p^*$ such that
\[ \dot x^*_t=f(x^*_t,\theta^*_t),\qquad x^*_0=x_0, \tag{43} \]
\[ \dot p^*_t=-\nabla_xH(x^*_t,p^*_t,\theta^*_t),\qquad p^*_T=-\nabla_x\Phi(x^*_T,y_0), \tag{44} \]
\[ \mathbb E_{\mu_0}H(x^*_t,p^*_t,\theta^*_t)\ge\mathbb E_{\mu_0}H(x^*_t,p^*_t,\theta),\qquad\forall\theta\in\Theta,\ \text{a.e. }t\in[0,T], \tag{45} \]
where the Hamiltonian function $H:\mathbb R^d\times\mathbb R^d\times\Theta\to\mathbb R$ is given by
\[ H(x,p,\theta)=p\cdot f(x,\theta)-L(x,\theta). \tag{46} \]

Proof. To simplify the proof, we first make a substitution by introducing a new coordinate $x^0$ satisfying the dynamics $\dot x^0_t=L(x_t,\theta_t)$ with $x^0_0=0$. Then it is clear that the PMP above can be transformed into one without running loss by redefining
\[ x\to(x^0,x),\qquad f\to(L,f),\qquad \Phi(x_T,y_0)\to\Phi(x_T,y_0)+x^0_T. \]
One can check that (A1′), (A2′) are preserved, so we may consider, without loss of generality, the case $L\equiv0$.

Let $\tau\in(0,T]$ be a Lebesgue point of $\hat f(t):=f(x^*_t,\theta^*_t)$. By assumptions (A1′) and (A2′), such points are dense in $[0,T]$. Now, for $\epsilon\in(0,\tau)$, define the family of perturbed controls
\[ \theta^{\tau,\epsilon}_t=\begin{cases}\omega, & t\in[\tau-\epsilon,\tau],\\ \theta^*_t, & \text{otherwise},\end{cases} \]
where $\omega\in\Theta$. This is a "needle" perturbation. Accordingly, define $x^{\tau,\epsilon}_t$ by
\[ x^{\tau,\epsilon}_t=x_0+\int_0^tf(x^{\tau,\epsilon}_s,\theta^{\tau,\epsilon}_s)\,ds, \]
i.e. the solution of the forward propagation equation with the perturbed control $\theta^{\tau,\epsilon}$. It is clear that $x^*_t=x^{\tau,\epsilon}_t$ for every $t<\tau-\epsilon$ and every $x_0$, since the perturbation is not yet active. At $t=\tau$ we have
\[ \frac1\epsilon(x^{\tau,\epsilon}_\tau-x^*_\tau)=\frac1\epsilon\int_{\tau-\epsilon}^\tau f(x^{\tau,\epsilon}_s,\omega)-f(x^*_s,\theta^*_s)\,ds. \]
Since $\tau$ is a Lebesgue point of $\hat f$, we have
\[ v_\tau:=\lim_{\epsilon\downarrow0}\frac1\epsilon(x^{\tau,\epsilon}_\tau-x^*_\tau)=f(x^*_\tau,\omega)-f(x^*_\tau,\theta^*_\tau). \]
Here, $v_\tau$ represents the leading-order perturbation of the state due to the "needle" perturbation introduced on the infinitesimal interval $[\tau-\epsilon,\tau]$. For the rest of the time interval $(\tau,T]$, the dynamics remain the same since the controls coincide. It remains to compute how the perturbation $v_\tau$ propagates. Define, for $t\ge\tau$, $v^\epsilon_t:=\frac1\epsilon(x^{\tau,\epsilon}_t-x^*_t)$ and $v_t:=\lim_{\epsilon\downarrow0}v^\epsilon_t$.
By Theorem 2.3.1 of [45], we know that $v_t$ is well defined for almost every $t$ (all the Lebesgue points of the map $t\mapsto x^*_t$) and satisfies the following linearized equation:
\[ \dot v_t=\nabla_xf(x^*_t,\theta^*_t)^{\mathsf T}v_t,\quad t\in(\tau,T],\qquad v_\tau=f(x^*_\tau,\omega)-f(x^*_\tau,\theta^*_\tau). \tag{47} \]
In particular, $v_T$ represents the perturbation of the final state introduced by this control. By the optimality assumption on $\theta^*$, we must have
\[ \mathbb E_{\mu_0}\Phi(x^{\tau,\epsilon}_T,y_0)\ge\mathbb E_{\mu_0}\Phi(x^*_T,y_0). \]
Assumptions (A1′) and (A2′) imply that $\nabla_x\Phi$ is bounded, so by the dominated convergence theorem,
\[ 0\le\lim_{\epsilon\downarrow0}\frac1\epsilon\mathbb E_{\mu_0}[\Phi(x^{\tau,\epsilon}_T,y_0)-\Phi(x^*_T,y_0)]=\mathbb E_{\mu_0}\frac{d}{d\epsilon}\Phi(x^{\tau,\epsilon}_T,y_0)\Big|_{\epsilon=0^+}=\mathbb E_{\mu_0}\nabla_x\Phi(x^*_T,y_0)\cdot v_T. \tag{48} \]
Now, let us define $p^*$ to be the solution of the adjoint of Eq. (47),
\[ \dot p^*_t=-\nabla_xf(x^*_t,\theta^*_t)\,p^*_t,\qquad p^*_T=-\nabla_x\Phi(x^*_T,y_0). \]
Then (48) implies $\mathbb E_{\mu_0}p^*_T\cdot v_T\le0$. Moreover, we have
\[ \frac d{dt}(p^*_t\cdot v_t)=\dot p^*_t\cdot v_t+\dot v_t\cdot p^*_t=0 \]
for all $t\in[\tau,T]$. Thus we must have $\mathbb E_{\mu_0}p^*_t\cdot v_t=\mathbb E_{\mu_0}p^*_T\cdot v_T\le0$ for all $t\in[\tau,T]$, and so for $t=\tau$ (with the initial condition in (47)),
\[ \mathbb E_{\mu_0}p^*_\tau\cdot f(x^*_\tau,\theta^*_\tau)\ge\mathbb E_{\mu_0}p^*_\tau\cdot f(x^*_\tau,\omega). \]
Since $\omega\in\Theta$ is arbitrary, this completes the proof upon recalling that $H(x,p,\theta)=p\cdot f(x,\theta)$ in the case $L\equiv0$.

Remark 1. In fact, one can show, under slightly stronger conditions (bounded first partial derivatives), that $\mathbb E_{\mu_0}H(x^*_t,p^*_t,\theta^*_t)$ is constant in time, using standard techniques (see e.g. Sec. 4.2.9 of [46]).

Let us now discuss the mean-field PMP. First, notice that it is a necessary condition, and hence is much weaker than the HJB characterization. Also, the PMP refers only to the open-loop control process $\theta$, with no explicit reference to an optimal control policy. Now, since the PMP is a necessary condition, we should discuss its relationship with classical necessary conditions in optimization. Equation (43) is simply the feed-forward ODE (2) under the optimal parameters $\theta^*$. On the other hand, Eq. (44) defines the evolution of the co-state $p^*_t$. To draw an analogy with constrained optimization, the co-state can be regarded as a Lagrange multiplier enforcing the ODE constraint (2). However, as in the proof of Thm. 3, it may be more general to interpret it as the evolution of an adjoint variational condition backwards in time. The Hamiltonian maximization condition (45) is a distinctive feature of PMP-type statements, in that it does not characterize optimality in terms of the vanishing of first-order partial derivatives, as is the case in the usual first-order optimality conditions. Instead, optimal solutions must globally maximize the Hamiltonian function. This feature allows greater applicability, since we can also deal with the cases where the dynamics are not differentiable with respect to the controls/training weights, or where the optimal controls/training weights lie on the boundary of the set $\Theta$. Moreover, the usual first-order optimality conditions and the celebrated back-propagation algorithm can be readily derived from the PMP; see [5] and the formal sketch below. We note that, compared to classical statements of the PMP [17], the main difference in our result is the presence of the expectation over $\mu_0$ in the Hamiltonian maximization condition (45). This is to be expected, since the mean-field optimal control must depend on the distribution of input-target pairs.
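For the reader's convenience, here is a formal sketch (ours; the precise statements and derivations are in [5]) of how back-propagation emerges from a discretization of (43)–(44) in the case $L\equiv0$. With step size $\delta=T/K$ and residual layers $x_{k+1}=x_k+\delta f(x_k,\theta_k)$, the Euler discretization of the co-state equation reads
\[
p_K=-\nabla_x\Phi(x_K,y_0),\qquad
p_k=p_{k+1}+\delta\,\nabla_xf(x_k,\theta_k)\,p_{k+1}=\Big(\frac{\partial x_{k+1}}{\partial x_k}\Big)^{\mathsf T}p_{k+1},
\]
so $p_k=-\nabla_{x_k}\Phi(x_K,y_0)$: the co-state is exactly the negative gradient that back-propagation carries through the residual block. Differentiating the loss with respect to a layer's weights then gives
\[
\nabla_{\theta_k}\mathbb E_{\mu_0}\Phi(x_K,y_0)=\mathbb E_{\mu_0}\Big[\Big(\frac{\partial x_{k+1}}{\partial\theta_k}\Big)^{\mathsf T}\nabla_{x_{k+1}}\Phi\Big]=-\delta\,\mathbb E_{\mu_0}\nabla_\theta H(x_k,p_{k+1},\theta_k),
\]
so gradient descent on the weights is, up to the factor $\delta$, gradient ascent on the expected Hamiltonian: the first-order shadow of the maximization condition (45).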
We conclude the discussion by noting that the PMP above can be written more compactly as follows. For each control process $\theta\in L^\infty([0,T],\Theta)$, denote by $x^\theta:=\{x^\theta_t:0\le t\le T\}$ and $p^\theta:=\{p^\theta_t:0\le t\le T\}$ the solutions of the Hamilton equations (43) and (44) under this control, with the random variables $(x_0,y_0)\sim\mu_0$, i.e.
\[ \dot x^\theta_t=f(x^\theta_t,\theta_t),\quad x^\theta_0=x_0;\qquad \dot p^\theta_t=-\nabla_xH(x^\theta_t,p^\theta_t,\theta_t),\quad p^\theta_T=-\nabla_x\Phi(x^\theta_T,y_0). \tag{49} \]
Then $\theta^*$ satisfies the PMP if and only if
\[ \mathbb E_{\mu_0}H(x^{\theta^*}_t,p^{\theta^*}_t,\theta^*_t)\ge\mathbb E_{\mu_0}H(x^{\theta^*}_t,p^{\theta^*}_t,\theta),\qquad\forall\theta\in\Theta. \tag{50} \]
Furthermore, observe that the mean-field PMP derived above includes, as a special case, the necessary conditions for optimality of the sampled optimal control problem (4). To see this, simply define the empirical measure $\mu^N_0:=\frac1N\sum_{i=1}^N\delta_{(x^i_0,y^i_0)}$ and apply the mean-field PMP (Thm. 3) with $\mu^N_0$ in place of $\mu_0$ to give
\[ \frac1N\sum_{i=1}^NH(x^{\theta^*,i}_t,p^{\theta^*,i}_t,\theta^*_t)\ge\frac1N\sum_{i=1}^NH(x^{\theta^*,i}_t,p^{\theta^*,i}_t,\theta),\qquad\forall\theta\in\Theta, \tag{51} \]
where each $x^{\theta,i}$ and $p^{\theta,i}$ is defined as in (49), but with the input-target pair $(x^i_0,y^i_0)$. Of course, since $\mu^N_0$ is a random measure, this is a random equation whose solutions are random variables.
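Conditions (49) and (51) suggest an iterative scheme of the method-of-successive-approximations type; algorithms of this family, with analysis, are developed in [5]. The sketch below is a minimal forward-Euler, sampled version, not the authors' exact method. The callables `f`, `grad_x_H`, `grad_x_Phi` and `argmax_H` are assumed supplied by the user and their names are ours:

```python
def msa_step(theta, f, grad_x_H, grad_x_Phi, argmax_H, X0, Y0, dt):
    """One sweep of a method-of-successive-approximations style iteration
    for the sampled PMP (49)+(51) on a forward-Euler discretization.

    theta      : list of K per-step parameter arrays.
    f          : f(X, th) -> dX/dt, applied sample-wise to X of shape (N, d).
    grad_x_H   : grad_x_H(X, P, th) -> gradient of H in x, sample-wise.
    grad_x_Phi : grad_x_Phi(X, Y) -> gradient of the terminal loss, sample-wise.
    argmax_H   : argmax_H(X, P) -> parameter maximizing the empirical
                 Hamiltonian (1/N) sum_i H(x_i, p_i, .), i.e. condition (51).
    X0, Y0     : (N, d) and (N, l) arrays of input-target samples.
    """
    K = len(theta)
    # forward pass: x_{k+1} = x_k + dt * f(x_k, theta_k), cf. (49)
    X = [X0]
    for k in range(K):
        X.append(X[k] + dt * f(X[k], theta[k]))
    # backward pass: p_k = p_{k+1} + dt * grad_x H(x_k, p_{k+1}, theta_k),
    # with terminal condition p_K = -grad_x Phi(x_K, y_0), cf. (49)
    P = [None] * (K + 1)
    P[K] = -grad_x_Phi(X[K], Y0)
    for k in reversed(range(K)):
        P[k] = P[k + 1] + dt * grad_x_H(X[k], P[k + 1], theta[k])
    # maximization step: enforce (51) at each time slice
    return [argmax_H(X[k], P[k + 1]) for k in range(K)]
```

Each sweep enforces (49) exactly on the discretization and then improves the maximization condition (51); iterating `msa_step` until the parameters stop changing yields a candidate solution of the sampled PMP.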
6.1 Connection between the HJB equation and the PMP

We now discuss some concrete connections between the HJB equation and the PMP, thus justifying our claim that the PMP can be understood as a local result, as compared to the global characterization of the HJB equation.

It should be noted that the Hamiltonian defined in the Pontryagin maximum principle (46) differs from the one (23) in the HJB equation, due to different sign conventions in these two approaches of classical optimal control. We choose to keep this difference so that readers familiar with classical control theory can draw the analogy easily. Nevertheless, if one replaces $p,L,f$ in (46) by $-P,-\bar L,\bar f$ respectively and takes the infimum over $\Theta$ instead of the maximum condition in (45), one formally obtains the negative of (23).

Now, our goal is to show that the HJB equation and the PMP are more intimately connected than it appears from the definitions of the Hamiltonians. The deeper connection originates from the link between Hamilton's canonical equations (ODEs) and Hamilton–Jacobi equations (PDEs), of which we give an informal description as follows.

First, note that although the Hamiltonian dynamics (43) and (44) describe the trajectories of particular random variables (completely determined by $(x_0,y_0)$), the optimality conditions do not depend on the particular representation of the probability measures by these random variables. In other words, we could also formulate a maximum principle whose Hamiltonian flow lives on measures in a Wasserstein space, from which the above PMP can be seen as a "lifting". That approach would parallel the developments of the previous sections on the HJB equations. However, here we choose to establish and analyze the PMP in the lifted space, due to the simplicity of having well-defined evolution equations; the corresponding evolution of measures would require more technical analysis while not being particularly more illuminating. Instead, we establish the connection by also lifting the HJB equation into $L^2(\Omega,\mathbb R^{d+l})$.

Consider the lifted HJB equation (24) in $L^2(\Omega,\mathbb R^{d+l})$. The key observation is that we can apply the method of characteristics (see e.g. Ch. 3.2 of [47]) by defining $P_t=DV(t,\xi_t)$ and writing down the characteristic evolution equations
\[ \dot\xi_t=D_PH(\xi_t,P_t),\qquad \dot P_t=-D_\xi H(\xi_t,P_t). \tag{52} \]
Suppose this system has a solution satisfying the boundary conditions $\mathbb P_{\xi_0}=\mu_0$, $P_T=\nabla_w\bar\Phi(\xi_T)$, where the second condition comes from the terminal condition of (24). To avoid technicalities, we further assume that the infimum in (23) is attained at $\theta^\dagger(\xi,P)$, which is always an interior point of $\Theta$. Hence (23) can be written explicitly as
\[ H=\mathbb E[P\cdot\bar f(\xi,\theta^\dagger(\xi,P))+\bar L(\xi,\theta^\dagger(\xi,P))], \]
and by the first-order condition we have
\[ \mathbb E\big[\nabla_\theta\bar f(\xi,\theta^\dagger(\xi,P))P+\nabla_\theta\bar L(\xi,\theta^\dagger(\xi,P))\big]=0. \]
Plugging these two equalities into (52) gives us
\[ \dot\xi_t=\bar f(\xi_t,\theta^\dagger(\xi_t,P_t)),\qquad \dot P_t=-\nabla_w\bar f(\xi_t,\theta^\dagger(\xi_t,P_t))P_t-\nabla_w\bar L(\xi_t,\theta^\dagger(\xi_t,P_t)). \]
Let $\theta^*_t=\theta^\dagger(\xi_t,P_t)$. Note that $w=(x,y)$ is the concatenated variable and the last $l$ components of $\bar f$ are zero. If we consider only the first $d$ components, we can deduce the $d$-dimensional dynamical system in $L^2(\Omega,\mathbb R^d)$:
\[ \dot x_t=f(x_t,\theta^*_t),\qquad \dot p_t=-\nabla_xf(x_t,\theta^*_t)p_t-\nabla_xL(x_t,\theta^*_t). \tag{53} \]
If we make the transformation $p\to-p$ in Thm. 3, it is straightforward to see that the dynamical system deduced from Thm. 3 satisfies (53) in $L^2(\Omega,\mathbb R^d)$, and that the boundary conditions are matched; the short computation is spelled out below.

In summary, the Hamilton equations (53) of the PMP can be viewed as the characteristic equations of the HJB equation (24). Consequently, the PMP pinpoints the necessary condition that a characteristic of the HJB equation originating from (a random variable with law) $\mu_0$ must satisfy. This justifies the preceding claim that the PMP constitutes a local optimality condition as compared to the HJB equation.
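For completeness, the computation behind "straightforward to see" (worked out here by us) is the following. Set $q_t:=-p_t$ in (53). Then
\[
\dot q_t=\nabla_xf(x_t,\theta^*_t)p_t+\nabla_xL(x_t,\theta^*_t)
=-\nabla_xf(x_t,\theta^*_t)q_t+\nabla_xL(x_t,\theta^*_t)
=-\nabla_xH(x_t,q_t,\theta^*_t),
\]
since $\nabla_xH(x,q,\theta)=\nabla_xf(x,\theta)q-\nabla_xL(x,\theta)$ by (46). The terminal condition $P_T=\nabla_w\bar\Phi(\xi_T)$ restricts, in the first $d$ components, to $p_T=\nabla_x\Phi(x_T,y_0)$, hence $q_T=-\nabla_x\Phi(x_T,y_0)$, which is exactly the pair (43)–(44) of Thm. 3.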
7 Small-time uniqueness

As discussed, the PMP constitutes necessary conditions for optimality. A natural question is when the PMP solutions are also sufficient for optimality (see [45], Ch. 8 for some discussion of sufficiency). One simple case where sufficiency holds, assuming an optimal solution exists, is when the PMP equations admit a unique solution. In this section, we investigate the uniqueness properties of the PMP system.

Note that even if there exists a unique solution $\theta^\dagger(\nu)$ of the Hamiltonian maximization $\arg\max_\theta\mathbb E_{(x,p)\sim\nu}H(x,p,\theta)$ for any law $\nu$ of $(x,p)$, equation (49) reduces to a highly non-linear two-point boundary value problem for $x^*,p^*$, further coupled with their laws. Even without the coupling to laws, such two-point boundary value problems are known not to have unique solutions in general (see e.g. Ch. 7 of [48]). In the following, we shall show that if $T$ is sufficiently small and $H$ is strongly concave, then the PMP admits a unique solution. Hereafter, we retain assumption (A2′) and replace (A1′) with a stronger assumption, which greatly simplifies our arguments:

(A1′′) $f$ is bounded; $f,L,\Phi$ are twice continuously differentiable with respect to both $x$ and $\theta$, with bounded and Lipschitz partial derivatives.

With an estimate of the difference of the flow maps generated by two different controls, we can prove a small-time uniqueness result for the PMP.

Theorem 4. Suppose that $H(x,p,\theta)$ is strongly concave in $\theta$, uniformly in $x,p\in\mathbb R^d$, i.e. $\nabla^2_{\theta\theta}H(x,p,\theta)+\lambda_0I\preceq0$ for some $\lambda_0>0$. Then, for sufficiently small $T$, if $\theta^1$ and $\theta^2$ are solutions of the PMP (50), then $\theta^1=\theta^2$.

Since we are studying the effect of $T$, in the remaining estimates of this section the dependence of constants on $T$ is tracked explicitly. We first estimate the difference of the flow maps driven by two different controls.

Lemma 2. Let $\theta^1,\theta^2\in L^\infty([0,T],\Theta)$. Then there exists a constant $T_0$ such that for all $T\in[0,T_0)$ we have
\[ \|x^{\theta^1}-x^{\theta^2}\|_{L^\infty}+\|p^{\theta^1}-p^{\theta^2}\|_{L^\infty}\le C(T)\|\theta^1-\theta^2\|_{L^\infty}, \]
where $C(T)>0$ satisfies $C(T)\to0$ as $T\to0$.

Proof. Denote $\delta\theta:=\theta^1-\theta^2$, $\delta x:=x^{\theta^1}-x^{\theta^2}$ and $\delta p:=p^{\theta^1}-p^{\theta^2}$. Since $x^{\theta^1}_0=x^{\theta^2}_0=x_0$, integrating the respective ODEs and using (A1′′), we have
\[ \|\delta x_t\|\le\int_0^t\|f(x^{\theta^1}_s,\theta^1_s)-f(x^{\theta^2}_s,\theta^2_s)\|\,ds\le K_L\int_0^T\|\delta x_s\|\,ds+K_L\int_0^T\|\delta\theta_s\|\,ds, \]
and so $\|\delta x\|_{L^\infty}\le K_LT\|\delta x\|_{L^\infty}+K_LT\|\delta\theta\|_{L^\infty}$. Now, if $T<T_0:=1/K_L$, we then have
\[ \|\delta x\|_{L^\infty}\le\frac{K_LT}{1-K_LT}\|\delta\theta\|_{L^\infty}. \tag{54} \]
Similarly (note that the difference of $\nabla_xH$ also carries a $\|\delta\theta_s\|$ term, by the Lipschitz assumption in (A1′′)),
\[ \|\delta p_t\|\le K_L\|\delta x_T\|+K_L\int_t^T\|\delta x_s\|\,ds+K_L\int_t^T\|\delta p_s\|\,ds+K_L\int_t^T\|\delta\theta_s\|\,ds, \]
\[ \|\delta p\|_{L^\infty}\le(K_L+K_LT)\|\delta x\|_{L^\infty}+K_LT\|\delta p\|_{L^\infty}+K_LT\|\delta\theta\|_{L^\infty}, \]
and hence, for $T<T_0$,
\[ \|\delta p\|_{L^\infty}\le\frac{K_L(1+T)}{1-K_LT}\|\delta x\|_{L^\infty}+\frac{K_LT}{1-K_LT}\|\delta\theta\|_{L^\infty}. \tag{55} \]
Combining (54) and (55) proves the claim.

With the above estimate, we can now prove Thm. 4.

Proof of Thm. 4. By uniform strong concavity, the function $\theta\mapsto\mathbb E_{\mu_0}H(x^{\theta^1}_t,p^{\theta^1}_t,\theta)$ is strongly concave. Thus we have a $\lambda_0>0$ such that
\[ \frac{\lambda_0}2\|\theta^1_t-\theta^2_t\|^2\le\Big[\mathbb E_{\mu_0}\nabla_\theta H(x^{\theta^1}_t,p^{\theta^1}_t,\theta^2_t)-\mathbb E_{\mu_0}\nabla_\theta H(x^{\theta^1}_t,p^{\theta^1}_t,\theta^1_t)\Big]\cdot(\theta^1_t-\theta^2_t). \]
A similar expression holds for $\theta\mapsto\mathbb E_{\mu_0}H(x^{\theta^2}_t,p^{\theta^2}_t,\theta)$, and so combining them and using assumption (A1′′) we have
\[
\begin{aligned}
\lambda_0\|\theta^1_t-\theta^2_t\|^2&\le\Big[\mathbb E_{\mu_0}\nabla_\theta H(x^{\theta^1}_t,p^{\theta^1}_t,\theta^2_t)-\mathbb E_{\mu_0}\nabla_\theta H(x^{\theta^1}_t,p^{\theta^1}_t,\theta^1_t)\Big]\cdot(\theta^1_t-\theta^2_t)\\
&\quad+\Big[\mathbb E_{\mu_0}\nabla_\theta H(x^{\theta^2}_t,p^{\theta^2}_t,\theta^1_t)-\mathbb E_{\mu_0}\nabla_\theta H(x^{\theta^2}_t,p^{\theta^2}_t,\theta^2_t)\Big]\cdot(\theta^1_t-\theta^2_t)\\
&\le\mathbb E_{\mu_0}\|\nabla_\theta H(x^{\theta^1}_t,p^{\theta^1}_t,\theta^1_t)-\nabla_\theta H(x^{\theta^2}_t,p^{\theta^2}_t,\theta^1_t)\|\,\|\theta^1_t-\theta^2_t\|\\
&\quad+\mathbb E_{\mu_0}\|\nabla_\theta H(x^{\theta^1}_t,p^{\theta^1}_t,\theta^2_t)-\nabla_\theta H(x^{\theta^2}_t,p^{\theta^2}_t,\theta^2_t)\|\,\|\theta^1_t-\theta^2_t\|\\
&\le K_L\|\delta\theta\|_{L^\infty}\big(\|\delta x\|_{L^\infty}+\|\delta p\|_{L^\infty}\big).
\end{aligned}
\]
Combining the above with Lemma 2, we have
\[ \|\delta\theta\|^2_{L^\infty}\le\frac{K_L}{\lambda_0}C(T)\|\delta\theta\|^2_{L^\infty}. \]
But $C(T)=o(1)$, so we may take $T$ sufficiently small that $K_LC(T)<\lambda_0$, and conclude that $\|\delta\theta\|_{L^\infty}=0$.

In the context of machine learning, since $f$ is bounded, small $T$ roughly corresponds to the regime where the reachable set of the forward dynamics is small. This can be loosely interpreted as the case where the model has low capacity or expressive power. We note that the number of parameters is still infinite, since we only require $\theta$ to be essentially bounded and measurable in time. Hence, Thm. 4 can be interpreted as the statement that when the model capacity is low, the optimal solution is unique, albeit possibly with a high loss value. Note that strong concavity of the Hamiltonian does not imply that the loss function $J$ is strongly convex, or even convex; convexity is often an unrealistic assumption in deep learning. In fact, in the case considered in Example 1, we observe that $H$ is strongly concave while the loss function $J$ can be highly non-convex, due to the non-linear transformation $\sigma$.
Compared with the characterization using the HJB equation (Sec. 5), we observe that uniqueness of the solutions of the PMP requires the small-$T$ condition.

8 From mean-field PMP to sampled PMP

So far, we have focused our discussion on the mean-field control problem (3) and the mean-field PMP (50). However, the solution of the mean-field PMP requires maximizing an expectation. Hence, in practice we must resort to solving a sampled version (51), which constitutes necessary conditions for the sampled optimal control problem (4).

The goal of this section is to draw some precise connections between the solutions of the mean-field PMP (50) and the sampled PMP (51). In particular, we show that under appropriate conditions, near any stable (to be precisely defined later) solution of the mean-field PMP (50) we can find, with high probability, a solution of the sampled PMP (51). This allows us to establish a concrete link, via the maximum principle, between solutions of the population risk minimization problem (3) and the empirical risk minimization problem (4). To proceed, the key observation is that the interior solutions of both the mean-field and the sampled PMP can be written as solutions of algebraic equations on Banach spaces. Indeed, in view of the compact notation (50), let us suppose that $\theta^*$ is a solution of the PMP such that the maximization step attains its maximum in the interior of $\Theta$ for a.e. $t\in[0,T]$. Note that if $\Theta$ is sufficiently large, e.g. $\Theta=\mathbb R^m$, then this must be the case. We shall hereafter assume this holds. Consequently, the PMP solution satisfies (by the dominated convergence theorem)
\[ F(\theta^*)_t:=\mathbb E_{\mu_0}\nabla_\theta H(x^{\theta^*}_t,p^{\theta^*}_t,\theta^*_t)=0 \tag{56} \]
for a.e. $t$, where $F:L^\infty([0,T],\Theta)\to L^\infty([0,T],\mathbb R^m)$ is a Banach space mapping. Similarly, from (51) we know that an interior solution $\theta^N$ of the finite-sample PMP is a random variable which satisfies
\[ F_N(\theta^N)_t:=\frac1N\sum_{i=1}^N\nabla_\theta H(x^{\theta^N,i}_t,p^{\theta^N,i}_t,\theta^N_t)=0 \tag{57} \]
for a.e. $t$. Now, $F_N$ is a random approximation of $F$, and $\mathbb EF_N(\theta)=F(\theta)$ for all $\theta$. In fact, $F_N\to F$ almost surely by the law of large numbers. Hence, the analysis of the approximation properties of the mean-field PMP by its sampled counterpart amounts to the study of the approximation of the zeros of $F$ by those of $F_N$.

In view of this, we shall take a brief excursion to develop some theory on random approximations of zeros of Banach space mappings at an abstract level, and then use these results to deduce properties of the PMP approximations. The techniques employed in the next subsection are reminiscent of classical numerical analysis results on finite difference approximation schemes [49], except that we work with random approximations.

8.1 Excursion: random approximations of zeros of Banach space mappings

Let $(U,\|\cdot\|_U)$, $(V,\|\cdot\|_V)$ be Banach spaces and $F:U\to V$ be a mapping. We first define a notion of stability, which shall be the primary condition ensuring the existence of nearby zeros of approximations.

Definition 2. For $\rho>0$ and $x\in U$, define $S_\rho(x):=\{y\in U:\|x-y\|_U\le\rho\}$. We say that the mapping $F$ is stable on $S_\rho(x)$ if there exists a constant $K_\rho>0$ such that for all $y,z\in S_\rho(x)$,
\[ \|y-z\|_U\le K_\rho\|F(y)-F(z)\|_V. \]

Note that if $F$ is stable on $S_\rho(x)$, then it trivially has at most one solution of $F=0$ on $S_\rho(x)$. If it does have a solution, say at $x^*$, then that solution is necessarily isolated, i.e. if $DF(x^*)$ exists, then it is non-singular.
The following proposition establishes a stronger version of this statement: if $DF(x)$ exists at any $x\in S_\rho(x^*)$, then it is necessarily non-singular.

Proposition 4. Let $F$ be stable on $S_\rho(x^*)$. Then, for any $x\in S_\rho(x^*)$, if $DF(x)$ exists, it is non-singular, i.e. $DF(x)y=0$ implies $y=0$.

Proof. Suppose, for the sake of contradiction, that $DF(x)y=0$ and $\|y\|_U\neq0$. Define $z(\alpha):=x+\alpha y$ with $\alpha$ sufficiently small so that $z(\alpha)\in S_\rho(x^*)$. Then
\[
\begin{aligned}
\alpha\|y\|_U&=\|x-z(\alpha)\|_U\\
&\le K_\rho\|F(x)-F(z(\alpha))\|_V\\
&\le K_\rho\big(\alpha\|DF(x)y\|_V+\|F(x+\alpha y)-F(x)-DF(x)\alpha y\|_V\big).
\end{aligned}
\]
But $DF(x)y=0$, and so $\alpha\|y\|_U\le K_\rho\,r(x,\alpha y)\,\alpha\|y\|_U$. By the definition of the Fréchet derivative (5), $r(x,\alpha y)\to0$ as $\alpha\to0$. Thus, if $\alpha$ is small enough that $K_\rho r(x,\alpha y)<1$, then $\|y\|_U=0$, and we arrive at a contradiction.

As the previous proposition suggests, a converse statement establishing stability will require $DF(x)$ to be non-singular on a neighborhood of $x^*$. One in fact requires more, namely that $DF$ be Lipschitz. Note that for a linear operator $A:U\to V$, we also use $\|A\|_V$ to denote the usual induced norm, $\|A\|_V=\sup_{\|y\|_U\le1}\|Ay\|_V$.

Proposition 5. Suppose $DF(x^*)$ is non-singular, $DF(x)$ exists, and $\|DF(x)-DF(y)\|_V\le K_L\|x-y\|_U$ for all $x,y\in S_\rho(x^*)$. Then $F$ is stable on $S_{\rho_0}(x^*)$ for any $0<\rho_0\le\min\big(\rho,\tfrac12(K_L\|DF(x^*)^{-1}\|_U)^{-1}\big)$, with stability constant $K_{\rho_0}=2\|DF(x^*)^{-1}\|_U$.

Proof. Let $\rho_0\le\rho$ and take $x,y\in S_{\rho_0}(x^*)$. Using the mean value theorem, we can write $F(x)-F(y)=R(x,y)(x-y)$, where
\[ R(x,y):=\int_0^1DF(sx+(1-s)y)\,ds. \]
But, using the Lipschitz condition, we have
\[
\begin{aligned}
\|R(x,y)-DF(x^*)\|_V&\le\int_0^1\|DF(sx+(1-s)y)-DF(sx^*+(1-s)x^*)\|_V\,ds\\
&\le K_L\int_0^1\|s(x-x^*)+(1-s)(y-x^*)\|_U\,ds\\
&\le\rho_0K_L.
\end{aligned}
\]
We take $\rho_0$ sufficiently small that $\rho_0K_L\le\frac12\|DF(x^*)^{-1}\|^{-1}_U$. Then, by the Banach lemma, $R(x,y)$ is non-singular and $\|R(x,y)^{-1}\|_U\le2\|DF(x^*)^{-1}\|_U$. The result follows since $(x-y)=R(x,y)^{-1}(F(x)-F(y))$.

Now, let us introduce a family of random mappings $F_N$ that approximate $F$. Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and $\{F_N(\omega):N\ge1,\omega\in\Omega\}$ a family of mappings from $U$ to $V$ such that $\omega\mapsto F_N(\omega)(x)$ is $\mathcal F$-measurable for each $x$ (we equip the Banach spaces $U,V$ with the Borel $\sigma$-algebra). We make the following assumptions, which will allow us to relate the random solutions of $F_N=0$ to those of $F=0$ in Thm. 5.

(B1) (Stability) There exists $x^*\in U$ such that $F(x^*)=0$ and $F$ is stable on $S_\rho(x^*)$ for some $\rho>0$.
(B2) (Uniform convergence in probability) For all $N\ge1$, $DF(x)$ and $DF_N(x)$ exist for all $x\in S_\rho(x^*)$, $\mathbb P$-a.s., and
\[ \mathbb P[\|F(x)-F_N(x)\|_V\ge s]\le r_1(N,s),\qquad \mathbb P[\|DF(x)-DF_N(x)\|_V\ge s]\le r_2(N,s), \]
for some real-valued functions $r_1,r_2$ such that $r_1(N,s),r_2(N,s)\to0$ as $N\to\infty$.
(B3) (Uniformly Lipschitz derivative) There exists $K_L>0$ such that for all $x,y\in S_\rho(x^*)$,
\[ \|DF_N(x)-DF_N(y)\|_V\le K_L\|x-y\|_U,\quad\mathbb P\text{-a.s.} \]

Theorem 5. Let (B1)–(B3) hold. Then there exist positive constants $s_0,\rho_1,C$ with $\rho_1<\rho$ and $U$-valued random variables $x_N\in S_{\rho_1}(x^*)$ satisfying
\[ \mathbb P[\|x_N-x^*\|_U\ge Cs]\le r_1(N,s)+r_2(N,s),\qquad s\in(0,s_0], \]
\[ \mathbb P[F_N(x_N)\neq0]\le r_1(N,s_0)+r_2(N,s_0). \]
In particular, $x_N\to x^*$ and $F_N(x_N)\to0$ in probability.

To establish Thm. 5, we first prove that for large $N$, with high probability, $DF_N(x^*)$ is non-singular and $\|DF_N(x^*)^{-1}\|_U$ is uniformly bounded.

Lemma 3. Let (B1)–(B3) hold.
Then there exists a constant $s_0>0$ such that for each $s\in(0,s_0]$ and $N\ge1$ there exists a measurable $A_N(s)\subset\Omega$ with $\mathbb P[A_N(s)]\ge1-r_1(N,s)-r_2(N,s)$, such that for each $\omega\in A_N(s)$,
\[ \|F(x^*)-F_N(\omega)(x^*)\|_V<s. \]
Moreover, $DF_N(\omega)(x^*)$ is non-singular with
\[ \|DF_N(\omega)(x^*)^{-1}\|_U\le2\|DF(x^*)^{-1}\|_U. \]
In particular, $F_N(\omega)$ is stable on $S_{\rho_0}(x^*)$ with $\rho_0\le\min\big(\rho,\tfrac14(K_L\|DF(x^*)^{-1}\|_U)^{-1}\big)$ and stability constant $K_{\rho_0}=4\|DF(x^*)^{-1}\|_U$.

Proof. For $s>0$ set
\[ A_N(s):=\{\omega\in\Omega:\|F(x^*)-F_N(\omega)(x^*)\|_V<s\ \text{and}\ \|DF(x^*)-DF_N(\omega)(x^*)\|_V<s\}. \]
Observe that $A_N(s)$ is measurable, as $DF_N(\omega)(x^*)$ is measurable, and assumption (B2) implies $\mathbb P[A_N(s)]\ge1-r_1(N,s)-r_2(N,s)$. Now, take $s$ sufficiently small that $s\le s_0=\frac12\|DF(x^*)^{-1}\|^{-1}_U$. Then, for each $\omega\in A_N(s)$, the Banach lemma implies that $DF_N(\omega)(x^*)$ is non-singular and
\[ \|DF_N(\omega)(x^*)^{-1}\|_U\le\frac{\|DF(x^*)^{-1}\|_U}{1-\frac12}=2\|DF(x^*)^{-1}\|_U. \]
Finally, we use Proposition 5 to deduce the stability of $F_N(\omega)$.

We are now ready to prove Thm. 5, by constructing a uniform contraction mapping whose fixed point is a solution of $F_N(x)=0$.

Proof of Thm. 5. Let $s_0$, $A_N(s)$ and $\rho_0$ be as defined in Lemma 3. For each $\omega\in A_N(s)$ with $s\le s_0$, define the mapping
\[ G_N(\omega)(x):=x-DF_N(\omega)(x^*)^{-1}F_N(\omega)(x). \]
We now show that this is in fact a uniform contraction on $S_{\rho_1}(x^*)$ for sufficiently small $\rho_1$. Let $x,y\in S_{\rho_1}(x^*)$. By the mean value theorem, we have
\[
\begin{aligned}
G_N(\omega)(x)-G_N(\omega)(y)&=DF_N(\omega)(x^*)^{-1}\big[DF_N(\omega)(x^*)(x-y)-(F_N(\omega)(x)-F_N(\omega)(y))\big]\\
&=DF_N(\omega)(x^*)^{-1}\big[DF_N(\omega)(x^*)-R_N(\omega)(x,y)\big](x-y),
\end{aligned}
\]
where $R_N(\omega)(x,y)=\int_0^1DF_N(\omega)(sx+(1-s)y)\,ds$. The Lipschitz condition (B3) implies
\[ \|DF_N(\omega)(x^*)-R_N(\omega)(x,y)\|_V\le\rho_1K_L, \]
and hence, by Lemma 3,
\[ \|G_N(\omega)(x)-G_N(\omega)(y)\|_U\le\alpha\|x-y\|_U,\qquad \alpha=2K_L\rho_1\|DF(x^*)^{-1}\|_U. \]
We now pick $\rho_1<\rho_0$ sufficiently small that $\alpha<1$. It remains to show that the mapping $G_N(\omega)$ maps $S_{\rho_1}(x^*)$ into itself. Let $x\in S_{\rho_1}(x^*)$; then, noting that $F(x^*)=0$,
\[
\begin{aligned}
\|G_N(\omega)(x)-x^*\|_U&\le\|G_N(\omega)(x)-G_N(\omega)(x^*)\|_U+\|G_N(\omega)(x^*)-x^*\|_U\\
&\le\alpha\rho_1+2\|DF(x^*)^{-1}\|_U\,\|F_N(\omega)(x^*)-F(x^*)\|_V.
\end{aligned}
\]
Using Lemma 3 again, we have
\[ \|G_N(\omega)(x)-x^*\|_U\le\alpha\rho_1+2s\|DF(x^*)^{-1}\|_U. \]
We now take $s_0>s$ small enough that $2s_0\|DF(x^*)^{-1}\|_U<(1-\alpha)\rho_1$. Then, for all $N\ge1$, $G_N(\omega)$ is a contraction, uniformly in $N$, on $S_{\rho_1}(x^*)$, and hence by the Banach fixed point theorem there exists a unique $\tilde x_{N,s}(\omega)\in S_{\rho_1}(x^*)$ such that $G_N(\omega)(\tilde x_{N,s}(\omega))=\tilde x_{N,s}(\omega)$, i.e. $F_N(\omega)(\tilde x_{N,s}(\omega))=0$ for all $\omega\in A_N(s)$. Moreover, $\tilde x_{N,s}(\omega)=\lim_{k\to\infty}[G_N(\omega)]^{(k)}(y)$ for any $y\in S_{\rho_1}(x^*)$. Define
\[ x_{N,s}(\omega)=\mathbf1_{A_N(s)}(\omega)\,\tilde x_{N,s}(\omega)+\mathbf1_{A_N(s)^c}(\omega)\,x^*. \]
Now, $x_{N,s}$ is measurable, since $A_N(s)$ is measurable and $\tilde x_{N,s}$ is the limit of measurable random variables, hence measurable. Moreover, $A_N(s)\subset\{F_N(x_{N,s})=0\}$, and so $\mathbb P[F_N(x_{N,s})=0]\ge1-r_1(N,s)-r_2(N,s)$. Since $x_{N,s}\in S_{\rho_1}(x^*)$ and $\rho_1<\rho_0$, using the stability of $F_N(\omega)$ established in Lemma 3 and the fact that $F_N(x_{N,s})=F(x^*)=0$, we have, for any $\omega\in A_N(s)$,
\[
\begin{aligned}
\|x_{N,s}(\omega)-x^*\|_U&\le K_{\rho_0}\|F_N(\omega)(x_{N,s})-F_N(\omega)(x^*)\|_V\\
&\le4\|DF(x^*)^{-1}\|_U\,\|F(x^*)-F_N(\omega)(x^*)\|_V\\
&<4s\|DF(x^*)^{-1}\|_U,
\end{aligned}
\]
and so $\mathbb P[\|x_{N,s}(\omega)-x^*\|_U\ge Cs]\le r_1(N,s)+r_2(N,s)$ with $C=4\|DF(x^*)^{-1}\|_U$. At this point, it appears that $x_{N,s}$ depends on $s$. However, notice that for all $s\le s_0$ we have $A_N(s)\subset A_N(s_0)$, while $x_{N,s}(\omega)$ is the unique solution of $F_N(\omega)(\cdot)=0$ in $S_{\rho_1}(x^*)$ for each $\omega\in A_N(s)\subset A_N(s_0)$. Therefore $x_{N,s}(\omega)=x_{N,s_0}(\omega)$ for all $s\le s_0$.
We can thus write $x_N:=x_{N,s_0}\equiv x_{N,s}$. Lastly, convergence in probability follows from the decay of the functions $r_1,r_2$ as $N\to\infty$.

8.2 Error estimate for sampled PMP

Now, our goal is to apply the theory developed in Sec. 8.1 to the PMP. We shall assume that $\theta^*$, the solution of the mean-field PMP, is such that $F(\theta^*)=0$ (recall that this holds for $\Theta=\mathbb R^m$). Suppose further that $F$ is stable at $\theta^*$ (see Def. 2). We wish to show that for sufficiently large $N$, with high probability, $F_N$ must have a solution $\theta^N$ close to $\theta^*$.

In view of Thm. 5, we only need to check that (B2)–(B3) are satisfied. This requires a few elementary estimates and an application of the infinite-dimensional Hoeffding inequality [50].

Lemma 4. There exist constants $K_B,K_L>0$ such that for all $\theta,\phi\in L^\infty([0,T],\Theta)$,
\[ \|x^\theta\|_{L^\infty}+\|p^\theta\|_{L^\infty}\le K_B, \]
\[ \|x^\theta-x^\phi\|_{L^\infty}+\|p^\theta-p^\phi\|_{L^\infty}\le K_L\|\theta-\phi\|_{L^\infty}. \]

Proof. The uniform bound follows directly from the boundedness of $f$, $\nabla_x\Phi$ and $\nabla_xH$ under (A1′′) and (A2′). For the Lipschitz estimate, we have by Gronwall's inequality, for a.e. $t$,
\[
\begin{aligned}
\|x^\theta_t-x^\phi_t\|&=\Big\|\int_0^tf(x^\theta_s,\theta_s)-f(x^\phi_s,\phi_s)\,ds\Big\|\\
&\le K_L\int_0^t\|x^\theta_s-x^\phi_s\|\,ds+K_L\int_0^t\|\theta_s-\phi_s\|\,ds\\
&\le K_LTe^{K_LT}\|\theta-\phi\|_{L^\infty}.
\end{aligned}
\]
Similarly (keeping the $\|\theta_s-\phi_s\|$ term produced by the difference of $\nabla_xH$),
\[
\begin{aligned}
\|p^\theta_t-p^\phi_t\|&\le\|\nabla_x\Phi(x^\theta_T,y_0)-\nabla_x\Phi(x^\phi_T,y_0)\|+\Big\|\int_t^T\nabla_xH(x^\theta_s,p^\theta_s,\theta_s)-\nabla_xH(x^\phi_s,p^\phi_s,\phi_s)\,ds\Big\|\\
&\le K_L\|x^\theta_T-x^\phi_T\|+K_L\int_t^T\|x^\theta_s-x^\phi_s\|+\|\theta_s-\phi_s\|\,ds+K_L\int_t^T\|p^\theta_s-p^\phi_s\|\,ds\\
&\le(1+K_L)(1+T)K_LTe^{2K_LT}\|\theta-\phi\|_{L^\infty}.
\end{aligned}
\]

Notice that we can view $x^\theta\equiv x(\theta)$ as a Banach space mapping from $L^\infty([0,T],\Theta)$ to $L^\infty([0,T],\mathbb R^d)$, and similarly for $p^\theta$. Below, we establish some elementary estimates for the derivatives of these mappings with respect to $\theta$.

Lemma 5. There exist constants $K_B,K_L>0$ such that for all $\theta,\phi\in L^\infty([0,T],\Theta)$,
\[ \|Dx^\theta\|_{L^\infty}+\|Dp^\theta\|_{L^\infty}\le K_B, \]
\[ \|Dx^\theta-Dx^\phi\|_{L^\infty}+\|Dp^\theta-Dp^\phi\|_{L^\infty}\le K_L\|\theta-\phi\|_{L^\infty}. \]

Proof. Let $\eta\in L^\infty([0,T],\mathbb R^m)$ with $\|\eta\|_{L^\infty}\le1$. For brevity, let us also denote $f^\theta_t:=f(x^\theta_t,\theta_t)$ and $H^\theta_t:=H(x^\theta_t,p^\theta_t,\theta_t)$. Then $(Dx^\theta)\eta$ satisfies the linearized ODE
\[ \frac d{dt}[(Dx^\theta)\eta]_t=\nabla_xf^\theta_t[(Dx^\theta)\eta]_t+\nabla_\theta f^\theta_t\,\eta_t,\qquad[(Dx^\theta)\eta]_0=0. \]
Gronwall's inequality and (A1′′) immediately imply that $\|[(Dx^\theta)\eta]_t\|\le K_B\|\eta\|_{L^\infty}$, and so $\|Dx^\theta\|_{L^\infty}\le K_B$. Next,
\[
\begin{aligned}
\|[(Dx^\theta)\eta]_t-[(Dx^\phi)\eta]_t\|&\le\int_0^t\|\nabla_xf^\theta_s\|\,\|[(Dx^\theta)\eta]_s-[(Dx^\phi)\eta]_s\|\,ds\\
&\quad+\int_0^t\|\nabla_xf^\theta_s-\nabla_xf^\phi_s\|\,\|[(Dx^\phi)\eta]_s\|\,ds+\int_0^t\|\nabla_\theta f^\theta_s-\nabla_\theta f^\phi_s\|\,\|\eta_s\|\,ds.
\end{aligned}
\]
But, using Lemma 4 and assumption (A1′′), we have
\[ \|\nabla_xf^\theta_s-\nabla_xf^\phi_s\|\le K_L\|x^\theta_s-x^\phi_s\|+K_L\|\theta_s-\phi_s\|\le K_L\|\theta-\phi\|_{L^\infty}. \]
A similar calculation shows $\|\nabla_\theta f^\theta_s-\nabla_\theta f^\phi_s\|\le K_L\|\theta-\phi\|_{L^\infty}$.
Hence, Gronwall's inequality gives
\[ \|[(Dx^\theta)\eta]_t-[(Dx^\phi)\eta]_t\|\le K_L\|\eta\|_{L^\infty}\|\theta-\phi\|_{L^\infty}. \]
Similarly, $(Dp^\theta)\eta$ satisfies the ODE
\[ \frac d{dt}[(Dp^\theta)\eta]_t=-\nabla^2_{xx}H^\theta_t[(Dx^\theta)\eta]_t-\nabla^2_{xp}H^\theta_t[(Dp^\theta)\eta]_t-\nabla^2_{x\theta}H^\theta_t\,\eta_t, \]
\[ [(Dp^\theta)\eta]_T=-\nabla^2_{xx}\Phi(x^\theta_T,y_0)[(Dx^\theta)\eta]_T. \]
An analogous calculation as above with (A1′′) shows that
\[ \|[(Dp^\theta)\eta]_t-[(Dp^\phi)\eta]_t\|\le K_L\|\eta\|_{L^\infty}\|\theta-\phi\|_{L^\infty}. \]

Lemma 6. Let $h:\mathbb R^d\times\mathbb R^d\times\Theta\to\mathbb R^m$ have bounded and Lipschitz derivatives in all arguments, and define the mapping $\theta\mapsto G(\theta)$ where $[G(\theta)]_t=h(x^\theta_t,p^\theta_t,\theta_t)$. Then $G$ is differentiable and $DG$ is bounded and Lipschitz $\mu_0$-a.s., i.e.
\[ \|DG(\theta)\|_{L^\infty}\le K_B,\qquad\|DG(\theta)-DG(\phi)\|_{L^\infty}\le K_L\|\theta-\phi\|_{L^\infty}, \]
for some $K_B,K_L>0$ and all $\theta,\phi\in L^\infty([0,T],\Theta)$.

Proof. Let $\eta\in L^\infty([0,T],\mathbb R^m)$ with $\|\eta\|_{L^\infty}\le1$. By the assumptions on $h$ and Lemmas 4 and 5, $DG$ exists and, by the chain rule,
\[ [(DG(\theta))\eta]_t=\nabla_xh^\theta_t[(Dx^\theta)\eta]_t+\nabla_ph^\theta_t[(Dp^\theta)\eta]_t+\nabla_\theta h^\theta_t\,\eta_t. \]
Thus $\|[(DG(\theta))\eta]_t\|\le K_B\|\eta\|_{L^\infty}$, and
\[ \|[(DG(\theta))\eta]_t-[(DG(\phi))\eta]_t\|\le K_B\|\nabla_xh^\theta_t-\nabla_xh^\phi_t\|+K_L\|[(Dx^\theta)\eta]_t-[(Dx^\phi)\eta]_t\|+\dots \]
The remaining terms are split similarly, and we omit them for brevity. Using the Lipschitz assumption on the derivatives of $h$ together with Lemmas 4 and 5, we obtain the result.

Applying Lemma 6 with $h=\nabla_\theta H$ (which is $\mathbb R^m$-valued, as required) for each sample $i$ and summing, we see that $DF_N$ is bounded and Lipschitz $\mu_0$-a.s., and so (B3) is satisfied. It remains to check (B2). Using Lemma 6 and (A1′′), $\|F_N\|_{L^\infty}$ and $\|DF_N\|_{L^\infty}$ are almost surely bounded, hence they satisfy standard concentration estimates. We have:

Lemma 7. There exist constants $K_1,K_2>0$ such that for all $\theta\in L^\infty([0,T],\Theta)$,
\[ \mathbb P[\|F(\theta)-F_N(\theta)\|_{L^\infty}\ge s]\le2\exp\Big(-\frac{Ns^2}{K_1+K_2s}\Big), \]
\[ \mathbb P[\|DF(\theta)-DF_N(\theta)\|_{L^\infty}\ge s]\le2\exp\Big(-\frac{Ns^2}{K_1+K_2s}\Big). \]

Proof. Since $\|F(\theta)\|$ is uniformly bounded by $K_B$, we can apply the infinite-dimensional Hoeffding inequality ([50], Corollary 2) to obtain
\[ \mathbb P[\|F(\theta)-F_N(\theta)\|_{L^\infty}\ge s]\le2\exp\Big(-\frac{Ns^2}{2K_B^2+(2/3)K_Bs}\Big), \]
and similarly for $DF_N$.

Given the above results, we can deduce Thm. 6 directly.

Theorem 6. Let $\theta^*$ be a solution of $F=0$ (defined in (56)) which is stable on $S_\rho(\theta^*)$ for some $\rho>0$. Then there exist positive constants $s_0,C,K_1,K_2$ and $\rho_1<\rho$, and a random variable $\theta^N\in S_{\rho_1}(\theta^*)\subset L^\infty([0,T],\Theta)$, such that
\[ \mathbb P[\|\theta^*-\theta^N\|_{L^\infty}\ge Cs]\le4\exp\Big(-\frac{Ns^2}{K_1+K_2s}\Big),\qquad s\in(0,s_0], \]
\[ \mathbb P[F_N(\theta^N)\neq0]\le4\exp\Big(-\frac{Ns_0^2}{K_1+K_2s_0}\Big). \]
In particular, $\theta^N\to\theta^*$ and $F_N(\theta^N)\to0$ in probability.

Proof. Apply Thm. 5 with the estimates derived in Lemmas 6 and 7.

Thm. 6 describes the convergence of a stationary solution of the sampled PMP to the corresponding solution of the population (mean-field) PMP. Together with a local strong concavity condition, we show further in Cor. 1 that this stationary solution is in fact a local maximum of the sampled Hamiltonian. The claim regarding the convergence of loss function values is provided in Cor. 2.

Corollary 1. Let $\theta^*$ be a solution of the mean-field PMP such that there exists $\lambda_0>0$ satisfying, for a.e. $t\in[0,T]$, $\mathbb E\nabla^2_{\theta\theta}H(x^{\theta^*}_t,p^{\theta^*}_t,\theta^*_t)+\lambda_0I\preceq0$. Then the random variable $\theta^N$ defined in Thm. 6 satisfies, with probability at least $1-6\exp[-(N\lambda_0^2)/(K_1+K_2\lambda_0)]$, that $\theta^N_t$ is a strict local maximum of the sampled Hamiltonian $\frac1N\sum_{i=1}^NH(x^{\theta^N,i}_t,p^{\theta^N,i}_t,\theta)$.
In particular, if the finite-sample Hamiltonian has a unique local maximizer, then $\theta^N$ is a solution of the sampled PMP with the same high probability.

Proof. Let
\[ [I(\theta)]_t:=\mathbb E_{\mu_0}\nabla^2_{\theta\theta}H(x^\theta_t,p^\theta_t,\theta_t),\qquad [I_N(\theta)]_t:=\frac1N\sum_{i=1}^N\nabla^2_{\theta\theta}H(x^{\theta,i}_t,p^{\theta,i}_t,\theta_t). \]
Given the assumed negative-definiteness of the Hessian at $\theta^*_t$,
\[ [I(\theta^*)]_t+\lambda_0I\preceq0, \]
what we need to prove is
\[ \mathbb P[\|I_N(\theta^N)-I(\theta^*)\|_{L^\infty}\ge2c\lambda_0]\le o(1),\qquad N\to\infty, \]
for sufficiently small $c>0$. Consider the following estimate (by the triangle inequality and a union bound):
\[
\begin{aligned}
\mathbb P[\|I_N(\theta^N)-I(\theta^*)\|_{L^\infty}\ge2c\lambda_0]
&\le\mathbb P[\|I_N(\theta^N)-I_N(\theta^*)\|_{L^\infty}\ge c\lambda_0\ \text{or}\ \|I_N(\theta^*)-I(\theta^*)\|_{L^\infty}\ge c\lambda_0]\\
&\le\mathbb P[\|I_N(\theta^N)-I_N(\theta^*)\|_{L^\infty}\ge c\lambda_0]+\mathbb P[\|I_N(\theta^*)-I(\theta^*)\|_{L^\infty}\ge c\lambda_0].
\end{aligned}
\]
To bound the first term, we can use steps similar to those in the proof of Lemma 6, which give
\[ \operatorname*{ess\,sup}_{t\in[0,T]}\|\nabla^2_{\theta\theta}H(x^\theta_t,p^\theta_t,\theta_t)-\nabla^2_{\theta\theta}H(x^\phi_t,p^\phi_t,\phi_t)\|\le K_L\|\theta-\phi\|_{L^\infty}. \]
Hence we have
\[ \mathbb P[\|I_N(\theta^N)-I_N(\theta^*)\|_{L^\infty}\ge c\lambda_0]\le\mathbb P[\|\theta^N-\theta^*\|_{L^\infty}\ge c\lambda_0/K_L]\le4\exp\Big(-\frac{N\lambda_0^2}{K_1+K_2\lambda_0}\Big). \]
To bound the second term, note that $\|I_N(\theta)\|$ is uniformly bounded, so we can apply the infinite-dimensional Hoeffding inequality ([50], Corollary 2) to obtain
\[ \mathbb P[\|I_N(\theta^*)-I(\theta^*)\|_{L^\infty}\ge c\lambda_0]\le2\exp\Big(-\frac{N\lambda_0^2}{K'_1+K'_2\lambda_0}\Big). \]
Combining the two estimates completes the proof.

Corollary 2. Let $\theta^N$ be as defined in Thm. 6. Then there exist constants $K_1,K_2$ such that
\[ \mathbb P[|J(\theta^N)-J(\theta^*)|\ge s]\le4\exp\Big(-\frac{Ns^2}{K_1+K_2s}\Big),\qquad s\in(0,s_0]. \]

Proof. Note that $J(\theta)=\mathbb E_{\mu_0}\big[\Phi(x^\theta_T,y_0)+\int_0^TL(x^\theta_t,\theta_t)\,dt\big]$. Using Lemma 4, we have
\[
\begin{aligned}
|J(\theta^N)-J(\theta^*)|&\le K_L\|x^{\theta^*}_T-x^{\theta^N}_T\|+K_L\int_0^T\|x^{\theta^*}_t-x^{\theta^N}_t\|+\|\theta^*_t-\theta^N_t\|\,dt\\
&\le K'_L\|\theta^N-\theta^*\|_{L^\infty}.
\end{aligned}
\]
Thus, using Thm. 6, we have
\[ \mathbb P[|J(\theta^N)-J(\theta^*)|\ge s]\le\mathbb P[\|\theta^N-\theta^*\|_{L^\infty}\ge s/K'_L]\le4\exp\Big(-\frac{Ns^2}{K_1+K_2s}\Big). \]

Thm. 6 and Cor. 1 establish a rigorous connection between solutions of the mean-field PMP and its sampled version: when a solution $\theta^*$ of the mean-field PMP is stable, then for large $N$, with high probability, we can find in its neighborhood a random variable $\theta^N$ that is a stationary solution of the sampled PMP (51). If, furthermore, the maximization is non-degenerate (the local concavity assumption in Cor. 1) and unique, then $\theta^N_t$ maximizes the sample Hamiltonian with high probability. Note that this concavity condition is local, in the sense that it only has to be satisfied along the paths associated with $\theta^*$, whereas the strong concavity condition required in Thm. 4 is stronger, as it is global. Of course, in the case where the Hamiltonian is quadratic in $\theta$, i.e. when $f(x,\theta)$ is linear in $\theta$ and the regularization $L(x,\theta)$ is quadratic in $\theta$ (this is still a nonlinear network; see Example 1), all concavity assumptions in the preceding results are satisfied.

The key assumption for the results in this section is the stability condition (cf. Def. 2). In general, this is different from the assumption that $H(x^{\theta^*}_t,p^{\theta^*}_t,\theta^*_t)$ is strongly concave pointwise in $t$. However, one can show, using the triangle inequality and the estimates in Lemma 5, that if $H$ is strongly concave with a sufficiently large concavity parameter $\lambda_0$, then the solution must be stable.
Intuitively, the stability assumption ensures that we can find a small region around $\theta^*$ in which it is isolated from other solutions, and this then allows us to find a nearby solution of the sampled problem that is close to this particular solution. On the other hand, if $DF(\theta^*)$ has a non-trivial kernel, then one cannot expect to construct a $\theta^N$ that is close to $\theta^*$ itself, or to any specific point in the kernel. However, one may still find a $\theta^N$ that is close to the kernel as a whole.

Cor. 2 is a simple consequence of the previous results, and is effectively a statement about the generalization error of the learning model, because it quantifies the difference between the loss function values attained by the population and the empirical risk minimization solutions. We mention an interesting point of the optimal control framework, alluded to earlier, in the context of generalization. Notice that since we have only assumed the controls or weights $\theta$ to be measurable and essentially bounded (and thus possibly very discontinuous) in time, we are always dealing with the case where the number of parameters is infinite. Even in this case, we can derive non-trivial generalization estimates. This is to be contrasted with classical generalization bounds based on measures of complexity [51], where the number of parameters adversely affects generalization. Note that there are many recent works that take on such issues from varying angles, e.g. [52,53,54].

9 Conclusion

In this paper, we introduce the mathematical formulation of the population risk minimization problem of continuous-time deep learning in the context of mean-field optimal control. In this framework, the compositional structure of deep neural networks is explicitly taken into account as the evolution of a dynamical system in time. To analyze this mean-field optimal control problem, we proceed from two parallel but interrelated perspectives, namely the dynamic programming approach and the maximum principle approach. In the former, an infinite-dimensional Hamilton–Jacobi–Bellman (HJB) equation for the optimal loss function values is derived, with the state variable being the joint distribution of input-target pairs. The viscosity solution of the derived HJB equation provides us with a complete characterization of the original population risk minimization problem, giving both the optimal loss function value and an optimal feedback control policy. In the latter approach, we prove a mean-field Pontryagin maximum principle that constitutes necessary conditions for optimality. This can be viewed as a local characterization of optimal trajectories, and indeed we formally show that the PMP can be derived from the HJB equation using the method of characteristics. Using the PMP, we study a sufficient condition under which the solution of the PMP is unique. Lastly, we prove an existence result for sampled PMP solutions near stable solutions of the mean-field PMP. We show how this result connects with generalization errors of deep learning, and we provide a new direction for obtaining generalization estimates in the case of an infinite number of parameters and a finite number of sample points.
Overall, this work establishes a concrete mathematical framework from which novel ways to attack the pertinent problems in practical and theoretical deep learning may be further developed.

As a specific motivation for future work, notice that here we have assumed that the state dynamics f is independent of the distribution law of x_t and depends only on x_t itself and the control θ_t. There are also more complex network structures used in practice which are beyond this assumption. Let us take batch normalization as an example [55]. A batch normalization step involves normalizing inputs using some distribution ν, and then rescaling (and re-centering) the output using trainable variables so that the matching space is recovered. This has been found empirically to have a good regularization effect for training, but theoretical analyses of such effects are limited. In the present setting, we can write a batch normalization operation as

BN_{γ,β}(x, ν) := γ ⊙ (x − ∫ z dν(z)) / √( ∫ (z − ∫ z′ dν(z′))² dν(z) + ε ) + β.

Here γ, β ∈ R^d are trainable parameters, ⊙ denotes element-wise multiplication, and ε is a small constant avoiding division by zero. Suppose we insert a batch normalization operation immediately after the skip connection; the corresponding state dynamics f becomes

f(x, θ) → f(BN_{γ,β}(x, ν), θ).

By incorporating γ, β into the parameter vector θ and taking ν to be the population distribution of the state, the equation of state dynamics takes the following abstract form:

ẋ_t = f̃(x_t, θ, P_{x_t}).   (58)

This is a more general formulation typically considered in the mean-field optimal control literature. The associated objective is very similar to (3) except for the state dynamics:

inf_{θ∈L∞([0,T],Θ)} J(θ) := E_{μ0}[ Φ(x_T, y0) + ∫_0^T L(x_t, θ_t) dt ],  subject to (58).   (59)

The dynamic programming principle and the maximum principle are still applicable in this setting. For instance, the associated HJB equation can be derived as

∂v/∂t + inf_{θ∈Θ} ⟨ ∂_μ v(t,μ)(·) · f̄(·, θ, μ) + L̄(·, θ), μ ⟩ = 0  on [0,T) × P₂(R^{d+l}),
v(T, μ) = ⟨Φ̄(·), μ⟩  on P₂(R^{d+l}),

where f̄(w, θ, μ) := (f̃(x, θ, μ_x), 0). Similarly, we expect the following mean-field PMP (in the lifted space) to hold under suitable conditions:

ẋ*_t = f̃(x*_t, θ*_t, P_{x*_t}),  x*_0 = x0,
ṗ*_t = −∇_x H(x*_t, p*_t, θ*_t, P_{x*_t}),  p*_T = −∇_x Φ(x*_T, y0),
E_{μ0} H(x*_t, p*_t, θ*_t, P_{x*_t}) ≥ E_{μ0} H(x*_t, p*_t, θ, P_{x*_t}),  ∀θ ∈ Θ, a.e. t ∈ [0,T],

where the Hamiltonian function H: R^d × R^d × Θ × P₂(R^d) → R is given by

H(x, p, θ, μ) = p · f̃(x, θ, μ) − L(x, θ).

Thus, batch normalization can be viewed as a general form of mean-field dynamics, and can be treated in a principled way under the mean-field optimal control framework. We leave the study of further implications of this connection for the theoretical understanding of batch normalization to future work.
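To make the distribution-dependence of BN_{γ,β} concrete, here is a minimal sketch (ours, not code from the paper) in which ν is replaced by the empirical distribution of a batch, the usual practical surrogate for the population law P_{x_t}:

```python
import numpy as np

def batch_norm(x, batch, gamma, beta, eps=1e-5):
    # BN_{gamma,beta}(x, nu) with nu taken to be the empirical distribution
    # of `batch` (rows are samples); element-wise, as in the formula above
    mean = batch.mean(axis=0)                   # int z dnu(z)
    var = ((batch - mean) ** 2).mean(axis=0)    # int (z - mean)^2 dnu(z)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# usage: normalizing an input before it enters the dynamics f(x, theta)
rng = np.random.default_rng(1)
batch = rng.normal(loc=2.0, scale=3.0, size=(256, 4))
gamma, beta = np.ones(4), np.zeros(4)
print(batch_norm(batch[0], batch, gamma, beta))  # roughly standardized
```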
Acknowledgments

The work of W. E and J. Han are supported in part by ONR grant N00014-13-1-0338 and the Major Program of NNSFC under grant 91130005. Q. Li is supported by the Agency for Science, Technology and Research, Singapore.

References

1. Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.
2. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
3. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT Press, 2016.
4. Weinan E. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1-11, 2017.
5. Qianxiao Li, Long Chen, Cheng Tai, and Weinan E. Maximum principle based algorithms for deep learning. Journal of Machine Learning Research, 18:1-29, 2018.
6. Qianxiao Li and Shuji Hao. An optimal control approach to deep learning and applications to discrete-weight neural networks. arXiv preprint arXiv:1803.01299, 2018.
7. Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.
8. Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, and David Begert. Multi-level residual networks from dynamical systems view. In Proceedings of the International Conference on Learning Representations, 2018.
9. Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. arXiv preprint arXiv:1709.03698, 2017.
10. Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. arXiv preprint arXiv:1710.10121, 2017.
11. Sho Sonoda and Noboru Murata. Double continuum limit of deep neural networks. In ICML Workshop on Principled Approaches to Deep Learning, 2017.
12. Zhen Li and Zuoqiang Shi. Deep residual learning and PDEs on manifold. arXiv preprint arXiv:1708.05115, 2017.
13. Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018.
14. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
15. Richard Bellman. Dynamic programming. Courier Corporation, 2013.
16. Michael G Crandall and Pierre-Louis Lions. Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society, 277(1):1-42, 1983.
17. Lev S Pontryagin. Mathematical theory of optimal processes. CRC Press, 1987.
18. Arthur Earl Bryson. Applied optimal control: optimization, estimation and control. CRC Press, 1975.
19. Michael Athans and Peter L Falb. Optimal control: an introduction to the theory and its applications. Courier Corporation, 2013.
20. Yann LeCun. A theoretical framework for back-propagation. In The Connectionist Models Summer School, volume 1, pages 21-28, 1988.
21. Jean-Michel Lasry and Pierre-Louis Lions. Mean field games. Japanese Journal of Mathematics, 2(1):229-260, 2007.
22. Minyi Huang, Roland P Malhamé, and Peter E Caines. Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information & Systems, 6(3):221-252, 2006.
23. Olivier Guéant, Jean-Michel Lasry, and Pierre-Louis Lions. Mean field games and applications. In Paris-Princeton Lectures on Mathematical Finance 2010, pages 205-266. Springer, 2011.
24. Alain Bensoussan, Jens Frehse, and Phillip Yam. Mean field games and mean field type control theory, volume 101. Springer, 2013.
25. Mathieu Lauriere and Olivier Pironneau. Dynamic programming for mean-field type control. Comptes Rendus Mathematique, 352(9):707-713, 2014.
26. Huyên Pham and Xiaoli Wei. Dynamic programming for optimal control of stochastic McKean-Vlasov dynamics. SIAM Journal on Control and Optimization, 55(2):1069-1101, 2017.
27. Huyên Pham and Xiaoli Wei. Bellman equation and viscosity solutions for mean-field stochastic control problem. ESAIM: Control, Optimisation and Calculus of Variations, 24(1):437-461, 2018.
28. Marco Caponigro, Massimo Fornasier, Benedetto Piccoli, and Emmanuel Trélat. Sparse stabilization and control of alignment models. Mathematical Models and Methods in Applied Sciences, 25(03):521-564, 2015.
29. Massimo Fornasier and Francesco Solombrino. Mean-field optimal control. ESAIM: Control, Optimisation and Calculus of Variations, 20(4):1123-1152, 2014.
30. Mattia Bongini, Massimo Fornasier, Francesco Rossi, and Francesco Solombrino. Mean-field Pontryagin maximum principle. Journal of Optimization Theory and Applications, 175(1):1-38, 2017.
31. Alain-Sol Sznitman. Topics in propagation of chaos. In Ecole d'été de probabilités de Saint-Flour XIX, 1989, pages 165-251. Springer, 1991.
32. Daniel Andersson and Boualem Djehiche. A maximum principle for SDEs of mean-field type. Applied Mathematics & Optimization, 63(3):341-356, 2011.
33. Rainer Buckdahn, Boualem Djehiche, and Juan Li. A general stochastic maximum principle for SDEs of mean-field type. Applied Mathematics & Optimization, 64(2):197-216, 2011.
34. René Carmona and François Delarue. Forward-backward stochastic differential equations and controlled McKean-Vlasov dynamics. The Annals of Probability, 43(5):2647-2700, 2015.
35. Pierre-Louis Lions. Cours au Collège de France: Théorie des jeux à champs moyens, 2012.
36. Pierre Cardaliaguet. Notes on mean field games. Technical report, 2010.
37. Michael G Crandall and Pierre-Louis Lions. Hamilton-Jacobi equations in infinite dimensions I. Uniqueness of viscosity solutions. Journal of Functional Analysis, 62(3):379-396, 1985.
38. Michael G Crandall and Pierre-Louis Lions. Hamilton-Jacobi equations in infinite dimensions. II. Existence of viscosity solutions. Journal of Functional Analysis, 65(3):368-405, 1986.
39. Michael G Crandall and Pierre-Louis Lions. Hamilton-Jacobi equations in infinite dimensions, III. Journal of Functional Analysis, 68(2):214-247, 1986.
40. Charles Stegall. Optimization of functions on certain subsets of Banach spaces. Mathematische Annalen, 236(2):171-176, 1978.
41. Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2010.
42. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 513-520, 2011.
43. Fei-Fei Li, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611, 2006.
44. Vladimir G Boltyanskii, Revaz V Gamkrelidze, and Lev S Pontryagin. The theory of optimal processes. I. The maximum principle. Technical report, TRW Space Technology Labs, Los Angeles, California, 1960.
45. Alberto Bressan and Benedetto Piccoli. Introduction to the mathematical theory of control, volume 2. American Institute of Mathematical Sciences, Springfield, 2007.
46. Daniel Liberzon. Calculus of variations and optimal control theory: a concise introduction. Princeton University Press, 2012.
47. Lawrence C. Evans. Partial differential equations. Graduate Studies in Mathematics. American Mathematical Society, 1998.
48. Walter G Kelley and Allan C Peterson. The theory of differential equations: classical and qualitative. Springer Science & Business Media, 2010.
49. HB Keller. Approximation methods for nonlinear problems with application to two-point boundary value problems. Mathematics of Computation, 29(130):464-474, 1975.
50. IF Pinelis and AI Sakhanenko. Remarks on inequalities for large deviation probabilities. Theory of Probability & Its Applications, 30(1):143-148, 1986.
51. Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The elements of statistical learning, volume 1. Springer Series in Statistics, New York, 2001.
52. Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5949-5958, 2017.
53. Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008, 2017.
54. Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. arXiv preprint arXiv:1802.05296, 2018.
55. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456, 2015.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "YXr-F0cy4W2",
"year": null,
"venue": "MSML 2021",
"pdf_link": "https://proceedings.mlr.press/v145/e22b/e22b.pdf",
"forum_link": "https://openreview.net/forum?id=YXr-F0cy4W2",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the emergence of simplex symmetry in the final and penultimate layers of neural network classifiers",
"authors": [
"Weinan E",
"Stephan Wojtowytsch"
],
"abstract": "A recent numerical study observed that neural network classifiers enjoy a large degree of symmetry in the penultimate layer. Namely, if $h(x) = Af(x) +b$ where $A$ is a linear map and $f$ is the ou...",
"keywords": [],
"raw_extracted_content": "Proceedings of Machine Learning Research vol 107:270–290, 2021 2nd Annual Conference on Mathematical and Scientific Machine Learning\nOn the emergence of simplex symmetry in the final and penultimate\nlayers of neural network classifiers\nWeinan E WEINAN @MATH .PRINCETON .EDU\nProgram for Applied and Computational Mathematics\nPrinceton University\nPrinceton, NJ 08544\nStephan Wojtowytsch STEPHANW @PRINCETON .EDU\nProgram for Applied and Computational Mathematics\nPrinceton University\nPrinceton, NJ 08544\nEditors: Joan Bruna, Jan S Hesthaven, Lenka Zdeborova\nAbstract\nA recent numerical study observed that neural network classifiers enjoy a large degree of symmetry\nin the penultimate layer. Namely, if h(x) =Af(x) +bwhereAis a linear map and fis the output\nof the penultimate layer of the network (after activation), then all data points xi;1;:::;x i;N iin a\nclassCiare mapped to a single point yibyfand the points yiare located at the vertices of a regular\nk\u00001-dimensional standard simplex in a high-dimensional Euclidean space.\nWe explain this observation analytically in toy models for highly expressive deep neural net-\nworks. In complementary examples, we demonstrate rigorously that even the final output of the\nclassifierhis not uniform over data samples from a class Ciifhis a shallow network (or if the\ndeeper layers do not bring the data samples into a convenient geometric configuration).\nKeywords: Classification problem, deep learning, neural collapse, cross entropy, geometry within\nlayers, simplex symmetry\n1. Introduction\nA recent empirical study Papyan et al. (2020) took a first step towards investigating the inner ge-\nometry of neural networks close to the output layer. In classification problems, the authors found\nthat the data in the final and penultimate layers enjoy a high degree of symmetry. Namely, a neural\nnetwork function hL:Rd!RkwithLlayers can be understood as a composition\nhL(x) =AfL(x) +b (1)\nwherefL:Rd!Rmis (the composition of a componentwise nonlinearity with) a neural network\nwithL\u00001layers,b2RkandA:Rm!Rkis linear. In applications where hLwas trained by\nstochastic gradient descent to minimize softmax-crossentropy loss to distinguish elements in various\nclassesC1;:::;Ck, the authors observed that the following became approximately true in the long\ntime limit.\n•fLmaps all elements in a class Cito a single point yi.\n• The distance between the centers of mass of different classes in the penultimate layer kyi\u0000yjk\ndoes not depend on i6=j.\n© 2021 W. E & S. Wojtowytsch.\nSIMPLEX SYMMETRY IN NEURAL NETWORK CLASSIFIERS\n• LetM=1\nkPk\ni=1yibe the center of mass of the data distribution in the penultimate center\n(normalizing the weight of data classes). Then the angle between yi\u0000Mandyj\u0000Mdoes\nnot depend on i6=j.\n• Thei-th row ofAis parallel to yi\u0000M.\nIn less precise terms, hLmaps the classes Cito the vertices of a regular standard simplex in\na high-dimensional space. This phenomenon is referred to as ‘neural collapse’ in Papyan et al.\n(2020). In this note, we consider the toy model where fLis merely a bounded measurable function\nand prove that under certain assumptions such simplex geometries are optimal. An investigation\nalong the same lines has been launched separately in Mixon et al. (2020).\nConversely, we show that even the output hL(Ci)of a shallow neural network hLover a data\nclassCidoes not approach a single value ziwhen the parameters of hLare trained by continuous\ntime gradient descent. 
We make the following observations.

1. Overparametrized networks can fit random labels at data points Cooper (2018) and can be efficiently optimized for this purpose in certain scaling regimes, see e.g. Du et al. (2018a,b); E et al. (2020). The use of the class L∞(P;R^m) := (L∞(P))^m as a proxy for very expressive deep neural networks thus can be justified heuristically from the static perspective of energy minimization (but not necessarily from the dynamical perspective of training algorithms).

In practice, the data distribution P is estimated on a finite set of sample points {x_1,...,x_N} and an empirical distribution P_N = (1/N) Σ_{i=1}^N δ_{x_i}. A function f_L ∈ L∞(P;R^m) is determined by its values at the points x_1,...,x_N. A class of sufficiently complex neural networks which can fit any given set of outputs {y_1,...,y_N} for inputs {x_1,...,x_N} coincides with L^p(P;R^m) for any 1 ≤ p ≤ ∞. The same is true for many other function models.

If P_N = (1/N) Σ_{i=1}^N δ_{x_i} or, more generally, if all classes C_1,...,C_k have a positive distance to each other, a function f ∈ L^p(P;R^m) which is constant on every class can be extended to a C∞-function on R^d. Thus in realistic settings, all functions below can be taken to be fairly regular.

2. As the softmax cross-entropy functional does not have minimizers in sufficiently expressive scaling-invariant function classes, we need to consider norm-bounded classes.

In the hypothesis class given by the ball of radius R in L∞(P;R^k), the optimal map h satisfies h(x) = z_i for all x in a data class C_i, and the values z_i form the vertices of a regular simplex. More precisely, the statement is valid under the constraint ‖h(x)‖_{ℓ^p} ≤ R for all p ∈ (1,∞), but the precise location of the vertices depends on p. We refer to this as final layer geometry.

If h: R^d → R^k is given by h(x) = Af(x) for f ∈ L∞(P;R^m) and a linear map A: R^m → R^k, the following holds: If ‖A‖_{L(ℓ²,ℓ²)} ≤ 1 and ‖f(x)‖_{ℓ²} ≤ R for all x ∈ R^d, then any energy minimizer satisfies f(x) = y_i for all x ∈ C_i, where the outputs y_i form the vertices of a regular standard simplex in a high-dimensional ambient space. We refer to this as penultimate layer geometry. We note that similar results were obtained in a different framework in Lu and Steinerberger (2020).

3. Considerations on the final layer geometry are generally independent of the choice of norm on R^k within the class of ℓ^p-norms, while the penultimate layer geometry appears to depend specifically on the use of the Euclidean norm. While the coordinate-wise application of a one-dimensional activation function is not hugely compatible with Euclidean geometry (or at least no more compatible than with ℓ^p-geometry for any p ∈ [1,∞]), the transition from the penultimate layer to the final layer is described by a single affine map y ↦ Ay + b. If A and b are initialized from a distribution compatible with Euclidean geometry (e.g. a rotation-invariant Gaussian) and optimized by an algorithm such as gradient descent which is based on the Euclidean inner product, then the use of Euclidean geometry for (A, b) is well justified. In deeper layers, the significance of Euclidean geometry becomes more questionable.
Even for the map f: R^d → R^m, it is unclear whether the Euclidean norm captures the constraints on f well.

4. If h(x) = Σ_{i=1}^m a_i σ(w_i^T x + b_i) is a shallow neural network classifier and the weights (a_i, w_i, b_i) are optimized by gradient descent, then in general h does not converge to a classifier which is constant on different data classes (although the hypothesis class contains functions with arbitrarily low risk which are constant on the different classes C_i). This is established in different geometries:

(a) In the first case, σ is the ReLU activation function and the classes are linearly separable. Under certain conditions, gradient descent approaches a maximum margin classifier, which can be a linear function and thus generally non-constant over the data classes.

(b) In the second case, σ is constant for large arguments and there are three data points x_1, x_2, x_3 on a line where x_1, x_3 belong to the same class, but the middle point x_2 belongs to a different class. Then the values of h at x_1, x_2, x_3 cannot be chosen independently due to the linear structure of the first layer, and the heuristic behind the toy model does not apply.

Note that h is of the form h = Af, but f(x) = σ(Wx) is not sufficiently expressive for the analysis of the penultimate layer to apply.

The theoretical analysis raises further questions. As the expressivity of the hypothesis class and the ability to set values on the training set with little interaction between different point evaluations seem crucial to the 'neural collapse' phenomenon, we must question whether this simple geometric configuration is in fact desirable, or merely the optimal configuration in a hypothesis class which is too large to allow any statistical generalization bounds. Such concerns were already raised in Elad et al. (2020). While the latter possibility is suggested by the theoretical analysis, it should be emphasized that in the numerical experiments in Papyan et al. (2020) solutions with good generalization properties are found. This compatibility could be explained by considering a hypothesis class which is not as expressive as L∞(P;R^m), but contains a function which attains a desired set of values on a realistic data set.

It should be noted that the final layer results apply to any sufficiently expressive function class, not just neural networks. The results for the penultimate layer apply to classes of classifiers which are compositions of a linear function and a function in a very expressive function class. In both cases, we consider (norm-constrained) energy minimizers, not training dynamics. If the norm constraints are meaningful for a function model and an optimization algorithm can find the minimizers, the analysis applies in the long time limit, but the dynamics would certainly depend on the precise function model. This coincides with the situation considered by Papyan et al. (2020), in which the cross-entropy is close to zero after significant training.

If h = Af and f is not sufficiently expressive (as in two-layer neural networks), we observe that classifier collapse does not occur, even in the final layer.
Whether there are further causes driving classifier collapse in deep neural networks remains to be seen.

We believe that further investigation in this direction is needed to understand the following: Is neural collapse observed on random data sets or real data sets with randomly permuted labels? Does it occur also on test data or just training data? Is neural collapse observed for ReLU activation functions, or only for activation functions which tend to a limit at positive and negative infinity? Do the outputs over different classes y_i attain a regular simplex configuration also if the weights of the different data classes are vastly different? Is neural collapse observed if a parameter optimization algorithm is used which does not respect Euclidean geometry (e.g. an algorithm with coordinatewise learning rates such as ADAM)? The question when neural collapse occurs and whether it helps generalization in deep learning remains fairly open.

The article is structured as follows. In Section 2, we rigorously introduce the problem we will be studying and obtain some first properties. In Sections 3 and 4, we study a toy model for the geometry of the output layer and penultimate layer of a neural network classifier respectively. In Section 5, we present analytic examples in simple situations where neural network classifiers behave markedly differently and where the toy model analysis does not apply.

1.1. Notation

We consider classifiers h: R^d → R^k in a hypothesis class H. Often, h will be assumed to be a general function on a finite set with norm-bounded output, or the composition of such a function f: R^d → R^m and a linear map A: R^m → R^k for some m ≥ 1. Variables in R^d, R^m and R^k are denoted by x, y and z respectively.

2. Preliminaries

2.1. Set-up

A classification problem is made up of the following ingredients:

1. A data distribution, i.e. a probability measure P on R^d.
2. A label function, i.e. a P-measurable function ξ: R^d → {e_1,...,e_k} ⊂ R^k. We refer to the sets C_i = ξ^{−1}({e_i}) as the classes.
3. A hypothesis class, i.e. a class H of functions h: R^d → R^k for d ≫ 1 and k ≥ 2.
4. A loss function ℓ: R^k × R^k → [0,∞).

We always assume that H ⊆ L¹(P;R^k) and often even H ⊆ L∞(P;R^k). These ingredients are combined in the risk functional

R: H → [0,∞),  R(h) = ∫_{R^d} ℓ(h(x), ξ_x) P(dx),   (2)

which is approximated by the empirical risk functional

R̂_n(h) = (1/n) Σ_{i=1}^n ℓ(h(x_i), ξ_i)

where the x_i are samples drawn from the distribution P and ξ_i = ξ_{x_i}. Since we can write

R̂_n(h) = ∫_{R^d} ℓ(h(x), ξ_x) P_n(dx),  P_n = (1/n) Σ_{i=1}^n δ_{x_i},

we do not differentiate between empirical risk and (population) risk in this article. This allows us to organically incorporate that all results are independent of the number of data points. We focus on the softmax cross entropy risk functional associated to the loss function

ℓ(h, y) = −log( exp(h·y) / Σ_{i=1}^k exp(h·e_i) ).   (3)

This loss function allows the following probabilistic interpretation: For a given classifier h ∈ H and data point x ∈ R^d, the vector π with entries

π_i(x) := exp(h(x)·e_i) / Σ_{j=1}^k exp(h(x)·e_j)

is a counting density on the set of labels {1,...,k}, depending on the input x. The function

Φ: R^k → R^k,  Φ(h) = ( exp(h·e_1)/Σ_{i=1}^k exp(h·e_i), ..., exp(h·e_k)/Σ_{i=1}^k exp(h·e_i) ),

which converts a k-dimensional vector into a counting density, is referred to as the softmax function since it approximates the maximum coordinate function of h for large inputs. The cross-entropy (Kullback-Leibler divergence) of this distribution with respect to the distribution π̄(x) which gives the correct label with probability 1 is precisely

−Σ_{j=1}^k log( π_j(x)/π̄_j(x) ) π̄_j(x) = −log( π_{i(x)}(x)/1 )·1 = −log( exp(h(x)·ξ_x) / Σ_{i=1}^k exp(h(x)·e_i) )

since π̄_j = δ_{j,i(x)} and 0·log(1) = 0 in this case by approximation. The risk functional thus is the average integral of the pointwise cross-entropy of the softmax counting densities with respect to the true underlying distribution.
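In code, the loss (3) and the softmax map Φ read as follows (a short sketch of ours, not from the paper); the scaling loop at the end previews the issue with minimizers discussed next:

```python
import numpy as np

def softmax(h):
    # the map Phi above; subtracting the max is a standard stabilization
    # trick and does not change the output
    e = np.exp(h - h.max())
    return e / e.sum()

def loss(h, label):
    # softmax cross-entropy from (3): -log(exp(h . e_label) / sum_i exp(h . e_i))
    return -np.log(softmax(h)[label])

h = np.array([2.0, -1.0, 0.5])
print(softmax(h), loss(h, 0))
# scaling h up drives the loss of a correctly classified point to 0,
# which is why minimizers only exist under a norm constraint:
for lam in [1, 10, 100]:
    print(lam, loss(lam * h, 0))
```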
Note the following: ℓ > 0, but if h is such that h(x)·ξ_x > max_{e_i ≠ ξ_x} h(x)·e_i for P-almost every x, then

lim_{λ→∞} R(λh) = lim_{λ→∞} −∫_{R^d} log( exp(λ h(x)·ξ_x) / Σ_{i=1}^k exp(λ h(x)·e_i) ) P(dx) = 0.

Thus the cross-entropy functional does not have minimizers in suitably expressive function classes which are cones (i.e. f ∈ H, λ > 0 ⇒ λf ∈ H). So to obtain meaningful results by energy minimization, we must consider

1. a dynamical argument concerning a specific optimization algorithm, or
2. a restricted hypothesis class with meaningful norm bounds, or
3. a higher order expansion of the risk.

We follow the first line of inquiry for shallow neural networks in Section 5 and the second line of inquiry for toy models for deep networks in Sections 3 and 4.

2.2. Convexity of the loss function

For the following, we note that the softmax cross entropy loss function has the following convexity property.

Lemma 1 The function

Φ_j: R^k → R,  Φ_j(z) = −log( exp(z_j) / Σ_{i=1}^k exp(z_i) ) = log( Σ_{i=1}^k exp(z_i) ) − z_j

is convex for any 1 ≤ j ≤ k and strictly convex on hyperplanes H_α of the form

H_α = { z ∈ R^k : Σ_{j=1}^k z_j = α }.

For the sake of completeness, we provide a proof in the Appendix. Since Φ_j(z + λ(1,...,1)) = Φ_j(z) for all λ ∈ R, we note that Φ_j is not strictly convex on the whole space R^k.

3. Heuristic geometry: final layer

3.1. Collapse to a point

In this section, we argue that the output h(C_i) of the classifier should be a single point for all classes C_i, i = 1,...,k, if the hypothesis class is sufficiently expressive. We will discuss the penultimate layer below.

Lemma 2 Let h ∈ H and set

z_i := (1/|C_i|) ∫_{C_i} h(x′) P(dx′),  h̄(x) = z_i for all x ∈ C_i.

Then R(h̄) ≤ R(h), and equality holds if and only if there exists a function λ ∈ L∞(P) such that h − h̄ = λ(1,...,1) P-almost everywhere.

The reasoning behind the Lemma is that

∫_{C_i} Φ_i(h(x)) P(dx) ≈ ∫_{C_i} [ Φ_i(z_i) + ∇Φ_i(z_i)·(h(x) − z_i) + (1/2)(h(x) − z_i)^T D²Φ_i(z_i)(h(x) − z_i) ] P(dx)
= ∫_{C_i} Φ_i(z_i) P(dx) + ∇Φ_i(z_i)·∫_{C_i} (h(x) − z_i) P(dx) + (1/2) ∫_{C_i} (h(x) − z_i)^T D²Φ_i(z_i)(h(x) − z_i) P(dx)
≥ ∫_{C_i} Φ_i(z_i) P(dx)

since the first order term vanishes. A summation over i establishes the result.
A rigorous proof using Jensen's inequality can be found in the appendix.

Thus if a class C_j is mapped to a set h(C_j) ⊆ R^k with a prescribed center of mass, it is energetically favorable to reduce the variance to the point that h(C_j) is a single point. Whether or not this is attainable depends primarily on the hypothesis class H, but a very expressive class like deep neural networks is likely to allow this collapse to a single point.

Corollary 3 If H = L∞(P;V) is the class of bounded P-measurable functions which take values in a compact convex set V ⊆ R^k, then a minimizer h of R in H can be taken to map the class C_i to a single point z_i ∈ V for all i = 1,...,k, and all other minimizers differ from h only in the direction (1,...,1).

3.2. Simplex configuration

In this section, we discuss the emergence of the simplex configuration under the assumption that every class gets mapped to a single point z_i ∈ R^k, or equivalently that each class consists of a single data point. Again, we consider the last layer problem: Assume that

• X = {x_1,...,x_k},
• H is the class of functions from X to the Euclidean ball B_R(0) in R^k.

Let P be a probability measure on X and p_i := P({x_i}). We wish to solve the minimization problem h* ∈ argmin_{h∈H} R(h) where

R(h) = ∫_X −log( exp(h(x)·ξ_x) / Σ_{j=1}^k exp(h(x)·e_j) ) P(dx) = −Σ_{i=1}^k p_i log( exp(h(x_i)·e_i) / Σ_{j=1}^k exp(h(x_i)·e_j) ).

Due to our choice of hypothesis class, there is no interaction between h(x_i) and h(x_j), so we can minimize the sum term by term:

z_i := h(x_i) ∈ argmin_{z∈B_R(0)} ( −log( exp(z·e_i) / Σ_{j=1}^k exp(z·e_j) ) ) = argmin_{z∈B_R(0)} Φ_i(z),

where Φ_i(z) = log( Σ_{j=1}^k exp(z_j) ) − z_i is as in Lemma 1.

Lemma 4 For every i there exists a unique minimizer z_i of Φ_i in B_R(0), and z_i = α e_i + β Σ_{j≠i} e_j for α, β ∈ R which do not depend on i.

Since Φ_i(z + λ(1,...,1)) = Φ_i(z) for all λ ∈ R, the same result holds for the ball B_R(λ(1,...,1)) with any λ ∈ R. We can determine the minimizers by exploiting the relationships

α² + (k−1)β² = R²,  α + (k−1)β = 0,

which are obtained from the Lagrange-multiplier equation (9) in the proof of Lemma 4. The equations reduce to

α = (k−1) √( (R² − α²)/(k−1) ) = √( (k−1)(R² − α²) )  ⇒  α² = (k−1)(R² − α²)

and ultimately

α² = ((k−1)/k) R²  ⇒  α = √((k−1)/k) R,  β = −α/(k−1) = −R/√(k(k−1)).   (4)

Remark 5 Lemma 4 remains true when B_R(0) is the ball of radius R > 0 with respect to an ℓ^p-norm on R^k for 1 < p < ∞ (with different values for α and β) – see the appendix for further details.

Corollary 6 If H is the unit ball in L∞(P;R^k), where R^k is equipped with the ℓ^p-norm for 1 < p < ∞, then any minimizer h of R in H satisfies that h(C_i) is a single point z_i for all i = 1,...,k, and the points z_i form the vertices of a regular standard simplex.
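The closed form (4) can be cross-checked numerically; the following is a small sketch of ours, assuming SciPy is available (tolerances are indicative only):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize Phi_i(z) = log(sum_j exp(z_j)) - z_i over the sphere ||z||_2 = R
# and compare with z_i = alpha*e_i + beta*sum_{j != i} e_j from (4).
k, R, i = 4, 2.0, 0

def phi(z):
    return np.log(np.exp(z).sum()) - z[i]

res = minimize(phi, x0=np.full(k, R / np.sqrt(k)),
               constraints={"type": "eq", "fun": lambda z: z @ z - R**2})

alpha = np.sqrt((k - 1) / k) * R
beta = -R / np.sqrt(k * (k - 1))
z_star = np.full(k, beta); z_star[i] = alpha
print(res.x)                          # numerical minimizer on the sphere
print(z_star)                         # closed-form prediction from (4)
print(np.abs(res.x - z_star).max())   # should be small (solver tolerance)
```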
Remark 7 A major simplification in our analysis was the restriction to one-point classes and general functions on the finite collection of points, or more generally to bounded P-measurable functions. In other hypothesis classes, the point values h(x_i) and h(x_j) cannot be chosen independently. It is therefore no longer possible to minimize all terms in the sum individually, and trade-offs are expected. In particular, while our analysis was independent of the weight p_i = P(C_i) of the individual classes, these are expected to influence trade-offs in real applications.

Nevertheless, we record that simplex configurations are favored for hypothesis classes H with the following two properties:

1. H is expressive enough to collapse classes to single points and to choose the values on different classes almost independently, and
2. functions in H respect the geometry of R^k equipped with an ℓ^p-norm in a suitable manner.

4. Heuristic geometry: penultimate layer

Above, we obtained rigorous results for the final layer geometry under heuristic assumptions. In this section, we consider a hypothesis class H in which functions can be decomposed as

h_{f,A,b}(x) = A f(x) + b  where f: R^d → R^m, A: R^m → R^k, b ∈ R^k,

and we are interested in the geometry of f and A. Typically, we imagine the case that m ≫ k.

4.1. Collapse to a point

We have given a heuristic proof above that it is energetically favorable to contract h(C_i) to a single point z_i ∈ R^k under certain conditions. Since A: R^m → R^k has a non-trivial kernel for m > k, this is a weaker statement than claiming that f maps C_i to a single point y_i ∈ R^m. We note the following: V_i = (A·+b)^{−1}(z_i) is an (m−k)-dimensional affine subspace of R^m. In particular, a strictly convex norm (e.g. an ℓ^p-norm for 1 < p < ∞) has a unique minimum y_i ∈ V_i. Thus if we subscribe to the idea that f is constrained by an ℓ^p-norm, it is favorable for f to collapse C_i to a single point y_i ∈ R^m.

Heuristically, this situation arises either if it is more expensive to increase the norm of f than to change its direction, or if (A, b) evolve during training and it is desirable to bring f(x) towards the minimum norm element of (A·+b)^{−1}(z_i) to increase the stability of training. The first consideration applies when A, b are fixed, while the second relies on the variability of (A, b). Their relative importance could therefore be assessed numerically by initializing the final layer variables in a simplex configuration and making them non-trainable.

If σ is a bounded activation function, the direction of the final layer output depends on the coefficients of all layers in a complicated fashion, while its magnitude mostly depends on the final layer coefficients. We can imagine gradient flows as continuous time versions of the minimizing movements scheme

θ_{n+1} ∈ argmin_θ [ (1/(2η)) ‖θ_n − θ‖² + R(h(θ, ·)) ]

where h(θ, ·) is a parameterized function model. Using the unweighted Euclidean norm for the gradient flow, we allow the same budget to adjust final layer and deep layer coefficients. It may therefore be easier to adjust the direction of the output than the norm. For ReLU activation on the other hand, the magnitudes of the coefficients in all layers combine to an output in a multiplicative fashion. It may well be that neural collapse is more likely to occur for activation functions which tend to a finite limit at positive and negative infinity.

In Section 5.2, we present examples which demonstrate that if all data points are not collapsed to a single point in the penultimate layer, they may not collapse to a single point in the final layer either when the weights of a neural network are trained by gradient descent. This is established in two different geometries for different activation functions.
4.2. Simplex configuration

We showed above that any ℓ^p-geometry leads to simplex configurations in the last layer for certain toy models. When considering the geometry of the penultimate layer, we specifically consider ℓ²-geometry. This is justified for A, b since the parameters are typically initialized according to a normal distribution (which is invariant under general rotations) and optimized by (stochastic) gradient descent, an algorithm based on the Euclidean inner product. For compatibility purposes, also the output of the preceding layers f should be governed by Euclidean geometry.

Again, as a toy model we consider the case of one-point classes. To simplify the problem, we furthermore suppress the bias vector of the last layer. Let

1. X = {x_1,...,x_k} ⊂ R^d,
2. f: X → B_R(0) ⊆ R^m, and
3. A: R^m → R^k linear.

As before, B_R(0) denotes the Euclidean ball of radius R > 0 centered at the origin in R^m. We denote h(x) = Af(x), y_i := f(x_i) ∈ R^m and z_i := h(x_i) ∈ R^k. As we suppressed the bias of the last layer, we could normalize the center of mass in the penultimate layer to be (1/k) Σ_{i=1}^k y_i = 0. Instead, we make the (weaker) assumption that y_i ∈ B_R(0) for some R > 0 and all i = 1,...,k.

We assume that the outputs h(x_i) are in the optimal positions in the last layer and show that if A has minimal norm, also the outputs f(x_i) in the penultimate layer are located at the vertices of a regular standard simplex. Denote by

‖A‖_{L(ℓ²,ℓ²)} = max_{‖x‖_{ℓ²}≤1} ‖Ax‖_{ℓ²} / ‖x‖_{ℓ²}

the operator norm of the linear map A with respect to the Euclidean norm on both domain and range.

Lemma 8 Let m ≥ k−1, and let y_i ∈ B_R(0) ⊆ R^m and A: R^m → R^k be linear such that Ay_i = z_i, where the z_i are the vertices of the regular standard simplex described in Lemma 4 and (4). Then

1. the center of mass of the outputs y_i of f is (1/k) Σ_{i=1}^k y_i = 0,
2. ‖A‖_{L(ℓ²,ℓ²)} ≥ 1, and
3. ‖A‖_{L(ℓ²,ℓ²)} = 1 if and only if
(a) A is an isometric embedding of the (k−1)-dimensional subspace spanned by {y_1,...,y_k} into R^k, and
(b) the y_i are vertices of a regular standard simplex with the same side lengths.

The proof is given in the appendix. We conclude the following.

Corollary 9 For any m ≥ k−1, consider the hypothesis class

H = { h: R^d → R^k | h = Af where f: R^d → R^m is P-measurable, ‖f(x)‖_{ℓ²} ≤ R P-a.e., A: R^m → R^k is linear, ‖A‖_{L(ℓ²,ℓ²)} ≤ 1 }.

Then a minimizer h ∈ H of R satisfies h = Af where

1. there exist values y_i ∈ R^m such that f(x) = y_i for almost every x ∈ C_i,
2. the points y_i are located at the vertices of a regular (k−1)-dimensional standard simplex in R^m,
3. the center of mass of the points y_i (with respect to the uniform distribution) is at the origin, and
4. A is an isometric embedding of the (k−1)-dimensional space spanned by {y_1,...,y_k} into R^k.

Remark 10 The restriction to the Euclidean case is because in Euclidean geometry, any (k−1)-dimensional subspace of R^d is equipped with the Euclidean norm in a natural way. For other ℓ^p-spaces, the restriction of the ℓ^p-norm is not a norm of ℓ^q-type and we cannot apply Lemma 4.

Thus, we conclude that a simplex geometry is desirable also in the penultimate layer of a function h(x) = Af(x) if

1. the function class F in which f is chosen and the linear matrix class in which A is chosen respect the Euclidean geometry of R^m,
2. F is sufficiently expressive to collapse all data points in the class C_i to a single point y_i, and
3. F is so expressive that y_i and y_j can be chosen mostly independently.
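The following sketch (ours, not from the paper) instantiates Lemma 8: it realizes the optimal last-layer vertices z_i via an isometry A from R^m with ‖A‖_{L(ℓ²,ℓ²)} = 1 and confirms the properties listed in Corollary 9.

```python
import numpy as np

k, m, R = 4, 10, 1.0
alpha = np.sqrt((k - 1) / k) * R
beta = -R / np.sqrt(k * (k - 1))
Z = np.full((k, k), beta) + (alpha - beta) * np.eye(k)  # rows: z_i from (4)

# orthonormal basis Q of the hyperplane {z : sum(z) = 0} containing the z_i
ones = np.ones((k, 1)) / np.sqrt(k)
Q, _ = np.linalg.qr(np.eye(k) - ones @ ones.T)
Q = Q[:, :k - 1]  # k x (k-1), orthonormal columns spanning the hyperplane

# penultimate-layer representatives: coordinates of z_i in the basis Q,
# padded with zeros to live in R^m (an isometric identification)
Y = np.hstack([Z @ Q, np.zeros((k, m - (k - 1)))])  # rows: y_i in R^m
A = np.hstack([Q, np.zeros((k, m - (k - 1)))])      # k x m, operator norm 1

print(np.allclose(A @ Y.T, Z.T))                  # A y_i = z_i
print(np.allclose(np.linalg.norm(Y, axis=1), R))  # ||y_i|| = R
print(np.isclose(np.linalg.norm(A, 2), 1.0))      # ||A|| = 1
print(np.allclose(Y.sum(axis=0), 0.0))            # center of mass at origin
```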
5. Caveats: Binary classification using two-layer neural networks

In this section we consider simple neural network classifier models and data sets on which we can show that the classes are not collapsed into single points when the model parameters are trained by gradient descent, despite the fact that the function class is sufficiently expressive. This is intended as a complementary illustration that the heuristic considerations of Sections 3 and 4 may or may not be valid, depending on factors which are yet to be understood.

Deep neural networks with many nonlinearities can be a lot more flexible than shallow neural networks, and the intuition we built up above does not quite apply here. However, we emphasize that a deep neural network h can be decomposed as h = g∘f where f: R^d → R^k is a deep neural network and g: R^k → R is a shallow neural network. All results should therefore be considered also valid in deep classification models where only the outermost two layers are trained. This is a more realistic assumption in applications where large pretrained models are used to preprocess data and only the final layers are trained for a specific new task. Similarly, we note that this indicates that if data is non-collapsed two layers before the output, then it may not collapse in the output layer either.

The examples we consider concern binary classification, i.e. all functions take values in R rather than a higher-dimensional space. The label function x ↦ ξ_x takes values in {−1, 1} instead of the set of basis vectors. For the sake of convenience, the data below are assumed to be one-dimensional, but similar results are expected to hold when data in a high-dimensional space is either concentrated on a line or classification only depends on the projection to a line.

5.1. Two-layer ReLU-networks in the mean field scaling

Consider the mean field scaling of shallow neural networks, where a network function is described as

f(x) = (1/m) Σ_{i=1}^m a_i σ(w_i^T x + b_i)  rather than  f(x) = Σ_{i=1}^m a_i σ(w_i^T x + b_i).

In this regime, it is easy to take the infinite width limit

f(x) = ∫_{R^k×R^d×R} a σ(w^T x + b) π(da ⊗ dw ⊗ db)   (5)

with general weight distributions π on R^{k+d+1}. We denote the functions represented as in (5) by h_π. Finite neural networks are a special case in these considerations with distribution π = (1/m) Σ_{i=1}^m δ_{(a_i,w_i,b_i)}. We recall the following results.

Proposition 11 (Chizat and Bach (2018)) All weights (a_i,w_i,b_i)_{i=1}^m evolve by the gradient flow of

(a_i,w_i,b_i)_{i=1}^m ↦ R( (1/m) Σ_{i=1}^m a_i σ(w_i^T x + b_i) )

in (R^{k+d+1})^m if and only if the empirical distribution π = (1/m) Σ_{i=1}^m δ_{(a_i,w_i,b_i)} evolves by the Wasserstein gradient flow of

π ↦ R(h_π)   (6)

(up to time rescaling).

Consider specifically σ(z) = max{z, 0} and k = 1 with the risk functional

R(h) = −∫_R log( exp(h(x) ξ_x) / (exp(h(x)) + exp(−h(x))) ) P(dx).

The following result applies specifically to the Wasserstein gradient flow of certain continuous distributions, which can be approximated by finite sets of weights.

Proposition 12 (Chizat and Bach (2020)) Assume that π_0 is such that |a|² ≤ |w|² + |b|² almost surely and such that

π_0({(w,b) ∈ Θ}) > 0

for every open cone Θ in R^{d+1}. Let π_t evolve by the Wasserstein gradient flow of (6) with initial condition π_0. Then (under additional technical conditions), the following hold:

1. ξ_x h_{π_t}(x) → +∞ for P-almost every x.
2. There exist

π* ∈ argmax{ min_{x∈spt P} (ξ_x · h_π(x)) | π s.t. ∫_{R^{d+2}} |a| [ |w| + |b| ] dπ ≤ 1 }   (7)

and a normalizing function μ̄: [0,∞) → (0,∞) such that μ̄(t) h_{π_t} → h_{π*} locally uniformly on R^d.

Remark 13 We call h* the maximum margin classifier in Barron space. Both the normalization condition in (7) and the normalizing function μ̄ are related to the Barron norm or variation norm of classifier functions. The existence of a maximizer in (7) is guaranteed by compactness. Existence of a limit of π_t in some weak sense has to be assumed a priori in Chizat and Bach (2018).

Remark 14 The open cone condition is satisfied for example if π_0 is a normal distribution on R^{d+1}, which is a realistic distribution. This property ensures a diversity in the initial distribution, which is required to guarantee convergence. The smallness condition on a is purely technical and required to deal with the non-differentiability of the ReLU activation function, see also Wojtowytsch (2020). The same result holds without modification for leaky-ReLU activation. With some additional modifications, it is assumed to also extend to smooth and bounded activation functions.

Remark 15 The divergence ξ_x h_{π_t}(x) → +∞ is expected to be logarithmic in time, which can almost be considered bounded in practice. The convergence h_{π_t} → h* is purely qualitative, without a rate.

Consider a binary classification problem in R where C_{−1} = [−2,−1] and C_1 = [1,2].

Lemma 16 Consider a binary classification problem in R where one class C_{−1} with label ξ = −1 is contained in [−2,−1] and the other class C_1 with label ξ = +1 is contained in [1,2]. Assume that −1 ∈ C_{−1}, 1 ∈ C_1 and that both classes contain at least one additional point.

The classification problem admits a continuum of maximum margin classifiers

f_b(x) = (1/(2[1+b])) · { x+b if x > b;  2x if −b < x < b;  x−b if x < −b },

parametrized by b ∈ [0,1].

In particular, we expect that h_{π_t} is not constant on either of the classes [1,2] or [−2,−1]. The proof is postponed until the appendix.

Remark 17 We described the mean field setting in its natural scaling. However, the same results are true (with a different time rescaling) if f is represented in the usual fashion as f(x) = Σ_{i=1}^m a_i σ(w_i^T x + b_i) without the normalizing factor 1/m, assuming that the weights are initialized such that a_i, w_i, b_i ∼ m^{−1/2}.
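A quick numerical check of Lemma 16 (our sketch; the extreme configuration {−2, −1, 1, 2} is chosen here for concreteness): every f_b realizes margin 1/2, so the maximum margin classifier is not unique, and every f_b is strictly increasing, hence non-constant on each class.

```python
import numpy as np

def f_b(x, b):
    # the maximum margin family from Lemma 16, parametrized by b in [0, 1]
    if x > b:
        return (x + b) / (2 * (1 + b))
    if x < -b:
        return (x - b) / (2 * (1 + b))
    return 2 * x / (2 * (1 + b))

xs = np.array([-2.0, -1.0, 1.0, 2.0])   # data points (hypothetical choice)
labels = np.array([-1, -1, 1, 1])
for b in [0.0, 0.5, 1.0]:
    margins = labels * np.array([f_b(x, b) for x in xs])
    print(b, margins.min())   # 0.5 for every b: the maximizer is not unique
# f_b(1) != f_b(2), so the classifier is non-constant on the class [1, 2]
```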
5.2. Two-layer networks with non-convex input classes

Assume that

P = p_1 δ_{−1} + p_2 δ_0 + p_3 δ_1,  p_1, p_2, p_3 ≥ 0,  p_1 + p_2 + p_3 = 1,

and that ξ_{−1} = ξ_1 = 1 and ξ_0 = −1. We consider the risk functional

R(h) = ∫_R exp(−ξ_x h(x)) P(dx) = p_1 exp(−h(−1)) + p_2 exp(h(0)) + p_3 exp(−h(1)),

which is similar to cross-entropy loss in its tails since

−log( exp(ξ_x h(x)) / (exp(ξ_x h(x)) + exp(−ξ_x h(x))) ) = −log( 1 / (1 + exp(−2ξ_x h(x))) )
≈ 1 − 1/(1 + exp(−2ξ_x h(x))) = exp(−2ξ_x h(x)) / (1 + exp(−2ξ_x h(x))) ≈ exp(−2ξ_x h(x))

if ξ_x h(x) is large. Further assume that the classifier is a shallow neural network with three neurons,

h(x) = Σ_{i=1}^3 a_i σ(w_i x + b_i).

To make life easier, we consider a simplified sigmoid activation function σ: R → R which satisfies σ(z) = 0 for z ≤ 0 and σ(z) = 1 for z ≥ 1, and we assume that the parameters (a_i, w_i, b_i) are initialized such that

h(x) = a_1 σ(−x) − a_2 σ(x+1) + a_3 σ(x).   (8)

In particular, σ′(w_i x + b_i) = 0 for P-almost every x at initialization and all i = 1,2,3. This implies that (w_i, b_i) are constant along gradient descent training, so only a_1, a_2, a_3 evolve. Evaluating at the three data points (h(−1) = a_1, h(0) = −a_2, h(1) = a_3 − a_2), we can write

R(h_{a_1,a_2,a_3}) = p_1 exp(−a_1) + p_2 exp(−a_2) + p_3 exp(a_2 − a_3).

Lemma 18 Let h = h_{a_1,a_2,a_3} be as in (8) for a_1, a_2, a_3 ∈ R. Assume that a_1, a_2, a_3 evolve by the gradient flow of F(a_1,a_2,a_3) = R(h_{a_1,a_2,a_3}). Then

lim_{t→∞} [ h(t,1) − h(t,−1) ] = 0  ⇔  p_3 = 2p_1,

independently of the initial condition (a_1,a_2,a_3)(0).

In general, assume that h = f∘g where f is a shallow neural network. Assume that there are two classes C_i, C_j such that the convex hull of g(C_i) intersects g(C_j). Then it is questionable that classes can collapse to a single point in the final layer. While this does not imply that g(C_i) and g(C_j) should concentrate around the vertices of a regular standard simplex, it suggests that simple geometries are preferred already before the penultimate layer if h is to collapse C_i to a single point.

The proof of Lemma 18 is given in the appendix.

Remark 19 We note that the probabilities of the different data points crucially enter the analysis, while the considerations above in Lemma 4 were entirely independent of the weight of different classes. The toy model does not capture interactions between the function values at different data points, which is precisely what drives the dynamics here.
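Lemma 18 can be probed numerically. The following sketch (ours) integrates the gradient flow of F by explicit Euler and tracks h(t,1) − h(t,−1) = (a_3 − a_2) − a_1; convergence is slow (logarithmic in t), so the horizon T is taken large.

```python
import numpy as np

# Explicit-Euler integration of the gradient flow of
# F(a1,a2,a3) = p1*exp(-a1) + p2*exp(-a2) + p3*exp(a2-a3),
# tracking the gap h(t,1) - h(t,-1) = (a3 - a2) - a1 (cf. Lemma 18).
def gap_limit_experiment(p1, p2, p3, T=2000.0, dt=1e-2):
    a = np.zeros(3)
    for _ in range(int(T / dt)):
        grad = np.array([-p1 * np.exp(-a[0]),
                         -p2 * np.exp(-a[1]) + p3 * np.exp(a[1] - a[2]),
                         -p3 * np.exp(a[1] - a[2])])
        a -= dt * grad
    return (a[2] - a[1]) - a[0]

for p in [(0.2, 0.4, 0.4), (0.25, 0.25, 0.5)]:
    print(p, gap_limit_experiment(*p), np.log(p[2] / (2 * p[0])))
# the gap approaches log(p3/(2p1)); it vanishes exactly when p3 = 2p1
```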
References

Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. In Advances in Neural Information Processing Systems, pages 3036-3046, 2018.

Lenaic Chizat and Francis Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. arXiv:2002.04486 [math.OC], 2020.

Yaim Cooper. The loss landscape of overparameterized neural networks. arXiv:1804.10200 [cs.LG], 2018.

Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. arXiv:1811.03804 [cs.LG], 2018a.

Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv:1810.02054 [cs.LG], 2018b.

Weinan E, Chao Ma, and Lei Wu. A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics. Sci. China Math., https://doi.org/10.1007/s11425-019-1628-5, 2020.

Michael Elad, Dror Simon, and Aviad Aberdam. Another step toward demystifying deep neural networks. Proceedings of the National Academy of Sciences, 117(44):27070-27072, 2020.

Jianfeng Lu and Stefan Steinerberger. Neural collapse with cross-entropy loss. arXiv:2012.08465 [cs.LG], 2020.

Dustin G Mixon, Hans Parshall, and Jianzong Pi. Neural collapse with unconstrained features. arXiv:2011.11619 [cs.LG], 2020.

Vardan Papyan, XY Han, and David L Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652-24663, 2020.

Stephan Wojtowytsch. On the global convergence of gradient descent training for two-layer ReLU networks in the mean field regime. arXiv:2005.13530 [math.AP], 2020.

Appendix A. Proofs

A.1. Proof from Section 2

We prove the convexity property of the loss function.

Proof [Proof of Lemma 1] Without loss of generality j = 1 and we abbreviate Φ = Φ_1. We compute

∇Φ(z) = −e_1 + Σ_{j=1}^k ( exp(z_j) / Σ_{i=1}^k exp(z_i) ) e_j,

∂_j ∂_l Φ(z) = ( exp(z_j) / Σ_{i=1}^k exp(z_i) ) δ_{jl} − exp(z_j) exp(z_l) / ( Σ_{i=1}^k exp(z_i) )² = p_j δ_{jl} − p_j p_l,

where p_j = exp(z_j) / Σ_{i=1}^k exp(z_i). Thus

a^T D²Φ a = Σ_{i=1}^k a_i² p_i − Σ_{i,j=1}^k a_i a_j p_i p_j = Σ_{i=1}^k a_i² p_i − ( Σ_{i=1}^k a_i p_i )² = ‖a‖²_{ℓ²(p)} − ‖a‖²_{ℓ¹(p)} ≥ 0

since p is a counting density on {1,...,k}. Since p is a vector with strictly positive entries, equality is attained if and only if a is a multiple of (1,...,1). Since the Hessian of Φ is positive semi-definite, the function is convex.

A.2. Proofs from Section 3

The rigorous proof that it is advantageous to collapse the output of a classifier to the center of mass over a class goes as follows.

Proof [Proof of Lemma 2] Denote by P♯ the orthogonal projection of h onto the orthogonal complement of the space spanned by the vector (1,...,1) and observe that ℓ(P♯h, ξ) = ℓ(h, ξ) for all h, ξ.

We compute by the vector-valued Jensen's inequality that

R(h) = R(P♯h)
= −∫_{R^d} log( exp(P♯h(x)·ξ_x) / Σ_{i=1}^k exp(P♯h(x)·e_i) ) P(dx)
= −Σ_{j=1}^k ∫_{C_j} log( exp(P♯h(x)·e_j) / Σ_{i=1}^k exp(P♯h(x)·e_i) ) P(dx)
= Σ_{j=1}^k |C_j| · (1/|C_j|) ∫_{C_j} Φ_j(P♯h(x)) P(dx)
≥ Σ_{j=1}^k |C_j| Φ_j( (1/|C_j|) ∫_{C_j} P♯h(x) P(dx) )
= −∫_{R^d} log( exp(P♯h̄(x)·ξ_x) / Σ_{i=1}^k exp(P♯h̄(x)·e_i) ) P(dx) = R(h̄),

and note that the inequality is strict unless P♯h(x) = P♯h̄(x) for P-almost every x, since ℓ is strictly convex on the orthogonal complement of (1,...,1). This is the case if and only if h(x) − h̄(x) ∈ span{(1,...,1)} for almost all x.

We proceed to show the optimality of a simplex configuration in the toy problem.

Proof [Proof of Lemma 4] Step 1. Existence of the minimizer. Since Φ_i is convex on the compact convex set B_R(0), Φ_i has a minimizer z_i in B_R(0).

Step 2. Uniqueness of the minimizer. By the Lagrange multiplier theorem, there exists λ_i ∈ R such that

0 = (∇Φ_i)(z_i) − λ_i z_i
= [ Σ_{j=1}^k ( exp(z_i·e_j) / Σ_{l=1}^k exp(z_i·e_l) ) e_j ] − e_i − λ_i z_i
= Σ_{j=1}^k [ exp(z_i·e_j)/Σ_{l=1}^k exp(z_i·e_l) − δ_{ij} − λ_i (z_i·e_j) ] e_j.   (9)

All coefficients in the basis expansion have to vanish separately, so in particular

0 = Σ_{j=1}^k [ exp(z_i·e_j)/Σ_{l=1}^k exp(z_i·e_l) − δ_{ij} − λ_i (z_i·e_j) ] = 1 − 1 − λ_i Σ_{j=1}^k (z_i·e_j) = −λ_i Σ_{j=1}^k (z_i·e_j),

meaning that either λ_i = 0 or Σ_{j=1}^k (z_i·e_j) = 0. Since exp(z_i·e_j)/Σ_{l=1}^k exp(z_i·e_l) − δ_{ij} ≠ 0 for any i, j and choice of z_i, we find that λ_i ≠ 0 and thus z_i ∈ ∂B_R(0) and

0 = Σ_{j=1}^k (z_i·e_j) = (1,...,1)·z_i.

Since Φ_i is strictly convex in the hyperplane H = {z ∈ R^k : (1,...,1)·z = 0} by Lemma 1, we find that the minimizer z_i ∈ B_R(0) ∩ H is unique.

Step 3. Symmetry.
Since the minimizer z_i is unique and Φ_i(z_1,...,z_k) is invariant under permutations of the coordinate entries z_j of its argument for j ≠ i, we find that also the minimizer z_i must have this invariance, i.e.

z_i = α_i e_i + β_i Σ_{j≠i} e_j.

Using symmetry, we find that α_i ≡ α and β_i ≡ β independently of i.

Remark 20 The first and third steps of the proof go through for general ℓ^p-norms since also these norms are invariant under the rearrangement of coordinates. The second step requires slightly different reasoning. Still, the Lagrange-multiplier equation

0 = Σ_{j=1}^k [ exp(z_i·e_j)/Σ_{l=1}^k exp(z_i·e_l) − δ_{ij} − λ_i |z_i·e_j|^{p−2} (z_i·e_j) ] e_j

can be used to conclude λ_i ≠ 0 and thus that any minimizer z_i must lie on the boundary of B_R(0). Now assume that there are multiple minimizers z_{i,1} and z_{i,2}. Then Φ_i cannot be uniformly convex along the line connecting z_{i,1} and z_{i,2}. Therefore z_{i,2} − z_{i,1} ∥ (1,...,1). Since the ball B_R(0) is strictly convex and Φ_i is constant along the connecting line, this is a contradiction to the fact that the minimum is only attained on the boundary.

The equations which determine α > 0, β < 0 become

|α|^p + (k−1)|β|^p = R^p,  |α|^{p−2} α + (k−1)|β|^{p−2} β = 0,

which are solved by

α = ( (k−1)^{1/(p−1)} / (1 + (k−1)^{1/(p−1)}) )^{1/p} R,  β = −( (1 − (k−1)^{1/(p−1)}/(1 + (k−1)^{1/(p−1)})) / (k−1) )^{1/p} R.

If p ∈ {1, ∞}, the unit spheres in R^k have straight segments and singularities, and the Lagrange-multiplier theorem no longer applies. However, we note that the facets of the ℓ¹-unit ball are never parallel to (1,...,1), and that the same statement is expected to hold. The same is true for the ℓ∞-unit ball close to points of the form α e_i + β Σ_{j≠i} e_j if k > 2.

A.3. Proofs from Section 4

Now we show that the simplex symmetry is optimal under certain conditions.

Proof [Proof of Lemma 8] We have

‖A‖_{ℓ²} = sup_{‖y‖≤R} ‖Ay‖/‖y‖ ≥ max_{1≤i≤k} ‖z_i‖/‖y_i‖ = max_{1≤i≤k} R/‖y_i‖ ≥ 1.

In particular ‖A‖_{ℓ²} ≥ 1, and if ‖A‖_{ℓ²} = 1, then ‖y_i‖ = R for all 1 ≤ i ≤ k.

We observe that the collection {z_1,...,z_{k−1}} spans the (k−1)-dimensional hyperplane H = {z ∈ R^k : (1,...,1)·z = 0} in R^k. Consequently, the collection {y_1,...,y_{k−1}} must be linearly independent in R^m, i.e. the basis of a (k−1)-dimensional subspace. The map A is therefore injective on this subspace and uniquely determined by the prescription z_i = Ay_i for i = 1,...,k−1. Since

0 = Σ_{j=1}^k z_j = Σ_{j=1}^k (A y_j) = A( Σ_{j=1}^k y_j ),

we conclude by injectivity that Σ_{j=1}^k y_j = 0. After a rotation, we may assume without loss of generality that m = k−1. Since rotations are Euclidean isometries, also R^{k−1} is equipped with the ℓ²-norm. Assume that ‖A‖_{ℓ²} = 1. Then

1. ‖y_j‖ = R for all j = 1,...,k, and
2. Σ_{j=1}^k y_j = 0.

This implies that for every i = 1,...,k we have

Σ_{j=1}^k ‖y_j − y_i‖² = Σ_{j=1}^k [ ‖y_j‖² + ‖y_i‖² − 2⟨y_i, y_j⟩ ] = 2kR² − 2⟨ y_i, Σ_{j=1}^k y_j ⟩ = 2kR².

The sum on the left is a sum of only k−1 positive terms since y_i − y_i = 0, so there exists j ≠ i such that ‖y_i − y_j‖² ≥ (2k/(k−1)) R².
A.3. Proofs from Section 4

Now we show that the simplex symmetry is optimal under certain conditions.

Proof [Proof of Lemma 8] We have

\[ \|A\|_{\ell^2} = \sup_{\|y\|\le R} \frac{\|Ay\|}{\|y\|} \ge \max_{1\le i\le k} \frac{\|z_i\|}{\|y_i\|} = \max_{1\le i\le k} \frac{R}{\|y_i\|} \ge 1. \]

In particular $\|A\|_{\ell^2} \ge 1$, and if $\|A\|_{\ell^2} = 1$, then $\|y_i\| = R$ for all $1 \le i \le k$.

We observe that the collection $\{z_1,\dots,z_{k-1}\}$ spans the $(k-1)$-dimensional hyperplane $H = \{z\in\mathbb R^k : (1,\dots,1)\cdot z = 0\}$ in $\mathbb R^k$. Consequently, the collection $\{y_1,\dots,y_{k-1}\}$ must be linearly independent in $\mathbb R^m$, i.e. the basis of a $(k-1)$-dimensional subspace. The map $A$ is therefore injective and uniquely determined by the prescription $z_i = A y_i$ for $i = 1,\dots,k-1$. Since

\[ 0 = \sum_{j=1}^k z_j = \sum_{j=1}^k (A y_j) = A\left(\sum_{j=1}^k y_j\right), \]

we conclude by injectivity that $\sum_{j=1}^k y_j = 0$. After a rotation, we may assume without loss of generality that $m = k-1$. Since rotations are Euclidean isometries, $\mathbb R^{k-1}$ is again equipped with the $\ell^2$-norm. Assume that $\|A\|_{\ell^2} = 1$. Then

1. $\|y_j\| = R$ for all $j = 1,\dots,k$, and
2. $\sum_{j=1}^k y_j = 0$.

This implies that for every $i = 1,\dots,k$ we have

\[
\sum_{j=1}^k \|y_j - y_i\|^2
= \sum_{j=1}^k \left[\|y_j\|^2 + \|y_i\|^2 - 2\langle y_i, y_j\rangle\right]
= 2kR^2 - 2\left\langle y_i, \sum_{j=1}^k y_j\right\rangle
= 2kR^2.
\]

The sum on the left is a sum of only $k-1$ positive terms since $y_i - y_i = 0$, so there exists $j \ne i$ such that $\|y_i - y_j\|^2 \ge \frac{2k}{k-1}R^2$. On the other hand, we know that $z_i$ and $z_j$ coincide in all but two coordinates, so by (4) we find that

\[
\|z_i - z_j\|^2 = 2(\alpha - \beta)^2
= 2\left[\sqrt{\frac{k-1}{k}} + \frac{1}{\sqrt{k(k-1)}}\right]^2 R^2
= 2\,\frac{k-1}{k}\left[1 + \frac{1}{k-1}\right]^2 R^2
= \frac{2k}{k-1}\,R^2.
\]

In particular, since $\|A\|_{\ell^2} = 1$ we find that

\[ \frac{2k}{k-1}\,R^2 = \|z_i - z_j\|^2 = \|A(y_i - y_j)\|^2 \le \|y_i - y_j\|^2 \tag{10} \]

for all $1 \le i \ne j \le k$. Summing (10) over $j \ne i$ yields $2kR^2$ on both sides, so strict inequality cannot hold in (10) for any pair, and thus $\|y_i - y_j\|^2 = \|z_i - z_j\|^2$ for all $1 \le i \ne j \le k$. This in particular implies that $\langle y_i, y_j\rangle = \langle z_i, z_j\rangle$ for all $i, j = 1,\dots,k$. Since $\{y_1,\dots,y_{k-1}\}$ is a basis of $\mathbb R^{k-1}$, we conclude that $A$ is an isometric embedding.

A.4. Proofs from Section 5

We begin by proving that the maximum margin classifier in the problem under discussion is in fact $f(x) = \frac{x}{2}$.

Proof [Proof of Lemma 16] Note that $\bar f(x) = \frac{f(x) - f(-x)}{2}$ satisfies

\[ \xi_x \bar f(x) = \frac{\xi_x f(x) + \xi_{-x} f(-x)}{2} \ge \min\big\{\xi_x f(x),\ \xi_{-x} f(-x)\big\} \]

for $P$-almost every $x$. We can therefore assume that the maximum margin classifier is an odd function. The function class under consideration is therefore the convex hull of the family

\[ \mathcal H^\circ = \left\{\frac{a\,\sigma(wx+b) - a\,\sigma(b-wx)}{2\,|a|\,[|w|+|b|]} : a \ne 0,\ (w,b)\ne 0\right\}. \]

Consider the map

\[ F : \operatorname{conv}(\mathcal H^\circ) \to \mathbb R, \qquad F(h) = h(1), \]

which bounds the maximum margin functional from above: $\min_{x\in\operatorname{spt}P}\big(\xi_x h(x)\big) \le 1\cdot h(1)$. Since $F$ is linear, it attains its maximum at the boundary of the class, i.e. there exist $(w,b)$ such that

\[ \frac{\sigma(w+b) - \sigma(b-w)}{2\,[|w|+|b|]} = F\left(\frac{\sigma(wx+b) - \sigma(b-wx)}{2\,[|w|+|b|]}\right) = \max_{h\in\operatorname{conv}(\mathcal H^\circ)} F(h), \]

and thus

\[ \max_{h\in\operatorname{conv}(\mathcal H^\circ)}\ \min_{x\in\operatorname{spt}P}\big(\xi_x h(x)\big) = \max_{w,b} \frac{\sigma(w+b) - \sigma(b-w)}{2\,[|w|+|b|]} \le \frac{\sigma(w+b)}{2\,[|w|+|b|]} \le \frac12. \]

The bound is attained precisely when $w > b \ge 0$, i.e. due to the positive homogeneity of ReLU if and only if

\[
h(x) = \frac{\sigma(x+b) - \sigma(b-x)}{2\,[1+|b|]}
= \frac{1}{2\,[1+|b|]}
\begin{cases}
x+b & x > b,\\
2x & -b < x < b,\\
x-b & x < -b
\end{cases}
\]

for some $b \in [0,1)$.
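Purely as an illustration (not part of the proof), one can scan the normalized margin value $\frac{\sigma(w+b)-\sigma(b-w)}{2(|w|+|b|)}$ over a grid of parameters and observe that it never exceeds $1/2$, while $h(x) = x/2$, i.e. $(w,b) = (1,0)$, attains it. A small sketch in the same spirit as the previous ones:

import numpy as np

def relu(t):
    return max(t, 0.0)

def normalized_margin(w, b):
    # value h(1) of the normalized odd ReLU unit from the proof of Lemma 16
    return (relu(w + b) - relu(b - w)) / (2.0 * (abs(w) + abs(b)))

# h(x) = x/2 corresponds to (w, b) = (1, 0) and attains margin 1/2 ...
assert normalized_margin(1.0, 0.0) == 0.5

# ... and no parameter choice on a fine grid does better
grid = np.linspace(-3.0, 3.0, 201)
best = max(normalized_margin(w, b) for w in grid for b in grid
           if (w, b) != (0.0, 0.0))
print(f"largest normalized margin on the grid: {best:.6f}")
assert best <= 0.5 + 1e-12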
Finally, we prove the non-collapse result in the three neuron model.

Proof [Proof of Lemma 18] The gradient flow equation is the ODE

\[
\begin{pmatrix} \dot a_1 \\ \dot a_2 \\ \dot a_3 \end{pmatrix}
= \begin{pmatrix} p_1 \exp(-a_1) \\ p_2 \exp(-a_2) - p_3 \exp(a_2 - a_3) \\ p_3 \exp(a_2 - a_3) \end{pmatrix}.
\]

The first equation is easily solved explicitly, since

\[ \frac{d}{dt}\exp(a_1) = \exp(a_1)\,\dot a_1 = p_1 \quad\Rightarrow\quad a_1(t) = \log\left(e^{a_1(0)} + p_1 t\right). \]

The second equation can be reformulated as

\[ \frac{d}{dt}\exp(a_2) = \exp(a_2)\,\dot a_2 = p_2 - p_3\exp(2a_2 - a_3), \]

which leads us to consider

\[
\frac{d}{dt}\exp(2a_2 - a_3) = \exp(2a_2 - a_3)\,\big[2\dot a_2 - \dot a_3\big]
= \exp(2a_2 - a_3)\,\big[2p_2\exp(-a_2) - 2p_3\exp(a_2 - a_3) - p_3\exp(a_2 - a_3)\big]
= \exp(2a_2 - a_3)\,\big[2p_2 - 3p_3\exp(2a_2 - a_3)\big]\exp(-a_2).
\]

Denote $f(t) = \exp(2a_2 - a_3)$. The differential equation

\[ f' = f\,\big(2p_2 - 3p_3 f\big)\exp(-a_2) \tag{11} \]

implies that $f \equiv \frac{2p_2}{3p_3}$ if $f(0) = \frac{2p_2}{3p_3}$. The same is true for long times and arbitrary initialization (anticipating that the integral of $\exp(-a_2)$ diverges). If the equality is satisfied exactly, we find that

\[
\frac{d}{dt}\exp(a_2) = p_2 - p_3\exp(2a_2 - a_3) = p_2 - p_3\,\frac{2p_2}{3p_3} = \frac{p_2}{3}
\quad\Rightarrow\quad
a_2(t) = \log\left(e^{a_2(0)} + \frac{p_2}{3}\,t\right),
\]

and thus

\[
\exp(2a_2 - a_3) = \frac{2p_2}{3p_3}
\quad\Rightarrow\quad
\exp(a_3) = \frac{3p_3}{2p_2}\exp(2a_2)
\quad\Rightarrow\quad
a_3 = \log\left(\frac{3p_3}{2p_2}\right) + 2a_2.
\]

The question is whether all data points in the same class are mapped to the same value. This is only a relevant question for the 'outer' class, where (writing, with slight abuse of notation, $f(t,x)$ for the classifier output at time $t$)

\[
f(t,-1) = a_1(t) = \log\left(e^{a_1(0)} + p_1 t\right),
\qquad
f(t,1) = (a_3 - a_2)(t) = \log\left(\frac{3p_3}{2p_2}\right) + a_2(t) = \log\left(\frac{3p_3}{2p_2}\right) + \log\left(e^{a_2(0)} + \frac{p_2}{3}\,t\right).
\]

In particular,

\[
f(t,1) - f(t,-1)
= \log\left(\frac{3p_3}{2p_2}\right) + \log\left(\frac{e^{a_2(0)} + \frac{p_2}{3}\,t}{e^{a_1(0)} + p_1 t}\right)
\ \longrightarrow\ \log\left(\frac{3p_3}{2p_2}\right) + \log\left(\frac{p_2}{3p_1}\right)
= \log\left(\frac{p_3}{2p_1}\right)
\]

as $t \to \infty$. Thus the difference between $f(t,1)$ and $f(t,-1)$ goes to zero if and only if $p_3 = 2p_1$.

Finally, we remark that if $\exp(2a_2 - a_3) = \frac{2p_2}{3p_3}$ is not satisfied exactly at time $t = 0$, then by (11) it is approximately satisfied at a later time $t_0 \gg 1$. Since the influence of the initial condition goes to zero, the conclusion is almost satisfied by considering the dynamics starting at $(a_1, a_2, a_3)(t_0)$. This argument can easily be made quantitative.
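To close, a small numerical experiment (ours, not from the paper) integrates the gradient flow from the proof of Lemma 18 and checks that $f(t,1) - f(t,-1)$ approaches $\log\big(p_3/(2p_1)\big)$, so that the two outputs of the 'outer' class merge exactly when $p_3 = 2p_1$. We use scipy's general-purpose ODE solver; the parameter values are arbitrary choices for illustration:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, a, p1, p2, p3):
    # the gradient flow ODE from the proof of Lemma 18
    return [p1 * np.exp(-a[0]),
            p2 * np.exp(-a[1]) - p3 * np.exp(a[1] - a[2]),
            p3 * np.exp(a[1] - a[2])]

T = 1e6
for p1, p2, p3 in [(0.25, 0.25, 0.50),   # p3 = 2 p1: the gap should vanish
                   (0.40, 0.30, 0.30)]:  # p3 != 2 p1: a persistent gap remains
    sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0, 0.0],
                    args=(p1, p2, p3), rtol=1e-10, atol=1e-12)
    a1, a2, a3 = sol.y[:, -1]
    gap = (a3 - a2) - a1   # f(T, 1) - f(T, -1)
    print(f"p = ({p1}, {p2}, {p3}): gap(T) = {gap:+.4f}, "
          f"predicted limit = {np.log(p3 / (2 * p1)):+.4f}")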
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.