{
    "name": "SciDuet-ACL-Test-Challenge-woSectionHeader",
    "data": [
        {
            "slides": {
                "0": {
                    "title": "Background Semantic Hashing",
                    "text": [
                        "Fast and accurate similarity search (i.e., finding documents from a large corpus that are most similar to a query of interest) is at the core of many information retrieval applications;",
                        "One strategy is to represent each document as a continuous vector: such as Paragraph",
                        "Cosine similarity is typically employed to measure relatedness;",
                        "Semantic hashing is an effective approach: the similarity between two documents can be evaluated by simply calculating pairwise Hamming distances between hashing (binary) codes;"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Motivation and contributions",
                    "text": [
                        "Existing semantic hashing approaches typically require two-stage training procedures (e.g. continuous representations are crudely binarized after training);",
                        "Vast amount of unlabeled data is not fully leveraged for learning binary document representations.",
                        "we propose a simple and generic neural architecture for text hashing that learns binary latent codes for documents, which be trained an end-to-end manner;",
                        "We leverage a Neural Variational Inference (NVI) framework, which introduces data-dependent noises during training and makes effective use of unlabeled information."
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "4": {
                    "title": "Framework components Injecting Data dependent Noise to z",
                    "text": [
                        "We found that injecting random",
                        "Gaussian noise into z makes the decoder a more favorable regularizer for the binary codes;",
                        "x <latexit sha1_base64=\"wrYRrS9nqr2/jTKdHNfdRLtLB0k=\">AAAB53icbVBNS8NAEJ3Ur1q/qh69LBbBU0lFUG9FLx5bMLbQhrLZTtq1m03Y3Ygl9Bd48aDi1b/kzX/jts1BWx8MPN6bYWZekAiujet+O4WV1bX1jeJmaWt7Z3evvH9wr+NUMfRYLGLVDqhGwSV6hhuB7UQhjQKBrWB0M/Vbj6g0j+WdGSfoR3QgecgZNVZqPvXKFbfqzkCWSS0nFcjR6JW/uv2YpRFKwwTVulNzE+NnVBnOBE5K3VRjQtmIDrBjqaQRaj+bHTohJ1bpkzBWtqQhM/X3REYjrcdRYDsjaoZ60ZuK/3md1ISXfsZlkhqUbL4oTAUxMZl+TfpcITNibAllittbCRtSRZmx2ZRsCLXFl5eJd1a9qrrN80r9Ok+jCEdwDKdQgwuowy00wAMGCM/wCm/Og/PivDsf89aCk88cwh84nz9UTYzP</latexit> log x",
                        "The objective function in (4) can be written in a form similar to the rate-distortion tradeoff:"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": [
                        "figure/image/955-Figure1-1.png"
                    ]
                },
                "8": {
                    "title": "Experiments Ablation study",
                    "text": [
                        "Figure: The precisions of the top 100 retrieved Table: Ablation study with different documents for NASH-DN with stochastic or encoder/decoder networks. deterministic binary latent variables.",
                        "Leveraging stochastically sampling during training generalizes better;",
                        "Linear decoder networks gives rise to better empirical results."
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": [
                        "figure/image/955-Table6-1.png",
                        "figure/image/955-Figure3-1.png"
                    ]
                },
                "9": {
                    "title": "Experiments Qualitative Analysis",
                    "text": [
                        "Figure: Examples of learned compact hashing codes on 20Newsgroups dataset.",
                        "NASH typically compresses documents with shared topics into very similar binary codes."
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": [
                        "figure/image/955-Table5-1.png"
                    ]
                }
            },
            "paper_title": "NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing",
            "paper_id": "955",
            "paper": {
                "title": "NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing",
                "abstract": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The problem of similarity search, also called nearest-neighbor search, consists of finding documents from a large collection of documents, or corpus, which are most similar to a query document of interest."
                    },
                    {
                        "id": 1,
                        "string": "Fast and accurate similarity search is at the core of many information retrieval applications, such as plagiarism analysis (Stein et al., 2007) , collaborative filtering (Koren, 2008) , content-based multimedia retrieval (Lew et al., 2006) and caching (Pandey et al., 2009) ."
                    },
                    {
                        "id": 2,
                        "string": "Semantic hashing is an effective approach for fast similarity search (Salakhutdinov and Hinton, 2009; Zhang * Equal contribution."
                    },
                    {
                        "id": 3,
                        "string": "et al., 2010; Wang et al., 2014) ."
                    },
                    {
                        "id": 4,
                        "string": "By representing every document in the corpus as a similaritypreserving discrete (binary) hashing code, the similarity between two documents can be evaluated by simply calculating pairwise Hamming distances between hashing codes, i.e., the number of bits that are different between two codes."
                    },
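                    {
                        "id": "sketch-hamming",
                        "string": "A minimal code sketch (not from the original paper) of the similarity computation just described: the Hamming distance between two binary codes is the popcount of the XOR of the codes. The integer packing and the example codes are illustrative assumptions.\n\ndef hamming(a: int, b: int) -> int:\n    # Differing bits = popcount of the XOR of the two packed codes.\n    return bin(a ^ b).count('1')\n\n# Example: 8-bit codes 11101001 and 11111001 differ in exactly one bit.\nassert hamming(0b11101001, 0b11111001) == 1"
                    },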
                    {
                        "id": 5,
                        "string": "Given that today, an ordinary PC is able to execute millions of Hamming distance computations in just a few milliseconds (Zhang et al., 2010) , this semantic hashing strategy is very computationally attractive."
                    },
                    {
                        "id": 6,
                        "string": "While considerable research has been devoted to text (semantic) hashing, existing approaches typically require two-stage training procedures."
                    },
                    {
                        "id": 7,
                        "string": "These methods can be generally divided into two categories: (i) binary codes for documents are first learned in an unsupervised manner, then l binary classifiers are trained via supervised learning to predict the l-bit hashing code (Zhang et al., 2010; Xu et al., 2015) ; (ii) continuous text representations are first inferred, which are binarized as a second (separate) step during testing (Wang et al., 2013; Chaidaroon and Fang, 2017) ."
                    },
                    {
                        "id": 8,
                        "string": "Because the model parameters are not learned in an end-to-end manner, these two-stage training strategies may result in suboptimal local optima."
                    },
                    {
                        "id": 9,
                        "string": "This happens because different modules within the model are optimized separately, preventing the sharing of information between them."
                    },
                    {
                        "id": 10,
                        "string": "Further, in existing methods, binary constraints are typically handled adhoc by truncation, i.e., the hashing codes are obtained via direct binarization from continuous representations after training."
                    },
                    {
                        "id": 11,
                        "string": "As a result, the information contained in the continuous representations is lost during the (separate) binarization process."
                    },
                    {
                        "id": 12,
                        "string": "Moreover, training different modules (mapping and classifier/binarization) separately often requires additional hyperparameter tuning for each training stage, which can be laborious and timeconsuming."
                    },
                    {
                        "id": 13,
                        "string": "In this paper, we propose a simple and generic neural architecture for text hashing that learns binary latent codes for documents in an end-toend manner."
                    },
                    {
                        "id": 14,
                        "string": "Inspired by recent advances in neural variational inference (NVI) for text processing (Miao et al., 2016; Yang et al., 2017; Shen et al., 2017b) , we approach semantic hashing from a generative model perspective, where binary (hashing) codes are represented as either deterministic or stochastic Bernoulli latent variables."
                    },
                    {
                        "id": 15,
                        "string": "The inference (encoder) and generative (decoder) networks are optimized jointly by maximizing a variational lower bound to the marginal distribution of input documents (corpus)."
                    },
                    {
                        "id": 16,
                        "string": "By leveraging a simple and effective method to estimate the gradients with respect to discrete (binary) variables, the loss term from the generative (decoder) network can be directly backpropagated into the inference (encoder) network to optimize the hash function."
                    },
                    {
                        "id": 17,
                        "string": "Motivated by the rate-distortion theory (Berger, 1971; Theis et al., 2017) , we propose to inject data-dependent noise into the latent codes during the decoding stage, which adaptively accounts for the tradeoff between minimizing rate (number of bits used, or effective code length) and distortion (reconstruction error) during training."
                    },
                    {
                        "id": 18,
                        "string": "The connection between the proposed method and ratedistortion theory is further elucidated, providing a theoretical foundation for the effectiveness of our framework."
                    },
                    {
                        "id": 19,
                        "string": "Summarizing, the contributions of this paper are: (i) to the best of our knowledge, we present the first semantic hashing architecture that can be trained in an end-to-end manner; (ii) we propose a neural variational inference framework to learn compact (regularized) binary codes for documents, achieving promising results on both unsupervised and supervised text hashing; (iii) the connection between our method and rate-distortion theory is established, from which we demonstrate the advantage of injecting data-dependent noise into the latent variable during training."
                    },
                    {
                        "id": 20,
                        "string": "Related Work Models with discrete random variables have attracted much attention in the deep learning community (Jang et al., 2016; Maddison et al., 2016; van den Oord et al., 2017; Li et al., 2017; Shu and Nakayama, 2017) ."
                    },
                    {
                        "id": 21,
                        "string": "Some of these structures are more natural choices for language or speech data, which are inherently discrete."
                    },
                    {
                        "id": 22,
                        "string": "More specifically, g (x) < l a t e x i t s h a 1 _ b a s e 6 4 = \" 4 g s o F B p B B A b m y f n 2 Z e N A 3 f T q K 6 U = \" > A A A B 7 3 i c b V B N T w I x E J 3 F L 8 Q v 1 K O X R m K C F 7 J r S N Q b 0 Y t H T F z B w I Z 0 S x c a 2 u 6 m 7 R r J h l / h x Y M a r / 4 d b / 4 b C + x B w Z d M 8 v L e T G b m h Q l n 2 r j u t 1 N Y W V 1 b 3 y h u l r a 2 d 3 b 3 y v s H 9 z p O F a E + i X m s 2 i H W l D N J f c M M p + 1 E U S x C T l v h 6 H r q t x 6 p 0 i y W d 2 a c 0 E D g g W Q R I 9 h Y 6 W H Q 6 y Z D V n 0 6 7 Z U r b s 2 d A S 0 T L y c V y N H s l b + 6 / Z i k g k p D O N a 6 4 7 m J C T K s D C O c T k r d V N M E k x E e 0 I 6 l E g u q g 2 x 2 8 A S d W K W P o l j Z k g b N 1 N 8 T G R Z a j 0 V o O w U 2 Q 7 3 o T c X / v E 5 q o o s g Y z J J D Z V k v i h K O T I x m n 6 P + k x R Y v j Y E k w U s 7 c i M s Q K E 2 M z K t k Q v M W X l 4 l / V r u s u b f 1 S u M q T 6 M I R 3 A M V f D g H B p w A 0 3 w g Y C A Z 3 i F N 0 c 5 L 8 6 7 8 z F v L T j 5 z C H 8 g f P 5 A 5 / Q j 9 M = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" 4 g s o F B p B B A b m y f n 2 Z e N A 3 f T q K 6 U = \" > A A A B 7 3 i c b V B N T w I x E J 3 F L 8 Q v 1 K O X R m K C F 7 J r S N Q b 0 Y t H T F z B w I Z 0 S x c a 2 u 6 m 7 R r J h l / h x Y M a r / 4 d b / 4 b C + x B w Z d M 8 v L e T G b m h Q l n 2 r j u t 1 N Y W V 1 b 3 y h u l r a 2 d 3 b 3 y v s H 9 z p O F a E + i X m s 2 i H W l D N J f c M M p + 1 E U S x C T l v h 6 H r q t x 6 p 0 i y W d 2 a c 0 E D g g W Q R I 9 h Y 6 W H Q 6 y Z D V n 0 6 7 Z U r b s 2 d A S 0 T L y c V y N H s l b + 6 / Z i k g k p D O N a 6 4 7 m J C T K s D C O c T k r d V N M E k x E e 0 I 6 l E g u q g 2 x 2 8 A S d W K W P o l j Z k g b N 1 N 8 T G R Z a j 0 V o O w U 2 Q 7 3 o T c X / v E 5 q o o s g Y z J J D Z V k v i h K O T I x m n 6 P + k x R Y v j Y E k w U s 7 c i M s Q K E 2 M z K t k Q v M W X l 4 l / V r u s u b f 1 S u M q T 6 M I R 3 A M V f D g H B p w A 0 3 w g Y C A Z 3 i F N 0 c 5 L 8 6 7 8 z F v L T j 5 z C H 8 g f P 5 A 5 / Q j 9 M = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" 4 g s o F B p B B A b m y f n 2 Z e N A 3 f T q K 6 U = \" > A A A B 7 3 i c b V B N T w I x E J 3 F L 8 Q v 1 K O X R m K C F 7 J r S N Q b 0 Y t H T F z B w I Z 0 S x c a 2 u 6 m 7 R r J h l / h x Y M a r / 4 d b / 4 b C + x B w Z d M 8 v L e T G b m h Q l n 2 r j u t 1 N Y W V 1 b 3 y h u l r a 2 d 3 b 3 y v s H 9 z p O F a E + i X m s 2 i H W l D N J f c M M p + 1 E U S x C T l v h 6 H r q t x 6 p 0 i y W d 2 a c 0 E D g g W Q R I 9 h Y 6 W H Q 6 y Z D V n 0 6 7 Z U r b s 2 d A S 0 T L y c V y N H s l b + 6 / Z i k g k p D O N a 6 4 7 m J C T K s D C O c T k r d V N M E k x E e 0 I 6 l E g u q g 2 x 2 8 A S d W K W P o l j Z k g b N 1 N 8 T G R Z a j 0 V o O w U 2 Q 7 3 o T c X / v E 5 q o o s g Y z J J D Z V k v i h K O T I x m n 6 P + k x R Y v j Y E k w U s 7 c i M s Q K E 2 M z K t k Q v M W X l 4 l / V r u s u b f 1 S u M q T 6 M I R 3 A M V f D g H B p w A 0 3 w g Y C A Z 3 i F N 0 c 5 L 8 6 7 8 z F v L T j 5 z C H 8 g f P 5 A 5 / Q j 9 M = < / l a t e x i t > z < l a t e x i t s h a 1 _ b a s e 6 4 = \" W I l b T b B F L L c q O v t 8 1 z B c 0 3 G a g J U = \" > A A A B 5 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 l F U G 9 F L x 5 b M L b Q h r L Z T t q 1 m 0 3 Y 3 Q g 1 9 B d 4 8 a D i 1 b / k z X / j t s 1 B W x 8 M P N 6 b Y W Z e k A i u j e t + O 4 W V 1 b X 1 j e J m a W t 7 Z 3 e v v H 9 w r + N U M f R Y L G L V D q h G w S V 6 h h u B 
7 U Q h j Q K B r W B 0 M / V b j 6 g 0 j + W d G S f o R 3 Q g e c g Z N V Z q P v X K F b f q z k C W S S 0 n F c j R 6 J W / u v 2 Y p R F K w w T V u l N z E + N n V     For natural language processing (NLP), although significant research has been made to learn continuous deep representations for words or documents (Mikolov et al., 2013; Kiros et al., 2015; , discrete neural representations have been mainly explored in learning word embeddings (Shu and Nakayama, 2017; Chen et al., 2017) ."
                    },
                    {
                        "id": 23,
                        "string": "In these recent works, words are represented as a vector of discrete numbers, which are very efficient storage-wise, while showing comparable performance on several NLP tasks, relative to continuous word embeddings."
                    },
                    {
                        "id": 24,
                        "string": "However, discrete representations that are learned in an endto-end manner at the sentence or document level have been rarely explored."
                    },
                    {
                        "id": 25,
                        "string": "Also there is a lack of strict evaluation regarding their effectiveness."
                    },
                    {
                        "id": 26,
                        "string": "Our work focuses on learning discrete (binary) representations for text documents."
                    },
                    {
                        "id": 27,
                        "string": "Further, we employ semantic hashing (fast similarity search) as a mechanism to evaluate the quality of learned binary latent codes."
                    },
                    {
                        "id": 28,
                        "string": "R w K R T 3 U a D k 7 V R z G o e S t 8 L R z d R v P X J t R K L u c Z z y I K Y D J S L B K F q p 1 R 1 S z J 8 m v W r N r b s z k G X i F a Q G B Z q 9 6 l e 3 n 7 A s 5 g q Z p M Z 0 P D f F I K c a B Z N 8 U u l m h q e U j e i A d y x V N O Y m y G f n T s i J V f o k S r Q t h W S m / p 7 I a W z M O A 5 t Z 0 x x a B a 9 q f i f 1 8 k w u g x y o d I M u W L z R V E m C S Z k + j v p C 8 0 Z y r E l l G l h b y V s S D V l a B O q 2 B C 8 x Z e X i X 9 W v 6 q 7 d + e 1 x n W R R h m O 4 B h O w Y M L a M A t N M E H B i N 4 h l d 4 c 1 L n x X l 3 P u a t J a e Y O Y Q / c D 5 / A B u 5 j 5 w = < / l a t e x i t > x < l a t e x i t s h a 1 _ b a s e 6 4 = \" w r Y R r S 9 n q r 2 / j T K d H N f d R L t L B 0 k = \" > A A A B 5 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 l F U G 9 F L x 5 b M L b Q h r L Z T t q 1 m 0 3 Y 3 Y g l 9 B d 4 8 a D i 1 b / k z X / j t s 1 B W x 8 M P N 6 b Y W Z e k A i u j e t + O 4 W V 1 b X 1 j e J m a W t 7 Z 3 e v v H 9 w r + N U M f R Y L G L V D q h G w S V 6 h h u B 7 U Q h j Q K B r W B 0 M / V b j 6 g 0 j + W d G S f o R 3 Q g e c g Z N V Z q P v X K F b f q z k C W S S 0 n F c j R 6 J W / u v 2 Y p R F K w w T V u l N z E + N n V B n O B E 5 K 3 V R j Q t m I D r B j q a Q R a j + b H T o h J 1 b p k z B W t q Q h M / X 3 R E Y j r c d R Y D s j a o Z 6 0 Z u K / 3 m d 1 I S X f s Z l k h q U b L 4 o T A U x M Z l + T f p c I T N i b A l l i t t b C R t S R Z m x 2 Z R s C L X F / j T K d H N f d R L t L B 0 k = \" > A A A B 5 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 l F U G 9 F L x 5 b M L b Q h r L Z T t q 1 m 0 3 Y 3 Y g l 9 B d 4 8 a D i 1 b / k z X / j t s 1 B W x 8 M P N 6 b Y W Z e k A i u j e t + O 4 W V 1 b X 1 j e J m a W t 7 Z 3 e v v H 9 w r + N U M f R Y L G L V D q h G w S V 6 h h u B 7 U Q h j Q K B r W B 0 M / V b j 6 g 0 j + W d G S f o R 3 Q g e c g Z N V Z q P v X K F b f q z k C W S S 0 n F c j R 6 J W / u v 2 Y p R F K w w T V u l N z E + N n V B n O B E 5 K 3 V R j Q t m I D r B j q a Q R a j + b H T o h J 1 b p k z B W t q Q h M / X 3 R E Y j r c d R Y D s j a o Z 6 0 Z u K / 3 m d 1 I S X f s Z l k h q U b L 4 o T A U x M Z l + T f p c I T N i b A l l i t t b C R t S R Z m x 2 Z R s C L X F / j T K d H N f d R L t L B 0 k = \" > A A A B 5 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 l F U G 9 F L x 5 b M L b Q h r L Z T t q 1 m 0 3 Y 3 Y g l 9 B d 4 8 a D i 1 b / k z X / j t s 1 B W x 8 M P N 6 b Y W Z e k A i u j e t + O 4 W V 1 b X 1 j e J m a W t 7 Z 3 e v v H 9 w r + N U M f R Y L G L V D q h G w S V 6 h h u B 7 U Q h j Q K B r W B 0 M / V b j 6 g 0 j + W d G S f o R 3 Q g e c g Z N V Z q P v X K F b f q z k C W S S 0 n F c j R 6 J W / u v 2 Y p R F K w w T V u l N z E + N n V B n O B E 5 K 3 V R j Q t m I D r B j q a Q R a j + b H T o h J 1 b p k z B W t q Q h M / X 3 R E Y j r c d R Y D s j a o Z 6 0 Z u K / 3 m d 1 I S X f s Z l k h q U b L 4 o T A U x M Z l + T f p c I T N i b A l l i t t b C R t S R Z m x 2 Z R s C L X F l 5 e J d 1 a 9 q r r N 8 0 r 9 O k + j C E d w D K d Q g w u o w y 0 0 w A M G C M / w C m / O g / P i v D s f 8 9 a C k 8 8 c w h 8 4 n z 9 U T Y z P < / l a t e x i t > log 2 < l a t e x i t s h a 1 _ b a s e 6 4 = \" 7 f X R e u S i 2 A G X H Q b F X 8 o a g c V U X c o = \" > A A A B 8 3 i c b V B N S 8 N A E J 3 4 W e t X 1 a O X x S J 4 K k k R 1 F v R i 8 c K x h a a W D b b T b p 0 N x t 3 N 4 V S + j u 8 e F D x 6 p / x 5 r 9 x 2 + a g r Q 8 G H u / N M D M v y j j T x n W / n Z X V t f W N z d J W e X t 
n d 2 + / c n D 4 o G W u C P W J 5 F K 1 I 6 w p Z y n 1 D T O c t j N F s Y g 4 b U W D m 6 n f G l K l m U z v z S i j o c B J y m J G s L F S G H C Z o E C z R O D H e r d S d W v u D G i Z e A W p Q o F m t / I V 9 C T J B U 0 N 4 V j r j u d m J h x j Z R j h d F I O c k 0 z T A Y 4 o R 1 L U y y o D s e z o y f o 1 C o 9 F E t l K z V o p v 6 e G G O h 9 U h E t l N g 0 9 e L 3 l T 8 z + v k J r 4 M x y z N c k N T M l 8 U 5 x w Z i a Y J o B 5 T l B g + s g Q T x e y t i P S x w s T Y n M o 2 B G / x 5 W X i 1 2 t X N f f u v N q 4 L t I o w T G c w B l 4 c A E N u I U m + E D g C Z 7 h F d 6 c o f P i v D s f 8 9 Y V p 5 g 5 g j 9 w P n 8 A n m W R i g = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" 7 f X R e u S i 2 A G X H Q b F X 8 o a g c V U X c o = \" > A A A B 8 3 i c b V B N S 8 N A E J 3 4 W e t X 1 a O X x S J 4 K k k R 1 F v R i 8 c K x h a a W D b b T b p 0 N x t 3 N 4 V S + j u 8 e F D x 6 p / x 5 r 9 x 2 + a g r Q 8 G H u / N M D M v y j j T x n W / n Z X V t f W N z d J W e X t n d 2 + / c n D 4 o G W u C P W J 5 F K 1 I 6 w p Z y n 1 D T O c t j N F s Y g 4 b U W D m 6 n f G l K l m U z v z S i j o c B J y m J G s L F S G H C Z o E C z R O D H e r d S d W v u D G i Z e A W p Q o F m t / I V 9 C T J B U 0 N 4 V j r j u d m J h x j Z R j h d F I O c k 0 z T A Y 4 o R 1 L U y y o D s e z o y f o 1 C o 9 F E t l K z V o p v 6 e G G O h 9 U h E t l N g 0 9 e L 3 l T 8 z + v k J r 4 M x y z N c k N T M l 8 U 5 x w Z i a Y J o B 5 T l B g + s g Q T x e y t i P S x w s T Y n M o 2 B G / x 5 W X i 1 2 t X N f f u v N q 4 L t I o w T G c w B l 4 c A E N u I U m + E D g C Z 7 h F d 6 c o f P i v D s f 8 9 Y V p 5 g 5 g j 9 w P n 8 A n m W R i g = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" 7 f X R e u S i 2 A G X H Q b F X 8 o a g c V U X c o = \" > A A A B 8 3 i c b V B N S 8 N A E J 3 4 W e t X 1 a O X x S J 4 K k k R 1 F v R i 8 c K x h a a W D b b T b p 0 N x t 3 N 4 V S + j u 8 e F D x 6 p / x 5 r 9 x 2 + a g r Q 8 G H u / N M D M v y j j T x n W / n Z X V t f W N z d J W e X t n d 2 + / c n D 4 o G W u C P W J 5 F K 1 I 6 w p Z y n 1 D T O c t j N F s Y g 4 b U W D m 6 n f G l K l m U z v z S i j o c B J y m J G s L F S G H C Z o E C z R O D H e r d S d W v u D G i Z e A W p Q o F m t / I V 9 C T J B U 0 N 4 V j r j u d m J h x j Z R j h d F I O c k 0 z T A Y 4 o R 1 L U y y o D s e z o y f o 1 C o 9 F E t l K z V o p v 6 e G G O h 9 U h E t l N g 0 9 e L 3 l T 8 z + v k J r 4 M x y z N c k N T M l 8 U 5 x w Z i a Y J o B 5 T l B g + s g Q T x e y t i P S x w s T Y n M o 2 B G / x 5 W X i 1 2 t X N f f u v N q 4 L t I o w T G c w B l 4 c A E N u I U m + E D g C Z 7 h F d 6 c o f P i v D s f 8 9 Y V D i l e v 3 v w 3 b t o c t P X B w O O 9 G W b m + R G j U l n W t 1 G Y m 1 9 Y X C o u l 1 Z W 1 9 Y 3 z M 2 t O x n G A h M H h y w U T R 9 J w m h A H E U V I 8 1 I E M R 9 R h r + 4 C L z G w 9 E S B o G t 2 o Y E Y + j X k C 7 F C O l p b a 5 P 7 p P 3 E h Q T l J X U g 5 d j l Q f I 5 Z c p 5 X R I d R a j y N 4 d d A 2 y 1 b V G g P O E j s n Z Z C j 3 j a / 3 E 6 I Y 0 4 C h R m S s m V b k f I S J B T F j K Q l N 5 Y k Q n i A e q S l a Y A 4 k V 4 y f i i F e 1 r p w G 4 o d A U K j t X f E w n i U g 6 5 r z u z e + W 0 l 4 n / e a 1 Y d U + 9 h A Z R r E i A J 4 u 6 M Y M q h F k 6 s E M F w Y o N N U F Y U H 0 r x H 0 k E F Y 6 w 5 I O w Z 5 + e Z Y 4 R 9 W z q n V z X K 6 d 5 2 k U w Q 7 Y B R V g g x N Q A 5 e g D h The Proposed Method Hashing under the NVI Framework Inspired by the recent success of variational 
autoencoders for various NLP problems (Miao et al., 2016; Bowman et al., 2015; Yang et al., 2017; Miao et al., 2017; Shen et al., 2017b; , we approach the training of discrete (binary) latent variables from a generative perspec-tive."
                    },
                    {
                        "id": 29,
                        "string": "Let x and z denote the input document and its corresponding binary hash code, respectively."
                    },
                    {
                        "id": 30,
                        "string": "Most of the previous text hashing methods focus on modeling the encoding distribution p(z|x), or hash function, so the local/global pairwise similarity structure of documents in the original space is preserved in latent space (Zhang et al., 2010; Wang et al., 2013; Xu et al., 2015; Wang et al., 2014) ."
                    },
                    {
                        "id": 31,
                        "string": "However, the generative (decoding) process of reconstructing x from binary latent code z, i.e., modeling distribution p(x|z), has been rarely considered."
                    },
                    {
                        "id": 32,
                        "string": "Intuitively, latent codes learned from a model that accounts for the generative term should naturally encapsulate key semantic information from x because the generation/reconstruction objective is a function of p(x|z)."
                    },
                    {
                        "id": 33,
                        "string": "In this regard, the generative term provides a natural training objective for semantic hashing."
                    },
                    {
                        "id": 34,
                        "string": "We define a generative model that simultaneously accounts for both the encoding distribution, p(z|x), and decoding distribution, p(x|z), by defining approximations q φ (z|x) and q θ (x|z), via inference and generative networks, g φ (x) and g θ (z), parameterized by φ and θ, respectively."
                    },
                    {
                        "id": 35,
                        "string": "Specifically, x ∈ Z |V | + is the bag-of-words (count) representation for the input document, where |V | is the vocabulary size."
                    },
                    {
                        "id": 36,
                        "string": "Notably, we can also employ other count weighting schemes as input features x, e.g., the term frequency-inverse document frequency (TFIDF) (Manning et al., 2008) ."
                    },
                    {
                        "id": 37,
                        "string": "For the encoding distribution, a latent variable z is first inferred from the input text x, by constructing an inference network g φ (x) to approximate the true posterior distribution p(z|x) as q φ (z|x)."
                    },
                    {
                        "id": 38,
                        "string": "Subsequently, the decoder network g θ (z) maps z back into input space to reconstruct the original sequence x asx, approximating p(x|z) as q θ (x|z) (as shown in Figure 1 )."
                    },
                    {
                        "id": 39,
                        "string": "This cyclic strategy, x → z →x ≈ x, provides the latent variable z with a better ability to generalize (Miao et al., 2016) ."
                    },
                    {
                        "id": 40,
                        "string": "To tailor the NVI framework for semantic hashing, we cast z as a binary latent variable and assume a multivariate Bernoulli prior on z: p(z) ∼ Bernoulli(γ) = l i=1 γ z i i (1 − γ i ) 1−z i , where γ i ∈ [0, 1] is component i of vector γ."
                    },
                    {
                        "id": 41,
                        "string": "Thus, the encoding (approximate posterior) distribution q φ (z|x) is restricted to take the form q φ (z|x) = Bernoulli(h), where h = σ(g φ (x)), σ(·) is the sigmoid function, and g φ (·) is the (nonlinear) inference network specified as a multilayer perceptron (MLP)."
                    },
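                    {
                        "id": "sketch-encoder",
                        "string": "A minimal PyTorch sketch (not from the original paper) of the inference network just described: an MLP whose sigmoid output h parameterizes q_φ(z|x) = Bernoulli(h). Layer sizes and names are assumptions for illustration.\n\nimport torch\nimport torch.nn as nn\n\nclass InferenceNetwork(nn.Module):\n    def __init__(self, vocab_size=10000, hidden=500, n_bits=32):\n        super().__init__()\n        self.mlp = nn.Sequential(\n            nn.Linear(vocab_size, hidden), nn.ReLU(),\n            nn.Linear(hidden, hidden), nn.ReLU(),\n            nn.Linear(hidden, n_bits),\n        )\n\n    def forward(self, x):\n        # h in (0,1)^l parameterizes the Bernoulli posterior q_phi(z|x)\n        return torch.sigmoid(self.mlp(x))"
                    },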
                    {
                        "id": 42,
                        "string": "As illustrated in Figure 1 , we can obtain samples from the Bernoulli posterior either deterministically or stochastically."
                    },
                    {
                        "id": 43,
                        "string": "Suppose z is a l-bit hash code, for the deterministic binarization, we have, for i = 1, 2, ......, l: z i = 1 σ(g i φ (x))>0.5 = sign(σ(g i φ (x) − 0.5) + 1 2 , (1) where z is the binarized variable, and z i and g i φ (x) denote the i-th dimension of z and g φ (x), respectively."
                    },
                    {
                        "id": 44,
                        "string": "The standard Bernoulli sampling in (1) can be understood as setting a hard threshold at 0.5 for each representation dimension, therefore, the binary latent code is generated deterministically."
                    },
                    {
                        "id": 45,
                        "string": "Another strategy to obtain the discrete variable is to binarize h in a stochastic manner: z i = 1 σ(g i φ (x))>µ i = sign(σ(g i φ (x)) − µ i ) + 1 2 , (2) where µ i ∼ Uniform(0, 1)."
                    },
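                    {
                        "id": "sketch-binarize",
                        "string": "A minimal sketch (not from the original paper) of the two binarization strategies in (1) and (2), written against PyTorch tensors; h denotes σ(g_φ(x)).\n\nimport torch\n\ndef binarize_deterministic(h):\n    # Eq. (1): z_i = 1 if h_i > 0.5 (hard threshold at 0.5)\n    return (h > 0.5).float()\n\ndef binarize_stochastic(h):\n    # Eq. (2): z_i = 1 if h_i > mu_i, with mu_i ~ Uniform(0, 1);\n    # equivalent to sampling z_i ~ Bernoulli(h_i)\n    return (h > torch.rand_like(h)).float()"
                    },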
                    {
                        "id": 46,
                        "string": "Because of this sampling process, we do not have to assume a predefined threshold value like in (1)."
                    },
                    {
                        "id": 47,
                        "string": "Training with Binary Latent Variables To estimate the parameters of the encoder and decoder networks, we would ideally maximize the marginal distribution p(x) = p(z)p(x|z)dz."
                    },
                    {
                        "id": 48,
                        "string": "However, computing this marginal is intractable in most cases of interest."
                    },
                    {
                        "id": 49,
                        "string": "Instead, we maximize a variational lower bound."
                    },
                    {
                        "id": 50,
                        "string": "This approach is typically employed in the VAE framework (Kingma and Welling, 2013) : L vae = E q φ (z|x) log q θ (x|z)p(z) q φ (z|x) , (3) = E q φ (z|x) [log q θ (x|z)] − D KL (q φ (z|x)||p(z)), where the Kullback-Leibler (KL) divergence D KL (q φ (z|x)||p(z)) encourages the approximate posterior distribution q φ (z|x) to be close to the multivariate Bernoulli prior p(z)."
                    },
                    {
                        "id": 51,
                        "string": "In this case, D KL (q φ (z|x)|p(z)) can be written in closed-form as a function of g φ (x): D KL = g φ (x) log g φ (x) γ + (1 − g φ (x)) log 1 − g φ (x) 1 − γ ."
                    },
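                    {
                        "id": "sketch-kl",
                        "string": "A minimal sketch (not from the original paper) of the closed-form KL term in (4) between the Bernoulli posterior with parameter h = σ(g_φ(x)) and the Bernoulli(γ) prior, summed over the l code dimensions; the clamping epsilon is an assumption for numerical stability.\n\nimport torch\n\ndef bernoulli_kl(h, gamma=0.5, eps=1e-7):\n    h = h.clamp(eps, 1.0 - eps)\n    kl = h * torch.log(h / gamma) + (1 - h) * torch.log((1 - h) / (1 - gamma))\n    return kl.sum(dim=-1)  # one KL value per document"
                    },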
                    {
                        "id": 52,
                        "string": "(4) Note that the gradient for the KL divergence term above can be evaluated easily."
                    },
                    {
                        "id": 53,
                        "string": "For the first term in (3) , we should in principle estimate the influence of µ i in (2) on q θ (x|z) by averaging over the entire (uniform) noise distribution."
                    },
                    {
                        "id": 54,
                        "string": "However, a closed-form distribution does not exist since it is not possible to enumerate all possible configurations of z, especially when the latent dimension is large."
                    },
                    {
                        "id": 55,
                        "string": "Moreover, discrete latent variables are inherently incompatible with backpropagation, since the derivative of the sign function is zero for almost all input values."
                    },
                    {
                        "id": 56,
                        "string": "As a result, the exact gradients of L vae wrt the inputs before binarization would be essentially all zero."
                    },
                    {
                        "id": 57,
                        "string": "To estimate the gradients for binary latent variables, we utilize the straight-through (ST) estimator, which was first introduced by Hinton (2012) ."
                    },
                    {
                        "id": 58,
                        "string": "So motivated, the strategy here is to simply backpropagate through the hard threshold by approximating the gradient ∂z/∂φ as 1."
                    },
                    {
                        "id": 59,
                        "string": "Thus, we have: dE q φ (z|x) [log q θ (x|z)] ∂φ = dE q φ (z|x) [log q θ (x|z)] dz dz dσ(g i φ (x)) dσ(g i φ (x)) dφ ≈ dE q φ (z|x) [log q θ (x|z)] dz dσ(g i φ (x)) dφ (5) Although this is clearly a biased estimator, it has been shown to be a fast and efficient method relative to other gradient estimators for discrete variables, especially for the Bernoulli case (Bengio et al., 2013; Hubara et al., 2016; Theis et al., 2017) ."
                    },
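                    {
                        "id": "sketch-st-estimator",
                        "string": "A minimal sketch (not from the original paper) of the straight-through trick in (5) as commonly implemented in PyTorch: the forward pass uses the hard code, while detach() routes gradients through the identity path, i.e., dz/dh is approximated as 1.\n\nimport torch\n\ndef st_binarize(h):\n    z_hard = (h > torch.rand_like(h)).float()  # stochastic binarization, eq. (2)\n    # Forward: z_hard; backward: gradient flows through h (biased but effective)\n    return h + (z_hard - h).detach()"
                    },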
                    {
                        "id": 60,
                        "string": "With the ST gradient estimator, the first loss term in (3) can be backpropagated into the encoder network to fine-tune the hash function g φ (x)."
                    },
                    {
                        "id": 61,
                        "string": "For the approximate generator q θ (x|z) in (3) , let x i denote the one-hot representation of ith word within a document."
                    },
                    {
                        "id": 62,
                        "string": "Note that x = i x i is thus the bag-of-words representation for document x."
                    },
                    {
                        "id": 63,
                        "string": "To reconstruct the input x from z, we utilize a softmax decoding function written as: q(x i = w|z) = exp(z T Ex w + b w ) |V | j=1 exp(z T Ex j + b j ) , (6) where q(x i = w|z) is the probability that x i is word w ∈ V , q θ (x|z) = i q(x i = w|z) and θ = {E, b 1 , ."
                    },
                    {
                        "id": 64,
                        "string": "."
                    },
                    {
                        "id": 65,
                        "string": "."
                    },
                    {
                        "id": 66,
                        "string": ", b |V | }."
                    },
                    {
                        "id": 67,
                        "string": "Note that E ∈ R d×|V | can be interpreted as a word embedding matrix to be learned, and {b i } |V | i=1 denote bias terms."
                    },
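                    {
                        "id": "sketch-decoder",
                        "string": "A minimal sketch (not from the original paper) of the softmax decoding function in (6) as a reconstruction log-likelihood over a bag-of-words input; tensor shapes are assumptions for illustration.\n\nimport torch\nimport torch.nn.functional as F\n\ndef reconstruction_log_prob(z, x_bow, E, b):\n    # z: (batch, d) codes; x_bow: (batch, |V|) counts; E: (d, |V|); b: (|V|,)\n    logits = z @ E + b\n    log_probs = F.log_softmax(logits, dim=-1)\n    # sum log q(x_i = w | z) over all word occurrences in the document\n    return (x_bow * log_probs).sum(dim=-1)"
                    },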
                    {
                        "id": 68,
                        "string": "Intuitively, the objective in (6) encourages the discrete vector z to be close to the embeddings for every word that appear in the input document x."
                    },
                    {
                        "id": 69,
                        "string": "As shown in Section 5.3.1, meaningful semantic structures can be learned and manifested in the word embedding matrix E. Injecting Data-dependent Noise to z To reconstruct text data x from sampled binary representation z, a deterministic decoder is typically utilized (Miao et al., 2016; Chaidaroon and Fang, 2017 )."
                    },
                    {
                        "id": 70,
                        "string": "Inspired by the success of employing stochastic decoders in image hashing applications (Dai et al., 2017; Theis et al., 2017) , in our experiments, we found that injecting random Gaussian noise into z makes the decoder a more favorable regularizer for the binary codes, which in practice leads to stronger retrieval performance."
                    },
                    {
                        "id": 71,
                        "string": "Below, we invoke the rate-distortion theory to perform some further analysis, which leads to interesting findings."
                    },
                    {
                        "id": 72,
                        "string": "Learning binary latent codes z to represent a continuous distribution p(x) is a classical information theory concept known as lossy source coding."
                    },
                    {
                        "id": 73,
                        "string": "From this perspective, semantic hashing, which compresses an input document into compact binary codes, can be casted as a conventional ratedistortion tradeoff problem (Theis et al., 2017; Ballé et al., 2016) : min − log 2 R(z) Rate +β ·D(x,x) Distortion , (7) where rate and distortion denote the effective code length, i.e., the number of bits used, and the distortion introduced by the encoding/decoding sequence, respectively."
                    },
                    {
                        "id": 74,
                        "string": "Further,x is the reconstructed input and β is a hyperparameter that controls the tradeoff between the two terms."
                    },
                    {
                        "id": 75,
                        "string": "Considering the case where we have a Bernoulli prior on z as p(z) ∼ Bernoulli(γ), and x conditionally drawn from a Gaussian distribution p(x|z) ∼ N (Ez, σ 2 I)."
                    },
                    {
                        "id": 76,
                        "string": "Here, E = {e i } |V | i=1 , where e i ∈ R d can be interpreted as a codebook with |V | codewords."
                    },
                    {
                        "id": 77,
                        "string": "In our case, E corresponds to the word embedding matrix as in (6) ."
                    },
                    {
                        "id": 78,
                        "string": "For the case of stochastic latent variable z, the objective function in (3) can be written in a form similar to the rate-distortion tradeoff: min E q φ (z|x)     − log q φ (z|x) Rate + 1 2σ 2 β ||x − Ez|| 2 2 Distortion +C     , (8) where C is a constant that encapsulates the prior distribution p(z) and the Gaussian distribution normalization term."
                    },
                    {
                        "id": 79,
                        "string": "Notably, the trade-off hyperparameter β = σ −2 /2 is closely related to the variance of the distribution p(x|z)."
                    },
                    {
                        "id": 80,
                        "string": "In other words, by controlling the variance σ, the model can adaptively explore different trade-offs between the rate and distortion objectives."
                    },
                    {
                        "id": 81,
                        "string": "However, the optimal trade-offs for distinct samples may be different."
                    },
                    {
                        "id": 82,
                        "string": "Inspired by the observations above, we propose to inject data-dependent noise into latent variable z, rather than to setting the variance term σ 2 to a fixed value (Dai et al., 2017; Theis et al., 2017) ."
                    },
                    {
                        "id": 83,
                        "string": "Specifically, log σ 2 is obtained via a one-layer MLP transformation from g φ (x)."
                    },
                    {
                        "id": 84,
                        "string": "Afterwards, we sample z from N (z, σ 2 I), which then replace z in (6) to infer the probability of generating individual words (as shown in Figure 1 )."
                    },
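                    {
                        "id": "sketch-noise",
                        "string": "A minimal sketch (not from the original paper) of the data-dependent noise injection just described: log σ² comes from a one-layer MLP on the encoder features g_φ(x), and the noisy code replaces z in (6). Layer sizes and names are assumptions for illustration.\n\nimport torch\nimport torch.nn as nn\n\nclass DataDependentNoise(nn.Module):\n    def __init__(self, hidden=500, n_bits=32):\n        super().__init__()\n        self.log_var = nn.Linear(hidden, n_bits)  # one-layer MLP for log sigma^2\n\n    def forward(self, z, enc_features):\n        sigma = torch.exp(0.5 * self.log_var(enc_features))\n        return z + sigma * torch.randn_like(z)  # sample from N(z, sigma^2 I)"
                    },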
                    {
                        "id": 85,
                        "string": "As a result, the variances are different for every input document x, and thus the model is provided with additional flexibility to explore various trade-offs between rate and distortion for different training observations."
                    },
                    {
                        "id": 86,
                        "string": "Although our decoder is not a strictly Gaussian distribution, as in (6) , we found empirically that injecting data-dependent noise into z yields strong retrieval results, see Section 5.1."
                    },
                    {
                        "id": 87,
                        "string": "Supervised Hashing The proposed Neural Architecture for Semantic Hashing (NASH) can be extended to supervised hashing, where a mapping from latent variable z to labels y is learned, here parametrized by a twolayer MLP followed by a fully-connected softmax layer."
                    },
                    {
                        "id": 88,
                        "string": "To allow the model to explore and balance between maximizing the variational lower bound in (3) and minimizing the discriminative loss, the following joint training objective is employed: L = −L vae (θ, φ; x) + αL dis (η; z, y)."
                    },
                    {
                        "id": 89,
                        "string": "(9) where η refers to parameters of the MLP classifier and α controls the relative weight between the variational lower bound (L vae ) and discriminative loss (L dis ), defined as the cross-entropy loss."
                    },
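                    {
                        "id": "sketch-supervised",
                        "string": "A minimal sketch (not from the original paper) of the joint objective in (9), combining the negative ELBO with an α-weighted cross-entropy loss from the MLP classifier on z; argument names are assumptions.\n\nimport torch.nn.functional as F\n\ndef supervised_loss(elbo, class_logits, labels, alpha=1.0):\n    # L = -L_vae + alpha * L_dis, with L_dis the cross-entropy loss\n    return -elbo.mean() + alpha * F.cross_entropy(class_logits, labels)"
                    },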
                    {
                        "id": 90,
                        "string": "The parameters {θ, φ, η} are learned end-to-end via Monte Carlo estimation."
                    },
                    {
                        "id": 91,
                        "string": "Experimental Setup Datasets We use the following three standard publicly available datasets for training and evaluation: (i) Reuters21578, containing 10,788 news documents, which have been classified into 90 different categories."
                    },
                    {
                        "id": 92,
                        "string": "(ii) 20Newsgroups, a collection of 18,828 newsgroup documents, which are categorized into 20 different topics."
                    },
                    {
                        "id": 93,
                        "string": "(iii) TMC (stands for SIAM text mining competition), containing air traffic reports provided by NASA."
                    },
                    {
                        "id": 94,
                        "string": "TMC consists 21,519 training documents divided into 22 different categories."
                    },
                    {
                        "id": 95,
                        "string": "To make direct comparison with prior works, we employed the TFIDF features on these datasets supplied by (Chaidaroon and Fang, 2017) , where the vocabulary sizes for the three datasets are set to 10,000, 7,164 and 20,000, respectively."
                    },
                    {
                        "id": 96,
                        "string": "Training Details For the inference networks, we employ a feedforward neural network with 2 hidden layers (both with 500 units) using the ReLU non-linearity activation function, which transform the input documents, i.e., TFIDF features in our experiments, into a continuous representation."
                    },
                    {
                        "id": 97,
                        "string": "Empirically, we found that stochastic binarization as in (2) shows stronger performance than deterministic binarization, and thus use the former in our experiments."
                    },
                    {
                        "id": 98,
                        "string": "However, we further conduct a systematic ablation study in Section 5.2 to compare the two binarization strategies."
                    },
                    {
                        "id": 99,
                        "string": "Our model is trained using Adam (Kingma and Ba, 2014), with a learning rate of 1 × 10 −3 for all parameters."
                    },
                    {
                        "id": 100,
                        "string": "We decay the learning rate by a factor of 0.96 for every 10,000 iterations."
                    },
                    {
                        "id": 101,
                        "string": "Dropout (Srivastava et al., 2014) is employed on the output of encoder networks, with the rate selected from {0.7, 0.8, 0.9} on the validation set."
                    },
                    {
                        "id": 102,
                        "string": "To facilitate comparisons with previous methods, we set the dimension of z, i.e., the number of bits within the hashing code) as 8, 16, 32, 64, or 128."
                    },
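                    {
                        "id": "sketch-training",
                        "string": "A minimal sketch (not from the original paper) of the optimization setup described above: Adam at 1e-3 with a 0.96 learning-rate decay every 10,000 iterations. The stand-in model, data, and loss are placeholders, assumed purely for illustration.\n\nimport torch\nimport torch.nn as nn\n\nmodel = nn.Linear(10000, 32)  # stand-in for the NASH encoder/decoder stack\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\nscheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10000, gamma=0.96)\n\nfor step in range(100):  # training-loop sketch\n    x = torch.rand(64, 10000)  # stand-in TFIDF batch\n    optimizer.zero_grad()\n    loss = model(x).pow(2).mean()  # stand-in for the negative ELBO\n    loss.backward()\n    optimizer.step()\n    scheduler.step()  # stepped per iteration, so decay fires every 10k iterations"
                    },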
                    {
                        "id": 103,
                        "string": "Baselines We evaluate the effectiveness of our framework on both unsupervised and supervised semantic hashing tasks."
                    },
                    {
                        "id": 104,
                        "string": "We consider the following unsupervised baselines for comparisons: Locality Sensitive Hashing (LSH) (Datar et al., 2004) , Stack Restricted Boltzmann Machines (S-RBM) (Salakhutdinov and Hinton, 2009 ), Spectral Hashing (SpH) (Weiss et al., 2009 ), Self-taught Hashing (STH) (Zhang et al., 2010) and Variational Deep Semantic Hashing (VDSH) (Chaidaroon and Fang, 2017) ."
                    },
                    {
                        "id": 105,
                        "string": "For supervised semantic hashing, we also compare NASH against a number of baselines: Supervised Hashing with Kernels (KSH) (Liu et al., 2012) , Semantic Hashing using Tags and Topic Modeling (SHTTM) (Wang et al., 2013) and Supervised VDSH (Chaidaroon and Fang, 2017) ."
                    },
                    {
                        "id": 106,
                        "string": "It is worth noting that unlike all these baselines, our NASH model is trained end-to-end in one-step."
                    },
                    {
                        "id": 107,
                        "string": "Evaluation Metrics To evaluate the hashing codes for similarity search, we consider each document in the testing set as a query document."
                    },
                    {
                        "id": 108,
                        "string": "Similar documents to the query in the corresponding training set need to be retrieved based on the Hamming distance of their hashing codes, i.e."
                    },
                    {
                        "id": 109,
                        "string": "number of different bits."
                    },
                    {
                        "id": 110,
                        "string": "To facilitate comparison with prior work (Wang et al., 2013; Chaidaroon and Fang, 2017) , the performance is measured with precision."
                    },
                    {
                        "id": 111,
                        "string": "Specifically, during testing, for a query document, we first retrieve the 100 nearest/closest documents according to the Hamming distances of the corresponding hash codes (i.e., the number of different bits)."
                    },
                    {
                        "id": 112,
                        "string": "We then examine the percentage of documents among these 100 retrieved ones that belong to the same label (topic) with the query document (we consider documents having the same label as relevant pairs)."
                    },
                    {
                        "id": 113,
                        "string": "The ratio of the number of relevant documents to the number of retrieved documents (fixed value of 100) is calculated as the precision score."
                    },
                    {
                        "id": 114,
                        "string": "The precision scores are further averaged over all test (query) documents."
                    },
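                    {
                        "id": "sketch-precision",
                        "string": "A minimal sketch (not from the original paper) of the evaluation protocol just described: for each query code, retrieve the 100 training documents with the smallest Hamming distances and measure label agreement; array layouts are assumptions.\n\nimport numpy as np\n\ndef precision_at_100(query_codes, query_labels, train_codes, train_labels):\n    # codes: (n, l) arrays of 0/1; Hamming distance = number of differing bits\n    precisions = []\n    for q, y in zip(query_codes, query_labels):\n        dists = (train_codes != q).sum(axis=1)\n        top100 = np.argsort(dists, kind='stable')[:100]\n        precisions.append(float((train_labels[top100] == y).mean()))\n    return float(np.mean(precisions))"
                    },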
                    {
                        "id": 115,
                        "string": "Experimental Results We experimented with four variants for our NASH model: (i) NASH: with deterministic decoder; (ii) NASH-N: with fixed random noise injected to decoder; (iii) NASH-DN: with data-dependent noise injected to decoder; (iv) NASH-DN-S: NASH-DN with supervised information during training."
                    },
                    {
                        "id": 116,
                        "string": "Table 1 presents the results of all models on Reuters dataset."
                    },
                    {
                        "id": 117,
                        "string": "Regarding unsupervised semantic hashing, all the NASH variants consistently outperform the baseline methods by a substantial margin, indicating that our model makes the most effective use of unlabeled data and manage to assign similar hashing codes, i.e., with small Hamming distance to each other, to documents that belong to the same label."
                    },
                    {
                        "id": 118,
                        "string": "It can be also observed that the injection of noise into the decoder networks has improved the robustness of learned binary representations, resulting in better retrieval performance."
                    },
                    {
                        "id": 119,
                        "string": "More importantly, by making the variances of noise adaptive to the specific input, our NASH-DN achieves even better results, compared with NASH-N, highlighting the importance of exploring/learning the trade-off between rate and distortion objectives by the data itself."
                    },
                    {
                        "id": 120,
                        "string": "We observe the same trend and superiority of our NASH-DN models on the other two benchmarks, as shown in Tables 3 and 4 ."
                    },
                    {
                        "id": 121,
                        "string": "Semantic Hashing Evaluation Another observation is that the retrieval results tend to drop a bit when we set the length of hashing codes to be 64 or larger, which also happens for some baseline models."
                    },
                    {
                        "id": 122,
                        "string": "This phenomenon has been reported previously in ; Liu et al."
                    },
                    {
                        "id": 123,
                        "string": "(2012) ; Wang et al."
                    },
                    {
                        "id": 124,
                        "string": "(2013) ; Chaidaroon and Fang (2017) , and the reasons could be twofold: (i) for longer codes, the number of data points that are assigned to a certain binary code decreases exponentially."
                    },
                    {
                        "id": 125,
                        "string": "As a result, many queries may fail to return any neighbor documents ; (ii) considering the size of training data, it is likely that the model may overfit with long hash codes (Chaidaroon and Fang, 2017) ."
                    },
                    {
                        "id": 126,
                        "string": "However, even with longer hashing codes, Word  weapons  medical  companies  define  israel  book   NASH   gun  treatment  company  definition  israeli  books  guns  disease  market  defined  arabs  english  weapon  drugs  afford  explained  arab  references  armed  health  products  discussion  jewish  learning  assault  medicine  money  knowledge  jews  reference   NVDM   guns  medicine  expensive  defined  israeli  books  weapon  health  industry  definition  arab  reference  gun  treatment  company  printf  arabs  guide  militia  disease  market  int  lebanon  writing  armed  patients  buy  sufficient  lebanese  pages   Table 2 : The five nearest words in the semantic space learned by NASH, compared with the results from NVDM (Miao et al., 2016) ."
                    },
                    {
                        "id": 127,
                        "string": "our NASH models perform stronger than the baselines in most cases (except for the 20Newsgroups dataset), suggesting that NASH can effectively allocate documents to informative/meaningful hashing codes even with limited training data."
                    },
                    {
                        "id": 128,
                        "string": "We also evaluate the effectiveness of NASH in a supervised scenario on the Reuters dataset, where the label or topic information is utilized during training."
                    },
                    {
                        "id": 129,
                        "string": "As shown in Figure 2 , our NASH-DN-S model consistently outperforms several supervised semantic hashing baselines, with various choices of hashing bits."
                    },
                    {
                        "id": 130,
                        "string": "Notably, our model exhibits higher Top-100 retrieval precision than VDSH-S and VDSH-SP, proposed by Chaidaroon and Fang (2017) ."
                    },
                    {
                        "id": 131,
                        "string": "This may be attributed to the fact that in VDSH models, the continuous embeddings are not optimized with their future binarization in mind, and thus could hurt the relevance of learned binary codes."
                    },
                    {
                        "id": 132,
                        "string": "On the contrary, our model is optimized in an end-to-end manner, where the gradients are directly backpropagated to the inference network (through the binary/discrete latent variable), and thus gives rise to a more robust hash function."
                    },
                    {
                        "id": 133,
                        "string": "Ablation study The effect of stochastic sampling As described in Section 3, the binary latent variables z in NASH can be either deterministically (via (1)) or stochastically (via (2)) sampled."
                    },
                    {
                        "id": 134,
                        "string": "We compare these two types of binarization functions in the case of unsupervised hashing."
                    },
                    {
                        "id": 135,
                        "string": "As illustrated in Figure 3 , stochastic sampling shows stronger retrieval results on all three datasets, indicating that endowing the sampling process of latent variables with more stochasticity improves the learned representations."
                    },
                    {
                        "id": 136,
                        "string": "The effect of encoder/decoder networks Under the variational framework introduced here, the encoder network, i.e., hash function, and decoder network are jointly optimized to abstract semantic features from documents."
                    },
                    {
                        "id": 137,
                        "string": "An interesting question concerns what types of network should be leveraged for each part of our NASH model."
                    },
                    {
                        "id": 138,
                        "string": "In this regard, we further investigate the effect of Category Title/Subject 8-bit code 16-bit code Baseball Dave Kingman for the hall of fame 1 1 1 0 1 0 0 1 0 0 1 0 1 1 0 1 0 0 0 0 0 1 1 0 Time of game 1 1 1 1 1 0 0 1 0 0 1 0 1 0 0 1 0 0 0 0 0 1 1 1 Game score report 1 1 1 0 1 0 0 1 0 0 1 0 1 1 0 1 0 0 0 0 0 1 1 0 Why is Barry Bonds not batting 4th?"
                    },
                    {
                        "id": 139,
                        "string": "1 1 1 0 1 1 0 1 0 0 1 1 1 1 0 1 0 0 0 0 0 1 1 0 Electronics Building a UV flashlight 1 0 1 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 1 1 How to drive an array of LEDs 1 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 1 2% silver solder 1 1 0 1 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 1 1 Subliminal message flashing on TV 1 0 1 1 0 1 0 0 0 0 1 0 0 1 1 0 0 0 1 0 1 0 0 1  using an encoder or decoder with different nonlinearity, ranging from a linear transformation to two-layer MLPs."
                    },
                    {
                        "id": 140,
                        "string": "We employ a base model with an encoder of two-layer MLPs and a linear decoder (the setup described in Section 3), and the ablation study results are shown in Table 6 ."
                    },
                    {
                        "id": 141,
                        "string": "Network Encoder Decoder linear 0.5844 0.6225 one-layer MLP 0.6187 0.3559 two-layer MLP 0.6225 0.1047 Table 6 : Ablation study with different encoder/decoder networks."
                    },
                    {
                        "id": 142,
                        "string": "It is observed that for the encoder networks, increasing the non-linearity by stacking MLP layers leads to better empirical results."
                    },
                    {
                        "id": 143,
                        "string": "In other words, endowing the hash function with more modeling capacity is advantageous to retrieval tasks."
                    },
                    {
                        "id": 144,
                        "string": "However, when we employ a non-linear network for the decoder, the retrieval precision drops dramatically."
                    },
                    {
                        "id": 145,
                        "string": "It is worth noting that the only difference between linear transformation and one-layer MLP is whether a non-linear activation function is employed or not."
                    },
                    {
                        "id": 146,
                        "string": "This observation may be attributed the fact that the decoder networks can be considered as a sim-ilarity measure between latent variable z and the word embeddings E k for every word, and the probabilities for words that present in the document is maximized to ensure that z is informative."
                    },
                    {
                        "id": 147,
                        "string": "As a result, if we allow the decoder to be too expressive (e.g., a one-layer MLP), it is likely that we will end up with a very flexible similarity measure but relatively less meaningful binary representations."
                    },
                    {
                        "id": 148,
                        "string": "This finding is consistent with several image hashing methods, such as SGH (Dai et al., 2017) or binary autoencoder (Carreira-Perpinán and Raziperchikolaei, 2015) , where a linear decoder is typically adopted to obtain promising retrieval results."
                    },
                    {
                        "id": 149,
                        "string": "However, our experiments may not speak for other choices of encoder-decoder architectures, e.g., LSTM-based sequence-to-sequence models  or DCNN-based autoencoder (Zhang et al., 2017) ."
                    },
                    {
                        "id": 150,
                        "string": "Qualitative Analysis Analysis of Semantic Information To understand what information has been learned in our NASH model, we examine the matrix E ∈ R d×l in (6)."
                    },
                    {
                        "id": 151,
                        "string": "Similar to (Miao et al., 2016; Larochelle and Lauly, 2012) , we select the 5 nearest words according to the word vectors learned from NASH and compare with the corresponding results from NVDM."
                    },
                    {
                        "id": 152,
                        "string": "As shown in Table 2 , although our NASH model contains a binary latent variable, rather than a continuous one as in NVDM, it also effectively group semantically-similar words together in the learned vector space."
                    },
                    {
                        "id": 153,
                        "string": "This further demonstrates that the proposed generative framework manages to bypass the binary/discrete constraint and is able to abstract useful semantic information from documents."
                    },
                    {
                        "id": 154,
                        "string": "Case Study In Table 5 , we show some examples of the learned binary hashing codes on 20Newsgroups dataset."
                    },
                    {
                        "id": 155,
                        "string": "We observe that for both 8-bit and 16bit cases, NASH typically compresses documents with shared topics into very similar binary codes."
                    },
                    {
                        "id": 156,
                        "string": "On the contrary, the hashing codes for documents with different topics exhibit much larger Hamming distance."
                    },
                    {
                        "id": 157,
                        "string": "As a result, relevant documents can be efficiently retrieved by simply computing their Hamming distances."
                    },
                    {
                        "id": 158,
                        "string": "Conclusions This paper presents a first step towards end-to-end semantic hashing, where the binary/discrete constraints are carefully handled with an effective gradient estimator."
                    },
                    {
                        "id": 159,
                        "string": "A neural variational framework is introduced to train our model."
                    },
                    {
                        "id": 160,
                        "string": "Motivated by the connections between the proposed method and rate-distortion theory, we inject data-dependent noise into the Bernoulli latent variable at the training stage."
                    },
                    {
                        "id": 161,
                        "string": "The effectiveness of our framework is demonstrated with extensive experiments."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 19
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 20,
                        "end": 28
                    },
                    {
                        "section": "Hashing under the NVI Framework",
                        "n": "3.1",
                        "start": 29,
                        "end": 46
                    },
                    {
                        "section": "Training with Binary Latent Variables",
                        "n": "3.2",
                        "start": 47,
                        "end": 68
                    },
                    {
                        "section": "Injecting Data-dependent Noise to z",
                        "n": "3.3",
                        "start": 69,
                        "end": 86
                    },
                    {
                        "section": "Supervised Hashing",
                        "n": "3.4",
                        "start": 87,
                        "end": 90
                    },
                    {
                        "section": "Datasets",
                        "n": "4.1",
                        "start": 91,
                        "end": 95
                    },
                    {
                        "section": "Training Details",
                        "n": "4.2",
                        "start": 96,
                        "end": 102
                    },
                    {
                        "section": "Baselines",
                        "n": "4.3",
                        "start": 103,
                        "end": 106
                    },
                    {
                        "section": "Evaluation Metrics",
                        "n": "4.4",
                        "start": 107,
                        "end": 114
                    },
                    {
                        "section": "Experimental Results",
                        "n": "5",
                        "start": 115,
                        "end": 120
                    },
                    {
                        "section": "Semantic Hashing Evaluation",
                        "n": "5.1",
                        "start": 121,
                        "end": 132
                    },
                    {
                        "section": "The effect of stochastic sampling",
                        "n": "5.2.1",
                        "start": 133,
                        "end": 134
                    },
                    {
                        "section": "The effect of encoder/decoder networks",
                        "n": "5.2.2",
                        "start": 135,
                        "end": 149
                    },
                    {
                        "section": "Analysis of Semantic Information",
                        "n": "5.3.1",
                        "start": 150,
                        "end": 153
                    },
                    {
                        "section": "Case Study",
                        "n": "5.3.2",
                        "start": 154,
                        "end": 157
                    },
                    {
                        "section": "Conclusions",
                        "n": "6",
                        "start": 158,
                        "end": 161
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/955-Table1-1.png",
                        "caption": "Table 1: Precision of the top 100 retrieved documents on Reuters dataset (Unsupervised hashing).",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 289.44,
                            "y1": 61.44,
                            "y2": 163.2
                        }
                    },
                    {
                        "filename": "../figure/image/955-Figure2-1.png",
                        "caption": "Figure 2: Precision of the top 100 retrieved documents on Reuters dataset (Supervised hashing), compared with other supervised baselines.",
                        "page": 5,
                        "bbox": {
                            "x1": 328.32,
                            "x2": 505.91999999999996,
                            "y1": 66.24,
                            "y2": 186.23999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/955-Table6-1.png",
                        "caption": "Table 6: Ablation study with different encoder/decoder networks.",
                        "page": 7,
                        "bbox": {
                            "x1": 101.75999999999999,
                            "x2": 261.12,
                            "y1": 483.84,
                            "y2": 540.0
                        }
                    },
                    {
                        "filename": "../figure/image/955-Figure3-1.png",
                        "caption": "Figure 3: The precisions of the top 100 retrieved documents for NASH-DN with stochastic or deterministic binary latent variables.",
                        "page": 7,
                        "bbox": {
                            "x1": 104.64,
                            "x2": 261.12,
                            "y1": 202.56,
                            "y2": 312.96
                        }
                    },
                    {
                        "filename": "../figure/image/955-Table5-1.png",
                        "caption": "Table 5: Examples of learned compact hashing codes on 20Newsgroups dataset.",
                        "page": 7,
                        "bbox": {
                            "x1": 108.0,
                            "x2": 489.12,
                            "y1": 61.44,
                            "y2": 161.28
                        }
                    },
                    {
                        "filename": "../figure/image/955-Figure1-1.png",
                        "caption": "Figure 1: NASH for end-to-end semantic hashing. The inference network maps x→ z using an MLP and the generative network recovers x as z → x̂.",
                        "page": 1,
                        "bbox": {
                            "x1": 328.8,
                            "x2": 504.0,
                            "y1": 60.48,
                            "y2": 162.72
                        }
                    },
                    {
                        "filename": "../figure/image/955-Table4-1.png",
                        "caption": "Table 4: Precision of the top 100 retrieved documents on TMC dataset.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 289.44,
                            "y1": 451.68,
                            "y2": 620.16
                        }
                    },
                    {
                        "filename": "../figure/image/955-Table2-1.png",
                        "caption": "Table 2: The five nearest words in the semantic space learned by NASH, compared with the results from NVDM (Miao et al., 2016).",
                        "page": 6,
                        "bbox": {
                            "x1": 132.96,
                            "x2": 464.15999999999997,
                            "y1": 61.44,
                            "y2": 180.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/955-Table3-1.png",
                        "caption": "Table 3: Precision of the top 100 retrieved documents on 20Newsgroups dataset.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 289.44,
                            "y1": 233.76,
                            "y2": 402.24
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-1"
        },
        {
            "slides": {
                "0": {
                    "title": "Abstract",
                    "text": [
                        "Emotions, a complex state of feeling results in physical and psychological changes that influence human behavior. Thus, in order to extract the emotional key phrases from psychological texts, here, we have presented a phrase level emotion identification and classification system. The system takes pre- defined emotional statements of seven basic emotion classes",
                        "(anger, disgust, fear, guilt, joy, sadness and shame) as input and extracts seven types of emotional trigrams. The trigrams were represented as Context Vectors. Between a pair of",
                        "Context Vectors, an Affinity Score was calculated based on the law of gravitation with respect to different distance metrics",
                        "(e.g., Chebyshev, Euclidean and Hamming)."
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "5": {
                    "title": "Context Windows",
                    "text": [
                        "The tokenized words were grouped to form trigrams in order to grasp the roles of the previous and next tokens with respect to the target token.",
                        "(CW) to acquire the emotional phrases."
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "6": {
                    "title": "Context Windows contd",
                    "text": [
                        "It is considered that, in each of the Context Windows, the first word appears as a non-affect word, second word as an affect word, and third word as a non-affect word (<NAW1>, <AW>,",
                        "A few example patterns of the CWs which follows the pattern",
                        "and, sorry, just (Shame)"
                    ],
                    "page_nums": [
                        11,
                        12
                    ],
                    "images": []
                },
                "8": {
                    "title": "Similar and Dissimilar NAWs",
                    "text": [
                        "It was observed that the stop words are mostly present in",
                        "<NAW1, AW, NAW2> pattern where similar and dissimilar",
                        "NAWs are appeared before and after their corresponding CWs."
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "9": {
                    "title": "Similar and Dissimilar NAWs contd",
                    "text": [
                        "NAW1= Non Affect Word1; AW=Affect Word; NAW2=Non Affect Word2"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "16": {
                    "title": "Distance Metrics",
                    "text": [
                        "Chebyshev distance (Cd) = max |xi yi | where xi and yi represents two vectors.",
                        "Euclidean distance (Ed) = ||x y||2 for vectors x and y.",
                        "Hamming distance (Hd) = (c01 c10) / n where cij is the number of occurrence in the boolean vectors x and y and x[k] = i and y[k] = j for k < n. Hamming distance denotes the proportion of disagreeing components in x and y."
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": []
                },
                "17": {
                    "title": "POS Tagged Context Windows and POS Tagged Windows",
                    "text": [
                        "The sentences were POS tagged using the Stanford POS",
                        "Tagger and the POS tagged Context Windows were extracted and termed as PTCW. Similarly, the POS tag sequence from each of the PTCWs were extracted and named each as POS"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "18": {
                    "title": "Count of CW PTCW PTW",
                    "text": [
                        "Figurel:Count of CW,PTCW and PTW",
                        "= No of POS tagged Context Window(CW)",
                        "= No of Unique POS tagged Context Window(CW)",
                        "= No of Unique PTW",
                        "Anger Disgust Fear Guilt Joy Sadness Shame Emotions"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "19": {
                    "title": "Total Count of CW PTCW PTW",
                    "text": [
                        "Figure 2:Total Count of CW,PTCW and PTW",
                        "Total CW Totla PTCW Total PTW Different windows"
                    ],
                    "page_nums": [
                        26
                    ],
                    "images": []
                },
                "20": {
                    "title": "TF and TF IDF Measure",
                    "text": [
                        "The Term Frequencies (TFs) and the Inverse Document",
                        "Frequencies (IDFs) of the CWs for each of the emotion classes were calculated. In order to identify different ranges of the TF and TF-IDF scores, the minimum and maximum values of the",
                        "TF and the variance of TF were calculated for each of the"
                    ],
                    "page_nums": [
                        27
                    ],
                    "images": []
                },
                "26": {
                    "title": "Conclusion",
                    "text": [
                        "In this paper, vector formation was done for each of the",
                        "Context Windows; TF and TF-IDF measures were calculated.",
                        "The calculated affinity score, depending on the distance values was inspired from Newton's law of gravitation. To classify these CWs, BayesNet, J48, NaivebayesSimple and",
                        "DecisionTable classifiers is used."
                    ],
                    "page_nums": [
                        34
                    ],
                    "images": []
                },
                "27": {
                    "title": "Future Work",
                    "text": [
                        "In future, we would like to incorporate more number of lexicons to identify and classify emotional expressions.",
                        "Moreover, we are planning to include associative learning process to identify some important rules for classification."
                    ],
                    "page_nums": [
                        35
                    ],
                    "images": []
                }
            },
            "paper_title": "Identification and Classification of Emotional Key Phrases from Psycho- logical Texts",
            "paper_id": "956",
            "paper": {
                "title": "Identification and Classification of Emotional Key Phrases from Psycho- logical Texts",
                "abstract": "Emotions, a complex state of feeling results in physical and psychological changes that influence human behavior. Thus, in order to extract the emotional key phrases from psychological texts, here, we have presented a phrase level emotion identification and classification system. The system takes pre-defined emotional statements of seven basic emotion classes (anger, disgust, fear, guilt, joy, sadness and shame) as input and extracts seven types of emotional trigrams. The trigrams were represented as Context Vectors. Between a pair of Context Vectors, an Affinity Score was calculated based on the law of gravitation with respect to different distance metrics (e.g., Chebyshev, Euclidean and Hamming). The words, Part-Of-Speech (POS) tags, TF-IDF scores, variance along with Affinity Score and ranked score of the vectors were employed as important features in a supervised classification framework after a rigorous analysis. The comparative results carried out for four different classifiers e.g., NaiveBayes, J48, Decision Tree and BayesNet show satisfactory performances.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Human emotions are the most complex and unique features to be described."
                    },
                    {
                        "id": 1,
                        "string": "If we ask someone regarding emotion, he or she will reply simply that it is a 'feeling'."
                    },
                    {
                        "id": 2,
                        "string": "Then, the obvious question that comes into our mind is about the definition of feeling."
                    },
                    {
                        "id": 3,
                        "string": "It is observed that such terms are difficult to define and even more difficult to understand completely."
                    },
                    {
                        "id": 4,
                        "string": "Ekman (1980) proposed six basic emotions (anger, disgust, fear, guilt, joy and sadness) that have a shared meaning on the level of facial expressions across cultures (Scherer, 1997; Scher-er and Wallbott, 1994) ."
                    },
                    {
                        "id": 5,
                        "string": "Psychological texts contain huge number of emotional words because psychology and emotions are inter-wined, though they are different (Brahmachari et.al, 2013) ."
                    },
                    {
                        "id": 6,
                        "string": "A phrase that contains more than one word can be a better way of representing emotions than a single word."
                    },
                    {
                        "id": 7,
                        "string": "Thus, the emotional phrase identification and their classification from text have great importance in Natural Language Processing (NLP)."
                    },
                    {
                        "id": 8,
                        "string": "In the present work, we have extracted seven different types of emotional statements (anger, disgust, fear, guilt, joy, sadness and shame) from the Psychological corpus."
                    },
                    {
                        "id": 9,
                        "string": "Each of the emotional statements was tokenized; the tokens were grouped in trigrams and considered as Context Vectors."
                    },
                    {
                        "id": 10,
                        "string": "These Context Vectors are POS tagged and corresponding TF and TF-IDF scores were measured for considering them as important features or not."
                    },
                    {
                        "id": 11,
                        "string": "In addition, the Affinity Scores were calculated for each pair of Context Vectors based on different distance metrics (Chebyshev, Euclidean and Hamming) ."
                    },
                    {
                        "id": 12,
                        "string": "Such features lead to apply different classification methods like NaiveBayes, J48, Decision Tree and BayesNet and after that the results are compared."
                    },
                    {
                        "id": 13,
                        "string": "The route map for this paper is the Related Work (Section 2), Data Preprocessing Framework (Section 3) followed by Feature Analysis and Classification framework (Section 4) and result analysis (Section 5) along with the improvement due to ranking."
                    },
                    {
                        "id": 14,
                        "string": "Finally, we have concluded the discussion (Section 6)."
                    },
                    {
                        "id": 15,
                        "string": "Strapparava and Valitutti (2004) developed the WORDNET-AFFECT, a lexical resource that assigns one or more affective labels such as emotion, mood, trait, cognitive state, physical state, behavior, attitude and sensation etc to a number of WORDNET synsets."
                    },
                    {
                        "id": 16,
                        "string": "A detailed annotation scheme that identifies key components and properties of opinions and emotions in language has been described in (Wiebe et al., 2005) ."
                    },
                    {
                        "id": 17,
                        "string": "The authors in (Kobayashi et al., 2004) also developed an opinion lexicon out of their annotated corpora."
                    },
                    {
                        "id": 18,
                        "string": "Takamura et al."
                    },
                    {
                        "id": 19,
                        "string": "(2005) extracted semantic orientation of words according to the spin model, where the semantic orientation of words propagates in two possible directions like electrons."
                    },
                    {
                        "id": 20,
                        "string": "Esuli and Sebastiani's (2006) approach to develop the SentiWord-Net is an adaptation to synset classification based on the training of ternary classifiers for deciding positive and negative (P-N) polarity."
                    },
                    {
                        "id": 21,
                        "string": "Each of the ternary classifiers is generated using the Semisupervised rules."
                    },
                    {
                        "id": 22,
                        "string": "Related Work On the other hand, Mohammad, et al., (2010) has performed an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speech."
                    },
                    {
                        "id": 23,
                        "string": "The authors in Bandyopadhyay, 2009, 2010) created the emotion lexicon and systems for Bengali language."
                    },
                    {
                        "id": 24,
                        "string": "The development of SenticNet (Cambria et al., 2010) was inspired later by (Poria et al., 2013) ."
                    },
                    {
                        "id": 25,
                        "string": "The authors developed an enriched SenticNet with affective information by assigning emotion labels."
                    },
                    {
                        "id": 26,
                        "string": "Similarly, ConceptNet 1 is a multilingual knowledge base, representing words and phrases that people use and the common-sense relationships between them."
                    },
                    {
                        "id": 27,
                        "string": "Balahur et al., (2012) had shown that the task of emotion detection from texts such as the one in the ISEAR corpus (where little or no lexical clues of affect are present) can be best tackled using approaches based on commonsense knowledge."
                    },
                    {
                        "id": 28,
                        "string": "In this sense, EmotiNet, apart from being a precise resource for classifying emotions in such examples, has the advantage of being extendable with external sources, thus increasing the recall of the methods employing it."
                    },
                    {
                        "id": 29,
                        "string": "Patra et al., (2013) adopted the Potts model for the probability modeling of the lexical network that was constructed by connecting each pair of words in which one of the two words appears in the gloss of the other."
                    },
                    {
                        "id": 30,
                        "string": "In contrast to the previous approaches, the present task comprises of classifying the emotional phrases by forming Context Vectors and the experimentation with simple features like POS, TF-IDF and Affinity Score followed by the computation of 1 http://conceptnet5.media.mit.edu/ similarities based on different distance metrics help in making decisions to correctly classify the emotional phrases."
                    },
                    {
                        "id": 31,
                        "string": "3 Data Preprocessing Framework Corpus Preparation The emotional statements were collected from the ISEAR 7 (International Survey on Emotion Antecedents and Reactions) database."
                    },
                    {
                        "id": 32,
                        "string": "Each of the emotion classes contains the emotional statements given by the respondents as answers based on some predefined questions."
                    },
                    {
                        "id": 33,
                        "string": "Student respondents, both psychologists and non-psychologists were asked to report situations in which they had experienced all of the 7 major emotions (anger, disgust, fear, guilt, joy, sadness, shame) ."
                    },
                    {
                        "id": 34,
                        "string": "The final data set contains reports of 3000 respondents from 37 countries."
                    },
                    {
                        "id": 35,
                        "string": "The statements were split in sentences and tokenized into words and the statistics were presented in Table 1."
                    },
                    {
                        "id": 36,
                        "string": "It is found that only 1096 statements belong to anger, disgust sadness and shame classes whereas the fear, guilt and joy classes contain 1095, 1093 and 1094 different statements, respectively."
                    },
                    {
                        "id": 37,
                        "string": "Since each statement may contain multiple sentences, so after sentence tokenization, it is observed that the anger and fear classes contain the maximum number of sentences."
                    },
                    {
                        "id": 38,
                        "string": "Similarly, it is observed that the anger class contains the maximum number of tokenized words."
                    },
                    {
                        "id": 39,
                        "string": "The tokenized words were grouped to form trigrams in order to grasp the roles of the previous and next tokens with respect to the target token."
                    },
                    {
                        "id": 40,
                        "string": "Thus, each of the trigrams was considered as a Context Window (CW) to acquire the emotional phrases."
                    },
                    {
                        "id": 41,
                        "string": "The updated version of the standard word lists of the WordNet Affect (Strapparava, and Vali-tutti, 2004 ) was collected and it is observed that the total of 2,958 affect words is present."
                    },
                    {
                        "id": 42,
                        "string": "It is considered that, in each of the Context Windows, the first word appears as a non-affect word, second word as an affect word, and third word as a non-affect word (<NAW 1 >, <AW>, <NAW 2 >)."
                    },
                    {
                        "id": 43,
                        "string": "It is observed from the statistics of CW as shown in Table 2 that the anger class contains the maximum number of trigrams (20, 785) and joy class has the minimum number of trigrams (15, 743) whereas only the fear class contains the maximum number of trigrams (1,573) that follow the CW pattern."
                    },
                    {
                        "id": 44,
                        "string": "A few example patterns of the CWs which follows the pattern (<NAW 1 >, <AW>, <NAW 2 >) are \"advices, about, problems\" (Anger), \"already, frightened, us\" (Fear), \"always, joyous, one\" (Joy), \"acted, cruelly, to\" (Disgust), \"adolescent, guilt, growing\" (guilt), \"always, sad, for\" (sad) , \"and, sorry, just\" (Shame) etc."
                    },
                    {
                        "id": 45,
                        "string": "It was observed that the stop words are mostly present in <NAW 1 , AW, NAW 2 > pattern where similar and dissimilar NAWs are appeared before and after their corresponding CWs."
                    },
                    {
                        "id": 46,
                        "string": "In case of fear, a total of 979 stop words were found in NAW 1 position and 935 stop words in NAW 2 position."
                    },
                    {
                        "id": 47,
                        "string": "It is observed that in case of fear, the occurrence of similar NAW before and after of CWs is only 22 in contrast to the dissimilar occurrences of 1551."
                    },
                    {
                        "id": 48,
                        "string": "Table 3 explains the statistics of similar and dissimilar NAWs along with their appearances as stop words."
                    },
                    {
                        "id": 49,
                        "string": "Context Vector Formation In order to identify whether the Context Windows (CWs) play any significant role in classifying emotions or not, we have mapped the Context Windows in a Vector space by representing them as vectors."
                    },
                    {
                        "id": 50,
                        "string": "We have tried to find out the semantic relation or similarity between a pair of vectors using Affinity Score which in turn takes care of different distances into consideration."
                    },
                    {
                        "id": 51,
                        "string": "Since a CW follows the pattern (NAW1, AW, NAW2), the formation of vector with respect to each of the Context Windows of each emotion class was done based on the following formula, 1 2 CW ( ) #NAW #NAW #A = , , W Vectoriza T T T tion       Where, T= Total count of CW in an emotion class #NAW 1 = Total occurrence of a nonaffect word in NAW 1 position #NAW 2 = Total occurrence of a nonaffect word in NAW 2 position #AW = Total occurrence of an affect word in AW position."
                    },
                    {
                        "id": 52,
                        "string": "It was found that in case of anger emotion, a CW identified as (always, angry, about) corresponds to a Vector, <0.29, 10.69, 1.47> Emotions Total No of Trigrams Affinity Score Calculation We assume that each of the Context Vectors in an emotion class is represented in the vector space at a specific distance from the others."
                    },
                    {
                        "id": 53,
                        "string": "Thus, there must be some affinity or similarity exists between each of the Context Vectors."
                    },
                    {
                        "id": 54,
                        "string": "An Affinity Score was calculated for each pair of Context Vectors (p u ,q v ) where u = {1,2,3,.........n} and v = {1,2,3,.......n} for n number of vectors with respect to each of the emotion classes."
                    },
                    {
                        "id": 55,
                        "string": "The final Score is calculated using the following gravitational formula as described in (Poria et al., 2013) :       p q , , q * p q p            Score 2 dist The Score of any two context vectors p and q of an emotion class is the dot product of the vectors divided by the square of distance (dist) between p and q."
                    },
                    {
                        "id": 56,
                        "string": "This score was inspired by Newton's law of gravitation."
                    },
                    {
                        "id": 57,
                        "string": "This score values reflect the affinity between two context vectors p and q."
                    },
                    {
                        "id": 58,
                        "string": "Higher score implies higher affinity between p and q."
                    },
                    {
                        "id": 59,
                        "string": "However, apart from the score values, we also calculated the median, standard deviation and inter quartile range (iqr) and only those context windows were considered if their iqr values are greater than some cutoff value selected during experiments."
                    },
                    {
                        "id": 60,
                        "string": "Affinity Scores using Distance Metrics In the vector space, it is needed to calculate how close the context vectors are in the space in order to conduct better classification into their respective emotion classes."
                    },
                    {
                        "id": 61,
                        "string": "The Score values were calculated for all the emotion classes with respect to different metrics of distance (dist) viz."
                    },
                    {
                        "id": 62,
                        "string": "Chebyshev, Euclidean and Hamming."
                    },
                    {
                        "id": 63,
                        "string": "The distance was calculated for each context vector with respect to all the vectors of the same emotion class."
                    },
                    {
                        "id": 64,
                        "string": "The distance formula is given below: a. Chebyshev distance (C d ) = max |x i -y i | where x i and y i represents two vectors."
                    },
                    {
                        "id": 65,
                        "string": "b. Euclidean distance (E d ) = ||x -y|| 2 for vectors x and y. c. Hamming distance (H d ) = (c 01 + c 10 ) / n where c ij is the number of occurrence in the boolean vectors x and y and x[k] = i and y[k] = j for k < n. Hamming distance denotes the proportion of disagreeing components in x and y."
                    },
                    {
                        "id": 66,
                        "string": "Feature Selection and Analysis It is observed that the feature selection always plays an important role in building a good pattern classifier."
                    },
                    {
                        "id": 67,
                        "string": "The sentences were POS tagged using the Stanford POS Tagger and the POS tagged Context Windows were extracted and termed as PTCW."
                    },
                    {
                        "id": 68,
                        "string": "Similarly, the POS tag sequence from each of the PTCWs were extracted and named each as POS Tagged Window (PTW)."
                    },
                    {
                        "id": 69,
                        "string": "It is observed that \"fear\" emotion class has the maximum number of CWs and unique PTCWs whereas the \"anger\" class contains the maximum number of unique PTWs."
                    },
                    {
                        "id": 70,
                        "string": "The Figure 1 as shown below represents the counts of CW, unique PTCWs and PTWs."
                    },
                    {
                        "id": 71,
                        "string": "It was noticed that the total number of CWs is 8967, total number of unique PTCW is 7609 and of unique PTW is 3117."
                    },
                    {
                        "id": 72,
                        "string": "Obviously, the number of PTCW was less than CW and number of PTW was less than PTCW, because of the uniqueness of PTCW and PTW."
                    },
                    {
                        "id": 73,
                        "string": "In Figure 2 , the total counts of CW, PTCW and PTW have been shown."
                    },
                    {
                        "id": 74,
                        "string": "Some sample patterns of PTWs that occur with the maximum frequencies in three emotion classes are \"VBD/RB_JJ_IN\" (anger), \"NN/VBD_VBN_NN\" (disgust) and \"VBD_VBN/JJ_IN/NN\" (fear)."
                    },
                    {
                        "id": 75,
                        "string": "TF and TF-IDF Measure The Term Frequencies (TFs) and the Inverse Document Frequencies (IDFs) of the CWs for each of the emotion classes were calculated."
                    },
                    {
                        "id": 76,
                        "string": "In order to identify different ranges of the TF and TF-IDF scores, the minimum and maximum values of the TF and the variance of TF were calculated for each of the emotion classes."
                    },
                    {
                        "id": 77,
                        "string": "It was observed that guilt has the maximum scores for Max_TF and variance whereas the emotions like anger and disgust have the lowest scores for Max_TF as shown in Figure  3 ."
                    },
                    {
                        "id": 78,
                        "string": "Similarly, the minimum, maximum and variance of the TF-IDF values were calculated for each emotion class, separately."
                    },
                    {
                        "id": 79,
                        "string": "Again, it is found that the guilt emotion has the highest Max_TF-IDF and disgust emotion has the lowest Max_TF-IDF as shown in Figure 4 ."
                    },
                    {
                        "id": 80,
                        "string": "Not only for the Context Windows (CWs), the TF and TF-IDF scores of the POS Tagged Context Windows (PTCWs) and POS Tagged Windows (PTWs) were also calculated with respect to each emotion."
                    },
                    {
                        "id": 81,
                        "string": "It was observed that, similar results were found."
                    },
                    {
                        "id": 82,
                        "string": "Variance, or second moment about the mean, is a measure of the variability (spread or dispersion) of data."
                    },
                    {
                        "id": 83,
                        "string": "A large variance indicates that the data is spread out; a small variance indicates it is clustered closely around the mean.The variance for TF_IDF of guilt is 0.0000456874."
                    },
                    {
                        "id": 84,
                        "string": "A few slight differences were found in the results of PTWs while calculating Max_TF , Min_TF and variance as shown in Figure 3 ."
                    },
                    {
                        "id": 85,
                        "string": "It was observed that fear emotion has the highest Max_TF and anger has the lowest Max_TF whereas the variance of TF for guilt is 0.0002435522."
                    },
                    {
                        "id": 86,
                        "string": "Similarly, Figure 4 shows that fear has the highest Max_TF_IDF and anger contains the lowest Max_TF-IDF values and the variance of TF-IDF of fear is 0.000922226."
                    },
                    {
                        "id": 87,
                        "string": "Ranking Score of CW It was found that some of the Context Windows appear more than one time in the same emotion class."
                    },
                    {
                        "id": 88,
                        "string": "Thus, they were removed and a ranking score was calculated for each of the context windows."
                    },
                    {
                        "id": 89,
                        "string": "Each of the words in a context window was searched in the SentiWordnet lexicon and if found, we considered either positive or negative or both scores."
                    },
                    {
                        "id": 90,
                        "string": "The summation of the absolute scores of all the words in a Context Window is returned."
                    },
                    {
                        "id": 91,
                        "string": "The returned scores were sorted so that, in turn, each of the context windows obtains a rank in its corresponding emotion class."
                    },
                    {
                        "id": 92,
                        "string": "All the ranks were calculated for each emotion class, successively."
                    },
                    {
                        "id": 93,
                        "string": "This rank is useful in finding the important emotional phrases from the list of CWs."
                    },
                    {
                        "id": 94,
                        "string": "Some examples from the list of top 12 important context windows according to their rank are \"much anger when\" (anger), \"whom love after\" (happy), \"felt sad about\" (sadness) etc."
                    },
                    {
                        "id": 95,
                        "string": "Result Analysis The accuracies of the classifiers were obtained by employing user defined test data and data for 10 fold cross validation."
                    },
                    {
                        "id": 96,
                        "string": "It is observed that when Euclidean distance was considered, the BayesNet Classifier gives 100% accuracy on the Test data and gives 97.91% of accuracy on 10-fold cross validation data."
                    },
                    {
                        "id": 97,
                        "string": "On the other hand, J48 classifier achieves 77% accuracy on Test data and 83.54% on 10-fold cross validation data whereas the Nai-veBayesSimple classifier obtains 92.30% accuracy on Test data and 27.07% accuracy on 10-fold cross validation data."
                    },
                    {
                        "id": 98,
                        "string": "In the Naïve BayesSimple with 10fold cross validation, the average Recall, Precision and F-measure values are 0.271, 0.272 and 0.264, respectively."
                    },
                    {
                        "id": 99,
                        "string": "But, the DecisionTree classifier obtains 98.30% and 98.10% accuracies on the Test data as well as 10-fold cross validation data."
                    },
                    {
                        "id": 100,
                        "string": "The comparative results are shown in Figure 5 ."
                    },
                    {
                        "id": 101,
                        "string": "Overall, it is observed from Figure 5 that the BayesNet classifier achieves the best results on the score data which was prepared based on the Euclidean distance."
                    },
                    {
                        "id": 102,
                        "string": "In contrast, the BayesNet achieved 99.30% accuracy on the Test data and 96.92% accuracy on 10-fold cross validation data when the Hamming distance was considered."
                    },
                    {
                        "id": 103,
                        "string": "Similarly, J48 and Naïve BayesSimple classifiers produce 93.05% and 85.41% accuracies on the Test data and 87.95% and 39.50% accuracies on 10-fold cross validation data, respectively."
                    },
                    {
                        "id": 104,
                        "string": "From Figure 6 , it is observed that the DecisionTree classifier produces the best accuracy on the score data that was found using Hamming distance."
                    },
                    {
                        "id": 105,
                        "string": "When the score values are found by using Chebyshev distance, the BayesNet classifier obtains 100% accuracy on Test data and 97.57% accuracy on 10-fold cross validation data."
                    },
                    {
                        "id": 106,
                        "string": "Similarly, J48 achieves 84.82% accuracy on the Test data and 82.75% accuracy on 10-fold cross validation data whereas NaiveBayes and DecisionTable achieve 80% , 29.85% and 98.62% ,96.93% accuracies on the Test data and 10-fold cross validatation data, respectively."
                    },
                    {
                        "id": 107,
                        "string": "It has to be mentioned based on Figure 7 that the DecisionTree classifier performs better in comparison with all other classifiers and achieves the best result among the rest of the classifiers on affinity score data prepared based on the Chebyshev distance only."
                    },
                    {
                        "id": 108,
                        "string": "Conclusions and Future Works In this paper, vector formation was done for each of the Context Windows; TF and TF-IDF measures were calculated."
                    },
                    {
                        "id": 109,
                        "string": "The calculated affinity score, depending on the distance values was inspired from Newton's law of gravitation."
                    },
                    {
                        "id": 110,
                        "string": "To classify these CWs, BayesNet, J48, NaivebayesSimple and Deci-sionTable classifiers."
                    },
                    {
                        "id": 111,
                        "string": "In future, we would like to incorporate more number of lexicons to identify and classify emotional expressions."
                    },
                    {
                        "id": 112,
                        "string": "Moreover, we are planning to include associative learning process to identify some important rules for classification."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 21
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 22,
                        "end": 30
                    },
                    {
                        "section": "Corpus Preparation",
                        "n": "3.1",
                        "start": 31,
                        "end": 48
                    },
                    {
                        "section": "Context Vector Formation",
                        "n": "3.2",
                        "start": 49,
                        "end": 51
                    },
                    {
                        "section": "Affinity Score Calculation",
                        "n": "3.3",
                        "start": 52,
                        "end": 59
                    },
                    {
                        "section": "Affinity Scores using Distance Metrics",
                        "n": "3.4",
                        "start": 60,
                        "end": 65
                    },
                    {
                        "section": "Feature Selection and Analysis",
                        "n": "4",
                        "start": 66,
                        "end": 74
                    },
                    {
                        "section": "TF and TF-IDF Measure",
                        "n": "4.2",
                        "start": 75,
                        "end": 86
                    },
                    {
                        "section": "Ranking Score of CW",
                        "n": "4.3",
                        "start": 87,
                        "end": 94
                    },
                    {
                        "section": "Result Analysis",
                        "n": "5",
                        "start": 95,
                        "end": 112
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/956-Figure5-1.png",
                        "caption": "Figure 5: Classification Results on Test data and 10- fold cross validation using Euclidean distance (Ed)",
                        "page": 5,
                        "bbox": {
                            "x1": 303.84,
                            "x2": 515.04,
                            "y1": 98.88,
                            "y2": 225.12
                        }
                    },
                    {
                        "filename": "../figure/image/956-Figure6-1.png",
                        "caption": "Figure 6: Classification Results on Test data and 10- fold cross validation using Hamming distance (Hd)",
                        "page": 5,
                        "bbox": {
                            "x1": 303.84,
                            "x2": 515.04,
                            "y1": 245.76,
                            "y2": 365.28
                        }
                    },
                    {
                        "filename": "../figure/image/956-Figure7-1.png",
                        "caption": "Figure 7: Classification Results on Test data and 10- fold cross validation using Chebyshev distance (Cd)",
                        "page": 5,
                        "bbox": {
                            "x1": 303.84,
                            "x2": 518.4,
                            "y1": 384.47999999999996,
                            "y2": 516.96
                        }
                    },
                    {
                        "filename": "../figure/image/956-Table1-1.png",
                        "caption": "Table 1: Corpus Statistics",
                        "page": 1,
                        "bbox": {
                            "x1": 297.59999999999997,
                            "x2": 524.16,
                            "y1": 477.59999999999997,
                            "y2": 603.36
                        }
                    },
                    {
                        "filename": "../figure/image/956-Table2-1.png",
                        "caption": "Table 2: Trigrams and Affect Words Statistics",
                        "page": 2,
                        "bbox": {
                            "x1": 318.71999999999997,
                            "x2": 503.03999999999996,
                            "y1": 229.44,
                            "y2": 348.96
                        }
                    },
                    {
                        "filename": "../figure/image/956-Table3-1.png",
                        "caption": "Table 3: Statistics for similar and dissimilar NAW patterns and stop words",
                        "page": 2,
                        "bbox": {
                            "x1": 305.76,
                            "x2": 516.0,
                            "y1": 370.56,
                            "y2": 539.04
                        }
                    },
                    {
                        "filename": "../figure/image/956-Figure1-1.png",
                        "caption": "Figure 1: Count of CW, PTCW and PTW for seven emotion classes",
                        "page": 3,
                        "bbox": {
                            "x1": 303.84,
                            "x2": 518.4,
                            "y1": 521.76,
                            "y2": 645.12
                        }
                    },
                    {
                        "filename": "../figure/image/956-Figure4-1.png",
                        "caption": "Figure 4: Variance,Max_TF-IDF, Min_TF-IDF of CW, PTCW and PTW",
                        "page": 4,
                        "bbox": {
                            "x1": 303.84,
                            "x2": 515.04,
                            "y1": 283.68,
                            "y2": 413.28
                        }
                    },
                    {
                        "filename": "../figure/image/956-Figure3-1.png",
                        "caption": "Figure 3:Variance,Max_TF,Min_TF of CW, PTCW and PTW",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 515.04,
                            "y1": 122.88,
                            "y2": 259.2
                        }
                    },
                    {
                        "filename": "../figure/image/956-Figure2-1.png",
                        "caption": "Figure 2:Total Count of CW, PTCW and PTW",
                        "page": 4,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 291.36,
                            "y1": 98.88,
                            "y2": 224.16
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-2"
        },
        {
            "slides": {
                "0": {
                    "title": "Background",
                    "text": [
                        "Information Retrieval (IR) and Recommender Systems (RS) techniques",
                        "have been used to address:-",
                        "Literature Review (LR) search tasks",
                        "Explicit and implicit ad-hoc information needs",
                        "Examples of such tasks include",
                        "Building a reading list of research papers",
                        "Recommending papers based on query logs",
                        "Recommending papers based on publication history",
                        "Serendipitous discovery of interesting papers and more.",
                        "What about recommending papers during manuscript preparation"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Addressed scenarios in mp",
                    "text": [
                        "Recommending papers based on Citation Contexts in manuscripts",
                        "Recommending new papers based on To-Be-Cited papers from the",
                        "Recommending papers based on the full text of the draft",
                        "What more could be done?",
                        "Explore the total list of papers compiled during literature review",
                        "Explore the article-type preference to vary recommendations correspondingly?"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Enter rec4lrw",
                    "text": [
                        "Rec4LRW is a task-based assistive system that offers",
                        "recommendations for the below tasks:-",
                        "Task 1 Building an initial reading list of research papers",
                        "Task 2 Finding similar papers based on a seed set of papers",
                        "Task 3 Shortlisting papers from the final reading list based on",
                        "The system is based on a threefold intervention framework",
                        "For better meeting the task requirements",
                        "Novel informational display features",
                        "For speeding up the relevance judgement decisions",
                        "For establishing the natural relationships between tasks"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Rec4lrw usage sequence",
                    "text": [
                        "Select papers from Task 2 to the final reading list",
                        "N Execute Task 3 with the final reading list papers"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Corpus",
                    "text": [
                        "ACM DL extract of papers published between 1951 and 2011 used as",
                        "AnyStyle (https://anystyle.io) parser used to extract article title, venue",
                        "and year from references",
                        "Data stored in a MySQL database with the tables related using a"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "5": {
                    "title": "Task objective and steps",
                    "text": [
                        "OBJECTIVE: To identify the important papers from the final reading list",
                        "and vary recommendations count based on article-type preference",
                        "Input: P set of papers in the final reading list",
                        "AT article-type choice of the user",
                        "1: RC the average references count retrieved for AT",
                        "2: R list of retrieved citations & references of papers from P",
                        "3: G directed sparse graph created with papers from R",
                        "4: run edge betweenness algorithm on G to form cluster set C 5: S final list of shortlisted papers 6: if |C| > RC then while |S = RC for each cluster in C do sort papers in the cluster on citation count s top ranked paper from the cluster add s to S end for end while 14: else N while |S = RC N N +1 for each cluster in C do sort papers in the cluster on citation count s N ranked paper from the cluster add s to S end for end while 24: end if 25: display papers from S to user"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "User evaluation study",
                    "text": [
                        "OBJECTIVE: To ascertain the usefulness and effectiveness",
                        "of the task to researchers",
                        "Ascertain the agreement percentages of the evaluation",
                        "Relevance The shortlisted papers are relevant to my article-type preference",
                        "Usefulness The shortlisted papers are useful for inclusion in my manuscript",
                        "Importance The shortlisted papers comprises of important papers from my reading list",
                        "Certainty The shortlisted list comprises of papers which I would definitely cite in my manuscript Good_List This is a good recommendation list, at an overall level Improvement_Needed There is a need to further improve this shortlisted papers list",
                        "Shortlisting_Feature I would like to see the feature of shortlisting papers from reading list based on article-type preference, in academic search systems and databases",
                        "Identify the top preferred and critical aspects of the task",
                        "through the subjective feedback of the participants",
                        "Feedback responses were coded by a single coder using an inductive approach"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "7": {
                    "title": "Study information",
                    "text": [
                        "The study was conducted between November 2015 and January 2016",
                        "Pre-screening survey conducted to identify participants who have authored at",
                        "least one journal or conference paper",
                        "116 participants completed the whole study inclusive of the three tasks in the",
                        "57 participants were Ph.D./Masters students while 59 were research staff,",
                        "academic staff and librarians",
                        "The average research experience for students was 2 years while for staff, it",
                        "51% of participants were from the computer science, electrical and electronics disciplines, 35% from information and communication studies discipline while 14% from other disciplines"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "8": {
                    "title": "Study procedure",
                    "text": [
                        "Step Participant selects one of the available 43 topics for executing task 1",
                        "Step Re-run task 1 and select at least five papers for the seed basket",
                        "Step Execute task 2 with the seed basket papers",
                        "Step Re-run task 2 (and task 1) to select at least 30 papers for the final",
                        "Step 5: Execute task 3 with the final reading list papers and article-type",
                        "Four article-type choices: conference full paper, poster, case study and a generic research paper"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "10": {
                    "title": "Results",
                    "text": [
                        "Biggest differences found for the below measures:-",
                        "The measures with the highest agreement:-"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": [
                        "figure/image/959-Figure2-1.png"
                    ]
                },
                "11": {
                    "title": "Qualitative feedback",
                    "text": [
                        "Rank Preferred Aspects Categories Critical Aspects Categories",
                        "Shortlisting Feature & Rec. Quality (24%) Rote Selection of Papers (16%)",
                        "Information Cue Labels (15%) Limited Dataset Issue (5%)",
                        "View Papers in Clusters (11%) Quality can be Improved (5%)",
                        "Rich Metadata (7%) Not Sure of the Usefulness of the Task (4%)",
                        "Ranking of Papers (3%) UI can be Improved (3%)",
                        "The newly introduced informational display features were a big hit",
                        "The purely experimental nature of the study affected the experience of",
                        "Tasks effectiveness needs to be validated with a longitudinal study with a large collection of papers in the final reading list"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "12": {
                    "title": "Limitations",
                    "text": [
                        "Lack of an offline evaluation experiment",
                        "Study procedure involved selection of comparatively fewer number of papers",
                        "in the final reading list",
                        "Not much variations in the final shortlisted papers for the different article-type",
                        "Information displayed in a purely textual manner"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "13": {
                    "title": "Future work",
                    "text": [
                        "The scope for this task will be expanded to bring in more variations for the",
                        "Inclusion of new papers in the output which could have been missed during",
                        "Provide more user control in the system so that the user can select papers as",
                        "mandatory to be shortlisted",
                        "Integrate this task with the citation context recommendation task",
                        "Represent the information in the form of citation graphs"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                }
            },
            "paper_title": "What papers should I cite from my reading list? User evaluation of a manuscript preparatory assistive task",
            "paper_id": "959",
            "paper": {
                "title": "What papers should I cite from my reading list? User evaluation of a manuscript preparatory assistive task",
                "abstract": "Literature Review (LR) and Manuscript Preparatory (MP) tasks are two key activities for researchers. While process-based and technologicaloriented interventions have been introduced to bridge the apparent gap between novices and experts for LR tasks, there are very few approaches for MP tasks. In this paper, we introduce a novel task of shortlisting important papers from the reading list of researchers, meant for citation in a manuscript. The technique helps in identifying the important and unique papers in the reading list. Based on a user evaluation study conducted with 116 participants, the effectiveness and usefulness of the task is shown using multiple evaluation metrics. Results show that research students prefer this task more than research and academic staff. Qualitative feedback of the participants including the preferred aspects along with critical comments is presented in this paper.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The Scientific Publication Lifecycle comprises of different activities carried out by researchers [5] ."
                    },
                    {
                        "id": 1,
                        "string": "Of all these activities, the three main activities are literature review, actual research work and dissemination of results through conferences and journals."
                    },
                    {
                        "id": 2,
                        "string": "These three activities in themselves cover multiple sub-activities that require specific expertise and experience [16] ."
                    },
                    {
                        "id": 3,
                        "string": "Prior studies have shown researchers with low experience, face difficulties in completing research related activities [9, 15] ."
                    },
                    {
                        "id": 4,
                        "string": "These researchers rely on assistance from supervisors, experts and librarians for learning the required skills to pursue such activities."
                    },
                    {
                        "id": 5,
                        "string": "Scenarios where external assistance have been traditionally required are (i) selection of information sources (academic search engines, databases and citation indices), (ii) formulation of search queries, (iii) browsing of retrieved results and (iv) relevance judgement of retrieved articles [9] ."
                    },
                    {
                        "id": 6,
                        "string": "Apart from human assistance, academic assistive systems have been built for alleviating the expertise gap between experts and novices in terms of research execution."
                    },
                    {
                        "id": 7,
                        "string": "Some of these interventions include search systems with faceted user interfaces for better dis-play of search results [2] , bibliometric tools for visualizing citation networks [7] and scientific paper recommender systems [3, 14] , to name a few."
                    },
                    {
                        "id": 8,
                        "string": "In the area of manuscript writing, techniques have been proposed to recommend articles for citation contexts in manuscripts [11] ."
                    },
                    {
                        "id": 9,
                        "string": "In the context of manuscript publication, prior studies have tried to recommend prospective conference venues [25] most suited for the research in hand."
                    },
                    {
                        "id": 10,
                        "string": "One unexplored area is helping researchers in identifying the important and unique papers that can be potentially cited in the manuscript."
                    },
                    {
                        "id": 11,
                        "string": "This identification is affected by two factors."
                    },
                    {
                        "id": 12,
                        "string": "The first factor is the type of research where citation of a particular paper makes sense due to the particular citation context."
                    },
                    {
                        "id": 13,
                        "string": "The second factor is the type of article (for e.g., conference full paper, journal paper, demo paper) that the author is intending to write."
                    },
                    {
                        "id": 14,
                        "string": "For the first factor, there have been some previous studies [11, 14, 21] ."
                    },
                    {
                        "id": 15,
                        "string": "The second factor represents a task that can be explored since the article-type places a constraint on the citations that can be made in a manuscript, in terms of dimensions such as recency, quantity, to name a few."
                    },
                    {
                        "id": 16,
                        "string": "In our research, we address this new manuscript preparatory task with the objective of shortlisting papers from the reading list of researchers based on article-type preference."
                    },
                    {
                        "id": 17,
                        "string": "By the term 'shortlisting', we allude to the nature of the task in identifying important papers from the reading list This task is part of a functionality provided by an assistive system called Rec4LRW meant for helping researchers in literature review and manuscript preparation."
                    },
                    {
                        "id": 18,
                        "string": "The system uses a corpus of papers, built from an extract of ACM Digital Library (ACM DL)."
                    },
                    {
                        "id": 19,
                        "string": "It is hypothesized that the Rec4LRW system will be highly beneficial to novice researchers such as Ph.D. and Masters students and also for researchers who are venturing into new research topics."
                    },
                    {
                        "id": 20,
                        "string": "A user evaluation study was conducted to evaluate all the tasks in the system, from a researcher's perspective."
                    },
                    {
                        "id": 21,
                        "string": "In this paper, we report the findings from the study."
                    },
                    {
                        "id": 22,
                        "string": "The study was conducted with 116 participants comprising of research students, academic staff and research staff."
                    },
                    {
                        "id": 23,
                        "string": "Results from the six evaluation measures show that the participants prefer to have the shortlisting feature included in academic search systems and digital libraries."
                    },
                    {
                        "id": 24,
                        "string": "Subjective feedback from the participants in terms of the preferred features and the features that need to be improved, are also presented in the paper."
                    },
                    {
                        "id": 25,
                        "string": "The reminder of this work is organized as follows."
                    },
                    {
                        "id": 26,
                        "string": "Section two surveys the related work."
                    },
                    {
                        "id": 27,
                        "string": "The Rec4LRW system is introduced along with dataset, technical details and unique UI features in section three."
                    },
                    {
                        "id": 28,
                        "string": "In section four, the shortlisting technique of the task is explained."
                    },
                    {
                        "id": 29,
                        "string": "Details about the user study and data collection are outlined in Section five."
                    },
                    {
                        "id": 30,
                        "string": "The evaluation results are presented in section six."
                    },
                    {
                        "id": 31,
                        "string": "The concluding remarks and future plans for research are provided in the final section."
                    },
                    {
                        "id": 32,
                        "string": "Related Work Conceptual models and systems have been proposed in the past for helping researchers during manuscript writing."
                    },
                    {
                        "id": 33,
                        "string": "Generating recommendations for citation contexts is an approach meant to help the researcher in finding candidate citations for particular placeholders (locations) in the manuscript."
                    },
                    {
                        "id": 34,
                        "string": "These studies make use of content oriented recommender techniques as there is no scope for using Collaborative Filtering (CF) based techniques due to lack of user ratings."
                    },
                    {
                        "id": 35,
                        "string": "Translation models have been specifically used in [13, 17] as they are able to handle the issue of vocabulary mismatch gap between the user query and document content."
                    },
                    {
                        "id": 36,
                        "string": "The efficiency of the approaches is dependent on the comprehensiveness of training set data as the locations and corresponding citations data are recorded."
                    },
                    {
                        "id": 37,
                        "string": "The study in [11] is the most sophisticated, as it does not expect the user to mark the citation contexts in the input paper unlike other studies where the contexts have to be set by the user."
                    },
                    {
                        "id": 38,
                        "string": "The proposed model in the study learns the placeholders in previous research articles where citations are widely made so that the citation recommendation can be made on occurrence of similar patterns."
                    },
                    {
                        "id": 39,
                        "string": "The methods in these studies are heavily reliant on the quality & quantity of training data; therefore they are not applicable to systems which lack access to full text of research papers."
                    },
                    {
                        "id": 40,
                        "string": "Citation suggestions have also been provided as part of reference management and stand-alone recommendation tools."
                    },
                    {
                        "id": 41,
                        "string": "ActiveCite [21] is a recommendation tool that provides both high level and specific citation suggestions based on text mining techniques."
                    },
                    {
                        "id": 42,
                        "string": "Docear is one of the latest reference management software [3] with a mind map feature that helps users in better organizing their references."
                    },
                    {
                        "id": 43,
                        "string": "The in-built recommendation module in this tool is based on Content based (CB) recommendation technique with all the data stored in a central server."
                    },
                    {
                        "id": 44,
                        "string": "The Refseer system [14] , similar to ActiveCite, provides both global and local (particular citation context) level recommendations."
                    },
                    {
                        "id": 45,
                        "string": "The system is based on the non-parametric probabilistic model proposed in [12] ."
                    },
                    {
                        "id": 46,
                        "string": "These systems depend on the quality and quantity of full text data available in the central server as scarcity of papers could lead to redundant recommendations."
                    },
                    {
                        "id": 47,
                        "string": "Even though article-type recommendations have not been practically implemented, the prospective idea has been discussed in few studies."
                    },
                    {
                        "id": 48,
                        "string": "The article-type dimension has been highlighted as part of the user's 'Purpose' in the multi-layer contextual model put forth in [8] and as one of the facets in document contextual information in [6] ."
                    },
                    {
                        "id": 49,
                        "string": "The article type indirectly refers to the goal of the researcher."
                    },
                    {
                        "id": 50,
                        "string": "It is to be noted that goal or purpose related dimensions have been considered for research in other research areas of recommender systems namely course recommendations [23] and TV guide recommendations [20] ."
                    },
                    {
                        "id": 51,
                        "string": "Our work, on the other hand, is the first to explore this task of providing article-type based recommendations with the aim of shortlisting important and unique papers from the cumulative reading list prepared by researchers during their literature review."
                    },
                    {
                        "id": 52,
                        "string": "Through this study, we hope to open new avenues of research which requires a different kind of mining of bibliographic data, for providing more relevant results."
                    },
                    {
                        "id": 53,
                        "string": "3 Assistive System Brief Overview The Rec4LRW system has been built as a tool aimed to help researchers in two main tasks of literature review and one manuscript preparatory task."
                    },
                    {
                        "id": 54,
                        "string": "The three tasks are (i) Building an initial reading list of research papers, (ii) Finding similar papers based on a set of papers, and (iii) Shortlisting papers from the final reading list for inclusion in manuscript based on article-type choice."
                    },
                    {
                        "id": 55,
                        "string": "The usage context of the system is as follows."
                    },
                    {
                        "id": 56,
                        "string": "Typically, a researcher would run the first task for one or two times at the start of the literature review, followed by selection of few relevant seed papers which are then used for task 2."
                    },
                    {
                        "id": 57,
                        "string": "The second task takes these seed papers as an input to find topically similar papers."
                    },
                    {
                        "id": 58,
                        "string": "This task is run multiple times until the researcher is satisfied with the whole list of papers in the reading list."
                    },
                    {
                        "id": 59,
                        "string": "The third task (described in this paper), is meant to be run when the researcher is at the stage of writing manuscripts for publication."
                    },
                    {
                        "id": 60,
                        "string": "It is observed that the researcher would maintain numerous papers in his/her reading list while performing research (could be more than 100 papers for most research studies)."
                    },
                    {
                        "id": 61,
                        "string": "The third task helps the researcher in identifying both important and unique papers from the reading list."
                    },
                    {
                        "id": 62,
                        "string": "The shortlisted papers count varies as per the article-type preference of the researcher."
                    },
                    {
                        "id": 63,
                        "string": "The recommendation mechanisms of the three tasks are based on seven features/criteria that represent the characteristics of the bibliography and its relationship with the parent research paper [19] ."
                    },
                    {
                        "id": 64,
                        "string": "Dataset A snapshot of the ACM Digital Library (ACM DL) is used as the dataset for the system."
                    },
                    {
                        "id": 65,
                        "string": "Papers from proceedings and journals for the period 1951 to 2011 form the dataset."
                    },
                    {
                        "id": 66,
                        "string": "The papers from the dataset have been shortlisted based on full text and metadata availability in the dataset, to form the sample set/corpus for the system."
                    },
                    {
                        "id": 67,
                        "string": "The sample set contains a total of 103,739 articles and corresponding 2,320,345 references."
                    },
                    {
                        "id": 68,
                        "string": "User-Interface (UI) Features In this sub-section, the unique UI features of the Rec4LRW system are presented."
                    },
                    {
                        "id": 69,
                        "string": "Apart from the regular fields such as author name(s), abstract, publication year and citation count, the system displays the fields:-author-specified keywords, references count and short summary of the paper (if the abstract of the paper is missing)."
                    },
                    {
                        "id": 70,
                        "string": "Most importantly, we have included information cue labels beside the title for each article."
                    },
                    {
                        "id": 71,
                        "string": "There are four labels (1) Popular, (2) Recent, (3) High Reach and (4) Survey/Review."
                    },
                    {
                        "id": 72,
                        "string": "A screenshot from the system for the cue labels (adjacent to article title) is provided in Figure 1 ."
                    },
                    {
                        "id": 73,
                        "string": "The display logic for the cue labels are described as follows."
                    },
                    {
                        "id": 74,
                        "string": "The recent label is displayed for papers published between the years 2009 and 2011 (the most recent papers in the ACM dataset is of 2011)."
                    },
                    {
                        "id": 75,
                        "string": "The survey/review label is displayed for papers which are of the type -literature survey or review."
                    },
                    {
                        "id": 76,
                        "string": "For the popular label, the unique citation counts of all papers for the selected research topic are first retrieved from the database."
                    },
                    {
                        "id": 77,
                        "string": "The label is displayed for a paper if the citation count is in the top 5% percentile of the citation counts for that topic."
                    },
                    {
                        "id": 78,
                        "string": "Similar logic is used for the high reach label with references count data."
                    },
                    {
                        "id": 79,
                        "string": "The high reach label indicates that the paper has more number of references than most other articles for the research topic, thereby facilitating the scope for extended citation chaining."
                    },
                    {
                        "id": 80,
                        "string": "Specifically for task 3, the system provides an option for the user to view the papers in the parent cluster of the shortlisted papers."
                    },
                    {
                        "id": 81,
                        "string": "This feature helps the user in serendipitously finding more papers for reading."
                    },
                    {
                        "id": 82,
                        "string": "The screenshot for this feature is provided in Figure 1 ."
                    },
                    {
                        "id": 83,
                        "string": "Technique For Shortlisting Papers From Reading List The objective of this task is to help researchers in identifying important (based on citation counts) and unique papers from the final reading list."
                    },
                    {
                        "id": 84,
                        "string": "These papers are to be considered as potential candidates for citation in the manuscript."
                    },
                    {
                        "id": 85,
                        "string": "For this task, the Girvan-Newman algorithm [10] was used for identifying the clusters in the citations network."
                    },
                    {
                        "id": 86,
                        "string": "The specific goal of clustering is to identify the communities within the citation network."
                    },
                    {
                        "id": 87,
                        "string": "From the identified clusters, the top cited papers are shortlisted."
                    },
                    {
                        "id": 88,
                        "string": "The algorithm is implemented as the EdgeBetweennessClusterer in JUNG library."
                    },
                    {
                        "id": 89,
                        "string": "The algorithm was selected as it is the one of the most prominent community detection algorithms based on link removal."
                    },
                    {
                        "id": 90,
                        "string": "The other algorithms considered were voltage clustering algorithm [24] and bi-component DFS clustering algorithm [22] ."
                    },
                    {
                        "id": 91,
                        "string": "Based on internal trail tests, the Girvan-Newman algorithm was able to consistently identify meaningful clusters using the graph constructed with the citations and references of the papers from the reading list."
                    },
                    {
                        "id": 92,
                        "string": "As a part of this task, we have tried to explore the notion of varying the count of shortlisted papers by article-type choice."
                    },
                    {
                        "id": 93,
                        "string": "For this purpose, four article-types were considered: conference full paper (cfp), conference poster (cp), generic research paper (gp) 1 and case study (cs)."
                    },
                    {
                        "id": 94,
                        "string": "The article-type classification is not part of the ACM metadata but it is partly inspired by the article classification used in Emerald publications."
                    },
                    {
                        "id": 95,
                        "string": "The number of papers to be shortlisted for these article-types was identified by using the historical data from ACM dataset."
                    },
                    {
                        "id": 96,
                        "string": "First, the papers in the dataset were filtered by using the title field and section field for the four article-types."
                    },
                    {
                        "id": 97,
                        "string": "Second, the average of the references count was calculated for the filtered papers for each articletype from previous step."
                    },
                    {
                        "id": 98,
                        "string": "The average references count for the article-types gp, cs, cfp and cp are 26, 17, 16 and 6 respectively."
                    },
                    {
                        "id": 99,
                        "string": "This new data field is used to set the number of papers to be retrieved from the paper clusters."
                    },
                    {
                        "id": 100,
                        "string": "The procedure for this technique is given in Procedure 1. for each cluster in C do 9: sort papers in the cluster on citation count 10: s top ranked paper from the cluster 11: add s to S 12: end for 13: end while 14: else 15: N 0 16. while |S| = RC 17: N N +1 18: for each cluster in C do 19: sort papers in the cluster on citation count 20: s N ranked paper from the cluster 21: add s to S 22: end for 23: end while 24: end if 25: display papers from S to user 5 User Evaluation Study In IR and RS studies, offline experiments are conducted for evaluating the proposed technique/algorithm with baseline approaches."
                    },
                    {
                        "id": 101,
                        "string": "Since the task addressed in the current study is a novel task, the best option was to perform a user evaluation study with researchers."
                    },
                    {
                        "id": 102,
                        "string": "Considering the suggestions from [4] , the objective of the study was to ascertain the usefulness and effectiveness of the task to researchers."
                    },
                    {
                        "id": 103,
                        "string": "The specific evaluation goals were (i) ascertain the agreement percentages of the evaluation measures and (ii) identify the top preferred and critical aspects of the task through the subjective feedback of the participants."
                    },
                    {
                        "id": 104,
                        "string": "An online pre-screening survey was conducted to identify the potential participants."
                    },
                    {
                        "id": 105,
                        "string": "Participants needed to have experience in writing conference or journal paper(s) as a qualification for taking part in the study."
                    },
                    {
                        "id": 106,
                        "string": "All the participants were required to evaluate the three tasks and the overall system."
                    },
                    {
                        "id": 107,
                        "string": "In task 1, the participants had to select a research topic from a list of 43 research topics."
                    },
                    {
                        "id": 108,
                        "string": "On selection of topic, the system provides the top 20 paper recommendations which are meant to be part of the initial LR reading list."
                    },
                    {
                        "id": 109,
                        "string": "In task 2, they had to select a minimum of five papers from task 1 in order for the system to retrieve 30 topically similar papers."
                    },
                    {
                        "id": 110,
                        "string": "For the third task, the participants were requested to add at least 30 papers in the reading list."
                    },
                    {
                        "id": 111,
                        "string": "The paper count was set to 30 as the threshold for highest number of shortlisted papers was 26 (for the article-type 'generic research paper')."
                    },
                    {
                        "id": 112,
                        "string": "The three other article-types provided for the experiment were conference full paper, conference poster and case study."
                    },
                    {
                        "id": 113,
                        "string": "The shortlisted papers count for these article-types was fixed by taking average of the references count of the related papers from the ACM DL extract."
                    },
                    {
                        "id": 114,
                        "string": "The participant had to then select the article-type and run the task so that the system could retrieve the shortlisted papers."
                    },
                    {
                        "id": 115,
                        "string": "The screenshot of the task 3 from the Rec4LRW system is provided in Figure 1 ."
                    },
                    {
                        "id": 116,
                        "string": "In addition to the basic metadata, the system provides the feature \"View papers in the parent cluster\" for the participant to see the cluster from which the paper has been shortlisted."
                    },
                    {
                        "id": 117,
                        "string": "The evaluation screen was provided to the user at the bottom of the screen (not shown in Figure 1 )."
                    },
                    {
                        "id": 118,
                        "string": "The participants had to answer seven mandatory survey questions and one optional subjective feedback question as a part of the evaluation."
                    },
                    {
                        "id": 119,
                        "string": "The seven survey questions and the corresponding measures are provided in Table 1 ."
                    },
                    {
                        "id": 120,
                        "string": "A five-point Likert scale was provided for measuring participant agreement for each question."
                    },
                    {
                        "id": 121,
                        "string": "The measures were selected based on the key aspects of the task."
                    },
                    {
                        "id": 122,
                        "string": "The measures Relevance, Usefulness, Importance, Certainty, Good_List and Improve-ment_Needed were meant to ascertain the quality of the recommendations."
                    },
                    {
                        "id": 123,
                        "string": "The final measure Shortlisting_Feature was used to identify whether participants would be interested to use this task in current academic search systems and digital libraries."
                    },
                    {
                        "id": 124,
                        "string": "This is a good recommendation list, at an overall level Improvement_Needed There is a need to further improve this shortlisted papers list Shortlisting_Feature I would like to see the feature of shortlisting papers from reading list based on article-type preference, in academic search systems and databases The response values 'Agree' and 'Strongly Agree' were the two values considered for the calculation of agreement percentages for the evaluation measures."
                    },
                    {
                        "id": 125,
                        "string": "Descriptive statistics were used to measure central tendency."
                    },
                    {
                        "id": 126,
                        "string": "Independent samples t-test was used to check the presence of statistically significant difference in the mean values of the students and staff group, for the testing the hypothesis."
                    },
                    {
                        "id": 127,
                        "string": "Statistical significance was set at p < .05."
                    },
                    {
                        "id": 128,
                        "string": "Statistical analyses were done using SPSS 21.0 and R. Participants' subjective feedback responses were coded by a single coder using an inductive approach [1] , with the aim of identifying the central themes (concepts) in the text."
                    },
                    {
                        "id": 129,
                        "string": "The study was conducted between November 2015 and January 2016."
                    },
                    {
                        "id": 130,
                        "string": "Out of the eligible 230 participants, 116 participants signed the consent form and completed the whole study inclusive of the three tasks in the system."
                    },
                    {
                        "id": 131,
                        "string": "57 participants were Ph.D./Masters students while 59 were research staff, academic staff and librarians."
                    },
                    {
                        "id": 132,
                        "string": "The average research experience for Ph.D. students was 2 years while for staff, it was 5.6 years."
                    },
                    {
                        "id": 133,
                        "string": "51% of participants were from the computer science, electrical and electronics disciplines, 35% from information and communication studies discipline while 14% from other disciplines."
                    },
                    {
                        "id": 134,
                        "string": "6 Results and Discussion Agreement Percentages (AP) The agreement percentages (AP) for the seven measures by the participant groups are shown in Figure 2 ."
                    },
                    {
                        "id": 135,
                        "string": "In the current study, an agreement percentage above 75% is considered as an indication of higher agreement from the participants."
                    },
                    {
                        "id": 136,
                        "string": "As expected, the AP of students was consistently higher than the staff with the biggest difference found for the measures Usefulness (82.00% for students, 64.15% for staff) and Good_List (76.00% for students, 62.26% for staff)."
                    },
                    {
                        "id": 137,
                        "string": "It has been reported in earlier studies that graduate students generally look for assistance in most stages of research [9] ."
                    },
                    {
                        "id": 138,
                        "string": "Consequently, students would prefer technological interventions such as the current system due to the simplicity in interaction."
                    },
                    {
                        "id": 139,
                        "string": "Hence, the evaluation of students was evidently better than staff."
                    },
                    {
                        "id": 140,
                        "string": "The quality measures Importance (85.96% for students, 77.97% for staff) and Shortlisting_Feature (84.21% for students, 74.58% for staff) had the highest APs."
                    },
                    {
                        "id": 141,
                        "string": "This observation validates the usefulness of the technique in identifying popular/seminal papers from the reading list."
                    },
                    {
                        "id": 142,
                        "string": "Due to favorable APs for the most measures, the lowest agreement values were observed for the measure Improve-ment_Needed (57.89% for students, 57.63% for staff)."
                    },
                    {
                        "id": 143,
                        "string": "The results for the measure Certainty (70% for students, 62.26% for staff) indicate some level of reluctance among the participants in being confident of citing the papers."
                    },
                    {
                        "id": 144,
                        "string": "Citation of a particular paper is subject to the particular citation context in the manuscript, therefore not all participants would be able to prejudge their citation behavior."
                    },
                    {
                        "id": 145,
                        "string": "In summary, participants seem to acknowledge the usefulness of the task in identifying important papers from the reading list."
                    },
                    {
                        "id": 146,
                        "string": "However, there is an understandable lack of inclination in citing these papers."
                    },
                    {
                        "id": 147,
                        "string": "This issue is to be addressed in future studies."
                    },
                    {
                        "id": 148,
                        "string": "Qualitative Data Analysis In Table 2 , the top five categories of the preferred aspects and critical aspects are listed."
                    },
                    {
                        "id": 149,
                        "string": "Preferred Aspects."
                    },
                    {
                        "id": 150,
                        "string": "Out of the total 116 participants, 68 participants chose to give feedback about the features that they found to be useful."
                    },
                    {
                        "id": 151,
                        "string": "24% of the participants felt that the feature of the shortlisting papers based on article-type preference was quite preferable and would help them in completing their tasks in a faster and efficient manner."
                    },
                    {
                        "id": 152,
                        "string": "They also felt that the quality of the shortlisting papers was satisfactory."
                    },
                    {
                        "id": 153,
                        "string": "15% of the participants felt that the information cue labels (popular, recent, high reach and literature survey) were helpful for them in relevance judgement of the shortlisted papers."
                    },
                    {
                        "id": 154,
                        "string": "This particular observation of the participants was echoed for the first two tasks of the Rec4LRW system, thereby validating the usefulness of information cue labels in academic search systems and digital libraries."
                    },
                    {
                        "id": 155,
                        "string": "Around 11% of the participants felt the option of viewing papers in the parent cluster of the particular shortlisted papers was useful in two ways."
                    },
                    {
                        "id": 156,
                        "string": "Firstly, it helped in understanding the different clusters formed with the references and citations of the papers in the reading list."
                    },
                    {
                        "id": 157,
                        "string": "Secondly, the clusters served as an avenue for finding some useful and relevant papers in serendipitous manner as some papers could have been missed by the researcher dur-ing the literature review process."
                    },
                    {
                        "id": 158,
                        "string": "The other features that the participants commended were the metadata provided along with the shortlisted papers (citations count, article summary) and the paper management collection features across the three tasks."
                    },
                    {
                        "id": 159,
                        "string": "Ranking of Papers (3%) UI can be Improved (3%) Critical Aspects."
                    },
                    {
                        "id": 160,
                        "string": "Out of the 116 participants, 41 participants gave critical comments about the task and features of the system catering to the task."
                    },
                    {
                        "id": 161,
                        "string": "Around 16% of the participants felt that the study procedure of adding 30 papers to the reading list as a precursor for running the task was uninteresting."
                    },
                    {
                        "id": 162,
                        "string": "The reasons cited were the irrelevance of some of the papers to the participants as these papers had to be added just for the sake of executing the task while some participants felt that the 30 papers count was too much while some could not comprehend why these many papers had to be added."
                    },
                    {
                        "id": 163,
                        "string": "Around 5% of the participants felt that the study experience was hindered by the dataset not catering to recent papers (circa 2012-2015) and the dataset being restricted to computer science related topics."
                    },
                    {
                        "id": 164,
                        "string": "Another 5% of the participants felt that they shortlisting algorithm/technique could be improved to provide a better list of papers."
                    },
                    {
                        "id": 165,
                        "string": "A section of these participants needed more recent papers in the final list while others wanted papers specifically from high impact publications."
                    },
                    {
                        "id": 166,
                        "string": "Around 4% of the participants could not find the usefulness of the task in their work."
                    },
                    {
                        "id": 167,
                        "string": "They felt that the task was not beneficial."
                    },
                    {
                        "id": 168,
                        "string": "The other minor critical comments given by the participants were the ranking of the list could be improved, the task execution speed could be improved and more UI control features could be provided, such as sorting options and free-text search box."
                    },
                    {
                        "id": 169,
                        "string": "Conclusion and Future Work For literature review and manuscript preparatory related tasks, the gap between novices and experts in terms of task knowledge and execution skills is well-known [15] ."
                    },
                    {
                        "id": 170,
                        "string": "A majority of the previous studies have brought forth assistive systems that focus heavily on LR tasks, while only a few studies have concentrated on approaches for helping researchers during manuscript preparation."
                    },
                    {
                        "id": 171,
                        "string": "With the Rec4LRW system, we have attempted to address the aforementioned gap with a novel task for shortlisting articles from researcher's reading list, for inclusion in manuscript."
                    },
                    {
                        "id": 172,
                        "string": "The shortlisting task makes use of a popular community detection algorithm [10] for identifying communities of papers generated from the citations network of the papers from the reading list."
                    },
                    {
                        "id": 173,
                        "string": "Additionally, we have also tried to vary shortlisted papers count by taking the article-type choice into consideration."
                    },
                    {
                        "id": 174,
                        "string": "In order to evaluate the system, a user evaluation study was conducted with 116 participants who had the experience of writing research papers."
                    },
                    {
                        "id": 175,
                        "string": "The participants were instructed to run each task followed by evaluation questionnaire."
                    },
                    {
                        "id": 176,
                        "string": "Participants were requested to answer survey questions and provide subjective feedback on the features of the tasks."
                    },
                    {
                        "id": 177,
                        "string": "As hypothesized before the start of the study, students evaluated the task favorably for all measures."
                    },
                    {
                        "id": 178,
                        "string": "There was high level of agreement among all participants on the availability of important papers among the shortlisted papers."
                    },
                    {
                        "id": 179,
                        "string": "This finding validates the aim of the task in identifying the papers that manuscript reviewers would expected to be cited."
                    },
                    {
                        "id": 180,
                        "string": "In the qualitative feedback provided by the participants, majority of the participants preferred the idea of shortlisting papers and also thought the output of the task was of good quality."
                    },
                    {
                        "id": 181,
                        "string": "Secondly, they liked the information cue labels provided along with certain papers, for indicating the special nature of the paper."
                    },
                    {
                        "id": 182,
                        "string": "As a part of critical feedback, participants felt that the study procedure was a bit longwinded as they had to select 30 papers without reading them, just for running the task."
                    },
                    {
                        "id": 183,
                        "string": "As a part of future work, the scope for this task will be expanded to bring in more variations for the different article-type choices."
                    },
                    {
                        "id": 184,
                        "string": "For instance, research would be conducted:-(i) to ascertain the quantity of recent papers to be shortlisted for different article-type choices, (ii) include new papers in the output so that the user is alerted about some key paper(s) which could have been missed during literature review, (iii) provide more user control in the system so that the user can select papers as mandatory to be shortlisted and (iv) Integrate this task with the citation context recommendation task [11, 14] so that the user can be fully aided during the whole process of manuscript writing."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 31
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 32,
                        "end": 52
                    },
                    {
                        "section": "Brief Overview",
                        "n": "3.1",
                        "start": 53,
                        "end": 63
                    },
                    {
                        "section": "Dataset",
                        "n": "3.2",
                        "start": 64,
                        "end": 67
                    },
                    {
                        "section": "User-Interface (UI) Features",
                        "n": "3.3",
                        "start": 68,
                        "end": 82
                    },
                    {
                        "section": "Technique For Shortlisting Papers From Reading List",
                        "n": "4",
                        "start": 83,
                        "end": 133
                    },
                    {
                        "section": "Agreement Percentages (AP)",
                        "n": "6.1",
                        "start": 134,
                        "end": 147
                    },
                    {
                        "section": "Qualitative Data Analysis",
                        "n": "6.2",
                        "start": 148,
                        "end": 168
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "7",
                        "start": 169,
                        "end": 184
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/959-Figure2-1.png",
                        "caption": "Fig. 2. Agreement percentage results by participant group",
                        "page": 8,
                        "bbox": {
                            "x1": 142.56,
                            "x2": 452.15999999999997,
                            "y1": 249.6,
                            "y2": 408.96
                        }
                    },
                    {
                        "filename": "../figure/image/959-Figure1-1.png",
                        "caption": "Fig. 1. Sample list of shortlisted papers for the task output",
                        "page": 4,
                        "bbox": {
                            "x1": 124.8,
                            "x2": 473.28,
                            "y1": 183.84,
                            "y2": 397.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/959-Table2-1.png",
                        "caption": "Table 2. Top five categories for preferred and critical aspects",
                        "page": 9,
                        "bbox": {
                            "x1": 117.6,
                            "x2": 477.12,
                            "y1": 211.67999999999998,
                            "y2": 307.2
                        }
                    },
                    {
                        "filename": "../figure/image/959-Table1-1.png",
                        "caption": "Table 1. Evaluation measures and corresponding questions",
                        "page": 6,
                        "bbox": {
                            "x1": 118.56,
                            "x2": 477.12,
                            "y1": 639.84,
                            "y2": 668.64
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-3"
        },
        {
            "slides": {
                "0": {
                    "title": "Key point Syntactic Information",
                    "text": [
                        "To use or not to use?",
                        "string-to-string model tree/graph-to-string model"
                    ],
                    "page_nums": [
                        2,
                        3,
                        4,
                        5
                    ],
                    "images": []
                },
                "8": {
                    "title": "English Chinese",
                    "text": [
                        "s2s is the worst",
                        "More syntactic information is useful Chinese",
                        "No score is the worst English-",
                        "Score is useful Chinese",
                        "SoA is better than SoE",
                        "Adjusting attention is better than adjusting word embedding",
                        "Forest is better than 1-best English-",
                        "Forest (No score) is worse than 1-best (SoE/SoA)",
                        "FS/TN is worse than 1-best (SoE/SoA) English-",
                        "Better to use score in linearization Chinese"
                    ],
                    "page_nums": [
                        51,
                        52,
                        53,
                        54,
                        55,
                        56
                    ],
                    "images": [
                        "figure/image/961-Table3-1.png",
                        "figure/image/961-Table2-1.png"
                    ]
                },
                "9": {
                    "title": "English Japanese",
                    "text": [
                        "s2s is the worst",
                        "No score is the worst",
                        "SoA is better than SoE",
                        "Forest is better than 1-best",
                        "Forest (No score) is worse",
                        "FS/TN is worse than 1-best"
                    ],
                    "page_nums": [
                        57
                    ],
                    "images": [
                        "figure/image/961-Table3-1.png"
                    ]
                },
                "10": {
                    "title": "Merits and Demerits",
                    "text": [
                        "Use syntactic information explicitly",
                        "Simpler model, more information",
                        "Robust to parsing errors",
                        "Lots of sentences are filtered out due to lengths"
                    ],
                    "page_nums": [
                        58
                    ],
                    "images": []
                }
            },
            "paper_title": "Forest-Based Neural Machine Translation",
            "paper_id": "961",
            "paper": {
                "title": "Forest-Based Neural Machine Translation",
                "abstract": "Tree-based neural machine translation (NMT) approaches, although achieved impressive performance, suffer from a major drawback: they only use the 1best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors. For statistical machine translation (SMT), forestbased methods have been proven to be effective for solving this problem, while for NMT this kind of approach has not been attempted. This paper proposes a forest-based NMT method that translates a linearized packed forest under a simple sequence-to-sequence framework (i.e., a forest-to-string NMT model). The BLEU score of the proposed method is higher than that of the string-to-string NMT, treebased NMT, and forest-based SMT systems.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction NMT has witnessed promising improvements recently."
                    },
                    {
                        "id": 1,
                        "string": "Depending on the types of input and output, these efforts can be divided into three categories: string-to-string systems ; tree-to-string systems (Eriguchi et al., 2016 (Eriguchi et al., , 2017 ; and string-totree systems (Aharoni and Goldberg, 2017; Nadejde et al., 2017) ."
                    },
                    {
                        "id": 2,
                        "string": "Compared with string-to-string systems, tree-to-string and string-to-tree systems (henceforth, tree-based systems) offer some attractive features."
                    },
                    {
                        "id": 3,
                        "string": "They can use more syntactic information , and can conveniently incorporate prior knowledge ."
                    },
                    {
                        "id": 4,
                        "string": "* Contribution during internship at National Institute of Information and Communications Technology."
                    },
                    {
                        "id": 5,
                        "string": "† Corresponding author Because of these advantages, tree-based methods become the focus of many researches of NMT nowadays."
                    },
                    {
                        "id": 6,
                        "string": "Based on how to represent trees, there are two main categories of tree-based NMT methods: representing trees by a tree-structured neural network (Eriguchi et al., 2016; Zaremoodi and Haffari, 2017) , representing trees by linearization (Vinyals et al., 2015; Dyer et al., 2016; Ma et al., 2017) ."
                    },
                    {
                        "id": 7,
                        "string": "Compared with the former, the latter method has a relatively simple model structure, so that a larger corpus can be used for training and the model can be trained within reasonable time, hence is preferred from the viewpoint of computation."
                    },
                    {
                        "id": 8,
                        "string": "Therefore we focus on this kind of methods in this paper."
                    },
                    {
                        "id": 9,
                        "string": "In spite of impressive performance of tree-based NMT systems, they suffer from a major drawback: they only use the 1-best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors (Quirk and Corston-Oliver, 2006) ."
                    },
                    {
                        "id": 10,
                        "string": "For SMT, forest-based methods have employed a packed forest to address this problem (Huang, 2008) , which represents exponentially many parse trees rather than just the 1-best one ."
                    },
                    {
                        "id": 11,
                        "string": "But for NMT, (computationally efficient) forestbased methods are still being explored 1 ."
                    },
                    {
                        "id": 12,
                        "string": "Because of the structural complexity of forests, the inexistence of appropriate topological ordering, and the hyperedge-attachment nature of weights (see Section 3.1 for details), it is not trivial to linearize a forest."
                    },
                    {
                        "id": 13,
                        "string": "This hinders the development of forest-based NMT to some extent."
                    },
                    {
                        "id": 14,
                        "string": "Inspired by the tree-based NMT methods based on linearization, we propose an efficient forestbased NMT approach (Section 3), which can en-code the syntactic information of a packed forest on the basis of a novel weighted linearization method for a packed forest (Section 3.1), and can decode the linearized packed forest under the simple sequence-to-sequence framework (Section 3.2) ."
                    },
                    {
                        "id": 15,
                        "string": "Experiments demonstrate the effectiveness of our method (Section 4)."
                    },
                    {
                        "id": 16,
                        "string": "Preliminaries We first review the general sequence-to-sequence model (Section 2.1), then describe tree-based NMT systems based on linearization (Section 2.2), and finally introduce the packed forest, through which exponentially many trees can be represented in a compact manner (Section 2.3)."
                    },
                    {
                        "id": 17,
                        "string": "Sequence-to-sequence model Current NMT systems usually resort to a simple framework, i.e., the sequence-to-sequence model ."
                    },
                    {
                        "id": 18,
                        "string": "Given a source sequence (x_0, ..., x_T), in order to find a target sequence (y_0, ..., y_T) that maximizes the conditional probability p(y_0, ..., y_T | x_0, ..., x_T), the sequence-to-sequence model uses one RNN to encode the source sequence into a fixed-length context vector c and a second RNN to decode this vector and generate the target sequence."
                    },
                    {
                        "id": 31,
                        "string": "Formally, the probability of the target sequence can be calculated as follows: p(y_0, ..., y_T | x_0, ..., x_T) = ∏_{t=0}^{T} p(y_t | c, y_0, ..., y_{t−1}), (1) where p(y_t | c, y_0, ..., y_{t−1}) = g(y_{t−1}, s_t, c), (2) s_t = f(s_{t−1}, y_{t−1}, c), (3) c = q(h_0, ..., h_T), (4) h_t = f(e_t, h_{t−1}). (5)"
                    },
                    {
                        "id": 47,
                        "string": "Here, g, f, and q are nonlinear functions; h_t and s_t are the hidden states of the source-side RNN and target-side RNN, respectively; c is the context vector; and e_t is the embedding of x_t."
                    },
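                    {
                        "id": "47a",
                        "string": "A minimal numpy sketch of Equations 1-5 (ours, not from the paper), with f reduced to a plain tanh recurrence and q taken to be the last hidden state, purely for illustration:\nimport numpy as np\n\ndef rnn_step(e_t, h_prev, W, U):\n    # Equation 5: h_t = f(e_t, h_{t-1})\n    return np.tanh(W @ e_t + U @ h_prev)\n\ndef encode(E, W, U):\n    # E: embeddings e_0..e_T of the source tokens, shape (T+1, d_word)\n    h = np.zeros(W.shape[0])\n    for e_t in E:\n        h = rnn_step(e_t, h, W, U)\n    return h  # Equation 4 with q = 'take the last hidden state'"
                    },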
                    {
                        "id": 48,
                        "string": "introduced an attention mechanism to deal with the issues related to long sequences ."
                    },
                    {
                        "id": 49,
                        "string": "Instead of encoding the source sequence into a fixed vector c, the attention model uses different c i -s when calculating the target-side output y i at time step i: c i = T j=0 α ij h j , (6) α ij = exp(a(s i−1 , h j )) T k=0 exp(a(s i−1 , h k )) ."
                    },
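                    {
                        "id": "49a",
                        "string": "A numpy sketch of Equations 6 and 7, with the alignment function a(s, h) assumed to be a dot product for illustration:\nimport numpy as np\n\ndef attention_context(s_prev, H):\n    # H: encoder hidden states, shape (T+1, d_h)\n    scores = np.array([np.dot(s_prev, h) for h in H])  # a(s_{i-1}, h_j)\n    alpha = np.exp(scores - scores.max())\n    alpha /= alpha.sum()  # Equation 7: softmax\n    return alpha @ H      # Equation 6: c_i = sum_j alpha_ij h_j"
                    },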
                    {
                        "id": 50,
                        "string": "(7) The function a(s i−1 , h j ) can be regarded as representing the soft alignment between the target-side RNN hidden state s i−1 and the source-side RNN hidden state h j ."
                    },
                    {
                        "id": 51,
                        "string": "By changing the format of the source/target sequences, this framework can be regarded as a string-to-string NMT system , a tree-to-string NMT system , or a string-to-tree NMT system (Aharoni and Goldberg, 2017) ."
                    },
                    {
                        "id": 52,
                        "string": "Linear-structured tree-based NMT systems Regarding the linearization adopted for tree-tostring NMT (i.e., linearization of the source side), Sennrich and Haddow (2016) encoded the sequence of dependency labels and the sequence of words simultaneously, partially utilizing the syntax information, while  traversed the constituent tree of the source sentence and combined this with the word sequence, utilizing the syntax information completely."
                    },
                    {
                        "id": 53,
                        "string": "Regarding the linearization used for string-to-tree NMT (i.e., linearization of the target side), Nadejde et al. (2017) used a CCG supertag sequence as the target sequence, while Aharoni and Goldberg (2017) applied a linearization method in a top-down manner, generating a sequence ensemble for the annotated tree in the Penn Treebank (Marcus et al., 1993)."
                    },
                    {
                        "id": 55,
                        "string": "Wu et al. (2017) used transition actions to linearize a dependency tree, and employed the sequence-to-sequence framework for NMT."
                    },
                    {
                        "id": 57,
                        "string": "It can be seen all current tree-based NMT systems use only one tree for encoding or decoding."
                    },
                    {
                        "id": 58,
                        "string": "In contrast, we hope to utilize multiple trees (i.e., a forest)."
                    },
                    {
                        "id": 59,
                        "string": "This is not trivial, on account of the lack of a fixed traversal order and the need for a compact representation."
                    },
                    {
                        "id": 60,
                        "string": "Packed forest The packed forest gives a representation of exponentially many parsing trees, and can compactly encode many more candidates than the n-best list (Huang, 2008)."
                    },
                    {
                        "id": 61,
                        "string": "Figure 1: An example of (a) a packed forest. The numbers in the brackets located at the upper-left corner of each node in the packed forest show one correct topological ordering of the nodes."
                    },
                    {
                        "id": 62,
                        "string": "The packed forest is a compact representation of two trees: (b) the correct constituent tree, and (c) an incorrect constituent tree."
                    },
                    {
                        "id": 63,
                        "string": "Note that the terminal nodes (i.e., words in the sentence) in the packed forest are shown only for illustration, and they do not belong to the packed forest."
                    },
                    {
                        "id": 65,
                        "string": "Figure 1a shows a packed forest, which can be unpacked into two constituent trees (Figure 1b and Figure 1c)."
                    },
                    {
                        "id": 66,
                        "string": "Formally, a packed forest is a pair V, E , where V is the set of nodes and E is the set of hyperedges."
                    },
                    {
                        "id": 67,
                        "string": "Each v ∈ V can be represented as X i,j , where X is a constituent label and i, j ∈ [0, n] are indices of words, showing that the node spans the words ranging from i (inclusive) to j (exclusive)."
                    },
                    {
                        "id": 68,
                        "string": "Here, n is the length of the input sentence."
                    },
                    {
                        "id": 69,
                        "string": "Each e ∈ E is a three-tuple head(e), tails(e), score(e) , where head(e) ∈ V is similar to the head node in a constituent tree, and tails(e) ∈ V * is similar to the set of child nodes in a constituent tree."
                    },
                    {
                        "id": 70,
                        "string": "score(e) ∈ R is the logarithm of the probability that tails(e) represents the tails of head(e) calculated by the parser."
                    },
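                    {
                        "id": "70a",
                        "string": "A minimal Python sketch (ours, not from the paper) of the packed-forest structures just defined; the class and field names are illustrative assumptions:\nfrom dataclasses import dataclass\n\n@dataclass(frozen=True)\nclass Node:\n    label: str  # constituent label X\n    i: int      # span start, inclusive\n    j: int      # span end, exclusive\n\n@dataclass(frozen=True)\nclass Hyperedge:\n    head: 'Node'\n    tails: tuple  # tuple of Node objects, empty for leaf hyperedges\n    score: float  # log-probability assigned by the parser\n\n@dataclass\nclass PackedForest:\n    nodes: set\n    edges: list"
                    },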
                    {
                        "id": 71,
                        "string": "Based on score(e), the score of a constituent tree T can be calculated as follows: score(T) = −λn + Σ_{e∈E(T)} score(e), (8) where E(T) is the set of hyperedges appearing in tree T, and λ is a regularization coefficient for the sentence length (following the configuration of Charniak and Johnson (2005), we fixed λ to log_2 600 for all the experiments in this paper)."
                    },
                    {
                        "id": 72,
                        "string": "Forest-based NMT We first propose a linearization method for the packed forest (Section 3.1), then describe how to encode the linearized forest (Section 3.2), which can then be translated by the conventional decoder (see Section 2.1)."
                    },
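                    {
                        "id": "72a",
                        "string": "A one-function sketch of Equation 8 under the structures above; the default for lambda follows the footnoted setting log_2 600, read here as an assumption:\nimport math\n\ndef tree_score(tree_edges, n, lam=math.log2(600)):\n    # Equation 8: score(T) = -lambda * n + sum of hyperedge scores in E(T)\n    return -lam * n + sum(e.score for e in tree_edges)"
                    },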
                    {
                        "id": 73,
                        "string": "Forest linearization Recently, several studies have focused on the linearization methods of a syntax tree, both in the area of tree-based NMT (Section 2.2) and in the area of parsing (Vinyals et al., 2015; Dyer et al., 2016; Ma et al., 2017) ."
                    },
                    {
                        "id": 74,
                        "string": "Basically, these methods follow a fixed traversal order (e.g., depthfirst), which does not exist for the packed forest (a directed acyclic graph (DAG))."
                    },
                    {
                        "id": 75,
                        "string": "Furthermore, the weights are attached to edges of a packed forest instead of the nodes, which further increase the difficulty."
                    },
                    {
                        "id": 76,
                        "string": "Topological ordering algorithms for DAG (Kahn, 1962; Tarjan, 1976) are not good solutions, because the outputted ordering is not always optimal for machine translation."
                    },
                    {
                        "id": 77,
                        "string": "In particular, a topological ordering could ignore \"word sequential information\" and \"parent-child information\" in the sentences."
                    },
                    {
                        "id": 78,
                        "string": "Algorithm 1 Linearization of a packed forest. 1: function LINEARIZEFOREST(⟨V, E⟩, w) 2: v ← FINDROOT(V) 3: r ← [] 4: EXPANDSEQ(v, r, ⟨V, E⟩, w) 5: return r 6: function FINDROOT(V) 7: for v ∈ V do 8: if v has no parent then 9: return v 10: procedure EXPANDSEQ(v, r, ⟨V, E⟩, w) 11: for e ∈ E do 12: if head(e) = v then 13: if tails(e) ≠ ∅ then 14: for t ∈ SORT(tails(e)) do (sort tails(e) by word indices)"
                    },
                    {
                        "id": 79,
                        "string": "15: EXPANDSEQ(t, r, ⟨V, E⟩, w) 16: l ← LINEARIZEEDGE(head(e), w) 17: r.append(⟨l, σ(0.0)⟩) (σ is the sigmoid function, i.e., σ(x) = 1/(1 + e^{−x}), x ∈ R) 18: l ← c LINEARIZEEDGES(tails(e), w) (c is a unary operator)"
                    },
                    {
                        "id": 80,
                        "string": "19: r.append(⟨l, σ(score(e))⟩) 20: else 21: l ← LINEARIZEEDGE(head(e), w) 22: r.append(⟨l, σ(0.0)⟩) 23: function LINEARIZEEDGE(X_{i,j}, w) 24: return X ⊗ (w_i ... w_{j−1}) 25: function LINEARIZEEDGES(v, w) 26: return ⊕_{v∈v} LINEARIZEEDGE(v, w)"
                    },
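                    {
                        "id": "80a",
                        "string": "A Python rendering of Algorithm 1, assuming the Node/Hyperedge sketch above; the lost operator glyphs are written as plain strings ('⊗', '⊕', and a leading 'c ' for the child operator):\nimport math\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + math.exp(-x))\n\ndef linearize_forest(forest, words):\n    # Lines 1-9: start from the unique node that is no hyperedge's tail\n    root = next(v for v in forest.nodes\n                if not any(v in e.tails for e in forest.edges))\n    r = []\n    expand_seq(root, r, forest, words)\n    return r  # list of (symbol, score) pairs\n\ndef expand_seq(v, r, forest, words):\n    for e in forest.edges:\n        if e.head != v:\n            continue\n        if e.tails:  # Line 13: tails(e) is non-empty\n            for t in sorted(e.tails, key=lambda t: t.i):  # by word indices\n                expand_seq(t, r, forest, words)\n            r.append((linearize_edge(e.head, words), sigmoid(0.0)))\n            r.append(('c ' + linearize_edges(e.tails, words), sigmoid(e.score)))\n        else:  # Lines 20-22: leaf hyperedge\n            r.append((linearize_edge(e.head, words), sigmoid(0.0)))\n\ndef linearize_edge(node, words):\n    # Line 24: label joined with its word span w_i..w_{j-1}\n    return node.label + '⊗' + ' '.join(words[node.i:node.j])\n\ndef linearize_edges(nodes, words):\n    # Line 26: combine child linearizations with '⊕'\n    return ' ⊕ '.join(linearize_edge(v, words) for v in nodes)"
                    },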
                    {
                        "id": 81,
                        "string": "For example, for the packed forest in Figure 1a , although \"[10]→[1]→[2]→ · · · →[9]→[11]\" is a valid topological ordering, the word sequential information of the words (e.g., \"John\" should be located ahead of the period), which is fairly crucial for translation of languages with fixed pragmatic word order such as Chinese or English, is lost."
                    },
                    {
                        "id": 82,
                        "string": "As another example, for the packed forest in Figure 1a , nodes [2], [9], and [10] are all the children of node [11] ."
                    },
                    {
                        "id": 83,
                        "string": "However, in the topological or- der \"[1]→[2]→ · · · →[9]→[10]→[11],\" node [2] is quite far from node [11], while nodes [9] and [10] are both close to node [11] ."
                    },
                    {
                        "id": 84,
                        "string": "The parent-child information cannot be reflected in this topological order, which is not what we would expect."
                    },
                    {
                        "id": 85,
                        "string": "To address the above two problems, we propose a novel linearization algorithm for a packed forest (Algorithm 1)."
                    },
                    {
                        "id": 86,
                        "string": "The algorithm linearizes the packed forest from the root node (Line 2) to leaf nodes by calling the EXPANDSEQ procedure (Line 15) recursively, while preserving the word order in the sentence (Line 14)."
                    },
                    {
                        "id": 87,
                        "string": "In this way, word sequential information is preserved."
                    },
                    {
                        "id": 88,
                        "string": "Within the Figure 1a EXPANDSEQ procedure, once a hyperedge is linearized (Line 16), the tails are also linearized immediately (Line 18)."
                    },
                    {
                        "id": 89,
                        "string": "In this way, parent-child information is preserved."
                    },
                    {
                        "id": 90,
                        "string": "Intuitively, different parts of constituent trees should be combined in different ways, therefore we define different operators ( c , ⊗, ⊕, or ) to represent the relationships between different parts, so that the representations of these parts can be combined in different ways (see Section 3.2 for details)."
                    },
                    {
                        "id": 91,
                        "string": "Words are concatenated by the operator \" \" with each other, a word and a constituent label is concatenated by the operator \"⊗\", the linearization results of child nodes are concatenated by the operator \"⊕\" with each other, while the unary operator \" c \" is used to indicate that the node is the child node of the previous part."
                    },
                    {
                        "id": 92,
                        "string": "Furthermore, each token in the linearized sequence is related to a score, representing the confidence of the parser."
                    },
                    {
                        "id": 93,
                        "string": "The linearization result of the packed forest in Figure 1a is shown in Figure 2 ."
                    },
                    {
                        "id": 94,
                        "string": "Tokens in the linearized sequence are separated by slashes."
                    },
                    {
                        "id": 95,
                        "string": "Each token in the sequence is composed of different types of symbols and combined by different operators."
                    },
                    {
                        "id": 96,
                        "string": "We can see that word sequential information is preserved."
                    },
                    {
                        "id": 97,
                        "string": "For example, \"NNP⊗John\" (linearization result of node [1]) is in front of \"VBZ⊗has\" (linearization result of node [3]), which is in front of \"DT⊗a\" (linearization result of node [4])."
                    },
                    {
                        "id": 98,
                        "string": "Moreover, parent-child information is also preserved."
                    },
                    {
                        "id": 99,
                        "string": "For example, \"NP⊗John\" (linearization result of node [2]) is followed by \" c NNP⊗John\" (linearization result of node [1], the child of node [2])."
                    },
                    {
                        "id": 100,
                        "string": "Note that our linearization method cannot fully recover packed forest."
                    },
                    {
                        "id": 101,
                        "string": "What we want to do is not to propose a fully recoverable linearization method."
                    },
                    {
                        "id": 102,
                        "string": "What we actually want to do is to encode syntax information as much as possible, so that we can improve the performance of NMT."
                    },
                    {
                        "id": 103,
                        "string": "As will be shown in Section 4, this goal is achieved."
                    },
                    {
                        "id": 104,
                        "string": "Also note that there is one more advantage of our linearization method: the linearized sequence is a weighted sequence, while all the previous studies ignored the weights during linearization."
                    },
                    {
                        "id": 105,
                        "string": "Figure 3: The framework of the forest-based NMT system."
                    },
                    {
                        "id": 106,
                        "string": "As will be shown in Section 4, the weights are actually important not only for the linearization of a packed forest, but also for the linearization of a single tree."
                    },
                    {
                        "id": 107,
                        "string": "By preserving only the nodes and hyperedges in the 1-best tree and removing all others, our linearization method can be regarded as a treelinearization method."
                    },
                    {
                        "id": 108,
                        "string": "Compared with other treelinearization methods, our method combines several different kinds of information within one symbol, retaining the parent-child information, and incorporating the confidence of the parser in the sequence."
                    },
                    {
                        "id": 109,
                        "string": "We examine whether the weights can be useful not only for linear structured tree-based NMT but also for our forest-based NMT."
                    },
                    {
                        "id": 110,
                        "string": "Furthermore, although our method is nonreversible for packed forests, it is reversible for constituent trees, in that the linearization is processed exactly in the depth-first traversal order and all necessary information in the tree nodes has been encoded."
                    },
                    {
                        "id": 111,
                        "string": "As far as we know, there is no previous work on linearization of packed forests."
                    },
                    {
                        "id": 112,
                        "string": "Encoding the linearized forest The linearized packed forest forms the input of the encoder, which has two major differences from the input of a sequence-to-sequence NMT system."
                    },
                    {
                        "id": 113,
                        "string": "First, the input sequence of the encoder consists of two parts: the symbol sequence and the score sequence."
                    },
                    {
                        "id": 114,
                        "string": "Second, each symbol in the symbol sequence consists of several parts (words and constituent labels), which are combined by certain operators ( c , ⊗, ⊕, or )."
                    },
                    {
                        "id": 115,
                        "string": "Based on these observa-tions, we propose two new frameworks, which are illustrated in Figure 3 ."
                    },
                    {
                        "id": 116,
                        "string": "Formally, the input layer receives the sequence (⟨l_0, ξ_0⟩, ..., ⟨l_T, ξ_T⟩), where l_i denotes the i-th symbol and ξ_i its score."
                    },
                    {
                        "id": 120,
                        "string": "Then, the sequence is fed into the score layer and the symbol layer."
                    },
                    {
                        "id": 121,
                        "string": "The score and symbol layers receive the sequence and output the score sequence ξ = (ξ_0, ..., ξ_T) and the symbol sequence l = (l_0, ..., l_T), respectively."
                    },
                    {
                        "id": 128,
                        "string": "Any item l ∈ l in the symbol layer has the form l = o_0 x_1 o_1 ... x_{m−1} o_{m−1} x_m, (9) where each x_k (k = 1, ..., m) is a word or a constituent label, m is the total number of words and constituent labels in a symbol, o_0 is the child operator \"c\" or empty, and each o_k (k = 1, ..., m−1) is \"⊗\", \"⊕\", or the word-concatenation operator."
                    },
                    {
                        "id": 138,
                        "string": "Then, in the node/operator layer, the x-s and o-s are separated and rearranged as x = (x_1, ..., x_m, o_0, ..., o_{m−1}), which is fed to the pre-embedding layer."
                    },
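                    {
                        "id": "138a",
                        "string": "A small sketch of the symbol split and rearrangement (Equation 9 and the node/operator layer); 'CONCAT' stands in for the paper's word-concatenation glyph, which did not survive extraction:\nOPERATORS = {'c', '⊗', '⊕', 'CONCAT'}\n\ndef split_symbol(tokens):\n    # Separate the x-s (words/labels) from the o-s (operators) of one symbol\n    # and rearrange them as (x_1, ..., x_m, o_0, ..., o_{m-1}).\n    xs = [t for t in tokens if t not in OPERATORS]\n    os = [t for t in tokens if t in OPERATORS]\n    return xs + os\n\n# e.g. split_symbol(['c', 'NP', '⊗', 'John']) -> ['NP', 'John', 'c', '⊗']"
                    },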
                    {
                        "id": 145,
                        "string": "The pre-embedding layer generates a sequence p = (p_1, ..., p_{2m}), which is calculated as follows: p = W_emb[I(x)]. (10)"
                    },
                    {
                        "id": 152,
                        "string": "Here, the function I(x) returns a list of the indices in the dictionary for all the elements in x, which consist of words, constituent labels, or operators."
                    },
                    {
                        "id": 153,
                        "string": "In addition, W_emb is the embedding matrix of size (|w_word| + |w_label| + 4) × d_word, where |w_word| and |w_label| are the total numbers of words and constituent labels, respectively, d_word is the dimension of the word embedding, and there are four possible operators: the child operator \"c\", \"⊗\", \"⊕\", and the word-concatenation operator."
                    },
                    {
                        "id": 154,
                        "string": "Note that p is a list of 2m vectors, and the dimension of each vector is d_word."
                    },
                    {
                        "id": 155,
                        "string": "Because the length of the sequence of the input layer is T + 1, there are T + 1 different p-s in the pre-embedding layer, which we denote by P = (p_0, ..., p_T)."
                    },
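                    {
                        "id": "155a",
                        "string": "A numpy sketch of Equation 10 with a toy dictionary; the vocabulary and sizes are illustrative only:\nimport numpy as np\n\nvocab = {'NP': 0, 'John': 1, 'c': 2, '⊗': 3}  # words + labels + operators\nd_word = 4\nW_emb = np.random.randn(len(vocab), d_word)   # embedding matrix\n\ndef pre_embed(x):\n    # Equation 10: p = W_emb[I(x)], one d_word vector per element of x\n    return W_emb[[vocab[t] for t in x]]\n\np = pre_embed(['NP', 'John', 'c', '⊗'])       # shape (2m, d_word) = (4, 4)"
                    },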
                    {
                        "id": 159,
                        "string": "Depending on where the score layer is incorporated, we propose two frameworks: Score-on-Embedding (SoE) and Score-on-Attention (SoA)."
                    },
                    {
                        "id": 160,
                        "string": "In SoE, the k-th element of the embedding layer is calculated as follows: e_k = ξ_k Σ_{p∈p_k} p, (11) while in SoA, the k-th element of the embedding layer is calculated as e_k = Σ_{p∈p_k} p, (12) where k = 0, ..., T."
                    },
                    {
                        "id": 164,
                        "string": "Note that e_k ∈ R^{d_word}."
                    },
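                    {
                        "id": "164a",
                        "string": "A numpy sketch of Equations 11 and 12, where p_k is the list of pre-embedding vectors for the k-th token and xi_k its score:\nimport numpy as np\n\ndef embed_soe(p_k, xi_k):\n    # Equation 11 (SoE): the parser score scales the summed pre-embeddings\n    return xi_k * np.sum(p_k, axis=0)\n\ndef embed_soa(p_k, xi_k):\n    # Equation 12 (SoA): the score is ignored here and applied later,\n    # inside the attention weights (Equation 13)\n    return np.sum(p_k, axis=0)"
                    },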
                    {
                        "id": 165,
                        "string": "In this manner, the proposed forest-to-string NMT framework is connected with the conventional sequence-to-sequence NMT framework."
                    },
                    {
                        "id": 166,
                        "string": "After calculating the embedding vectors in the embedding layer, the hidden vectors are calculated using Equation 5."
                    },
                    {
                        "id": 167,
                        "string": "When calculating the context vector c i -s, SoE and SoA differ from each other."
                    },
                    {
                        "id": 168,
                        "string": "For SoE, the c i -s are calculated using Equation 6 and 7, while for SoA, the α ij -s used to calculate the c i -s are determined as follows: α ij = exp(ξ j a(s i−1 , h j )) T k=0 exp(ξ k a(s i−1 , h k )) ."
                    },
                    {
                        "id": 169,
                        "string": "(13) Then, using the decoder of the sequence-tosequence framework, the sentence of the target language can be generated."
                    },
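                    {
                        "id": "169a",
                        "string": "A numpy sketch of the SoA attention in Equation 13, with a(s, h) taken to be a dot product purely for illustration:\nimport numpy as np\n\ndef soa_attention(s_prev, H, xi):\n    # Equation 13: alpha_ij is proportional to exp(xi_j * a(s_{i-1}, h_j))\n    scores = np.array([x * np.dot(s_prev, h) for x, h in zip(xi, H)])\n    scores -= scores.max()              # numerical stability\n    alpha = np.exp(scores) / np.exp(scores).sum()\n    return alpha @ H                    # context vector c_i, as in Equation 6"
                    },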
                    {
                        "id": 170,
                        "string": "Experiments Setup We evaluate the effectiveness of our forest-based NMT systems on English-to-Chinese and English-to-Japanese translation tasks (English is commonly chosen as the target language; we chose it as the source language here because a high-performance forest parser is not available for other languages)."
                    },
                    {
                        "id": 171,
                        "string": "The statistics of the corpora used in our experiments are summarized in Table 1."
                    },
                    {
                        "id": 172,
                        "string": "The packed forests of English sentences are obtained by the constituent parser proposed by Huang (2008)."
                    },
                    {
                        "id": 173,
                        "string": "We filtered out over-length sentences."
                    },
                    {
                        "id": 175,
                        "string": "For Japanese sentences, we followed the preprocessing steps recommended in WAT 2017 6 ."
                    },
                    {
                        "id": 176,
                        "string": "We implemented our framework based on nematus 8 (Sennrich et al., 2017) ."
                    },
                    {
                        "id": 177,
                        "string": "For optimization, we used the Adadelta algorithm (Zeiler, 2012) ."
                    },
                    {
                        "id": 178,
                        "string": "In order to avoid overfitting, we used dropout (Srivastava et al., 2014) on the embedding layer and hidden layer, with the dropout probability set to 0.2."
                    },
                    {
                        "id": 179,
                        "string": "We used the gated recurrent unit  as the recurrent unit of RNNs, which are bi-directional, with one hidden layer."
                    },
                    {
                        "id": 180,
                        "string": "Based on the tuning result, we set the maximum length of the input sequence to 300, the hidden layer size as 512, the dimension of word embedding as 620, and the batch size for training as 40."
                    },
                    {
                        "id": 181,
                        "string": "We pruned the packed forest using the algorithm of Huang (2008) , with a threshold of 5."
                    },
                    {
                        "id": 182,
                        "string": "If the linearization of the pruned forest is still longer than 300, then we linearize the 1-best parsing tree instead of the forest."
                    },
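                    {
                        "id": "182a",
                        "string": "A sketch (added for illustration) of the length fallback just described; prune, linearize, and best_tree are hypothetical helpers, not the paper's API.\ndef encoder_input(forest, prune, linearize, max_len=300):\n    # Prune with the algorithm of Huang (2008), threshold 5; if the\n    # linearized forest still exceeds the length limit, fall back to\n    # linearizing the 1-best parsing tree instead.\n    seq = linearize(prune(forest, threshold=5))\n    if len(seq) > max_len:\n        seq = linearize(forest.best_tree())  # hypothetical accessor\n    return seq"
                    },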
                    {
                        "id": 183,
                        "string": "During decoding, we used beam search, and fixed the beam size to 12."
                    },
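                    {
                        "id": "183a",
                        "string": "The reported settings collected into one reference dict (added for convenience); the key names are illustrative, not actual nematus options.\nCONFIG = {\n    'optimizer': 'adadelta',        # Zeiler (2012)\n    'dropout': 0.2,                 # embedding and hidden layers\n    'rnn_cell': 'gru',              # bi-directional, one hidden layer\n    'max_input_length': 300,\n    'hidden_size': 512,\n    'embedding_dim': 620,\n    'batch_size': 40,\n    'forest_prune_threshold': 5,\n    'beam_size': 12,\n}"
                    },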
                    {
                        "id": 184,
                        "string": "For the case of Forest (SoA), with 1 core of Tesla K80 GPU and LDC corpus as the training data, training spent about 10 days, and decoding speed is about 10 sentences per second."
                    },
                    {
                        "id": 185,
                        "string": "Table 2 : English-Chinese experimental results (character-level BLEU)."
                    },
                    {
                        "id": 186,
                        "string": "\"FS,\" \"TN,\" and \"FN\" denote forest-based SMT, tree-based NMT, and forest-based NMT systems, respectively."
                    },
                    {
                        "id": 187,
                        "string": "We performed the paired bootstrap resampling significance test (Koehn, 2004) Table 3 : English-Japanese experimental results (character-level BLEU)."
                    },
                    {
                        "id": 188,
                        "string": "Experimental results Table 2 and 3 summarize the experimental results."
                    },
                    {
                        "id": 189,
                        "string": "To avoid the affect of segmentation errors, the performance were evaluated by character-level BLEU (Papineni et al., 2002) ."
                    },
                    {
                        "id": 190,
                        "string": "We compare our proposed models (i.e., Forest (SoE) and Forest (SoA)) with three types of baseline: a string-to-string model (s2s), forest-based models that do not use score sequences (Forest (No score)), and tree-based models that use the 1-best parsing tree (1-best (No score, SoE, SoA))."
                    },
                    {
                        "id": 191,
                        "string": "For the 1-best models, we preserve the nodes and hyperedges that are used in the 1-best constituent tree in the packed forest, and remove all other nodes and hyperedges, yielding a pruned forest that contains only the 1-best constituent tree."
                    },
                    {
                        "id": 192,
                        "string": "For the \"No score\" configurations, we force the input score sequence to be a sequence of 1.0 with the same length as the input symbol sequence, so that neither the embedding layer nor the attention layer are affected by the score sequence."
                    },
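                    {
                        "id": "192a",
                        "string": "The \"No score\" control in code form (added sketch, assuming scores are consumed as a per-symbol list).\ndef no_score_sequence(symbols):\n    # Neutral scores of 1.0, one per input symbol, so neither the\n    # embedding layer nor the attention layer is affected by them.\n    return [1.0] * len(symbols)"
                    },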
                    {
                        "id": 193,
                        "string": "In addition, we also perform a comparison with some state-of-the-art tree-based systems that are publicly available, including an SMT system  and the NMT systems (Eriguchi et al."
                    },
                    {
                        "id": 194,
                        "string": "(2016) 2017) )."
                    },
                    {
                        "id": 195,
                        "string": "For , we use the implementation of cicada 11 ."
                    },
                    {
                        "id": 196,
                        "string": "For , we reimplemented the \"Mixed RNN Encoder\" model, because of its outstanding performance on the NIST MT corpus."
                    },
                    {
                        "id": 197,
                        "string": "We can see that for both English-Chinese and English-Japanese, compared with the s2s baseline system, both the 1-best and forest-based configurations yield better results."
                    },
                    {
                        "id": 198,
                        "string": "This indicates syntactic information contained in the constituent trees or forests is indeed useful for machine translation."
                    },
                    {
                        "id": 199,
                        "string": "Specifically, we observe the following facts."
                    },
                    {
                        "id": 200,
                        "string": "First, among the three different frameworks SoE, SoA, and No-score, the SoA framework performs the best, while the No-score framework per-9 https://github.com/tempra28/tree2seq 10 https://github.com/howardchenhd/ Syntax-awared-NMT 11 https://github.com/tarowatanabe/ cicada [Source] In the Czech Republic , which was ravaged by serious floods last summer , the temperatures in its border region adjacent to neighboring Slovakia plunged to minus 18 degrees Celsius ."
                    },
                    {
                        "id": 201,
                        "string": "forms the worst."
                    },
                    {
                        "id": 202,
                        "string": "This indicates that the scores of the edges in constituent trees or packed forests, which reflect the confidence of the correctness of the edges, are indeed useful."
                    },
                    {
                        "id": 203,
                        "string": "In fact, for the 1-best constituent parsing tree, the score of the edge reflects the confidence of the parser."
                    },
                    {
                        "id": 204,
                        "string": "By using this information, the NMT system succeed to learn a better attention, paying much attention to the confident structure and not paying attention to the unconfident structure, which improved the translation performance."
                    },
                    {
                        "id": 205,
                        "string": "This fact is ignored by previous studies on tree-based NMT."
                    },
                    {
                        "id": 206,
                        "string": "Furthermore, it is better to use the scores to modify the values of attention instead of rescaling the word embeddings, because modifying word embeddings carelessly may change the semantic meanings of words."
                    },
                    {
                        "id": 207,
                        "string": "Second, compared with the cases that only using the 1-best constituent trees, using packed forests yields statistical significantly better results for the SoE and SoA frameworks."
                    },
                    {
                        "id": 208,
                        "string": "This shows the effectiveness of using more syntactic information."
                    },
                    {
                        "id": 209,
                        "string": "Compared with one constituent tree, the packed forest, which contains multiple different trees, describes the syntactic structure of the sentence in different aspects, which together increase the accuracy of machine translation."
                    },
                    {
                        "id": 210,
                        "string": "However, without using the scores, the 1-best constituent tree is preferred."
                    },
                    {
                        "id": 211,
                        "string": "This is because without using the scores, all trees in the packed forest are treated equally, which makes it easy to import noise into the encoder."
                    },
                    {
                        "id": 212,
                        "string": "Compared with other types of state-of-the-art systems, our systems using only the 1-best tree (1-best(SoE, SoA)) are better than the other treebased systems."
                    },
                    {
                        "id": 213,
                        "string": "Moreover, our NMT systems using the packed forests achieve the best performance."
                    },
                    {
                        "id": 214,
                        "string": "These results also support the usefulness of the scores of the edges and packed forests in NMT."
                    },
                    {
                        "id": 215,
                        "string": "As for the efficiency, the training time of the SoA system was slightly longer than that of the SoE system, which was about twice of the s2s baseline."
                    },
                    {
                        "id": 216,
                        "string": "The training time of the tree-based system was about 1.5 times of the baseline."
                    },
                    {
                        "id": 217,
                        "string": "For the case of Forest (SoA), with 1 core of Tesla P100 GPU and LDC corpus as the training data, training spent about 10 days, and decoding speed was about 10 sentences per second."
                    },
                    {
                        "id": 218,
                        "string": "The reason for the relatively low efficiency is that the linearized sequences of packed forests were much longer than word sequences, enlarging the scale of the inputs."
                    },
                    {
                        "id": 219,
                        "string": "Despite this, the training process ended within reasonable time."
                    },
                    {
                        "id": 220,
                        "string": "Figure 4 illustrates the translation results of an English sentence using several different configurations: the s2s baseline, using only the 1-best tree (SoE), and using the packed forest (SoE)."
                    },
                    {
                        "id": 221,
                        "string": "This is a sentence from NIST MT 03, and the training corpus is the LDC corpus."
                    },
                    {
                        "id": 222,
                        "string": "Qualitative analysis For the s2s case, no syntactic information is utilized, and therefore the output of the system is not a grammatical Chinese sentence."
                    },
                    {
                        "id": 223,
                        "string": "The attributive phrase of \"Czech border region\" is a complete sentence."
                    },
                    {
                        "id": 224,
                        "string": "However, the attributive is not allowed to be a complete sentence in Chinese."
                    },
                    {
                        "id": 225,
                        "string": "For the case of using 1-best constituent tree, the output is a grammatical Chinese sentence."
                    },
                    {
                        "id": 226,
                        "string": "However, the phrase \"adjacent to neighboring Slovakia\" is completely ignored in the translation result."
                    },
                    {
                        "id": 227,
                        "string": "After analyzing the constituent tree, we found that this phrase was incorrectly parsed as an \"adverb phrase\", so that the NMT system paid little attention to it, because of the low confidence given by the parser."
                    },
                    {
                        "id": 228,
                        "string": "In contrast, for the case of the packed forest, we can see this phrase was not ignored and was translated correctly."
                    },
                    {
                        "id": 229,
                        "string": "Actually, besides \"adverb phrase\", this phrase was also correctly parsed as an \"adjective phrase\", and covered by multiple different nodes in the forest, making it difficult for the encoder to ignore the phrase."
                    },
                    {
                        "id": 230,
                        "string": "We also noticed that our method performed better on learning attention."
                    },
                    {
                        "id": 231,
                        "string": "For the example in Figure 4 , we observed that for s2s model, the decoder paid attention to the word \"Czech\" twice, which causes the output sentence contains the Chinese translation of Czech twice."
                    },
                    {
                        "id": 232,
                        "string": "On the other hand, for our forest model, by using the syntax information, the decoder paid attention to the phrase \"In the Czech Republic\" only once, making the decoder generates the correct output."
                    },
                    {
                        "id": 233,
                        "string": "Related work Incorporating syntactic information into NMT systems is attracting widespread attention nowadays."
                    },
                    {
                        "id": 234,
                        "string": "Compared with conventional string-to-string NMT systems, tree-based systems demonstrate a better performance with the help of constituent trees or dependency trees."
                    },
                    {
                        "id": 235,
                        "string": "The first noteworthy study is Eriguchi et al."
                    },
                    {
                        "id": 236,
                        "string": "(2016) , which used Tree-structured LSTM (Tai et al., 2015) to encode the HPSG syntax tree of the sentence in the source-side in a bottom-up manner."
                    },
                    {
                        "id": 237,
                        "string": "Then, Chen et al."
                    },
                    {
                        "id": 238,
                        "string": "(2017) enhanced the encoder with a top-down tree encoder."
                    },
                    {
                        "id": 239,
                        "string": "As a simple extension of Eriguchi et al."
                    },
                    {
                        "id": 240,
                        "string": "(2016) , very recently, Zaremoodi and Haffari (2017) proposed a forest-based NMT method by representing the packed forest with a forest-structured neural network."
                    },
                    {
                        "id": 241,
                        "string": "However, their method was evaluated in small-scale MT settings (each training dataset consists of under 10k parallel sentences)."
                    },
                    {
                        "id": 242,
                        "string": "In contrast, our proposed method is effective in a largescale MT setting, and we present qualitative analysis regarding the effectiveness of using forests in NMT."
                    },
                    {
                        "id": 243,
                        "string": "Although these methods obtained good results, the tree-structured network used by the encoder made the training and decoding relatively slow, therefore restricts the scope of application."
                    },
                    {
                        "id": 244,
                        "string": "Other attempts at encoding syntactic trees have also been proposed."
                    },
                    {
                        "id": 245,
                        "string": "Eriguchi et al."
                    },
                    {
                        "id": 246,
                        "string": "(2017) combined the Recurrent Neural Network Grammar (Dyer et al., 2016) with NMT systems, while  linearized the constituent tree and encoded it using RNNs."
                    },
                    {
                        "id": 247,
                        "string": "The training of these methods is fast, because of the linear structures of RNNs."
                    },
                    {
                        "id": 248,
                        "string": "However, all these syntax-based NMT systems used only the 1-best parsing tree, making the systems sensitive to parsing errors."
                    },
                    {
                        "id": 249,
                        "string": "Instead of using trees to represent syntactic information, some studies use other data structures to represent the latent syntax of the input sentence."
                    },
                    {
                        "id": 250,
                        "string": "For example, Hashimoto and Tsuruoka (2017) proposed translating using a latent graph."
                    },
                    {
                        "id": 251,
                        "string": "However, such systems do not enjoy the benefit of handcrafted syntactic knowledge, because they do not use a parser trained from a large treebank with human annotations."
                    },
                    {
                        "id": 252,
                        "string": "Compared with these related studies, our framework utilizes a linearized packed forest, meaning the encoder can encode exponentially many trees in an efficient manner."
                    },
                    {
                        "id": 253,
                        "string": "The experimental results demonstrated these advantages."
                    },
                    {
                        "id": 254,
                        "string": "Conclusion and future work We proposed a new NMT framework, which encodes a packed forest for the source sentence using linear-structured neural networks, such as RNN."
                    },
                    {
                        "id": 255,
                        "string": "Compared with conventional string-tostring NMT systems and tree-to-string NMT systems, our framework can utilize exponentially many linearized parsing trees during encoding, without significantly decreasing the efficiency."
                    },
                    {
                        "id": 256,
                        "string": "This represents the first attempt at using a forest under the string-to-string NMT framework."
                    },
                    {
                        "id": 257,
                        "string": "The experimental results demonstrate the effectiveness of our framework."
                    },
                    {
                        "id": 258,
                        "string": "As future work, we plan to design some more elaborate structures to incorporate the score layer in the encoder."
                    },
                    {
                        "id": 259,
                        "string": "Further improvement in the translation performance is expected to be achieved for the forest-based NMT system."
                    },
                    {
                        "id": 260,
                        "string": "We will also apply the proposed linearization method to other tasks."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 15
                    },
                    {
                        "section": "Preliminaries",
                        "n": "2",
                        "start": 16,
                        "end": 16
                    },
                    {
                        "section": "Sequence-to-sequence model",
                        "n": "2.1",
                        "start": 17,
                        "end": 51
                    },
                    {
                        "section": "Linear-structured tree-based NMT systems",
                        "n": "2.2",
                        "start": 52,
                        "end": 59
                    },
                    {
                        "section": "Packed forest",
                        "n": "2.3",
                        "start": 60,
                        "end": 71
                    },
                    {
                        "section": "Forest-based NMT",
                        "n": "3",
                        "start": 72,
                        "end": 72
                    },
                    {
                        "section": "Forest linearization",
                        "n": "3.1",
                        "start": 73,
                        "end": 111
                    },
                    {
                        "section": "Encoding the linearized forest",
                        "n": "3.2",
                        "start": 112,
                        "end": 169
                    },
                    {
                        "section": "Setup",
                        "n": "4.1",
                        "start": 170,
                        "end": 187
                    },
                    {
                        "section": "Experimental results",
                        "n": "4.2",
                        "start": 188,
                        "end": 221
                    },
                    {
                        "section": "Qualitative analysis",
                        "n": "4.3",
                        "start": 222,
                        "end": 232
                    },
                    {
                        "section": "Related work",
                        "n": "5",
                        "start": 233,
                        "end": 253
                    },
                    {
                        "section": "Conclusion and future work",
                        "n": "6",
                        "start": 254,
                        "end": 260
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/961-Table1-1.png",
                        "caption": "Table 1: Statistics of the corpora.",
                        "page": 5,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 61.44,
                            "y2": 168.0
                        }
                    },
                    {
                        "filename": "../figure/image/961-Table2-1.png",
                        "caption": "Table 2: English-Chinese experimental results (character-level BLEU). “FS,” “TN,” and “FN” denote forest-based SMT, tree-based NMT, and forest-based NMT systems, respectively. We performed the paired bootstrap resampling significance test (Koehn, 2004) over the NIST MT 03 to 05 corpus, with respect to the s2s baseline, and list the p values in the table.",
                        "page": 6,
                        "bbox": {
                            "x1": 118.56,
                            "x2": 478.08,
                            "y1": 61.44,
                            "y2": 197.28
                        }
                    },
                    {
                        "filename": "../figure/image/961-Table3-1.png",
                        "caption": "Table 3: English-Japanese experimental results (character-level BLEU).",
                        "page": 6,
                        "bbox": {
                            "x1": 197.76,
                            "x2": 400.32,
                            "y1": 276.48,
                            "y2": 412.32
                        }
                    },
                    {
                        "filename": "../figure/image/961-Figure1-1.png",
                        "caption": "Figure 1: An example of (a) a packed forest. The numbers in the brackets located at the upper-left corner of each node in the packed forest show one correct topological ordering of the nodes. The packed forest is a compact representation of two trees: (b) the correct constituent tree, and (c) an incorrect constituent tree. Note that the terminal nodes (i.e., words in the sentence) in the packed forest are shown only for illustration, and they do not belong to the packed forest.",
                        "page": 2,
                        "bbox": {
                            "x1": 99.36,
                            "x2": 526.56,
                            "y1": 67.2,
                            "y2": 305.28
                        }
                    },
                    {
                        "filename": "../figure/image/961-Figure4-1.png",
                        "caption": "Figure 4: Chinese translation results of an English sentence.",
                        "page": 7,
                        "bbox": {
                            "x1": 73.44,
                            "x2": 524.16,
                            "y1": 64.8,
                            "y2": 128.64
                        }
                    },
                    {
                        "filename": "../figure/image/961-Figure2-1.png",
                        "caption": "Figure 2: Linearization result of the packed forest in Figure 1a",
                        "page": 3,
                        "bbox": {
                            "x1": 312.96,
                            "x2": 537.12,
                            "y1": 63.36,
                            "y2": 121.44
                        }
                    },
                    {
                        "filename": "../figure/image/961-Figure3-1.png",
                        "caption": "Figure 3: The framework of the forest-based NMT system.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 521.76,
                            "y1": 63.839999999999996,
                            "y2": 241.44
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-4"
        },
        {
            "slides": {
                "0": {
                    "title": "Adversarial Attacks Perturbations",
                    "text": [
                        "Apply a small (indistinguishable) perturbation to the input that elicit large changes in the output",
                        "Figure from Goodfellow et al. (2014)"
                    ],
                    "page_nums": [
                        1,
                        2,
                        3,
                        4,
                        5
                    ],
                    "images": []
                },
                "1": {
                    "title": "Indistinguishable Perturbations",
                    "text": [
                        "Small perturbations are well defined in vision",
                        "Small l2 ~= indistinguishable to the human eye"
                    ],
                    "page_nums": [
                        6,
                        7
                    ],
                    "images": []
                },
                "2": {
                    "title": "Not all Text Perturbations are Equal",
                    "text": [
                        "Hes very annoying Hes pretty friendly Hes She friendly Hes very freindly",
                        "[Different meaning] [Similar meaning] [Nonsensical] [Typo]",
                        "Cant expect the model to output the same output!",
                        "Why and How you should evaluate adversarial perturbations"
                    ],
                    "page_nums": [
                        8,
                        9,
                        10,
                        11,
                        12,
                        13,
                        14
                    ],
                    "images": []
                },
                "4": {
                    "title": "Problem Definition",
                    "text": [
                        "Reference They plow it right back into filing",
                        "Original Ils le reinvestissent directement en engageant",
                        "Base output They direct it directly by engaging",
                        "A dv. src Ilss le reinvestissent dierctement en engagaent plus de proces. Adv. output .. de plus."
                    ],
                    "page_nums": [
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22
                    ],
                    "images": []
                },
                "5": {
                    "title": "Source Side Evaluation",
                    "text": [
                        "Evaluate meaning preservation on the source side",
                        "Where is a similarity metric such that",
                        "Hes very friendly H es pretty friendly Hes very friendly H es very annoying",
                        "Hes very friendly H es pretty friendly Hes very friendly Hes She friendly"
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": []
                },
                "6": {
                    "title": "Target Side Evaluation",
                    "text": [
                        "Evaluate relative meaning destruction on the target side"
                    ],
                    "page_nums": [
                        24,
                        25,
                        26,
                        27,
                        28,
                        29,
                        30
                    ],
                    "images": []
                },
                "7": {
                    "title": "Successful Adversarial Attacks",
                    "text": [
                        "Source meaning destruction Target meaning destruction",
                        "Destroy the meaning on the target side more than on the source side"
                    ],
                    "page_nums": [
                        31,
                        32,
                        33,
                        34
                    ],
                    "images": []
                },
                "8": {
                    "title": "Which similarity metric to use",
                    "text": [
                        "How would you rate the similarity between the meaning of these two sentences?",
                        "6 point scale, details in paper",
                        "The meaning is completely different or one of the sentence s is meaningless",
                        "The topic is the same but the meaning is different",
                        "Some key information is different",
                        "The key information is the same but the details differ",
                        "Meaning is essentially the same but some expressions are unnatural Meaning is essentially equal and the two sentences are well-formed [Language]",
                        "Geometric mean of n-gram precision + length penalty",
                        "METEOR [Banerjee and Lavie, 2005]",
                        "Word matching taking into account stemming, synonyms, paraphrases...",
                        "chrF [Popovic, 2015] Character n-gram F-score"
                    ],
                    "page_nums": [
                        35,
                        36,
                        37,
                        38
                    ],
                    "images": []
                },
                "10": {
                    "title": "Data and Models",
                    "text": [
                        "{Czech, German, French} English",
                        "Both word and sub-word based models"
                    ],
                    "page_nums": [
                        40
                    ],
                    "images": []
                },
                "11": {
                    "title": "Gradient Based Adversarial Attacks on Text",
                    "text": [
                        "Idea: Back propagate through the model to score possible substitutions",
                        "Le g ros c hien The big dog .",
                        "The big dog . <eos>",
                        "Idea: Word substitution Adding word vector difference",
                        "Use the 1st order approximation to maximize the loss"
                    ],
                    "page_nums": [
                        41,
                        42,
                        43,
                        44,
                        45,
                        46,
                        71
                    ],
                    "images": []
                },
                "13": {
                    "title": "Constrained Adversarial Attacks kNN",
                    "text": [
                        "Only replace words with 10 nearest neighbors in embedding space",
                        "Example from our fren Transformer source embeddings",
                        "grand (tall SING+MASC) grands (tall PL+MASC) grande (tall SING+FEM) grandes (tall PL+FEM) gros (fat SING+MASC) grosse (fat SING+FEM) math (math) maths (maths) mathematique (mathematic) mathematiques (mathematics) objective (objective [ADJ] SING+FEM)"
                    ],
                    "page_nums": [
                        48
                    ],
                    "images": []
                },
                "14": {
                    "title": "Constrained Adversarial Attacks CharSwap",
                    "text": [
                        "Only swap word internal characters to get OOVs",
                        "adversarial ad vresa rial",
                        "If thats impossible, repeat the last character"
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": []
                },
                "15": {
                    "title": "Choosing an Similarity Metric",
                    "text": [
                        "Human vs automatic (pearson r):",
                        "Humans score original/adversarial outpu t",
                        "Compare scores to automatic metric with",
                        "(Relative Decrease in chrF)"
                    ],
                    "page_nums": [
                        51,
                        52,
                        53
                    ],
                    "images": []
                },
                "16": {
                    "title": "Effect of Constraints on Evaluation",
                    "text": [
                        "a feet eae Unconstrained"
                    ],
                    "page_nums": [
                        54
                    ],
                    "images": [
                        "figure/image/967-Figure1-1.png"
                    ]
                },
                "18": {
                    "title": "Takeway",
                    "text": [
                        "How would you rate the similarity between the meaning of these two sentences?",
                        "The meaning is complete ly different or one of the sentence s is meaningless",
                        "The topic is the same but the meaning is different Some key information is different",
                        "When doing adversarial attacks",
                        "The key information is th e same but the details differ Meaning is essentially the same but some expressions are unnatural Meaning is essentially eq ual and the two sentences are we ll-formed [Language]",
                        "Evaluate meaning preservation on the source side",
                        "When doing adversarial training",
                        "Consider adding constraints to your attacks",
                        "Not only true for seq2seq!",
                        "Easily transposed to classification, etc..",
                        "Just adapt and accordingly"
                    ],
                    "page_nums": [
                        66,
                        67,
                        68
                    ],
                    "images": []
                },
                "19": {
                    "title": "Human Evaluation the Gold Standard",
                    "text": [
                        "Check for semantic similarity and fluency",
                        "How would you rate the similarity between the meaning of these two sentences?",
                        "The meaning is completely different o r one of the sentences is meaningless",
                        "The topic is the same but the meaning is different",
                        "Some key information is different",
                        "The key information is the same but the details differ",
                        "Meaning is essentially the same but some expressions are unnatural",
                        "Meaning is essentially equal and the two sentences are well-formed [Language]"
                    ],
                    "page_nums": [
                        72
                    ],
                    "images": []
                },
                "20": {
                    "title": "Example of a Successful Attack",
                    "text": [
                        "Original Ils le reinvestissent directement en engageant plus de proces.",
                        "Adv. src. Ilss le reinvestissent dierctement en engagaent plus de proces.",
                        "Ref. They plow it right back into filing more troll lawsuits.",
                        "Base output They direct it directly by engaging more cases.",
                        "Adv. output .. de plus."
                    ],
                    "page_nums": [
                        73
                    ],
                    "images": []
                },
                "21": {
                    "title": "Example of an Unsuccessful Attack",
                    "text": [
                        "Original Cetait en Juillet 1969.",
                        "Adv. src. Cetiat en Jiullet",
                        "Base output This was in July 1969."
                    ],
                    "page_nums": [
                        74
                    ],
                    "images": []
                }
            },
            "paper_title": "On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models",
            "paper_id": "967",
            "paper": {
                "title": "On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models",
                "abstract": "Adversarial examples -perturbations to the input of a model that elicit large changes in the output -have been shown to be an effective way of assessing the robustness of sequenceto-sequence (seq2seq) models. However, these perturbations only indicate weaknesses in the model if they do not change the input so significantly that it legitimately results in changes in the expected output. This fact has largely been ignored in the evaluations of the growing body of related literature. Using the example of untargeted attacks on machine translation (MT), we propose a new evaluation framework for adversarial attacks on seq2seq models that takes the semantic equivalence of the pre-and post-perturbation input into account. Using this framework, we demonstrate that existing methods may not preserve meaning in general, breaking the aforementioned assumption that source side perturbations should not result in changes in the expected output. We further use this framework to demonstrate that adding additional constraints on attacks allows for adversarial perturbations that are more meaningpreserving, but nonetheless largely change the output sequence. Finally, we show that performing untargeted adversarial training with meaning-preserving attacks is beneficial to the model in terms of adversarial robustness, without hurting test performance. 1",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Attacking a machine learning model with adversarial perturbations is the process of making changes to its input to maximize an adversarial goal, such as mis-classification (Szegedy et al., 2013) or mis-translation (Zhao et al., 2018) ."
                    },
                    {
                        "id": 1,
                        "string": "These attacks provide insight into the vulnerabilities of machine learning models and their brittleness to samples outside the training distribution."
                    },
                    {
                        "id": 2,
                        "string": "Lack of robustness to these attacks poses security concerns to safety-critical applications, e.g."
                    },
                    {
                        "id": 3,
                        "string": "self-driving cars (Bojarski et al., 2016) ."
                    },
                    {
                        "id": 4,
                        "string": "Adversarial attacks were first defined and investigated for computer vision systems (Szegedy et al."
                    },
                    {
                        "id": 5,
                        "string": "(2013) ; Goodfellow et al."
                    },
                    {
                        "id": 6,
                        "string": "(2014) ; Moosavi-Dezfooli et al."
                    },
                    {
                        "id": 7,
                        "string": "(2016) inter alia), where the input space is continuous, making minuscule perturbations largely imperceptible to the human eye."
                    },
                    {
                        "id": 8,
                        "string": "In discrete spaces such as natural language sentences, the situation is more problematic; even a flip of a single word or character is generally perceptible by a human reader."
                    },
                    {
                        "id": 9,
                        "string": "Thus, most of the mathematical framework in previous work is not directly applicable to discrete text data."
                    },
                    {
                        "id": 10,
                        "string": "Moreover, there is no canonical distance metric for textual data like the p norm in real-valued vector spaces such as images, and evaluating the level of semantic similarity between two sentences is a field of research of its own (Cer et al., 2017) ."
                    },
                    {
                        "id": 11,
                        "string": "This elicits a natural question: what does the term \"adversarial perturbation\" mean in the context of natural language processing (NLP)?"
                    },
                    {
                        "id": 12,
                        "string": "We propose a simple but natural criterion for adversarial examples in NLP, particularly untargeted 2 attacks on seq2seq models: adversarial examples should be meaning-preserving on the source side, but meaning-destroying on the target side."
                    },
                    {
                        "id": 13,
                        "string": "The focus on explicitly evaluating meaning preservation is in contrast to previous work on adversarial examples for seq2seq models (Belinkov and Bisk, 2018; Zhao et al., 2018; Cheng et al., 2018; Ebrahimi et al., 2018a) ."
                    },
                    {
                        "id": 14,
                        "string": "Nonetheless, this feature is extremely important; given two sentences with equivalent meaning, we would expect a good model to produce two outputs with equivalent meaning."
                    },
                    {
                        "id": 15,
                        "string": "In other words, any meaningpreserving perturbation that results in the model output changing drastically highlights a fault of the model."
                    },
                    {
                        "id": 16,
                        "string": "A first technical contribution of this paper is to lay out a method for formalizing this concept of meaning-preserving perturbations ( §2)."
                    },
                    {
                        "id": 17,
                        "string": "This makes it possible to evaluate the effectiveness of adversarial attacks or defenses either using goldstandard human evaluation, or approximations that can be calculated without human intervention."
                    },
                    {
                        "id": 18,
                        "string": "We further propose a simple method of imbuing gradient-based word substitution attacks ( §3.1) with simple constraints aimed at increasing the chance that the meaning is preserved ( §3.2)."
                    },
                    {
                        "id": 19,
                        "string": "Our experiments are designed to answer several questions about meaning preservation in seq2seq models."
                    },
                    {
                        "id": 20,
                        "string": "First, we evaluate our proposed \"sourcemeaning-preserving, target-meaning-destroying\" criterion for adversarial examples using both manual and automatic evaluation ( §4.2) and find that a less widely used evaluation metric (chrF) provides significantly better correlation with human judgments than the more widely used BLEU and ME-TEOR metrics."
                    },
                    {
                        "id": 21,
                        "string": "We proceed to perform an evaluation of adversarial example generation techniques, finding that chrF does help to distinguish between perturbations that are more meaning-preserving across a variety of languages and models ( §4.3)."
                    },
                    {
                        "id": 22,
                        "string": "Finally, we apply existing methods for adversarial training to the adversarial examples with these constraints and show that making adversarial inputs more semantically similar to the source is beneficial for robustness to adversarial attacks and does not decrease test performance on the original data distribution ( §5)."
                    },
                    {
                        "id": 23,
                        "string": "A Framework for Evaluating Adversarial Attacks In this section, we present a simple procedure for evaluating adversarial attacks on seq2seq models."
                    },
                    {
                        "id": 24,
                        "string": "We will use the following notation: x and y refer to the source and target sentence respectively."
                    },
                    {
                        "id": 25,
                        "string": "We denote x's translation by model M as y M ."
                    },
                    {
                        "id": 26,
                        "string": "Finally, x andŷ M represent an adversarially perturbed version of x and its translation by M , respectively."
                    },
                    {
                        "id": 27,
                        "string": "The nature of M and the procedure for obtaininĝ x from x are irrelevant to the discussion below."
                    },
                    {
                        "id": 28,
                        "string": "The Adversarial Trade-off The goal of adversarial perturbations is to produce failure cases for the model M ."
                    },
                    {
                        "id": 29,
                        "string": "Hence, the evaluation must include some measure of the target similarity between y and y M , which we will denote s tgt (y,ŷ M )."
                    },
                    {
                        "id": 30,
                        "string": "However, if no distinction is being made between perturbations that preserve the meaning and those that don't, a sentence like \"he's very friendly\" is considered a valid adversarial perturbation of \"he's very adversarial\", even though its meaning is the opposite."
                    },
                    {
                        "id": 31,
                        "string": "Hence, it is crucial, when evaluating adversarial attacks on MT models, that the discrepancy between the original and adversarial input sentence be quantified in a way that is sensitive to meaning."
                    },
                    {
                        "id": 32,
                        "string": "Let us denote such a source similarity score s src (x,x)."
                    },
                    {
                        "id": 33,
                        "string": "Based on these functions, we define the target relative score decrease as: d tgt (y, y M ,ŷ M ) = 0 if s tgt (y,ŷ M ) ≥ s tgt (y, y M ) stgt(y,y M )−stgt(y,ŷ M ) stgt(y,y M ) otherwise (1) The choice to report the relative decrease in s tgt makes scores comparable across different models or languages 3 ."
                    },
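                    {
                        "id": "33a",
                        "string": "Equation 1 as a small function (added sketch, assuming the similarity scores are precomputed floats).\ndef d_tgt(s_orig, s_adv):\n    # Relative decrease in target similarity (Equation 1):\n    # s_orig = s_tgt(y, y_M), s_adv = s_tgt(y, y-hat_M).\n    if s_adv >= s_orig:\n        return 0.0\n    return (s_orig - s_adv) / s_orig"
                    },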
                    {
                        "id": 34,
                        "string": "For instance, for languages that are comparatively easy to translate (e.g."
                    },
                    {
                        "id": 35,
                        "string": "French-English), s tgt will be higher in general, and so will the gap between s tgt (y, y M ) and s tgt (y,ŷ M )."
                    },
                    {
                        "id": 36,
                        "string": "However this does not necessarily mean that attacks on this language pair are more effective than attacks on a \"difficult\" language pair (e.g."
                    },
                    {
                        "id": 37,
                        "string": "Czech-English) where s tgt is usually smaller."
                    },
                    {
                        "id": 38,
                        "string": "We recommend that both s src and d tgt be reported when presenting adversarial attack results."
                    },
                    {
                        "id": 39,
                        "string": "However, in some cases where a single number is needed, we suggest reporting the attack's success S := s src + d tgt ."
                    },
                    {
                        "id": 40,
                        "string": "The interpretation is simple: S > 1 ⇔ d tgt > 1 − s src , which means that the attack has destroyed the target meaning (d tgt ) more than it has destroyed the source meaning (1 − s src )."
                    },
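                    {
                        "id": "40a",
                        "string": "The single-number success criterion in code form (added sketch).\ndef attack_success(s_src, d_tgt_value):\n    # S = s_src + d_tgt; the attack succeeds when S > 1, i.e. it has\n    # destroyed more meaning on the target side (d_tgt) than on the\n    # source side (1 - s_src).\n    return s_src + d_tgt_value > 1.0"
                    },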
                    {
                        "id": 41,
                        "string": "Importantly, this framework can be extended beyond strictly meaning-preserving attacks."
                    },
                    {
                        "id": 42,
                        "string": "For example, for targeted keyword introduction attacks (Cheng et al., 2018; Ebrahimi et al., 2018a) , the same evaluation framework can be used if s tgt (resp."
                    },
                    {
                        "id": 43,
                        "string": "s src ) is modified to account for the presence (resp."
                    },
                    {
                        "id": 44,
                        "string": "absence) of the keyword (or its translation in the source)."
                    },
                    {
                        "id": 45,
                        "string": "Similarly this can be extended to other tasks by adapting s tgt (e.g."
                    },
                    {
                        "id": 46,
                        "string": "for classification one would use the zero-one loss, and adapt the success threshold)."
                    },
                    {
                        "id": 47,
                        "string": "Similarity Metrics Throughout §2.1, we have not given an exact description of the semantic similarity scores s src and s tgt ."
                    },
                    {
                        "id": 48,
                        "string": "Indeed, automatically evaluating the semantic similarity between two sentences is an open area of research and it makes sense to decouple the definition of adversarial examples from the specific method used to measure this similarity."
                    },
                    {
                        "id": 49,
                        "string": "In this section, we will discuss manual and automatic metrics that may be used to calculate it."
                    },
                    {
                        "id": 50,
                        "string": "Human Judgment Judgment by speakers of the language of interest is the de facto gold standard metric for semantic similarity."
                    },
                    {
                        "id": 51,
                        "string": "Specific criteria such as adequacy/fluency (Ma and Cieri, 2006) , acceptability (Goto et al., 2013) , and 6-level semantic similarity (Cer et al., 2017) have been used in evaluations of MT and sentence embedding methods."
                    },
                    {
                        "id": 52,
                        "string": "In the context of adversarial attacks, we propose the following 6-level evaluation scheme, which is motivated by previous measures, but designed to be (1) symmetric, like Cer et al."
                    },
                    {
                        "id": 53,
                        "string": "(2017) , (2) and largely considers meaning preservation but at the very low and high levels considers fluency of the output 4 , like Goto et al."
                    },
                    {
                        "id": 54,
                        "string": "(2013) : How would you rate the similarity between the meaning of these two sentences?"
                    },
                    {
                        "id": 55,
                        "string": "0."
                    },
                    {
                        "id": 56,
                        "string": "The meaning is completely different or one of the sentences is meaningless 1."
                    },
                    {
                        "id": 57,
                        "string": "The topic is the same but the meaning is different 2."
                    },
                    {
                        "id": 58,
                        "string": "Some key information is different 3."
                    },
                    {
                        "id": 59,
                        "string": "The key information is the same but the details differ 4."
                    },
                    {
                        "id": 60,
                        "string": "Meaning is essentially equal but some expressions are unnatural 5."
                    },
                    {
                        "id": 61,
                        "string": "Meaning is essentially equal and the two sentences are well-formed English a a Or the language of interest."
                    },
                    {
                        "id": 62,
                        "string": "4 This is important to rule out nonsensical sentences and distinguish between clean and \"noisy\" paraphrases (e.g."
                    },
                    {
                        "id": 63,
                        "string": "typos, non-native speech."
                    },
                    {
                        "id": 64,
                        "string": "."
                    },
                    {
                        "id": 65,
                        "string": "."
                    },
                    {
                        "id": 66,
                        "string": ")."
                    },
                    {
                        "id": 67,
                        "string": "We did not give annotators additional instruction specific to typos."
                    },
                    {
                        "id": 68,
                        "string": "Automatic Metrics Unfortunately, human evaluation is expensive, slow and sometimes difficult to obtain, for example in the case of low-resource languages."
                    },
                    {
                        "id": 69,
                        "string": "This makes automatic metrics that do not require human intervention appealing for experimental research."
                    },
                    {
                        "id": 70,
                        "string": "This section describes 3 evaluation metrics commonly used as alternatives to human evaluation, in particular to evaluate translation models."
                    },
                    {
                        "id": 71,
                        "string": "5 BLEU: (Papineni et al., 2002) is an automatic metric based on n-gram precision coupled with a penalty for shorter sentences."
                    },
                    {
                        "id": 72,
                        "string": "It relies on exact word-level matches and therefore cannot detect synonyms or morphological variations."
                    },
                    {
                        "id": 73,
                        "string": "METEOR: (Denkowski and Lavie, 2014) first estimates alignment between the two sentences and then computes unigram F-score (biased towards recall) weighted by a penalty for longer sentences."
                    },
                    {
                        "id": 74,
                        "string": "Importantly, METEOR uses stemming, synonymy and paraphrasing information to perform alignments."
                    },
                    {
                        "id": 75,
                        "string": "On the downside, it requires language specific resources."
                    },
                    {
                        "id": 76,
                        "string": "chrF: (Popović, 2015) is based on the character n-gram F-score."
                    },
                    {
                        "id": 77,
                        "string": "In particular we will use the chrF2 score (based on the F2-score -recall is given more importance), following the recommendations from Popović (2016) ."
                    },
                    {
                        "id": 78,
                        "string": "By operating on a sub-word level, it can reflect the semantic similarity between different morphological inflections of one word (for instance), without requiring language-specific knowledge which makes it a good one-size-fits-all alternative."
                    },
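                    {
                        "id": "78a",
                        "string": "A minimal sketch of computing sentence-level BLEU and chrF with the sacrebleu Python package (the toolkit also used in §4.1); the sentences here are illustrative placeholders, and METEOR is omitted since it requires its own implementation and language-specific resources:\n\nimport sacrebleu\n\nhyp = 'the cat sat quietly on the mat'\nref = 'the cat was sitting on the mat'\n\n# BLEU: n-gram precision with a penalty for shorter sentences\nbleu = sacrebleu.sentence_bleu(hyp, [ref])\n\n# chrF2: character n-gram F-score, with recall weighted more\n# (beta=2 is sacrebleu's default, matching Popović (2016))\nchrf = sacrebleu.sentence_chrf(hyp, [ref])\n\nprint(bleu.score, chrf.score)"
                    },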
                    {
                        "id": 79,
                        "string": "Because multiple possible alternatives exist, it is important to know which is the best stand-in for human evaluation."
                    },
                    {
                        "id": 80,
                        "string": "To elucidate this, we will compare these metrics to human judgment in terms of Pearson correlation coefficient on outputs resulting from a variety of attacks in §4.2."
                    },
                    {
                        "id": 81,
                        "string": "Gradient-Based Adversarial Attacks In this section, we overview the adversarial attacks we will be considering in the rest of this paper."
                    },
                    {
                        "id": 82,
                        "string": "Attack Paradigm We perform gradient-based attacks that replace one word in the sentence so as to maximize an adversarial loss function L adv , similar to the substitution attacks proposed in (Ebrahimi et al., 2018b) ."
                    },
                    {
                        "id": 83,
                        "string": "Original: Pourquoi faire cela ?"
                    },
                    {
                        "id": 84,
                        "string": "English gloss: Why do this?"
                    },
                    {
                        "id": 85,
                        "string": "Unconstrained: construisant (English: building) faire cela ?"
                    },
                    {
                        "id": 86,
                        "string": "kNN: interrogez (English: interrogate) faire cela ?"
                    },
                    {
                        "id": 87,
                        "string": "CharSwap: Puorquoi (typo) faire cela ?"
                    },
                    {
                        "id": 88,
                        "string": "Original: Si seulement je pouvais me muscler aussi rapidement."
                    },
                    {
                        "id": 89,
                        "string": "English gloss: If only I could build my muscle this fast."
                    },
                    {
                        "id": 90,
                        "string": "Unconstrained: Si seulement je pouvais me muscler etc rapidement."
                    },
                    {
                        "id": 91,
                        "string": "kNN: Si seulement je pouvais me muscler plsu (typo for \"more\") rapidement."
                    },
                    {
                        "id": 92,
                        "string": "CharSwap: Si seulement je pouvais me muscler asusi (typo) rapidement."
                    },
                    {
                        "id": 93,
                        "string": "General Approach Precisely, for a word-based translation model $M$ 6 , and given an input sentence $w_1, \\ldots, w_n$, we find the position $i^*$ and word $\\hat{w}^*$ satisfying the following optimization problem: $\\arg\\max_{1 \\leq i \\leq n, \\hat{w} \\in \\mathcal{V}} \\mathcal{L}_{adv}(w_1, \\ldots, w_{i-1}, \\hat{w}, w_{i+1}, \\ldots, w_n)$ (2) where $\\mathcal{L}_{adv}$ is a differentiable function which represents our adversarial objective."
                    },
                    {
                        "id": 103,
                        "string": "Using the first-order approximation of $\\mathcal{L}_{adv}$ around the original word vectors $\\mathbf{w}_1, \\ldots, \\mathbf{w}_n$ 7 , this can be derived to be equivalent to optimizing $\\arg\\max_{1 \\leq i \\leq n, \\hat{w} \\in \\mathcal{V}} [\\hat{\\mathbf{w}} - \\mathbf{w}_i]^\\top \\nabla_{\\mathbf{w}_i} \\mathcal{L}_{adv}$ (3) The above optimization problem can be solved by brute-force in $O(n|\\mathcal{V}|)$ space complexity, whereas the time complexity is bottlenecked by a $|\\mathcal{V}| \\times d$ times $n \\times d$ matrix multiplication, which is not more computationally expensive than computing logits during the forward pass of the model."
                    },
                    {
                        "id": 107,
                        "string": "Overall, this naive approach is sufficiently fast to be conducive to adversarial training."
                    },
                    {
                        "id": 108,
                        "string": "We also found that the attacks benefited from normalizing the gradient by taking its sign."
                    },
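                    {
                        "id": "108a",
                        "string": "A minimal PyTorch sketch of this brute-force search (an illustrative implementation of Equation (3), not the authors' released code; E is the |V| x d embedding matrix, W the n x d input word vectors, and g the n x d gradient of the adversarial loss with respect to W):\n\nimport torch\n\ndef best_substitution(E, W, g):\n    # Normalize the gradient by taking its sign, which was found to help\n    g = g.sign()\n    # scores[i, w] = (E[w] - W[i]) . g[i], computed for all (i, w) at once;\n    # g @ E.T is the |V| x d by n x d matrix multiplication noted above\n    scores = g @ E.t() - (W * g).sum(dim=-1, keepdim=True)  # n x |V|\n    i, w_hat = divmod(scores.argmax().item(), E.size(0))\n    return i, w_hat  # position to change and index of the replacement word"
                    },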
                    {
                        "id": 109,
                        "string": "Extending this approach to finding the optimal perturbations for more than 1 substitution would require exhaustively searching over all possible combinations."
                    },
                    {
                        "id": 110,
                        "string": "However, previous work (Ebrahimi et al., 2018a) suggests that greedy search is a good enough approximation."
                    },
                    {
                        "id": 111,
                        "string": "6 Note that this formulation is also valid for character-based models (see Ebrahimi et al. (2018a)) and subword-based models."
                    },
                    {
                        "id": 112,
                        "string": "For subword-based models, additional difficulty would be introduced due to changes to the input resulting in different subword segmentations."
                    },
                    {
                        "id": 113,
                        "string": "This poses an interesting challenge that is beyond the scope of the current work."
                    },
                    {
                        "id": 114,
                        "string": "7 More generally we will use the bold w when talking about the embedding vector of word w."
                    },
                    {
                        "id": 115,
                        "string": "The Adversarial Loss $\\mathcal{L}_{adv}$ We want to find an adversarial input $\\tilde{x}$ such that, assuming that the model has produced the correct output $y_1, \\ldots, y_{t-1}$ up to step $t-1$ during decoding, the probability that the model makes an error at the next step $t$ is maximized."
                    },
                    {
                        "id": 119,
                        "string": "In the log-semiring, this translates into the following loss function: $\\mathcal{L}_{adv}(\\tilde{x}, y) = \\sum_{t=1}^{|y|} \\log(1 - p(y_t \\mid \\tilde{x}, y_1, \\ldots, y_{t-1}))$ (4)"
                    },
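                    {
                        "id": "121a",
                        "string": "A short PyTorch sketch of Equation (4) under teacher forcing (an assumed implementation; logits are the decoder outputs given the adversarial input and the gold prefix):\n\nimport torch\n\ndef adv_loss(logits, y):\n    # logits: |y| x |V| decoder outputs, y: |y| gold target indices\n    log_p = logits.log_softmax(dim=-1)\n    p_correct = log_p.gather(1, y.unsqueeze(1)).squeeze(1).exp()\n    # sum_t log(1 - p(y_t | x~, y_<t))\n    return torch.log1p(-p_correct).sum()"
                    },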
                    {
                        "id": 122,
                        "string": ", y t−1 )) (4) Enforcing Semantically Similar Adversarial Inputs In contrast to previous methods, which don't consider meaning preservation, we propose simple modifications of the approach presented in §3.1 to create adversarial perturbations at the word level that are more likely to preserve meaning."
                    },
                    {
                        "id": 123,
                        "string": "The basic idea is to restrict the possible word substitutions to similar words."
                    },
                    {
                        "id": 124,
                        "string": "We compare two sets of constraints: kNN: This constraint enforces that the word be replaced only with one of its 10 nearest neighbors in the source embedding space."
                    },
                    {
                        "id": 125,
                        "string": "This has two effects: first, the replacement will be likely semantically related to the original word (if words close in the embedding space are indeed semantically related, as hinted by Table 1 )."
                    },
                    {
                        "id": 126,
                        "string": "Second, it ensures that the replacement's word vector is close enough to the original word vector that the first order assumption is more likely to be satisfied."
                    },
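                    {
                        "id": "126a",
                        "string": "A minimal sketch of the kNN candidate restriction (assumed implementation; the paper does not specify the distance metric, so Euclidean distance is used here for illustration). The argmax over replacement words in Equation (2) is then taken over these indices only:\n\nimport torch\n\ndef knn_candidates(E, w, k=10):\n    # E: |V| x d source embedding matrix, w: index of the word to replace\n    d = ((E - E[w]) ** 2).sum(dim=-1)\n    # k+1 smallest distances, dropping the word itself\n    return d.topk(k + 1, largest=False).indices[1:]"
                    },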
                    {
                        "id": 127,
                        "string": "CharSwap: This constraint requires that the substituted words must be obtained by swapping characters."
                    },
                    {
                        "id": 128,
                        "string": "Word internal character swaps have been shown to not affect human readers greatly (McCusker et al., 1981) , hence making them likely to be meaning-preserving."
                    },
                    {
                        "id": 129,
                        "string": "Moreover we add the additional constraint that the substitution must not be in the vocabulary, which will likely be particularly meaning-destroying on the target side for the word-based models we test here."
                    },
                    {
                        "id": 130,
                        "string": "In such cases where word-internal character swaps are not possible or can't produce out-of-vocabulary (OOV) words, we resort to the naive strategy of repeating the last character of the word."
                    },
                    {
                        "id": 131,
                        "string": "The exact procedure used to produce this kind of perturbations is described in Appendix A.1."
                    },
                    {
                        "id": 132,
                        "string": "Note that for a word-based model, every OOV will look the same (a special <unk> token), however the choice of OOV will still have an influence on the output of the model because we use unk-replacement."
                    },
                    {
                        "id": 133,
                        "string": "In contrast, we refer the base attack without constraints as Unconstrained hereforth."
                    },
                    {
                        "id": 134,
                        "string": "Table 1 gives qualitative examples of the kind of perturbations generated under the different constraints."
                    },
                    {
                        "id": 135,
                        "string": "For subword-based models, we apply the same procedures at the subword-level on the original segmentation."
                    },
                    {
                        "id": 136,
                        "string": "We then de-segment and resegment the resulting sentence (because changes at the subword or character levels are likely to change the segmentation of the resulting sentence)."
                    },
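                    {
                        "id": "136a",
                        "string": "A sketch of this de-segment/re-segment step, assuming the '@@ ' continuation-marker convention of the subword-nmt toolkit (the paper does not name the BPE implementation used here):\n\nfrom subword_nmt.apply_bpe import BPE\n\ndef resegment(tokens, bpe):\n    # Undo BPE ('@@ ' marks a split inside a word), then re-apply it,\n    # since character-level edits can change the segmentation\n    detok = ' '.join(tokens).replace('@@ ', '')\n    return bpe.process_line(detok).split()\n\nbpe = BPE(open('codes.bpe'))  # hypothetical path to the learned BPE codes"
                    },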
                    {
                        "id": 137,
                        "string": "Experiments Our experiments serve two purposes."
                    },
                    {
                        "id": 138,
                        "string": "First, we examine our proposed framework of evaluating adversarial attacks ( §2), and also elucidate which automatic metrics correlate better with human judgment for the purpose of evaluating adversarial attacks ( §4.2)."
                    },
                    {
                        "id": 139,
                        "string": "Second, we use this evaluation framework to compare various adversarial attacks and demonstrate that adversarial attacks that are explicitly constrained to preserve meaning receive better assessment scores ( §4.3)."
                    },
                    {
                        "id": 140,
                        "string": "Experimental setting Data: Following previous work on adversarial examples for seq2seq models (Belinkov and Bisk, 2018; Ebrahimi et al., 2018a) , we perform all experiments on the IWSLT2016 dataset (Cettolo et al., 2016) in the {French,German,Czech}→English directions (fr-en, de-en and cs-en)."
                    },
                    {
                        "id": 141,
                        "string": "We compile all previous IWSLT test sets before 2015 as validation data, and keep the 2015 and 2016 test sets as test data."
                    },
                    {
                        "id": 142,
                        "string": "The data is tokenized with the Moses tokenizer (Koehn et al., 2007) ."
                    },
                    {
                        "id": 143,
                        "string": "The exact data statistics can be found in Appendix A.2."
                    },
                    {
                        "id": 144,
                        "string": "MT Models: We perform experiments with two common neural machine translation (NMT) models."
                    },
                    {
                        "id": 145,
                        "string": "The first is an LSTM based encoderdecoder architecture with attention (Luong et al., 2015) ."
                    },
                    {
                        "id": 146,
                        "string": "It uses 2-layer encoders and decoders, and dot-product attention."
                    },
                    {
                        "id": 147,
                        "string": "We set the word embedding dimension to 300 and all others to 500."
                    },
                    {
                        "id": 148,
                        "string": "The second model is a self-attentional Transformer (Vaswani et al., 2017) , with 6 1024-dimensional encoder and decoder layers and 512 dimensional word embeddings."
                    },
                    {
                        "id": 149,
                        "string": "Both the models are trained with Adam (Kingma and Ba, 2014), dropout (Srivastava et al., 2014) of probability 0.3 and label smoothing (Szegedy et al., 2016) with value 0.1."
                    },
                    {
                        "id": 150,
                        "string": "We experiment with both word based models (vocabulary size fixed at 40k) and subword based models (BPE (Sennrich et al., 2016) with 30k operations)."
                    },
                    {
                        "id": 151,
                        "string": "For word-based models, we perform <unk> replacement, replacing <unk> tokens in the translated sentences with the source words with the highest attention value during inference."
                    },
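                    {
                        "id": "151a",
                        "string": "A minimal sketch of this <unk> replacement heuristic (assumed implementation; attn is the |out| x |src| attention matrix produced during inference):\n\ndef unk_replace(src_tokens, out_tokens, attn):\n    # Replace each <unk> with the source word receiving the\n    # highest attention value at that decoding step\n    return [src_tokens[attn[t].argmax()] if w == '<unk>' else w\n            for t, w in enumerate(out_tokens)]"
                    },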
                    {
                        "id": 152,
                        "string": "The full experimental setup and source code are available at https://github.com/pmichel31415/translate/tree/paul/pytorch_translate/research/adversarial/experiments."
                    },
                    {
                        "id": 154,
                        "string": "Automatic Metric Implementations: To evaluate both sentence and corpus level BLEU score, we first de-tokenize the output and use sacreBLEU 8 (Post, 2018) with its internal intl tokenization, to keep BLEU scores agnostic to tokenization."
                    },
                    {
                        "id": 155,
                        "string": "We compute METEOR using the official implementation 9 ."
                    },
                    {
                        "id": 156,
                        "string": "ChrF is reported with the sacreBLEU implementation on detokenized text with default parameters."
                    },
                    {
                        "id": 157,
                        "string": "A toolkit implementing the evaluation framework described in §2.1 for these metrics is released at https://github.com/pmichel31415/teapot-nlp."
                    },
                    {
                        "id": 159,
                        "string": "Correlation of Automatic Metrics with Human Judgment We first examine which of the automatic metrics listed in §2.2 correlates most with human judgment for our adversarial attacks."
                    },
                    {
                        "id": 160,
                        "string": "For this experiment, we restrict the scope to the case of the  These sentences are sent to English and French speaking annotators to be rated according to the guidelines described in §2.2.1."
                    },
                    {
                        "id": 161,
                        "string": "Each sample (a pair of sentences) is rated by two independent evaluators."
                    },
                    {
                        "id": 162,
                        "string": "If the two ratings differ, the sample is sent to a third rater (an auditor and subject matter expert) who makes the final decision."
                    },
                    {
                        "id": 163,
                        "string": "Finally, we compare the human results to each automatic metric with Pearson's correlation coefficient."
                    },
                    {
                        "id": 164,
                        "string": "The correlations are reported in Table 3 ."
                    },
                    {
                        "id": 165,
                        "string": "As evidenced by the results, chrF exhibits higher correlation with human judgment, followed by ME-TEOR and BLEU."
                    },
                    {
                        "id": 166,
                        "string": "This is true both on the source side (x vsx) and in the target side (y vsŷ M )."
                    },
                    {
                        "id": 167,
                        "string": "[Table 3: Correlation of automatic metrics to human judgment of adversarial source and target sentences. Language: BLEU / METEOR / chrF; French: 0.415 / 0.440 / 0.586*; English: 0.357 / 0.478* / 0.497. \"*\" indicates that the correlation is significantly better than the next-best one.]"
                    },
                    {
                        "id": 169,
                        "string": "We evaluate the statistical significance of this result using a paired bootstrap test for p < 0.01."
                    },
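                    {
                        "id": "169a",
                        "string": "One plausible instantiation of this paired bootstrap test (the paper does not spell out the resampling details): resample sentence pairs with replacement, recompute both correlations on each resample, and count how often the stronger metric stays ahead:\n\nimport numpy as np\n\ndef paired_bootstrap(human, metric_a, metric_b, n_boot=10000, seed=0):\n    # human, metric_a, metric_b: aligned score arrays over the same samples\n    rng = np.random.default_rng(seed)\n    idx = np.arange(len(human))\n    wins = 0\n    for _ in range(n_boot):\n        s = rng.choice(idx, size=len(idx), replace=True)\n        r_a = np.corrcoef(human[s], metric_a[s])[0, 1]\n        r_b = np.corrcoef(human[s], metric_b[s])[0, 1]\n        wins += r_a > r_b\n    return 1.0 - wins / n_boot  # p-value for 'a correlates better than b'"
                    },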
                    {
                        "id": 170,
                        "string": "Notably we find that chrF is significantly better than METEOR in French but not in English."
                    },
                    {
                        "id": 171,
                        "string": "This is not too unexpected because METEOR has access to more language-dependent resources in English (specifically synonym information) and thereby can make more informed matches of these synonymous words and phrases."
                    },
                    {
                        "id": 172,
                        "string": "Moreover the French source side contains more \"character-level\" errors (from CharSwap attacks) which are not picked-up well by word-based metrics like BLEU and ME-TEOR."
                    },
                    {
                        "id": 173,
                        "string": "For a breakdown of the correlation coefficients according to number of perturbation and type of constraints, we refer to Appendix A.3."
                    },
                    {
                        "id": 174,
                        "string": "Thus, in the following, we report attack results both in terms of chrF in the source (s src ) and relative decrease in chrF (RDchrF) in the target (d tgt )."
                    },
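                    {
                        "id": "174a",
                        "string": "Assuming the definitions of source-side preservation and target-side degradation from §2.1, these two quantities reduce to the following (hypothetical helper; chrf(hyp, ref) stands for any chrF implementation):\n\ndef attack_scores(src, adv_src, ref, out_orig, out_adv, chrf):\n    # source-side meaning preservation\n    s_src = chrf(adv_src, src)\n    # relative decrease in chrF of the translation against the reference\n    d_tgt = (chrf(out_orig, ref) - chrf(out_adv, ref)) / chrf(out_orig, ref)\n    return s_src, d_tgt"
                    },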
                    {
                        "id": 175,
                        "string": "[Figure 1: Graphical representation of the results in Table 2 for word-based models. High source chrF and target RDchrF (upper-right corner) indicates a good attack.]"
                    },
                    {
                        "id": 177,
                        "string": "Attack Results We can now compare attacks under the three constraints Unconstrained, kNN and CharSwap and draw conclusions on their capacity to preserve meaning in the source and destroy it in the target."
                    },
                    {
                        "id": 178,
                        "string": "Attacks are conducted on the validation set using the approach described in §3.1 with 3 substitutions (this means that each adversarial input is at edit distance at most 3 from the original input)."
                    },
                    {
                        "id": 179,
                        "string": "Results (on a scale of 0 to 100 for readability) are reported in Table 2 for both word-and subwordbased LSTM and Transformer models."
                    },
                    {
                        "id": 180,
                        "string": "To give a better idea of how the different variables (language pair, model, attack) affect performance, we give a graphical representation of these same results in Figure 1 for the word-based models."
                    },
                    {
                        "id": 181,
                        "string": "The rest of this section discusses the implication of these results."
                    },
                    {
                        "id": 182,
                        "string": "Source chrF Highlights the Effect of Adding Constraints: Comparing the kNN and CharSwap rows to Unconstrained in the \"source\" sections of Table 2 clearly shows that constrained attacks have a positive effect on meaning preservation."
                    },
                    {
                        "id": 183,
                        "string": "Beyond validating our assumptions from §3.2, this shows that source chrF is useful to carry out the comparison in the first place 10 ."
                    },
                    {
                        "id": 184,
                        "string": "To give a point of reference, results from the manual evaluation carried out in §4.2 show that that 90% of the French sentence pairs to which humans gave a score of 4 or 5 in semantic similarity have a chrF > 78."
                    },
                    {
                        "id": 185,
                        "string": "10 It can be argued that using chrF gives an advantage to CharSwap over kNN for source preservation (as opposed to METEOR for example)."
                    },
                    {
                        "id": 186,
                        "string": "We find that this is the case for Czech and German (source METEOR is higher for kNN) but not French."
                    },
                    {
                        "id": 187,
                        "string": "Moreover we find (see A.3) that chrF correlates better with human judgement even for kNN."
                    },
                    {
                        "id": 188,
                        "string": "Different Architectures are not Equal in the Face of Adversity: Inspection of the targetside results yields several interesting observations."
                    },
                    {
                        "id": 189,
                        "string": "First, the high RDchrF of CharSwap for wordbased model is yet another indication of their known shortcomings when presented with words out of their training vocabulary, even with <unk>replacement."
                    },
                    {
                        "id": 190,
                        "string": "Second, and perhaps more interestingly, Transformer models appear to be less robust to small embedding perturbations (kNN attacks) compared to LSTMs."
                    },
                    {
                        "id": 191,
                        "string": "Although the exploration of the exact reasons for this phenomenon is beyond the scope of this work, this is a good example that RDchrF can shed light on the different behavior of different architectures when confronted with adversarial input."
                    },
                    {
                        "id": 192,
                        "string": "Overall, we find that the Char-Swap constraint is the only one that consistently produces attacks with > 1 average success (as defined in Section 2.1) according to Table 2 ."
                    },
                    {
                        "id": 193,
                        "string": "Table 4 contains two qualitative examples of this attack on the LSTM model in fr-en."
                    },
                    {
                        "id": 194,
                        "string": "Adversarial Training with Meaning-Preserving Attacks Adversarial Training Adversarial training (Goodfellow et al., 2014) augments the training data with adversarial examples."
                    },
                    {
                        "id": 195,
                        "string": "Formally, in place of the negative log likelihood (NLL) objective on a sample x, y, L(x, y) = N LL(x, y), the loss function is replaced with an interpolation of the NLL of the original sample x, y and an adversarial samplex, y: L (x, y) = (1 − α)N LL(x, y) + αN LL(x, y) (5) Ebrahimi et al."
                    },
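                    {
                        "id": "195a",
                        "string": "In code, the interpolated objective of Equation (5) is a one-liner (sketch; nll is assumed to return the sentence-level negative log-likelihood under the model):\n\ndef adv_train_loss(nll, x, x_adv, y, alpha=0.5):\n    # alpha = 0 recovers standard training;\n    # alpha = 1 trains on perturbed input only\n    return (1 - alpha) * nll(x, y) + alpha * nll(x_adv, y)"
                    },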
                    {
                        "id": 196,
                        "string": "(2018a) suggest that while adversarial training improves robustness to adversarial attacks, it can be detrimental to test performance on non-adversarial input."
                    },
                    {
                        "id": 197,
                        "string": "We investigate whether this is still the case when adversarial attacks are largely meaning-preserving."
                    },
                    {
                        "id": 198,
                        "string": "In our experiments, we generatex by applying 3 perturbations on the fly at each training step."
                    },
                    {
                        "id": 199,
                        "string": "To maintain training speed we do not solve Equation (2) iteratively but in one shot by replacing the argmax by top-3."
                    },
                    {
                        "id": 200,
                        "string": "Although this is less exact than iterating, this makes adversarial training time less than 2× slower than normal training."
                    },
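                    {
                        "id": "200a",
                        "string": "A sketch of this one-shot top-3 variant (assumed implementation, reusing the score matrix from Equation (3)): rather than re-running the attack after each substitution, the three best-scoring (position, word) pairs are applied at once:\n\nimport torch\n\ndef top3_substitutions(scores):\n    # scores: n x |V| matrix of first-order gains from Equation (3)\n    n, V = scores.shape\n    flat = scores.flatten().topk(3).indices\n    # note: two picks may land on the same position, in which case\n    # fewer than 3 words end up changed\n    return [(f.item() // V, f.item() % V) for f in flat]"
                    },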
                    {
                        "id": 201,
                        "string": "We perform adversarial training with perturbations without constraints (Unconstrained-adv) and with the CharSwap constraint (CharSwap-adv)."
                    },
                    {
                        "id": 202,
                        "string": "All experiments are conducted with the word-based LSTM model."
                    },
                    {
                        "id": 203,
                        "string": "Results Test performance on non-adversarial input is reported in Table 5 ."
                    },
                    {
                        "id": 204,
                        "string": "In keeping with the rest of the paper, we primarily report chrF results, but also show the standard BLEU as well."
                    },
                    {
                        "id": 205,
                        "string": "We observe that when α = 1.0, i.e. the model only sees the perturbed input during training 11 , the Unconstrained-adv model suffers a drop in test performance, whereas CharSwap-adv's performance is on par with the original."
                    },
                    {
                        "id": 207,
                        "string": "This is likely   where y is not an acceptable translation ofx introduced by the lack of constraint."
                    },
                    {
                        "id": 208,
                        "string": "This effect disappears when α = 0.5 because the model sees the original samples as well."
                    },
                    {
                        "id": 209,
                        "string": "Not unexpectedly, Table 6 indicates that CharSwap-adv is more robust to CharSwap constrained attacks for both values of α, with 1.0 giving the best results."
                    },
                    {
                        "id": 210,
                        "string": "On the other hand, Unconstrained-adv is similarly or more vulnerable to these attacks than the baseline."
                    },
                    {
                        "id": 211,
                        "string": "Hence, we can safely conclude that adversarial training with CharSwap attacks improves robustness while not impacting test performance as much as unconstrained attacks."
                    },
                    {
                        "id": 212,
                        "string": "Related work Following seminal work on adversarial attacks by Szegedy et al. (2013), Goodfellow et al. (2014) introduced gradient-based attacks and adversarial training."
                    },
                    {
                        "id": 215,
                        "string": "Since then, a variety of attack (Moosavi-Dezfooli et al., 2016) and defense (Cissé et al., 2017; Kolter and Wong, 2017) mechanisms have been proposed."
                    },
                    {
                        "id": 216,
                        "string": "Adversarial examples for NLP specifically have seen attacks on sentiment Samanta and Mehta, 2017; Ebrahimi et al., 2018b) , malware (Grosse et al., 2016) , gender (Reddy and Knight, 2016) or toxicity (Hosseini et al., 2017) classification to cite a few."
                    },
                    {
                        "id": 217,
                        "string": "In MT, methods have been proposed to attack word-based (Zhao et al., 2018; Cheng et al., 2018) and character-based (Belinkov and Bisk, 2018; Ebrahimi et al., 2018a) models."
                    },
                    {
                        "id": 218,
                        "string": "However these works side-step the question of meaning preservation in the source: they mostly focus on target side evaluation."
                    },
                    {
                        "id": 219,
                        "string": "Finally there is work centered around meaning-preserving adversarial attacks for NLP via paraphrase generation (Iyyer et al., 2018) or rule-based approaches (Jia and Liang, 2017; Ribeiro et al., 2018; Naik et al., 2018; Alzantot et al., 2018) ."
                    },
                    {
                        "id": 220,
                        "string": "However the proposed attacks are highly engineered and focused on English."
                    },
                    {
                        "id": 221,
                        "string": "Conclusion This paper highlights the importance of performing meaning-preserving adversarial perturbations for NLP models (with a focus on seq2seq)."
                    },
                    {
                        "id": 222,
                        "string": "We proposed a general evaluation framework for adversarial perturbations and compared various automatic metrics as proxies for human judgment to instantiate this framework."
                    },
                    {
                        "id": 223,
                        "string": "We then confirmed that, in the context of MT, \"naive\" attacks do not preserve meaning in general, and proposed alternatives to remedy this issue."
                    },
                    {
                        "id": 224,
                        "string": "Finally, we have shown the utility of adversarial training in this paradigm."
                    },
                    {
                        "id": 225,
                        "string": "We hope that this helps future work in this area of research to evaluate meaning conservation more consistently."
                    },
                    {
                        "id": 226,
                        "string": "A Supplemental Material A.1 Generating OOV Replacements with Internal Character Swaps We use the following snippet to produce an OOV word from an existing word: 1 def make_oov( 2 word, 3 vocab, 4 max_scrambling, 5 ): 6 \"\"\"Modify a word to make it OOV 7 (while keeping the meaning)\"\"\" 8 # If the word has >3 letters 9 # try scrambling them 10 L = len ( #train #valid #test fr-en 220.4k 6,824 2,213 de-en 196.9k 11,825 2,213 cs-en 114.4k 5,716 2,213 A.3 Breakdown of Correlation with Human Judgement We provide a breakdown of the correlation coefficients of automatic metrics with human judgment for source-side meaning-preservation, both in terms of number of perturbed words (Table 8) and constraint (Table 9 )."
                    },
                    {
                        "id": 227,
                        "string": "While those coefficients are computed on a much smaller sample size, and their differences are not all statistically significant with p < 0.01, they exhibit the same trend as the results from Table 3."
                    },
                    {
                        "id": 228,
                        "string": "[Table 9: Correlation of automatic metrics to human judgment of semantic similarity between original and adversarial source sentences, broken down by type of constraint on the perturbation. \"*\" indicates that the correlation is significantly better than the next-best one.]"
                    },
                    {
                        "id": 229,
                        "string": "In particular Table 8 shows that the good correlation of chrF with human judgment is not only due to the ability to distinguish between different number of edits."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 22
                    },
                    {
                        "section": "A Framework for Evaluating Adversarial Attacks",
                        "n": "2",
                        "start": 23,
                        "end": 27
                    },
                    {
                        "section": "The Adversarial Trade-off",
                        "n": "2.1",
                        "start": 28,
                        "end": 46
                    },
                    {
                        "section": "Similarity Metrics",
                        "n": "2.2",
                        "start": 47,
                        "end": 49
                    },
                    {
                        "section": "Human Judgment",
                        "n": "2.2.1",
                        "start": 50,
                        "end": 67
                    },
                    {
                        "section": "Automatic Metrics",
                        "n": "2.2.2",
                        "start": 68,
                        "end": 80
                    },
                    {
                        "section": "Gradient-Based Adversarial Attacks",
                        "n": "3",
                        "start": 81,
                        "end": 81
                    },
                    {
                        "section": "Attack Paradigm",
                        "n": "3.1",
                        "start": 82,
                        "end": 92
                    },
                    {
                        "section": "General Approach",
                        "n": "3.1.1",
                        "start": 93,
                        "end": 114
                    },
                    {
                        "section": "The Adversarial Loss L adv",
                        "n": "3.1.2",
                        "start": 115,
                        "end": 121
                    },
                    {
                        "section": "Enforcing Semantically Similar Adversarial Inputs",
                        "n": "3.2",
                        "start": 122,
                        "end": 136
                    },
                    {
                        "section": "Experiments",
                        "n": "4",
                        "start": 137,
                        "end": 139
                    },
                    {
                        "section": "Experimental setting",
                        "n": "4.1",
                        "start": 140,
                        "end": 158
                    },
                    {
                        "section": "Correlation of Automatic Metrics with Human Judgment",
                        "n": "4.2",
                        "start": 159,
                        "end": 176
                    },
                    {
                        "section": "Attack Results",
                        "n": "4.3",
                        "start": 177,
                        "end": 193
                    },
                    {
                        "section": "Adversarial Training",
                        "n": "5.1",
                        "start": 194,
                        "end": 202
                    },
                    {
                        "section": "Results",
                        "n": "5.2",
                        "start": 203,
                        "end": 211
                    },
                    {
                        "section": "Related work",
                        "n": "6",
                        "start": 212,
                        "end": 220
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 221,
                        "end": 229
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/967-Table2-1.png",
                        "caption": "Table 2: Target RDchrF and source chrF scores for all the attacks on all our models (word- and subword-based LSTM and Transformer).",
                        "page": 5,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 517.4399999999999,
                            "y1": 64.8,
                            "y2": 336.96
                        }
                    },
                    {
                        "filename": "../figure/image/967-Table3-1.png",
                        "caption": "Table 3: Correlation of automatic metrics to human judgment of adversarial source and target sentences. “∗” indicates that the correlation is significantly better than the next-best one.",
                        "page": 5,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 501.12,
                            "y1": 392.64,
                            "y2": 435.35999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/967-Table4-1.png",
                        "caption": "Table 4: Example of CharSwap attacks on the fr-en LSTM. The first example is a successful attack (high source chrF and target RDchrF) whereas the second is not.",
                        "page": 6,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 527.04,
                            "y1": 283.68,
                            "y2": 462.24
                        }
                    },
                    {
                        "filename": "../figure/image/967-Figure1-1.png",
                        "caption": "Figure 1: Graphical representation of the results in Table 2 for word-based models. High source chrF and target RDchrF (upper-right corner) indicates a good attack.",
                        "page": 6,
                        "bbox": {
                            "x1": 116.64,
                            "x2": 481.44,
                            "y1": 61.44,
                            "y2": 231.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/967-Table6-1.png",
                        "caption": "Table 6: Robustness to CharSwap attacks on the validation set with/without adversarial training (RDchrF). Lower is better.",
                        "page": 7,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 537.12,
                            "y1": 301.44,
                            "y2": 411.35999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/967-Table5-1.png",
                        "caption": "Table 5: chrF (BLEU) scores on the original test set before/after adversarial training of the word-based LSTM model.",
                        "page": 7,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 537.12,
                            "y1": 64.8,
                            "y2": 241.92
                        }
                    },
                    {
                        "filename": "../figure/image/967-Table1-1.png",
                        "caption": "Table 1: Examples of different adversarial inputs. The substituted word is highlighted.",
                        "page": 3,
                        "bbox": {
                            "x1": 96.96,
                            "x2": 498.24,
                            "y1": 62.879999999999995,
                            "y2": 205.92
                        }
                    },
                    {
                        "filename": "../figure/image/967-Table7-1.png",
                        "caption": "Table 7: IWSLT2016 data statistics.",
                        "page": 11,
                        "bbox": {
                            "x1": 97.92,
                            "x2": 264.0,
                            "y1": 517.4399999999999,
                            "y2": 571.1999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/967-Table8-1.png",
                        "caption": "Table 8: Correlation of automatic metrics to human judgment of semantic similarity between original and adversarial source sentences, broken down by number of perturbed words. “∗” indicates that the correlation is significantly better than the next-best one.",
                        "page": 11,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 487.2,
                            "y1": 64.8,
                            "y2": 120.0
                        }
                    },
                    {
                        "filename": "../figure/image/967-Table9-1.png",
                        "caption": "Table 9: Correlation of automatic metrics to human judgment of semantic similarity between original and adversarial source sentences, broken down by type of constraint on the perturbation. “∗” indicates that the correlation is significantly better than the next-best one.",
                        "page": 11,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 517.4399999999999,
                            "y1": 203.51999999999998,
                            "y2": 259.2
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-5"
        },
        {
            "slides": {
                "0": {
                    "title": "Do we really need context",
                    "text": [
                        "It has 48 columns.",
                        "What does it refer to?",
                        "Possible translations into Russian:",
                        "48 . (masculine or neuter)",
                        "What do columns mean?",
                        "Under the cathedral lies the antique chapel."
                    ],
                    "page_nums": [
                        1,
                        2,
                        3,
                        4,
                        5,
                        6,
                        7,
                        8
                    ],
                    "images": []
                },
                "1": {
                    "title": "Recap antecedent and anaphora resolution",
                    "text": [
                        "Under the cathedral lies the antique chapel. It has 48 columns.",
                        "An antecedent is an expression that gives its meaning to",
                        "a proform (pronoun, pro-verb, pro-adverb, etc.)",
                        "Anaphora resolution is the problem of resolving references to earlier",
                        "or later items in the discourse."
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "2": {
                    "title": "Context in Machine Translation",
                    "text": [
                        "focused on handling specific phenomena",
                        "directly provide context to an NMT system at training time",
                        "what kinds of discourse phenomena are successfully handled",
                        "how they are modeled"
                    ],
                    "page_nums": [
                        10,
                        11,
                        12
                    ],
                    "images": []
                },
                "3": {
                    "title": "Plan",
                    "text": [
                        "we introduce a context-aware neural model, which is effective",
                        "an d has a sufficiently simple and interpretable interface between Model Archit cture",
                        "the context and the rest of the translation model",
                        "we analyze the flow of information from the context and identify",
                        "Overall performance pr onoun translation as the key phenomenon captured by the",
                        "by comparing to automatically predicted or human-annotated Analys s",
                        "coreference relations, we observe that the model implicitly"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "5": {
                    "title": "Context aware model architecture",
                    "text": [
                        "start with the Transformer [Vaswani et al, 2018]",
                        "incorporate context information on the encoder side",
                        "use a separate encoder for context",
                        "share first N-1 layers of source and context encoders",
                        "the last layer incorporates contextual information"
                    ],
                    "page_nums": [
                        16,
                        17,
                        18
                    ],
                    "images": [
                        "figure/image/969-Figure1-1.png"
                    ]
                },
                "8": {
                    "title": "Our model different types of context",
                    "text": [
                        "Next sentence does not appear",
                        "previous sentence Performance drops for a random",
                        "Model is robust towards being",
                        "shown a random context",
                        "(the only significant at p<0.01 difference is with the best model;",
                        "differences between other results are not significant)"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                },
                "10": {
                    "title": "What do we mean by attention to context",
                    "text": [
                        "attention from source to context",
                        "mean over heads of per-head attention",
                        "take sum over context words",
                        "(excluding <bos>, <eos> and punctuation)"
                    ],
                    "page_nums": [
                        24,
                        25
                    ],
                    "images": []
                },
                "11": {
                    "title": "Top words influenced by context",
                    "text": [
                        "it Need to know gender, because",
                        "yours verbs must agree in gender with I",
                        "(in past tense) yes",
                        "yes Many of these words appear at",
                        "i sentence initial position.",
                        "you Maybe this is all that matters?",
                        "word pos word pos",
                        "Only positions i after the first m"
                    ],
                    "page_nums": [
                        26,
                        27,
                        28,
                        29,
                        30,
                        31
                    ],
                    "images": [
                        "figure/image/969-Table3-1.png"
                    ]
                },
                "12": {
                    "title": "Dependence on sentence length",
                    "text": [
                        "high attention to context"
                    ],
                    "page_nums": [
                        33,
                        34,
                        35
                    ],
                    "images": []
                },
                "18": {
                    "title": "Ambiguous it noun antecedent",
                    "text": [
                        "masculine feminine neuter plural"
                    ],
                    "page_nums": [
                        41
                    ],
                    "images": []
                },
                "19": {
                    "title": "It with noun antecedent example",
                    "text": [
                        "It was locked up in the hold with 20 other boxes of supplies.",
                        "Possible translations into Russian:",
                        "You left money unattended?"
                    ],
                    "page_nums": [
                        42,
                        43
                    ],
                    "images": []
                },
                "21": {
                    "title": "Hypothesis",
                    "text": [
                        "Large improvements in BLEU on test sets with pronouns",
                        "co-referent with an expression in context",
                        "Attention mechanism Latent anaphora resolution"
                    ],
                    "page_nums": [
                        45
                    ],
                    "images": []
                },
                "22": {
                    "title": "How to test the hypothesis agreement with CoreNLP",
                    "text": [
                        "Find an antecedent noun phrase (using CoreNLP)",
                        "Pick examples where the noun phrase contains a single noun",
                        "Pick examples with several nouns in context",
                        "Identify the token with the largest attention weight (excluding punctuation,",
                        "If the token falls within the antecedent span, then its an agreement"
                    ],
                    "page_nums": [
                        46,
                        47
                    ],
                    "images": []
                },
                "23": {
                    "title": "Does the model learn anaphora",
                    "text": [
                        "or just some simple heuristic?"
                    ],
                    "page_nums": [
                        48
                    ],
                    "images": []
                },
                "24": {
                    "title": "Agreement with CoreNLP predictions",
                    "text": [
                        "random first last attention agreement of attention is the",
                        "first noun is the best heuristic"
                    ],
                    "page_nums": [
                        49,
                        50
                    ],
                    "images": []
                },
                "25": {
                    "title": "Compared to human annotations for it",
                    "text": [
                        "pick 500 examples from the",
                        "ask human annotators to mark",
                        "pick examples where an",
                        "antecedent is a noun phrase",
                        "calculate the agreement with"
                    ],
                    "page_nums": [
                        51
                    ],
                    "images": []
                },
                "26": {
                    "title": "Attention map examples",
                    "text": [
                        "There was a time I would",
                        "have lost my heart to a",
                        "And you, no doubt, would"
                    ],
                    "page_nums": [
                        52,
                        53,
                        54
                    ],
                    "images": [
                        "figure/image/969-Figure5-1.png"
                    ]
                }
            },
            "paper_title": "Context-Aware Neural Machine Translation Learns Anaphora Resolution",
            "paper_id": "969",
            "paper": {
                "title": "Context-Aware Neural Machine Translation Learns Anaphora Resolution",
                "abstract": "Standard machine translation systems process sentences in isolation and hence ignore extra-sentential information, even though extended context can both prevent mistakes in ambiguous cases and improve translation coherence. We introduce a context-aware neural machine translation model designed in such way that the flow of information from the extended context to the translation model can be controlled and analyzed. We experiment with an English-Russian subtitles dataset, and observe that much of what is captured by our model deals with improving pronoun translation. We measure correspondences between induced attention distributions and coreference relations and observe that the model implicitly captures anaphora. It is consistent with gains for sentences where pronouns need to be gendered in translation. Beside improvements in anaphoric cases, the model also improves in overall BLEU, both over its context-agnostic version (+0.7) and over simple concatenation of the context and source sentences (+0.6).",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction It has long been argued that handling discourse phenomena is important in translation (Mitkov, 1999; Hardmeier, 2012) ."
                    },
                    {
                        "id": 1,
                        "string": "Using extended context, beyond the single source sentence, should in principle be beneficial in ambiguous cases and also ensure that generated translations are coherent."
                    },
                    {
                        "id": 2,
                        "string": "Nevertheless, machine translation systems typically ignore discourse phenomena and translate sentences in isolation."
                    },
                    {
                        "id": 3,
                        "string": "Earlier research on this topic focused on handling specific phenomena, such as translating pronouns (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010; Hardmeier et al., 2015) , discourse connectives (Meyer et al., 2012) , verb tense (Gong et al., 2012) , increasing lexical consistency (Carpuat, 2009; Tiedemann, 2010; Gong et al., 2011) , or topic adaptation (Su et al., 2012; Hasler et al., 2014) , with special-purpose features engineered to model these phenomena."
                    },
                    {
                        "id": 4,
                        "string": "However, with traditional statistical machine translation being largely supplanted with neural machine translation (NMT) models trained in an end-toend fashion, an alternative is to directly provide additional context to an NMT system at training time and hope that it will succeed in inducing relevant predictive features (Jean et al., 2017; Wang et al., 2017; Tiedemann and Scherrer, 2017; Bawden et al., 2018) ."
                    },
                    {
                        "id": 5,
                        "string": "While the latter approach, using context-aware NMT models, has demonstrated to yield performance improvements, it is still not clear what kinds of discourse phenomena are successfully handled by the NMT systems and, importantly, how they are modeled."
                    },
                    {
                        "id": 6,
                        "string": "Understanding this would inform development of future discourse-aware NMT models, as it will suggest what kind of inductive biases need to be encoded in the architecture or which linguistic features need to be exploited."
                    },
                    {
                        "id": 7,
                        "string": "In our work we aim to enhance our understanding of the modelling of selected discourse phenomena in NMT."
                    },
                    {
                        "id": 8,
                        "string": "To this end, we construct a simple discourse-aware model, demonstrate that it achieves improvements over the discourse-agnostic baseline on an English-Russian subtitles dataset (Lison et al., 2018) and study which context information is being captured in the model."
                    },
                    {
                        "id": 9,
                        "string": "Specifically, we start with the Trans-former (Vaswani et al., 2017) , a state-of-the-art model for context-agnostic NMT, and modify it in such way that it can handle additional context."
                    },
                    {
                        "id": 10,
                        "string": "In our model, a source sentence and a context sentence are first encoded independently, and then a single attention layer, in a combination with a gating function, is used to produce a context-aware representation of the source sentence."
                    },
                    {
                        "id": 11,
                        "string": "The information from context can only flow through this attention layer."
                    },
                    {
                        "id": 12,
                        "string": "When compared to simply concatenating input sentences, as proposed by Tiedemann and Scherrer (2017) , our architecture appears both more accurate (+0.6 BLEU) and also guarantees that the contextual information cannot bypass the attention layer and hence remain undetected in our analysis."
                    },
                    {
                        "id": 13,
                        "string": "We analyze what types of contextual information are exploited by the translation model."
                    },
                    {
                        "id": 14,
                        "string": "While studying the attention weights, we observe that much of the information captured by the model has to do with pronoun translation."
                    },
                    {
                        "id": 15,
                        "string": "It is not entirely surprising, as we consider translation from a language without grammatical gender (English) to a language with grammatical gender (Russian)."
                    },
                    {
                        "id": 16,
                        "string": "For Russian, translated pronouns need to agree in gender with their antecedents."
                    },
                    {
                        "id": 17,
                        "string": "Moreover, since in Russian verbs agree with subjects in gender and adjectives also agree in gender with pronouns in certain frequent constructions, mistakes in translating pronouns have a major effect on the words in the produced sentences."
                    },
                    {
                        "id": 18,
                        "string": "Consequently, the standard cross-entropy training objective sufficiently rewards the model for improving pronoun translation and extracting relevant information from the context."
                    },
                    {
                        "id": 19,
                        "string": "We use automatic co-reference systems and human annotation to isolate anaphoric cases."
                    },
                    {
                        "id": 20,
                        "string": "We observe even more substantial improvements in performance on these subsets."
                    },
                    {
                        "id": 21,
                        "string": "By comparing attention distributions induced by our model against co-reference links, we conclude that the model implicitly captures coreference phenomena, even without having any kind of specialized features which could help it in this subtask."
                    },
                    {
                        "id": 22,
                        "string": "These observations also suggest potential directions for future work."
                    },
                    {
                        "id": 23,
                        "string": "For example, effective co-reference systems go beyond relying simply on embeddings of contexts."
                    },
                    {
                        "id": 24,
                        "string": "One option would be to integrate 'global' features summarizing properties of groups of mentions predicted as linked in a document (Wiseman et al., 2016) , or to use latent relations to trace en-tities across documents (Ji et al., 2017) ."
                    },
                    {
                        "id": 25,
                        "string": "Our key contributions can be summarized as follows: • we introduce a context-aware neural model, which is effective and has a sufficiently simple and interpretable interface between the context and the rest of the translation model; • we analyze the flow of information from the context and identify pronoun translation as the key phenomenon captured by the model; • by comparing to automatically predicted or human-annotated coreference relations, we observe that the model implicitly captures anaphora."
                    },
                    {
                        "id": 26,
                        "string": "Neural Machine Translation Given a source sentence x = (x_1, x_2, ..., x_S) and a target sentence y = (y_1, y_2, ..., y_T), NMT models predict words in the target sentence, word by word."
                    },
                    {
                        "id": 33,
                        "string": "Current NMT models mainly have an encoderdecoder structure."
                    },
                    {
                        "id": 34,
                        "string": "The encoder maps an input sequence of symbol representations x to a sequence of distributed representations z = (z_1, z_2, ..., z_S)."
                    },
                    {
                        "id": 38,
                        "string": "Given z, a neural decoder generates the corresponding target sequence of symbols y one element at a time."
                    },
                    {
                        "id": 39,
                        "string": "Attention-based NMT The encoder-decoder framework with attention has been proposed by Bahdanau et al."
                    },
                    {
                        "id": 40,
                        "string": "(2015) and has become the defacto standard in NMT."
                    },
                    {
                        "id": 41,
                        "string": "The model consists of encoder and decoder recurrent networks and an attention mechanism."
                    },
                    {
                        "id": 42,
                        "string": "The attention mechanism selectively focuses on parts of the source sentence during translation, and the attention weights specify the proportions with which information from different positions is combined."
                    },
                    {
                        "id": 43,
                        "string": "Transformer Vaswani et al."
                    },
                    {
                        "id": 44,
                        "string": "(2017) proposed an architecture that avoids recurrence completely."
                    },
                    {
                        "id": 45,
                        "string": "The Transformer follows an encoder-decoder architecture using stacked self-attention and fully connected layers for both the encoder and decoder."
                    },
                    {
                        "id": 46,
                        "string": "An important advantage of the Transformer is that it is more parallelizable and faster to train than recurrent encoder-decoder models."
                    },
                    {
                        "id": 47,
                        "string": "From the source tokens, learned embeddings are generated and then modified using positional encodings."
                    },
                    {
                        "id": 48,
                        "string": "The encoded word embeddings are then used as input to the encoder which consists of N layers each containing two sub-layers: (a) a multihead attention mechanism, and (b) a feed-forward network."
                    },
                    {
                        "id": 49,
                        "string": "The self-attention mechanism first computes attention weights: i.e., for each word, it computes a distribution over all words (including itself)."
                    },
                    {
                        "id": 50,
                        "string": "This distribution is then used to compute a new representation of that word: this new representation is set to an expectation (under the attention distribution specific to the word) of word representations from the layer below."
                    },
                    {
                        "id": 51,
                        "string": "In multi-head attention, this process is repeated h times with different representations and the result is concatenated."
                    },
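                    {
                        "id": "note-51",
                        "string": "A minimal NumPy sketch of one standard instantiation of this mechanism (scaled dot-product self-attention, per Vaswani et al. 2017), not the authors' code; the parameter names Wq, Wk, Wv are hypothetical:\nimport numpy as np\n\ndef self_attention(X, Wq, Wk, Wv):\n    # X: [seq_len, d_model]; Wq, Wk, Wv: [d_model, d_head]\n    Q, K, V = X @ Wq, X @ Wk, X @ Wv\n    scores = Q @ K.T / np.sqrt(K.shape[-1])  # similarity of each word to all words\n    w = np.exp(scores - scores.max(-1, keepdims=True))\n    w = w / w.sum(-1, keepdims=True)  # per-word distribution over all words (incl. itself)\n    return w @ V  # expectation of layer-below representations under that distribution"
                    },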
                    {
                        "id": 52,
                        "string": "The second component of each layer of the Transformer network is a feed-forward network."
                    },
                    {
                        "id": 53,
                        "string": "The authors propose using a two-layered network with the ReLU activations."
                    },
                    {
                        "id": 54,
                        "string": "Analogously, each layer of the decoder contains the two sub-layers mentioned above as well as an additional multi-head attention sub-layer that receives input from the corresponding encoding layer."
                    },
                    {
                        "id": 55,
                        "string": "In the decoder, the attention is masked to prevent future positions from being attended to, or in other words, to prevent illegal leftward information flow."
                    },
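                    {
                        "id": "note-55",
                        "string": "A small sketch, assuming NumPy, of the decoder-side mask described here; True marks illegal (future) positions whose scores are set to -inf before the softmax:\nimport numpy as np\n\ndef causal_mask(T):\n    # mask[t, s] is True when position s lies in the future of position t\n    return np.triu(np.ones((T, T), dtype=bool), k=1)\n\n# usage sketch: scores[causal_mask(T)] = -np.inf"
                    },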
                    {
                        "id": 56,
                        "string": "See Vaswani et al."
                    },
                    {
                        "id": 57,
                        "string": "(2017) for additional details."
                    },
                    {
                        "id": 58,
                        "string": "The proposed architecture reportedly improves over the previous best results on the WMT 2014 English-to-German and English-to-French translation tasks, and we verified its strong performance on our data set in preliminary experiments."
                    },
                    {
                        "id": 59,
                        "string": "Thus, we consider it a strong state-of-the-art baseline for our experiments."
                    },
                    {
                        "id": 60,
                        "string": "Moreover, as the Transformer is attractive in practical NMT applications because of its parallelizability and training efficiency, integrating extra-sentential information in Transformer is important from the engineering perspective."
                    },
                    {
                        "id": 61,
                        "string": "As we will see in Section 4, previous techniques developed for recurrent encoderdecoders do not appear effective for the Transformer."
                    },
                    {
                        "id": 62,
                        "string": "3 Context-aware model architecture Our model is based on Transformer architecture (Vaswani et al., 2017) ."
                    },
                    {
                        "id": 63,
                        "string": "We leave Transformer's decoder intact while incorporating context information on the encoder side ( Figure 1 )."
                    },
                    {
                        "id": 64,
                        "string": "Source encoder: The encoder is composed of a stack of N layers."
                    },
                    {
                        "id": 65,
                        "string": "The first N − 1 layers are identical and represent the original layers of Trans- g i = σ W g c (s−attn) i , c (c−attn) i + b g (1) c i = g i c (s−attn) i + (1 − g i ) c (c−attn) i (2) Context encoder: The context encoder is composed of a stack of N identical layers and replicates the original Transformer encoder."
                    },
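                    {
                        "id": "note-65",
                        "string": "A minimal sketch of the gated sum in equations (1)-(2), assuming NumPy; the names c_src, c_ctx, W_g, b_g are hypothetical, not the released implementation:\nimport numpy as np\n\ndef gated_context_sum(c_src, c_ctx, W_g, b_g):\n    # c_src, c_ctx: [seq_len, d] source- and context-attention outputs\n    # W_g: [2*d, d], b_g: [d]\n    z = np.concatenate([c_src, c_ctx], axis=-1) @ W_g + b_g\n    g = 1.0 / (1.0 + np.exp(-z))  # sigmoid gate, eq. (1)\n    return g * c_src + (1.0 - g) * c_ctx  # gated sum, eq. (2)"
                    },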
                    {
                        "id": 66,
                        "string": "In contrast to related work (Jean et al., 2017; Wang et al., 2017) , we found in preliminary experiments that using separate encoders does not yield an accurate model."
                    },
                    {
                        "id": 67,
                        "string": "Instead we share the parameters of the first N − 1 layers with the source encoder."
                    },
                    {
                        "id": 68,
                        "string": "Since major proportion of the context encoder's parameters are shared with the source encoder, we add a special token (let us denote it <bos>) to the beginning of context sentences, but not source sentences, to let the shared layers know whether it is encoding a source or a context sentence."
                    },
                    {
                        "id": 69,
                        "string": "Experiments Data and setting We use the publicly available OpenSubtitles2018 corpus (Lison et al., 2018) for English and Russian."
                    },
                    {
                        "id": 70,
                        "string": "1 As described in the appendix, we apply data cleaning and randomly choose 2 million training instances from the resulting data."
                    },
                    {
                        "id": 71,
                        "string": "For development and testing, we randomly select two subsets of 10000 instances from movies not encountered in training."
                    },
                    {
                        "id": 72,
                        "string": "2 Sentences were encoded using byte-pair encoding (Sennrich et al., 2016) , with source and target vocabularies of about 32000 tokens."
                    },
                    {
                        "id": 73,
                        "string": "We generally used the same parameters and optimizer as in the original Transformer (Vaswani et al., 2017) ."
                    },
                    {
                        "id": 74,
                        "string": "The hyperparameters, preprocessing and training details are provided in the supplementary material."
                    },
                    {
                        "id": 75,
                        "string": "Results and analysis We start by experiments motivating the setting and verifying that the improvements are indeed genuine, i.e."
                    },
                    {
                        "id": 76,
                        "string": "they come from inducing predictive features of the context."
                    },
                    {
                        "id": 77,
                        "string": "In subsequent section 5.2, we analyze the features induced by the context encoder and perform error analysis."
                    },
                    {
                        "id": 78,
                        "string": "Overall performance We use the traditional automatic metric BLEU on a general test set to get an estimate of the overall performance of the discourse-aware model, before turning to more targeted evaluation in the next section."
                    },
                    {
                        "id": 79,
                        "string": "We provide results in Table 1 ."
                    },
                    {
                        "id": 80,
                        "string": "3 The 'baseline' is the discourse-agnostic version of the Transformer."
                    },
                    {
                        "id": 81,
                        "string": "As another baseline we use the standard Transformer applied to the concatenation of the previous and source sentences, as proposed by Tiedemann and Scherrer (2017 a substantial degradation of performance (over 1 BLEU)."
                    },
                    {
                        "id": 82,
                        "string": "Instead, we use a binary flag at every word position in our concatenation baseline telling the encoder whether the word belongs to the context sentence or to the source sentence."
                    },
                    {
                        "id": 83,
                        "string": "We consider two versions of our discourseaware model: one using the previous sentence as the context, another one relying on the next sentence."
                    },
                    {
                        "id": 84,
                        "string": "We hypothesize that both the previous and the next sentence provide a similar amount of additional clues about the topic of the text, whereas for discourse phenomena such as anaphora, discourse relations and elliptical structures, the previous sentence is more important."
                    },
                    {
                        "id": 85,
                        "string": "First, we observe that our best model is the one using a context encoder for the previous sentence: it achieves 0.7 BLEU improvement over the discourse-agnostic model."
                    },
                    {
                        "id": 86,
                        "string": "We also notice that, unlike the previous sentence, the next sentence does not appear beneficial."
                    },
                    {
                        "id": 87,
                        "string": "This is a first indicator that discourse phenomena are the main reason for the observed improvement, rather than topic effects."
                    },
                    {
                        "id": 88,
                        "string": "Consequently, we focus solely on using the previous sentence in all subsequent experiments."
                    },
                    {
                        "id": 89,
                        "string": "Second, we observe that the concatenation baseline appears less accurate than the introduced context-aware model."
                    },
                    {
                        "id": 90,
                        "string": "This result suggests that our model is not only more amendable to analysis but also potentially more effective than using concatenation."
                    },
                    {
                        "id": 91,
                        "string": "In order to verify that our improvements are genuine, we also evaluate our model (trained with the previous sentence as context) on the same test set with shuffled context sentences."
                    },
                    {
                        "id": 92,
                        "string": "It can be seen that the performance drops significantly when a real context sentence is replaced with a random one."
                    },
                    {
                        "id": 93,
                        "string": "This confirms that the model does rely on context information to achieve the improvement in translation quality, and is not merely better regularized."
                    },
                    {
                        "id": 94,
                        "string": "However, the model is robust towards being shown a random context and obtains a performance similar to the context-agnostic baseline."
                    },
                    {
                        "id": 95,
                        "string": "Analysis In this section we investigate what types of contextual information are exploited by the model."
                    },
                    {
                        "id": 96,
                        "string": "We study the distribution of attention to context and perform analysis on specific subsets of the test data."
                    },
                    {
                        "id": 97,
                        "string": "Specifically the research questions we seek to answer are as follows: • For the translation of which words does the model rely on contextual history most?"
                    },
                    {
                        "id": 98,
                        "string": "• Are there any non-lexical patterns affecting attention to context, such as sentence length and word position?"
                    },
                    {
                        "id": 99,
                        "string": "• Can the context-aware NMT system implicitly learn coreference phenomena without any feature engineering?"
                    },
                    {
                        "id": 100,
                        "string": "Since all the attentions in our model are multihead, by attention weights we refer to an average over heads of per-head attention weights."
                    },
                    {
                        "id": 101,
                        "string": "First, we would like to identify a useful attention mass coming to context."
                    },
                    {
                        "id": 102,
                        "string": "We analyze the attention maps between source and context, and find that the model mostly attends to <bos> and <eos> context tokens, and much less often attends to words."
                    },
                    {
                        "id": 103,
                        "string": "Our hypothesis is that the model has found a way to take no information from context by looking at uninformative tokens, and it attends to words only when it wants to pass some contextual information to the source sentence encoder."
                    },
                    {
                        "id": 104,
                        "string": "Thus we define useful contextual attention mass as sum of attention weights to context words, excluding <bos> and <eos> tokens and punctuation."
                    },
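                    {
                        "id": "note-104",
                        "string": "A sketch of this definition under assumed tensor shapes; attn and ctx_tokens are hypothetical inputs:\nimport numpy as np\n\ndef useful_attention_mass(attn, ctx_tokens):\n    # attn: [n_heads, src_len, ctx_len] source-to-context attention weights\n    mean_attn = attn.mean(axis=0)  # average over heads, as defined above\n    skip = {\"<bos>\", \"<eos>\", \".\", \",\", \"!\", \"?\"}  # uninformative positions\n    keep = np.array([tok not in skip for tok in ctx_tokens])\n    return mean_attn[:, keep].sum(axis=1)  # useful mass per source word"
                    },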
                    {
                        "id": 105,
                        "string": "Top words depending on context We analyze the distribution of attention to context for individual source words to see for which words the model depends most on contextual history."
                    },
                    {
                        "id": 106,
                        "string": "We compute the overall average attention to context words for each source word in our test set."
                    },
                    {
                        "id": 107,
                        "string": "We do the same for source words at positions higher than first."
                    },
                    {
                        "id": 108,
                        "string": "We filter out words that occurred less than 10 times in a test set."
                    },
                    {
                        "id": 109,
                        "string": "The top 10 words with the highest average attention to context words are provided in Table 2 ."
                    },
                    {
                        "id": 110,
                        "string": "An interesting finding is that contextual attention is high for the translation of \"it\", \"yours\", \"ones\", \"you\" and \"I\", which are indeed very ambiguous out-of-context when translating into Russian."
                    },
                    {
                        "id": 111,
                        "string": "For example, \"it\" will be translated as third person singular masculine, feminine or neuter, or third person plural depending on its antecedent."
                    },
                    {
                        "id": 112,
                        "string": "Table 2 : Top-10 words with the highest average attention to context words."
                    },
                    {
                        "id": 113,
                        "string": "attn gives an average attention to context words, pos gives an average position of the source word."
                    },
                    {
                        "id": 114,
                        "string": "Left part is for words on all positions, right -for words on positions higher than first."
                    },
                    {
                        "id": 115,
                        "string": "\"You\" can be second person singular impolite or polite, or plural."
                    },
                    {
                        "id": 116,
                        "string": "Also, verbs must agree in gender and number with the translation of \"you\"."
                    },
                    {
                        "id": 117,
                        "string": "It might be not obvious why \"I\" has high contextual attention, as it is not ambiguous itself."
                    },
                    {
                        "id": 118,
                        "string": "However, in past tense, verbs must agree with \"I\" in gender, so to translate past tense sentences properly, the source encoder must predict speaker gender, and the context may provide useful indicators."
                    },
                    {
                        "id": 119,
                        "string": "Most surprising is the appearance of \"yes\", \"yeah\", and \"well\" in the list of context-dependent words, similar to the finding by Tiedemann and Scherrer (2017) ."
                    },
                    {
                        "id": 120,
                        "string": "We note that these words mostly appear in sentence-initial position, and in relatively short sentences."
                    },
                    {
                        "id": 121,
                        "string": "If only words after the first are considered, they disappear from the top-10 list."
                    },
                    {
                        "id": 122,
                        "string": "We hypothesize that the amount of attention to context not only depends on the words themselves, but also on factors such as sentence length and position, and we test this hypothesis in the next section."
                    },
                    {
                        "id": 123,
                        "string": "Dependence on sentence length and position We compute useful attention mass coming to context by averaging over source words."
                    },
                    {
                        "id": 124,
                        "string": "Figure 2 illustrates the dependence of this average attention mass on sentence length."
                    },
                    {
                        "id": 125,
                        "string": "We observe a disproportionally high attention on context for short sentences, and a positive correlation between the average contextual attention and context length."
                    },
                    {
                        "id": 126,
                        "string": "It is also interesting to see the importance given to the context at different positions in the source sentence."
                    },
                    {
                        "id": 127,
                        "string": "We compute an average attention mass to context for a set of 1500 sentences of the same length."
                    },
                    {
                        "id": 128,
                        "string": "As can be seen in Figure 3 , words at the beginning of a source sentence tend to attend to context more than words at the end of a sentence."
                    },
                    {
                        "id": 129,
                        "string": "This correlates with standard view that English sentences present hearer-old material before hearer-new."
                    },
                    {
                        "id": 130,
                        "string": "There is a clear (negative) correlation between sentence length and the amount of attention placed on contextual history, and between token position and the amount of attention to context, which suggests that context is especially helpful at the beginning of a sentence, and for shorter sentences."
                    },
                    {
                        "id": 131,
                        "string": "However, Figure 4 shows that there is no straightforward dependence of BLEU improvement on source length."
                    },
                    {
                        "id": 132,
                        "string": "This means that while attention on context is disproportionally high for short sentences, context does not seem disproportionally more useful for these sentences."
                    },
                    {
                        "id": 133,
                        "string": "Analysis of pronoun translation The analysis of the attention model indicates that the model attends heavily to the contextual history for the translation of some pronouns."
                    },
                    {
                        "id": 134,
                        "string": "Here, we investigate whether this context-aware modelling results in empirical improvements in translation Ambiguous pronouns and translation quality Ambiguous pronouns are relatively sparse in a general-purpose test set, and previous work has designed targeted evaluation of pronoun translation (Hardmeier et al., 2015; Miculicich Werlen and Popescu-Belis, 2017; Bawden et al., 2018) ."
                    },
                    {
                        "id": 135,
                        "string": "However, we note that in Russian, grammatical gender is not only marked on pronouns, but also on adjectives and verbs."
                    },
                    {
                        "id": 136,
                        "string": "Rather than using a pronoun-specific evaluation, we present results with BLEU on test sets where we hypothesize context to be relevant, specifically sentences containing co-referential pronouns."
                    },
                    {
                        "id": 137,
                        "string": "We feed Stanford CoreNLP open-source coreference resolution system (Manning et al., 2014a) with pairs of sentences to find examples where there is a link between one of the pronouns under consideration and the context."
                    },
                    {
                        "id": 138,
                        "string": "We focus on anaphoric instances of \"it\" (this excludes, among others, pleonastic uses of \"it\"), and instances of the pronouns \"I\", \"you\", and \"yours\" that are coreferent with an expression in the previous sentence."
                    },
                    {
                        "id": 139,
                        "string": "All these pronouns express ambiguity in the translation into Russian, and the model has learned to attend to context for their translation (Table 2) ."
                    },
                    {
                        "id": 140,
                        "string": "To combat data sparsity, the test sets are extracted from large amounts of held-out data of OpenSubtitles2018."
                    },
                    {
                        "id": 141,
                        "string": "Table 3 shows BLEU scores for the resulting subsets."
                    },
                    {
                        "id": 142,
                        "string": "First of all, we see that most of the antecedents in these test sets are also pronouns."
                    },
                    {
                        "id": 143,
                        "string": "Antecedent pronouns should not be particularly informative for translating the source pronoun."
                    },
                    {
                        "id": 144,
                        "string": "Nevertheless, even with such contexts, improvements are generally larger than on the overall test set."
                    },
                    {
                        "id": 145,
                        "string": "When we focus on sentences where the antecedent for pronoun under consideration contains    a noun, we observe even larger improvements ( Table 4 )."
                    },
                    {
                        "id": 146,
                        "string": "Improvement is smaller for \"I\", but we note that verbs with first person singular subjects mark gender only in the past tense, which limits the impact of correctly predicting gender."
                    },
                    {
                        "id": 147,
                        "string": "In contrast, different types of \"you\" (polite/impolite, singular/plural) lead to different translations of the pronoun itself plus related verbs and adjectives, leading to a larger jump in performance."
                    },
                    {
                        "id": 148,
                        "string": "Examples of nouns co-referent with \"I\" and \"you\" include names, titles (\"Mr.\", \"Mrs.\", \"officer\"), terms denoting family relationships (\"Mom\", \"Dad\"), and terms of endearment (\"honey\", \"sweetie\")."
                    },
                    {
                        "id": 149,
                        "string": "Such nouns can serve to disambiguate number and gender of the speaker or addressee, and mark the level of familiarity between them."
                    },
                    {
                        "id": 150,
                        "string": "The most interesting case is translation of \"it\", as \"it\" can have many different translations into Russian, depending on the grammatical gender of the antecedent."
                    },
                    {
                        "id": 151,
                        "string": "In order to disentangle these cases, we train the Berkeley aligner on 10m sentences and use the trained model to divide the test set with \"it\" referring to a noun into test sets specific to each gender and number."
                    },
                    {
                        "id": 152,
                        "string": "Results are in Table 5 ."
                    },
                    {
                        "id": 153,
                        "string": "We see an improvement of 4-5 BLEU for sentences where \"it\" is translated into a feminine or plural pronoun by the reference."
                    },
                    {
                        "id": 154,
                        "string": "For cases where \"it\" is translated into a masculine pronoun, the improvement is smaller because the masculine gender is more frequent, and the context-agnostic baseline tends to translate the pronoun \"it\" as masculine."
                    },
                    {
                        "id": 155,
                        "string": "Latent anaphora resolution The results in Tables 4 and 5 suggest that the context-aware model exploits information about the antecedent of an ambiguous pronoun."
                    },
                    {
                        "id": 156,
                        "string": "We hypothesize that we can interpret the model's attention mechanism as a latent anaphora resolution, and perform experiments to test this hypothesis."
                    },
                    {
                        "id": 157,
                        "string": "For test sets from Table 4 , we find an antecedent noun phrase (usually a determiner or a possessive pronoun followed by a noun) using Stanford CoreNLP (Manning et al., 2014b) ."
                    },
                    {
                        "id": 158,
                        "string": "We select only examples where a noun phrase contains a single noun to simplify our analysis."
                    },
                    {
                        "id": 159,
                        "string": "Then we identify which token receives the highest attention weight (excluding <bos> and <eos> tokens and punctuation)."
                    },
                    {
                        "id": 160,
                        "string": "If this token falls within the antecedent span, then we treat it as agreement (see Table 6 )."
                    },
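                    {
                        "id": "note-160",
                        "string": "A sketch of the agreement criterion just described; antecedent_span is assumed to be hypothetical (start, end) token indices from CoreNLP:\ndef agrees(attn_row, ctx_tokens, antecedent_span):\n    skip = {\"<bos>\", \"<eos>\", \".\", \",\", \"!\", \"?\"}\n    best, best_w = None, -1.0\n    for i, (tok, w) in enumerate(zip(ctx_tokens, attn_row)):\n        if tok not in skip and w > best_w:\n            best, best_w = i, w\n    start, end = antecedent_span\n    return best is not None and start <= best < end"
                    },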
                    {
                        "id": 161,
                        "string": "One natural question might be: does the attention component in our model genuinely learn to perform anaphora resolution, or does it capture some simple heuristic (e.g., pointing to the last noun)?"
                    },
                    {
                        "id": 162,
                        "string": "To answer this question, we consider several baselines: choosing a random, last or first pronoun agreement (in %) random first last attention  it  40  36  52  58  you  42  63  29  67  I  39  56 35 62 noun from the context sentence as an antecedent."
                    },
                    {
                        "id": 163,
                        "string": "Note that an agreement of the last noun for \"it\" or the first noun for \"you\" and \"I\" is very high."
                    },
                    {
                        "id": 164,
                        "string": "This is partially due to the fact that most context sentences have only one noun."
                    },
                    {
                        "id": 165,
                        "string": "For these examples a random and last predictions are always correct, meanwhile attention does not always pick a noun as the most relevant word in the context."
                    },
                    {
                        "id": 166,
                        "string": "To get a more clear picture let us now concentrate only on examples where there is more than one noun in the context (Table 7) ."
                    },
                    {
                        "id": 167,
                        "string": "We can now see that the attention weights are in much better agreement with the coreference system than any of the heuristics."
                    },
                    {
                        "id": 168,
                        "string": "This indicates that the model is indeed performing anaphora resolution."
                    },
                    {
                        "id": 169,
                        "string": "While agreement with CoreNLP is encouraging, we are aware that coreference resolution by CoreNLP is imperfect and partial agreement with it may not necessarily indicate that the attention is particularly accurate."
                    },
                    {
                        "id": 170,
                        "string": "In order to control for this, we asked human annotators to manually evaluate 500 examples from the test sets where CoreNLP predicted that \"it\" refers to a noun in the context sentence."
                    },
                    {
                        "id": 171,
                        "string": "More precisely, we picked random 500 examples from the test set with \"it\" from Table 7."
                    },
                    {
                        "id": 172,
                        "string": "We marked the pronoun in a source which CoreNLP found anaphoric."
                    },
                    {
                        "id": 173,
                        "string": "Assessors were given the source and context sentences and were asked to mark an antecedent noun phrase for a marked pronoun in a source sentence or say that there is no antecedent at all."
                    },
                    {
                        "id": 174,
                        "string": "We then picked those examples where assessors found a link from \"it\" to some noun in context (79% of all examples)."
                    },
                    {
                        "id": 175,
                        "string": "Then we evaluated agreement of CoreNLP and our model with the ground truth links."
                    },
                    {
                        "id": 176,
                        "string": "We also report the performance of the best heuristic for \"it\" from our previous analysis (i.e."
                    },
                    {
                        "id": 177,
                        "string": "last noun in context)."
                    },
                    {
                        "id": 178,
                        "string": "The results are provided in Table 8 ."
                    },
                    {
                        "id": 179,
                        "string": "The agreement between our model and the ground truth is 72%."
                    },
                    {
                        "id": 180,
                        "string": "Though 5% below the coreference system, this is a lot higher than the best agreement (in %) CoreNLP 77 attention 72 last noun 54 Table 8 : Performance of CoreNLP and our model's attention mechanism compared to human assessment."
                    },
                    {
                        "id": 181,
                        "string": "Examples with ≥1 noun in context sentence."
                    },
                    {
                        "id": 182,
                        "string": "Figure 5 : An example of an attention map between source and context."
                    },
                    {
                        "id": 183,
                        "string": "On the y-axis are the source tokens, on the x-axis the context tokens."
                    },
                    {
                        "id": 184,
                        "string": "Note the high attention between \"it\" and its antecedent \"heart\"."
                    },
                    {
                        "id": 185,
                        "string": "CoreNLP right wrong attn right 53 19 attn wrong 24 4 Table 9 : Performance of CoreNLP and our model's attention mechanism compared to human assessment (%)."
                    },
                    {
                        "id": 186,
                        "string": "Examples with ≥1 noun in context sentence."
                    },
                    {
                        "id": 188,
                        "string": "This confirms our conclusion that our model performs latent anaphora resolution."
                    },
                    {
                        "id": 189,
                        "string": "Interestingly, the patterns of mistakes are quite different for CoreNLP and our model (Table 9)."
                    },
                    {
                        "id": 190,
                        "string": "We also present one example ( Figure 5) where the attention correctly predicts anaphora while CoreNLP fails."
                    },
                    {
                        "id": 191,
                        "string": "Nevertheless, there is room for improvement, and improving the attention component is likely to boost translation performance."
                    },
                    {
                        "id": 192,
                        "string": "Related work Our analysis focuses on how our context-aware neural model implicitly captures anaphora."
                    },
                    {
                        "id": 193,
                        "string": "Early work on anaphora phenomena in statistical machine translation has relied on external systems for coreference resolution (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010) ."
                    },
                    {
                        "id": 194,
                        "string": "Results were mixed, and the low performance of coreference resolution systems was identified as a problem for this type of system."
                    },
                    {
                        "id": 195,
                        "string": "Later work by Hardmeier et al."
                    },
                    {
                        "id": 196,
                        "string": "(2013) has shown that cross-lingual pronoun prediction systems can implicitly learn to resolve coreference, but this work still relied on external feature extraction to identify anaphora candidates."
                    },
                    {
                        "id": 197,
                        "string": "Our experiments show that a contextaware neural machine translation system can implicitly learn coreference phenomena without any feature engineering."
                    },
                    {
                        "id": 198,
                        "string": "Tiedemann and Scherrer (2017) and Bawden et al."
                    },
                    {
                        "id": 199,
                        "string": "(2018) analyze the attention weights of context-aware NMT models."
                    },
                    {
                        "id": 200,
                        "string": "Tiedemann and Scherrer (2017) find some evidence for aboveaverage attention on contextual history for the translation of pronouns, and our analysis goes further in that we are the first to demonstrate that our context-aware model learns latent anaphora resolution through the attention mechanism."
                    },
                    {
                        "id": 201,
                        "string": "This is contrary to Bawden et al."
                    },
                    {
                        "id": 202,
                        "string": "(2018) , who do not observe increased attention between a pronoun and its antecedent in their recurrent model."
                    },
                    {
                        "id": 203,
                        "string": "We deem our model more suitable for analysis, since it has no recurrent connections and fully relies on the attention mechanism within a single attention layer."
                    },
                    {
                        "id": 204,
                        "string": "Conclusions We introduced a context-aware NMT system which is based on the Transformer architecture."
                    },
                    {
                        "id": 205,
                        "string": "When evaluated on an En-Ru parallel corpus, it outperforms both the context-agnostic baselines and a simple context-aware baseline."
                    },
                    {
                        "id": 206,
                        "string": "We observe that improvements are especially prominent for sentences containing ambiguous pronouns."
                    },
                    {
                        "id": 207,
                        "string": "We also show that the model induces anaphora relations."
                    },
                    {
                        "id": 208,
                        "string": "We believe that further improvements in handling anaphora, and by proxy translation, can be achieved by incorporating specialized features in the attention model."
                    },
                    {
                        "id": 209,
                        "string": "Our analysis has focused on the effect of context information on pronoun translation."
                    },
                    {
                        "id": 210,
                        "string": "Future work could also investigate whether context-aware NMT systems learn other discourse phenomena, for example whether they improve the translation of elliptical constructions, and markers of discourse relations and information structure."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 25
                    },
                    {
                        "section": "Neural Machine Translation",
                        "n": "2",
                        "start": 26,
                        "end": 68
                    },
                    {
                        "section": "Data and setting",
                        "n": "4.1",
                        "start": 69,
                        "end": 74
                    },
                    {
                        "section": "Results and analysis",
                        "n": "5",
                        "start": 75,
                        "end": 77
                    },
                    {
                        "section": "Overall performance",
                        "n": "5.1",
                        "start": 78,
                        "end": 94
                    },
                    {
                        "section": "Analysis",
                        "n": "5.2",
                        "start": 95,
                        "end": 104
                    },
                    {
                        "section": "Top words depending on context",
                        "n": "5.2.1",
                        "start": 105,
                        "end": 122
                    },
                    {
                        "section": "Dependence on sentence length and position",
                        "n": "5.2.2",
                        "start": 123,
                        "end": 132
                    },
                    {
                        "section": "Analysis of pronoun translation",
                        "n": "5.3",
                        "start": 133,
                        "end": 133
                    },
                    {
                        "section": "Ambiguous pronouns and translation quality",
                        "n": "5.3.1",
                        "start": 134,
                        "end": 154
                    },
                    {
                        "section": "Latent anaphora resolution",
                        "n": "5.3.2",
                        "start": 155,
                        "end": 191
                    },
                    {
                        "section": "Related work",
                        "n": "6",
                        "start": 192,
                        "end": 203
                    },
                    {
                        "section": "Conclusions",
                        "n": "7",
                        "start": 204,
                        "end": 210
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/969-Figure4-1.png",
                        "caption": "Figure 4: BLEU score vs. source sentence length",
                        "page": 5,
                        "bbox": {
                            "x1": 338.88,
                            "x2": 491.03999999999996,
                            "y1": 61.44,
                            "y2": 177.12
                        }
                    },
                    {
                        "filename": "../figure/image/969-Figure3-1.png",
                        "caption": "Figure 3: Average attention to context vs. source token position",
                        "page": 5,
                        "bbox": {
                            "x1": 104.64,
                            "x2": 255.35999999999999,
                            "y1": 232.79999999999998,
                            "y2": 346.08
                        }
                    },
                    {
                        "filename": "../figure/image/969-Figure2-1.png",
                        "caption": "Figure 2: Average attention to context words vs. both source and context length",
                        "page": 5,
                        "bbox": {
                            "x1": 100.8,
                            "x2": 259.2,
                            "y1": 61.44,
                            "y2": 191.04
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table6-1.png",
                        "caption": "Table 6: Agreement with CoreNLP for test sets of pronouns having a nominal antecedent in context sentence (%).",
                        "page": 6,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 524.16,
                            "y1": 193.44,
                            "y2": 264.0
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table4-1.png",
                        "caption": "Table 4: BLEU for test sets of pronouns having a nominal antecedent in context sentence. N: number of examples in the test set.",
                        "page": 6,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 287.03999999999996,
                            "y1": 193.44,
                            "y2": 250.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table3-1.png",
                        "caption": "Table 3: BLEU for test sets with coreference between pronoun and a word in context sentence. We show both N, the total number of instances in a particular test set, and number of instances with pronominal antecedent. Significant BLEU differences are in bold.",
                        "page": 6,
                        "bbox": {
                            "x1": 104.64,
                            "x2": 490.08,
                            "y1": 62.879999999999995,
                            "y2": 132.0
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table5-1.png",
                        "caption": "Table 5: BLEU for test sets of pronoun “it” having a nominal antecedent in context sentence. N: number of examples in the test set.",
                        "page": 6,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 289.44,
                            "y1": 305.76,
                            "y2": 376.32
                        }
                    },
                    {
                        "filename": "../figure/image/969-Figure1-1.png",
                        "caption": "Figure 1: Encoder of the discourse-aware model",
                        "page": 2,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 523.1999999999999,
                            "y1": 61.44,
                            "y2": 324.96
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table7-1.png",
                        "caption": "Table 7: Agreement with CoreNLP for test sets of pronouns having a nominal antecedent in context sentence (%). Examples with ≥1 noun in context sentence.",
                        "page": 7,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 62.879999999999995,
                            "y2": 132.0
                        }
                    },
                    {
                        "filename": "../figure/image/969-Figure5-1.png",
                        "caption": "Figure 5: An example of an attention map between source and context. On the y-axis are the source tokens, on the x-axis the context tokens. Note the high attention between “it” and its antecedent “heart”.",
                        "page": 7,
                        "bbox": {
                            "x1": 319.68,
                            "x2": 510.24,
                            "y1": 183.84,
                            "y2": 312.0
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table8-1.png",
                        "caption": "Table 8: Performance of CoreNLP and our model’s attention mechanism compared to human assessment. Examples with ≥1 noun in context sentence.",
                        "page": 7,
                        "bbox": {
                            "x1": 341.76,
                            "x2": 491.03999999999996,
                            "y1": 62.879999999999995,
                            "y2": 119.03999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table9-1.png",
                        "caption": "Table 9: Performance of CoreNLP and our model’s attention mechanism compared to human assessment (%). Examples with ≥1 noun in context sentence.",
                        "page": 7,
                        "bbox": {
                            "x1": 349.91999999999996,
                            "x2": 483.35999999999996,
                            "y1": 390.71999999999997,
                            "y2": 447.35999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table1-1.png",
                        "caption": "Table 1: Automatic evaluation: BLEU. Significant differences at p < 0.01 are in bold.",
                        "page": 3,
                        "bbox": {
                            "x1": 310.56,
                            "x2": 522.24,
                            "y1": 62.879999999999995,
                            "y2": 145.92
                        }
                    },
                    {
                        "filename": "../figure/image/969-Table2-1.png",
                        "caption": "Table 2: Top-10 words with the highest average attention to context words. attn gives an average attention to context words, pos gives an average position of the source word. Left part is for words on all positions, right — for words on positions higher than first.",
                        "page": 4,
                        "bbox": {
                            "x1": 314.88,
                            "x2": 517.4399999999999,
                            "y1": 62.879999999999995,
                            "y2": 213.12
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-6"
        },
        {
            "slides": {
                "2": {
                    "title": "Graph to String Translation",
                    "text": [
                        "Translation = generation of target-side surface words in order, conditioned on source semantic nodes and previously generated words.",
                        "Start in the (virtual) root",
                        "At each step, transition to a semantic node and emit a target word",
                        "A single node can be visited multiple times",
                        "One transition can move anywhere in the LF"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Translation Example",
                    "text": [
                        "Figure 2 : An example of the translation process illustrating several first steps of translating the sentence into German (Ich mochte dir einen Sandwich...).",
                        "Labels in italics correspond to the shortest undirected paths between the nodes."
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": [
                        "figure/image/984-Figure1-1.png"
                    ]
                },
                "4": {
                    "title": "Alignment of Graph Nodes",
                    "text": [
                        "How do we align source-side semantic nodes to target-side words?"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "5": {
                    "title": "Alignment of Graph Nodes Gibbs Sampling",
                    "text": [
                        "Alignment ( transition) distribution P(ai ) modeled as a categorical distribution:",
                        "Translation ( emission) distribution modeled as a set of categorical distributions, one for each source semantic node:",
                        "P(ei |nai c(lemma(nai ei",
                        "Sample from the following distribution:"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "Alignment of Graph Nodes Evaluation",
                    "text": [
                        "I Linearize the LF, run GIZA++ (standard word alignment)",
                        "I Heuristic linearization, try to preserve source surface word order",
                        "I Source-side nodes to source-side tokens",
                        "I Source-target word alignment GIZA++",
                        "Manual inspection of alignments",
                        "Alignment composition clearly superior",
                        "Not much difference between GIZA++ and parser alignments"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                }
            },
            "paper_title": "A Discriminative Model for Semantics-to-String Translation",
            "paper_id": "984",
            "paper": {
                "title": "A Discriminative Model for Semantics-to-String Translation",
                "abstract": "We present a feature-rich discriminative model for machine translation which uses an abstract semantic representation on the source side. We include our model as an additional feature in a phrase-based decoder and we show modest gains in BLEU score in an n-best re-ranking experiment.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The goal of machine translation is to take source language utterances and convert them into fluent target language utterances with the same meaning."
                    },
                    {
                        "id": 1,
                        "string": "Most recent approaches learn transformations using statistical techniques on parallel data."
                    },
                    {
                        "id": 2,
                        "string": "Meaning equivalent representations of words and phrases are learned directly from natural data, as are other syntactic operations such as reordering."
                    },
                    {
                        "id": 3,
                        "string": "However, commonly used methods have a very simple view of the linguistic data."
                    },
                    {
                        "id": 4,
                        "string": "Each word is generally modeled independently, for instance, and the relations between words are generally captured only in fixed phrases or as syntactic relationships."
                    },
                    {
                        "id": 5,
                        "string": "Recently there has been a resurgence of interest in unified semantic representations: deep analyses with heavy normalization of morphology, syntax, and even semantic representations."
                    },
                    {
                        "id": 6,
                        "string": "In particular, Abstract Meaning Representation (AMR, Banarescu et al."
                    },
                    {
                        "id": 7,
                        "string": "(2013) ) is a novel representation of (sentential) semantics."
                    },
                    {
                        "id": 8,
                        "string": "Such representations could influence a number of natural language understanding and generation tasks, particularly machine translation."
                    },
                    {
                        "id": 9,
                        "string": "Deeper models can be used for multiple aspects of the translation modeling problem."
                    },
                    {
                        "id": 10,
                        "string": "Building translation models that rely on a deeper representation of the input allows for a more parsimonious translation model: morphologically related words can be handled in a unified manner; semantically related concepts are immediately adjacent and available for modeling, etc."
                    },
                    {
                        "id": 11,
                        "string": "Language models using deep representations might help us model which interpretations are more plausible."
                    },
                    {
                        "id": 12,
                        "string": "We present an initial discriminative method for modeling the likelihood of a target language surface string given source language deep semantics."
                    },
                    {
                        "id": 13,
                        "string": "This approach relies on an automatic parser for source language semantics."
                    },
                    {
                        "id": 14,
                        "string": "We use a system that parses into AMR-like structures (Vanderwende et al., 2015) , and apply the resulting model as an additional feature in a translation system."
                    },
                    {
                        "id": 15,
                        "string": "Related Work There is a large body of related work on utilizing deep language representation in NLP and MT in particular."
                    },
                    {
                        "id": 16,
                        "string": "This is not surprising considering that such representations provide abstractions of many language-specific phenomena, effectively bringing different languages closer together."
                    },
                    {
                        "id": 17,
                        "string": "A number of machine translation systems starting as early as the 1950s therefore used a form of transfer: the source sentences were parsed, and those parsed representations were translated into target representations."
                    },
                    {
                        "id": 18,
                        "string": "Finally text generation was applied."
                    },
                    {
                        "id": 19,
                        "string": "The level of analysis is somewhat arguable -sometimes it was purely syntactic, but in other cases it reached into the semantic domain."
                    },
                    {
                        "id": 20,
                        "string": "One of the earliest architectures was described in 1957 (Yngve, 1957) ."
                    },
                    {
                        "id": 21,
                        "string": "More contemporary examples of such systems include KANT (Nyberg and Mitamura, 1992) , which used a very deep representation close to an interlingua, early versions of SysTran and Microsoft Translator, or more recently TectoMT (Popel andŽabokrtský, 2010) for English→Czech translation."
                    },
                    {
                        "id": 22,
                        "string": "AMR itself has recently been used for abstractive summarization (Liu et al., 2015) ."
                    },
                    {
                        "id": 23,
                        "string": "In this work, sentences in the document to be summarized are parsed to AMRs, then a decoding algorithm is run to produce a summary graph."
                    },
                    {
                        "id": 24,
                        "string": "The surface realization of this graph then constitutes the final sum- mary."
                    },
                    {
                        "id": 25,
                        "string": "(Jones et al., 2012) presents an MT approach that can exploit semantic graphs such as AMR, in a continuation of earlier work that abstracted translation away from strings (Yamada and Knight, 2001; Galley et al., 2004) ."
                    },
                    {
                        "id": 26,
                        "string": "While rule extraction algorithms such as (Galley et al., 2004) operate on trees and have also been applied to semantic parsing problems (Li et al., 2013) , Jones et al."
                    },
                    {
                        "id": 27,
                        "string": "(2012) generalized these approaches by inducing synchronous hyperedge replacement grammars (HRG), which operate on graphs."
                    },
                    {
                        "id": 28,
                        "string": "In contrast to (Jones et al., 2012) , our work does not have to deal with the complexities of HRG decoding, which runs in O(n 3 ) (Jones et al., 2012) , as our decoder is simply a phrase-based decoder."
                    },
                    {
                        "id": 29,
                        "string": "Discriminative models have been used in statistical MT many times."
                    },
                    {
                        "id": 30,
                        "string": "Global lexicon model (Mauser et al., 2009 ) and phrase-sense disambiguation (Carpuat and Wu, 2007) are perhaps the best known methods."
                    },
                    {
                        "id": 31,
                        "string": "Similarly to Carpuat and Wu (2007) , we use the classifier to rescore phrasal translations, however we do not train a separate classifier for each source phrase."
                    },
                    {
                        "id": 32,
                        "string": "Instead, we train a global model -similarly to Subotin (2011) or more recently Tamchyna et al."
                    },
                    {
                        "id": 33,
                        "string": "(2014) ."
                    },
                    {
                        "id": 34,
                        "string": "Features for our model are very different from previous work because they come from a deep representation and therefore should capture semantic relations between the languages, instead of surface or morpho-syntactic correspondences."
                    },
                    {
                        "id": 35,
                        "string": "Semantic Representation Our representation of sentence semantics is based on Logical Form (Vanderwende, 2015) ."
                    },
                    {
                        "id": 36,
                        "string": "LFs are labeled directed graphs whose nodes roughly correspond to content words in the sentence."
                    },
                    {
                        "id": 37,
                        "string": "Edge labels describe semantic relations between nodes."
                    },
                    {
                        "id": 38,
                        "string": "Additional linguistic information, such as verb subcategorization frames, definiteness, tense etc., is stored in graph nodes as bits."
                    },
                    {
                        "id": 39,
                        "string": "Figure 1 shows a sentence parsed into the logical form."
                    },
                    {
                        "id": 40,
                        "string": "Nodes are represented by word lemmas."
                    },
                    {
                        "id": 41,
                        "string": "Relations include Dsub for deep subject, Dobj and Dind for direct and indirect objects etc."
                    },
                    {
                        "id": 42,
                        "string": "Bits are shown as flags in parentheses."
                    },
                    {
                        "id": 43,
                        "string": "Note that this graph may have cycles -for example, the Dobj of \"take\" is \"sandwich\", but \"take\" is also the Attrib of \"sandwich\"."
                    },
                    {
                        "id": 44,
                        "string": "The verb \"take\" is also missing its obligatory subject which is replaced by the free variable X."
                    },
                    {
                        "id": 45,
                        "string": "The logical form can be converted using a sequence of rules to a representation which conforms to the AMR specification (Vanderwende et al., 2015) ."
                    },
                    {
                        "id": 46,
                        "string": "We do not use the full conversion pipeline in our work, so our semantic graphs are somewhere between the LF and AMR."
                    },
                    {
                        "id": 47,
                        "string": "Notably, we keep the bits which serve as important features for the discriminative modeling of translation."
                    },
                    {
                        "id": 48,
                        "string": "Graph-to-String Translation We develop models for semantic-graph-to-string translation."
                    },
                    {
                        "id": 49,
                        "string": "These models are essentially discriminative translation models, relying on a decomposition structure similar to both maximum entropy language models and IBM Models 1, 2 (Brown et al., 1993) , and the HMM translation model (Vogel et al., 1996) ."
                    },
                    {
                        "id": 50,
                        "string": "In particular, we see translation as a process of selecting target words in order conditioned on source language representation as well as prior target words."
                    },
                    {
                        "id": 51,
                        "string": "Similar to the IBM Models, we see each target word as being generated based on source concepts, though in our case the concepts are semantic graph nodes rather than surface words."
                    },
                    {
                        "id": 52,
                        "string": "That is, we assume the existence of an alignment, though it aligns the target words to source semantic graph nodes rather than surface words."
                    },
                    {
                        "id": 53,
                        "string": "Our model views translation as generation of the target-side sentence given the source-side semantic graph."
                    },
                    {
                        "id": 54,
                        "string": "We assume a generative process which operates as follows."
                    },
                    {
                        "id": 55,
                        "string": "We begin in the virtual root node of the graph."
                    },
                    {
                        "id": 56,
                        "string": "At each step, we transition to a graph node and we generate a target-side word."
                    },
                    {
                        "id": 57,
                        "string": "We proceed left-to-right on the target side and we stop once the whole target sentence is generated."
                    },
                    {
                        "id": 58,
                        "string": "Figure 2 shows an example of this process."
                    },
                    {
                        "id": 59,
                        "string": "Say we have a source semantic graph G with nodes V = {n 1 ..n S }, edges E ⊂ V × V , and a root node n R for R ∈ 1..S. Then the likelihood of a target string E = (e 1 , ..., e T ) and alignment A = (a 1 , ..., a T ) with a i ∈ 0..S is as follows, with a 0 = R: P (A, E|G) = T i=1 P (a i |a i−1 1 , e i−1 1 , G) P (e i |a i 1 , e i−1 1 , G) (1) In this generative story, we first predict each alignment position and then predict each translated word."
                    },
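As a reading aid for Equation 1, here is a minimal Python sketch of the factorization: the joint probability of an alignment and a target string is a product of per-step transition and emission terms. The callables `trans_prob` and `emit_prob` are hypothetical stand-ins for P(a_i | ...) and P(e_i | ...), using the first-order simplification that the paper adopts later for alignment.

```python
import math

def log_likelihood(target, alignment, root, trans_prob, emit_prob):
    """target: target words e_1..e_T; alignment: node ids a_1..a_T."""
    total = 0.0
    prev = root  # a_0 = R: the walk starts at the graph's root node
    for word, node in zip(target, alignment):
        total += math.log(trans_prob(prev, node))  # transition to a graph node
        total += math.log(emit_prob(node, word))   # emit a target word from it
        prev = node
    return total
```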
                    {
                        "id": 60,
                        "string": "The transition distribution P (a i | · · · ) resembles that of the HMM alignment model, though the features are somewhat different."
                    },
                    {
                        "id": 61,
                        "string": "The translation distribution P (e i | · · · ) may take on several forms."
                    },
                    {
                        "id": 62,
                        "string": "For the purposes of alignment, we explore a simple categorical distribution as in the IBM models."
                    },
                    {
                        "id": 63,
                        "string": "For translation reranking, we instead use a feature-rich approach conditioned on a variety of source and target context."
                    },
                    {
                        "id": 64,
                        "string": "Alignment of Semantic Graph Nodes We have experimented with a number of techniques for aligning source-side semantic graph nodes to target-side surface words."
                    },
                    {
                        "id": 65,
                        "string": "Gibbs sampling."
                    },
                    {
                        "id": 66,
                        "string": "We can attempt to directly align the target language words to the source language nodes using a generative HMM-style model."
                    },
                    {
                        "id": 67,
                        "string": "Unlike the HMM word alignment model (Vogel et al., 1996) , the likelihood of jumping between nodes is based on the graph path between those nodes, rather than the linear distance."
                    },
                    {
                        "id": 68,
                        "string": "Starting from the generative story of Equation 1, we make several simplifying assumptions."
                    },
                    {
                        "id": 69,
                        "string": "First we assume that the alignment distribution P (a i | · · · ) is modeled as a categorical distribution: P (a i |a i−1 , G) ∝ c(LABEL(a i−1 , a i )) The function LABEL(u, v) produces a string describing the labels along the shortest (undirected) path between the two nodes."
                    },
                    {
                        "id": 70,
                        "string": "Next, we assume that the translation distribution is modeled as a set of categorical distributions, one for each source semantic node: P (e i |n a i ) ∝ c(LEMMA(n a i ) → e i ) This model is sensitive to the order in which source language information is presented in the target language."
                    },
                    {
                        "id": 71,
                        "string": "The alignment variables a i are not observed."
                    },
                    {
                        "id": 72,
                        "string": "We use Gibbs sampling rather than EM so that we can incorporate a sparse prior when estimating the parameters of the model and the assignments to these latent alignment variables."
                    },
                    {
                        "id": 73,
                        "string": "At each iteration, we shuffle the sentences in our training data."
                    },
                    {
                        "id": 74,
                        "string": "Then for each sentence, we visit all its tokens in a random order and re-align them."
                    },
                    {
                        "id": 75,
                        "string": "We sample the new alignment according to the Markov blanket, which has the following probability distribution: P (t|n i ) ∝ c(LEMMA(n i ) → t) + α c(LEMMA(n i )) + αL × c(LABEL(n i , n i−1 )) + β T + βP × c(LABEL(n i+1 , n i )) + β T + βP (2) L, P stand for the number of lemma/path types, respectively."
                    },
                    {
                        "id": 76,
                        "string": "T is the total number of tokens in the corpus."
                    },
                    {
                        "id": 77,
                        "string": "Overall, the formula describes the probability of the edge coming into the node n i , the token emission and finally the outgoing edge."
                    },
                    {
                        "id": 78,
                        "string": "We evaluate this probability for each node n i in the graph and re-align the token according to the random sample from this distribution."
                    },
                    {
                        "id": 79,
                        "string": "α and β are hyper-parameters specifying the concentration parameters of symmetric Dirichlet priors over the transition and emission distributions."
                    },
                    {
                        "id": 80,
                        "string": "Specifying values less than 1 for these hyper-parameters pushes the model toward sparse solutions."
                    },
                    {
                        "id": 81,
                        "string": "They are tuned by a grid search which evaluates model perplexity on a held-out set."
                    },
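To make the sampling step concrete, the following is a rough Python sketch of re-aligning one token according to the Equation 2 distribution. It assumes count tables behaving like collections.Counter (unseen events count as zero), the LEMMA and LABEL helpers, the type counts L and P, the token total T, and the Dirichlet hyper-parameters alpha and beta; all names are illustrative, not the authors' code.

```python
import random

def resample_alignment(token, nodes, prev_node, next_node,
                       emit_counts, lemma_counts, path_counts,
                       LEMMA, LABEL, alpha, beta, L, P, T):
    # Unnormalized probability of re-aligning `token` to each candidate node,
    # following Equation 2: emission term x incoming edge x outgoing edge.
    weights = []
    for n in nodes:
        emit = (emit_counts[(LEMMA(n), token)] + alpha) / (lemma_counts[LEMMA(n)] + alpha * L)
        inc = (path_counts[LABEL(n, prev_node)] + beta) / (T + beta * P)
        out = (path_counts[LABEL(next_node, n)] + beta) / (T + beta * P)
        weights.append(emit * inc * out)
    # Draw the new alignment proportionally to the unnormalized probabilities.
    return random.choices(nodes, weights=weights, k=1)[0]
```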
                    {
                        "id": 82,
                        "string": "Direct GIZA++."
                    },
                    {
                        "id": 83,
                        "string": "GIZA++ (Och and Ney, 2000 ) is a commonly used toolkit for word alignment which implements the IBM models."
                    },
                    {
                        "id": 84,
                        "string": "In this setting, we linearized the semantic graph nodes using a simple heuristic based on the surface word order and aligned them directly to the target-side sentences."
                    },
                    {
                        "id": 85,
                        "string": "We experimented with different symmetrizations and found that grow-diag-final-and gives the best results."
                    },
                    {
                        "id": 86,
                        "string": "Composed alignments."
                    },
                    {
                        "id": 87,
                        "string": "We divided the alignment problem into two stages: aligning semantic graph nodes to source-side words and aligning the source-and target-side words (i.e., standard MT word alignment)."
                    },
                    {
                        "id": 88,
                        "string": "We then simply compose the two alignments."
                    },
                    {
                        "id": 89,
                        "string": "For the alignment between source graph nodes and source surface words, we have two options: we can either train a GIZA++ model or we can use gold alignments provided by the semantic parser."
                    },
                    {
                        "id": 90,
                        "string": "For the second stage, we need to train a GIZA++ model."
                    },
                    {
                        "id": 91,
                        "string": "We evaluated the different strategies by manually inspecting the resulting alignments."
                    },
                    {
                        "id": 92,
                        "string": "We found that the composition of two separate alignment steps produces clearly superior results, even if it seems arguable whether such division simplifies the task."
                    },
                    {
                        "id": 93,
                        "string": "Therefore, for the remaining experiments, we used the composition of gold alignment and GIZA++, although two GIZA++ steps performed comparably well."
                    },
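A minimal sketch of the composition step, assuming each alignment is given as a set of index pairs: node-to-source-word links (from the parser or GIZA++) are chained through source-to-target word links (from GIZA++) to yield node-to-target-word links.

```python
def compose_alignments(node_to_src, src_to_tgt):
    """node_to_src: {(node, src_word)}; src_to_tgt: {(src_word, tgt_word)}."""
    src_index = {}
    for s, t in src_to_tgt:
        src_index.setdefault(s, set()).add(t)
    composed = set()
    for node, s in node_to_src:
        for t in src_index.get(s, ()):  # chain through the shared source word
            composed.add((node, t))
    return composed

# e.g. compose_alignments({(0, 1)}, {(1, 2)}) -> {(0, 2)}
```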
                    {
                        "id": 94,
                        "string": "Model For our discriminative model, the alignment is assumed to be given."
                    },
                    {
                        "id": 95,
                        "string": "At training time, it is the alignment produced by the parser composed with GIZA++ surface word alignment."
                    },
                    {
                        "id": 96,
                        "string": "At test time, we compose the alignment between graph nodes and source surface tokens (given by the parser) with the bilingual surface word alignment provided by the MT decoder."
                    },
                    {
                        "id": 97,
                        "string": "Turning to the translation distribution, we use a maximum entropy model to learn the conditional probability: P (e i |n a i , n a i−1 , G, e i−1 i−k+1 ) = exp w · f (e i , n a i , n a i−1 , G, e i−1 i−k+1 ) Z (3) where Z is defined as e ∈GEN (na i ) exp(w · f (e , n a i , n a i−1 , G, e i−1 i−k+1 )) The GEN(n) function produces the possible translations of the deep lemma associated with node n. We collect all translations observed in the training data and keep the 30 most frequent ones for each lemma."
                    },
                    {
                        "id": 98,
                        "string": "Our model thus assigns zero probability to unseen translations."
                    },
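The following hypothetical sketch shows how Equation 3's normalization is restricted to GEN(n): the softmax runs only over the stored candidate translations of the node's lemma, so any translation outside that set implicitly gets zero probability. `score` stands in for the dot product w · f(...) over the extracted features.

```python
import math

def translation_probs(candidates, score):
    """candidates: GEN(n), the stored translations of the node's lemma;
    score: callable mapping a candidate word to w . f(e', context)."""
    exp_scores = {e: math.exp(score(e)) for e in candidates}
    Z = sum(exp_scores.values())  # partition function over GEN(n) only
    return {e: s / Z for e, s in exp_scores.items()}
```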
                    {
                        "id": 99,
                        "string": "Because of the size of our training data, we used online learning."
                    },
                    {
                        "id": 100,
                        "string": "We implemented a parallelized (multi-threaded) version of the standard stochastic gradient descent algorithm (SGD)."
                    },
                    {
                        "id": 101,
                        "string": "Our learning rate was fixed -using line search, we found the optimal rate to be 0.05."
                    },
                    {
                        "id": 102,
                        "string": "Our batch size was set to one; different batch sizes made almost no difference in model performance."
                    },
                    {
                        "id": 103,
                        "string": "We used online L1 regularization (Tsuruoka et al., 2009 ) with weight 1."
                    },
                    {
                        "id": 104,
                        "string": "We implemented feature hashing to further improve performance and set the hash length to 22 bits."
                    },
                    {
                        "id": 105,
                        "string": "We shuffled our data and split it into five parts which were processed independently and their final weights were averaged."
                    },
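A simplified sketch of this training loop: string features are hashed into a fixed 2^22-dimensional weight vector and updated by SGD with a fixed learning rate of 0.05. The per-update L1 shrinkage here is a naive stand-in for the cumulative penalty of Tsuruoka et al. (2009), and `feat_names` is a hypothetical list of active binary feature strings; all values are illustrative.

```python
import zlib

BITS = 22
weights = [0.0] * (1 << BITS)

def hash_feature(name):
    # Map a feature string to a bucket in the 2^22-dimensional weight vector.
    return zlib.crc32(name.encode("utf-8")) & ((1 << BITS) - 1)

def sgd_step(feat_names, grad, lr=0.05, l1=1e-7):
    """grad: gradient of the loss w.r.t. the score; for binary features this
    is also the gradient w.r.t. each active feature's weight."""
    for name in feat_names:
        i = hash_feature(name)
        weights[i] -= lr * grad
        # L1 shrinkage toward zero, clipped so the weight cannot cross zero.
        if weights[i] > 0:
            weights[i] = max(0.0, weights[i] - lr * l1)
        elif weights[i] < 0:
            weights[i] = min(0.0, weights[i] + lr * l1)
```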
                    {
                        "id": 106,
                        "string": "Feature Set Our semantic representation enables us to use a very rich set of features, including information commonly used by both translation models and language models."
                    },
                    {
                        "id": 107,
                        "string": "We extract a significant amount of information from the graph node n a i aligned to the generated word: • lemma, • part of speech, • all bits."
                    },
                    {
                        "id": 108,
                        "string": "We extract the same features from the previous graph node (n a i−1 ), from the parent node."
                    },
                    {
                        "id": 109,
                        "string": "(If there are multiple parents in the graph, we break ties in a consistent but heuristic manner, picking the leftmost parent node according to its position in the source sentence) We also gather all the bits of the parent and the parent relation."
                    },
                    {
                        "id": 110,
                        "string": "These features may capture agreement phenomena."
                    },
                    {
                        "id": 111,
                        "string": "We also look at the shortest path in the semantic graph from the previous node to the current one and we extract features which describe it: • path length, • relations (edges) along the path."
                    },
                    {
                        "id": 112,
                        "string": "We use the lemmas of all nodes in the semantic graph as bag-of-word features, as well as all the surface words in the source sentence."
                    },
                    {
                        "id": 113,
                        "string": "We also extract lemmas of nodes within a given distance from the current node (i.e."
                    },
                    {
                        "id": 114,
                        "string": "graph context), as well as the relation that led to these nodes."
                    },
                    {
                        "id": 115,
                        "string": "Together, these features ground the current node in its semantic context."
                    },
                    {
                        "id": 116,
                        "string": "An additional set of features handle the fact that source nodes may generate multiple target words, and the distribution over subsequent words should be different."
                    },
                    {
                        "id": 117,
                        "string": "We have a feature indicating the number of words generated from the current node, both in isolation, conjoined with the lemma, and conjoined with the part of speech."
                    },
                    {
                        "id": 118,
                        "string": "We also have a feature for each word previously generated by this same node, again in isolation, in conjunction with the lemma, and in conjunction with the part of speech."
                    },
                    {
                        "id": 119,
                        "string": "This helps prevent the model from generating multiple copies of same target word given a source node."
                    },
                    {
                        "id": 120,
                        "string": "On the target side, we use several previous tokens as features."
                    },
                    {
                        "id": 121,
                        "string": "These may act as discriminative language model features."
                    },
                    {
                        "id": 122,
                        "string": "During MT decoding, our model therefore must maintain state, which could present a computational issue."
                    },
                    {
                        "id": 123,
                        "string": "The language model features present similar complexity as conventional MT state, and the features about prior words generated from the same node require greater memory."
                    },
                    {
                        "id": 124,
                        "string": "Were this cost to become prohibitive, a simpler form of the prior word features would likely suffice."
                    },
                    {
                        "id": 125,
                        "string": "Experiments We tested our model in an n-best re-ranking experiment."
                    },
                    {
                        "id": 126,
                        "string": "We began by training a basic phrase-based MT system for English→French on 1 million parallel sentence pairs and produced 1000-best lists for three test sets provided for the Workshop on Statistical Machine Translation (Bojar et al., 2013 ) -WMT 2009 , 2010 and 2013."
                    },
                    {
                        "id": 127,
                        "string": "This system had a set of 13 commonly used features: four channel model scores (forward and backward MLE and lexical weighting scores), a 5-gram language model, five lexicalized reordering model scores (corresponding to different ordering outcomes), linear distortion penalty, word count, and phrase count."
                    },
                    {
                        "id": 128,
                        "string": "The system was optimized using minimum error rate training (Och, 2003) For reranking, we gathered 1000-best lists for the development and test sets."
                    },
                    {
                        "id": 129,
                        "string": "We added six scores from our model to each translation in the n-best lists."
                    },
                    {
                        "id": 130,
                        "string": "We included the total log probability, the sum of unnormalized scores, and the rank of the given output."
                    },
                    {
                        "id": 131,
                        "string": "In addition, we had count features indicating the number of words that were not in the GEN set of the model, the number of NULLs (effectively deleted nodes), and a count of times a target word appeared in a stopword list."
                    },
                    {
                        "id": 132,
                        "string": "In the end, each translation had a total of 19 features: 13 from the original features and 6 from this approach."
                    },
                    {
                        "id": 133,
                        "string": "Next, we ran one iteration of the MERT optimizer on these 1000-best lists for all of the features."
                    },
                    {
                        "id": 134,
                        "string": "Because this was a reranking experiment rather than decoding, we did not repeatedly gather n-best lists as in decoding."
                    },
                    {
                        "id": 135,
                        "string": "The resulting feature weights were used to rescore the test n-best lists and evaluated the using BLEU; Table 1 shows the results."
                    },
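A small illustrative sketch of the resulting reranking computation, under the assumption that each hypothesis carries its 13 baseline scores and the 6 added semantic-model scores as plain feature lists; the MERT-tuned weight vector then selects the best-scoring hypothesis. All field names are hypothetical.

```python
def rerank(nbest, semantic_scores, weights):
    """nbest: list of (hypothesis, base_feats) with len(base_feats) == 13;
    semantic_scores: parallel list of 6-element feature lists;
    weights: 19 MERT-tuned feature weights."""
    best, best_score = None, float("-inf")
    for (hyp, base), extra in zip(nbest, semantic_scores):
        feats = base + extra  # 19 features in total
        score = sum(w * f for w, f in zip(weights, feats))
        if score > best_score:
            best, best_score = hyp, score
    return best
```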
                    {
                        "id": 136,
                        "string": "We obtained a modest but consistent improvement."
                    },
                    {
                        "id": 137,
                        "string": "Once the model is used directly in the decoder, the gains should increase as it will be able to influence decoding."
                    },
                    {
                        "id": 138,
                        "string": "Conclusion We have presented an initial attempt at including semantic features in a statistical machine translation system."
                    },
                    {
                        "id": 139,
                        "string": "Our approach uses discriminative training and a broad set of features to capture morphological, syntactic, and semantic information in a single model."
                    },
                    {
                        "id": 140,
                        "string": "Although our gains are not particularly large yet, we believe that additional ef-fort on feature engineering and decoder integration could lead to more substantial gains."
                    },
                    {
                        "id": 141,
                        "string": "Our approach is gated by the accuracy and consistency of the semantic parser."
                    },
                    {
                        "id": 142,
                        "string": "We have used a broad coverage parser with accuracy competitive to the current state-of-the-art, but even the stateof-the-art is rather low."
                    },
                    {
                        "id": 143,
                        "string": "It would be interesting to explore more robust features spanning multiple analyses, or to combine the outputs of multiple parsers."
                    },
                    {
                        "id": 144,
                        "string": "Even syntax-based machine translation systems are dependent on accurate parsers (Quirk and Corston-Oliver, 2006) ; deeper analyses are likely to be more dependent on parse quality."
                    },
                    {
                        "id": 145,
                        "string": "In a similar vein, it would be interesting to evaluate the impact of morphological, syntactic, and semantic features separately."
                    },
                    {
                        "id": 146,
                        "string": "A careful feature ablation and exploration would help identify promising areas for future research."
                    },
                    {
                        "id": 147,
                        "string": "We have only scratched the surface of possible integrations."
                    },
                    {
                        "id": 148,
                        "string": "Even this model could be applied to MT systems in multiple ways."
                    },
                    {
                        "id": 149,
                        "string": "For instance, rather than applying from source to target, we might evaluate in a noisy channel sense."
                    },
                    {
                        "id": 150,
                        "string": "That is, we could predict the source language surface forms given the target language translations."
                    },
                    {
                        "id": 151,
                        "string": "Furthermore, this would allow incorporation of a target semantic language model."
                    },
                    {
                        "id": 152,
                        "string": "This latter approach is particularly attractive, as it would explicitly model the semantic plausibility of the target."
                    },
                    {
                        "id": 153,
                        "string": "Of course, this would require target language semantic analysis: either we would be forced to parse n-best outcomes from some baseline system, or integrate the construction of target language semantics into the MT system."
                    },
                    {
                        "id": 154,
                        "string": "We believe that including such models of semantic plausibility holds great promise in preventing \"word salad\" outputs from MT systems: sentences that simply cannot be interpreted by humans."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 14
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 15,
                        "end": 34
                    },
                    {
                        "section": "Semantic Representation",
                        "n": "3",
                        "start": 35,
                        "end": 47
                    },
                    {
                        "section": "Graph-to-String Translation",
                        "n": "4",
                        "start": 48,
                        "end": 63
                    },
                    {
                        "section": "Alignment of Semantic Graph Nodes",
                        "n": "4.1",
                        "start": 64,
                        "end": 93
                    },
                    {
                        "section": "Model",
                        "n": "4.2",
                        "start": 94,
                        "end": 105
                    },
                    {
                        "section": "Feature Set",
                        "n": "4.3",
                        "start": 106,
                        "end": 124
                    },
                    {
                        "section": "Experiments",
                        "n": "5",
                        "start": 125,
                        "end": 137
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 138,
                        "end": 154
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/984-Figure2-1.png",
                        "caption": "Figure 2: An example of the translation process illustrating several first steps of translating the sentence from Figure 1 into German (“Ich möchte dir einen Sandwich...”). Labels in italics correspond to the shortest undirected paths between the nodes.",
                        "page": 2,
                        "bbox": {
                            "x1": 120.0,
                            "x2": 478.56,
                            "y1": 60.0,
                            "y2": 119.03999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/984-Table1-1.png",
                        "caption": "Table 1: BLEU scores of n-best reranking in English→French translation.",
                        "page": 4,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 524.16,
                            "y1": 222.72,
                            "y2": 277.92
                        }
                    },
                    {
                        "filename": "../figure/image/984-Figure1-1.png",
                        "caption": "Figure 1: Logical Form (computed tree) for the sentence: I would like to give you a sandwich taken from the fridge.",
                        "page": 1,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 521.28,
                            "y1": 61.44,
                            "y2": 172.32
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-7"
        },
        {
            "slides": {
                "0": {
                    "title": "Learning under Domain Shift",
                    "text": [
                        "State-of-the-art domain adaptation approaches",
                        "evaluate on proprietary datasets or on a single benchmark",
                        "Only compare against weak baselines",
                        "Almost none evaluate against approaches from the extensive semi-supervised learning (SSL) literature"
                    ],
                    "page_nums": [
                        1,
                        2,
                        3,
                        4,
                        5,
                        6
                    ],
                    "images": []
                },
                "1": {
                    "title": "Revisiting Semi Supervised Learning",
                    "text": [
                        "Classics in a Neural World",
                        "How do classics in SSL compare to recent advances?",
                        "Can we combine the best of both worlds?",
                        "How well do these approaches work on out-of-distribution data?"
                    ],
                    "page_nums": [
                        7,
                        8,
                        9,
                        10
                    ],
                    "images": []
                },
                "3": {
                    "title": "Self training",
                    "text": [
                        "1. Train model on labeled data.",
                        "2. Use confident predictions on unlabeled data as training examples. Repeat.",
                        "- Er ror a mpli"
                    ],
                    "page_nums": [
                        16,
                        17,
                        18,
                        19,
                        20
                    ],
                    "images": []
                },
                "4": {
                    "title": "Self training variants",
                    "text": [
                        "Output probabilities in neural networks are poorly calibrated.",
                        "Throttling (Abney, 2007), i.e. selecting the top n highest confidence unlabeled examples works best.",
                        "Training until convergence on labeled data and then on unlabeled data works best."
                    ],
                    "page_nums": [
                        21,
                        22,
                        23,
                        24,
                        25,
                        26
                    ],
                    "images": []
                },
                "5": {
                    "title": "Tri training",
                    "text": [
                        "1. Train three models on bootstrapped samples.",
                        "2. Use predictions on unlabeled data for third if two agree.",
                        "Final prediction: majority voting"
                    ],
                    "page_nums": [
                        27,
                        28,
                        29,
                        30,
                        31,
                        32,
                        33,
                        34,
                        35,
                        36,
                        37,
                        38,
                        39
                    ],
                    "images": []
                },
                "6": {
                    "title": "Tri training with disagreement",
                    "text": [
                        "1. Train three models on bootstrapped samples.",
                        "2. Use predictions on unlabeled data for third if two agree and prediction differs.",
                        "dependen t mo dels"
                    ],
                    "page_nums": [
                        40,
                        41,
                        42,
                        43,
                        44,
                        45,
                        46,
                        47,
                        48
                    ],
                    "images": []
                },
                "7": {
                    "title": "Tri training hyper parameters",
                    "text": [
                        "Producing predictions for all unlabeled examples is expensive",
                        "Sample number of unlabeled examples",
                        "Not effective for classic approaches, but essential for our method"
                    ],
                    "page_nums": [
                        49,
                        50,
                        51,
                        52,
                        53,
                        54
                    ],
                    "images": []
                },
                "8": {
                    "title": "Multi task Tri training",
                    "text": [
                        "1. Train one model with 3 objective functions.",
                        "2. Use predictions on unlabeled data for third if two agree.",
                        "Restrict final layers to use different representations.",
                        "Train third objective function only on pseudo labeled to bridge domain shift.",
                        "m2 F orthogonality constraint (Bousmalis et al., 2016)",
                        "Loss: L() = log Pmi(y h Lorth"
                    ],
                    "page_nums": [
                        55,
                        56,
                        57,
                        58,
                        59,
                        60,
                        61,
                        62,
                        63,
                        64,
                        65,
                        66,
                        67,
                        68,
                        69,
                        70,
                        71,
                        72,
                        73,
                        74,
                        75
                    ],
                    "images": [
                        "figure/image/989-Figure1-1.png"
                    ]
                },
                "9": {
                    "title": "Data and Tasks",
                    "text": [
                        "Sentiment analysis on Amazon reviews dataset (Blitzer et al, 2006)",
                        "POS tagging on SANCL 2012 dataset (Petrov and McDonald, 2012)"
                    ],
                    "page_nums": [
                        76,
                        77,
                        78,
                        79
                    ],
                    "images": []
                },
                "13": {
                    "title": "Takeaways",
                    "text": [
                        "Classic tri-training works best: outperforms recent state-of-the-art methods for sentiment analysis.",
                        "We address the drawback of tri-training (space & time complexity) via the proposed MT-Tri model",
                        "MT-Tri works best on sentiment, but not for POS.",
                        "Comparing neural methods to classics (strong baselines)",
                        "Evaluation on multiple tasks domains"
                    ],
                    "page_nums": [
                        108,
                        109,
                        110,
                        111
                    ],
                    "images": [
                        "figure/image/989-Figure1-1.png"
                    ]
                }
            },
            "paper_title": "Strong Baselines for Neural Semi-Supervised Learning under Domain Shift",
            "paper_id": "989",
            "paper": {
                "title": "Strong Baselines for Neural Semi-Supervised Learning under Domain Shift",
                "abstract": "Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not fare consistently the best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Deep neural networks (DNNs) excel at learning from labeled data and have achieved state of the art in a wide array of supervised NLP tasks such as dependency parsing (Dozat and Manning, 2017) , named entity recognition (Lample et al., 2016) , and semantic role labeling (He et al., 2017) ."
                    },
                    {
                        "id": 1,
                        "string": "In contrast, learning from unlabeled data, especially under domain shift, remains a challenge."
                    },
                    {
                        "id": 2,
                        "string": "This is common in many real-world applications where the distribution of the training and test data differs."
                    },
                    {
                        "id": 3,
                        "string": "Many state-of-the-art domain adaptation approaches leverage task-specific characteristics such as sentiment words (Blitzer et al., 2006; Wu and Huang, 2016) or distributional features (Schn-abel and Schütze, 2014; Yin et al., 2015) which do not generalize to other tasks."
                    },
                    {
                        "id": 4,
                        "string": "Other approaches that are in theory more general only evaluate on proprietary datasets (Kim et al., 2017) or on a single benchmark (Zhou et al., 2016) , which carries the risk of overfitting to the task."
                    },
                    {
                        "id": 5,
                        "string": "In addition, most models only compare against weak baselines and, strikingly, almost none considers evaluating against approaches from the extensive semi-supervised learning (SSL) literature (Chapelle et al., 2006) ."
                    },
                    {
                        "id": 6,
                        "string": "In this work, we make the argument that such algorithms make strong baselines for any task in line with recent efforts highlighting the usefulness of classic approaches (Melis et al., 2017; Denkowski and Neubig, 2017) ."
                    },
                    {
                        "id": 7,
                        "string": "We re-evaluate bootstrapping algorithms in the context of DNNs."
                    },
                    {
                        "id": 8,
                        "string": "These are general-purpose semi-supervised algorithms that treat the model as a black box and can thus be used easily-with a few additions-with the current generation of NLP models."
                    },
                    {
                        "id": 9,
                        "string": "Many of these methods, though, were originally developed with in-domain performance in mind, so their effectiveness in a domain adaptation setting remains unexplored."
                    },
                    {
                        "id": 10,
                        "string": "In particular, we re-evaluate three traditional bootstrapping methods, self-training (Yarowsky, 1995) , tri-training (Zhou and Li, 2005) , and tritraining with disagreement (Søgaard, 2010) for neural network-based approaches on two NLP tasks with different characteristics, namely, a sequence prediction and a classification task (POS tagging and sentiment analysis)."
                    },
                    {
                        "id": 11,
                        "string": "We evaluate the methods across multiple domains on two wellestablished benchmarks, without taking any further task-specific measures, and compare to the best results published in the literature."
                    },
                    {
                        "id": 12,
                        "string": "We make the somewhat surprising observation that classic tri-training outperforms task-agnostic state-of-the-art semi-supervised learning (Laine and Aila, 2017) and recent neural adaptation approaches (Ganin et al., 2016; Saito et al., 2017) ."
                    },
                    {
                        "id": 13,
                        "string": "In addition, we propose multi-task tri-training, which reduces the main deficiency of tri-training, namely its time and space complexity."
                    },
                    {
                        "id": 14,
                        "string": "It establishes a new state of the art on unsupervised domain adaptation for sentiment analysis but it is outperformed by classic tri-training for POS tagging."
                    },
                    {
                        "id": 15,
                        "string": "Contributions Our contributions are: a) We propose a novel multi-task tri-training method."
                    },
                    {
                        "id": 16,
                        "string": "b) We show that tri-training can serve as a strong and robust semi-supervised learning baseline for the current generation of NLP models."
                    },
                    {
                        "id": 17,
                        "string": "c) We perform an extensive evaluation of bootstrapping 1 algorithms compared to state-of-the-art approaches on two benchmark datasets."
                    },
                    {
                        "id": 18,
                        "string": "d) We shed light on the task and data characteristics that yield the best performance for each model."
                    },
                    {
                        "id": 19,
                        "string": "Neural bootstrapping methods We first introduce three classic bootstrapping methods, self-training, tri-training, and tri-training with disagreement and detail how they can be used with neural networks."
                    },
                    {
                        "id": 20,
                        "string": "For in-depth details we refer the reader to (Abney, 2007; Chapelle et al., 2006; Zhu and Goldberg, 2009 )."
                    },
                    {
                        "id": 21,
                        "string": "We introduce our novel multitask tri-training method in §2.3."
                    },
                    {
                        "id": 22,
                        "string": "Self-training Self-training (Yarowsky, 1995; McClosky et al., 2006b ) is one of the earliest and simplest bootstrapping approaches."
                    },
                    {
                        "id": 23,
                        "string": "In essence, it leverages the model's own predictions on unlabeled data to obtain additional information that can be used during training."
                    },
                    {
                        "id": 24,
                        "string": "Typically the most confident predictions are taken at face value, as detailed next."
                    },
                    {
                        "id": 25,
                        "string": "Self-training trains a model m on a labeled training set L and an unlabeled data set U ."
                    },
                    {
                        "id": 26,
                        "string": "At each iteration, the model provides predictions m(x) in the form of a probability distribution over classes for all unlabeled examples x in U ."
                    },
                    {
                        "id": 27,
                        "string": "If the probability assigned to the most likely class is higher than a predetermined threshold τ , x is added to the labeled examples with p(x) = arg max m(x) as pseudo-label."
                    },
                    {
                        "id": 28,
                        "string": "This instantiation is the most widely used and shown in Algorithm 1."
                    },
                    {
                        "id": 29,
                        "string": "Calibration It is well-known that output probabilities in neural networks are poorly calibrated (Guo et al., 2017) ."
                    },
                    {
                        "id": 30,
                        "string": "Using a fixed threshold τ is thus Algorithm 1 Self-training (Abney, 2007) if max m(x) > τ then 5: L ← L ∪ {(x, p(x))} 6: until no more predictions are confident not the best choice."
                    },
                    {
                        "id": 31,
                        "string": "While the absolute confidence value is inaccurate, we can expect that the relative order of confidences is more robust."
                    },
                    {
                        "id": 32,
                        "string": "For this reason, we select the top n unlabeled examples that have been predicted with the highest confidence after every epoch and add them to the labeled data."
                    },
                    {
                        "id": 33,
                        "string": "This is one of the many variants for self-training, called throttling (Abney, 2007) ."
                    },
                    {
                        "id": 34,
                        "string": "We empirically confirm that this outperforms the classic selection in our experiments."
                    },
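                    {
                        "id": "34a",
                        "string": "A minimal Python sketch of self-training with throttling; model.fit and model.predict_proba stand in for any scikit-learn-style probabilistic classifier, the data are assumed to be feature arrays, and all names here are ours rather than the paper's (n = 800 is the best instantiation reported below for POS tagging):\n\nimport numpy as np\n\ndef self_train(model, X_lab, y_lab, X_unlab, n=800, rounds=10):\n    # Throttling (Abney, 2007): after each round, move only the top-n most\n    # confident pseudo-labeled examples into the labeled set.\n    X_lab, y_lab, pool = list(X_lab), list(y_lab), list(X_unlab)\n    for _ in range(rounds):\n        if not pool:\n            break\n        model.fit(np.array(X_lab), np.array(y_lab))\n        probs = model.predict_proba(np.array(pool))\n        conf = probs.max(axis=1)             # confidence of the argmax class\n        top = np.argsort(-conf)[:n]          # relative order is more robust\n        for i in sorted(top, reverse=True):  # pop from the back to keep indices valid\n            X_lab.append(pool.pop(i))\n            y_lab.append(int(probs[i].argmax()))\n    return model"
                    },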
                    {
                        "id": 35,
                        "string": "Online learning In contrast to many classic algorithms, DNNs are trained online by default."
                    },
                    {
                        "id": 36,
                        "string": "We compare training setups and find that training until convergence on labeled data and then training until convergence using self-training performs best."
                    },
                    {
                        "id": 37,
                        "string": "Classic self-training has shown mixed success."
                    },
                    {
                        "id": 38,
                        "string": "In parsing it proved successful only with small datasets (Reichart and Rappoport, 2007) or when a generative component is used together with a reranker in high-data conditions (McClosky et al., 2006b; Suzuki and Isozaki, 2008) ."
                    },
                    {
                        "id": 39,
                        "string": "Some success was achieved with careful task-specific data selection (Petrov and McDonald, 2012) , while others report limited success on a variety of NLP tasks (Plank, 2011; Van Asch and Daelemans, 2016; van der Goot et al., 2017) ."
                    },
                    {
                        "id": 40,
                        "string": "Its main downside is that the model is not able to correct its own mistakes and errors are amplified, an effect that is increased under domain shift."
                    },
                    {
                        "id": 41,
                        "string": "Tri-training Tri-training (Zhou and Li, 2005 ) is a classic method that reduces the bias of predictions on unlabeled data by utilizing the agreement of three independently trained models."
                    },
                    {
                        "id": 42,
                        "string": "Tri-training (cf."
                    },
                    {
                        "id": 43,
                        "string": "Algorithm 2) first trains three models m 1 , m 2 , and m 3 on bootstrap samples of the labeled data L. An unlabeled data point is added to the training set of a model m i if the other two models m j and m k agree on its label."
                    },
                    {
                        "id": 44,
                        "string": "Training stops when the classifiers do not change anymore."
                    },
                    {
                        "id": 45,
                        "string": "Tri-training with disagreement (Søgaard, 2010) Algorithm 2 Tri-training (Zhou and Li, 2005) L i ← ∅ 7: for x ∈ U do 8: if p j (x) = p k (x)(j, k = i) then 9: L i ← L i ∪ {(x, p j (x))} m i ← train_model(L ∪ L i ) 10: until none of m i changes 11: apply majority vote over m i is based on the intuition that a model should only be strengthened in its weak points and that the labeled data should not be skewed by easy data points."
                    },
                    {
                        "id": 46,
                        "string": "In order to achieve this, it adds a simple modification to the original algorithm (altering line 8 in Algorithm 2), requiring that for an unlabeled data point on which m j and m k agree, the other model m i disagrees on the prediction."
                    },
                    {
                        "id": 47,
                        "string": "Tri-training with disagreement is more data-efficient than tritraining and has achieved competitive results on part-of-speech tagging (Søgaard, 2010) ."
                    },
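                    {
                        "id": "47a",
                        "string": "A compact Python sketch of tri-training and the disagreement variant just described, assuming numpy feature/label arrays and a make_model factory returning a fresh classifier (the helper names are ours; bootstrap samples come from scikit-learn's resample):\n\nimport numpy as np\nfrom sklearn.utils import resample\n\ndef tri_train(make_model, X_lab, y_lab, X_unlab, disagreement=False, rounds=10):\n    # Train three models on bootstrap samples of L for diversity (Zhou and Li, 2005).\n    models = []\n    for seed in range(3):\n        Xb, yb = resample(X_lab, y_lab, random_state=seed)\n        m = make_model()\n        m.fit(Xb, yb)\n        models.append(m)\n    for _ in range(rounds):\n        preds = [m.predict(X_unlab) for m in models]\n        changed = False\n        for i in range(3):\n            j, k = [a for a in range(3) if a != i]\n            agree = preds[j] == preds[k]\n            if disagreement:                 # Søgaard (2010): only teach weak points\n                agree &= preds[i] != preds[j]\n            if agree.any():\n                Xi = np.concatenate([X_lab, X_unlab[agree]])\n                yi = np.concatenate([y_lab, preds[j][agree]])\n                models[i].fit(Xi, yi)\n                changed = True\n        if not changed:\n            break\n    return models  # final prediction: majority vote over the three models"
                    },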
                    {
                        "id": 48,
                        "string": "Sampling unlabeled data Both tri-training and tri-training with disagreement can be very expensive in their original formulation as they require to produce predictions for each of the three models on all unlabeled data samples, which can be in the millions in realistic applications."
                    },
                    {
                        "id": 49,
                        "string": "We thus propose to sample a number of unlabeled examples at every epoch."
                    },
                    {
                        "id": 50,
                        "string": "For all traditional bootstrapping approaches we sample 10k candidate instances in each epoch."
                    },
                    {
                        "id": 51,
                        "string": "For the neural approaches we use a linearly growing candidate sampling scheme proposed by (Saito et al., 2017) , increasing the candidate pool size as the models become more accurate."
                    },
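                    {
                        "id": "51a",
                        "string": "As an illustration, per-epoch candidate sampling might look as follows; the 10k pool matches the classic setup above, while the linear growth rate is a placeholder of ours rather than a value from Saito et al. (2017):\n\nimport random\n\ndef sample_candidates(unlabeled, epoch, base=10000, growth=None):\n    # Classic bootstrapping: a fixed pool of 10k candidates per epoch.\n    # Neural variants: grow the candidate pool linearly as models improve.\n    k = base if growth is None else growth * (epoch + 1)\n    return random.sample(unlabeled, min(k, len(unlabeled)))"
                    },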
                    {
                        "id": 52,
                        "string": "Confidence thresholding Similar to selftraining, we can introduce an additional requirement that pseudo-labeled examples are only added if the probability of the prediction of at least one model is higher than some threshold τ ."
                    },
                    {
                        "id": 53,
                        "string": "We did not find this to outperform prediction without threshold for traditional tri-training, but thresholding proved essential for our method ( §2.3)."
                    },
                    {
                        "id": 54,
                        "string": "The most important condition for tri-training and tri-training with disagreement is that the models are diverse."
                    },
                    {
                        "id": 55,
                        "string": "Typically, bootstrap samples are used to create this diversity (Zhou and Li, 2005; Søgaard, 2010) ."
                    },
                    {
                        "id": 56,
                        "string": "However, training separate models on bootstrap samples of a potentially large amount of training data is expensive and takes a lot of time."
                    },
                    {
                        "id": 57,
                        "string": "This drawback motivates our approach."
                    },
                    {
                        "id": 58,
                        "string": "Multi-task tri-training In order to reduce both the time and space complexity of tri-training, we propose Multi-task Tritraining (MT-Tri)."
                    },
                    {
                        "id": 59,
                        "string": "MT-Tri leverages insights from multi-task learning (MTL) (Caruana, 1993) to share knowledge across models and accelerate training."
                    },
                    {
                        "id": 60,
                        "string": "Rather than storing and training each model separately, we propose to share the parameters of the models and train them jointly using MTL."
                    },
                    {
                        "id": 61,
                        "string": "2 All models thus collaborate on learning a joint representation, which improves convergence."
                    },
                    {
                        "id": 62,
                        "string": "The output softmax layers are model-specific and are only updated for the input of the respective model."
                    },
                    {
                        "id": 63,
                        "string": "We show the model in Figure 1 (as instantiated for POS tagging)."
                    },
                    {
                        "id": 64,
                        "string": "As the models leverage a joint representation, we need to ensure that the features used for prediction in the softmax layers of the different models are as diverse as possible, so that the models can still learn from each other's predictions."
                    },
                    {
                        "id": 65,
                        "string": "In contrast, if the parameters in all output softmax layers were the same, the method would degenerate to self-training."
                    },
                    {
                        "id": 66,
                        "string": "To guarantee diversity, we introduce an orthogonality constraint (Bousmalis et al., 2016) as an additional loss term, which we define as follows: L orth = W m 1 W m 2 2 F (1) where | · 2 F is the squared Frobenius norm and W m 1 and W m 2 are the softmax output parameters of the two source and pseudo-labeled output layers m 1 and m 2 , respectively."
                    },
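                    {
                        "id": "66a",
                        "string": "Equation (1) is cheap to compute; a minimal numpy sketch, where the transpose placement assumes weight matrices of shape (hidden_dim, n_classes), a convention of ours rather than the paper's:\n\nimport numpy as np\n\ndef orth_penalty(W_m1, W_m2):\n    # L_orth = ||W_m1^T W_m2||_F^2: the squared Frobenius norm is the sum of\n    # squared entries, and the penalty is zero when the two softmax layers\n    # rely on mutually orthogonal feature directions.\n    return float(np.sum((W_m1.T @ W_m2) ** 2))"
                    },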
                    {
                        "id": 67,
                        "string": "The orthogonality constraint encourages the models not to rely on the same features for prediction."
                    },
                    {
                        "id": 68,
                        "string": "As enforcing pairwise orthogonality between three matrices is not possible, we only enforce orthogonality between the softmax output layers of m 1 and m 2 , 3 while m 3 is gradually trained to be more target-specific."
                    },
                    {
                        "id": 69,
                        "string": "We parameterize L orth by γ=0.01 following ."
                    },
                    {
                        "id": 70,
                        "string": "We do not further tune γ."
                    },
                    {
                        "id": 71,
                        "string": "More formally, let us illustrate the model by taking the sequence prediction task (Figure 1 ) as illustration."
                    },
                    {
                        "id": 72,
                        "string": "Given an utterance with labels y 1 , .., y n , our Multi-task Tri-training loss consists of three task-specific (m 1 , m 2 , m 3 ) tagging loss functions (where h is the uppermost Bi-LSTM encoding): (2) In contrast to classic tri-training, we can train the multi-task model with its three model-specific outputs jointly and without bootstrap sampling on the labeled source domain data until convergence, as the orthogonality constraint enforces different representations between models m 1 and m 2 ."
                    },
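                    {
                        "id": "72a",
                        "string": "A hedged PyTorch sketch of the joint loss in Eq. (2), with a shared encoding h feeding three model-specific softmax layers; the module and its method names are ours, not the paper's:\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass MTTriHeads(nn.Module):\n    # Three model-specific softmax layers over a shared Bi-LSTM encoding h.\n    def __init__(self, hidden_dim, n_tags):\n        super().__init__()\n        self.heads = nn.ModuleList(nn.Linear(hidden_dim, n_tags) for _ in range(3))\n\n    def loss(self, h, y, gamma=0.01):\n        # Eq. (2): the three tagging cross-entropies plus the orthogonality\n        # penalty of Eq. (1) between the softmax parameters of m1 and m2.\n        ce = sum(F.cross_entropy(head(h), y) for head in self.heads)\n        W1, W2 = self.heads[0].weight, self.heads[1].weight\n        l_orth = (W1 @ W2.t()).pow(2).sum()\n        return ce + gamma * l_orth"
                    },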
                    {
                        "id": 73,
                        "string": "From this point, we can leverage the pair-wise agreement of two output layers to add pseudo-labeled examples as training data to the third model."
                    },
                    {
                        "id": 74,
                        "string": "We train the third output layer m 3 only on pseudo-labeled target instances in order to make tri-training more robust to a domain shift."
                    },
                    {
                        "id": 75,
                        "string": "For the final prediction, majority voting of all three output layers is used, which resulted in the best instantiation, together with confidence thresholding (τ = 0.9, except for highresource POS where τ = 0.8 performed slightly better)."
                    },
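                    {
                        "id": "75a",
                        "string": "Both decision rules are simple enough to state in code; a small sketch with names of our choosing:\n\nfrom collections import Counter\n\ndef vote(labels):\n    # Final prediction: majority vote over the three output layers.\n    return Counter(labels).most_common(1)[0][0]\n\ndef confident_agreement(p_j, p_k, c_j, c_k, tau=0.9):\n    # Add a pseudo-label only if the two layers agree and at least one of\n    # them is confident enough (tau = 0.9, or 0.8 for high-resource POS).\n    return p_j == p_k and max(c_j, c_k) > tau"
                    },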
                    {
                        "id": 76,
                        "string": "We also experimented with using a domainadversarial loss (Ganin et al., 2016) on the jointly learned representation, but found this not to help."
                    },
                    {
                        "id": 77,
                        "string": "The full pseudo-code is given in Algorithm 3."
                    },
                    {
                        "id": 78,
                        "string": "L(θ) = − i 1,..,n log P m i (y| h) + γL orth Computational complexity The motivation for MT-Tri was to reduce the space and time complexity of tri-training."
                    },
                    {
                        "id": 79,
                        "string": "We thus give an estimate of its efficiency gains."
                    },
                    {
                        "id": 80,
                        "string": "MT-Tri is~3× more spaceefficient than regular tri-training; tri-training stores one set of parameters for each of the three models, while MT-Tri only stores one set of parameters (we use three output layers, but these make up a comparatively small part of the total parameter budget)."
                    },
                    {
                        "id": 81,
                        "string": "In terms of time efficiency, tri-training first 3 We also tried enforcing orthogonality on a hidden layer rather than the output layer, but this did not help."
                    },
                    {
                        "id": 82,
                        "string": "L i ← ∅ 5: for x ∈ U do 6: if p j (x) = p k (x)(j, k = i) then 7: L i ← L i ∪ {(x, p j (x))} 8: if i = 3 then m i = train_model(L i ) 9: elsem i ← train_model(L ∪ L i ) 10: until end condition is met 11: apply majority vote over m i requires to train each of the models from scratch."
                    },
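                    {
                        "id": "82a",
                        "string": "Read as code, Algorithm 3 might look like the following Python sketch; train_joint, train_head, predict, and confidence are hypothetical methods on a multi-task model object, not an API from the paper:\n\ndef mt_tri(model, L, U, max_epochs=20, tau=0.9):\n    model.train_joint(L)                   # all three heads jointly, to convergence\n    for _ in range(max_epochs):\n        pools = [[], [], []]\n        for x in U:\n            p = [model.predict(x, head=i) for i in range(3)]\n            c = [model.confidence(x, head=i) for i in range(3)]\n            for i in range(3):\n                j, k = [a for a in range(3) if a != i]\n                if p[j] == p[k] and max(c[j], c[k]) > tau:\n                    pools[i].append((x, p[j]))\n        model.train_head(2, pools[2])      # m3: pseudo-labeled target data only\n        model.train_head(0, L + pools[0])  # m1, m2: source plus pseudo-labels\n        model.train_head(1, L + pools[1])\n    return model                           # predict via majority vote over heads"
                    },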
                    {
                        "id": 83,
                        "string": "The actual tri-training takes about the same time as training from scratch and requires a separate forward pass for each model, effectively training three independent models simultaneously."
                    },
                    {
                        "id": 84,
                        "string": "In contrast, MT-Tri only necessitates one forward pass as well as the evaluation of the two additional output layers (which takes a negligible amount of time) and requires about as many epochs as tri-training until convergence (see Table 3 , second column) while adding fewer unlabeled examples per epoch (see Section 3.4)."
                    },
                    {
                        "id": 85,
                        "string": "In our experiments, MT-Tri trained about 5-6× faster than traditional tri-training."
                    },
                    {
                        "id": 86,
                        "string": "MT-Tri can be seen as a self-ensembling technique, where different variations of a model are used to create a stronger ensemble prediction."
                    },
                    {
                        "id": 87,
                        "string": "Recent approaches in this line are snapshot ensembling ) that ensembles models converged to different minima during a training run, asymmetric tri-training (Saito et al., 2017) (ASYM) that leverages agreement on two models as information for the third, and temporal ensembling (Laine and Aila, 2017) , which ensembles predictions of a model at different epochs."
                    },
                    {
                        "id": 88,
                        "string": "We tried to compare to temporal ensembling in our experiments, but were not able to obtain consistent results."
                    },
                    {
                        "id": 89,
                        "string": "4 We compare to the closest most recent method, asymmetric tritraining (Saito et al., 2017) ."
                    },
                    {
                        "id": 90,
                        "string": "It differs from ours in two aspects: a) ASYM leverages only pseudolabels from data points on which m 1 and m 2 agree, and b) it uses only one task (m 3 ) as final predictor."
                    },
                    {
                        "id": 91,
                        "string": "In essence, our formulation of MT-Tri is closer to the original tri-training formulation (agreements on two provide pseudo-labels to the third) thereby incorporating more diversity."
                    },
                    {
                        "id": 92,
                        "string": "(Petrov and McDonald, 2012) for POS tagging (above) and the Amazon Reviews dataset (Blitzer et al., 2006) for sentiment analysis (below)."
                    },
                    {
                        "id": 93,
                        "string": "Experiments In order to ascertain which methods are robust across different domains, we evaluate on two widely used unsupervised domain adaptation datasets for two tasks, a sequence labeling and a classification task, cf."
                    },
                    {
                        "id": 94,
                        "string": "Table 1 for data statistics."
                    },
                    {
                        "id": 95,
                        "string": "POS tagging For POS tagging we use the SANCL 2012 shared task dataset (Petrov and McDonald, 2012) and compare to the top results in both low and high-data conditions (Schnabel and Schütze, 2014; Yin et al., 2015) ."
                    },
                    {
                        "id": 96,
                        "string": "Both are strong baselines, as the FLORS tagger has been developed for this challenging dataset and it is based on contextual distributional features (excluding the word's identity), and hand-crafted suffix and shape features (including some languagespecific morphological features)."
                    },
                    {
                        "id": 97,
                        "string": "We want to gauge to what extent we can adopt a nowadays fairly standard (but more lexicalized) general neural tagger."
                    },
                    {
                        "id": 98,
                        "string": "Our POS tagging model is a state-of-the-art Bi-LSTM tagger (Plank et al., 2016) with word and 100-dim character embeddings."
                    },
                    {
                        "id": 99,
                        "string": "Word embeddings are initialized with the 100-dim Glove embeddings (Pennington et al., 2014) ."
                    },
                    {
                        "id": 100,
                        "string": "The BiLSTM has one hidden layer with 100 dimensions."
                    },
                    {
                        "id": 101,
                        "string": "The base POS model is trained on WSJ with early stopping on the WSJ development set, using patience 2, Gaussian noise with σ = 0.2 and word dropout with p = 0.25 (Kiperwasser and Goldberg, 2016) ."
                    },
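                    {
                        "id": "101a",
                        "string": "A PyTorch skeleton matching the stated configuration (100-dim GloVe-initialized word embeddings, one Bi-LSTM layer with 100 hidden units, and a softmax over the 48 fine-grained tags); the character embeddings, Gaussian noise, and word dropout used in the paper are omitted here for brevity:\n\nimport torch\nimport torch.nn as nn\n\nclass BiLSTMTagger(nn.Module):\n    def __init__(self, vocab_size, n_tags=48, emb_dim=100, hidden=100, pretrained=None):\n        super().__init__()\n        self.emb = nn.Embedding(vocab_size, emb_dim)\n        if pretrained is not None:             # e.g. 100-dim Glove vectors\n            self.emb.weight.data.copy_(torch.as_tensor(pretrained))\n        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)\n        self.out = nn.Linear(2 * hidden, n_tags)\n\n    def forward(self, word_ids):               # word_ids: (batch, seq_len)\n        h, _ = self.lstm(self.emb(word_ids))   # (batch, seq_len, 2 * hidden)\n        return self.out(h)                     # per-token tag scores"
                    },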
                    {
                        "id": 102,
                        "string": "Regarding data, the source domain is the Ontonotes 4.0 release of the Penn treebank Wall Street Journal (WSJ) annotated for 48 fine-grained POS tags."
                    },
                    {
                        "id": 103,
                        "string": "This amounts to 30,060 labeled sen-tences."
                    },
                    {
                        "id": 104,
                        "string": "We use 100,000 WSJ sentences from 1988 as unlabeled data, following Schnabel and Schütze (2014) ."
                    },
                    {
                        "id": 105,
                        "string": "5 As target data, we use the five SANCL domains (answers, emails, newsgroups, reviews, weblogs)."
                    },
                    {
                        "id": 106,
                        "string": "We restrict the amount of unlabeled data for each SANCL domain to the first 100k sentences, and do not do any pre-processing."
                    },
                    {
                        "id": 107,
                        "string": "We consider the development set of ANSWERS as our only target dev set to set hyperparameters."
                    },
                    {
                        "id": 108,
                        "string": "This may result in suboptimal per-domain settings but better resembles an unsupervised adaptation scenario."
                    },
                    {
                        "id": 109,
                        "string": "Sentiment analysis For sentiment analysis, we evaluate on the Amazon reviews dataset (Blitzer et al., 2006) ."
                    },
                    {
                        "id": 110,
                        "string": "Reviews with 1 to 3 stars are ranked as negative, while reviews with 4 or 5 stars are ranked as positive."
                    },
                    {
                        "id": 111,
                        "string": "The dataset consists of four domains, yielding 12 adaptation scenarios."
                    },
                    {
                        "id": 112,
                        "string": "We use the same pre-processing and architecture as used in (Ganin et al., 2016; Saito et al., 2017) : 5,000-dimensional tf-idf weighted unigram and bigram features as input; 2k labeled source samples and 2k unlabeled target samples for training, 200 labeled target samples for validation, and between 3k-6k samples for testing."
                    },
                    {
                        "id": 113,
                        "string": "The model is an MLP with one hidden layer with 50 dimensions, sigmoid activations, and a softmax output."
                    },
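                    {
                        "id": "113a",
                        "string": "This setup translates almost directly into code; a sketch of the feature extraction and the classifier, with the softmax folded into the cross-entropy training loss as usual in PyTorch:\n\nimport torch.nn as nn\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\n# 5,000-dim tf-idf weighted unigram and bigram features, as described above.\nvectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=5000)\n\n# MLP with one 50-dim sigmoid hidden layer and a 2-way softmax output\n# (the softmax is applied via cross-entropy during training).\nsentiment_mlp = nn.Sequential(\n    nn.Linear(5000, 50),\n    nn.Sigmoid(),\n    nn.Linear(50, 2),\n)"
                    },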
                    {
                        "id": 114,
                        "string": "We compare against the Variational Fair Autoencoder (VFAE) (Louizos et al., 2015) model and domain-adversarial neural networks (DANN) (Ganin et al., 2016) ."
                    },
                    {
                        "id": 115,
                        "string": "Baselines Besides comparing to the top results published on both datasets, we include the following baselines: a) the task model trained on the source domain; b) self-training (Self); c) tri-training (Tri); d) tri-training with disagreement (Tri-D); and e) asymmetric tri-training (Saito et al., 2017) ."
                    },
                    {
                        "id": 116,
                        "string": "Our proposed model is multi-task tri-training (MT-Tri)."
                    },
                    {
                        "id": 117,
                        "string": "We implement our models in DyNet ."
                    },
                    {
                        "id": 118,
                        "string": "Reporting single evaluation scores might result in biased results (Reimers and Gurevych, 2017) ."
                    },
                    {
                        "id": 119,
                        "string": "Throughout the paper, we report mean accuracy and standard deviation over five runs for POS tagging and over ten runs for Results Sentiment analysis We show results for sentiment analysis for all 12 domain adaptation scenarios in Figure 2 ."
                    },
                    {
                        "id": 120,
                        "string": "For clarity, we also show the accuracy scores averaged across each target domain as well as a global macro average in Table 2 Self-training achieves surprisingly good results but is not able to compete with tri-training."
                    },
                    {
                        "id": 121,
                        "string": "Tritraining with disagreement is only slightly better than self-training, showing that the disagreement component might not be useful when there is a strong domain shift."
                    },
                    {
                        "id": 122,
                        "string": "Tri-training achieves the best average results on two target domains and clearly outperforms the state of the art on average."
                    },
                    {
                        "id": 123,
                        "string": "MT-Tri finally outperforms the state of the art on 3/4 domains, and even slightly traditional tritraining, resulting in the overall best method."
                    },
                    {
                        "id": 124,
                        "string": "This improvement is mainly due to the B->E and D->E scenarios, on which tri-training struggles."
                    },
                    {
                        "id": 125,
                        "string": "These domain pairs are among those with the highest Adistance (Blitzer et al., 2007) , which highlights that tri-training has difficulty dealing with a strong shift in domain."
                    },
                    {
                        "id": 126,
                        "string": "Our method is able to mitigate this deficiency by training one of the three output layers only on pseudo-labeled target domain examples."
                    },
                    {
                        "id": 127,
                        "string": "In addition, MT-Tri is more efficient as it adds a smaller number of pseudo-labeled examples than tri-training at every epoch."
                    },
                    {
                        "id": 128,
                        "string": "For sentiment analysis, tri-training adds around 1800-1950/2000 unlabeled examples at every epoch, while MT-Tri only adds around 100-300 in early epochs."
                    },
                    {
                        "id": 129,
                        "string": "This shows that the orthogonality constraint is useful for inducing diversity."
                    },
                    {
                        "id": 130,
                        "string": "In addition, adding fewer examples poses a smaller risk of swamping the learned representations with useless signals and is more akin to fine-tuning, the standard method for supervised domain adaptation (Howard and Ruder, 2018) ."
                    },
                    {
                        "id": 131,
                        "string": "We observe an asymmetry in the results between some of the domain pairs, e.g."
                    },
                    {
                        "id": 132,
                        "string": "B->D and D->B."
                    },
                    {
                        "id": 133,
                        "string": "We hypothesize that the asymmetry may be due to properties of the data and that the domains are relatively far apart e.g., in terms of A-distance."
                    },
                    {
                        "id": 134,
                        "string": "In fact, asymmetry in these domains is already reflected Table 4 : Accuracy for POS tagging on the dev and test sets of the SANCL domains, models trained on full source data setup."
                    },
                    {
                        "id": 135,
                        "string": "Values for methods with * are from (Schnabel and Schütze, 2014) ."
                    },
                    {
                        "id": 136,
                        "string": "in the results of Blitzer et al."
                    },
                    {
                        "id": 137,
                        "string": "(2007) and is corroborated in the results for asymmetric tri-training (Saito et al., 2017) and our method."
                    },
                    {
                        "id": 138,
                        "string": "We note a weakness of this dataset is high variance."
                    },
                    {
                        "id": 139,
                        "string": "Existing approaches only report the mean, which makes an objective comparison difficult."
                    },
                    {
                        "id": 140,
                        "string": "For this reason, we believe it is essential to evaluate proposed approaches also on other tasks."
                    },
                    {
                        "id": 141,
                        "string": "POS tagging Results for tagging in the low-data regime (10% of WSJ) are given in Table 3 ."
                    },
                    {
                        "id": 142,
                        "string": "Self-training does not work for the sequence prediction task."
                    },
                    {
                        "id": 143,
                        "string": "We report only the best instantia-tion (throttling with n=800)."
                    },
                    {
                        "id": 144,
                        "string": "Our results contribute to negative findings regarding self-training (Plank, 2011; Van Asch and Daelemans, 2016 )."
                    },
                    {
                        "id": 145,
                        "string": "In the low-data setup, tri-training with disagreement works best, reaching an overall average accuracy of 89.70, closely followed by classic tritraining, and significantly outperforming the baseline on 4/5 domains."
                    },
                    {
                        "id": 146,
                        "string": "The exception is newsgroups, a difficult domain with high OOV rate where none of the approches beats the baseline (see §3.4)."
                    },
                    {
                        "id": 147,
                        "string": "Our proposed MT-Tri is better than asymmetric tritraining, but falls below classic tri-training."
                    },
                    {
                        "id": 148,
                        "string": "It beats  the baseline significantly on only 2/5 domains (answers and emails)."
                    },
                    {
                        "id": 149,
                        "string": "The FLORS tagger (Yin et al., 2015) fares better."
                    },
                    {
                        "id": 150,
                        "string": "Its contextual distributional features are particularly helpful on unknown word-tag combinations (see § 3.4), which is a limitation of the lexicalized generic bi-LSTM tagger."
                    },
                    {
                        "id": 151,
                        "string": "For the high-data setup (Table 4 ) results are similar."
                    },
                    {
                        "id": 152,
                        "string": "Disagreement, however, is only favorable in the low-data setups; the effect of avoiding easy points no longer holds in the full data setup."
                    },
                    {
                        "id": 153,
                        "string": "Classic tritraining is the best method."
                    },
                    {
                        "id": 154,
                        "string": "In particular, traditional tri-training is complementary to word embedding initialization, pushing the non-pre-trained baseline to the level of SRC with Glove initalization."
                    },
                    {
                        "id": 155,
                        "string": "Tritraining pushes performance even further and results in the best model, significantly outperforming the baseline again in 4/5 cases, and reaching FLORS performance on weblogs."
                    },
                    {
                        "id": 156,
                        "string": "Multi-task tritraining is often slightly more effective than asymmetric tri-training (Saito et al., 2017) ; however, improvements for both are not robust across domains, sometimes performance even drops."
                    },
                    {
                        "id": 157,
                        "string": "The model likely is too simplistic for such a high-data POS setup, and exploring shared-private models might prove more fruitful ."
                    },
                    {
                        "id": 158,
                        "string": "On the test sets, tri-training performs consistently the best."
                    },
                    {
                        "id": 159,
                        "string": "POS analysis We analyze POS tagging accuracy with respect to word frequency 6 and unseen word-tag combinations (UWT) on the dev sets."
                    },
                    {
                        "id": 160,
                        "string": "known tags, OOVs and unknown word-tag (UWT) rate."
                    },
                    {
                        "id": 161,
                        "string": "The SANCL dataset is overall very challenging: OOV rates are high (6.8-11% compared to 2.3% in WSJ), so is the unknown word-tag (UWT) rate (answers and emails contain 2.91% and 3.47% UWT compared to 0.61% on WSJ) and almost all target domains even contain unknown tags (Schnabel and Schütze, 2014 ) (unknown tags: ADD,GW,NFP,XX), except for weblogs."
                    },
                    {
                        "id": 162,
                        "string": "Email is the domain with the highest OOV rate and highest unknown-tag-for-known-words rate."
                    },
                    {
                        "id": 163,
                        "string": "We plot accuracy with respect to word frequency on email in Figure 3 , analyzing how the three methods fare in comparison to the baseline on this difficult domain."
                    },
                    {
                        "id": 164,
                        "string": "Regarding OOVs, the results in Table 5 (second part) show that classic tri-training outperforms the source model (trained on only source data) on 3/5 domains in terms of OOV accuracy, except on two domains with high OOV rate (newsgroups and weblogs)."
                    },
                    {
                        "id": 165,
                        "string": "In general, we note that tri-training works best on OOVs and on low-frequency tokens, which is also shown in Figure 3 (leftmost bins)."
                    },
                    {
                        "id": 166,
                        "string": "Both other methods fall typically below the baseline in terms of OOV accuracy, but MT-Tri still outperforms Asym in 4/5 cases."
                    },
                    {
                        "id": 167,
                        "string": "Table 5 (last part) also shows that no bootstrapping method works well on unknown word-tag combinations."
                    },
                    {
                        "id": 168,
                        "string": "UWT tokens are very difficult to predict correctly using an unsupervised approach; the less lexicalized and more context-driven approach taken by FLORS is clearly superior for these cases, resulting in higher UWT accuracies for 4/5 domains."
                    },
                    {
                        "id": 169,
                        "string": "Related work Learning under Domain Shift There is a large body of work on domain adaptation."
                    },
                    {
                        "id": 170,
                        "string": "Studies on unsupervised domain adaptation include early work on bootstrapping (Steedman et al., 2003; McClosky et al., 2006a) , shared feature representations (Blitzer et al., 2006 (Blitzer et al., , 2007 and instance weighting (Jiang and Zhai, 2007) ."
                    },
                    {
                        "id": 171,
                        "string": "Recent ap-proaches include adversarial learning (Ganin et al., 2016) and fine-tuning (Sennrich et al., 2016) ."
                    },
                    {
                        "id": 172,
                        "string": "There is almost no work on bootstrapping approaches for recent neural NLP, in particular under domain shift."
                    },
                    {
                        "id": 173,
                        "string": "Tri-training is less studied, and only recently re-emerged in the vision community (Saito et al., 2017) , albeit is not compared to classic tri-training."
                    },
                    {
                        "id": 174,
                        "string": "Neural network ensembling Related work on self-ensembling approaches includes snapshot ensembling  or temporal ensembling (Laine and Aila, 2017) ."
                    },
                    {
                        "id": 175,
                        "string": "In general, the line between \"explicit\" and \"implicit\" ensembling , like dropout (Srivastava et al., 2014) or temporal ensembling (Saito et al., 2017) , is more fuzzy."
                    },
                    {
                        "id": 176,
                        "string": "As we noted earlier our multi-task learning setup can be seen as a form of self-ensembling."
                    },
                    {
                        "id": 177,
                        "string": "Multi-task learning in NLP Neural networks are particularly well-suited for MTL allowing for parameter sharing (Caruana, 1993) ."
                    },
                    {
                        "id": 178,
                        "string": "Recent NLP conferences witnessed a \"tsunami\" of deep learning papers (Manning, 2015) , followed by what we call a multi-task learning \"wave\": MTL has been successfully applied to a wide range of NLP tasks (Cohn and Specia, 2013; Cheng et al., 2015; Luong et al., 2015; Plank et al., 2016; Fang and Cohn, 2016; Ruder et al., 2017; Augenstein et al., 2018) ."
                    },
                    {
                        "id": 179,
                        "string": "Related to it is the pioneering work on adversarial learning (DANN) (Ganin et al., 2016) ."
                    },
                    {
                        "id": 180,
                        "string": "For sentiment analysis we found tri-training and our MT-Tri model to outperform DANN."
                    },
                    {
                        "id": 181,
                        "string": "Our MT-Tri model lends itself well to shared-private models such as those proposed recently Kim et al., 2017) , which extend upon (Ganin et al., 2016) by having separate source and target-specific encoders."
                    },
                    {
                        "id": 182,
                        "string": "Conclusions We re-evaluate a range of traditional generalpurpose bootstrapping algorithms in the context of neural network approaches to semi-supervised learning under domain shift."
                    },
                    {
                        "id": 183,
                        "string": "For the two examined NLP tasks classic tri-training works the best and even outperforms a recent state-of-the-art method."
                    },
                    {
                        "id": 184,
                        "string": "The drawback of tri-training it its time and space complexity."
                    },
                    {
                        "id": 185,
                        "string": "We therefore propose a more efficient multi-task tri-training model, which outperforms both traditional tri-training and recent alternatives in the case of sentiment analysis."
                    },
                    {
                        "id": 186,
                        "string": "For POS tagging, classic tri-training is superior, performing especially well on OOVs and low frequency to-kens, which suggests it is less affected by error propagation."
                    },
                    {
                        "id": 187,
                        "string": "Overall we emphasize the importance of comparing neural approaches to strong baselines and reporting results across several runs."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 18
                    },
                    {
                        "section": "Neural bootstrapping methods",
                        "n": "2",
                        "start": 19,
                        "end": 21
                    },
                    {
                        "section": "Self-training",
                        "n": "2.1",
                        "start": 22,
                        "end": 40
                    },
                    {
                        "section": "Tri-training",
                        "n": "2.2",
                        "start": 41,
                        "end": 57
                    },
                    {
                        "section": "Multi-task tri-training",
                        "n": "2.3",
                        "start": 58,
                        "end": 91
                    },
                    {
                        "section": "Experiments",
                        "n": "3",
                        "start": 92,
                        "end": 94
                    },
                    {
                        "section": "POS tagging",
                        "n": "3.1",
                        "start": 95,
                        "end": 108
                    },
                    {
                        "section": "Sentiment analysis",
                        "n": "3.2",
                        "start": 109,
                        "end": 114
                    },
                    {
                        "section": "Baselines",
                        "n": "3.3",
                        "start": 115,
                        "end": 168
                    },
                    {
                        "section": "Related work",
                        "n": "4",
                        "start": 169,
                        "end": 181
                    },
                    {
                        "section": "Conclusions",
                        "n": "5",
                        "start": 182,
                        "end": 187
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/989-Table2-1.png",
                        "caption": "Table 2: Average accuracy scores for each SA target domain. *: result from Saito et al. (2017).",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 288.0,
                            "y1": 501.59999999999997,
                            "y2": 633.12
                        }
                    },
                    {
                        "filename": "../figure/image/989-Figure2-1.png",
                        "caption": "Figure 2: Average results for unsupervised domain adaptation on the Amazon dataset. Domains: B (Book), D (DVD), E (Electronics), K (Kitchen). Results for VFAE, DANN, and Asym are from Saito et al. (2017).",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 543.36,
                            "y1": 61.44,
                            "y2": 280.32
                        }
                    },
                    {
                        "filename": "../figure/image/989-Table4-1.png",
                        "caption": "Table 4: Accuracy for POS tagging on the dev and test sets of the SANCL domains, models trained on full source data setup. Values for methods with * are from (Schnabel and Schütze, 2014).",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 235.67999999999998,
                            "y2": 538.0799999999999
                        }
                    },
                    {
                        "filename": "../figure/image/989-Table3-1.png",
                        "caption": "Table 3: Accuracy scores on dev set of target domain for POS tagging for 10% labeled data. Avg: average over the 5 SANCL domains. Hyperparameter ep (epochs) is tuned on Answers dev. µpseudo: average amount of added pseudo-labeled data. FLORS: results for Batch (u:big) from (Yin et al., 2015) (see §3).",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 65.75999999999999,
                            "y2": 174.23999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/989-Figure1-1.png",
                        "caption": "Figure 1: Multi-task tri-training (MT-Tri).",
                        "page": 2,
                        "bbox": {
                            "x1": 317.76,
                            "x2": 515.04,
                            "y1": 61.44,
                            "y2": 178.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/989-Figure3-1.png",
                        "caption": "Figure 3: POS accuracy per binned log frequency.",
                        "page": 7,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 61.44,
                            "y2": 163.2
                        }
                    },
                    {
                        "filename": "../figure/image/989-Table5-1.png",
                        "caption": "Table 5: Accuracy scores on dev sets for OOV and unknown word-tag (UWT) tokens.",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 62.879999999999995,
                            "y2": 262.08
                        }
                    },
                    {
                        "filename": "../figure/image/989-Table1-1.png",
                        "caption": "Table 1: Number of labeled and unlabeled sentences for each domain in the SANCL 2012 dataset (Petrov and McDonald, 2012) for POS tagging (above) and the Amazon Reviews dataset (Blitzer et al., 2006) for sentiment analysis (below).",
                        "page": 4,
                        "bbox": {
                            "x1": 89.75999999999999,
                            "x2": 270.24,
                            "y1": 62.4,
                            "y2": 190.07999999999998
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-8"
        },
        {
            "slides": {
                "0": {
                    "title": "Contributions",
                    "text": [
                        "Question Answering (Q&A) and Spoken Language Understanding (SLU) under the same parsing framework:",
                        "Public Q&A corpora (English)",
                        "Proprietary Alexa SLU corpus (English)",
                        "Transfer learning to learn parsers on low-resource domains, for both Q&A and SLU:"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "3": {
                    "title": "Parser",
                    "text": [
                        "Which cinemas screen Star Wars tonight?",
                        "Time Title Title Time",
                        "tonight Title Title Time",
                        "Transition-based parser of Cheng et al. (2017) + character-level embeddings and copy mechanism:",
                        "t0 tn x0 xn nt0 ntn"
                    ],
                    "page_nums": [
                        12,
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23,
                        24
                    ],
                    "images": []
                },
                "4": {
                    "title": "Results",
                    "text": [
                        "DATA TASK DOMAIN ACCURACY",
                        "Overnight Q&A publications calendar housing recipes restaurants basketball blocks social",
                        "Alexa SLU search recipes cinema bookings closet",
                        "DATA TASK DOMAIN BASELINE Copy",
                        "DATA TASK DOMAIN BASELINE Attention"
                    ],
                    "page_nums": [
                        25,
                        26,
                        27,
                        29
                    ],
                    "images": []
                },
                "8": {
                    "title": "Trasfer Learning Multi task Learning",
                    "text": [
                        "HR DOMAIN LR DOMAIN",
                        "TER COPY TER COPY",
                        "t0 tn x0 xn t0 tn x0 xn"
                    ],
                    "page_nums": [
                        36
                    ],
                    "images": []
                },
                "10": {
                    "title": "Multi task Learning for Alexa SLU",
                    "text": [
                        "t0 tn x0 xn nt0 ntn"
                    ],
                    "page_nums": [
                        38
                    ],
                    "images": []
                },
                "13": {
                    "title": "Takeaways",
                    "text": [
                        "Executable semantic parsing unifies Q&A and SLU;",
                        "One model for all is fine but some choices must be revisited (e.g. attention, copy);",
                        "Transfer learning for low-resource domains on Q&A and SLU."
                    ],
                    "page_nums": [
                        41
                    ],
                    "images": []
                }
            },
            "paper_title": "Practical Semantic Parsing for Spoken Language Understanding",
            "paper_id": "990",
            "paper": {
                "title": "Practical Semantic Parsing for Spoken Language Understanding",
                "abstract": "Executable semantic parsing is the task of converting natural language utterances into logical forms that can be directly used as queries to get a response. We build a transfer learning framework for executable semantic parsing. We show that the framework is effective for Question Answering (Q&A) as well as for Spoken Language Understanding (SLU). We further investigate the case where a parser on a new domain can be learned by exploiting data on other domains, either via multitask learning between the target domain and an auxiliary domain or via pre-training on the auxiliary domain and fine-tuning on the target domain. With either flavor of transfer learning, we are able to improve performance on most domains; we experiment with public data sets such as Overnight and NLmaps as well as with commercial SLU data. The experiments carried out on data sets that are different in nature show how executable semantic parsing can unify different areas of NLP such as Q&A and SLU.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Due to recent advances in speech recognition and language understanding, conversational interfaces such as Alexa, Cortana, and Siri are becoming more common."
                    },
                    {
                        "id": 1,
                        "string": "They currently have two large uses cases."
                    },
                    {
                        "id": 2,
                        "string": "First, a user can use them to complete a specific task, such as playing music."
                    },
                    {
                        "id": 3,
                        "string": "Second, a user can use them to ask questions where the questions are answered by querying knowledge graph or database back-end."
                    },
                    {
                        "id": 4,
                        "string": "Typically, under a common interface, there exist two disparate systems that can handle each use cases."
                    },
                    {
                        "id": 5,
                        "string": "The system underlying the first use case is known as a spoken language understanding (SLU) system."
                    },
                    {
                        "id": 6,
                        "string": "Typical commercial SLU systems rely on predicting a coarse user intent and then tagging each word in the utterance to * Work conducted while interning at Amazon Alexa AI."
                    },
                    {
                        "id": 7,
                        "string": "the intent's slots."
                    },
                    {
                        "id": 8,
                        "string": "This architecture is popular due to its simplicity and robustness."
                    },
                    {
                        "id": 9,
                        "string": "On the other hand, Q&A, which need systems to produce more complex structures such as trees and graphs, requires a more comprehensive understanding of human language."
                    },
                    {
                        "id": 10,
                        "string": "One possible system that can handle such a task is an executable semantic parser (Liang, 2013; Kate et al., 2005) ."
                    },
                    {
                        "id": 11,
                        "string": "Given a user utterance, an executable semantic parser can generate tree or graph structures that represent logical forms that can be used to query a knowledge base or database."
                    },
                    {
                        "id": 12,
                        "string": "In this work, we propose executable semantic parsing as a common framework for both uses cases by framing SLU as executable semantic parsing that unifies the two use cases."
                    },
                    {
                        "id": 13,
                        "string": "For Q&A, the input utterances are parsed into logical forms that represent the machine-readable representation of the question, while in SLU, they represent the machine-readable representation of the user intent and slots."
                    },
                    {
                        "id": 14,
                        "string": "One added advantage of using parsing for SLU is the ability to handle more complex linguistic phenomena such as coordinated intents that traditional SLU systems struggle to handle (Agarwal et al., 2018) ."
                    },
                    {
                        "id": 15,
                        "string": "Our parsing model is an extension of the neural transition-based parser of Cheng et al."
                    },
                    {
                        "id": 16,
                        "string": "(2017) ."
                    },
                    {
                        "id": 17,
                        "string": "A major issue with semantic parsing is the availability of the annotated logical forms to train the parsers, which are expensive to obtain."
                    },
                    {
                        "id": 18,
                        "string": "A solution is to rely more on distant supervisions such as by using question-answer pairs (Clarke et al., 2010; ."
                    },
                    {
                        "id": 19,
                        "string": "Alternatively, it is possible to exploit annotated logical forms from a different domain or related data set."
                    },
                    {
                        "id": 20,
                        "string": "In this paper, we focus on the scenario where data sets for several domains exist but only very little data for a new one is available and apply transfer learning techniques to it."
                    },
                    {
                        "id": 21,
                        "string": "A common way to implement transfer learning is by first pre-training the model on a domain on which a large data set is available and subsequently fine-tuning the model on the target domain (Thrun, 1996; Zoph et al., 2016) ."
                    },
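The pre-train/fine-tune recipe described in this sentence is straightforward to express in code. Below is a minimal Python sketch, assuming a caller-supplied train_epoch helper and in-memory data sets; the function name, helper, and epoch counts are illustrative assumptions, not the paper's implementation.

```python
# Sketch: pre-train on a high-resource auxiliary domain, then continue
# training the same weights on the low-resource target domain.
# train_epoch(model, optimizer, data) is an assumed caller-supplied helper.
def pretrain_finetune(model, optimizer, train_epoch, aux_data, target_data,
                      pretrain_epochs=10, finetune_epochs=10):
    for _ in range(pretrain_epochs):
        train_epoch(model, optimizer, aux_data)     # pre-train on auxiliary domain
    for _ in range(finetune_epochs):
        train_epoch(model, optimizer, target_data)  # fine-tune on target domain
    return model
```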
                    {
                        "id": 22,
                        "string": "We also consider a multi-task learning (MTL) approach."
                    },
                    {
                        "id": 23,
                        "string": "MTL refers to machine learning models that improve generalization by training on more than one task."
                    },
                    {
                        "id": 24,
                        "string": "MTL has been used for a number of NLP problems such as tagging (Collobert and Weston, 2008) , syntactic parsing (Luong et al., 2015) , machine translation Luong et al., 2015) and semantic parsing (Fan et al., 2017) ."
                    },
                    {
                        "id": 25,
                        "string": "See Caruana (1997) and Ruder (2017) for an overview of MTL."
                    },
                    {
                        "id": 26,
                        "string": "A good Q&A data set for our domain adaptation scenario is the Overnight data set (Wang et al., 2015b) , which contains sentences annotated with Lambda Dependency-Based Compositional Semantics (Lambda DCS; Liang 2013) for eight different domains."
                    },
                    {
                        "id": 27,
                        "string": "However, it includes only a few hundred sentences for each domain, and its vocabularies are relatively small."
                    },
                    {
                        "id": 28,
                        "string": "We also experiment with a larger semantic parsing data set (NLmaps; Lawrence and Riezler 2016) ."
                    },
                    {
                        "id": 29,
                        "string": "For SLU, we work with data from a commercial conversational assistant that has a much larger vocabulary size."
                    },
                    {
                        "id": 30,
                        "string": "One common issue in parsing is how to deal with rare or unknown words, which is usually addressed by either delexicalization or by implementing a copy mechanism (Gulcehre et al., 2016) ."
                    },
                    {
                        "id": 31,
                        "string": "We show clear differences in the outcome of these and other techniques when applied to data sets of varying sizes."
                    },
                    {
                        "id": 32,
                        "string": "Our contributions are as follows: • We propose a common semantic parsing framework for Q&A and SLU and demonstrate its broad applicability and effectiveness."
                    },
                    {
                        "id": 33,
                        "string": "• We report parsing baselines for Overnight for which exact match parsing scores have not been yet published."
                    },
                    {
                        "id": 34,
                        "string": "• We show that SLU greatly benefits from a copy mechanism, which is also beneficial for NLmaps but not Overnight."
                    },
                    {
                        "id": 35,
                        "string": "• We investigate the use of transfer learning and show that it can facilitate parsing on lowresource domains."
                    },
                    {
                        "id": 36,
                        "string": "Transition-based Parser Transition-based parsers are widely used for dependency parsing (Nivre, 2008; Dyer et al., 2015) and they have been also applied to semantic parsing tasks (Wang et al., 2015a; Cheng et al., 2017) ."
                    },
                    {
                        "id": 37,
                        "string": "In syntactic parsing, a transition system is usually defined as a quadruple: T = {S, A, I, E}, where S is a set of states, A is a set of actions, I is the initial state, and E is a set of end states."
                    },
                    {
                        "id": 38,
                        "string": "A state is composed of a buffer, a stack, and a set of arcs: S = (β, σ, A)."
                    },
                    {
                        "id": 39,
                        "string": "In the initial state, the buffer contains all the words in the input sentence while the stack and the set of subtrees are empty: S_0 = (w_0 | ... | w_N, ∅, ∅)."
                    },
                    {
                        "id": 43,
                        "string": "Terminal states have empty stack and buffer: S T = (∅, ∅, A)."
                    },
                    {
                        "id": 44,
                        "string": "During parsing, the stack stores words that have been removed from the buffer but have not been fully processed yet."
                    },
                    {
                        "id": 45,
                        "string": "Actions can be performed to advance the transition system's state: they can either consume words in the buffer and move them to the stack (SHIFT) or combine words in the stack to create new arcs (LEFT-ARC and RIGHT-ARC, depending on the direction of the arc) 1 ."
                    },
                    {
                        "id": 46,
                        "string": "Words in the buffer are processed left-toright until an end state is reached, at which point the set of arcs will contain the full output tree."
                    },
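As a concrete illustration of the quadruple T = {S, A, I, E} and the actions above, here is a minimal Python sketch of an arc-standard transition system. Function names and the state representation are assumptions for illustration, not the parser's actual code.

```python
# Sketch of an arc-standard transition system: a state S = (buffer, stack, arcs).
def initial_state(words):
    # I: the buffer holds all words; stack and arc set start empty.
    return list(words), [], set()

def shift(buffer, stack, arcs):
    stack.append(buffer.pop(0))        # consume the next buffer word

def left_arc(buffer, stack, arcs):
    arcs.add((stack[-1], stack[-2]))   # arc from stack top to the item below it
    del stack[-2]

def right_arc(buffer, stack, arcs):
    arcs.add((stack[-2], stack[-1]))
    stack.pop()

def is_terminal(buffer, stack):
    # E: terminal states have an empty buffer and stack; arcs hold the output tree.
    return not buffer and not stack
```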
                    {
                        "id": 47,
                        "string": "The parser needs to be able to predict the next action based on its current state."
                    },
                    {
                        "id": 48,
                        "string": "Traditionally, supervised techniques are used to learn such classifiers, using a parallel corpus of sentences and their output trees."
                    },
                    {
                        "id": 49,
                        "string": "Trees can be converted to states and actions using an oracle system."
                    },
                    {
                        "id": 50,
                        "string": "For a detailed explanation of transition-based parsing, see Nivre (2003) and Nivre (2008) ."
                    },
                    {
                        "id": 51,
                        "string": "Neural Transition-based Parser with Stack-LSTMs In this paper, we consider the neural executable semantic parser of Cheng et al."
                    },
                    {
                        "id": 52,
                        "string": "(2017) , which follows the transition-based parsing paradigm."
                    },
                    {
                        "id": 53,
                        "string": "Its transition system differs from traditional systems as the words are not consumed from the buffer because in executable semantic parsing, there are no strict alignments between words in the input and nodes in the tree."
                    },
                    {
                        "id": 54,
                        "string": "The neural architecture encodes the buffer using a Bi-LSTM (Graves, 2012) and the stack as a Stack-LSTM (Dyer et al., 2015) , a recurrent network that allows for push and pop operations."
                    },
                    {
                        "id": 55,
                        "string": "Additionally, the previous actions are also represented with an LSTM."
                    },
                    {
                        "id": 56,
                        "string": "The output of these networks is fed into feed-forward layers and softmax layers are used to predict the next action given the current state."
                    },
                    {
                        "id": 57,
                        "string": "The possible actions are REDUCE, which pops an item from the stack, TER, which creates a terminal node (i.e., a leaf in the tree), and NT, which creates a non-terminal node."
                    },
                    {
                        "id": 58,
                        "string": "When the next action is either TER or NT, additional softmax layers predict the output token to be generated."
                    },
                    {
                        "id": 59,
                        "string": "Since the buffer does not change while parsing, an attention mechanism is used to focus on specific words given the current state of the parser."
                    },
                    {
                        "id": 60,
                        "string": "We extend the model of Cheng et al."
                    },
                    {
                        "id": 61,
                        "string": "(2017) by adding character-level embeddings and a copy mechanism."
                    },
                    {
                        "id": 62,
                        "string": "When using only word embeddings, out-of-vocabulary words are usually mapped to one embedding vector and do not exploit morphological features."
                    },
                    {
                        "id": 63,
                        "string": "Our model encodes words by feeding each character embedding into an LSTM and concatenating its output to the word embedding: x = {e_w ; h^M_c}, (1) where e_w is the word embedding of the input word w and h^M_c is the last hidden state of the character-level LSTM over the characters of the input word w = c_0, ..., c_M."
                    },
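A minimal PyTorch sketch of Eq. (1) is given below: the last hidden state of a character-level LSTM is concatenated to the word embedding. Module names and dimensions are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class WordCharEncoder(nn.Module):
    # Sketch of Eq. (1): x = {e_w ; h^M_c}
    def __init__(self, n_words, n_chars, word_dim=100, char_dim=25, char_hidden=50):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, batch_first=True)

    def forward(self, word_id, char_ids):
        # word_id: (1,) word index; char_ids: (1, M) indices of characters c_0..c_M
        e_w = self.word_emb(word_id)                  # (1, word_dim)
        _, (h_n, _) = self.char_lstm(self.char_emb(char_ids))
        h_M = h_n[-1]                                 # (1, char_hidden): last hidden state
        return torch.cat([e_w, h_M], dim=-1)          # x = {e_w ; h^M_c}
```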
                    {
                        "id": 67,
                        "string": "Rare words are usually handled by either delexicalizing the output or by using a copy mechanism."
                    },
                    {
                        "id": 68,
                        "string": "Delexicalization involves substituting named entities with a specific token in an effort to reduce the number of rare and unknown words."
                    },
                    {
                        "id": 69,
                        "string": "Copy relies on the fact that when rare or unknown words must be generated, they usually appear in the same form in the input sentence and they can be therefore copied from the input itself."
                    },
                    {
                        "id": 70,
                        "string": "Our copy implementation follows the strategy of Fan et al."
                    },
                    {
                        "id": 71,
                        "string": "(2017) , where the output of the generation layer is concatenated to the scores of an attention mechanism (Bahdanau et al., 2015) , which express the relevance of each input word with respect to the current state of the parser."
                    },
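The copy mechanism just described can be sketched as follows: the generation layer's scores are concatenated with the attention scores over the input, and a single softmax decides between generating a vocabulary token and copying an input word. This follows only the textual description above; the exact architecture of Fan et al. (2017) may differ.

```python
import torch
import torch.nn.functional as F

def predict_token(gen_logits, attn_scores):
    # gen_logits: (vocab_size,) scores from the generation layer
    # attn_scores: (input_len,) attention scores over the buffer words
    combined = torch.cat([gen_logits, attn_scores])  # one joint score vector
    probs = F.softmax(combined, dim=0)
    best = int(torch.argmax(probs))
    if best < gen_logits.size(0):
        return "generate", best                      # index into the output vocabulary
    return "copy", best - gen_logits.size(0)         # input position to copy from
```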
                    {
                        "id": 72,
                        "string": "In the experiments that follow, we compare delexicalization with copy mechanism on different setups."
                    },
                    {
                        "id": 73,
                        "string": "A depiction of the full model is shown in Figure 1 ."
                    },
                    {
                        "id": 74,
                        "string": "Transfer learning We consider the scenario where large training corpora are available for some domains and we want to bootstrap a parser for a new domain where little training data is available."
                    },
                    {
                        "id": 75,
                        "string": "We investigate the use of two transfer learning approaches: pre-training and multi-task learning."
                    },
                    {
                        "id": 76,
                        "string": "Figure 1: The full neural transition-based parsing model. Representations of stack, buffer, and previous actions are used to predict the next action. When the TER or NT actions are chosen, further layers are used to predict (or copy) the token."
                    },
                    {
                        "id": 100,
                        "string": "For MTL, the different tasks share most of the architecture and only the output layers, which are responsible for predicting the output tokens, are separate for each task."
                    },
                    {
                        "id": 101,
                        "string": "When multi-tasking across domains of the same data set, we expect that most layers of the neural parser, such as the ones responsible for learning the word embeddings and the stack and buffer representation, will learn similar features and can, therefore, be shared."
                    },
                    {
                        "id": 102,
                        "string": "We implement two different MTL setups: a) when separate heads are used for both the TER classifier and the NT classifier, which is expected to be effective when transferring across tasks that do not share output vocabulary; and b) when a separate head is used only for the TER classifier, more appropriate when the non-terminals space is mostly shared."
                    },
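Setup (a) can be sketched as a shared state representation feeding per-domain output heads; in setup (b) only the TER heads would be domain-specific. All module and parameter names below are illustrative assumptions.

```python
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    # Sketch of MTL setup (a): shared action classifier, per-domain TER/NT heads.
    def __init__(self, state_dim, domains, ter_sizes, nt_sizes):
        super().__init__()
        self.action = nn.Linear(state_dim, 3)  # TER / NT / REDUCE, shared
        self.ter = nn.ModuleDict({d: nn.Linear(state_dim, ter_sizes[d]) for d in domains})
        self.nt = nn.ModuleDict({d: nn.Linear(state_dim, nt_sizes[d]) for d in domains})

    def forward(self, state, domain):
        # For setup (b), self.nt would be a single shared nn.Linear instead.
        return self.action(state), self.ter[domain](state), self.nt[domain](state)
```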
                    {
                        "id": 103,
                        "string": "Data In order to investigate the flexibility of the executable semantic parsing framework, we evaluate models on Q&A data sets as well as on commercial SLU data sets."
                    },
                    {
                        "id": 104,
                        "string": "For Q&A, we consider Overnight (Wang et al., 2015b) and NLmaps (Lawrence and Riezler, 2016) ."
                    },
                    {
                        "id": 105,
                        "string": "Overnight It contains sentences annotated with Lambda DCS (Liang, 2013) ."
                    },
                    {
                        "id": 106,
                        "string": "The sentences are divided into eight domains: calendar, blocks, housing, restaurants, publications, recipes, socialnetwork, and basketball."
                    },
                    {
                        "id": 107,
                        "string": "As shown in Table 1 , the number of sentences and the terminal vocabularies are small, which makes the learning more challenging, preventing us from using data-hungry approaches such as sequence-to-sequence models."
                    },
                    {
                        "id": 108,
                        "string": "The current state-of-the-art results, to the best of our knowledge, are reported by Su and Yan (2017) ."
                    },
                    {
                        "id": 109,
                        "string": "Previous work on this data set use denotation accuracy as a metric."
                    },
                    {
                        "id": 110,
                        "string": "In this paper, we use logical form exact match accuracy across all data sets."
                    },
                    {
                        "id": 111,
                        "string": "NLmaps It contains more than two thousand questions about geographical facts, retrieved from OpenStreetMap (Haklay and Weber, 2008) ."
                    },
                    {
                        "id": 112,
                        "string": "Unfortunately, this data set is not divided into subdomains."
                    },
                    {
                        "id": 113,
                        "string": "While NLmaps has comparable sizes with some of the Overnight domains, its vocabularies are much larger: containing 160 terminals, 24 non-terminals and 280 word types (Table 1) ."
                    },
                    {
                        "id": 114,
                        "string": "The current state-of-the-art results on this data set are reported by Duong et al."
                    },
                    {
                        "id": 115,
                        "string": "(2017) ."
                    },
                    {
                        "id": 116,
                        "string": "SLU We select five domains from our SLU data set: search, recipes, cinema, bookings, and closet."
                    },
                    {
                        "id": 117,
                        "string": "In order to investigate the use case of a new lowresource domain exploiting a higher-resource domain, we selected a mix of high-resource and lowresource domains."
                    },
                    {
                        "id": 118,
                        "string": "Details are shown in Table 1 ."
                    },
                    {
                        "id": 119,
                        "string": "We extracted shallow trees from data originally collected for intent/slot tagging: intents become the root of the tree, slot types are attached to the roots as their children and slot values are in turn attached to their slot types as their children."
                    },
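This conversion can be sketched in a few lines of Python, assuming BIO-style slot tags; the intent and tag names below are hypothetical, chosen to match the Figure 2 example.

```python
def slots_to_tree(intent, tokens, tags):
    # Root = intent; children = slot types; grandchildren = slot-value tokens.
    tree = (intent, [])
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):                # start a new slot-type child
            tree[1].append((tag[2:], [token]))
        elif tag.startswith("I-") and tree[1]:  # extend the current slot value
            tree[1][-1][1].append(token)
    return tree

# slots_to_tree("SearchScreeningEvent",
#               ["Which", "cinemas", "screen", "Star", "Wars", "tonight"],
#               ["O", "O", "O", "B-Title", "I-Title", "B-Time"])
# -> ("SearchScreeningEvent", [("Title", ["Star", "Wars"]), ("Time", ["tonight"])])
```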
                    {
                        "id": 120,
                        "string": "An example is shown in Figure 2 ."
                    },
                    {
                        "id": 121,
                        "string": "A similar approach to transform intent/slot data into tree structures has been recently employed by Gupta et al."
                    },
                    {
                        "id": 122,
                        "string": "(2018b) ."
                    },
                    {
                        "id": 123,
                        "string": "Experiments We first run experiments on single-task semantic parsing to observe the differences among the three different data sources discussed in Section 4."
                    },
                    {
                        "id": 124,
                        "string": "Specifically, we explore the impact of an attention mechanism on the performance as well as the comparison between delexicalization and a copy mechanism for dealing with data sparsity."
                    },
                    {
                        "id": 125,
                        "string": "The metric used to evaluate parsers is the exact match accuracy, defined as the ratio of sentences cor-  rectly parsed."
                    },
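Exact match accuracy, as defined here, reduces to a one-liner; the sketch below assumes predictions and gold logical forms are given as comparable strings or trees.

```python
def exact_match_accuracy(predicted, gold):
    # Ratio of sentences whose predicted logical form equals the gold one.
    assert len(predicted) == len(gold) and gold
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)
```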
                    {
                        "id": 126,
                        "string": "Attention Because the buffer is not consumed as in traditional transition-based parsers, Cheng et al."
                    },
                    {
                        "id": 127,
                        "string": "(2017) use an additive attention mechanism (Bahdanau et al., 2015) to focus on the more relevant words in the buffer for the current state of the stack."
                    },
                    {
                        "id": 128,
                        "string": "In order to find the impact of attention on the different data sets, we run ablation experiments, as shown in Table 2 (left side)."
                    },
                    {
                        "id": 129,
                        "string": "We found that attention between stack and buffer is not always beneficial: it appears to be helpful for larger data sets while harmful for smaller data sets."
                    },
                    {
                        "id": 130,
                        "string": "Attention is, however, useful for NLmaps, regardless of the  data size."
                    },
                    {
                        "id": 131,
                        "string": "Even though NLmaps data is similarly sized to some of the Overnight domains, its terminal space is considerably larger, perhaps making attention more important even with a smaller data set."
                    },
                    {
                        "id": 132,
                        "string": "On the other hand, the high-resource SLU's cinema domain is not able to benefit from the attention mechanism."
                    },
                    {
                        "id": 133,
                        "string": "We note that the performance of this model on NLmaps falls behind the state of the art (Duong et al., 2017) ."
                    },
                    {
                        "id": 134,
                        "string": "The hyper-parameters of our model were however not tuned on this data set."
                    },
                    {
                        "id": 135,
                        "string": "Handling Sparsity A popular way to deal with the data sparsity problem is to delexicalize the data, that is replacing rare and unknown words with coarse categories."
                    },
                    {
                        "id": 136,
                        "string": "In our experiment, we use a named entity recognition system 2 to replace names with their named entity types."
                    },
                    {
                        "id": 137,
                        "string": "Alternatively, it is possible to use a copy mechanism to enable the decoder to copy rare words from the input rather than generating them from its limited vocabulary."
                    },
                    {
                        "id": 138,
                        "string": "We compare the two solutions across all data sets on the right side of Table 2 ."
                    },
                    {
                        "id": 139,
                        "string": "Regardless of the data set, the copy mechanism generally outperforms delexicalization."
                    },
                    {
                        "id": 140,
                        "string": "We also note that delexi-2 https://spacy.io calization has unexpected catastrophic effects on exact match accuracy for calendar and housing."
                    },
                    {
                        "id": 141,
                        "string": "For Overnight, however, the system with copy mechanism is outperformed by the system without attention."
                    },
                    {
                        "id": 142,
                        "string": "This is unsurprising as the copy mechanism is based on attention, which is not effective on Overnight (Section 5.1)."
                    },
                    {
                        "id": 143,
                        "string": "The inefficacy of copy mechanisms on the Overnight data set was also discussed in Jia and Liang (2016) , where answer accuracy, rather than parsing accuracy, was used as a metric."
                    },
                    {
                        "id": 144,
                        "string": "As such, the results are not directly comparable."
                    },
                    {
                        "id": 145,
                        "string": "For NLmaps and all SLU domains, using a copy mechanism results in an average accuracy improvement of 16% over the baseline."
                    },
                    {
                        "id": 146,
                        "string": "It is worth noting that the copy mechanism is unsurprisingly effective for SLU data due to the nature of the data set: the SLU trees were obtained from data collected for slot tagging, and as such, each leaf in the tree has to be copied from the input sentence."
                    },
                    {
                        "id": 147,
                        "string": "Even though Overnight often yields different conclusions, most likely due to its small vocabulary size, the similar behaviors observed for NLmaps and SLU is reassuring, confirming that it is possible to unify Q&A and SLU under the same umbrella framework of executable semantic parsing."
                    },
                    {
                        "id": 148,
                        "string": "In order to compare the NLmaps results with Lawrence and Riezler (2016) , we also compute F1 scores for the data set."
                    },
                    {
                        "id": 149,
                        "string": "Our baseline outperforms previous results, achieving a score of 0.846."
                    },
                    {
                        "id": 150,
                        "string": "Our best F1 results are also obtained when adding the copy mechanism, achieving a score of 0.874."
                    },
                    {
                        "id": 151,
                        "string": "Transfer Learning The first set of experiments involve transfer learning across Overnight domains."
                    },
                    {
                        "id": 152,
                        "string": "For this data set, the non-terminal vocabulary is mostly shared across domains."
                    },
                    {
                        "id": 153,
                        "string": "As such, we use the architecture where only the TER output classifier is not shared."
                    },
                    {
                        "id": 154,
                        "string": "Selecting the best auxiliary domain by maximizing the overlap with the main domain was not successful, and we instead performed an exhaustive search over the domain pairs on the development set."
                    },
                    {
                        "id": 155,
                        "string": "In the interest of space, for each main domain, we report results for the best auxiliary domain (Table 3)."
                    },
                    {
                        "id": 156,
                        "string": "We note that MTL and pre-training provide similar results and provide an average improvement of 4%."
                    },
                    {
                        "id": 157,
                        "string": "As expected, we observe more substantial improvements for smaller domains."
                    },
                    {
                        "id": 158,
                        "string": "We performed the same set of experiments on   the SLU domains, as shown in Table 4 ."
                    },
                    {
                        "id": 159,
                        "string": "In this case, the non-terminal vocabulary can vary significantly across domains."
                    },
                    {
                        "id": 160,
                        "string": "We therefore choose to use the MTL architecture where both TER and NT output classifiers are not shared."
                    },
                    {
                        "id": 161,
                        "string": "Also for SLU, there is no clear winner between pre-training and MTL."
                    },
                    {
                        "id": 162,
                        "string": "Nevertheless, they always outperform the baseline, demonstrating the importance of transfer learning, especially for smaller domains."
                    },
                    {
                        "id": 163,
                        "string": "While the focus of this transfer learning framework is in exploiting high-resource domains annotated in the same way as a new low-resource domain, we also report a preliminary experiment on transfer learning across tasks."
                    },
                    {
                        "id": 164,
                        "string": "We selected the recipes domain, which exists in both Overnight and SLU."
                    },
                    {
                        "id": 165,
                        "string": "While the SLU data set is significantly different from Overnight, deriving from a corpus annotated with intent/slot labels, as discussed in Section 4, we found promising results using pre-training, increasing the accuracy from 58.3 to 61.1."
                    },
                    {
                        "id": 166,
                        "string": "A full investigation of transfer learning across domains belonging to heterogeneous data sets is left for future work."
                    },
                    {
                        "id": 167,
                        "string": "The experiments on transfer learning demon- Related work A large collection of logical forms of different nature exist in the semantic parsing literature: semantic role schemes (Palmer et al., 2005; Meyers et al., 2004; Baker et al., 1998) , syntax/semantics interfaces (Steedman, 1996) , executable logical forms (Liang, 2013; Kate et al., 2005) , and general purpose meaning representations (Banarescu et al., 2013; Abend and Rappoport, 2013 Cheng et al."
                    },
                    {
                        "id": 168,
                        "string": "(2017) , which is inspired by Recurrent Neural Network Grammars (Dyer et al., 2016) ."
                    },
                    {
                        "id": 169,
                        "string": "We extend the model with ideas inspired by Gulcehre et al."
                    },
                    {
                        "id": 170,
                        "string": "(2016) and Luong and Manning (2016) ."
                    },
                    {
                        "id": 171,
                        "string": "We build our multi-task learning architecture upon the rich literature on the topic."
                    },
                    {
                        "id": 172,
                        "string": "MTL was first introduce in Caruana (1997) ."
                    },
                    {
                        "id": 173,
                        "string": "It has been since used for a number of NLP problems such as tagging (Collobert and Weston, 2008) , syntactic parsing (Luong et al., 2015) , and machine translation Luong et al., 2015) ."
                    },
                    {
                        "id": 174,
                        "string": "The closest to our work is Fan et al."
                    },
                    {
                        "id": 175,
                        "string": "(2017) , where MTL architectures are built on top of an attentive sequenceto-sequence model (Bahdanau et al., 2015) ."
                    },
                    {
                        "id": 176,
                        "string": "We instead focus on transfer learning across domains of the same data sets and employ a different architecture which promises to be less data-hungry than sequence-to-sequence models."
                    },
                    {
                        "id": 177,
                        "string": "Typical SLU systems rely on domain-specific semantic parsers that identify intents and slots in a sentence."
                    },
                    {
                        "id": 178,
                        "string": "Traditionally, these tasks were performed by linear machine learning models (Sha and Pereira, 2003) but more recently jointlytrained DNN models are used (Mesnil et al., 2015; Hakkani-Tür et al., 2016) with differing contexts (Gupta et al., 2018a; Vishal Ishwar Naik, 2018) ."
                    },
                    {
                        "id": 179,
                        "string": "More recently there has been work on extending the traditional intent/slot framework using targeted parsing to handle more complex linguistic phenomenon like coordination (Gupta et al., 2018c; Agarwal et al., 2018) ."
                    },
                    {
                        "id": 180,
                        "string": "Conclusions We framed SLU as an executable semantic parsing task, which addresses a limitation of current commercial SLU systems."
                    },
                    {
                        "id": 181,
                        "string": "By applying our framework to different data sets, we demonstrate that the framework is effective for Q&A as well as for SLU."
                    },
                    {
                        "id": 182,
                        "string": "We explored a typical scenario where it is necessary to learn a semantic parser for a new domain with little data, but other high-resource domains are available."
                    },
                    {
                        "id": 183,
                        "string": "We show the effectiveness of our system and both pre-training and MTL on different domains and data sets."
                    },
                    {
                        "id": 184,
                        "string": "Preliminary experiment results on transfer learning across domains belonging to heterogeneous data sets suggest future work in this area."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 35
                    },
                    {
                        "section": "Transition-based Parser",
                        "n": "2",
                        "start": 36,
                        "end": 50
                    },
                    {
                        "section": "Neural Transition-based Parser with",
                        "n": "2.1",
                        "start": 51,
                        "end": 73
                    },
                    {
                        "section": "Transfer learning",
                        "n": "3",
                        "start": 74,
                        "end": 102
                    },
                    {
                        "section": "Data",
                        "n": "4",
                        "start": 103,
                        "end": 122
                    },
                    {
                        "section": "Experiments",
                        "n": "5",
                        "start": 123,
                        "end": 125
                    },
                    {
                        "section": "Attention",
                        "n": "5.1",
                        "start": 126,
                        "end": 134
                    },
                    {
                        "section": "Handling Sparsity",
                        "n": "5.2",
                        "start": 135,
                        "end": 150
                    },
                    {
                        "section": "Transfer Learning",
                        "n": "5.3",
                        "start": 151,
                        "end": 166
                    },
                    {
                        "section": "Related work",
                        "n": "6",
                        "start": 167,
                        "end": 179
                    },
                    {
                        "section": "Conclusions",
                        "n": "7",
                        "start": 180,
                        "end": 184
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/990-Figure1-1.png",
                        "caption": "Figure 1: The full neural transition-based parsing model. Representations of stack, buffer, and previous actions are used to predict the next action. When the TER or NT actions are chosen, further layers are used to predict (or copy) the token.",
                        "page": 2,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 515.04,
                            "y1": 66.24,
                            "y2": 317.76
                        }
                    },
                    {
                        "filename": "../figure/image/990-Table4-1.png",
                        "caption": "Table 4: Transfer learning results for SLU domains. BL + Copy is the model without transfer learning. PRETR. stands for pre-training. Again, the numbers are exact match accuracy.",
                        "page": 5,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 284.15999999999997,
                            "y1": 263.52,
                            "y2": 340.32
                        }
                    },
                    {
                        "filename": "../figure/image/990-Table3-1.png",
                        "caption": "Table 3: Transfer learning results for the Overnight domains. BL − Att is the model without transfer learning. PRETR. stands for pre-training. Again, we report exact match accuracy.",
                        "page": 5,
                        "bbox": {
                            "x1": 93.6,
                            "x2": 271.2,
                            "y1": 62.4,
                            "y2": 190.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/990-Table2-1.png",
                        "caption": "Table 2: Left side: Ablation experiments on attention mechanism. Right side: Comparison between delexicalization and copy mechanism. BL is the model of Section 2.1, −Att refers to the same model without attention, +Delex is the system with delexicalization and in +Copy we use a copy mechanism instead. The scores indicate the percentage of correct parses.",
                        "page": 4,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 289.44,
                            "y1": 62.4,
                            "y2": 280.32
                        }
                    },
                    {
                        "filename": "../figure/image/990-Table1-1.png",
                        "caption": "Table 1: Details of training data. # is the number of sentences, TER is the terminal vocabulary size, NT is the nonterminal vocabulary size and Words is the input vocabulary size.",
                        "page": 3,
                        "bbox": {
                            "x1": 317.76,
                            "x2": 517.4399999999999,
                            "y1": 242.39999999999998,
                            "y2": 486.24
                        }
                    },
                    {
                        "filename": "../figure/image/990-Figure2-1.png",
                        "caption": "Figure 2: Conversion from intent/slot tags to tree for the sentence Which cinemas screen Star Wars tonight?",
                        "page": 3,
                        "bbox": {
                            "x1": 336.47999999999996,
                            "x2": 506.4,
                            "y1": 89.75999999999999,
                            "y2": 197.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-9"
        },
        {
            "slides": {
                "0": {
                    "title": "Abstract Meaning Representation AMR",
                    "text": [
                        "He ate the pizza with his fingers."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "AMR to text generation English",
                    "text": [
                        "He ate the pizza with his fingers."
                    ],
                    "page_nums": [
                        2,
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Previous work",
                    "text": [
                        "Konstas et al. (2017): sequential encoder;"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "This work",
                    "text": [
                        "He ate the pizza with his fingers.",
                        "Are improvements in graph encoders due to reentrancies?",
                        "Graph: Graph Convolutional Network (GCN; Kipf and Welling, 2017)."
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Sequential input Konstas et al 2017",
                    "text": [
                        ":arg0 he :arg1 pizza :instrument finger :part-of he eat-01",
                        ":arg0 eat-01 he :arg1 pizza :instr. finger part-of he",
                        "eat-01 :arg0 he :arg1 pizza :instrument finger :part-of he"
                    ],
                    "page_nums": [
                        6,
                        7,
                        8
                    ],
                    "images": []
                },
                "5": {
                    "title": "Tree structured input",
                    "text": [
                        "eat-01 :arg0 he :arg1 pizza :instrument finger :part-of he",
                        ":arg0 he :arg1 pizza :instr. finger part-of he eat-01"
                    ],
                    "page_nums": [
                        9,
                        10,
                        11,
                        12
                    ],
                    "images": []
                },
                "6": {
                    "title": "Graph structured input",
                    "text": [
                        ":arg0 he :arg1 pizza :instrument finger :part-of he eat-01",
                        "eat-01 :arg0 he :arg1 pizza :instrument finger :part-of he",
                        ":arg0 he :arg1 pizza :instr. finger part-of he eat-01"
                    ],
                    "page_nums": [
                        13,
                        14
                    ],
                    "images": []
                },
                "8": {
                    "title": "Comparison between models dev set R1",
                    "text": [
                        "Seq TreeLSTM GCN-Tree GCN-Graph"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "9": {
                    "title": "Comparison with previous work test set R1",
                    "text": [
                        "Konstas(seq) Song(graph) GCN-Tree GCN-Graph",
                        "Konstas: sequential baseline, Konstas et al. (2017)"
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "12": {
                    "title": "Long range dependencies",
                    "text": [
                        "He ate the pizza with a fork.",
                        "eat-01 :arg0 he :arg1 pizza :instrument fork",
                        "Model Max dependency length"
                    ],
                    "page_nums": [
                        20
                    ],
                    "images": []
                },
                "13": {
                    "title": "Generation example",
                    "text": [
                        "communicate-01 lawyer significant-other ex",
                        "REF tell your ex that all communication needs to go through the lawyer",
                        "Seq tell that all the communication go through lawyer",
                        "Tree tell your ex, tell your ex, the need for all the communication",
                        "Graph tell your ex the need to go through a lawyer"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                },
                "16": {
                    "title": "More examples",
                    "text": [
                        "Graph i dont tell him but he finds out. i didnt tell him but he was out. i dont tell him but found out. i dont tell him but he found out.",
                        "Graph if you tell people they can help you , if you tell him, you can help you ! if you tell person_name you, you can help you . if you tell them, you can help you .",
                        "Graph i d recommend you go and see your doctor too. i recommend you go to see your doctor who is going to see your doctor. you recommend going to see your doctor too. i recommend you going to see your doctor too."
                    ],
                    "page_nums": [
                        27,
                        28,
                        29,
                        30
                    ],
                    "images": []
                }
            },
            "paper_title": "Structural Neural Encoders for AMR-to-text Generation",
            "paper_id": "992",
            "paper": {
                "title": "Structural Neural Encoders for AMR-to-text Generation",
                "abstract": "AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs. Sequence-to-sequence models can be used to this end by converting the AMR graphs to strings. Approaching the problem while working directly with graphs requires the use of graph-to-sequence models that encode the AMR graph into a vector representation. Such encoding has been shown to be beneficial in the past, and unlike sequential encoding, it allows us to explicitly capture reentrant structures in the AMR graphs. We investigate the extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation by comparing graph encoders to tree encoders, where reentrancies are not preserved. We show that improvements in the treatment of reentrancies and long-range dependencies contribute to higher overall scores for graph encoders. Our best model achieves 24.40 BLEU on LDC2015E86, outperforming the state of the art by 1.1 points and 24.54 BLEU on LDC2017T10, outperforming the state of the art by 1.24 points.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Abstract Meaning Representation (AMR; Banarescu et al."
                    },
                    {
                        "id": 1,
                        "string": "2013 ) is a semantic graph representation that abstracts away from the syntactic realization of a sentence, where nodes in the graph represent concepts and edges represent semantic relations between them."
                    },
                    {
                        "id": 2,
                        "string": "AMRs are graphs, rather than trees, because co-references and control structures result in nodes with multiple parents, called reentrancies."
                    },
                    {
                        "id": 3,
                        "string": "For instance, the AMR of Figure 1 (a) contains a reentrancy between finger and he, caused by the possessive pronoun his."
                    },
                    {
                        "id": 4,
                        "string": "AMR-to-text generation is the task of automatically generating natural language from AMR graphs."
                    },
                    {
                        "id": 5,
                        "string": "Attentive encoder/decoder architectures, commonly used for Neural Machine Translation (NMT), have been explored for this task (Konstas et al., 2017; Song et al., 2018; Beck et al., 2018) ."
                    },
                    {
                        "id": 6,
                        "string": "In order to use sequence-to-sequence models, Konstas et al."
                    },
                    {
                        "id": 7,
                        "string": "(2017) reduce the AMR graphs to sequences, while Song et al."
                    },
                    {
                        "id": 8,
                        "string": "(2018) and Beck et al."
                    },
                    {
                        "id": 9,
                        "string": "(2018) directly encode them as graphs."
                    },
                    {
                        "id": 10,
                        "string": "Graph encoding allows the model to explicitly encode reentrant structures present in the AMR graphs."
                    },
                    {
                        "id": 11,
                        "string": "While central to AMR, reentrancies are often hard to treat both in parsing and in generation."
                    },
                    {
                        "id": 12,
                        "string": "Previous work either removed them from the graphs, hence obtaining sequential (Konstas et al., 2017) or tree-structured (Liu et al., 2015; Takase et al., 2016) data, while other work maintained them but did not analyze their impact on performance (e.g., Song et al., 2018; Beck et al., 2018) ."
                    },
                    {
                        "id": 13,
                        "string": "Damonte et al."
                    },
                    {
                        "id": 14,
                        "string": "(2017) showed that state-of-the-art parsers do not perform well in predicting reentrant structures, while van Noord and Bos (2017) compared different pre-and post-processing techniques to improve the performance of sequenceto-sequence parsers with respect to reentrancies."
                    },
                    {
                        "id": 15,
                        "string": "It is not yet clear whether explicit encoding of reentrancies is beneficial for generation."
                    },
                    {
                        "id": 16,
                        "string": "In this paper, we compare three types of encoders for AMR: 1) sequential encoders, which reduce AMR graphs to sequences; 2) tree encoders, which ignore reentrancies; and 3) graph encoders."
                    },
                    {
                        "id": 17,
                        "string": "We pay particular attention to two phenomena: reentrancies, which mark co-reference and control structures, and long-range dependencies in the AMR graphs, which are expected to benefit from structural encoding."
                    },
                    {
                        "id": 18,
                        "string": "The contributions of the paper are two-fold: • We present structural encoders for the encoder/decoder framework and show the benefits of graph encoders not only compared to sequential encoders but also compared to tree encoders, which have not been studied so far for AMR-to-text generation."
                    },
                    {
                        "id": 19,
                        "string": "• We show that better treatment of reentrancies and long-range dependencies contributes to improvements in the graph encoders."
                    },
                    {
                        "id": 20,
                        "string": "Our best model, based on a graph encoder, achieves state-of-the-art results for both the LDC2015E86 dataset (24.40 on BLEU and 23.79 on Meteor) and the LDC2017T10 dataset (24.54 on BLEU and 24.07 on Meteor)."
                    },
                    {
                        "id": 21,
                        "string": "Input Representations Graph-structured AMRs AMRs are normally represented as rooted and directed graphs: G_0 = (V_0, E_0, L), V_0 = {v_1, v_2, ..., v_n}, root ∈ V_0, where V_0 are the graph vertices (or nodes) and root is a designated root node in V_0."
                    },
                    {
                        "id": 25,
                        "string": "The edges in the AMR are labeled: E_0 ⊆ V_0 × L × V_0, L = {ℓ_1, ℓ_2, ..., ℓ_n}."
                    },
                    {
                        "id": 29,
                        "string": "Each edge e ∈ E 0 is a triple: e = (i, label, j), where i ∈ V 0 is the parent node, label ∈ L is the edge label and j ∈ V 0 is the child node."
                    },
                    {
                        "id": 30,
                        "string": "In order to obtain unlabeled edges, thus decreasing the total number of parameters required by the models, we replace each labeled edge e = (i, label, j) with two unlabeled edges e_1 = (i, label) and e_2 = (label, j): G = (V, E), V = V_0 ∪ L = {v_1, ..., v_n, ℓ_1, ..., ℓ_n}, E ⊆ (V_0 × L) ∪ (L × V_0)."
                    },
                    {
                        "id": 37,
                        "string": "Each unlabeled edge e ∈ E is a pair: e = (i, j), where one of the following holds: 1. i ∈ V 0 and j ∈ L; 2. i ∈ L and j ∈ V 0 ."
                    },
                    {
                        "id": 38,
                        "string": "For instance, the edge between eat-01 and he with label :arg0 of Figure 1 (a) is replaced by two edges in Figure 1(d) : an edge between eat-01 and :arg0 and another one between :arg0 and he."
                    },
                    {
                        "id": 39,
                        "string": "The process, also used in Beck et al."
                    },
                    {
                        "id": 40,
                        "string": "(2018) , tranforms the input graph into its equivalent Levi graph (Levi, 1942) ."
                    },
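To make the Levi-graph transformation concrete, the following is a minimal Python sketch; the function name, the (parent, label, child) edge encoding, and the per-occurrence disambiguation of label vertices are illustrative assumptions rather than the paper's implementation.

```python
def to_levi_graph(nodes, edges):
    """Replace each labeled edge (i, label, j) with two unlabeled edges
    that pass through a new label vertex: (i, label) and (label, j).
    Each edge occurrence gets its own label vertex here (disambiguated
    with an index) -- one possible reading of the formalization above."""
    levi_nodes = list(nodes)
    levi_edges = []
    for idx, (i, label, j) in enumerate(edges):
        label_node = f"{label}#{idx}"  # hypothetical naming scheme
        levi_nodes.append(label_node)
        levi_edges.append((i, label_node))
        levi_edges.append((label_node, j))
    return levi_nodes, levi_edges

# The :arg0 edge between eat-01 and he becomes eat-01 -> :arg0 -> he:
nodes = ["eat-01", "he", "pizza", "finger"]
edges = [("eat-01", ":arg0", "he"), ("eat-01", ":arg1", "pizza"),
         ("eat-01", ":instrument", "finger"), ("finger", ":part-of", "he")]
print(to_levi_graph(nodes, edges))
```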
                    {
                        "id": 41,
                        "string": "Tree-structured AMRs In order to obtain tree structures, it is necessary to discard the reentrancies from the AMR graphs."
                    },
                    {
                        "id": 42,
                        "string": "Similarly to Takase et al."
                    },
                    {
                        "id": 43,
                        "string": "(2016) , we replace nodes with n > 1 incoming edges with n identically labeled nodes, each with a single incoming edge."
                    },
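A minimal sketch of this duplication step, assuming nodes are (id, label) pairs and edges are (parent, edge_label, child) triples; leaving any outgoing edges on the first copy is an assumption the text does not pin down.

```python
from collections import defaultdict

def remove_reentrancies(nodes, edges):
    """Split every node with n > 1 incoming edges into n identically
    labeled nodes, each keeping a single incoming edge, so the graph
    becomes a tree. Outgoing edges stay on the first copy."""
    labels = dict(nodes)
    incoming = defaultdict(list)
    for parent, elabel, child in edges:
        incoming[child].append((parent, elabel))
    new_nodes, new_edges = list(nodes), []
    for child, in_edges in incoming.items():
        for k, (parent, elabel) in enumerate(in_edges):
            cid = child if k == 0 else f"{child}~{k}"  # fresh id, same label
            if k > 0:
                new_nodes.append((cid, labels[child]))
            new_edges.append((parent, elabel, cid))
    return new_nodes, new_edges
```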
                    {
                        "id": 44,
                        "string": "Sequential AMRs Following Konstas et al."
                    },
                    {
                        "id": 45,
                        "string": "(2017) , the input sequence is a linearized and anonymized AMR graph."
                    },
                    {
                        "id": 46,
                        "string": "Linearization is used to convert the graph into a sequence: x = x_1, ..., x_N, x_i ∈ V. The depth-first traversal of the graph defines the indexing between nodes and tokens in the sequence."
                    },
                    {
                        "id": 50,
                        "string": "For instance, the root node is x 1 , its leftmost child is x 2 and so on."
                    },
                    {
                        "id": 51,
                        "string": "Nodes with multiple parents are visited more than once."
                    },
                    {
                        "id": 52,
                        "string": "At each visit, their labels are repeated in the sequence, effectively losing reentrancy information, as shown in Figure 1 (b)."
                    },
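The depth-first linearization can be sketched as follows; using node ids directly as tokens and the `children` adjacency encoding are simplifications for illustration.

```python
def linearize(root, children):
    """Depth-first traversal: the root is x_1, its leftmost child x_2, and
    so on; a reentrant node is visited (and its label repeated) once per
    parent. `children` maps a node to its ordered (edge_label, child) list."""
    tokens = []

    def visit(node):
        tokens.append(node)
        for elabel, child in children.get(node, []):
            tokens.append(elabel)
            visit(child)

    visit(root)
    return tokens

children = {"eat-01": [(":arg0", "he"), (":arg1", "pizza"),
                       (":instrument", "finger")],
            "finger": [(":part-of", "he")]}
print(" ".join(linearize("eat-01", children)))
# eat-01 :arg0 he :arg1 pizza :instrument finger :part-of he
```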
                    {
                        "id": 53,
                        "string": "Anonymization removes names and rare words with coarse categories to reduce data sparsity."
                    },
                    {
                        "id": 54,
                        "string": "An alternative to anonymization is to employ a copy mechanism (Gulcehre et al., 2016) , where the models learn to copy rare words from the input itself."
                    },
                    {
                        "id": 55,
                        "string": "In this paper, we follow the anonymization approach."
                    },
                    {
                        "id": 56,
                        "string": "Encoders In this section, we review the encoders adopted as building blocks for our tree and graph encoders."
                    },
                    {
                        "id": 57,
                        "string": "Recurrent Neural Network Encoders We reimplement the encoder of Konstas et al."
                    },
                    {
                        "id": 58,
                        "string": "(2017) , where the sequential linearization is the input to a bidirectional LSTM (BiLSTM; Graves et al."
                    },
                    {
                        "id": 59,
                        "string": "2013) network."
                    },
                    {
                        "id": 60,
                        "string": "The hidden state of the BiL-STM at step i is used as a context-aware word representation of the i-th token in the sequence: e 1:N = BiLSTM(x 1:N ), where e i ∈ R d , d is the size of the output embeddings."
                    },
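A minimal PyTorch sketch of this sequential encoder; sizes are illustrative, not the paper's hyperparameters.

```python
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Token embeddings fed to a bidirectional LSTM; the hidden state at
    step i serves as the context-aware representation e_i."""
    def __init__(self, vocab_size, emb_dim=128, d=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # each direction outputs d/2 units, concatenated to size d
        self.bilstm = nn.LSTM(emb_dim, d // 2, batch_first=True,
                              bidirectional=True)

    def forward(self, x):                # x: (batch, N) token ids
        e, _ = self.bilstm(self.emb(x))  # e: (batch, N, d)
        return e
```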
                    {
                        "id": 61,
                        "string": "TreeLSTM Encoders Tree-Structured Long Short-Term Memory Networks (TreeLSTM; Tai et al."
                    },
                    {
                        "id": 62,
                        "string": "2015) have been introduced primarily as a way to encode the hierarchical structure of syntactic trees (Tai et al., 2015) , but they have also been applied to AMR for the task of headline generation (Takase et al., 2016) ."
                    },
                    {
                        "id": 63,
                        "string": "TreeLSTMs assume tree-structured input, so AMR graphs must be preprocessed to respect this constraint: reentrancies, which play an essential role in AMR, must be removed, thereby transforming the graphs into trees."
                    },
                    {
                        "id": 64,
                        "string": "We use the Child-Sum variant introduced by Tai et al."
                    },
                    {
                        "id": 65,
                        "string": "(2015) , which processes the tree in a bottomup pass."
                    },
                    {
                        "id": 66,
                        "string": "When visiting a node, the hidden states of its children are summed up in a single vector which is then passed into recurrent gates."
                    },
                    {
                        "id": 67,
                        "string": "In order to use information from both incoming and outgoing edges (parents and children), we employ bidirectional TreeLSTMs (Eriguchi et al., 2016) , where the bottom-up pass is followed by a top-down pass."
                    },
                    {
                        "id": 68,
                        "string": "The top-down state of the root node is obtained by feeding the bottom-up state of the root node through a feed-forward layer: h ↓ root = tanh(W r h ↑ root + b), where h ↑ i is the hidden state of node x i ∈ V for the bottom-up pass and h ↓ i is the hidden state of node x i for the top-down pass."
                    },
                    {
                        "id": 69,
                        "string": "The bottom up states for all other nodes are computed with an LSTM, with the cell state given by their parent nodes: h ↓ i = LSTM(h ↑ p(i) , h ↑ i ), where p(i) is the parent of node x i in the tree."
                    },
                    {
                        "id": 70,
                        "string": "The final hidden states are obtained by concatenating the states from the bottom-up pass and the topdown pass: h i = h ↓ i ; h ↑ i ."
                    },
                    {
                        "id": 71,
                        "string": "The hidden state of the root node is usually used as a representation for the entire tree."
                    },
                    {
                        "id": 72,
                        "string": "In order to use attention over all nodes, as in traditional NMT (Bahdanau et al., 2015) , we can however build node embeddings by extracting the hidden states of each node in the tree: e 1:N = h 1:N , where e i ∈ R d , d is the size of the output embed- dings."
                    },
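A sketch of the top-down half of the bidirectional TreeLSTM described above, assuming the bottom-up states h↑ have already been computed by a Child-Sum pass. Feeding the parent's full (h, c) LSTM state downward is an assumption; the equations above only fix the hidden part.

```python
import torch
import torch.nn as nn

class TopDownPass(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.root_ff = nn.Linear(d, d)  # h↓_root = tanh(W_r h↑_root + b)
        self.cell = nn.LSTMCell(d, d)   # h↓_i = LSTM(h↓_p(i), h↑_i)

    def forward(self, h_up, parent, order):
        # h_up: node -> (1, d) bottom-up state; parent: node -> its parent;
        # order: nodes listed root-first, so parents precede their children
        root = order[0]
        h_dn = {root: torch.tanh(self.root_ff(h_up[root]))}
        c_dn = {root: torch.zeros_like(h_dn[root])}
        for i in order[1:]:
            p = parent[i]
            h_dn[i], c_dn[i] = self.cell(h_up[i], (h_dn[p], c_dn[p]))
        # final embedding: concatenation of top-down and bottom-up states
        return {i: torch.cat([h_dn[i], h_up[i]], dim=-1) for i in order}
```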
                    {
                        "id": 73,
                        "string": "The encoder is related to the TreeLSTM encoder of Takase et al."
                    },
                    {
                        "id": 74,
                        "string": "(2016) , which however encodes labeled trees and does not use a top-down pass."
                    },
                    {
                        "id": 75,
                        "string": "Graph Convolutional Network Encoders Graph Convolutional Network (GCN; Duvenaud et al."
                    },
                    {
                        "id": 76,
                        "string": "2015; Kipf and Welling 2016) is a neural network architecture that learns embeddings of nodes in a graph by looking at its nearby nodes."
                    },
                    {
                        "id": 77,
                        "string": "In Natural Language Processing, GCNs have been used for Semantic Role Labeling , NMT (Bastings et al., 2017) , Named Entity Recognition (Cetoli et al., 2017) and text generation (Marcheggiani and Perez-Beltrachini, 2018) ."
                    },
                    {
                        "id": 78,
                        "string": "A graph-to-sequence neural network was first introduced by Xu et al."
                    },
                    {
                        "id": 79,
                        "string": "(2018) ."
                    },
                    {
                        "id": 80,
                        "string": "The authors review the similarities between their approach, GCN and another approach, based on GRUs (Li et al., 2015) ."
                    },
                    {
                        "id": 81,
                        "string": "The latter recently inspired a graphto-sequence architecture for AMR-to-text generation (Beck et al., 2018) ."
                    },
                    {
                        "id": 82,
                        "string": "Simultaneously, Song et al."
                    },
                    {
                        "id": 83,
                        "string": "(2018) proposed a graph encoder based on LSTMs."
                    },
                    {
                        "id": 84,
                        "string": "The architectures of Song et al."
                    },
                    {
                        "id": 85,
                        "string": "(2018) and Beck et al."
                    },
                    {
                        "id": 86,
                        "string": "(2018) are both based on the same core computation of a GCN, which sums over the embeddings of the immediate neighborhood of each node: h (k+1) i = σ j∈N (i) W (k) (j,i) h (k) j + b (k) , where h (k) i is the embeddings of node x i ∈ V at layer k, σ is a non-linear activation function, N (i) is the set of the immediate neighbors of x i , W (k) (j,i) ∈ R m×m and b (k) ∈ R m , with m being the size of the embeddings."
                    },
                    {
                        "id": 87,
                        "string": "It is possible to use recurrent networks to model the update of the node embeddings."
                    },
                    {
                        "id": 88,
                        "string": "Specifically, Beck et al."
                    },
                    {
                        "id": 89,
                        "string": "(2018) uses a GRU layer where the gates are modeled as GCN layers."
                    },
                    {
                        "id": 90,
                        "string": "Song et al."
                    },
                    {
                        "id": 91,
                        "string": "(2018) did not use the activation function σ and perform an LSTM update instead."
                    },
                    {
                        "id": 92,
                        "string": "The systems of Song et al."
                    },
                    {
                        "id": 93,
                        "string": "(2018) and Beck et al."
                    },
                    {
                        "id": 94,
                        "string": "(2018) further differ in design and implementation decisions such as in the use of edge label and edge directionality."
                    },
                    {
                        "id": 95,
                        "string": "Throughout the rest of the paper, we follow the traditional, non-recurrent, implementation of GCN also adopted in other NLP tasks Bastings et al., 2017; Cetoli et al., 2017) ."
                    },
                    {
                        "id": 96,
                        "string": "In our experiments, the node embeddings are computed as follows: h (k+1) i = σ j∈N (i) W (k) dir(j,i) h (k) j + b (k) , (1) where dir(j, i) indicates the direction of the edge between x j and x i (i.e., outgoing or incoming edge)."
                    },
                    {
                        "id": 97,
                        "string": "The hidden vectors from the last layer of the GCN network are finally used to represent each node in the graph: e_{1:N} = h^{(K)}_1, ..., h^{(K)}_N, where K is the number of GCN layers used, e_i ∈ R^d, and d is the size of the output embeddings."
                    },
                    {
                        "id": 101,
                        "string": "To regularize the models we apply dropout (Srivastava et al., 2014) as well as edge dropout ."
                    },
                    {
                        "id": 102,
                        "string": "We also include highway connections (Srivastava et al., 2015) between GCN layers."
                    },
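A minimal PyTorch sketch of one GCN layer in the spirit of Eq. (1), with direction-specific weights and a tanh highway connection (the ReLU and tanh choices follow the experimental setup described in Section 5). The dense 0/1 adjacency encoding is an assumption, and edge dropout is omitted for brevity.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, m):
        super().__init__()
        # one weight matrix per edge direction, as in Eq. (1)
        self.w = nn.ModuleDict({d: nn.Linear(m, m, bias=False)
                                for d in ("in", "out")})
        self.bias = nn.Parameter(torch.zeros(m))
        self.gate = nn.Linear(m, m)  # highway gate over the layer input

    def forward(self, h, adj):
        # h: (n, m) node embeddings; adj["in"]/adj["out"]: (n, n) matrices
        # with adj[d][i, j] = 1 iff j is a d-direction neighbor of i
        agg = sum(adj[d] @ self.w[d](h) for d in ("in", "out")) + self.bias
        out = torch.relu(agg)
        t = torch.sigmoid(self.gate(h))          # highway connection
        return t * torch.tanh(out) + (1 - t) * h
```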
                    {
                        "id": 103,
                        "string": "While GCN can naturally be used to encode graphs, they can also be applied to trees by removing reentrancies from the input graphs."
                    },
                    {
                        "id": 104,
                        "string": "In the experiments of Section 5, we explore GCN-based models both as graph encoders (reentrancies are maintained) as well as tree encoders (reentrancies are ignored)."
                    },
                    {
                        "id": 105,
                        "string": "Figure 2: Two ways of stacking recurrent and structural models."
                    },
                    {
                        "id": 133,
                        "string": "Left side: structure on top of sequence, where the structural encoders are applied to the hidden vectors computed by the BiLSTM."
                    },
                    {
                        "id": 134,
                        "string": "Right side: sequence on top of structure, where the structural encoder is used to create better embeddings which are then fed to the BiLSTM."
                    },
                    {
                        "id": 135,
                        "string": "The dotted lines refer to the process of converting the graph into a sequence or vice-versa."
                    },
                    {
                        "id": 136,
                        "string": "Stacking Encoders We aimed at stacking the explicit source of structural information provided by TreeLSTMs and GCNs with the sequential information which BiL-STMs extract well."
                    },
                    {
                        "id": 137,
                        "string": "This was shown to be effective for other tasks with both TreeLSTMs (Eriguchi et al., 2016; Chen et al., 2017) and GCNs Cetoli et al., 2017; Bastings et al., 2017) ."
                    },
                    {
                        "id": 138,
                        "string": "In previous work, the structural encoders (tree or graph) were used on top of the BiLSTM network: first, the input is passed through the sequential encoder, the output of which is then fed into the structural encoder."
                    },
                    {
                        "id": 139,
                        "string": "While we experiment with this approach, we also propose an alternative solution where the BiLSTM network is used on top of the structural encoder: the input embeddings are refined by exploiting the explicit structural information given by the graph."
                    },
                    {
                        "id": 140,
                        "string": "The refined embeddings are then fed into the BiLSTM networks."
                    },
                    {
                        "id": 141,
                        "string": "See Figure 2 for a graphical representation of the two approaches."
                    },
                    {
                        "id": 142,
                        "string": "In our experiments, we found this approach to be more effective."
                    },
                    {
                        "id": 143,
                        "string": "Compared to models that interleave structural and recurrent components such as the systems of Song et al."
                    },
                    {
                        "id": 144,
                        "string": "(2018) and Beck et al."
                    },
                    {
                        "id": 145,
                        "string": "(2018) , stacking the components allows us to test for their contributions more easily."
                    },
                    {
                        "id": 146,
                        "string": "Structure on Top of Sequence In this setup, BiLSTMs are used as in Section 3.1 to encode the linearized and anonymized AMR."
                    },
                    {
                        "id": 147,
                        "string": "The context provided by the BiLSTM is a sequential one."
                    },
                    {
                        "id": 148,
                        "string": "We then apply either GCN or TreeLSTM on the output of the BiLSTM, by initializing the GCN or TreeLSTM embeddings with the BiLSTM hidden states."
                    },
                    {
                        "id": 149,
                        "string": "We call these models SEQGCN and SEQTREELSTM."
                    },
                    {
                        "id": 150,
                        "string": "Sequence on Top of Structure We also propose a different approach for integrating graph information into the encoder, by swapping the order of the BiLSTM and the structural encoder: we aim at using the structured information provided by the AMR graph as a way to refine the original word representations."
                    },
                    {
                        "id": 151,
                        "string": "We first apply the structural encoder to the input graphs."
                    },
                    {
                        "id": 152,
                        "string": "The GCN or TreeLSTM representations are then fed into the BiLSTM."
                    },
                    {
                        "id": 153,
                        "string": "We call these models GCNSEQ and TREELSTMSEQ."
                    },
                    {
                        "id": 154,
                        "string": "The motivation behind this approach is that we know that BiLSTMs, given appropriate input embeddings, are very effective at encoding the input sequences."
                    },
                    {
                        "id": 155,
                        "string": "In order to exploit their strength, we do not amend their output but rather provide them with better input embeddings to start with, by explicitly taking the graph relations into account."
                    },
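A sketch of how GCNSEQ could wire the two components: the GCN first refines the node embeddings with graph structure, and the refined embeddings, read off in linearization order, are then fed to the BiLSTM. It reuses the GCNLayer sketch above; all wiring details are illustrative, and SEQGCN would apply the same components in the opposite order.

```python
import torch.nn as nn

class GCNSeq(nn.Module):
    def __init__(self, vocab_size, m=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, m)
        self.gcn = nn.ModuleList([GCNLayer(m) for _ in range(2)])
        self.bilstm = nn.LSTM(m, m // 2, batch_first=True,
                              bidirectional=True)

    def forward(self, node_ids, adj, linear_order):
        h = self.emb(node_ids)            # (n, m) node embeddings
        for layer in self.gcn:
            h = layer(h, adj)             # structural refinement
        # linear_order indexes nodes in linearized order; a reentrant
        # node may appear more than once
        x = h[linear_order].unsqueeze(0)  # (1, N, m)
        e, _ = self.bilstm(x)             # sequential encoding on top
        return e.squeeze(0)               # (N, m) attention memory
```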
                    {
                        "id": 156,
                        "string": "Experiments We use both BLEU (Papineni et al., 2002) and Meteor (Banerjee and Lavie, 2005) as evaluation metrics."
                    },
                    {
                        "id": 157,
                        "string": "1 We report results on the AMR dataset LDC2015E86 and LDC2017T10."
                    },
                    {
                        "id": 158,
                        "string": "All systems are implemented in PyTorch (Paszke et al., 2017) using the framework OpenNMT-py (Klein et al., 2017) ."
                    },
                    {
                        "id": 159,
                        "string": "Hyperparameters of each model were tuned on the development set of LDC2015E86."
                    },
                    {
                        "id": 160,
                        "string": "For the GCN components, we use two layers, ReLU activations, and tanh highway layers."
                    },
                    {
                        "id": 161,
                        "string": "We use single layer LSTMs."
                    },
                    {
                        "id": 162,
                        "string": "We train with SGD with the initial learning rate set to 1 and decay to 0.8."
                    },
                    {
                        "id": 163,
                        "string": "Batch size is set to 100."
                    },
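In PyTorch terms, the optimization setup might look like the sketch below; when the 0.8 decay is applied (per epoch, or when validation perplexity stalls) is not specified above, so the scheduler choice is an assumption.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # stand-in for the encoder/decoder model
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)  # initial lr = 1
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.8)
batch_size = 100
```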
                    {
                        "id": 164,
                        "string": "2 We first evaluate the overall performance of the models, after which we focus on two phenomena that we expect to benefit most from structural encoders: reentrancies and long-range dependencies."
                    },
                    {
                        "id": 165,
                        "string": "Table 1 shows the comparison on the development split of the LDC2015E86 dataset between sequential, tree and graph encoders."
                    },
                    {
                        "id": 166,
                        "string": "The sequential encoder (SEQ) is a re-implementation of Konstas et al."
                    },
                    {
                        "id": 167,
                        "string": "(2017) ."
                    },
                    {
                        "id": 168,
                        "string": "We test both approaches of stacking structural and sequential components: structure on top of sequence (SEQTREELSTM and SEQGCN), and sequence on top of structure (TREELSTMSEQ and GCNSEQ)."
                    },
                    {
                        "id": 169,
                        "string": "To inspect the effect of the sequential component, we run ablation tests by removing the RNNs altogether (TREELSTM and GCN)."
                    },
                    {
                        "id": 170,
                        "string": "GCN-based models are used both as tree encoders (reentrancies are removed) and graph encoders (reentrancies are maintained)."
                    },
                    {
                        "id": 171,
                        "string": "For both TreeLSTM-based and GCN-based models, our proposed approach of applying the structural encoder before the RNN achieves better scores."
                    },
                    {
                        "id": 172,
                        "string": "This is especially true for GCN-based models, for which we also note a drastic drop in performance when the RNN is removed, highlighting the importance of a sequential component."
                    },
                    {
                        "id": 173,
                        "string": "On the other hand, RNN layers seem to have less impact on TreeLSTM-based models."
                    },
                    {
                        "id": 174,
                        "string": "This outcome is not unexpected, as TreeLSTMs already use LSTM gates in their computation."
                    },
                    {
                        "id": 175,
                        "string": "The results show a clear advantage of tree and graph encoders over the sequential encoder."
                    },
                    {
                        "id": 176,
                        "string": "The best performing model is GCNSEQ, both as a tree and as a graph encoder, with the latter obtaining the highest results."
                    },
                    {
                        "id": 177,
                        "string": "Table 2 shows the comparison between our best sequential (SEQ), tree (GCNSEQ without reentrancies, henceforth called TREE) and graph en-   coders (GCNSEQ with reentrancies, henceforth called GRAPH) on the test set of LDC2015E86 and LDC2017T10."
                    },
                    {
                        "id": 178,
                        "string": "We also include state-of-the-art results reported on these datasets for sequential encoding (Konstas et al., 2017) and graph encoding (Song et al., 2018; Beck et al., 2018) ."
                    },
                    {
                        "id": 179,
                        "string": "3 In order to mitigate the effects of random seeds, we train five models with different random seeds and report the results of the median model, according to their BLEU score on the development set (Beck et al., 2018) ."
                    },
                    {
                        "id": 180,
                        "string": "We achieve state-of-the-art results with both tree and graph encoders, demonstrating the efficacy of our GCNSeq approach."
                    },
                    {
                        "id": 181,
                        "string": "The graph encoder outperforms the other systems and previous work on both datasets."
                    },
                    {
                        "id": 182,
                        "string": "These results demonstrate the benefit of structural encoders over purely sequential ones as well as the advantage of explicitly including reentrancies."
                    },
                    {
                        "id": 183,
                        "string": "The differences between our graph encoder and that of Song et al."
                    },
                    {
                        "id": 184,
                        "string": "(2018) and Beck et al."
                    },
                    {
                        "id": 185,
                        "string": "(2018) were discussed in Section 3.3."
                    },
                    {
                        "id": 186,
                        "string": "3 We run comparisons on systems without ensembling nor additional data."
                    },
                    {
                        "id": 187,
                        "string": "Reentrancies Overall scores show an advantage of graph encoder over tree and sequential encoders, but they do not shed light into how this is achieved."
                    },
                    {
                        "id": 188,
                        "string": "Because graph encoders are the only ones to model reentrancies explicitly, we expect them to deal better with these structures."
                    },
                    {
                        "id": 189,
                        "string": "It is, however, possible that the other models are capable of handling these structures implicitly."
                    },
                    {
                        "id": 190,
                        "string": "Moreover, the dataset contains a large number of examples that do not involve any reentrancies, as shown in Table 3 , so that the overall scores may not be representative of the ability of models to capture reentrancies."
                    },
                    {
                        "id": 191,
                        "string": "It is expected that the benefit of the graph models will be more evident for those examples containing more reentrancies."
                    },
                    {
                        "id": 192,
                        "string": "To test this hypothesis, we evaluate the various scenarios as a function of the number of reentrancies in each example, using the Meteor score as a metric."
                    },
                    {
                        "id": 193,
                        "string": "4 Table 4 shows that the gap between the graph encoder and the other encoders is widest for examples with more than six reentrancies."
                    },
                    {
                        "id": 194,
                        "string": "The Meteor score of the graph encoder for these cases is 3.1% higher than the one for the sequential encoder and 2.3% higher than the score achieved by the tree encoder, demonstrating that explicitly encoding reentrancies is more beneficial than the overall scores suggest."
                    },
                    {
                        "id": 195,
                        "string": "Interestingly, it can also be observed that the graph model outperforms the tree model also for examples with no reentrancies, where tree and graph structures are identical."
                    },
                    {
                        "id": 196,
                        "string": "This suggests that preserving reentrancies in the training data has other beneficial effects."
                    },
                    {
                        "id": 197,
                        "string": "In Section 5.2 we explore one: better handling of long-range dependencies."
                    },
                    {
                        "id": 198,
                        "string": "Manual Inspection In order to further explore how the graph model handles reentrancies differently from the other models, we performed a manual inspection of the models' output."
                    },
                    {
                        "id": 199,
                        "string": "We selected examples containing reentrancies, where the graph model performs better than the other models."
                    },
                    {
                        "id": 200,
                        "string": "These are shown in Table 5 ."
                    },
                    {
                        "id": 201,
                        "string": "In Example (1), we note that the graph model is the only one that correctly predicts the phrase he finds out."
                    },
                    {
                        "id": 202,
                        "string": "The wrong verb tense is due to the lack of tense information in AMR graphs."
                    },
                    {
                        "id": 203,
                        "string": "In the sequential model, the pronoun is chosen correctly, but the wrong verb is predicted, while in the tree model the pronoun is missing."
                    },
                    {
                        "id": 204,
                        "string": "In Example (2) , only the graph model correctly generates the phrase you tell them, while none of the models use people as the subject of the predicate can."
                    },
                    {
                        "id": 205,
                        "string": "In Example (3), both the graph and the sequential models deal well with the control structure caused by the recommend predicate."
                    },
                    {
                        "id": 206,
                        "string": "The sequential model, however, overgenerates a wh-clause."
                    },
                    {
                        "id": 207,
                        "string": "Finally, in Example (4) the tree and graph models deal correctly with the possessive pronoun to generate the phrase tell your ex, while the sequential model does not."
                    },
                    {
                        "id": 208,
                        "string": "Overall, we note that the graph model produces a more accurate output than sequential and tree models by generating the correct pronouns and mentions when control verbs and co-references are involved."
                    },
                    {
                        "id": 209,
                        "string": "Contrastive Pairs For a quantitative analysis of how the different models handle pronouns, we use a method to inspect NMT output for specific linguistic analysis based on contrastive pairs (Sennrich, 2017) ."
                    },
                    {
                        "id": 210,
                        "string": "Given a reference output sentence, a contrastive sentence is generated by introducing a mistake related to the phenomenon we are interested in evaluating."
                    },
                    {
                        "id": 211,
                        "string": "The probability that the model assigns to the reference sentence is then compared to that of the contrastive sentence."
                    },
                    {
                        "id": 212,
                        "string": "The accuracy of a model is determined by the percentage of examples in which the reference sentence has a higher probability than the contrastive sentence."
                    },
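A sketch of this scoring procedure; the seq2seq `model` interface (per-step vocabulary logits under teacher forcing) is hypothetical, and only the comparison logic follows the description above.

```python
import torch.nn.functional as F

def sequence_logprob(model, src, tgt):
    """Log-probability the model assigns to target token sequence `tgt`
    (a 1-D id tensor including BOS/EOS) given the AMR input `src`."""
    logits = model(src, tgt[:-1])                 # (T-1, vocab)
    logp = F.log_softmax(logits, dim=-1)
    return logp.gather(1, tgt[1:].unsqueeze(1)).sum().item()

def contrastive_accuracy(model, examples):
    """Fraction of (src, reference, contrastive) triples for which the
    reference sentence outscores the corrupted one."""
    wins = sum(sequence_logprob(model, s, ref) > sequence_logprob(model, s, con)
               for s, ref, con in examples)
    return wins / len(examples)
```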
                    {
                        "id": 213,
                        "string": "We produce contrastive examples by running CoreNLP (Manning et al., 2014) to identify coreferences, which are the primary cause of reentrancies, and introducing a mistake."
                    },
                    {
                        "id": 214,
                        "string": "When an expression has multiple mentions, the antecedent is repeated in the linearized AMR."
                    },
                    {
                        "id": 215,
                        "string": "For instance, the linearization of Figure 1(b) contains the token he twice, which instead appears only once in the sen-tence."
                    },
                    {
                        "id": 216,
                        "string": "This repetition may result in generating the token he twice, rather than using a pronoun to refer back to it."
                    },
                    {
                        "id": 217,
                        "string": "To investigate this possible mistake, we replace one of the mentions with the antecedent (e.g., John ate the pizza with his fingers is replaced with John ate the pizza with John fingers, which is ungrammatical and as such should be less likely)."
                    },
                    {
                        "id": 218,
                        "string": "An alternative hypothesis is that even when the generation system correctly decides to predict a pronoun, it selects the wrong one."
                    },
                    {
                        "id": 219,
                        "string": "To test for this, we produce contrastive examples where a pronoun is replaced by either a different type of pronoun (e.g., John ate the pizza with his fingers is replaced with John ate the pizza with him fingers) or by the same type of pronoun but for a different number (John ate the pizza with their fingers) or different gender (John ate the pizza with her fingers)."
                    },
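A small sketch of how such contrastive variants can be produced; the replacement table below covers only the example pronoun from the text and is an illustrative assumption, not the exact generation procedure used in the paper (which relies on CoreNLP coreference output).

```python
# Build contrastive variants of a sentence by corrupting one pronoun.
# The table mirrors the examples in the text (his -> him/their/her); a full
# system would cover all pronouns and all error categories.
REPLACEMENTS = {
    "his": {"type": "him", "number": "their", "gender": "her"},
}

def contrastive_variants(tokens, pronoun_index):
    pronoun = tokens[pronoun_index]
    variants = {}
    for error_type, replacement in REPLACEMENTS.get(pronoun, {}).items():
        corrupted = list(tokens)
        corrupted[pronoun_index] = replacement
        variants[error_type] = " ".join(corrupted)
    return variants

# "John ate the pizza with his fingers" -> him / their / her variants
print(contrastive_variants("John ate the pizza with his fingers".split(), 5))
```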
                    {
                        "id": 220,
                        "string": "Note from Figure 1 that the graph-structured AMR is the one that more directly captures the relation between finger and he, and as such it is expected to deal better with this type of mistakes."
                    },
                    {
                        "id": 221,
                        "string": "From the test split of LDC2017T10, we generated 251 contrastive examples due to antecedent replacements, 912 due to pronoun type replacements, 1840 due to number replacements and 95 due to gender replacements."
                    },
                    {
                        "id": 222,
                        "string": "5 The results are shown in Table 6 ."
                    },
                    {
                        "id": 223,
                        "string": "The sequential encoder performs surprisingly well at this task, with better or on par performance with respect to the tree encoder."
                    },
                    {
                        "id": 224,
                        "string": "The graph encoder outperforms the sequential encoder only for pronoun number and gender replacements."
                    },
                    {
                        "id": 225,
                        "string": "Future work is required to more precisely analyze if the different models cope with pronomial mentions in significantly different ways."
                    },
                    {
                        "id": 226,
                        "string": "Other approaches to inspect phenomena of co-reference and control verbs can also be explored, for instance by devising specific training objectives (Linzen et al., 2016) ."
                    },
                    {
                        "id": 227,
                        "string": "Long-range Dependencies When we encode a long sequence, interactions between items that appear distant from each other in the sequence are difficult to capture."
                    },
                    {
                        "id": 228,
                        "string": "The problem of long-range dependencies in natural language is well known for RNN architectures (Bengio et al., 1994) ."
                    },
                    {
                        "id": 229,
                        "string": "Indeed, the need to solve this problem motivated the introduction of LSTM models, which are known to model long-range dependencies better than traditional RNNs."
                    },
                    {
                        "id": 230,
                        "string": "[Table 5, Example (1): REF i dont tell him but he finds out , SEQ i did n't tell him but he was out .]"
                    },
                    {
                        "id": 231,
                        "string": "[TREE i do n't tell him but found out .]"
                    },
                    {
                        "id": 232,
                        "string": "[GRAPH i do n't tell him but he found out .]"
                    },
                    {
                        "id": 233,
                        "string": "Because the nodes in the graphs are not aligned with words in the sentence, AMR has no notion of distance between the nodes taking part in an edge."
                    },
                    {
                        "id": 234,
                        "string": "In order to define the length of an AMR edge, we resort to the AMR linearization discussed in Section 2."
                    },
                    {
                        "id": 235,
                        "string": "Given the linearization of the AMR $x_1, \\ldots, x_N$, as discussed in Section 2, and an edge between two nodes $x_i$ and $x_j$, the length of the edge is defined as $|j - i|$."
                    },
                    {
                        "id": 239,
                        "string": "For instance, in the AMR of Figure 1 , the edge between eat-01 and :instrument is a dependency of length five, because of the distance between the two words in the linearization eat-01 :arg0 he :arg1 pizza :instrument."
                    },
                    {
                        "id": 240,
                        "string": "We then compute the maximum dependency length for each AMR graph."
                    },
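A minimal sketch of this measure, assuming the AMR has already been linearized into tokens and its edges mapped to token index pairs:

```python
# Edge length on a linearized AMR: the absolute distance |j - i| between the
# positions of the two endpoints in the linearization.
def max_dependency_length(tokens, edges):
    """tokens: linearized AMR; edges: list of (i, j) token-index pairs."""
    return max(abs(j - i) for i, j in edges) if edges else 0

linearization = "eat-01 :arg0 he :arg1 pizza :instrument".split()
# The edge between eat-01 (index 0) and :instrument (index 5) has length 5,
# matching the example above.
print(max_dependency_length(linearization, [(0, 5)]))
```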
                    {
                        "id": 241,
                        "string": "To verify the hypothesis that long-range dependencies contribute to the improvements of graph models, we compare the models as a function of the maximum dependency length in each example."
                    },
                    {
                        "id": 242,
                        "string": "Longer dependencies are sometimes caused by reentrancies, as in the dependency between :part-of and he in Figure 1 ."
                    },
                    {
                        "id": 243,
                        "string": "To verify that the contribution in terms of longer dependencies is complementary to that of reentrancies, we exclude sentences with reentrancies from this analysis."
                    },
                    {
                        "id": 244,
                        "string": "Table 7 shows the statistics for this measure."
                    },
                    {
                        "id": 245,
                        "string": "Results are shown in Table 8 ."
                    },
                    {
                        "id": 246,
                        "string": "The graph encoder always outperforms both the sequential and the tree encoder."
                    },
                    {
                        "id": 247,
                        "string": "The gap with the sequential encoder increases for longer dependencies."
                    },
                    {
                        "id": 248,
                        "string": "This indicates that longer dependencies are an important factor in improving results for both tree and graph encoders, especially for the latter."
                    },
                    {
                        "id": 249,
                        "string": "Conclusions We introduced models for AMR-to-text generation with the purpose of investigating the difference between sequential, tree and graph encoders."
                    },
                    {
                        "id": 250,
                        "string": "We showed that encoding reentrancies improves overall performance."
                    },
                    {
                        "id": 251,
                        "string": "We observed bigger benefits when the input AMR graphs have a larger number of reentrant structures and longer dependencies."
                    },
                    {
                        "id": 252,
                        "string": "Our best graph encoder, which consists of a GCN wired to a BiLSTM network, improves over the state of the art on all tested datasets."
                    },
                    {
                        "id": 253,
                        "string": "We inspected the differences between the models, especially in terms of co-references and control structures."
                    },
                    {
                        "id": 254,
                        "string": "Further exploration of graph encoders is left to future work, which may result crucial to improve performance further."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 20
                    },
                    {
                        "section": "Input Representations",
                        "n": "2",
                        "start": 21,
                        "end": 54
                    },
                    {
                        "section": "Encoders",
                        "n": "3",
                        "start": 55,
                        "end": 56
                    },
                    {
                        "section": "Recurrent Neural Network Encoders",
                        "n": "3.1",
                        "start": 57,
                        "end": 60
                    },
                    {
                        "section": "TreeLSTM Encoders",
                        "n": "3.2",
                        "start": 61,
                        "end": 74
                    },
                    {
                        "section": "Graph Convolutional Network Encoders",
                        "n": "3.3",
                        "start": 75,
                        "end": 135
                    },
                    {
                        "section": "Stacking Encoders",
                        "n": "4",
                        "start": 136,
                        "end": 145
                    },
                    {
                        "section": "Structure on Top of Sequence",
                        "n": "4.1",
                        "start": 146,
                        "end": 149
                    },
                    {
                        "section": "Sequence on Top of Structure",
                        "n": "4.2",
                        "start": 150,
                        "end": 155
                    },
                    {
                        "section": "Experiments",
                        "n": "5",
                        "start": 156,
                        "end": 186
                    },
                    {
                        "section": "Reentrancies",
                        "n": "5.1",
                        "start": 187,
                        "end": 197
                    },
                    {
                        "section": "Manual Inspection",
                        "n": "5.1.1",
                        "start": 198,
                        "end": 208
                    },
                    {
                        "section": "Contrastive Pairs",
                        "n": "5.1.2",
                        "start": 209,
                        "end": 226
                    },
                    {
                        "section": "Long-range Dependencies",
                        "n": "5.2",
                        "start": 227,
                        "end": 248
                    },
                    {
                        "section": "Conclusions",
                        "n": "6",
                        "start": 249,
                        "end": 254
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/992-Table4-1.png",
                        "caption": "Table 4: Differences, with respect to the sequential baseline, in the Meteor score of the test split of LDC2017T10 as a function of the number of reentrancies.",
                        "page": 5,
                        "bbox": {
                            "x1": 331.68,
                            "x2": 501.12,
                            "y1": 62.4,
                            "y2": 143.04
                        }
                    },
                    {
                        "filename": "../figure/image/992-Table2-1.png",
                        "caption": "Table 2: Scores on the test split of LDC2015E86 and LDC2017T10. TREE is the tree-based GCNSEQ and GRAPH is the graph-based GCNSEQ.",
                        "page": 5,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 192.48,
                            "y2": 253.92
                        }
                    },
                    {
                        "filename": "../figure/image/992-Table3-1.png",
                        "caption": "Table 3: Counts of reentrancies for the development and test split of LDC2017T10",
                        "page": 5,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 285.12,
                            "y1": 316.8,
                            "y2": 384.0
                        }
                    },
                    {
                        "filename": "../figure/image/992-Table6-1.png",
                        "caption": "Table 6: Accuracy (%) of models, on the test split of LDC201T10, for different categories of contrastive errors: antecedent (Antec.), pronoun type (Type), number (Num.), and gender (Gender).",
                        "page": 7,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.08,
                            "y1": 350.88,
                            "y2": 418.08
                        }
                    },
                    {
                        "filename": "../figure/image/992-Table7-1.png",
                        "caption": "Table 7: Counts of longest dependencies for the development and test split of LDC2017T10",
                        "page": 7,
                        "bbox": {
                            "x1": 79.67999999999999,
                            "x2": 283.2,
                            "y1": 496.79999999999995,
                            "y2": 563.04
                        }
                    },
                    {
                        "filename": "../figure/image/992-Table8-1.png",
                        "caption": "Table 8: Differences, with respect to the sequential baseline, in the Meteor score of the test split of LDC2017T10 as a function of the maximum dependency length.",
                        "page": 7,
                        "bbox": {
                            "x1": 96.0,
                            "x2": 266.4,
                            "y1": 618.24,
                            "y2": 699.36
                        }
                    },
                    {
                        "filename": "../figure/image/992-Table5-1.png",
                        "caption": "Table 5: Examples of generation from AMR graphs containing reentrancies. REF is the reference sentence.",
                        "page": 7,
                        "bbox": {
                            "x1": 94.56,
                            "x2": 503.03999999999996,
                            "y1": 62.4,
                            "y2": 303.36
                        }
                    },
                    {
                        "filename": "../figure/image/992-Figure2-1.png",
                        "caption": "Figure 2: Two ways of stacking recurrent and structural models. Left side: structure on top of sequence, where the structural encoders are applied to the hidden vectors computed by the BiLSTM. Right side: sequence on top of structure, where the structural encoder is used to create better embeddings which are then fed to the BiLSTM. The dotted lines refer to the process of converting the graph into a sequence or vice-versa.",
                        "page": 3,
                        "bbox": {
                            "x1": 333.59999999999997,
                            "x2": 499.2,
                            "y1": 62.879999999999995,
                            "y2": 276.0
                        }
                    },
                    {
                        "filename": "../figure/image/992-Table1-1.png",
                        "caption": "Table 1: BLEU and Meteor (%) scores on the development split of LDC2015E86.",
                        "page": 4,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 524.16,
                            "y1": 62.4,
                            "y2": 235.2
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-10"
        },
        {
            "slides": {
                "0": {
                    "title": "Sentence Summarization",
                    "text": [
                        "Generate a shorter version of a given sentence",
                        "Preserve its original meaning",
                        "Design or refine appealing headlines"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "1": {
                    "title": "Seq2seq Summarization",
                    "text": [
                        "Require less human efforts",
                        "Achieve the state-of-the-art performance"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Problems of Seq2seq Summarization",
                    "text": [
                        "Solely depend on the source text to generate summaries",
                        "3% of summaries 3 words",
                        "4 summaries repeat a word for 99 times",
                        "Focus on extraction rather than abstraction"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Template based Summarization",
                    "text": [
                        "A traditional approach to abstractive summarization",
                        "Fill an incomplete with the input text using the manually defined rules",
                        "Be able to produce fluent and informative summaries",
                        "Template [REGION] shares [open/close] [NUMBER] percent [lower/higher]",
                        "Source hong kong shares closed down #.# percent on friday due to an absence of buyers and fresh incentives .",
                        "Summary hong kong shares close #.# percent lower"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Problems of Template based Summarization",
                    "text": [
                        "Template construction is extremely time-consuming and requires a plenty of domain knowledge",
                        "It is impossible to develop all templates for summaries in various domains"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Motivation",
                    "text": [
                        "Use actual summaries in the training datasets as soft templates to combine seq2seq and template-based summarization",
                        "Seq2seq Guide the generation of seq2seq",
                        "Template-based Automatically learn to rewrite from soft templates"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "7": {
                    "title": "Contributions",
                    "text": [
                        "Introduce soft templates to improve the readability and stability in seq2seq",
                        "Extend seq2seq to conduct template reranking and template-aware summary generation simultaneously",
                        "Fuse the IR-based ranking technique and seq2seq-based generation technique, utilizing both supervisions",
                        "Demonstrate potential to generate diversely"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "8": {
                    "title": "Flow Chat",
                    "text": [
                        "Retrieve Search actual summaries as candidate soft templates",
                        "Rerank Find out the most proper soft template from the candidates",
                        "Rewrite Generate the summary based on source sentence and soft template",
                        "Retrieve Rerank Rewrite Sentence Candidates Template Summary"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": [
                        "figure/image/993-Figure1-1.png"
                    ]
                },
                "14": {
                    "title": "Setting",
                    "text": [
                        "Dataset Gigaword (sentence, headline) pairs",
                        "Dataset Train Dev. Test"
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": [
                        "figure/image/993-Table1-1.png"
                    ]
                },
                "15": {
                    "title": "ROUGE Performance",
                    "text": [
                        "Re3Sum significantly outperforms other approaches",
                        "Model ROUGE-1 ROUGE-2 ROUGE-L"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": [
                        "figure/image/993-Table3-1.png"
                    ]
                },
                "16": {
                    "title": "Linguistic Quality Performance",
                    "text": [
                        "Low LEN DIF and LESS 3 Stable",
                        "Low NEW NE and NEW UP Faithful",
                        "Item Template OpenNMT Re3Sum"
                    ],
                    "page_nums": [
                        20
                    ],
                    "images": [
                        "figure/image/993-Table5-1.png"
                    ]
                },
                "17": {
                    "title": "Effects of Template",
                    "text": [
                        "Performance highly relies on templates",
                        "The rewriting ability is strong",
                        "Type ROUGE-1 ROUGE-2 ROUGE-L"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": [
                        "figure/image/993-Table6-1.png"
                    ]
                },
                "18": {
                    "title": "Generation Diversity",
                    "text": [
                        "OpenNMT Beam search n-best outputs",
                        "Re3Sum Provide different templates",
                        "Source anny ainge said thursday he had two one-hour meetings with the new owners of the boston celtics but no deal has been completed for him to return to the franchise .",
                        "Target ainge says no deal completed with celtics major says no deal with spain on gibraltar Templates roush racing completes deal with red sox owner",
                        "Re3Sum ainge says no deal done with celtics ainge talks with new owners ainge talks with celtics owners OpenNMT ainge talks with new owners"
                    ],
                    "page_nums": [
                        22
                    ],
                    "images": []
                },
                "19": {
                    "title": "Conclusion",
                    "text": [
                        "Introduce soft templates as additional input to guide seq2seq summarization",
                        "Combine IR-based ranking techniques and seq2seq-based generation techniques to utilize both supervisions",
                        "Improve informativeness, stability, readability and diversity"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                }
            },
            "paper_title": "Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization",
            "paper_id": "993",
            "paper": {
                "title": "Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization",
                "abstract": "Most previous seq2seq summarization systems purely depend on the source text to generate summaries, which tends to work unstably. Inspired by the traditional template-based summarization approaches, this paper proposes to use existing summaries as soft templates to guide the seq2seq model. To this end, we use a popular IR platform to Retrieve proper summaries as candidate templates. Then, we extend the seq2seq framework to jointly conduct template Reranking and templateaware summary generation (Rewriting). Experiments show that, in terms of informativeness, our model significantly outperforms the state-of-the-art methods, and even soft templates themselves demonstrate high competitiveness. In addition, the import of high-quality external summaries improves the stability and readability of generated summaries.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The exponentially growing online information has necessitated the development of effective automatic summarization systems."
                    },
                    {
                        "id": 1,
                        "string": "In this paper, we focus on an increasingly intriguing task, i.e., abstractive sentence summarization (Rush et al., 2015a) , which generates a shorter version of a given sentence while attempting to preserve its original meaning."
                    },
                    {
                        "id": 2,
                        "string": "It can be used to design or refine appealing headlines."
                    },
                    {
                        "id": 3,
                        "string": "Recently, the application of the attentional sequence-to-sequence (seq2seq) framework has attracted growing attention and achieved state-of-the-art performance on this task (Rush et al., 2015a; Chopra et al., 2016; Nallapati et al., 2016) ."
                    },
                    {
                        "id": 4,
                        "string": "Most previous seq2seq models purely depend on the source text to generate summaries."
                    },
                    {
                        "id": 5,
                        "string": "However, as reported in many studies (Koehn and Knowles, 2017) , the performance of a seq2seq model deteriorates quickly with the increase of the length of generation."
                    },
                    {
                        "id": 6,
                        "string": "Our experiments also show that seq2seq models tend to \"lose control\" sometimes."
                    },
                    {
                        "id": 7,
                        "string": "For example, 3% of summaries contain less than 3 words, while there are 4 summaries repeating a word for even 99 times."
                    },
                    {
                        "id": 8,
                        "string": "These results largely reduce the informativeness and readability of the generated summaries."
                    },
                    {
                        "id": 9,
                        "string": "In addition, we find seq2seq models usually focus on copying source words in order, without any actual \"summarization\"."
                    },
                    {
                        "id": 10,
                        "string": "Therefore, we argue that, the free generation based on the source sentence is not enough for a seq2seq model."
                    },
                    {
                        "id": 11,
                        "string": "Template based summarization (e.g., Zhou and Hovy (2004) ) is a traditional approach to abstractive summarization."
                    },
                    {
                        "id": 12,
                        "string": "In general, a template is an incomplete sentence which can be filled with the input text using the manually defined rules."
                    },
                    {
                        "id": 13,
                        "string": "For instance, a concise template to conclude the stock market quotation is: [REGION] shares [open/close] [NUMBER] percent [lower/higher], e.g., \"hong kong shares close #.# percent lower\"."
                    },
                    {
                        "id": 14,
                        "string": "Since the templates are written by humans, the produced summaries are usually fluent and informative."
                    },
                    {
                        "id": 15,
                        "string": "However, the construction of templates is extremely time-consuming and requires a plenty of domain knowledge."
                    },
                    {
                        "id": 16,
                        "string": "Moreover, it is impossible to develop all templates for summaries in various domains."
                    },
                    {
                        "id": 17,
                        "string": "Inspired by retrieve-based conversation systems (Ji et al., 2014) , we assume the golden summaries of the similar sentences can provide a reference point to guide the input sentence summarization process."
                    },
                    {
                        "id": 18,
                        "string": "We call these existing summaries soft templates since no actual rules are nee-ded to build new summaries from them."
                    },
                    {
                        "id": 19,
                        "string": "Due to the strong rewriting ability of the seq2seq framework (Cao et al., 2017a) , in this paper, we propose to combine the seq2seq and template based summarization approaches."
                    },
                    {
                        "id": 20,
                        "string": "We call our summarization system Re 3 Sum, which consists of three modules: Retrieve, Rerank and Rewrite."
                    },
                    {
                        "id": 21,
                        "string": "We utilize a widely-used Information Retrieval (IR) platform to find out candidate soft templates from the training corpus."
                    },
                    {
                        "id": 22,
                        "string": "Then, we extend the seq2seq model to jointly learn template saliency measurement (Rerank) and final summary generation (Rewrite)."
                    },
                    {
                        "id": 23,
                        "string": "Specifically, a Recurrent Neural Network (RNN) encoder is applied to convert the input sentence and each candidate template into hidden states."
                    },
                    {
                        "id": 24,
                        "string": "In Rerank, we measure the informativeness of a candidate template according to its hidden state relevance to the input sentence."
                    },
                    {
                        "id": 25,
                        "string": "The candidate template with the highest predicted informativeness is regarded as the actual soft template."
                    },
                    {
                        "id": 26,
                        "string": "In Rewrite, the summary is generated according to the hidden states of both the sentence and template."
                    },
                    {
                        "id": 27,
                        "string": "We conduct extensive experiments on the popular Gigaword dataset (Rush et al., 2015b) ."
                    },
                    {
                        "id": 28,
                        "string": "Experiments show that, in terms of informativeness, Re 3 Sum significantly outperforms the state-ofthe-art seq2seq models, and even soft templates themselves demonstrate high competitiveness."
                    },
                    {
                        "id": 29,
                        "string": "In addition, the import of high-quality external summaries improves the stability and readability of generated summaries."
                    },
                    {
                        "id": 30,
                        "string": "The contributions of this work are summarized as follows: • We propose to introduce soft templates as additional input to improve the readability and stability of seq2seq summarization systems."
                    },
                    {
                        "id": 31,
                        "string": "Code and results can be found at http://www4.comp.polyu.edu.hk/~cszqcao/ • We extend the seq2seq framework to conduct template reranking and template-aware summary generation simultaneously."
                    },
                    {
                        "id": 33,
                        "string": "• We fuse the popular IR-based and seq2seqbased summarization systems, which fully utilize the supervisions from both sides."
                    },
                    {
                        "id": 34,
                        "string": "Method: As shown in Fig. 1, we choose the one with the maximal actual saliency score in C, which speeds up convergence and shows no obvious side effect in the experiments."
                    },
                    {
                        "id": 36,
                        "string": "Then, we jointly conduct reranking and rewriting through a shared encoder."
                    },
                    {
                        "id": 37,
                        "string": "Specifically, both the sentence x and the soft template r are converted into hidden states with a RNN encoder."
                    },
                    {
                        "id": 38,
                        "string": "In the Rerank module, we measure the saliency of r according to its hidden state relevance to x."
                    },
                    {
                        "id": 39,
                        "string": "In the Rewrite module, a RNN decoder combines the hidden states of x and r to generate a summary y."
                    },
                    {
                        "id": 40,
                        "string": "More details will be described in the rest of this section Retrieve The purpose of this module is to find out candidate templates from the training corpus."
                    },
                    {
                        "id": 41,
                        "string": "We assume that similar sentences should hold similar summary patterns."
                    },
                    {
                        "id": 42,
                        "string": "Therefore, given a sentence x, we find out its analogies in the corpus and pick their summaries as the candidate templates."
                    },
                    {
                        "id": 43,
                        "string": "Since the size of our dataset is quite large (over 3M), we leverage the widely-used Information Retrieve (IR) system Lucene 1 to index and search efficiently."
                    },
                    {
                        "id": 44,
                        "string": "We keep the default settings of Lucene 2 to build the IR system."
                    },
                    {
                        "id": 45,
                        "string": "For each input sentence, we select top 30 searching results as candidate templates."
                    },
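The paper uses Lucene with default settings; as a rough stand-in, the sketch below approximates the Retrieve step with a TF-IDF index (an assumption for illustration, not the actual Lucene pipeline):

```python
# Approximate Retrieve: index training sentences, then return the summaries
# of the most similar sentences as candidate soft templates.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_retriever(train_sentences, train_summaries, top_k=30):
    vectorizer = TfidfVectorizer()
    index = vectorizer.fit_transform(train_sentences)

    def retrieve(query):
        scores = cosine_similarity(vectorizer.transform([query]), index)[0]
        best = scores.argsort()[::-1][:top_k]
        # Summaries of the most similar sentences become candidate templates.
        return [train_summaries[i] for i in best]

    return retrieve
```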
                    {
                        "id": 46,
                        "string": "Jointly Rerank and Rewrite To conduct template-aware seq2seq generation (rewriting), it is a necessary step to encode both the source sentence x and soft template r into hidden states."
                    },
                    {
                        "id": 47,
                        "string": "Considering that the matching networks based on hidden states have demonstrated the strong ability to measure the relevance of two pieces of texts (e.g., ), we propose to jointly conduct reranking and rewriting through a shared encoding step."
                    },
                    {
                        "id": 48,
                        "string": "Specifically, we employ a bidirectional Recurrent Neural Network (BiRNN) encoder  to read x and r. Take the sentence x as an example."
                    },
                    {
                        "id": 49,
                        "string": "Its hidden state of the forward RNN at timestamp i can be represented by: $\\overrightarrow{h}^x_i = \\text{RNN}(x_i, \\overrightarrow{h}^x_{i-1})$ (1)."
                    },
                    {
                        "id": 50,
                        "string": "[Figure 1: Flow chart of the proposed method. We use the dashed line for Retrieve since there is an IR system embedded.]"
                    },
                    {
                        "id": 51,
                        "string": "The BiRNN consists of a forward RNN and a backward RNN."
                    },
                    {
                        "id": 52,
                        "string": "Suppose the corresponding outputs are $[\\overrightarrow{h}^x_1; \\cdots; \\overrightarrow{h}^x_{-1}]$ and $[\\overleftarrow{h}^x_1; \\cdots; \\overleftarrow{h}^x_{-1}]$, respectively, where the index \"−1\" stands for the last element."
                    },
                    {
                        "id": 53,
                        "string": "Then, the composite hidden state of a word is the concatenation of the two RNN representations, i.e., $h^x_i = [\\overrightarrow{h}^x_i; \\overleftarrow{h}^x_i]$."
                    },
                    {
                        "id": 54,
                        "string": "The entire representation for the source sentence is $[h^x_1; \\cdots; h^x_{-1}]$."
                    },
                    {
                        "id": 55,
                        "string": "Since a soft template r can also be regarded as a readable concise sentence, we use the same BiRNN encoder to convert it into hidden states $[h^r_1; \\cdots; h^r_{-1}]$."
                    },
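A minimal PyTorch sketch of the shared encoder; the two-layer, 500-unit configuration follows the implementation details reported later in the paper, while the remaining choices are assumptions:

```python
import torch.nn as nn

class SharedEncoder(nn.Module):
    """BiRNN encoder applied to both the sentence x and the template r."""
    def __init__(self, vocab_size, dim=500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.birnn = nn.LSTM(dim, dim, num_layers=2,
                             batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # states: [batch, len, 2*dim]; position i concatenates the forward
        # and backward hidden states, i.e. h_i = [h_fwd_i; h_bwd_i]
        states, _ = self.birnn(self.embed(token_ids))
        return states
```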
                    {
                        "id": 56,
                        "string": "Rerank In Retrieve, the template candidates are ranked according to the text similarity between the corresponding indexed sentences and the input sentence."
                    },
                    {
                        "id": 57,
                        "string": "However, for the summarization task, we expect the soft template r resembles the actual summary y * as much as possible."
                    },
                    {
                        "id": 58,
                        "string": "Here we use the widely-used summarization evaluation metrics ROUGE (Lin, 2004) to measure the actual saliency s * (r, y * ) (see Section 3.2)."
                    },
                    {
                        "id": 59,
                        "string": "We utilize the hidden states of x and r to predict the saliency s of the template."
                    },
                    {
                        "id": 60,
                        "string": "Specifically, we regard the output of the BiRNN as the representation of the sentence or template: $h_x = [\\overleftarrow{h}^x_1; \\overrightarrow{h}^x_{-1}]$ (2) and $h_r = [\\overleftarrow{h}^r_1; \\overrightarrow{h}^r_{-1}]$ (3). Next, we use a Bilinear network to predict the saliency of the template for the input sentence."
                    },
                    {
                        "id": 61,
                        "string": "$s(r, x) = \\text{sigmoid}(h_r W_s h_x^T + b_s)$ (4), where $W_s$ and $b_s$ are parameters of the Bilinear network, and we add the sigmoid activation function to make the range of $s$ consistent with the actual saliency $s^*$."
                    },
                    {
                        "id": 62,
                        "string": "Bilinear networks have been reported to outperform multi-layer feed-forward networks in relevance measurement."
                    },
                    {
                        "id": 63,
                        "string": "As shown later, the difference between $s$ and $s^*$ will provide additional supervision for the seq2seq framework."
                    },
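Equation (4) can be sketched directly with PyTorch's nn.Bilinear, which parameterizes exactly the form $h_r W_s h_x^T + b_s$; the dimension here is an assumption matching a 2x500 bidirectional encoder output:

```python
import torch
import torch.nn as nn

class Saliency(nn.Module):
    """Predicted template saliency s(r, x) = sigmoid(h_r W_s h_x^T + b_s)."""
    def __init__(self, dim=1000):  # 2 * 500 for the bidirectional encoder
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, h_r, h_x):
        # h_r, h_x: [batch, dim]; returns a saliency score in (0, 1)
        return torch.sigmoid(self.bilinear(h_r, h_x)).squeeze(-1)
```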
                    {
                        "id": 64,
                        "string": "Rewrite The soft template r selected by the Rerank module has already competed with the state-of-the-art method in terms of ROUGE evaluation (see Table 4 )."
                    },
                    {
                        "id": 65,
                        "string": "However, r usually contains a lot of named entities that does not appear in the source (see Table 5 )."
                    },
                    {
                        "id": 66,
                        "string": "Consequently, it is hard to ensure that the soft templates are faithful to the input sentences."
                    },
                    {
                        "id": 67,
                        "string": "Therefore, we leverage the strong rewriting ability of the seq2seq model to generate more faithful and informative summaries."
                    },
                    {
                        "id": 68,
                        "string": "Specifically, since the input of our system consists of both the sentence and the soft template, we use concatenation to combine their hidden states: $H_c = [h^x_1; \\cdots; h^x_{-1}; h^r_1; \\cdots; h^r_{-1}]$ (5). The combined hidden states are fed into the prevailing attentional RNN decoder to generate the decoding hidden state at position t: $s_t = \\text{Att-RNN}(s_{t-1}, y_{t-1}, H_c)$ (6), where $y_{t-1}$ is the previous output summary word."
                    },
                    {
                        "id": 69,
                        "string": "Finally, a softmax layer is introduced to predict the current summary word: $o_t = \\text{softmax}(s_t W_o)$ (7), where $W_o$ is a parameter matrix."
                    },
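A condensed sketch of one template-aware decoding step (Equations 5-7); the attention parameterization below is a generic bilinear/dot-product variant chosen for brevity, not necessarily the exact Att-RNN used by the authors:

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    def __init__(self, vocab_size, dim=500, enc_dim=1000):
        super().__init__()
        self.cell = nn.LSTMCell(dim + enc_dim, dim)
        self.attn = nn.Linear(dim, enc_dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, y_prev_embed, state, H_c):
        """H_c: [batch, len, enc_dim], concatenated sentence+template states."""
        h, c = state
        # Attention weights over H_c, then a context vector
        scores = torch.bmm(H_c, self.attn(h).unsqueeze(-1)).squeeze(-1)
        context = torch.bmm(scores.softmax(-1).unsqueeze(1), H_c).squeeze(1)
        h, c = self.cell(torch.cat([y_prev_embed, context], dim=-1), (h, c))
        return self.out(h).log_softmax(-1), (h, c)  # log o_t and new state
```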
                    {
                        "id": 70,
                        "string": "Learning: There are two types of costs in our system."
                    },
                    {
                        "id": 71,
                        "string": "For Rerank, we expect the predicted saliency $s(r, x)$ to be close to the actual saliency $s^*(r, y^*)$."
                    },
                    {
                        "id": 72,
                        "string": "Therefore, $J_R(\\theta) = \\text{CE}(s(r, x), s^*(r, y^*)) = -s^* \\log s - (1 - s^*) \\log(1 - s)$ (8), where $\\theta$ stands for the model parameters."
                    },
                    {
                        "id": 73,
                        "string": "For Rewrite, the learning goal is to maximize the estimated probability of the actual summary $y^*$."
                    },
                    {
                        "id": 74,
                        "string": "We adopt the common negative log-likelihood (NLL) as the loss function: $J_G(\\theta) = -\\log(p(y^*|x, r)) = -\\sum_t \\log(o_t[y^*_t])$ (9). To make full use of supervisions from both sides, we combine the above two costs as the final loss function: $J(\\theta) = J_R(\\theta) + J_G(\\theta)$ (10). We use mini-batch Stochastic Gradient Descent (SGD) to tune the model parameters."
                    },
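The two costs combine additively; a minimal sketch, assuming the actual saliency $s^*$ has been normalized into $[0, 1]$ so the cross-entropy in (8) is well defined:

```python
import torch.nn.functional as F

def joint_loss(pred_saliency, true_saliency, token_logits, target_ids):
    """J(theta) = J_R + J_G, as in Equations (8)-(10)."""
    # Rerank: cross-entropy between predicted and actual template saliency.
    j_rerank = F.binary_cross_entropy(pred_saliency, true_saliency)
    # Rewrite: NLL of the reference summary tokens.
    j_rewrite = F.cross_entropy(
        token_logits.view(-1, token_logits.size(-1)), target_ids.view(-1)
    )
    return j_rerank + j_rewrite
```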
                    {
                        "id": 75,
                        "string": "The batch size is 64."
                    },
                    {
                        "id": 76,
                        "string": "To enhance generalization, we introduce dropout (Srivastava et al., 2014) with probability p = 0.3 for the RNN layers."
                    },
                    {
                        "id": 77,
                        "string": "The initial learning rate is 1, and it will decay by 50% if the generation loss does not decrease on the validation set."
                    },
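The learning-rate schedule can be sketched as a simple halve-on-plateau rule:

```python
def update_lr(lr, val_losses):
    """Start from lr = 1; halve it when validation generation loss stalls."""
    if len(val_losses) >= 2 and val_losses[-1] >= val_losses[-2]:
        return lr * 0.5
    return lr
```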
                    {
                        "id": 78,
                        "string": "Experiments Datasets We conduct experiments on the Annotated English Gigaword corpus, as with (Rush et al., 2015b) ."
                    },
                    {
                        "id": 79,
                        "string": "This parallel corpus is produced by pairing the first sentence in the news article and its headline as the summary with heuristic rules."
                    },
                    {
                        "id": 80,
                        "string": "All the training, development and test datasets can be downloaded at https://github.com/harvardnlp/sent-summary."
                    },
                    {
                        "id": 82,
                        "string": "The statistics of the Gigaword corpus is presented in Table 1."
                    },
                    {
                        "id": 83,
                        "string": "AvgSourceLen is the average input sentence length and AvgTargetLen is the average summary length."
                    },
                    {
                        "id": 84,
                        "string": "COPY means the copy ratio in the summaries (without stopwords)."
                    },
                    {
                        "id": 85,
                        "string": "Evaluation Metrics We adopt ROUGE (Lin, 2004) for automatic evaluation."
                    },
                    {
                        "id": 86,
                        "string": "ROUGE has been the standard evaluation metric for DUC shared tasks since 2004."
                    },
                    {
                        "id": 87,
                        "string": "It measures the quality of a summary by computing the overlapping lexical units between the candidate summary and actual summaries, such as uni-grams, bi-grams and the longest common subsequence (LCS)."
                    },
                    {
                        "id": 88,
                        "string": "Following common practice, we report ROUGE-1 (uni-gram), ROUGE-2 (bi-gram) and ROUGE-L (LCS) F1 scores in the following experiments."
                    },
                    {
                        "id": 89,
                        "string": "We also measure the actual saliency of a candidate template r with its combined ROUGE scores given the actual summary y * : s * (r, y * ) = RG(r, y * ) + RG(r, y * ), (11) where \"RG\" stands for ROUGE for short."
                    },
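A sketch of Equation (11) using the rouge-score package as an assumed stand-in for the authors' ROUGE tooling; using the F1 variant here is also an assumption:

```python
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"])

def actual_saliency(template, reference):
    """s*(r, y*) = ROUGE-1(r, y*) + ROUGE-2(r, y*)."""
    scores = _scorer.score(reference, template)
    return scores["rouge1"].fmeasure + scores["rouge2"].fmeasure
```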
                    {
                        "id": 90,
                        "string": "ROUGE mainly evaluates informativeness."
                    },
                    {
                        "id": 91,
                        "string": "We also introduce a series of metrics to measure the summary quality from the following aspects: LEN DIF The absolute value of the length difference between the generated summaries and the actual summaries."
                    },
                    {
                        "id": 92,
                        "string": "We use mean value ± standard deviation to illustrate this item."
                    },
                    {
                        "id": 93,
                        "string": "The average value partially reflects the readability and informativeness, while the standard deviation links to stability."
                    },
                    {
                        "id": 94,
                        "string": "LESS 3 The number of the generated summaries, which contains less than three tokens."
                    },
                    {
                        "id": 95,
                        "string": "These extremely short summaries are usually unreadable."
                    },
                    {
                        "id": 96,
                        "string": "COPY The proportion of the summary words (without stopwords) copied from the source sentence."
                    },
                    {
                        "id": 97,
                        "string": "A seriously large copy ratio indicates that the summarization system pays more attention to compression rather than required abstraction."
                    },
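The COPY metric can be sketched as below; the stopword list is a small illustrative stand-in for whatever list the authors used:

```python
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "on", "for"}

def copy_ratio(summary_tokens, source_tokens):
    """Fraction of non-stopword summary tokens that occur in the source."""
    content = [t for t in summary_tokens if t.lower() not in STOPWORDS]
    if not content:
        return 0.0
    source = {t.lower() for t in source_tokens}
    return sum(t.lower() in source for t in content) / len(content)
```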
                    {
                        "id": 98,
                        "string": "NEW NE The number of the named entities that do not appear in the source sentence or actual summary."
                    },
                    {
                        "id": 99,
                        "string": "Intuitively, the appearance of new named entities in the summary is likely to bring unfaithfulness."
                    },
                    {
                        "id": 100,
                        "string": "We use Stanford Co-reNLP (Manning et al., 2014) to recognize named entities."
                    },
                    {
                        "id": 101,
                        "string": "Implementation Details We use the popular seq2seq framework Open-NMT 5 as the starting point."
                    },
                    {
                        "id": 102,
                        "string": "To make our model more general, we retain the default settings of OpenNMT to build the network architecture."
                    },
                    {
                        "id": 103,
                        "string": "Specifically, the dimensions of word embeddings and RNN are both 500, and the encoder and decoder structures are two-layer bidirectional Long Short Term Memory Networks (LSTMs)."
                    },
                    {
                        "id": 104,
                        "string": "The only difference is that we add the argument \"share embeddings\" to share the word embeddings between the encoder and decoder."
                    },
                    {
                        "id": 105,
                        "string": "This practice largely reduces model parameters for the monolingual task."
                    },
                    {
                        "id": 106,
                        "string": "On our computer (GPU: GTX 1080, Memory: 16G, CPU: i7-7700K), the training spends about 2 days."
                    },
                    {
                        "id": 107,
                        "string": "During test, we use beam search of size 5 to generate summaries."
                    },
                    {
                        "id": 108,
                        "string": "We add the argument \"replace unk\" to replace the generated unknown words with the source word that holds the highest attention weight."
                    },
                    {
                        "id": 109,
                        "string": "Since the generated summaries are often shorter than the actual ones, we introduce an additional length penalty argument \"alpha 1\" to encourage longer generation, like Wu et al. (2016)."
                    },
                    {
                        "id": 111,
                        "string": "Baselines We compare our proposed model with the following state-of-the-art neural summarization systems: 2015) for summarization."
                    },
                    {
                        "id": 112,
                        "string": "This model contained two-layer LSTMs with 500 hidden units in each layer."
                    },
                    {
                        "id": 113,
                        "string": "OpenNMT We also implement the standard attentional seq2seq model with OpenNMT."
                    },
                    {
                        "id": 114,
                        "string": "All the settings are the same as our system."
                    },
                    {
                        "id": 115,
                        "string": "It is noted that OpenNMT officially examined the Gigaword dataset."
                    },
                    {
                        "id": 116,
                        "string": "We distinguish the official result 6 and our experimental result with suffixes \"O\" and \"I\" respectively."
                    },
                    {
                        "id": 117,
                        "string": "FTSum: Cao et al. (2017b) encoded the facts extracted from the source sentence to improve both the faithfulness and informativeness of generated summaries."
                    },
                    {
                        "id": 119,
                        "string": "In addition, to evaluate the effectiveness of our joint learning framework, we develop a baseline named \"PIPELINE\"."
                    },
                    {
                        "id": 120,
                        "string": "Its architecture is identical to Re 3 Sum."
                    },
                    {
                        "id": 121,
                        "string": "However, it trains the Rerank module and Rewrite module in pipeline."
                    },
                    {
                        "id": 122,
                        "string": "In terms of ROUGE, Re 3 Sum significantly outperforms the standard attentional seq2seq model OpenNMT I ."
                    },
                    {
                        "id": 123,
                        "string": "Therefore, it is safe to conclude that soft templates contribute greatly to guiding the generation of summaries."
                    },
                    {
                        "id": 124,
                        "string": "Informativeness Evaluation We also examine the performance of directly regarding soft templates as output summaries."
                    },
                    {
                        "id": 125,
                        "string": "We introduce five types of different soft templates: Random: An existing summary randomly selected from the training corpus."
                    },
                    {
                        "id": 126,
                        "string": "First: The top-ranked candidate template given by the Retrieve module."
                    },
                    {
                        "id": 127,
                        "string": "Max: The template with the maximal actual ROUGE scores among the 30 candidate templates."
                    },
                    {
                        "id": 128,
                        "string": "Optimal: An existing summary in the training corpus which holds the maximal ROUGE scores."
                    },
                    {
                        "id": 129,
                        "string": "Rerank: The template with the maximal predicted ROUGE scores among the 30 candidate templates."
                    },
                    {
                        "id": 130,
                        "string": "It is the actual soft template we adopt."
                    },
                    {
                        "id": 131,
                        "string": "As shown in Table 4 , the performance of Random is terrible, indicating it is impossible to use one summary template to fit various actual summaries."
                    },
                    {
                        "id": 132,
                        "string": "Rerank largely outperforms First, which verifies the effectiveness of the Rerank module."
                    },
                    {
                        "id": 133,
                        "string": "However, according to Max and Rerank, we find the Rerank performance of Re 3 Sum is far from perfect."
                    },
                    {
                        "id": 134,
                        "string": "Likewise, comparing Max and First, we observe that the improving capacity of the Retrieve module is high."
                    },
                    {
                        "id": 135,
                        "string": "Notice that Optimal greatly exceeds all the state-of-the-art approaches."
                    },
                    {
                        "id": 136,
                        "string": "This finding strongly supports our practice of using existing summaries to guide the seq2seq models."
                    },
                    {
                        "id": 137,
                        "string": "Linguistic Quality Evaluation We also measure the linguistic quality of generated summaries from various aspects, and the results are present in Table 5 ."
                    },
                    {
                        "id": 138,
                        "string": "As can be seen from the rows \"LEN DIF\" and \"LESS 3\", the performance of Re 3 Sum is almost the same as that of soft templates."
                    },
                    {
                        "id": 139,
                        "string": "The soft templates indeed well guide the summary generation."
                    },
                    {
                        "id": 140,
                        "string": "[Table 7, Example 1] Source: grid positions after the final qualifying session in the indonesian motorcycle grand prix at the sentul circuit , west java , saturday : UNK | Target: indonesian motorcycle grand prix grid positions | Template: grid positions for british grand prix | OpenNMT: circuit | Re 3 Sum: grid positions for indonesian grand prix"
                    },
                    {
                        "id": 141,
                        "string": "[Table 7, Example 2] Source: india 's children are getting increasingly overweight and unhealthy and the government is asking schools to ban junk food , officials said thursday . | Target: indian government asks schools to ban junk food | Template: skorean schools to ban soda junk food | OpenNMT: india 's children getting fatter | Re 3 Sum: indian schools to ban junk food"
                    },
                    {
                        "id": 142,
                        "string": "Table 7: Examples of generated summaries. We use Bold font to indicate the crucial rewriting behavior from the templates to generated summaries."
                    },
                    {
                        "id": 143,
                        "string": "Compared with Re 3 Sum, the standard deviation of LEN DIF is 0.7 times larger in OpenNMT, indicating that OpenNMT works quite unstably."
                    },
                    {
                        "id": 144,
                        "string": "Moreover, OpenNMT generates 53 extreme short summaries, which seriously reduces readability."
                    },
                    {
                        "id": 145,
                        "string": "Meanwhile, the copy ratio of actual summaries is 36%."
                    },
                    {
                        "id": 146,
                        "string": "Therefore, the copy mechanism is severely overweighted in OpenNMT."
                    },
                    {
                        "id": 147,
                        "string": "Our model is encouraged to generate according to human-written soft templates, which relatively diminishes copying from the source sentences."
                    },
                    {
                        "id": 148,
                        "string": "Look at the last row \"NEW NE\"."
                    },
                    {
                        "id": 149,
                        "string": "A number of new named entities appear in the soft templates, which makes them quite unfaithful to source sentences."
                    },
                    {
                        "id": 150,
                        "string": "By contrast, this index in Re 3 Sum is close to the OpenNMT's."
                    },
                    {
                        "id": 151,
                        "string": "It highlights the rewriting ability of our seq2seq framework."
                    },
                    {
                        "id": 152,
                        "string": "Effect of Templates In this section, we investigate how soft templates affect our model."
                    },
                    {
                        "id": 153,
                        "string": "At the beginning, we feed different types of soft templates (refer to Table 4 ) into the Rewriting module of Re 3 Sum."
                    },
                    {
                        "id": 154,
                        "string": "As illustrated in Table 6 , the more high-quality templates are provided, the higher ROUGE scores are achieved."
                    },
                    {
                        "id": 155,
                        "string": "It is interesting to see that,while the ROUGE-2 score of Random templates is zero, our model can still generate acceptable summaries with Random templates."
                    },
                    {
                        "id": 156,
                        "string": "It seems that Re 3 Sum can automatically judge whether the soft templates are trustworthy and ignore the seriously irrelevant ones."
                    },
                    {
                        "id": 157,
                        "string": "We believe that the joint learning with the Rerank model plays a vital role here."
                    },
                    {
                        "id": 158,
                        "string": "Next, we manually inspect the summaries generated by different methods."
                    },
                    {
                        "id": 159,
                        "string": "We find the outputs of Re 3 Sum are usually longer and more flu-ent than the outputs of OpenNMT."
                    },
                    {
                        "id": 160,
                        "string": "Some illustrative examples are shown in Table 7 ."
                    },
                    {
                        "id": 161,
                        "string": "In Example 1, there is no predicate in the source sentence."
                    },
                    {
                        "id": 162,
                        "string": "Since OpenNMT prefers selecting source words around the predicate to form the summary, it fails on this sentence."
                    },
                    {
                        "id": 163,
                        "string": "By contract, Re 3 Sum rewrites the template and produces an informative summary."
                    },
                    {
                        "id": 164,
                        "string": "In Example 2, OpenNMT deems the starting part of the sentences are more important, while our model, guided by the template, focuses on the second part to generate the summary."
                    },
                    {
                        "id": 165,
                        "string": "In the end, we test the ability of our model to generate diverse summaries."
                    },
                    {
                        "id": 166,
                        "string": "In practice, a system that can provide various candidate summaries is probably more welcome."
                    },
                    {
                        "id": 167,
                        "string": "Specifically, two candidate templates with large text dissimilarity are manually fed into the Rewriting module."
                    },
                    {
                        "id": 168,
                        "string": "The corresponding generated summaries are shown in Table 8."
                    },
                    {
                        "id": 169,
                        "string": "For the sake of comparison, we also present the 2-best results of OpenNMT with beam search."
                    },
                    {
                        "id": 170,
                        "string": "As can be seen, with different templates given, our model is likely to generate dissimilar summaries."
                    },
                    {
                        "id": 171,
                        "string": "In contrast, the 2-best results of OpenNMT is almost the same, and often a shorter summary is only a piece of the other one."
                    },
                    {
                        "id": 172,
                        "string": "To sum up, our model demonstrates promising prospect in generation diversity."
                    },
                    {
                        "id": 173,
                        "string": "Related Work Abstractive sentence summarization aims to produce a shorter version of a given sentence while preserving its meaning (Chopra et al., 2016) ."
                    },
                    {
                        "id": 174,
                        "string": "This task is similar to text simplification (Saggion, 2017) and facilitates headline design and refine."
                    },
                    {
                        "id": 175,
                        "string": "Early studies on sentence summariza- (Zhou and Hovy, 2004) , syntactic tree pruning (Knight and Marcu, 2002; Clarke and Lapata, 2008) and statistical machine translation techniques (Banko et al., 2000) ."
                    },
                    {
                        "id": 176,
                        "string": "Recently, the application of the attentional seq2seq framework has attracted growing attention and achieved state-of-the-art performance on this task (Rush et al., 2015a; Chopra et al., 2016; Nallapati et al., 2016) ."
                    },
                    {
                        "id": 177,
                        "string": "In addition to the direct application of the general seq2seq framework, researchers attempted to integrate various properties of summarization."
                    },
                    {
                        "id": 178,
                        "string": "For example, Nallapati et al."
                    },
                    {
                        "id": 179,
                        "string": "(2016) enriched the encoder with hand-crafted features such as named entities and POS tags."
                    },
                    {
                        "id": 180,
                        "string": "These features have played important roles in traditional feature based summarization systems."
                    },
                    {
                        "id": 181,
                        "string": "Gu et al."
                    },
                    {
                        "id": 182,
                        "string": "(2016) found that a large proportion of the words in the summary were copied from the source text."
                    },
                    {
                        "id": 183,
                        "string": "Therefore, they proposed CopyNet which considered the copying mechanism during generation."
                    },
                    {
                        "id": 184,
                        "string": "Recently, See et al."
                    },
                    {
                        "id": 185,
                        "string": "(2017) used the coverage mechanism to discourage repetition."
                    },
                    {
                        "id": 186,
                        "string": "Cao et al."
                    },
                    {
                        "id": 187,
                        "string": "(2017b) encoded facts extracted from the source sentence to enhance the summary faithfulness."
                    },
                    {
                        "id": 188,
                        "string": "There were also studies to modify the loss function to fit the evaluation metrics."
                    },
                    {
                        "id": 189,
                        "string": "For instance, Ayana et al."
                    },
                    {
                        "id": 190,
                        "string": "(2016) applied the Minimum Risk Training strategy to maximize the ROUGE scores of generated sum-maries."
                    },
                    {
                        "id": 191,
                        "string": "Paulus et al."
                    },
                    {
                        "id": 192,
                        "string": "(2017) used the reinforcement learning algorithm to optimize a mixed objective function of likelihood and ROUGE scores."
                    },
                    {
                        "id": 193,
                        "string": "Guu et al."
                    },
                    {
                        "id": 194,
                        "string": "(2017) also proposed to encode human-written sentences to improvement the performance of neural text generation."
                    },
                    {
                        "id": 195,
                        "string": "However, they handled the task of Language Modeling and randomly picked an existing sentence in the training corpus."
                    },
                    {
                        "id": 196,
                        "string": "In comparison, we develop an IR system to find proper existing summaries as soft templates."
                    },
                    {
                        "id": 197,
                        "string": "Moreover, Guu et al."
                    },
                    {
                        "id": 198,
                        "string": "(2017) used a general seq2seq framework while we extend the seq2seq framework to conduct template reranking and template-aware summary generation simultaneously."
                    },
                    {
                        "id": 199,
                        "string": "Conclusion and Future Work This paper proposes to introduce soft templates as additional input to guide the seq2seq summarization."
                    },
                    {
                        "id": 200,
                        "string": "We use the popular IR platform Lucene to retrieve proper existing summaries as candidate soft templates."
                    },
                    {
                        "id": 201,
                        "string": "Then we extend the seq2seq framework to jointly conduct template reranking and template-aware summary generation."
                    },
                    {
                        "id": 202,
                        "string": "Experiments show that our model can generate informative, readable and stable summaries."
                    },
                    {
                        "id": 203,
                        "string": "In addition, our model demonstrates promising prospect in generation diversity."
                    },
                    {
                        "id": 204,
                        "string": "We believe our work can be extended in vari-ous aspects."
                    },
                    {
                        "id": 205,
                        "string": "On the one hand, since the candidate templates are far inferior to the optimal ones, we intend to improve the Retrieve module, e.g., by indexing both the sentence and summary fields."
                    },
                    {
                        "id": 206,
                        "string": "On the other hand, we plan to test our system on the other tasks such as document-level summarization and short text conversation."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 33
                    },
                    {
                        "section": "Method",
                        "n": "2",
                        "start": 34,
                        "end": 39
                    },
                    {
                        "section": "Retrieve",
                        "n": "2.1",
                        "start": 40,
                        "end": 45
                    },
                    {
                        "section": "Jointly Rerank and Rewrite",
                        "n": "2.2",
                        "start": 46,
                        "end": 55
                    },
                    {
                        "section": "Rerank",
                        "n": "2.2.1",
                        "start": 56,
                        "end": 63
                    },
                    {
                        "section": "Rewrite",
                        "n": "2.2.2",
                        "start": 64,
                        "end": 69
                    },
                    {
                        "section": "Learning",
                        "n": "2.3",
                        "start": 70,
                        "end": 77
                    },
                    {
                        "section": "Datasets",
                        "n": "3.1",
                        "start": 78,
                        "end": 84
                    },
                    {
                        "section": "Evaluation Metrics",
                        "n": "3.2",
                        "start": 85,
                        "end": 100
                    },
                    {
                        "section": "Implementation Details",
                        "n": "3.3",
                        "start": 101,
                        "end": 110
                    },
                    {
                        "section": "Baselines",
                        "n": "3.4",
                        "start": 111,
                        "end": 123
                    },
                    {
                        "section": "Informativeness Evaluation",
                        "n": "3.5",
                        "start": 124,
                        "end": 136
                    },
                    {
                        "section": "Linguistic Quality Evaluation",
                        "n": "3.6",
                        "start": 137,
                        "end": 151
                    },
                    {
                        "section": "Effect of Templates",
                        "n": "3.7",
                        "start": 152,
                        "end": 172
                    },
                    {
                        "section": "Related Work",
                        "n": "4",
                        "start": 173,
                        "end": 198
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "5",
                        "start": 199,
                        "end": 206
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/993-Table6-1.png",
                        "caption": "Table 6: ROUGE F1 (%) performance of Re3Sum generated with different soft templates.",
                        "page": 5,
                        "bbox": {
                            "x1": 313.92,
                            "x2": 519.36,
                            "y1": 62.879999999999995,
                            "y2": 145.92
                        }
                    },
                    {
                        "filename": "../figure/image/993-Table4-1.png",
                        "caption": "Table 4: ROUGE F1 (%) performance of different types of soft templates.",
                        "page": 5,
                        "bbox": {
                            "x1": 100.8,
                            "x2": 261.12,
                            "y1": 304.8,
                            "y2": 388.32
                        }
                    },
                    {
                        "filename": "../figure/image/993-Table3-1.png",
                        "caption": "Table 3: ROUGE F1 (%) performance. “RG” represents “ROUGE” for short. “∗” indicates statistical significance of the corresponding model with respect to the baseline model on the 95% confidence interval in the official ROUGE script.",
                        "page": 5,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 280.32,
                            "y1": 62.879999999999995,
                            "y2": 215.04
                        }
                    },
                    {
                        "filename": "../figure/image/993-Table5-1.png",
                        "caption": "Table 5: Statistics of different types of summaries.",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 672.48,
                            "y2": 743.04
                        }
                    },
                    {
                        "filename": "../figure/image/993-Table7-1.png",
                        "caption": "Table 7: Examples of generated summaries. We use Bold font to indicate the crucial rewriting behavior from the templates to generated summaries.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 62.879999999999995,
                            "y2": 233.28
                        }
                    },
                    {
                        "filename": "../figure/image/993-Figure1-1.png",
                        "caption": "Figure 1: Flow chat of the proposed method. We use the dashed line for Retrieve since there is an IR system embedded.",
                        "page": 2,
                        "bbox": {
                            "x1": 116.64,
                            "x2": 481.44,
                            "y1": 61.44,
                            "y2": 113.28
                        }
                    },
                    {
                        "filename": "../figure/image/993-Table8-1.png",
                        "caption": "Table 8: Examples of generation with diversity. We use Bold font to indicate the difference between two summaries",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 62.879999999999995,
                            "y2": 313.92
                        }
                    },
                    {
                        "filename": "../figure/image/993-Table1-1.png",
                        "caption": "Table 1: Data statistics for English Gigaword. AvgSourceLen is the average input sentence length and AvgTargetLen is the average summary length. COPY means the copy ratio in the summaries (without stopwords).",
                        "page": 3,
                        "bbox": {
                            "x1": 325.92,
                            "x2": 507.35999999999996,
                            "y1": 217.44,
                            "y2": 287.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/993-Figure2-1.png",
                        "caption": "Figure 2: Jointly Rerank and Rewrite",
                        "page": 3,
                        "bbox": {
                            "x1": 82.56,
                            "x2": 515.04,
                            "y1": 61.44,
                            "y2": 173.28
                        }
                    },
                    {
                        "filename": "../figure/image/993-Table2-1.png",
                        "caption": "Table 2: Final perplexity on the development set. † indicates the value is cited from the corresponding paper. ABS+, Featseq2seq and Luong-NMT do not provide this value.",
                        "page": 4,
                        "bbox": {
                            "x1": 352.8,
                            "x2": 480.47999999999996,
                            "y1": 544.8,
                            "y2": 642.24
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-11"
        },
        {
            "slides": {
                "0": {
                    "title": "Conversational Agents",
                    "text": [
                        "Sorry, I dont understand what youre saying",
                        "Data augmentation might help"
                    ],
                    "page_nums": [
                        1,
                        2,
                        3,
                        4,
                        5,
                        6,
                        7
                    ],
                    "images": []
                },
                "1": {
                    "title": "Paraphrase Generation",
                    "text": [
                        "Rephrasing a given text in multiple ways",
                        "Paraphrases how could i increase my height ? what should i do to increase body height ? what are the ways to increase height ? are there some ways to increase body height ?"
                    ],
                    "page_nums": [
                        8,
                        9,
                        10,
                        11,
                        12
                    ],
                    "images": []
                },
                "2": {
                    "title": "Current State",
                    "text": [
                        "Source how do i increase body height ?",
                        "Synonym how do i grow body height ?",
                        "Phrase how do i increase the body measurement vertically?",
                        "Beam how do i increase my height ? how do i increase my body height how do i increase the height ? how would i increase my body height"
                    ],
                    "page_nums": [
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23
                    ],
                    "images": []
                },
                "3": {
                    "title": "What can we do",
                    "text": [
                        "Source how do i increase body height ?",
                        "Beam how do i increase my height ? how can i decrease my body weight ? what do i do to increase the height ? i am 17, what steps to take to decrease weight ?"
                    ],
                    "page_nums": [
                        24,
                        25,
                        26,
                        27,
                        28,
                        29
                    ],
                    "images": []
                },
                "4": {
                    "title": "What we need",
                    "text": [
                        "Find k diverse paraphrases with high fidelity",
                        "Method based on subset selection of candidate (sub)sequences"
                    ],
                    "page_nums": [
                        30,
                        31,
                        32
                    ],
                    "images": []
                },
                "5": {
                    "title": "Subset Selection",
                    "text": [
                        "how do i increase my how can i decrease the how can i grow the what ways exist to increase how would I increase the how do I decrease the",
                        "how do i increase my how can i decrease the how can i grow the how do i increase my what ways exist to increase how can i grow the how would I increase the what ways exist to increase how do I decrease the are there ways to increase",
                        "If F is sub modular + monotone = Greedy algo. with good bounds exists"
                    ],
                    "page_nums": [
                        33,
                        34,
                        35,
                        36
                    ],
                    "images": []
                },
                "6": {
                    "title": "Sub modularity",
                    "text": [
                        "F = # Unique Coloured items"
                    ],
                    "page_nums": [
                        37,
                        38,
                        39,
                        40,
                        41,
                        42,
                        43
                    ],
                    "images": []
                },
                "8": {
                    "title": "DiPS",
                    "text": [
                        "Induce Diversity while not compromising on Fidelity",
                        "Diversity C omponents Fidelity Co mponents",
                        "where can film I How find that that picture",
                        "I get can I Where can I : 3k Candidate Subsequ ences Source Sentence",
                        "Where can I find t h a t film? Where can I get t h a t movie?",
                        "Rewards unique n-grams How can I get that picture?",
                        "Where can I get that film?",
                        "I find that picture",
                        "Where can I get that movie?",
                        "Where can I <eos> <sos> ENCODER DECODER",
                        "Enc o d e r k- sequences",
                        "Enc o d e r D e c o d er k- sequences",
                        "How can I get that picture Fidelity"
                    ],
                    "page_nums": [
                        45,
                        46,
                        47,
                        48,
                        49,
                        50,
                        51,
                        52,
                        53,
                        54,
                        55,
                        59,
                        60
                    ],
                    "images": []
                },
                "9": {
                    "title": "Diversity Components",
                    "text": [
                        "Rewards Structural Coverage where can film I How N find that that picture",
                        "I get can I Where can I : 3k Candidate Subsequences Source Sentence",
                        "n xngram : 3k Candidate Subse q uences Source Sentence",
                        "Where can I n=1 xX find t h a t film? Where can I get t h a t movi Where can I get that movie? Rewards unique n-grams How can I get th at picture?",
                        "Rewards unique n-grams How can I get that picture?",
                        "Synonym (similar embeddings) W here can I get that film? S t r uct u r al C o v era g e Where can I find that pictu re Ho w can I g et that pictu re (x i, xj) k- sequences",
                        "Where can I find that pictu re Ho w can I g et that pictu re k- sequences",
                        "Rew ar ds Stru ct ural C over age xi V (t) xjX",
                        "Where can I get that Where movie? can I <eos> <sos> ENCODER DECODER",
                        ": 3k Candidate Subse q uences n xngram",
                        "I get can I Where can I Source Sentence",
                        "Where can I find t h a t film Where can I get t h a t movie",
                        "(xi, xj) EditDistance(xi, xj)",
                        "Where can I get that Where movie? can I <eos> <sos> |xi |xj"
                    ],
                    "page_nums": [
                        56,
                        57,
                        58
                    ],
                    "images": []
                },
                "10": {
                    "title": "Fidelity Components",
                    "text": [
                        "where can film I How find that that picture",
                        "I get can I Where can I : 3k Candidate Subsequences Source Sentence N",
                        "Where can I find t h a t film? Where can I get t h a t movie n |xn-gram sn-gram",
                        "xX n=1 Rewards unique n-grams How can I get that picture?",
                        "Where can I get that film? Embedding based Similarity",
                        "Where can I find that picture",
                        "wix Where can I get that movie?",
                        "where can film I How Lexical Similarity find that that picture",
                        "How can I get that picture (x, s)",
                        "Rewards Structural Coverage xX"
                    ],
                    "page_nums": [
                        61,
                        62,
                        63
                    ],
                    "images": []
                },
                "11": {
                    "title": "DiPS Objective",
                    "text": [
                        "DDiivveerrssiittyy C Coommppoonneenntts s FFiiddeelliittyy CCoo mmppoonneenntts s",
                        "where where can can film film I I How How find find that that that that picture picture",
                        "I I get get can can I I Where Where can can I I : : 3k 3k Candidate Candidate Subsequences Subsequences Source Source Sentence Sentence",
                        "Where Where can can I I find find t t h h a a t t film? film Where can I get t h a t movie Where can I get t h a t movie",
                        "Rewards Rewards unique unique n-grams n-grams How can I get that picture? How can I get that picture?",
                        "Synonym (similar embeddings) Synonym (similar embeddings) Where Where can can I I get get that that film? film?",
                        "Where Where can can can can I I find find that that picture picture How How I I get get that that picture picture Rewards Rewards Structural Structural Coverage Coverage",
                        "Where Where can can I I get get that that movie? movie?"
                    ],
                    "page_nums": [
                        64,
                        65,
                        66,
                        67
                    ],
                    "images": []
                },
                "12": {
                    "title": "Fidelity and Diversity",
                    "text": [
                        "SBS DBS VAE-SVG DPP SSR DiPS (Ours) DiPS induces diversity without",
                        "compro4m-Diisstininctg (D iovenrs itfiy) delity"
                    ],
                    "page_nums": [
                        68,
                        69,
                        70,
                        71
                    ],
                    "images": []
                },
                "13": {
                    "title": "Data Augmentation Paraphrase Detection",
                    "text": [
                        "No Aug SBS DPP SSR DBS DiPS (Ours)",
                        "Di PS data augmentation helps in paraphrase detection"
                    ],
                    "page_nums": [
                        72,
                        73
                    ],
                    "images": []
                },
                "14": {
                    "title": "Data Augmentation for Intent Classification",
                    "text": [
                        "No. Aug SBS DBS Syn. Rep Cont. Aug DiPS (Ours)",
                        "Da ta augmentation using DiPS improves inten t classification"
                    ],
                    "page_nums": [
                        74,
                        75,
                        76,
                        77
                    ],
                    "images": []
                }
            },
            "paper_title": "Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation",
            "paper_id": "995",
            "paper": {
                "title": "Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation",
                "abstract": "Introduction Paraphrasing is the task of rephrasing a given text in multiple ways such that the semantics of the generated sentences remain unaltered. Paraphrasing Quality can be attributed to two key characteristics -fidelity which measures the semantic similarity between the input text and generated text, and diversity, which measures the lexical dissimilarity between generated sentences. Many previous works (Prakash et al., 2016; Gupta et al., 2018; Li et al., 2018) address the task of obtaining semantically similar paraphrases.",
                "text": [
                    {
                        "id": 0,
                        "string": "Inducing diversity in the task of paraphrasing is an important problem in NLP with applications in data augmentation and conversational agents."
                    },
                    {
                        "id": 1,
                        "string": "Previous paraphrasing approaches have mainly focused on the issue of generating semantically similar paraphrases, while paying little attention towards diversity."
                    },
                    {
                        "id": 2,
                        "string": "In fact, most of the methods rely solely on top-k beam search sequences to obtain a set of paraphrases."
                    },
                    {
                        "id": 3,
                        "string": "The resulting set, however, contains many structurally similar sentences."
                    },
                    {
                        "id": 4,
                        "string": "In this work, we focus on the task of obtaining highly diverse paraphrases while not compromising on paraphrasing quality."
                    },
                    {
                        "id": 5,
                        "string": "We provide a novel formulation of the problem in terms of monotone submodular function maximization, specifically targeted towards the task of paraphrasing."
                    },
                    {
                        "id": 6,
                        "string": "Additionally, we demonstrate the effectiveness of our method for data augmentation on multiple tasks such as intent classification and paraphrase recognition."
                    },
                    {
                        "id": 7,
                        "string": "In order to drive further research, we have made the source code available."
                    },
                    {
                        "id": 8,
                        "string": "cases desirable, to produce lexically diverse ones."
                    },
                    {
                        "id": 9,
                        "string": "Diversity in paraphrase generation finds applications in text simplification (Nisioi et al., 2017; Xu et al., 2015) , document summarization (Li et al., 2009; Nema et al., 2017) , QA systems (Fader et al., 2013; Bernhard and Gurevych, 2008) , data augmentation (Zhang et al., 2015; Wang and Yang, 2015) , conversational agents (Li et al., 2016) and information retrieval (Anick and Tipirneni, 1999) ."
                    },
                    {
                        "id": 10,
                        "string": "To obtain a set of multiple paraphrases, most of the current paraphrasing models rely solely on topk beam search sequences."
                    },
                    {
                        "id": 11,
                        "string": "The resulting set, however, contains many structurally similar sentences with only minor, word level changes."
                    },
                    {
                        "id": 12,
                        "string": "There have been some prior works (Li and Jurafsky, 2016; Elhamifar et al., 2012) which address the notion of diversity in NLP, including in sequence learning frameworks (Song et al., 2018; Vijayakumar et al., 2018) ."
                    },
                    {
                        "id": 13,
                        "string": "Although Song et al."
                    },
                    {
                        "id": 14,
                        "string": "(2018) address the issue of diversity in the scenario of neural conversation models using determinantal point processes (DPP), it could be naturally used for paraphrasing."
                    },
                    {
                        "id": 15,
                        "string": "On similar lines, subset selection based on Simultaneous Sparse Recovery (SSR) (Elhamifar et al., 2012) can also be easily adapted for the same task."
                    },
                    {
                        "id": 16,
                        "string": "Though these methods are helpful in maximizing diversity, they are restrictive in terms of re-taining fidelity with respect to the source sentence."
                    },
                    {
                        "id": 17,
                        "string": "Addressing the task of diverse paraphrasing through the lens of monotone submodular function maximization (Fujishige, 2005; Krause and Golovin; Bach et al., 2013) alleviates this problem and also provides a few additional benefits."
                    },
                    {
                        "id": 18,
                        "string": "Firstly, the submodular objective offers better flexibility in terms of controlling diversity as well as fidelity."
                    },
                    {
                        "id": 19,
                        "string": "Secondly, there exists a simple greedy algorithm for solving monotone submodular function maximization (Nemhauser et al., 1978) , which guarantees the diverse solution to be almost as good as the optimal solution."
                    },
                    {
                        "id": 20,
                        "string": "Finally, many submodular programs are fast and scalable to large datasets."
                    },
                    {
                        "id": 21,
                        "string": "Below, we list the main contributions of our paper."
                    },
                    {
                        "id": 22,
                        "string": "We introduce Diverse Paraphraser using Submodularity (DiPS)."
                    },
                    {
                        "id": 23,
                        "string": "DiPS maximizes a novel submodular objective function specifically targeted towards paraphrasing."
                    },
                    {
                        "id": 24,
                        "string": "2."
                    },
                    {
                        "id": 25,
                        "string": "We perform extensive experiments to show the effectiveness of our method in generating structurally diverse paraphrases without compromising on fidelity."
                    },
                    {
                        "id": 26,
                        "string": "We also compare against several possible diversity inducing schemes."
                    },
                    {
                        "id": 27,
                        "string": "3."
                    },
                    {
                        "id": 28,
                        "string": "We demonstrate the utility of diverse paraphrases generated via DiPS as data augmentation schemes on multiple tasks such as intent and question classification."
                    },
                    {
                        "id": 29,
                        "string": "(See et al., 2017) for generating paraphrases and an evaluator based on (Parikh et al., 2016) to penalize non-paraphrastic generations."
                    },
                    {
                        "id": 30,
                        "string": "Several other works (Cao et al., 2017; Iyyer et al., 2018) exist for paraphrasing, though they have either been superseded by newer models or are not-directly applicable to our settings."
                    },
                    {
                        "id": 31,
                        "string": "However, most of these methods focus on the issue of generating semantically similar paraphrases, while paying little attention towards diversity."
                    },
                    {
                        "id": 32,
                        "string": "Diversity in paraphrasing models was first explored by (Gupta et al., 2018) where they propose to generate variations based on different samples from the latent space in a deep generative framework."
                    },
                    {
                        "id": 33,
                        "string": "Although diversity in paraphrasing models has not been explored extensively, methods have been proposed to address diversity in other NLP tasks (Li et al., 2016 (Li et al., , 2015 Gimpel et al., 2013) ."
                    },
                    {
                        "id": 34,
                        "string": "Diverse beam search proposed by (Vijayakumar et al., 2018) generates k-diverse sequences by dividing the candidate subsequences at each time step into several groups and penalizing subsequences which are similar to prior groups."
                    },
                    {
                        "id": 35,
                        "string": "The most relevant to our approach is the method proposed by (Song et al., 2018) for neural conversation models where they incorporate diversity by using DPP to select diverse subsequences at each time step."
                    },
                    {
                        "id": 36,
                        "string": "Although their work is addressed in the scenario of neural conversation models, it could be naturally adapted to paraphrasing models and thus we use it as a baseline."
                    },
                    {
                        "id": 37,
                        "string": "Submodular functions have been applied to a wide variety of problems in machine learning (Iyer and Bilmes, 2013; Jegelka and Bilmes, 2011; Krause and Guestrin, 2011; Kolmogorov and Zabih, 2002) and have recently attracted much attention in several NLP tasks including document summarization (Lin and Bilmes, 2011) , data selection in machine translation (Kirchhoff and Bilmes, 2014) and goal-oriented chatbot training (Dimovski et al., 2018) ."
                    },
                    {
                        "id": 38,
                        "string": "However, their application to sequence generation is largely unexplored."
                    },
                    {
                        "id": 39,
                        "string": "Data augmentation is a technique for increasing the size of labeled training sets by leveraging task specific transformations which preserve class labels."
                    },
                    {
                        "id": 40,
                        "string": "While the technique is ubiquitous in the vision community (Krizhevsky et al., 2012; Ratner et al., 2017) , data-augmentation in NLP is largely under-explored."
                    },
                    {
                        "id": 41,
                        "string": "Most current augmentation schemes involve thesaurus based synonym replacement (Zhang et al., 2015; Wang and Yang, 2015) , and replacement by words with paradigmatic relations (Kobayashi, 2018) ."
                    },
                    {
                        "id": 42,
                        "string": "Both of these F 1 S ← ∅ 2 N ← V 3 while |S| < k do 4 x * ← argmax x∈N F(S ∪ {x}) 5 S ← S ∪ {x * } 6 N ← N \\ {x * } 7 end 8 return S approaches try to boost the generalization abilities of downstream classification models through word-level substitutions."
                    },
                    {
                        "id": 43,
                        "string": "However, they are inherently restrictive in terms of the diversity they can offer."
                    },
                    {
                        "id": 44,
                        "string": "Our work offers a data-augmentation scheme via high quality paraphrases."
                    },
                    {
                        "id": 45,
                        "string": "Background: Submodularity Let V = {v_1, . . . , v_n} be a set of objects, which we refer to as the ground set, and F : 2^V → R be a set function which works on subsets S of V to return a real value."
                    },
                    {
                        "id": 49,
                        "string": "The task is to find a subset S of bounded cardinality say |S| ≤ k that maximizes the function F, i.e., argmax S⊆V F(S)."
                    },
                    {
                        "id": 50,
                        "string": "In general, solving this problem is intractable."
                    },
                    {
                        "id": 51,
                        "string": "However, if the function F is monotone non-decreasing submodular, then although the problem is still NPcomplete, there exists a greedy algorithm ( Algorithm 1) (Nemhauser et al., 1978) that finds an approximate solution which is guaranteed to be within 0.632 of the optimal solution."
                    },
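                    {
                        "id": "editor-note-51a",
                        "string": "Editor's note: Algorithm 1 transcribed into runnable Python (a sketch; F is any monotone submodular set function over hashable elements of the ground set V).\ndef greedy_maximize(V, F, k):\n    # greedy monotone submodular maximization (Nemhauser et al., 1978);\n    # the result is within a factor (1 - 1/e) of the optimum\n    S, N = set(), set(V)\n    while len(S) < k and N:\n        x_star = max(N, key=lambda x: F(S | {x}))\n        S.add(x_star)\n        N.remove(x_star)\n    return S"
                    },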
                    {
                        "id": 52,
                        "string": "Submodular functions are set functions F : 2 V → R, where 2 V denotes the power set of ground set V. Submodular functions satisfy the following equivalent properties of diminishing returns: ∀X, Y ⊆ V with X ⊆ Y , and ∀s ∈ V \\ Y , we have the following."
                    },
                    {
                        "id": 53,
                        "string": "F(X ∪ {s}) − F(X) ≥ F(Y ∪ {s}) − F(Y ) (1) In other words, the value addition due to incorporation of s decreases as the subset grows from X to Y ."
                    },
                    {
                        "id": 54,
                        "string": "Equivalently, ∀X, Y ⊆ V , we have, F(X) + F(Y ) ≥ F(X ∪ Y ) + F(X ∩ Y ) In case the above inequalities are equalities, the function F is said to be modular."
                    },
                    {
                        "id": 55,
                        "string": "Let F(s|X) F(X ∪ {s}) − F(X)."
                    },
                    {
                        "id": 56,
                        "string": "Therefore, F is submodular if F(s|X) ≥ F(s|Y ) for X ⊆ Y ."
                    },
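                    {
                        "id": "editor-note-56a",
                        "string": "Editor's note: a tiny self-contained check (ours) of the diminishing-returns inequality F(s|X) ≥ F(s|Y) for a classic monotone submodular function, set cover.\ncover = {\"a\": {1, 2}, \"b\": {2, 3}, \"c\": {3, 4}}\ndef F(X):\n    # F(X) = number of distinct items covered by the sets chosen in X\n    return len(set().union(*(cover[x] for x in X))) if X else 0\ndef gain(s, X):\n    # marginal gain F(s|X) = F(X ∪ {s}) - F(X)\n    return F(X | {s}) - F(X)\nX, Y, s = {\"a\"}, {\"a\", \"b\"}, \"c\"\nassert X <= Y and gain(s, X) >= gain(s, Y)  # 2 >= 1"
                    },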
                    {
                        "id": 57,
                        "string": "t ← 0; P ← ∅ 4 while t < T do 5 Generate top 3k most probable subsequences 6 P ← Select k based on argmax X⊆V (t) F(X) using Algorithm 1 7 t = t + 1 8 end 9 return P The second criteria which the function needs to satisfy for Algorithm 1 to be applicable is of monotonicity."
                    },
                    {
                        "id": 58,
                        "string": "A set function F is said to be mono- tone non-decreasing if ∀X ⊆ Y, F(X) ≤ F(Y )."
                    },
                    {
                        "id": 59,
                        "string": "Submodular functions are relevant in a large class of real-world applications, therefore making them extremely useful in practice."
                    },
                    {
                        "id": 60,
                        "string": "Additionally, submodular functions share many commonalities with convex functions, in the sense that they are closed under a number of standard operations like mixtures (non-negative weighted sum of submodular functions), truncation and some restricted compositions."
                    },
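                    {
                        "id": "editor-note-60a",
                        "string": "Editor's note: closure under non-negative mixtures is what makes the objective defined later well behaved: if L and D are monotone submodular, then for any λ ∈ [0, 1] the mixture F(X) = λ L(X, s) + (1 − λ) D(X) of Eq. (3) is monotone submodular as well, so the (1 − 1/e) guarantee of Algorithm 1 carries over."
                    },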
                    {
                        "id": 61,
                        "string": "The above properties will be useful when defining the submodular objective for obtaining high quality paraphrases."
                    },
                    {
                        "id": 62,
                        "string": "Methodology Similar to Prakash et al."
                    },
                    {
                        "id": 63,
                        "string": "(2016) ; Gupta et al."
                    },
                    {
                        "id": 64,
                        "string": "(2018); Li et al."
                    },
                    {
                        "id": 65,
                        "string": "(2018) , we formulate the task of paraphrase generation as a sequence-to-sequence learning problem."
                    },
                    {
                        "id": 66,
                        "string": "Previous SEQ2SEQ based approaches depend entirely on the standard crossentropy loss to produce semantically similar sentences and greedy decoding during generation."
                    },
                    {
                        "id": 67,
                        "string": "However, this does not guarantee lexical variety in the generated paraphrases."
                    },
                    {
                        "id": 68,
                        "string": "To incorporate some form of diversity, most prior approaches rely solely on top-k beam search sequences."
                    },
                    {
                        "id": 69,
                        "string": "The kbest list generated by standard beam search are a poor surrogate for the entire search space (Finkel et al., 2006) ."
                    },
                    {
                        "id": 70,
                        "string": "In fact, most of the sentences in the resulting set are structurally similar, differing only by punctuations or minor morphological variations."
                    },
                    {
                        "id": 71,
                        "string": "While being similar in the encoding scheme, our work adopts a different approach for the final decoding."
                    },
                    {
                        "id": 72,
                        "string": "We propose a framework which organi-  cally combines a sentence encoder with a diversity inducing decoder."
                    },
                    {
                        "id": 73,
                        "string": "Overview Our approach is built upon SEQ2SEQ framework."
                    },
                    {
                        "id": 74,
                        "string": "We first feed the tokenized source sentence to the encoder."
                    },
                    {
                        "id": 75,
                        "string": "The task of the decoder is to take as input the encoded representation and produce the respective paraphrase."
                    },
                    {
                        "id": 76,
                        "string": "To achieve this, we train the model using standard cross entropy loss between the generated sequence and the target paraphrase."
                    },
                    {
                        "id": 77,
                        "string": "Upon completion of training, instead of using greedy decoding or standard beam search, we use a modified decoder where we incorporate a submodular objective to obtain high quality paraphrases."
                    },
                    {
                        "id": 78,
                        "string": "Please refer to Figure 1 for an overview of the proposed method."
                    },
                    {
                        "id": 79,
                        "string": "During the generation phase, the encoder takes the source sentence as input and feeds its representation to the decoder to initiate the decoding process."
                    },
                    {
                        "id": 80,
                        "string": "At each time-step t, we consider N most probable subsequences since they are likely to be wellformed."
                    },
                    {
                        "id": 81,
                        "string": "Based on optimization of our submodular objective, a subset of size k < N are selected and sent as input to the next time step t + 1 for further generation."
                    },
                    {
                        "id": 82,
                        "string": "The process is repeated until desired output length T or <eos> token, whichever comes first."
                    },
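The decoding loop just described can be sketched as follows; `expand_topk` and `submodular_select` are hypothetical helpers standing in for the beam expansion and for Algorithm 1, respectively:

```python
# A minimal sketch of the generation loop: expand to the top 3k candidates,
# keep k of them by submodular maximization, repeat until length T or <eos>.

def dips_decode(expand_topk, submodular_select, k, T):
    P = [[]]                                # current set of partial subsequences
    for t in range(T):
        V_t = expand_topk(P, 3 * k)         # ground set V(t): top 3k candidates
        P = submodular_select(V_t, k)       # X* = argmax_{X subset of V(t)} F(X)
        if all(seq and seq[-1] == "<eos>" for seq in P):
            break                           # stop early if every sequence ended
    return P
```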
                    {
                        "id": 83,
                        "string": "Monotone Submodular Objectives We design a parameterized class of submodular functions tailored towards the task of paraphrase generation."
                    },
                    {
                        "id": 84,
                        "string": "Let V (t) be the ground set of possible subsequences at time step t. We aim to determine a set X ⊆ V (t) that retains certain fidelity as well as diversity."
                    },
                    {
                        "id": 85,
                        "string": "Hence, we model our submodular objective function as follows: X * = argmax X⊆V (t) F(X) s.t."
                    },
                    {
                        "id": 86,
                        "string": "|X| ≤ k (2) where k is our budget (desired number of paraphrases) and F is defined as: F(X) = λL(X, s) + (1 − λ)D(X) (3) Here s is the source sentence, L(X, s) and D(X) measure fidelity and diversity, respectively."
                    },
                    {
                        "id": 87,
                        "string": "λ ∈ [0, 1] is the trade-off coefficient."
                    },
                    {
                        "id": 88,
                        "string": "This formulation clearly brings out the trade-off between the two key characteristics."
                    },
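The maximization in Equation 2 is NP-hard in general, but for monotone submodular F the standard greedy algorithm gives a (1 − 1/e)-approximation; this is presumably the role of Algorithm 1, which the excerpt does not reproduce. A minimal sketch of that standard greedy selection:

```python
# Greedy maximization of a monotone submodular F under |X| <= k.
# F is any callable set function, e.g.
#   F = lambda X: lam * fidelity(X, src) + (1 - lam) * diversity(X)

def greedy_submodular_select(V_t, F, k):
    X = []
    remaining = list(V_t)
    for _ in range(min(k, len(remaining))):
        # pick the candidate with the largest marginal gain F(x | X)
        best = max(remaining, key=lambda x: F(X + [x]) - F(X))
        X.append(best)
        remaining.remove(best)
    return X
```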
                    {
                        "id": 89,
                        "string": "Fidelity It is imperative to design functions that exploit the decoder search space to maximize the semantic similarity between the generated and the source sentence."
                    },
                    {
                        "id": 90,
                        "string": "To achieve this we build upon a known class of monotone submodular functions (Stobbe and Krause, 2010) f (X) = i∈U µ i φ i (m i (X)) (4) where U is the set of features to be defined later, µ i ≥ 0 is the feature weight, m i (X) = x∈X m i (x) is non-negative modular function and φ i is a non-negative non-decreasing concave function."
                    },
                    {
                        "id": 91,
                        "string": "Based on the analysis of concave functions in (Kirchhoff and Bilmes, 2014), we use the simple square root function as φ (φ(a) = √ a) in both of our fidelity objectives defined below."
                    },
                    {
                        "id": 92,
                        "string": "We consider two complementary notions of sentence similarity namely syntactic and semantic."
                    },
                    {
                        "id": 93,
                        "string": "To capture syntactic information we define the following function: L 1 (X, s) = µ 1 x∈X N n=1 β n |x n-gram ∩ s n-gram | (5) where |x n-gram ∩ s n-gram | represents the number of overlapping n-grams between the source and the candidate sequence x for different values of n ∈ {1, ."
                    },
                    {
                        "id": 94,
                        "string": "."
                    },
                    {
                        "id": 95,
                        "string": "."
                    },
                    {
                        "id": 96,
                        "string": ", N }(we use N = 3 )."
                    },
                    {
                        "id": 97,
                        "string": "Since longer n-gram overlaps are more valuable, we set β > 1."
                    },
                    {
                        "id": 98,
                        "string": "This function inherently increases the BLEU score between the source and the generated sentences."
                    },
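A minimal sketch of the syntactic fidelity term L1 of Equation 5, read through the template of Equation 4: each n-gram order is treated as a feature, and the concave φ(a) = √a is applied to the modular overlap count. The exact placement of φ is our reading, not spelled out in the excerpt above.

```python
from math import sqrt

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def L1(X, s, mu1=1.0, beta=2.0, N=3):
    total = 0.0
    for n in range(1, N + 1):
        # modular part: summed n-gram overlap with the source over x in X
        m_n = sum(len(ngrams(x, n) & ngrams(s, n)) for x in X)
        total += beta ** n * sqrt(m_n)      # beta > 1 favors longer n-grams
    return mu1 * total
```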
                    {
                        "id": 99,
                        "string": "We address the semantic aspect of fidelity by devising a function based on the word embeddings of source and generated sentences."
                    },
                    {
                        "id": 100,
                        "string": "We define embedding based similarity between two sentences as, S(x, s) = 1 |x| w i ∈x argmax w j ∈s ψ(v w i , v w j ) (6) where v w i is the word embedding for token w i and ψ(v w i , v w j ) is the gaussian radial basis function (rbf) 1 ."
                    },
                    {
                        "id": 101,
                        "string": "For each word in the candidate sequence x, we find the best matching word in the source sentence using word level similarity."
                    },
                    {
                        "id": 102,
                        "string": "Using the above mentioned measure for embedding similarity we use the following submodular function: L 2 (X, s) = µ 2 x∈X S(x, s) (7) 1 We find gaussian rbf to work better than other similarity metrics such as cosine similarity This function helps increase the semantic homogeneity between the source and generated sequences."
                    },
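A minimal sketch of Equations 6-7; `emb` (a token-to-vector map) and the RBF bandwidth `gamma` are assumptions, since neither the embeddings nor the kernel width are specified in this excerpt:

```python
import numpy as np

def rbf(u, v, gamma=1.0):
    # Gaussian radial basis function over word vectors (gamma is assumed)
    return np.exp(-gamma * np.sum((u - v) ** 2))

def S(x, s, emb, gamma=1.0):
    # for each word in the candidate, take its best match in the source
    return np.mean([max(rbf(emb[wi], emb[wj], gamma) for wj in s) for wi in x])

def L2(X, s, emb, mu2=1.0):
    # per Equation 4, the concave sqrt is applied to the modular sum
    # (placement of the concave function is our reading of the text)
    return mu2 * np.sqrt(sum(S(x, s, emb) for x in X))
```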
                    {
                        "id": 103,
                        "string": "The above defined functions (Equation 5,7) are compositions of non-decreasing concave functions and modular functions."
                    },
                    {
                        "id": 104,
                        "string": "Thus, staying in the realm of the class of monotone submodular functions mentioned in Equation 4, we define fidelity function L(X, s) = L 1 (X, s) + L 2 (X, s) Diversity Ensuring high fidelity often comes at the cost of producing sequences that only slightly differ from each other."
                    },
                    {
                        "id": 105,
                        "string": "To encourage diversity in the generation process it is desirable to reward sequences with higher number of distinct n-grams as compared to others in the ground set V (t) ."
                    },
                    {
                        "id": 106,
                        "string": "Accordingly, we propose to use the following function: D 1 (X) = µ 3 N n=1 β n x∈X x n−gram (8) For β = 1, D 1 (X) denotes the number of distinct n-grams present in the set X."
                    },
                    {
                        "id": 107,
                        "string": "Since shorter n-grams contribute more towards diversity, we set β < 1, thereby giving more value to shorter ngrams."
                    },
                    {
                        "id": 108,
                        "string": "It is easy to see that this function is monotone non-decreasing as the number of distinct ngrams can only increase with the addition of more sequences."
                    },
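A minimal sketch of the distinct n-gram diversity term D1 of Equation 8; counting the union of n-grams over X is exactly what gives the diminishing-returns property argued just below:

```python
def D1(X, mu3=1.0, beta=0.5, N=3):
    # beta < 1 so shorter n-grams dominate; X is a list of token lists
    total = 0.0
    for n in range(1, N + 1):
        distinct = set()
        for x in X:
            # union over sequences: repeated n-grams are counted once
            distinct.update(tuple(x[i:i + n]) for i in range(len(x) - n + 1))
        total += beta ** n * len(distinct)
    return mu3 * total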
                    {
                        "id": 109,
                        "string": "To see that D 1 (X) is submodular, consider adding a new sequence to two sets of sequences, one a subset of the other."
                    },
                    {
                        "id": 110,
                        "string": "Intuitively, the increment in the number of distinct n-grams when adding a new sequence to the smaller set should be larger than the increment when adding it to the larger set, as the distinct n-grams in the new sequence might have already been covered by the sequences in the larger set."
                    },
                    {
                        "id": 111,
                        "string": "Apart from distinct n-gram overlaps, we also wish to obtain sequence candidates that are not only diverse, but also cover all major structural variations."
                    },
                    {
                        "id": 112,
                        "string": "It is reasonable to expect sentences that are structurally different to have lower degree of word/phrase alignment as compared to sentences with minor lexical variations."
                    },
                    {
                        "id": 113,
                        "string": "Edit distance (Levenshtein) is a widely accepted measure to determine such dissimilarities between two sentences."
                    },
                    {
                        "id": 114,
                        "string": "To incorporate this notion of diversity, a formulation in terms of edit distance seems like a natural fit for the problem."
                    },
                    {
                        "id": 115,
                        "string": "To do so, we use the coverage function which measures the similarity of the candidate sequences X with the ground set V (t) ."
                    },
                    {
                        "id": 116,
                        "string": "The coverage function is naturally monotone submodular and is defined as: D 2 (X) = µ 4 x i ∈V (t) x j ∈X R(x i , x j ) (9) where R(x i , x j ) is an alignment based similarity measure between two sequences x i and x j given by: R(x i , x j ) = 1 − EditDistance(x i , x j ) |x i | + |x j | (10) Note that R(x i , x j ) will always lie in the range [0, 1]."
                    },
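A minimal sketch of Equations 9-10, with a plain dynamic-programming Levenshtein distance over tokens:

```python
def edit_distance(a, b):
    # single-row DP Levenshtein distance over token sequences
    dp = list(range(len(b) + 1))
    for i, ai in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, bj in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ai != bj))
    return dp[-1]

def R(xi, xj):
    # alignment-based similarity in [0, 1] (Equation 10)
    return 1.0 - edit_distance(xi, xj) / (len(xi) + len(xj))

def D2(X, V_t, mu4=1.0):
    # coverage of the ground set V(t) by the selected set X (Equation 9)
    return mu4 * sum(R(xi, xj) for xi in V_t for xj in X)
```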
                    {
                        "id": 117,
                        "string": "Evidently, this method allows flexibility in terms of controlling diversity and fidelity."
                    },
                    {
                        "id": 118,
                        "string": "Our goal is to strike a balance between these two to obtain highquality generations."
                    },
                    {
                        "id": 119,
                        "string": "Experiments Datasets In this section we outline the datasets used for evaluating our proposed method."
                    },
                    {
                        "id": 120,
                        "string": "We specify the actual splits in Table 2 ."
                    },
                    {
                        "id": 121,
                        "string": "Based on the task, we categorize them into the following: Baseline Several models have sought to increase diversity, albeit with different goals and techniques."
                    },
                    {
                        "id": 122,
                        "string": "However, majority of the prior works in this area have focused on the task of producing diverse responses in dialog systems (Li et al., 2016; Ritter et al., 2011) and not paraphrasing."
                    },
                    {
                        "id": 123,
                        "string": "Given the lack of relevant baselines, we compare our model against the following methods: (Li et al., 2018) Note that the first four baselines are trained using the same SEQ2SEQ network and differ only in the decoding phase."
                    },
                    {
                        "id": 124,
                        "string": "Intrinsic Evaluation 1."
                    },
                    {
                        "id": 125,
                        "string": "Fidelity: To evaluate our method for fidelity of generated paraphrases, we use three machine translation metrics which have been shown to be suitable for paraphrase evaluation task (Wubben et al., 2010) : BLEU (Papineni et al., 2002)(upto bigrams), ME-TEOR (Banerjee and Lavie, 2005) and TER-Plus (Snover et al., 2009 )."
                    },
                    {
                        "id": 126,
                        "string": "Diversity: We report degree of diversity by calculating the number of distinct n-grams (n ∈ {1, 2, 3, 4}) ."
                    },
                    {
                        "id": 127,
                        "string": "The value is scaled by the number of generated tokens to avoid favoring long sequences."
                    },
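A minimal sketch of this diversity metric: the number of distinct n-grams across the generated paraphrases, scaled by the total number of generated tokens:

```python
def distinct_n(paraphrases, n):
    # paraphrases: list of token lists; returns distinct-n scaled by tokens
    grams, tokens = set(), 0
    for p in paraphrases:
        tokens += len(p)
        grams.update(tuple(p[i:i + n]) for i in range(len(p) - n + 1))
    return len(grams) / max(tokens, 1)
```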
                    {
                        "id": 128,
                        "string": "In addition to fidelity and diversity, we evaluate the efficacy of our method by using the generated paraphrases as augmented samples in the task of paraphrase recognition on the Quora-PR dataset."
                    },
                    {
                        "id": 129,
                        "string": "We perform experiments with multiple augmentation settings for the following classifiers: 1."
                    },
                    {
                        "id": 130,
                        "string": "LogReg: Simple Logistic Regression model."
                    },
                    {
                        "id": 131,
                        "string": "We use a set of hand-crafted features, the de-tails of which can be found in the supplementary."
                    },
                    {
                        "id": 132,
                        "string": "2."
                    },
                    {
                        "id": 133,
                        "string": "SiameseLSTM: Siamese adaptation of LSTM to measure quality between two sentences (Mueller and Thyagarajan, 2016) We also perform ablation testing to highlight the importance of each submodular component."
                    },
                    {
                        "id": 134,
                        "string": "Details can be found in the supplementary section."
                    },
                    {
                        "id": 135,
                        "string": "Data-Augmentation We evaluate the importance of using high quality paraphrases in two downstream classification tasks namely intent-classification and questionclassification."
                    },
                    {
                        "id": 136,
                        "string": "Our original generation model is trained on Quora-Div question pairs."
                    },
                    {
                        "id": 137,
                        "string": "Since intentclassification and question-classification contain questions, this setting seems like a good fit to perform transfer learning."
                    },
                    {
                        "id": 138,
                        "string": "We perform experiments on the following standard classifier models: 1."
                    },
                    {
                        "id": 139,
                        "string": "LogRegDA: Simple logistic regression model trained using hand-crafted features."
                    },
                    {
                        "id": 140,
                        "string": "For details, please refer to the supplementary section."
                    },
                    {
                        "id": 141,
                        "string": "LSTM: Single layered LSTM classification model."
                    },
                    {
                        "id": 142,
                        "string": "In addition to SBS and DBS, we use the following data-augmentation baselines for comparison: Setup We train our SEQ2SEQ model with attention (Bahdanau et al., 2014) for up to 50 epochs using the adam optimizer (Kingma and Ba, 2014) with initial learning rate set to 2e-4."
                    },
                    {
                        "id": 143,
                        "string": "During the generation phase, we follow standard beam search till   the number of generated tokens is nearly half the source sequence length (token level) to avoid possibly erroneous sentences."
                    },
                    {
                        "id": 144,
                        "string": "We then apply submodular maximization stochastically with probability p at each time step."
                    },
                    {
                        "id": 145,
                        "string": "Since each candidate subsequence is extended by a single token at every time-step, information added might not necessarily be useful as our submodular components work on sentence level."
                    },
                    {
                        "id": 146,
                        "string": "This approach is time efficient and avoids redundant computations."
                    },
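A minimal sketch of this schedule; the probability `p` and the helpers `beam_topk` / `submodular_select` are placeholders, as their exact values and implementations are not given in this excerpt:

```python
import random

def select_step(V_t, k, t, src_len, submodular_select, beam_topk, p=0.5):
    # plain beam search for roughly the first half of the sequence,
    # then submodular selection applied with probability p per step
    if t < src_len // 2 or random.random() > p:
        return beam_topk(V_t, k)            # standard beam search step
    return submodular_select(V_t, k)        # submodular maximization step
```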
                    {
                        "id": 147,
                        "string": "For each augmentation setting, we randomly select sentences from the training data and generate its paraphrases."
                    },
                    {
                        "id": 148,
                        "string": "We then add them in the training data with the same label as that of the source sentence."
                    },
                    {
                        "id": 149,
                        "string": "We evaluate the performance on different classification models in terms of accuracy."
                    },
                    {
                        "id": 150,
                        "string": "Based on the formulation of the objective function, it should be clear that diversity would attain maximum value at (or around) λ = 0 albeit at the cost of fidelity."
                    },
                    {
                        "id": 151,
                        "string": "This is certainly not a desirable property for paraphrasing systems."
                    },
                    {
                        "id": 152,
                        "string": "To address this, we perform hyperparameter tuning for λ value by analyzing the trade-off between diversity and fidelity based on varying λ values."
                    },
                    {
                        "id": 153,
                        "string": "In practice, diversity metric attains saturation at certain λ range (usually 0.2 -0.5)."
                    },
                    {
                        "id": 154,
                        "string": "This behaviour can be seen in Figure 2 ."
                    },
                    {
                        "id": 155,
                        "string": "Corresponding plot for Twitter, the effect of λ on fidelity and additional details about the hyperparameters can be found in the supplementary."
                    },
                    {
                        "id": 156,
                        "string": "Results Our experiments were geared towards answering the following primary questions: Q1."
                    },
                    {
                        "id": 157,
                        "string": "Is DiPS able to generate diverse paraphrases without compromising on fidelity?"
                    },
                    {
                        "id": 158,
                        "string": "(Section 6.1) Q2."
                    },
                    {
                        "id": 159,
                        "string": "Are paraphrase generated by DiPS useful in data-augmentation?"
                    },
                    {
                        "id": 160,
                        "string": "(Section 6.2) Intrinsic Evaluation We compare our method against recent paraphrasing models as well as multiple diversity inducing schemes."
                    },
                    {
                        "id": 161,
                        "string": "DiPS outperforms these baseline models in terms of fidelity metrics namely BLEU, ME-TEOR and TERp."
                    },
                    {
                        "id": 162,
                        "string": "A high METEOR score and a low TERp score indicate the presence of not only exact words but also synonyms and semantically similar phrases."
                    },
                    {
                        "id": 163,
                        "string": "Notably, our model is not only able to achieve substantial gains over other diversity inducing schemes but is also able to do so without compromising on fidelity."
                    },
                    {
                        "id": 164,
                        "string": "Diversity and fidelity scores are reported in Table 4 and Table 3 , respectively."
                    },
                    {
                        "id": 165,
                        "string": "As described in Section 5.3, we evaluate the accuracy of paraphrase recognition models when provided with training data augmented using different schemes."
                    },
                    {
                        "id": 166,
                        "string": "It is reasonable to expect that high quality paraphrases would tend to yield better results on in-domain paraphrase recognition task."
                    },
                    {
                        "id": 167,
                        "string": "We observe that using the paraphrases generated by DiPS helps in achieving substantial gains in accuracy over other baseline schemes."
                    },
                    {
                        "id": 168,
                        "string": "Figure 3 showcases the effect of using paraphrases generated by our method as compared to other competitive paraphrasing methods."
                    },
                    {
                        "id": 169,
                        "string": "Data-augmentation Data Augmentation results for intent and question classification are shown in Table 5 ."
                    },
                    {
                        "id": 170,
                        "string": "While, SBS does not offer much lexical variability, DBS offers high diversity at the cost of fidelity."
                    },
                    {
                        "id": 171,
                        "string": "SynRep and ContAug are augmentation schemes which are limited by the amount of structural variations they can offer."
                    },
                    {
                        "id": 172,
                        "string": "DiPS on the other hand provides generation having high structural variations without compromising on fidelity."
                    },
                    {
                        "id": 173,
                        "string": "The boost in accuracy scores on both the types of classification models is indicative of the importance of using high quality paraphrases for data-augmentation."
                    },
                    {
                        "id": 174,
                        "string": "Conclusion In this paper, we have proposed DiPS, a model which generates high quality paraphrases by maximizing a novel submodular objective function designed specifically for paraphrasing."
                    },
                    {
                        "id": 175,
                        "string": "In contrast to prior works which focus exclusively either on fidelity or diversity, a submodular function based approach offers a large degree of freedom to control fidelity and diversity."
                    },
                    {
                        "id": 176,
                        "string": "Through extensive experiments on multiple standard datasets, we have demonstrated the effectiveness of our approach over numerous baselines."
                    },
                    {
                        "id": 177,
                        "string": "We observe that the diverse paraphrases generated are not only interesting and meaning preserving, but are also helpful in data augmentation."
                    },
                    {
                        "id": 178,
                        "string": "We showcase that using multiple settings on the task of intent and question classification."
                    },
                    {
                        "id": 179,
                        "string": "We hope that our approach will be useful not only for paraphrase generation and data augmentation, but also for other NLG problems in conversational agents and text summarization."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 10,
                        "end": 21
                    },
                    {
                        "section": "We introduce Diverse Paraphraser using",
                        "n": "1.",
                        "start": 22,
                        "end": 44
                    },
                    {
                        "section": "Background: Submodularity",
                        "n": "3",
                        "start": 45,
                        "end": 61
                    },
                    {
                        "section": "Methodology",
                        "n": "4",
                        "start": 62,
                        "end": 72
                    },
                    {
                        "section": "Overview",
                        "n": "4.1",
                        "start": 73,
                        "end": 82
                    },
                    {
                        "section": "Monotone Submodular Objectives",
                        "n": "4.2",
                        "start": 83,
                        "end": 118
                    },
                    {
                        "section": "Datasets",
                        "n": "5.1",
                        "start": 119,
                        "end": 120
                    },
                    {
                        "section": "Baseline",
                        "n": "5.2",
                        "start": 121,
                        "end": 123
                    },
                    {
                        "section": "Intrinsic Evaluation",
                        "n": "5.3",
                        "start": 124,
                        "end": 125
                    },
                    {
                        "section": "Diversity:",
                        "n": "2.",
                        "start": 126,
                        "end": 134
                    },
                    {
                        "section": "Data-Augmentation",
                        "n": "5.4",
                        "start": 135,
                        "end": 139
                    },
                    {
                        "section": "LSTM:",
                        "n": "2.",
                        "start": 140,
                        "end": 141
                    },
                    {
                        "section": "Setup",
                        "n": "5.5",
                        "start": 142,
                        "end": 155
                    },
                    {
                        "section": "Results",
                        "n": "6",
                        "start": 156,
                        "end": 159
                    },
                    {
                        "section": "Intrinsic Evaluation",
                        "n": "6.1",
                        "start": 160,
                        "end": 168
                    },
                    {
                        "section": "Data-augmentation",
                        "n": "6.2",
                        "start": 169,
                        "end": 173
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 174,
                        "end": 179
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/995-Table1-1.png",
                        "caption": "Table 1: Sample paraphrases generated by beam search and DiPS (our method). DiPS offers lexically diverse paraphrases without compromising on fidelity.",
                        "page": 0,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 527.04,
                            "y1": 222.23999999999998,
                            "y2": 322.08
                        }
                    },
                    {
                        "filename": "../figure/image/995-Table2-1.png",
                        "caption": "Table 2: Dataset Statistics",
                        "page": 5,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 525.6,
                            "y1": 64.32,
                            "y2": 240.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/995-Table3-1.png",
                        "caption": "Table 3: Results on Quora-Div and Twitter dataset. Higher↑ BLEU and METEOR score is better whereas lower↓ TERp score is better. Please see Section 6 for details.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 67.2,
                            "y2": 163.2
                        }
                    },
                    {
                        "filename": "../figure/image/995-Figure2-1.png",
                        "caption": "Figure 2: Effect of varying the trade-off coefficient λ in DiPS on various diversity metrics on the Quora dataset.",
                        "page": 6,
                        "bbox": {
                            "x1": 98.88,
                            "x2": 262.08,
                            "y1": 216.48,
                            "y2": 380.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/995-Table4-1.png",
                        "caption": "Table 4: Results on Quora-Div and Twitter dataset. Higher distinct scores imply better lexical diversity. Please see Section 6 for details.",
                        "page": 7,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 525.12,
                            "y1": 67.2,
                            "y2": 155.04
                        }
                    },
                    {
                        "filename": "../figure/image/995-Figure3-1.png",
                        "caption": "Figure 3: Comparison of accuracy scores of two paraphrase recognition models using different augmentation schemes (Quora-PR). Both LogReg and SiameseLSTM achieve the highest boost in performance when augmented with samples generated using DiPS",
                        "page": 7,
                        "bbox": {
                            "x1": 316.8,
                            "x2": 516.0,
                            "y1": 207.84,
                            "y2": 383.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/995-Table5-1.png",
                        "caption": "Table 5: Accuracy scores of two classification models on various data-augmentation schemes. Please see Section 6 for details",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 290.4,
                            "y1": 208.32,
                            "y2": 296.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/995-Figure1-1.png",
                        "caption": "Figure 1: Overview of DiPS during decoding to generate k paraphrases. At each time step, a set of N sequences (V (t)) is used to determine k < N sequences (X∗) via submodular maximization . The above figure illustrates the motivation behind each submodular component. Please see Section 4 for details.",
                        "page": 3,
                        "bbox": {
                            "x1": 121.92,
                            "x2": 475.68,
                            "y1": 62.879999999999995,
                            "y2": 306.24
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-12"
        },
        {
            "slides": {
                "0": {
                    "title": "Neural Question Answering",
                    "text": [
                        "Question: What color is the sky?",
                        "Passage: Air is made mainly from molecules of nitrogen and oxygen.",
                        "These molecules scatter the blue colors of sunlight more effectively than the green and red colors. Therefore, a clean sky appears blue."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "3": {
                    "title": "Open Question Answering",
                    "text": [
                        "Question: What color is the sky?",
                        "Relevant Text Model Answer Span Document Retrieval"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "5": {
                    "title": "Two Possible Approaches",
                    "text": [
                        "Select a single paragraph from the input, and run the model on that paragraph",
                        "Run the model on many paragraphs from the input, and have itassign a confidence score to its results on each paragraph"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "This Work",
                    "text": [
                        "Improve several of the key design decision that arise when training on document-level data",
                        "Study ways to train models to produce correct confidence scores"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "8": {
                    "title": "Pipeline Method Noisy Supervision",
                    "text": [
                        "Document level data can be expected to be distantly supervised:",
                        "Question: Which British general was killed at Khartoum in 1885?",
                        "In February 1884 Gordon returned to the Sudan to evacuate Egyptian forces.",
                        "Rebels broke into the city , killing Gordon and the other defenders. The British public reacted to his death by acclaiming ' Gordon of Khartoum , a saint.",
                        "However, historians have since suggested that Gordon defied orders and.",
                        "Need a training objective that can handle multiple (noisy) answer spans",
                        "Use the summed objective from Kadlec et al (2016), that optimizes the log sum of",
                        "the probability of all answer spans",
                        "Remains agnostic to how probability mass is distributed among the answer spans"
                    ],
                    "page_nums": [
                        9,
                        10
                    ],
                    "images": []
                },
                "12": {
                    "title": "Learning Well Calibrated Confidence Scores",
                    "text": [
                        "Train the model on both answering-containing and non-answering containing",
                        "paragraph and use a modified objective function",
                        "Merge: Concatenate sampled paragraphs together",
                        "No-Answer: Process paragraphs independently, and allow the model to place",
                        "probability mass on a no-answer output",
                        "Sigmoid: Assign an independent probability on each span using the sigmoid",
                        "Shared-Norm: Process paragraphs independently, but compute the span",
                        "probability across spans in all paragraphs"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "15": {
                    "title": "Pipeline Method Results on TriviaQA Web",
                    "text": [
                        "Uses BiDAF as the model",
                        "Select paragraphs by truncating documents",
                        "Select answer-spans randomly EM",
                        "word embeddings (Peters et al., 2017)",
                        "TriviaQA Baseline Our Baseline +TF-IDF +Sum +TF-IDF +Sum +Model +TF-IDF +Sum"
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "16": {
                    "title": "TriviaQA Leaderboard Exact Match Scores",
                    "text": [
                        "Model Web-All Web-Verified Wiki-All Wiki-Verified",
                        "Best leaderboard entry (mingyan)",
                        "Dynamic Integration of Background"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": [
                        "figure/image/996-Figure4-1.png"
                    ]
                },
                "18": {
                    "title": "Building an Open Question Answering System",
                    "text": [
                        "Use Bing web search and a Wikipedia entity linker to locate relevant documents",
                        "Extract the top 12 paragraphs, as found using the linear paragraph ranker",
                        "Use the model trained for TriviaQA Unfiltered to find the final answer"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": [
                        "figure/image/996-Figure2-1.png"
                    ]
                }
            },
            "paper_title": "Simple and Effective Multi-Paragraph Reading Comprehension",
            "paper_id": "996",
            "paper": {
                "title": "Simple and Effective Multi-Paragraph Reading Comprehension",
                "abstract": "We introduce a method of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Most current question answering models cannot scale to document or multi-document input, and naively applying these models to each paragraph independently often results in them being distracted by irrelevant text. We show that it is possible to significantly improve performance by using a modified training scheme that teaches the model to ignore non-answer containing paragraphs. Our method involves sampling multiple paragraphs from each document, and using an objective function that requires the model to produce globally correct output. We additionally identify and improve upon a number of other design decisions that arise when working with document-level data. Experiments on TriviaQA and SQuAD shows our method advances the state of the art, including a 10 point gain on TriviaQA.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Teaching machines to answer arbitrary usergenerated questions is a long-term goal of natural language processing."
                    },
                    {
                        "id": 1,
                        "string": "For a wide range of questions, existing information retrieval methods are capable of locating documents that are likely to contain the answer."
                    },
                    {
                        "id": 2,
                        "string": "However, automatically extracting the answer from those texts remains an open challenge."
                    },
                    {
                        "id": 3,
                        "string": "The recent success of neural models at answering questions given a related paragraph (Wang et al., 2017c; Tan et al., 2017) suggests they have the potential to be a key part of * Work completed while interning at the Allen Institute for Artificial Intelligence a solution to this problem."
                    },
                    {
                        "id": 4,
                        "string": "Most neural models are unable to scale beyond short paragraphs, so typically this requires adapting a paragraph-level model to process document-level input."
                    },
                    {
                        "id": 5,
                        "string": "There are two basic approaches to this task."
                    },
                    {
                        "id": 6,
                        "string": "Pipelined approaches select a single paragraph from the input documents, which is then passed to the paragraph model to extract an answer (Joshi et al., 2017; Wang et al., 2017a) ."
                    },
                    {
                        "id": 7,
                        "string": "Confidence based methods apply the model to multiple paragraphs and return the answer with the highest confidence (Chen et al., 2017a) ."
                    },
                    {
                        "id": 8,
                        "string": "Confidence methods have the advantage of being robust to errors in the (usually less sophisticated) paragraph selection step, however they require a model that can produce accurate confidence scores for each paragraph."
                    },
                    {
                        "id": 9,
                        "string": "As we shall show, naively trained models often struggle to meet this requirement."
                    },
                    {
                        "id": 10,
                        "string": "In this paper we start by proposing an improved pipelined method which achieves state-of-the-art results."
                    },
                    {
                        "id": 11,
                        "string": "Then we introduce a method for training models to produce accurate per-paragraph confidence scores, and we show how combining this method with multiple paragraph selection further increases performance."
                    },
                    {
                        "id": 12,
                        "string": "Our pipelined method focuses on addressing the challenges that come with training on documentlevel data."
                    },
                    {
                        "id": 13,
                        "string": "We use a linear classifier to select which paragraphs to train and test on."
                    },
                    {
                        "id": 14,
                        "string": "Since annotating entire documents is expensive, data of this sort is typically distantly supervised, meaning only the answer text, not the answer spans, are known."
                    },
                    {
                        "id": 15,
                        "string": "To handle the noise this creates, we use a summed objective function that marginalizes the model's output over all locations the answer text occurs."
                    },
                    {
                        "id": 16,
                        "string": "We apply this approach with a model design that integrates some recent ideas in reading comprehension models, including selfattention (Cheng et al., 2016) and bi-directional attention (Seo et al., 2016) ."
                    },
                    {
                        "id": 17,
                        "string": "Our confidence method extends this approach to better handle the multi-paragraph setting."
                    },
                    {
                        "id": 18,
                        "string": "Previous approaches trained the model on questions paired with paragraphs that are known a priori to contain the answer."
                    },
                    {
                        "id": 19,
                        "string": "This has several downsides: the model is not trained to produce low confidence scores for paragraphs that do not contain an answer, and the training objective does not require confidence scores to be comparable between paragraphs."
                    },
                    {
                        "id": 20,
                        "string": "We resolve these problems by sampling paragraphs from the context documents, including paragraphs that do not contain an answer, to train on."
                    },
                    {
                        "id": 21,
                        "string": "We then use a shared-normalization objective where paragraphs are processed independently, but the probability of an answer candidate is marginalized over all paragraphs sampled from the same document."
                    },
                    {
                        "id": 22,
                        "string": "This requires the model to produce globally correct output even though each paragraph is processed independently."
                    },
                    {
                        "id": 23,
                        "string": "We evaluate our work on TriviaQA (Joshi et al., 2017) in the wiki, web, and unfiltered setting."
                    },
                    {
                        "id": 24,
                        "string": "Our model achieves a nearly 10 point lead over published prior work."
                    },
                    {
                        "id": 25,
                        "string": "We additionally perform an ablation study on our pipelined method, and we show the effectiveness of our multi-paragraph methods on a modified version of SQuAD (Rajpurkar et al., 2016) where only the correct document, not the correct paragraph, is known."
                    },
                    {
                        "id": 26,
                        "string": "Finally, we combine our model with a web search backend to build a demonstration end-to-end QA system 1 , and show it performs well on questions from the TREC question answering task (Voorhees et al., 1999) ."
                    },
                    {
                        "id": 27,
                        "string": "We release our code 2 to facilitate future work."
                    },
                    {
                        "id": 28,
                        "string": "Pipelined Method In this section we propose a pipelined QA system, where a single paragraph is selected and passed to a paragraph-level question answering model."
                    },
                    {
                        "id": 29,
                        "string": "Paragraph Selection If there is a single source document, we select the paragraph with the smallest TF-IDF cosine distance with the question."
                    },
                    {
                        "id": 30,
                        "string": "Document frequencies are computed using the individual paragraphs within the document."
                    },
                    {
                        "id": 31,
                        "string": "If there are multiple input documents, we found it beneficial to use a linear classifier that uses the same TF-IDF score, whether the paragraph was the first in its document, how many tokens preceded it, and the number of question words it includes as features."
                    },
                    {
                        "id": 32,
                        "string": "The classifier is trained on the distantly supervised objective of selecting paragraphs that contain at least one answer span."
                    },
                    {
                        "id": 33,
                        "string": "On TriviaQA web, relative to truncating the document as done by prior work, this improves the chance of the selected text containing the correct answer from 83.1% to 85.1%."
                    },
                    {
                        "id": 34,
                        "string": "Handling Noisy Labels Question: Which British general was killed at Khartoum in 1885?"
                    },
                    {
                        "id": 35,
                        "string": "Answer: Gordon Context: In February 1885 Gordon returned to the Sudan to evacuate Egyptian forces."
                    },
                    {
                        "id": 36,
                        "string": "Khartoum came under siege the next month and rebels broke into the city, killing Gordon and the other defenders."
                    },
                    {
                        "id": 37,
                        "string": "The British public reacted to his death by acclaiming 'Gordon of Khartoum', a saint."
                    },
                    {
                        "id": 38,
                        "string": "However, historians have suggested that Gordon..."
                    },
                    {
                        "id": 39,
                        "string": "Figure 1 : Noisy supervision can cause many spans of text that contain the answer, but are not situated in a context that relates to the question (red), to distract the model from learning from more relevant spans (green)."
                    },
                    {
                        "id": 40,
                        "string": "In a distantly supervised setup we label all text spans that match the answer text as being correct."
                    },
                    {
                        "id": 41,
                        "string": "This can lead to training the model to select unwanted answer spans."
                    },
                    {
                        "id": 42,
                        "string": "Figure 1 contains an example."
                    },
                    {
                        "id": 43,
                        "string": "To handle this difficulty, we use a summed objective function similar to the one from Kadlec et al."
                    },
                    {
                        "id": 44,
                        "string": "(2016) , that optimizes the negative loglikelihood of selecting any correct answer span."
                    },
                    {
                        "id": 45,
                        "string": "The models we consider here work by independently predicting the start and end token of the answer span, so we take this approach for both predictions."
                    },
                    {
                        "id": 46,
                        "string": "For example, the objective for predicting the answer start token becomes − log a∈A p a where A is the set of tokens that start an answer and p i is the answer-start probability predicted by the model for token i."
                    },
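A minimal numpy sketch of this summed objective for the start prediction (standalone and illustrative; the paper computes the same quantity inside a neural model):

```python
import numpy as np

def summed_nll(start_logits, answer_start_positions):
    # negative log of the *total* probability mass placed on all
    # distantly supervised answer-start tokens
    probs = np.exp(start_logits - start_logits.max())
    probs /= probs.sum()                    # softmax over context tokens
    return -np.log(probs[answer_start_positions].sum())

# e.g. two candidate "Gordon" mentions starting at tokens 3 and 17:
# loss = summed_nll(logits, [3, 17])
```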
                    {
                        "id": 47,
                        "string": "This objective has the advantage of being agnostic to how the model distributes probability mass across the possible answer spans, allowing the model to focus on only the most relevant spans."
                    },
                    {
                        "id": 48,
                        "string": "Model We use a model with the following layers (shown in Figure 2 ): Embedding: We embed words using pretrained word vectors."
                    },
                    {
                        "id": 49,
                        "string": "We concatenate these with character-derived word embeddings, which are produced by embedding characters using a learned embedding matrix and then applying a convolutional neural network and max-pooling."
                    },
                    {
                        "id": 50,
                        "string": "Pre-Process: A shared bi-directional GRU (Cho et al., 2014) is used to process the question and passage embeddings."
                    },
                    {
                        "id": 51,
                        "string": "Attention: The attention mechanism from the Bi-Directional Attention Flow (BiDAF) model (Seo et al., 2016) is used to build a queryaware context representation."
                    },
                    {
                        "id": 52,
                        "string": "Let h i and q j be the vector for context word i and question word j, and n q and n c be the lengths of the question and context respectively."
                    },
                    {
                        "id": 53,
                        "string": "We compute attention between context word i and question word j as: a ij = w 1 · h i + w 2 · q j + w 3 · (h i q j ) where w 1 , w 2 , and w 3 are learned vectors and is element-wise multiplication."
                    },
                    {
                        "id": 54,
                        "string": "We then compute an attended vector c i for each context token as: p ij = e a ij nq j=1 e a ij c i = nq j=1 q j p ij We also compute a query-to-context vector q c : m i = max 1≤j≤nq a ij p i = e m i nc i=1 e m i q c = nc i=1 h i p i The final vector for each token is built by concatenating h i , c i , h i c i , and q c c i ."
                    },
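A minimal numpy sketch of this attention layer; `h` is the (n_c, d) matrix of context vectors, `q` the (n_q, d) matrix of question vectors, and `w1`, `w2`, `w3` the learned vectors:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(h, q, w1, w2, w3):
    # a_ij = w1.h_i + w2.q_j + w3.(h_i * q_j), shape (n_c, n_q)
    a = h @ w1[:, None] + (q @ w2)[None, :] + (h * w3) @ q.T
    c = softmax(a, axis=1) @ q                  # context-to-query vectors c_i
    p = softmax(a.max(axis=1), axis=0)          # query-to-context weights p_i
    qc = np.broadcast_to(p @ h, h.shape)        # single q_c, tiled per token
    return np.concatenate([h, c, h * c, qc * c], axis=1)   # (n_c, 4d)
```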
                    {
                        "id": 55,
                        "string": "In our model we subsequently pass the result through a linear layer with ReLU activations."
                    },
                    {
                        "id": 56,
                        "string": "Self-Attention: Next we use a layer of residual self-attention."
                    },
                    {
                        "id": 57,
                        "string": "The input is passed through another bi-directional GRU."
                    },
                    {
                        "id": 58,
                        "string": "Then we apply the same attention mechanism, only now between the passage and itself."
                    },
                    {
                        "id": 59,
                        "string": "In this case we do not use query-tocontext attention and we set a ij = −inf if i = j."
                    },
                    {
                        "id": 60,
                        "string": "As before, we pass the concatenated output through a linear layer with ReLU activations."
                    },
                    {
                        "id": 61,
                        "string": "The result is then summed with the original input."
                    },
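                    {
                        "id": "61a",
                        "string": "Putting these steps together, a residual self-attention block might look as follows (a sketch only; the preceding bi-directional GRU is elided, `linear` is an assumed torch.nn.Linear(3d, d), and concatenating without the query-to-context terms is our reading of the text):\n\nimport torch\n\ndef self_attention_block(x, w1, w2, w3, linear):\n    # x: (n_c, d) passage representations after the bi-directional GRU\n    a = (x @ w1)[:, None] + (x @ w2)[None, :] + (x * w3) @ x.T\n    a.fill_diagonal_(float('-inf'))     # a_ij = -inf when i == j\n    c = torch.softmax(a, dim=1) @ x     # the passage attends to itself\n    out = torch.relu(linear(torch.cat([x, c, x * c], dim=1)))\n    return x + out                      # summed with the original input"
                    },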
                    {
                        "id": 62,
                        "string": "Prediction: In the last layer of our model a bidirectional GRU is applied, followed by a linear layer to compute answer start scores for each token."
                    },
                    {
                        "id": 63,
                        "string": "The hidden states are concatenated with the input and fed into a second bi-directional GRU and linear layer to predict answer end scores."
                    },
                    {
                        "id": 64,
                        "string": "The softmax function is applied to the start and end scores to produce answer start and end probabilities."
                    },
                    {
                        "id": 65,
                        "string": "Dropout: We apply variational dropout (Gal and Ghahramani, 2016) to the input to all the GRUs and the input to the attention mechanisms at a rate of 0.2."
                    },
                    {
                        "id": 66,
                        "string": "Confidence Method We adapt this model to the multi-paragraph setting by using the un-normalized and un-exponentiated (i.e., before the softmax operator is applied) score given to each span as a measure of the model's confidence."
                    },
                    {
                        "id": 67,
                        "string": "For the boundary-based models we use here, a span's score is the sum of the start and end score given to its start and end token."
                    },
                    {
                        "id": 68,
                        "string": "At test time we run the model on each paragraph and select the answer span with the highest confidence."
                    },
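                    {
                        "id": "68a",
                        "string": "In Python, this test-time procedure amounts to the following sketch (names are illustrative; `max_len` is the span-length limit):\n\ndef best_span(start_scores, end_scores, max_len):\n    # return (score, (i, j)) for the highest-scoring span of length <= max_len\n    best = (float('-inf'), (0, 0))\n    for i in range(len(start_scores)):\n        for j in range(i, min(i + max_len, len(end_scores))):\n            s = start_scores[i] + end_scores[j]\n            if s > best[0]:\n                best = (s, (i, j))\n    return best\n\n# run the model on each paragraph, then keep the most confident span overall:\n# final = max((best_span(s, e, max_len) for s, e in per_paragraph_scores), key=lambda t: t[0])"
                    },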
                    {
                        "id": 69,
                        "string": "This is the approach taken by Chen et al."
                    },
                    {
                        "id": 70,
                        "string": "(2017a) ."
                    },
                    {
                        "id": 71,
                        "string": "Our experiments in Section 5 show that these confidence scores can be very poor if the model is only trained on answer-containing paragraphs, as done by prior work."
                    },
                    {
                        "id": 72,
                        "string": "Table 1 contains some qualitative examples of the errors that occur."
                    },
                    {
                        "id": 73,
                        "string": "We hypothesize that there are two key sources of error."
                    },
                    {
                        "id": 74,
                        "string": "First, for models trained with the softmax objective, the pre-softmax scores for all spans can be arbitrarily increased or decreased by a constant value without changing the resulting softmax probability distribution."
                    },
                    {
                        "id": 75,
                        "string": "As a result, nothing prevents models from producing scores that are arbitrarily all larger or all smaller for one paragraph than another."
                    },
                    {
                        "id": 76,
                        "string": "[Table 1, row 1: \"...one 2001 study finding a quarter square kilometer (62 acres) of Ecuadorian rainforest supports more than 1,100 tree species\" (correct, lower confidence) vs. \"The affected region was approximately 1,160,000 square miles (3,000,000 km2) of rainforest, compared to 734,000 square miles\" (incorrect, higher confidence).]"
                    },
                    {
                        "id": 77,
                        "string": "[Table 1, row 2: \"Who was Warsz?\" -- \"...In actuality, Warsz was a 12th/13th century nobleman who owned a village located at the modern...\" vs. \"One of the most famous people born in Warsaw was Maria Sklodowska-Curie, who achieved international...\"]"
                    },
                    {
                        "id": 78,
                        "string": "[Table 1, row 3: \"How much did the initial LM weight in kg?\" -- \"The initial LM model weighed approximately 33,300 pounds (15,000 kg), and...\" vs. \"The module was 11.42 feet (3.48 m) tall, and weighed approximately 12,250 pounds (5,560 kg)\"]"
                    },
                    {
                        "id": 79,
                        "string": "Table 1: Examples from SQuAD where a model was less confident in a correct extraction from one paragraph (left) than in an incorrect extraction from another (right)."
                    },
                    {
                        "id": 80,
                        "string": "Even if the passage has no correct answer and does not contain any question words, the model assigns high confidence to phrases that match the category the question is asking about."
                    },
                    {
                        "id": 81,
                        "string": "Because the confidence scores are not well-calibrated, this confidence is often higher than the confidence assigned to correct answer spans in different paragraphs, even when those correct spans have better contextual evidence."
                    },
                    {
                        "id": 82,
                        "string": "Second, if the model only sees paragraphs that contain answers, it might become too confident in heuristics or patterns that are only effective when it is known a priori that an answer exists."
                    },
                    {
                        "id": 83,
                        "string": "For example, the model might become too reliant on selecting answers that match semantic type the question is asking about, causing it be easily distracted by other entities of that type when they appear in irrelevant text."
                    },
                    {
                        "id": 84,
                        "string": "This kind of error has also been observed when distractor sentences are added to the context (Jia and Liang, 2017) We experiment with four approaches to training models to produce comparable confidence scores, shown in the following subsections."
                    },
                    {
                        "id": 85,
                        "string": "In all cases we will sample paragraphs that do not contain an answer as additional training points."
                    },
                    {
                        "id": 86,
                        "string": "Shared-Normalization In this approach a modified objective function is used where span start and end scores are normalized across all paragraphs sampled from the same context."
                    },
                    {
                        "id": 87,
                        "string": "This means that paragraphs from the same context use a shared normalization factor in the final softmax operations."
                    },
                    {
                        "id": 88,
                        "string": "We train on this objective by including multiple paragraphs from the same context in each mini-batch."
                    },
                    {
                        "id": 89,
                        "string": "The key idea is that this will force the model to produce scores that are comparable between paragraphs, even though it does not have access to information about what other paragraphs are being considered."
                    },
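                    {
                        "id": "89a",
                        "string": "A sketch of the shared-normalization start loss (assuming per-paragraph score and answer-mask tensors for one question; not the authors' code):\n\nimport torch\n\ndef shared_norm_start_loss(score_list, mask_list):\n    scores = torch.cat(score_list)      # the softmax is shared across all paragraphs\n    mask = torch.cat(mask_list)\n    log_p = torch.log_softmax(scores, dim=-1)\n    return -torch.logsumexp(log_p[mask], dim=-1)"
                    },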
                    {
                        "id": 90,
                        "string": "Merge As an alternative to the previous method, we experiment with concatenating all paragraphs sam-pled from the same context together during training."
                    },
                    {
                        "id": 91,
                        "string": "A paragraph separator token with a learned embedding is added before each paragraph."
                    },
                    {
                        "id": 92,
                        "string": "No-Answer Option We also experiment with allowing the model to select a special \"no-answer\" option for each paragraph."
                    },
                    {
                        "id": 93,
                        "string": "First we re-write our objective as: − log e sa n i=1 e s i − log e g b n j=1 e g j = − log e sa+g b n i=1 n j=1 e s i +g j where s j and g j are the scores for the start and end bounds produced by the model for token j, and a and b are the correct start and end tokens."
                    },
                    {
                        "id": 94,
                        "string": "We have the model compute another score, z, to represent the weight given to a \"no-answer\" possibility."
                    },
                    {
                        "id": 95,
                        "string": "Our revised objective function becomes: − log (1 − δ)e z + δe sa+g b e z + n i=1 n j=1 e s i +g j where δ is 1 if an answer exists and 0 otherwise."
                    },
                    {
                        "id": 96,
                        "string": "If there are multiple answer spans we use the same objective, except the numerator includes the summation over all answer start and end tokens."
                    },
                    {
                        "id": 97,
                        "string": "We compute z by adding an extra layer at the end of our model."
                    },
                    {
                        "id": 98,
                        "string": "We build input vectors by taking the summed hidden states of the RNNs used to predict the start/end token scores weighed by the start/end probabilities, and using a learned attention vector on the output of the self-attention layer."
                    },
                    {
                        "id": 99,
                        "string": "These vectors are fed into a two layer network with an 80 dimensional hidden layer and ReLU activations that produces z as its only output."
                    },
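                    {
                        "id": "99a",
                        "string": "The revised objective can be sketched as follows (illustrative; `answer_mask` marks valid (start, end) pairs and z is assumed to be a 0-dim tensor):\n\nimport torch\n\ndef no_answer_loss(s, g, z, answer_mask, has_answer):\n    # s, g: (n,) start/end scores; z: scalar no-answer score\n    pair = s[:, None] + g[None, :]      # s_i + g_j for every candidate span\n    denom = torch.logsumexp(torch.cat([pair.reshape(-1), z.reshape(1)]), dim=0)\n    num = torch.logsumexp(pair[answer_mask], dim=0) if has_answer else z\n    return denom - num                  # equals -log(numerator / denominator)"
                    },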
                    {
                        "id": 100,
                        "string": "Sigmoid As a final baseline, we consider training models with the sigmoid loss objective function."
                    },
                    {
                        "id": 101,
                        "string": "That is, we compute a start/end probability for each token by applying the sigmoid function to the start/end scores of each token."
                    },
                    {
                        "id": 102,
                        "string": "A cross entropy loss is used on each individual probability."
                    },
                    {
                        "id": 103,
                        "string": "The intuition is that, since the scores are being evaluated independently of one another, they are more likely to be comparable between different paragraphs."
                    },
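                    {
                        "id": "103a",
                        "string": "A sketch of this baseline (hypothetical names; the labels are per-token 0/1 indicators):\n\nimport torch.nn.functional as F\n\ndef sigmoid_loss(start_scores, end_scores, start_labels, end_labels):\n    # independent binary cross-entropy on every token's start and end score\n    return (F.binary_cross_entropy_with_logits(start_scores, start_labels)\n            + F.binary_cross_entropy_with_logits(end_scores, end_labels))"
                    },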
                    {
                        "id": 104,
                        "string": "Experimental Setup Datasets We evaluate our approach on four datasets: Triv-iaQA unfiltered (Joshi et al., 2017) , a dataset of questions from trivia databases paired with documents found by completing a web search of the questions; TriviaQA wiki, the same dataset but only including Wikipedia articles; TriviaQA web, a dataset derived from TriviaQA unfiltered by treating each question-document pair where the document contains the question answer as an individual training point; and SQuAD (Rajpurkar et al., 2016) , a collection of Wikipedia articles and crowdsourced questions."
                    },
                    {
                        "id": 105,
                        "string": "Preprocessing We note that for TriviaQA web we do not subsample as was done by Joshi et al."
                    },
                    {
                        "id": 106,
                        "string": "(2017) , instead training on the all 530k training examples."
                    },
                    {
                        "id": 107,
                        "string": "We also observe that TriviaQA documents often contain many small paragraphs, so we restructure the documents by merging consecutive paragraphs together up to a target size."
                    },
                    {
                        "id": 108,
                        "string": "We use a maximum paragraph size of 400 unless stated otherwise."
                    },
                    {
                        "id": 109,
                        "string": "Paragraph separator tokens with learned embeddings are added between merged paragraphs to preserve formatting information."
                    },
                    {
                        "id": 110,
                        "string": "We are also careful to mark all spans of text that would be considered an exact match by the official evaluation script, which includes some minor text pre-processing, as answer spans, not just spans that are an exact string match with the answer text."
                    },
                    {
                        "id": 111,
                        "string": "Sampling Our confidence-based approaches are trained by sampling paragraphs from the context during training."
                    },
                    {
                        "id": 112,
                        "string": "For SQuAD and TriviaQA web we take Model EM F1 baseline (Joshi et al., 2017) the top four paragraphs as judged by our paragraph ranking function (see Section 2.1)."
                    },
                    {
                        "id": 113,
                        "string": "We sample two different paragraphs from those four each epoch to train on."
                    },
                    {
                        "id": 114,
                        "string": "Since we observe that the higherranked paragraphs are more likely to contain the context needed to answer the question, we sample the highest ranked paragraph that contains an answer twice as often as the others."
                    },
                    {
                        "id": 115,
                        "string": "For the merge and shared-norm approaches, we additionally require that at least one of the paragraphs contains an answer span, and both of those paragraphs are included in the same mini-batch."
                    },
                    {
                        "id": 116,
                        "string": "For TriviaQA wiki we repeat the process but use the top 8 paragraphs, and for TriviaQA unfiltered we use the top 16, because much more context is given in these settings."
                    },
                    {
                        "id": 117,
                        "string": "Implementation We train the model with the Adadelta optimizer (Zeiler, 2012) with a batch size 60 for Triv-iaQA and 45 for SQuAD."
                    },
                    {
                        "id": 118,
                        "string": "At test time we select the most probable answer span of length less than or equal to 8 for TriviaQA and 17 for SQuAD."
                    },
                    {
                        "id": 119,
                        "string": "The GloVe 300 dimensional word vectors released by Pennington et al."
                    },
                    {
                        "id": 120,
                        "string": "(2014) are used for word embeddings."
                    },
                    {
                        "id": 121,
                        "string": "On SQuAD, we use a dimensionality of size 100 for the GRUs and of size 200 for the linear layers employed after each attention mechanism."
                    },
                    {
                        "id": 122,
                        "string": "We found for TriviaQA, likely because there is more data, using a larger dimensionality of 140 for each GRU and 280 for the linear layers is beneficial."
                    },
                    {
                        "id": 123,
                        "string": "During training, we maintain an exponential moving average of the weights with a decay rate of 0.999."
                    },
                    {
                        "id": 124,
                        "string": "We use the weight averages at test time."
                    },
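                    {
                        "id": "124a",
                        "string": "The exponential moving average can be sketched as follows (an assumption about the exact bookkeeping, not the authors' code; `params` are torch parameter tensors):\n\nclass EMA:\n    def __init__(self, params, decay=0.999):\n        self.decay = decay\n        self.shadow = [p.detach().clone() for p in params]\n    def update(self, params):\n        # shadow <- decay * shadow + (1 - decay) * param, called after each training step;\n        # the shadow weights are swapped in at test time\n        for s, p in zip(self.shadow, params):\n            s.mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)"
                    },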
                    {
                        "id": 125,
                        "string": "We do not update the word vectors during training."
                    },
                    {
                        "id": 126,
                        "string": "Results TriviaQA Web and TriviaQA Wiki First, we do an ablation study on TriviaQA web to show the effects of our proposed methods for our pipeline model."
                    },
                    {
                        "id": 127,
                        "string": "We start with a baseline following the one used by Joshi et al."
                    },
                    {
                        "id": 128,
                        "string": "(2017) ."
                    },
                    {
                        "id": 129,
                        "string": "This   system uses BiDAF (Seo et al., 2016) as the paragraph model, and selects a random answer span from each paragraph each epoch to train on."
                    },
                    {
                        "id": 130,
                        "string": "The first 400 tokens of each document are used during training, and the first 800 during testing."
                    },
                    {
                        "id": 131,
                        "string": "When using the TF-IDF paragraph selection approach, we instead break the documents into paragraphs of size 400 when training and 800 when testing, and select the top-ranked paragraph to feed into the model."
                    },
                    {
                        "id": 132,
                        "string": "As shown in Table 2 , our baseline outperforms the results reported by Joshi et al."
                    },
                    {
                        "id": 133,
                        "string": "(2017) significantly, likely because we are not subsampling the data."
                    },
                    {
                        "id": 134,
                        "string": "We find both TF-IDF ranking and the sum objective to be effective."
                    },
                    {
                        "id": 135,
                        "string": "Using our refined model increases the gain by another 4 points."
                    },
                    {
                        "id": 136,
                        "string": "Next we show the results of our confidencebased approaches."
                    },
                    {
                        "id": 137,
                        "string": "For this comparison we split documents into paragraphs of at most 400 tokens, and rank them using TF-IDF cosine distance."
                    },
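                    {
                        "id": "137a",
                        "string": "A sketch of such TF-IDF ranking using scikit-learn (our choice of library; the paper only specifies TF-IDF cosine distance):\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_distances\n\ndef rank_paragraphs(question, paragraphs):\n    tfidf = TfidfVectorizer(strip_accents='unicode', stop_words='english')\n    vecs = tfidf.fit_transform([question] + paragraphs)\n    dists = cosine_distances(vecs[0:1], vecs[1:]).ravel()\n    # lower cosine distance = more relevant; return paragraph indices, best first\n    return sorted(range(len(paragraphs)), key=lambda i: dists[i])"
                    },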
                    {
                        "id": 138,
                        "string": "Then we measure the performance of our proposed approaches as the model is used to independently process an increasing number of these paragraphs, and the highest confidence answer is selected as the final output."
                    },
                    {
                        "id": 139,
                        "string": "The results are shown in Figure 3 ."
                    },
                    {
                        "id": 140,
                        "string": "On this dataset even the model trained without any of the proposed training methods (\"none\") im- Figure 4 : Results for our confidence methods on TriviaQA unfiltered."
                    },
                    {
                        "id": 141,
                        "string": "The shared-norm approach is the strongest, while the baseline model starts to lose performance as more paragraphs are used."
                    },
                    {
                        "id": 142,
                        "string": "proves as more paragraphs are used, showing it does a passable job at focusing on the correct paragraph."
                    },
                    {
                        "id": 143,
                        "string": "The no-answer option training approach lead to a significant improvement, and the sharednorm and merge approaches are even better."
                    },
                    {
                        "id": 144,
                        "string": "We use the shared-norm approach for evaluation on the TriviaQA test sets."
                    },
                    {
                        "id": 145,
                        "string": "We found that increasing the paragraph size to 800 at test time, and to 600 during training, was slightly beneficial, allowing our model to reach 66.04 EM and 70.98 F1 on the dev set."
                    },
                    {
                        "id": 146,
                        "string": "As shown in Table 3 , our model is firmly ahead of prior work on both the TriviaQA web and TriviaQA wiki test sets."
                    },
                    {
                        "id": 147,
                        "string": "Since our submission, a few additional entries have been added to the public leader for this dataset 5 , although to the best of our knowledge these results have not yet been published."
                    },
                    {
                        "id": 148,
                        "string": "TriviaQA Unfiltered Next we apply our confidence methods to Trivi-aQA unfiltered."
                    },
                    {
                        "id": 149,
                        "string": "This dataset is of particular interest because the system is not told which document contains the answer, so it provides a plausible simulation of answering a question using a document Figure 5 : Results for our confidence methods on document-level SQuAD."
                    },
                    {
                        "id": 150,
                        "string": "The shared-norm model is the only model that does not lose performance when exposed to large numbers of paragraphs."
                    },
                    {
                        "id": 151,
                        "string": "retrieval system."
                    },
                    {
                        "id": 152,
                        "string": "We show the same graph as before for this dataset in Figure 4 ."
                    },
                    {
                        "id": 153,
                        "string": "Our methods have an even larger impact on this dataset, probably because there are many more relevant and irrelevant paragraphs for each question, making paragraph selection more important."
                    },
                    {
                        "id": 154,
                        "string": "Note the naively trained model starts to lose performance as more paragraphs are used, showing that errors are being caused by the model being overly confident in incorrect extractions."
                    },
                    {
                        "id": 155,
                        "string": "We achieve a score of 61.55 EM and 67.61 F1 on the dev set."
                    },
                    {
                        "id": 156,
                        "string": "This advances the only prior result reported for this dataset, 50.6 EM and 57.3 F1 from Wang et al."
                    },
                    {
                        "id": 157,
                        "string": "(2017b) , by 10 points."
                    },
                    {
                        "id": 158,
                        "string": "SQuAD We additionally evaluate our model on SQuAD."
                    },
                    {
                        "id": 159,
                        "string": "SQuAD questions were not built to be answered independently of their context paragraph, which makes it unclear how effective of an evaluation tool they can be for document-level question answering."
                    },
                    {
                        "id": 160,
                        "string": "To assess this we manually label 500 random questions from the training set."
                    },
                    {
                        "id": 161,
                        "string": "We categorize questions as: 1."
                    },
                    {
                        "id": 162,
                        "string": "Context-independent, meaning it can be understood independently of the paragraph."
                    },
                    {
                        "id": 163,
                        "string": "2."
                    },
                    {
                        "id": 164,
                        "string": "Document-dependent, meaning it can be understood given the article's title."
                    },
                    {
                        "id": 165,
                        "string": "For example, \"What individual is the school named after?\""
                    },
                    {
                        "id": 166,
                        "string": "for the document \"Harvard University\"."
                    },
                    {
                        "id": 167,
                        "string": "3."
                    },
                    {
                        "id": 168,
                        "string": "Paragraph-dependent, meaning it can only be understood given its paragraph."
                    },
                    {
                        "id": 169,
                        "string": "For example, \"What was the first step in the reforms?\"."
                    },
                    {
                        "id": 170,
                        "string": "We find 67.4% of the questions to be contextindependent, 22.6% to be document-dependent, and the remaining 10% to be paragraphdependent."
                    },
                    {
                        "id": 171,
                        "string": "There are many document-dependent questions because questions are frequently about the subject of the document."
                    },
                    {
                        "id": 172,
                        "string": "Since a reasonably high fraction of the questions can be understood given the document they are from, and to isolate our analysis from the retrieval mechanism used, we choose to evaluate on the document-level."
                    },
                    {
                        "id": 173,
                        "string": "We build documents by concatenating all the paragraphs in SQuAD from the same article together into a single document."
                    },
                    {
                        "id": 174,
                        "string": "Given the correct paragraph (i.e., in the standard SQuAD setting) our model reaches 72.14 EM and 81.05 F1 and can complete 26 epochs of training in less than five hours."
                    },
                    {
                        "id": 175,
                        "string": "Most of our variations to handle the multi-paragraph setting caused a minor (up to half a point) drop in performance, while the sigmoid version fell behind by a point and a half."
                    },
                    {
                        "id": 176,
                        "string": "We graph the document-level performance in Figure 5 ."
                    },
                    {
                        "id": 177,
                        "string": "For SQuAD, we find it crucial to employ one of the suggested confidence training techniques."
                    },
                    {
                        "id": 178,
                        "string": "The base model starts to drop in performance once more than two paragraphs are used."
                    },
                    {
                        "id": 179,
                        "string": "However, the shared-norm approach is able to reach a peak performance of 72.37 F1 and 64.08 EM given 15 paragraphs."
                    },
                    {
                        "id": 180,
                        "string": "Given our estimate that 10% of the questions are ambiguous if the paragraph is unknown, our approach appears to have adapted to the document-level task very well."
                    },
                    {
                        "id": 181,
                        "string": "Finally, we compare the shared-norm model with the document-level result reported by Chen et al."
                    },
                    {
                        "id": 182,
                        "string": "(2017a) ."
                    },
                    {
                        "id": 183,
                        "string": "We re-evaluate our model using the documents used by Chen et al."
                    },
                    {
                        "id": 184,
                        "string": "(2017a) , which consist of the same Wikipedia articles SQuAD was built from, but downloaded at different dates."
                    },
                    {
                        "id": 185,
                        "string": "The advantage of this dataset is that it does not allow the model to know a priori which paragraphs were filtered out during the construction of SQuAD."
                    },
                    {
                        "id": 186,
                        "string": "The disadvantage is that some of the articles have been edited since the questions were written, so some questions may no longer be answerable."
                    },
                    {
                        "id": 187,
                        "string": "Our model achieves 59.14 EM and 67.34 F1 on this dataset, which significantly outperforms the 49.7 EM reported by Chen et al."
                    },
                    {
                        "id": 188,
                        "string": "(2017a) ."
                    },
                    {
                        "id": 189,
                        "string": "Curated TREC We perform one final experiment that tests our model as part of an end-to-end question answering system."
                    },
                    {
                        "id": 190,
                        "string": "For document retrieval, we re-implement the pipeline from Joshi et al."
                    },
                    {
                        "id": 191,
                        "string": "(2017) ."
                    },
                    {
                        "id": 192,
                        "string": "Given a question, we retrieve up to 10 web documents us-Model Accuracy S-Norm (ours) 53.31 YodaQA with Bing (Baudiš, 2015) , 37.18 YodaQA (Baudiš, 2015) , 34.26 DrQA + DS (Chen et al., 2017a) 25.7 Table 4 : Results on the Curated TREC corpus, Yo-daQA results extracted from its github page 7 ing a Bing web search of the question, and all Wikipedia articles about entities the entity linker TAGME (Ferragina and Scaiella, 2010) identifies in the question."
                    },
                    {
                        "id": 193,
                        "string": "We then use our linear paragraph ranker to select the 16 most relevant paragraphs from all these documents, which are passed to our model to locate the final answer span."
                    },
                    {
                        "id": 194,
                        "string": "We choose to use the shared-norm model trained on the TriviaQA unfiltered dataset since it is trained using multiple web documents as input."
                    },
                    {
                        "id": 195,
                        "string": "We use the same heuristics as Joshi et al."
                    },
                    {
                        "id": 196,
                        "string": "(2017) to filter out trivia or QA websites to ensure questions cannot be trivially answered using webpages that directly address the question."
                    },
                    {
                        "id": 197,
                        "string": "A demo of the system is publicly available 8 ."
                    },
                    {
                        "id": 198,
                        "string": "We find accuracy on the TriviaQA unfiltered questions remains almost unchanged (within half a percent exact match score) when using our document retrieval method instead of the given documents, showing our pipeline does a good job of producing evidence documents that are similar to the ones in the training data."
                    },
                    {
                        "id": 199,
                        "string": "We test the system on questions from the TREC QA tasks (Voorhees et al., 1999) , in particular a curated set of questions from Baudiš (2015) , the same dataset used in Chen et al."
                    },
                    {
                        "id": 200,
                        "string": "(2017a) ."
                    },
                    {
                        "id": 201,
                        "string": "We apply our system to the 694 test questions without retraining on the train questions."
                    },
                    {
                        "id": 202,
                        "string": "We compare against DrQA (Chen et al., 2017a) and YodaQA (Baudiš, 2015) ."
                    },
                    {
                        "id": 203,
                        "string": "It is important to note that these systems use different document corpora (Wikipedia for DrQA, and Wikipedia, several knowledge bases, and optionally Bing web search for YodaQA) and different training data (SQuAD and the TREC training questions for DrQA, and TREC only for YodaQA), so we cannot make assertions about the relative performance of individual components."
                    },
                    {
                        "id": 204,
                        "string": "Nevertheless, it is instructive to show how the methods we experiment with in this work can advance an end-to-end QA system."
                    },
                    {
                        "id": 205,
                        "string": "The results are listed in  racy mark."
                    },
                    {
                        "id": 206,
                        "string": "This is a strong proof-of-concept that neural paragraph reading combined with existing document retrieval methods can advance the stateof-the-art on general question answering."
                    },
                    {
                        "id": 207,
                        "string": "It also shows that, despite the noise, the data from Trivi-aQA is sufficient to train models that can be effective on out-of-domain QA tasks."
                    },
                    {
                        "id": 208,
                        "string": "Discussion We found that models that have only been trained on answer-containing paragraphs can perform very poorly in the multi-paragraph setting."
                    },
                    {
                        "id": 209,
                        "string": "The results were particularly bad for SQuAD; we think this is partly because the paragraphs are shorter, so the model had less exposure to irrelevant text."
                    },
                    {
                        "id": 210,
                        "string": "The shared-norm approach consistently outperformed the other methods, especially on SQuAD and TriviaQA unfiltered, where many paragraphs were needed to reach peak performance."
                    },
                    {
                        "id": 211,
                        "string": "Figures  3, 4 , and 5 show this technique has a minimal effect on the performance when only one paragraph is used, suggesting the model's per-paragraph performance is preserved."
                    },
                    {
                        "id": 212,
                        "string": "Meanwhile, it can be seen the accuracy of the shared-norm model never drops as more paragraphs are added, showing it successfully resolves the problem of being distracted by irrelevant text."
                    },
                    {
                        "id": 213,
                        "string": "The no-answer and merge approaches were moderately effective, we suspect because they at least expose the model to more irrelevant text."
                    },
                    {
                        "id": 214,
                        "string": "However, these methods do not address the fundamental issue of requiring confidence scores to be comparable between independent applications of the model to different paragraphs, which is why we think they lagged behind."
                    },
                    {
                        "id": 215,
                        "string": "The sigmoid objective function reduces the paragraph-level performance considerably, especially on the TriviaQA datasets."
                    },
                    {
                        "id": 216,
                        "string": "We suspect this is because it is vulnerable to label noise, as discussed in Section 2.2."
                    },
                    {
                        "id": 217,
                        "string": "Error Analysis We perform an error analysis by labeling 200 random TriviaQA web dev-set errors made by the shared-norm model."
                    },
                    {
                        "id": 218,
                        "string": "We found 40.5% of the er-rors were caused because the document did not contain sufficient evidence to answer the question, and 17% were caused by the correct answer not being contained in the answer key."
                    },
                    {
                        "id": 219,
                        "string": "The distribution of the remaining errors is shown in Table 5 ."
                    },
                    {
                        "id": 220,
                        "string": "We found quite a few cases where a sentence contained the answer, but the model was unable to extract it due to complex syntactic structure or paraphrasing."
                    },
                    {
                        "id": 221,
                        "string": "Two kinds of multi-sentence reading errors were also common: cases that required connecting multiple statements made in a single paragraph, and long-range coreference cases where a sentence's subject was named in a previous paragraph."
                    },
                    {
                        "id": 222,
                        "string": "Finally, some questions required background knowledge, or required the model to extract answers that were only stated indirectly (e.g., examining a list to extract the nth element)."
                    },
                    {
                        "id": 223,
                        "string": "Overall, these results suggest good avenues for improvement are to continue advancing the sentence and paragraph level reading comprehension abilities of the model, and adding a mechanism to handle document-level coreferences."
                    },
                    {
                        "id": 224,
                        "string": "Related Work Reading Comprehension Datasets."
                    },
                    {
                        "id": 225,
                        "string": "The state of the art in reading comprehension has been rapidly advanced by neural models, in no small part due to the introduction of many large datasets."
                    },
                    {
                        "id": 226,
                        "string": "The first large scale datasets for training neural reading comprehension models used a Cloze-style task, where systems must predict a held out word from a piece of text (Hermann et al., 2015; Hill et al., 2015) ."
                    },
                    {
                        "id": 227,
                        "string": "Additional datasets including SQuAD (Rajpurkar et al., 2016) , WikiReading (Hewlett et al., 2016) , MS Marco (Nguyen et al., 2016) and Triv-iaQA (Joshi et al., 2017) provided more realistic questions."
                    },
                    {
                        "id": 228,
                        "string": "Another dataset of trivia questions, Quasar-T (Dhingra et al., 2017) , was introduced recently that uses ClueWeb09 (Callan et al., 2009) as its source for documents."
                    },
                    {
                        "id": 229,
                        "string": "In this work we choose to focus on SQuAD because it is well studied, and TriviaQA because it is more challenging and features documents and multi-document contexts (Quasar T is similar, but was released after we started work on this project)."
                    },
                    {
                        "id": 230,
                        "string": "Neural Reading Comprehension."
                    },
                    {
                        "id": 231,
                        "string": "Neural reading comprehension systems typically use some form of attention (Wang and Jiang, 2016) , although alternative architectures exist (Chen et al., 2017a; Weissenborn et al., 2017b) ."
                    },
                    {
                        "id": 232,
                        "string": "Our model follows this approach, but includes some recent advances such as variational dropout (Gal and Ghahramani, 2016) and bi-directional attention (Seo et al., 2016) ."
                    },
                    {
                        "id": 233,
                        "string": "Self-attention has been used in several prior works (Cheng et al., 2016; Wang et al., 2017c; Pan et al., 2017) ."
                    },
                    {
                        "id": 234,
                        "string": "Our approach to allowing a reading comprehension model to produce a per-paragraph no-answer score is related to the approach used in the BiDAF-T (Min et al., 2017) model to produce per-sentence classification scores, although we use an attentionbased method instead of max-pooling."
                    },
                    {
                        "id": 235,
                        "string": "Open QA."
                    },
                    {
                        "id": 236,
                        "string": "Open question answering has been the subject of much research, especially spurred by the TREC question answering track (Voorhees et al., 1999) ."
                    },
                    {
                        "id": 237,
                        "string": "Knowledge bases can be used, such as in (Berant et al., 2013) , although the resulting systems are limited by the quality of the knowledge base."
                    },
                    {
                        "id": 238,
                        "string": "Systems that try to answer questions using natural language resources such as YodaQA (Baudiš, 2015) typically use pipelined methods to retrieve related text, build answer candidates, and pick a final output."
                    },
                    {
                        "id": 239,
                        "string": "Neural Open QA."
                    },
                    {
                        "id": 240,
                        "string": "Open question answering with neural models was considered by Chen et al."
                    },
                    {
                        "id": 241,
                        "string": "(2017a) , where researchers trained a model on SQuAD and combined it with a retrieval engine for Wikipedia articles."
                    },
                    {
                        "id": 242,
                        "string": "Our work differs because we focus on explicitly addressing the problem of applying the model to multiple paragraphs."
                    },
                    {
                        "id": 243,
                        "string": "A pipelined approach to QA was recently proposed by Wang et al."
                    },
                    {
                        "id": 244,
                        "string": "(2017a) , where a ranker model is used to select a paragraph for the reading comprehension model to process."
                    },
                    {
                        "id": 245,
                        "string": "More recent work has considered evidence aggregation techniques (Wang et al., 2017b; Swayamdipta et al., 2017) ."
                    },
                    {
                        "id": 246,
                        "string": "Our work shows paragraph-level models that produce well-calibrated confidence scores can effectively exploit large amounts of text without aggregation, although integrating aggregation techniques could further improve our results."
                    },
                    {
                        "id": 247,
                        "string": "Conclusion We have shown that, when using a paragraph-level QA model across multiple paragraphs, our training method of sampling non-answer-containing paragraphs while using a shared-norm objective function can be very beneficial."
                    },
                    {
                        "id": 248,
                        "string": "Combining this with our suggestions for paragraph selection, using the summed training objective, and our model design allows us to advance the state of the art on TriviaQA."
                    },
                    {
                        "id": 249,
                        "string": "As shown by our demo, this work can be directly applied to building deep-learningpowered open question answering systems."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 26
                    },
                    {
                        "section": "Pipelined Method",
                        "n": "2",
                        "start": 27,
                        "end": 28
                    },
                    {
                        "section": "Paragraph Selection",
                        "n": "2.1",
                        "start": 29,
                        "end": 33
                    },
                    {
                        "section": "Handling Noisy Labels",
                        "n": "2.2",
                        "start": 34,
                        "end": 47
                    },
                    {
                        "section": "Model",
                        "n": "2.3",
                        "start": 48,
                        "end": 65
                    },
                    {
                        "section": "Confidence Method",
                        "n": "3",
                        "start": 66,
                        "end": 85
                    },
                    {
                        "section": "Shared-Normalization",
                        "n": "3.1",
                        "start": 86,
                        "end": 89
                    },
                    {
                        "section": "Merge",
                        "n": "3.2",
                        "start": 90,
                        "end": 91
                    },
                    {
                        "section": "No-Answer Option",
                        "n": "3.3",
                        "start": 92,
                        "end": 99
                    },
                    {
                        "section": "Sigmoid",
                        "n": "3.4",
                        "start": 100,
                        "end": 103
                    },
                    {
                        "section": "Datasets",
                        "n": "4.1",
                        "start": 104,
                        "end": 104
                    },
                    {
                        "section": "Preprocessing",
                        "n": "4.2",
                        "start": 105,
                        "end": 110
                    },
                    {
                        "section": "Sampling",
                        "n": "4.3",
                        "start": 111,
                        "end": 116
                    },
                    {
                        "section": "Implementation",
                        "n": "4.4",
                        "start": 117,
                        "end": 125
                    },
                    {
                        "section": "TriviaQA Web and TriviaQA Wiki",
                        "n": "5.1",
                        "start": 126,
                        "end": 147
                    },
                    {
                        "section": "TriviaQA Unfiltered",
                        "n": "5.2",
                        "start": 148,
                        "end": 157
                    },
                    {
                        "section": "SQuAD",
                        "n": "5.3",
                        "start": 158,
                        "end": 188
                    },
                    {
                        "section": "Curated TREC",
                        "n": "5.4",
                        "start": 189,
                        "end": 207
                    },
                    {
                        "section": "Discussion",
                        "n": "5.5",
                        "start": 208,
                        "end": 216
                    },
                    {
                        "section": "Error Analysis",
                        "n": "5.6",
                        "start": 217,
                        "end": 223
                    },
                    {
                        "section": "Related Work",
                        "n": "6",
                        "start": 224,
                        "end": 245
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 246,
                        "end": 249
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/996-Figure4-1.png",
                        "caption": "Figure 4: Results for our confidence methods on TriviaQA unfiltered. The shared-norm approach is the strongest, while the baseline model starts to lose performance as more paragraphs are used.",
                        "page": 5,
                        "bbox": {
                            "x1": 310.08,
                            "x2": 522.24,
                            "y1": 199.68,
                            "y2": 316.32
                        }
                    },
                    {
                        "filename": "../figure/image/996-Figure3-1.png",
                        "caption": "Figure 3: Results on TriviaQA web when applying our models to multiple paragraphs from each document. Most of our training methods improve the model’s ability to utilize more text.",
                        "page": 5,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 287.03999999999996,
                            "y1": 199.68,
                            "y2": 316.32
                        }
                    },
                    {
                        "filename": "../figure/image/996-Table3-1.png",
                        "caption": "Table 3: Published TriviaQA results. Our approach advances the state of the art by about 10 points on these datasets4",
                        "page": 5,
                        "bbox": {
                            "x1": 84.96,
                            "x2": 510.24,
                            "y1": 62.879999999999995,
                            "y2": 145.92
                        }
                    },
                    {
                        "filename": "../figure/image/996-Figure1-1.png",
                        "caption": "Figure 1: Noisy supervision can cause many spans of text that contain the answer, but are not situated in a context that relates to the question (red), to distract the model from learning from more relevant spans (green).",
                        "page": 1,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 522.24,
                            "y1": 205.44,
                            "y2": 321.12
                        }
                    },
                    {
                        "filename": "../figure/image/996-Figure5-1.png",
                        "caption": "Figure 5: Results for our confidence methods on document-level SQuAD. The shared-norm model is the only model that does not lose performance when exposed to large numbers of paragraphs.",
                        "page": 6,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 287.03999999999996,
                            "y1": 64.8,
                            "y2": 179.51999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/996-Figure2-1.png",
                        "caption": "Figure 2: High level outline of our model.",
                        "page": 2,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 62.879999999999995,
                            "y2": 358.08
                        }
                    },
                    {
                        "filename": "../figure/image/996-Table4-1.png",
                        "caption": "Table 4: Results on the Curated TREC corpus, YodaQA results extracted from its github page7",
                        "page": 7,
                        "bbox": {
                            "x1": 86.88,
                            "x2": 273.12,
                            "y1": 61.44,
                            "y2": 114.24
                        }
                    },
                    {
                        "filename": "../figure/image/996-Table5-1.png",
                        "caption": "Table 5: Error analysis on TriviaQA web.",
                        "page": 7,
                        "bbox": {
                            "x1": 324.96,
                            "x2": 506.4,
                            "y1": 61.44,
                            "y2": 132.0
                        }
                    },
                    {
                        "filename": "../figure/image/996-Table1-1.png",
                        "caption": "Table 1: Examples from SQuAD where a model was less confident in a correct extraction from one paragraph (left) than in an incorrect extraction from another (right). Even if the passage has no correct answer and does not contain any question words, the model assigns high confidence to phrases that match the category the question is asking about. Because the confidence scores are not well-calibrated, this confidence is often higher than the confidence assigned to correct answer spans in different paragraphs, even when those correct spans have better contextual evidence.",
                        "page": 3,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 521.28,
                            "y1": 62.879999999999995,
                            "y2": 197.28
                        }
                    },
                    {
                        "filename": "../figure/image/996-Table2-1.png",
                        "caption": "Table 2: Results on TriviaQA web using our pipelined method.",
                        "page": 4,
                        "bbox": {
                            "x1": 326.88,
                            "x2": 503.03999999999996,
                            "y1": 62.879999999999995,
                            "y2": 135.35999999999999
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-13"
        },
        {
            "slides": {
                "0": {
                    "title": "Time Critical Events",
                    "text": [
                        "Disaster events (earthquake, flood) Urgent needs for affected people",
                        "Information gathering in real-time is the most challenging part",
                        "Relief operations Humanitarian organizations and local administration need information to help and launch response"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Artificial Intelligence for Digital Response AIDR",
                    "text": [
                        "Response time-line today Response time-line our target",
                        "Delayed decision-making Delayed crisis response Target Early decision-making Rapid crisis response"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Artificial Intelligence for Digital Response",
                    "text": [
                        "Informative Not informative Dont know or cant judge Facilitates decision makers Hurricane Irma Hurricane Hurricane California Mexico Iraq & Iran Sri Lanka Harvey Maria wildfires earthquake earthquake f loods",
                        "Small amount of labeled data and large amount of unlabeled data at the beginning of the event",
                        "Labeled data from the past event. Can we use them?",
                        "What about domain shift?"
                    ],
                    "page_nums": [
                        3,
                        4,
                        5
                    ],
                    "images": []
                },
                "3": {
                    "title": "Our Solutions Contributions",
                    "text": [
                        "How to use large amount of unlabeled data and small amount of labeled data from the same event?",
                        "How to transfer knowledge from the past events",
                        "=> Adversarial domain adaptions"
                    ],
                    "page_nums": [
                        6,
                        7
                    ],
                    "images": []
                },
                "6": {
                    "title": "Semi Supervised Learning",
                    "text": [
                        "L: number of labeled instances (x1:L, y1:L)",
                        "U: number of unlabeled instances (xL+1:L+U)",
                        "Design a classifier f: x y"
                    ],
                    "page_nums": [
                        10,
                        11
                    ],
                    "images": [
                        "figure/image/998-Figure1-1.png"
                    ]
                },
                "7": {
                    "title": "Graph based Semi Supervised Learning",
                    "text": [
                        "Nodes: Instances (labeled and unlabeled)",
                        "Edges: n x n similarity matrix",
                        "Each entry ai,j indicates a similarity between instance i and j",
                        "We construct the graph using k-nearest neighbor (k=10)",
                        "Requires n(n-1)/2 distance computation",
                        "K-d tree data structure to reduce the computational complexity",
                        "Feature Vector: taking the averaging of the word2vec vectors",
                        "Semi-Supervised component: Loss function",
                        "Learns the internal representations (embedding) by predicting a node in the graph context",
                        "Two types of context",
                        "1. Context is based on the graph to encode structural",
                        "2. Context is based on the labels to inject label information into the embeddings",
                        "{U,V} Convolution filters and dense layer parameters",
                        "{Vc,W} Parameters specific to the supervised part",
                        "{Vg,C} Parameters specific to the semi-supervised part"
                    ],
                    "page_nums": [
                        12,
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19
                    ],
                    "images": []
                },
                "10": {
                    "title": "Corpus",
                    "text": [
                        "A small part of the tweets has been annotated using crowdflower",
                        "Relevant: injured or dead people, infrastructure damage, urgent needs of affected people, donation requests",
                        "Dataset Relevant Irrelevant Train Dev Test",
                        "Nepal earthquake: 50K Queensland flood: 21K"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "11": {
                    "title": "Experiments and Results",
                    "text": [
                        "Model trained using Convolution Neural Network (CNN)",
                        "Model trained using CNN were used to automatically label unlabeled data",
                        "Instances with classifier confidence >=0.75 were used to retrain a new model",
                        "Experiments AUC P R F1",
                        "Domain Adaptation Baseline (Transfer Baseline):",
                        "Trained CNN model on source (an event) and tested on target (another event)",
                        "Source Target AUC P R F1",
                        "Combining all the components of the network",
                        "Domain Adversarial with Graph Embedding"
                    ],
                    "page_nums": [
                        25,
                        26,
                        27,
                        28,
                        29
                    ],
                    "images": []
                },
                "12": {
                    "title": "Summary",
                    "text": [
                        "We have seen how graph-embedding based semi-supervised approach can be useful for small labeled data scenario",
                        "How can we use existing data and apply domain adaptation technique",
                        "We propose how both techniques can be combined"
                    ],
                    "page_nums": [
                        30
                    ],
                    "images": []
                },
                "13": {
                    "title": "Limitation and Future Study",
                    "text": [
                        "Graph embedding is computationally expensive",
                        "Graph constructed using averaged vector from word2vec",
                        "Explored binary class problem",
                        "Convoluted feature for graph construction",
                        "Domain adaptation: labeled and unlabeled data from target"
                    ],
                    "page_nums": [
                        31
                    ],
                    "images": []
                }
            },
            "paper_title": "Domain Adaptation with Adversarial Training and Graph Embeddings",
            "paper_id": "998",
            "paper": {
                "title": "Domain Adaptation with Adversarial Training and Graph Embeddings",
                "abstract": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The application that motivates our work is the time-critical analysis of social media (Twitter) data at the sudden-onset of an event like natural or man-made disasters (Imran et al., 2015) ."
                    },
                    {
                        "id": 1,
                        "string": "In such events, affected people post timely and useful information of various types such as reports of injured or dead people, infrastructure damage, urgent needs (e.g., food, shelter, medical assistance) on these social networks."
                    },
                    {
                        "id": 2,
                        "string": "Humanitarian organizations believe timely access to this important information from social networks can help significantly and reduce both human loss and economic dam-age (Varga et al., 2013; Power et al., 2013) ."
                    },
                    {
                        "id": 3,
                        "string": "In this paper, we consider the basic task of classifying each incoming tweet during a crisis event (e.g., Earthquake) into one of the predefined classes of interest (e.g., relevant vs. nonrelevant) in real-time."
                    },
                    {
                        "id": 4,
                        "string": "Recently, deep neural networks (DNNs) have shown great performance in classification tasks in NLP and data mining."
                    },
                    {
                        "id": 5,
                        "string": "However the success of DNNs on a task depends heavily on the availability of a large labeled dataset, which is not a feasible option in our setting (i.e., classifying tweets at the onset of an Earthquake)."
                    },
                    {
                        "id": 6,
                        "string": "On the other hand, in most cases, we can have access to a good amount of labeled and abundant unlabeled data from past similar events (e.g., Floods) and possibly some unlabeled data for the current event."
                    },
                    {
                        "id": 7,
                        "string": "In such situations, we need methods that can leverage the labeled and unlabeled data in a past event (we refer to this as a source domain), and that can adapt to a new event (we refer to this as a target domain) without requiring any labeled data in the new event."
                    },
                    {
                        "id": 8,
                        "string": "In other words, we need models that can do domain adaptation to deal with the distribution drift between the domains and semi-supervised learning to leverage the unlabeled data in both domains."
                    },
                    {
                        "id": 9,
                        "string": "Most recent approaches to semi-supervised learning (Yang et al., 2016) and domain adaptation (Ganin et al., 2016) use the automatic feature learning capability of DNN models."
                    },
                    {
                        "id": 10,
                        "string": "In this paper, we extend these methods by proposing a novel model that performs domain adaptation and semi-supervised learning within a single unified deep learning framework."
                    },
                    {
                        "id": 11,
                        "string": "In this framework, the basic task-solving network (a convolutional neural network in our case) is put together with two other networks -one for semi-supervised learning and the other for domain adaptation."
                    },
                    {
                        "id": 12,
                        "string": "The semisupervised component learns internal representa-tions (features) by predicting contextual nodes in a graph that encodes similarity between labeled and unlabeled training instances."
                    },
                    {
                        "id": 13,
                        "string": "The domain adaptation is achieved by training the feature extractor (or encoder) in adversary with respect to a domain discriminator, a binary classifier that tries to distinguish the domains."
                    },
                    {
                        "id": 14,
                        "string": "The overall idea is to learn high-level abstract representation that is discriminative for the main classification task, but is invariant across the domains."
                    },
                    {
                        "id": 15,
                        "string": "We propose a stochastic gradient descent (SGD) algorithm to train the components of our model simultaneously."
                    },
                    {
                        "id": 16,
                        "string": "The evaluation of our proposed model is conducted using two Twitter datasets on scenarios where there is only unlabeled data in the target domain."
                    },
                    {
                        "id": 17,
                        "string": "Our results demonstrate the following."
                    },
                    {
                        "id": 18,
                        "string": "Our source code is available on Github 1 and the data is available on CrisisNLP 2 ."
                    },
                    {
                        "id": 19,
                        "string": "The rest of the paper is organized as follows."
                    },
                    {
                        "id": 20,
                        "string": "In Section 2, we present the proposed method, i.e., domain adaptation and semi-supervised graph embedding learning."
                    },
                    {
                        "id": 21,
                        "string": "In Section 3, we present the experimental setup and baselines."
                    },
                    {
                        "id": 22,
                        "string": "The results and analysis are presented in Section 4."
                    },
                    {
                        "id": 23,
                        "string": "In Section 5, we present the works relevant to this study."
                    },
                    {
                        "id": 24,
                        "string": "Finally, conclusions appear in Section 6."
                    },
                    {
                        "id": 25,
                        "string": "The Model We demonstrate our approach for domain adaptation with adversarial training and graph embedding on a tweet classification task to support crisis response efforts."
                    },
                    {
                        "id": 26,
                        "string": "Let D l S = {t i , y i } Ls i=1 and D u S = {t i } Us i=1 be the set of labeled and unlabeled tweets for a source crisis event S (e.g., Nepal earthquake), where y i ∈ {1, ."
                    },
                    {
                        "id": 27,
                        "string": "."
                    },
                    {
                        "id": 28,
                        "string": "."
                    },
                    {
                        "id": 29,
                        "string": ", K} is the class label for tweet t i , L s and U s are the number of labeled and unlabeled tweets for the source event, respectively."
                    },
                    {
                        "id": 30,
                        "string": "In addition, we have unlabeled tweets D u T = {t i } Ut i=1 for a target event T (e.g., Queensland flood) with U t being the number of unlabeled tweets in the target domain."
                    },
                    {
                        "id": 31,
                        "string": "Our ultimate goal is to train a cross-domain model p(y|t, θ) with parameters θ that can classify any tweet in the target event T without having any information about class labels in T ."
                    },
                    {
                        "id": 32,
                        "string": "Figure 1 shows the overall architecture of our neural model."
                    },
                    {
                        "id": 33,
                        "string": "The input to the network is a tweet t = (w 1 , ."
                    },
                    {
                        "id": 34,
                        "string": "."
                    },
                    {
                        "id": 35,
                        "string": "."
                    },
                    {
                        "id": 36,
                        "string": ", w n ) containing words that come from a finite vocabulary V defined from the training set."
                    },
                    {
                        "id": 37,
                        "string": "The first layer of the network maps each of these words into a distributed representation R d by looking up a shared embedding matrix E ∈ R |V|×d ."
                    },
                    {
                        "id": 38,
                        "string": "We initialize the embedding matrix E in our network with word embeddings that are pretrained on a large crisis dataset (Subsection 2.5)."
                    },
                    {
                        "id": 39,
                        "string": "However, embedding matrix E can also be initialize randomly."
                    },
                    {
                        "id": 40,
                        "string": "The output of the look-up layer is a matrix X ∈ R n×d , which is passed through a number of convolution and pooling layers to learn higher-level feature representations."
                    },
                    {
                        "id": 41,
                        "string": "A convolution operation applies a filter u ∈ R k.d to a window of k vectors to produce a new feature h t as h t = f (u.X t:t+k−1 ) (1) where X t:t+k−1 is the concatenation of k look-up vectors, and f is a nonlinear activation; we use rectified linear units or ReLU."
                    },
                    {
                        "id": 42,
                        "string": "We apply this filter to each possible k-length windows in X with stride size of 1 to generate a feature map h j as: h j = [h 1 , ."
                    },
                    {
                        "id": 43,
                        "string": "."
                    },
                    {
                        "id": 44,
                        "string": "."
                    },
                    {
                        "id": 45,
                        "string": ", h n+k−1 ] (2) We repeat this process N times with N different filters to get N different feature maps."
                    },
                    {
                        "id": 46,
                        "string": "We use a wide convolution (Kalchbrenner et al., 2014) , which ensures that the filters reach the entire tweet, including the boundary words."
                    },
                    {
                        "id": 47,
                        "string": "This is done by performing zero-padding, where out-ofrange (i.e., t<1 or t>n) vectors are assumed to be zero."
                    },
                    {
                        "id": 48,
                        "string": "With wide convolution, o zero-padding size and 1 stride size, each feature map contains (n + 2o − k + 1) convoluted features."
                    },
                    {
                        "id": 49,
                        "string": "After the convolution, we apply a max-pooling operation to each of the feature maps, where µ p (h j ) refers to the max operation applied to each window of p features with stride size of 1 in the feature map h i ."
                    },
                    {
                        "id": 50,
                        "string": "Intuitively, the convolution operation composes local features into higherlevel representations in the feature maps, and maxpooling extracts the most important aspects of each feature map while reducing the output dimensionality."
                    },
                    {
                        "id": 51,
                        "string": "Since each convolution-pooling operation is performed independently, the features extracted become invariant in order (i.e., where they occur in the tweet)."
                    },
                    {
                        "id": 52,
                        "string": "To incorporate order information between the pooled features, we include a fully-connected (dense) layer m = [µ p (h 1 ), · · · , µ p (h N )] (3 z = f (V m) (4) where V is the weight matrix."
                    },
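                    {
                        "id": 52.1,
                        "string": "As an illustration (a minimal sketch, not the authors' code), the shared encoder of Eqs. (1)-(4) could look as follows in PyTorch; it simplifies the pooling to a single global max-pool and the sizes are hypothetical:\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass SharedEncoder(nn.Module):\n    def __init__(self, vocab_size, d=300, n_filters=100, k=3, hidden=128):\n        super().__init__()\n        self.emb = nn.Embedding(vocab_size, d)  # look-up of the embedding matrix E\n        self.conv = nn.Conv1d(d, n_filters, k, padding=k - 1)  # wide convolution (padding reaches boundary words)\n        self.dense = nn.Linear(n_filters, hidden)  # z = f(V m), Eq. (4)\n    def forward(self, x):  # x: (batch, n) word ids\n        X = self.emb(x).transpose(1, 2)  # (batch, d, n)\n        h = F.relu(self.conv(X))  # feature maps, Eqs. (1)-(2)\n        m = F.max_pool1d(h, h.size(2)).squeeze(2)  # pooled features m, Eq. (3)\n        return F.relu(self.dense(m))  # shared representation z"
                    },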
                    {
                        "id": 53,
                        "string": "We choose a convolutional architecture for feature composition because it has shown impressive results on similar tasks in a supervised setting (Nguyen et al., 2017) ."
                    },
                    {
                        "id": 54,
                        "string": "The network at this point splits into three branches (shaded with three different colors in Figure 1 ) each of which serves a different purpose and contributes a separate loss to the overall loss of the model as defined below: L(Λ, Φ, Ω, Ψ) = L C (Λ, Φ) + λg L G (Λ, Ω) + λ d L D (Λ, Ψ) (5) where Λ = {U, V } are the convolutional filters and dense layer weights that are shared across the three branches."
                    },
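                    {
                        "id": 54.1,
                        "string": "For concreteness, a sketch of how the three losses combine per Eq. (5), assuming the individual losses are computed elsewhere and using the weights reported in Section 2.4:\n\nlambda_g, lambda_d = 1e-2, 1e-8  # weights of the semi-supervised and adversary losses\nloss = loss_c + lambda_g * loss_g + lambda_d * loss_d  # Eq. (5); loss_c, loss_g, loss_d are placeholder names"
                    },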
                    {
                        "id": 55,
                        "string": "The first component L C (Λ, Φ) is a supervised classification loss based on the labeled data in the source event."
                    },
                    {
                        "id": 56,
                        "string": "The second component L G (Λ, Ω) is a graph-based semi-supervised loss that utilizes both labeled and unlabeled data in the source and target events to induce structural similarity between training instances."
                    },
                    {
                        "id": 57,
                        "string": "The third component L D (Λ, Ω) is an adversary loss that again uses all available data in the source and target domains to induce domain invariance in the learned features."
                    },
                    {
                        "id": 58,
                        "string": "The tunable hyperparameters λ g and λ d control the relative strength of the components."
                    },
                    {
                        "id": 59,
                        "string": "Supervised Component The supervised component induces label information (e.g., relevant vs. non-relevant) directly in the network through the classification loss L C (Λ, Φ), which is computed on the labeled instances in the source event, D l S ."
                    },
                    {
                        "id": 60,
                        "string": "Specifically, this branch of the network, as shown at the top in Figure 1 , takes the shared representations z as input and pass it through a task-specific dense layer z c = f (V c z) (6) where V c is the corresponding weight matrix."
                    },
                    {
                        "id": 61,
                        "string": "The activations z c along with the activations from the semi-supervised branch z s are used for classification."
                    },
                    {
                        "id": 62,
                        "string": "More formally, the classification layer defines a Softmax p(y = k|t, θ) = exp W T k [z c ; z s ] k exp W T k [z c ; z s ] (7) where [."
                    },
                    {
                        "id": 63,
                        "string": "; .]"
                    },
                    {
                        "id": 64,
                        "string": "denotes concatenation of two column vectors, W k are the class weights, and θ = {U, V, V c , W } defines the relevant parameters for this branch of the network with Λ = {U, V } being the shared parameters and Φ = {V c , W } being the parameters specific to this branch."
                    },
                    {
                        "id": 65,
                        "string": "Once learned, we use θ for prediction on test tweets."
                    },
                    {
                        "id": 66,
                        "string": "The classification loss L C (Λ, Φ) (or L C (θ)) is defined as LC(Λ, Φ) = − 1 Ls Ls i=1 I(yi = k) log p(yi = k|ti, Λ, Φ) (8) where I(.)"
                    },
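                    {
                        "id": 66.1,
                        "string": "A minimal sketch of this supervised branch (Eqs. 6-8), assuming PyTorch; z comes from the shared encoder and z_s from the semi-supervised branch, both of size hidden:\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass SupervisedHead(nn.Module):\n    def __init__(self, hidden, n_classes=2):\n        super().__init__()\n        self.Vc = nn.Linear(hidden, hidden)  # task-specific dense layer, Eq. (6)\n        self.W = nn.Linear(2 * hidden, n_classes)  # class weights over [z_c; z_s]\n    def forward(self, z, z_s):\n        z_c = F.relu(self.Vc(z))\n        return self.W(torch.cat([z_c, z_s], dim=1))  # logits; softmax gives Eq. (7)\n\n# The classification loss L_C of Eq. (8) is then cross-entropy on labeled source tweets:\n# loss_c = F.cross_entropy(logits, y)"
                    },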
                    {
                        "id": 67,
                        "string": "is an indicator function that returns 1 when the argument is true, otherwise it returns 0."
                    },
                    {
                        "id": 68,
                        "string": "Semi-supervised Component The semi-supervised branch (shown at the middle in Figure 1 ) induces structural similarity between training instances (labeled or unlabeled) in the source and target events."
                    },
                    {
                        "id": 69,
                        "string": "We adopt the recently proposed graph-based semi-supervised deep learning framework (Yang et al., 2016) , which shows impressive gains over existing semisupervised methods on multiple datasets."
                    },
                    {
                        "id": 70,
                        "string": "In this framework, a \"similarity\" graph G first encodes relations between training instances, which is then used by the network to learn internal representations (i.e., embeddings)."
                    },
                    {
                        "id": 71,
                        "string": "Learning Graph Embeddings The semi-supervised branch takes the shared representation z as input and learns internal representations by predicting a node in the graph context of the input tweet."
                    },
                    {
                        "id": 72,
                        "string": "Following (Yang et al., 2016) , we use negative sampling to compute the loss for predicting the context node, and we sample two types of contextual nodes: (i) one is based on the graph G to encode structural information, and (ii) the second is based on the labels in D l S to incorporate label information through this branch of the network."
                    },
                    {
                        "id": 73,
                        "string": "The ratio of positive and negative samples is controlled by a random variable ρ 1 ∈ (0, 1), and the proportion of the two context types is controlled by another random variable ρ 2 ∈ (0, 1); see Algorithm 1 of (Yang et al., 2016) for details on the sampling procedure."
                    },
                    {
                        "id": 74,
                        "string": "Let (j, γ) is a tuple sampled from the distribution p(j, γ|i, D l S , G), where j is a context node of an input node i and γ ∈ {+1, −1} denotes whether it is a positive or a negative sample; γ = +1 if t i and t j are neighbors in the graph (for graph-based context) or they both have same labels (for label-based context), otherwise γ = −1."
                    },
                    {
                        "id": 75,
                        "string": "The negative log loss for context prediction L G (Λ, Ω) can be written as L G (Λ, Ω) = − 1 Ls + Us Ls+Us i=1 E (j,γ) log σ γC T j zg(i) (9) where z g (i) = f (V g z(i)) defines another dense layer (marked as Dense (z g ) in Figure 1 ) having weights V g , and C j is the weight vector associated with the context node t j ."
                    },
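                    {
                        "id": 75.1,
                        "string": "A sketch of the per-sample context-prediction loss inside Eq. (9), assuming PyTorch tensors; z_g_i stands for z_g(i) and c_j for the context embedding C_j:\n\nimport torch\nimport torch.nn.functional as F\n\ndef context_loss(z_g_i, c_j, gamma):\n    # gamma is +1 for a positive sample and -1 for a negative one\n    score = (c_j * z_g_i).sum(dim=1)  # C_j^T z_g(i)\n    return -F.logsigmoid(gamma * score).mean()  # -log sigma(gamma C_j^T z_g(i))"
                    },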
                    {
                        "id": 76,
                        "string": "Note that here Λ = {U, V } defines the shared parameters and Ω = {V g , C} defines the parameters specific to the semi-supervised branch of the network."
                    },
                    {
                        "id": 77,
                        "string": "Graph Construction Typically graphs are constructed based on a relational knowledge source, e.g., citation links in (Lu and Getoor, 2003) , or distance between instances (Zhu, 2005) ."
                    },
                    {
                        "id": 78,
                        "string": "However, we do not have access to such a relational knowledge in our setting."
                    },
                    {
                        "id": 79,
                        "string": "On the other hand, computing distance between n(n−1)/2 pairs of instances to construct the graph is also very expensive (Muja and Lowe, 2014) ."
                    },
                    {
                        "id": 80,
                        "string": "Therefore, we choose to use k-nearest neighborbased approach as it has been successfully used in other study (Steinbach et al., 2000) ."
                    },
                    {
                        "id": 81,
                        "string": "The nearest neighbor graph consists of n vertices and for each vertex, there is an edge set consisting of a subset of n instances, i.e., tweets in our training set."
                    },
                    {
                        "id": 82,
                        "string": "The edge is defined by the distance measure d(i, j) between tweets t i and t j , where the value of d represents how similar the two tweets are."
                    },
                    {
                        "id": 83,
                        "string": "We used k-d tree data structure (Bentley, 1975) to efficiently find the nearest instances."
                    },
                    {
                        "id": 84,
                        "string": "To construct the graph, we first represent each tweet by averaging the word2vec vectors of its words, and then we measure d(i, j) by computing the Euclidean distance between the vectors."
                    },
                    {
                        "id": 85,
                        "string": "The number of nearest neighbor k was set to 10."
                    },
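                    {
                        "id": 85.1,
                        "string": "A sketch of this graph construction, assuming scipy, a word2vec lookup w2v, and that every tweet has at least one in-vocabulary word:\n\nimport numpy as np\nfrom scipy.spatial import cKDTree\n\ndef build_knn_graph(tweets, w2v, k=10):\n    # represent each tweet by the average of its word2vec vectors\n    X = np.array([np.mean([w2v[w] for w in t if w in w2v], axis=0) for t in tweets])\n    tree = cKDTree(X)  # k-d tree for efficient (Euclidean) nearest-neighbor search\n    _, idx = tree.query(X, k=k + 1)  # k+1 because each point is its own nearest neighbor\n    return {i: list(nbrs[1:]) for i, nbrs in enumerate(idx)}"
                    },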
                    {
                        "id": 86,
                        "string": "The reason of averaging the word vectors is that it is computationally simpler and it captures the relevant semantic information for our task in hand."
                    },
                    {
                        "id": 87,
                        "string": "Likewise, we choose to use Euclidean distance instead of cosine for computational efficiency."
                    },
                    {
                        "id": 88,
                        "string": "Domain Adversarial Component The network described so far can learn abstract features through convolutional and dense layers that are discriminative for the classification task (relevant vs. non-relevant)."
                    },
                    {
                        "id": 89,
                        "string": "The supervised branch of the network uses labels in the source event to induce label information directly, whereas the semi-supervised branch induces similarity information between labeled and unlabeled instances."
                    },
                    {
                        "id": 90,
                        "string": "However, our goal is also to make these learned features invariant across domains or events (e.g., Nepal Earthquake vs. Queensland Flood)."
                    },
                    {
                        "id": 91,
                        "string": "We achieve this by domain adversarial training of neural networks (Ganin et al., 2016) ."
                    },
                    {
                        "id": 92,
                        "string": "We put a domain discriminator, another branch in the network (shown at the bottom in Figure 1 ) that takes the shared internal representation z as input, and tries to discriminate between the domains of the input -in our case, whether the input tweet is from D S or from D T ."
                    },
                    {
                        "id": 93,
                        "string": "The domain discriminator is defined by a sigmoid function: δ = p(d = 1|t, Λ, Ψ) = sigm(w T d z d ) (10) where d ∈ {0, 1} denotes the domain of the input tweet t, w d are the final layer weights of the discriminator, and z d = f (V d z) defines the hidden layer of the discriminator with layer weights V d ."
                    },
                    {
                        "id": 94,
                        "string": "Here Λ = {U, V } defines the shared parameters, and Ψ = {V d , w d } defines the parameters specific to the domain discriminator."
                    },
                    {
                        "id": 95,
                        "string": "We use the negative log-probability as the discrimination loss: J i (Λ, Ψ) = −d i logδ − (1 − d i ) log 1 −δ (11) We can write the overall domain adversary loss over the source and target domains as L D (Λ, Ψ) = − 1 Ls + Us Ls+Us i=1 J i (Λ, Ψ) − 1 Ut U t i=1 J i (Λ, Ψ) (12) where L s + U s and U t are the number of training instances in the source and target domains, respectively."
                    },
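                    {
                        "id": 95.1,
                        "string": "A minimal sketch of the discriminator branch (Eqs. 10-12), assuming PyTorch:\n\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass DomainDiscriminator(nn.Module):\n    def __init__(self, hidden):\n        super().__init__()\n        self.Vd = nn.Linear(hidden, hidden)  # hidden layer z_d = f(V_d z)\n        self.wd = nn.Linear(hidden, 1)  # final-layer weights w_d\n    def forward(self, z):\n        z_d = F.relu(self.Vd(z))\n        return self.wd(z_d).squeeze(1)  # logit; its sigmoid is delta-hat, Eq. (10)\n\n# J_i of Eq. (11) is binary cross-entropy between delta-hat and the domain label d:\n# loss_d = F.binary_cross_entropy_with_logits(logits, d.float())"
                    },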
                    {
                        "id": 96,
                        "string": "In adversarial training, we seek parameters (saddle point) such that θ * = argmin Λ,Φ,Ω max Ψ L(Λ, Φ, Ω, Ψ) (13) which involves a maximization with respect to Ψ and a minimization with respect to {Λ, Φ, Ω}."
                    },
                    {
                        "id": 97,
                        "string": "In other words, the updates of the shared parameters Λ = {U, V } for the discriminator work adversarially to the rest of the network, and vice versa."
                    },
                    {
                        "id": 98,
                        "string": "This is achieved by reversing the gradients of the discrimination loss L D (Λ, Ψ), when they are backpropagated to the shared layers (see Figure 1 )."
                    },
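                    {
                        "id": 98.1,
                        "string": "One common way to realize this gradient reversal (a sketch in PyTorch, not necessarily the authors' implementation):\n\nimport torch\n\nclass GradReverse(torch.autograd.Function):\n    @staticmethod\n    def forward(ctx, x, lambd):\n        ctx.lambd = lambd\n        return x.view_as(x)  # identity on the forward pass\n    @staticmethod\n    def backward(ctx, grad_output):\n        return -ctx.lambd * grad_output, None  # reversed, scaled gradient on the way back\n\ndef grad_reverse(x, lambd=1.0):\n    return GradReverse.apply(x, lambd)"
                    },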
                    {
                        "id": 99,
                        "string": "Model Training Algorithm 1 illustrates the training algorithm based on stochastic gradient descent (SGD)."
                    },
                    {
                        "id": 100,
                        "string": "We first initialize the model parameters."
                    },
                    {
                        "id": 101,
                        "string": "The word embedding matrix E is initialized with pre-trained word2vec vectors (see Subsection 2.5) and is kept fixed during training."
                    },
                    {
                        "id": 102,
                        "string": "3 Other parameters are initialized with small random numbers sampled from 3 Tuning E on our task by backpropagation increased the training time immensely (3 days compared to 5 hours on a Tesla GPU) without any significant performance gain."
                    },
                    {
                        "id": 103,
                        "string": "a uniform distribution (Bengio and Glorot, 2010) ."
                    },
                    {
                        "id": 104,
                        "string": "We use AdaDelta (Zeiler, 2012) adaptive update to update the parameters."
                    },
                    {
                        "id": 105,
                        "string": "In each iteration, we do three kinds of gradient updates to account for the three different loss components."
                    },
                    {
                        "id": 106,
                        "string": "First, we do an epoch over all the training instances updating the parameters for the semi-supervised loss, then we do an epoch over the labeled instances in the source domain, each time updating the parameters for the supervised and the domain adversary losses."
                    },
                    {
                        "id": 107,
                        "string": "Finally, we do an epoch over the unlabeled instances in the two domains to account for the domain adversary loss."
                    },
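                    {
                        "id": 107.1,
                        "string": "An illustrative outline of the three kinds of updates per iteration (a sketch; graph_step, supervised_step, and adversary_step are hypothetical helpers that each compute the corresponding loss and apply one SGD/AdaDelta update):\n\ndef train_iteration(all_batches, labeled_batches, unlabeled_batches, graph_step, supervised_step, adversary_step):\n    for b in all_batches:  # 1) semi-supervised loss over all training instances\n        graph_step(b)\n    for b in labeled_batches:  # 2) supervised and adversary losses on labeled source data\n        supervised_step(b)\n        adversary_step(b)\n    for b in unlabeled_batches:  # 3) adversary loss on unlabeled data from both domains\n        adversary_step(b)"
                    },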
                    {
                        "id": 108,
                        "string": "The main challenge in adversarial training is to balance the competing components of the network."
                    },
                    {
                        "id": 109,
                        "string": "If one component becomes smarter than the other, its loss to the shared layer becomes useless, and the training fails to converge (Arjovsky et al., 2017) ."
                    },
                    {
                        "id": 110,
                        "string": "Equivalently, if one component becomes weaker, its loss overwhelms that of the other, causing the training to fail."
                    },
                    {
                        "id": 111,
                        "string": "In our experiments, we observed the domain discriminator is weaker than the rest of the network."
                    },
                    {
                        "id": 112,
                        "string": "This could be due to the noisy nature of tweets, which makes the job for the domain discriminator harder."
                    },
                    {
                        "id": 113,
                        "string": "To balance the components, we would want the error signals from the discriminator to be fairly weak, also we would want the supervised loss to have more impact than the semi-supervised loss."
                    },
                    {
                        "id": 114,
                        "string": "In our experiments, the weight of the domain adversary loss λ d was fixed to 1e − 8, and the weight of the semi-supervised loss λ g was fixed to 1e − 2."
                    },
                    {
                        "id": 115,
                        "string": "Other sophisticated weighting schemes have been proposed recently (Ganin et al., 2016; Arjovsky et al., 2017; Metz et al., 2016) ."
                    },
                    {
                        "id": 116,
                        "string": "It would be interesting to see how our model performs using these advanced tuning methods, which we leave as a future work."
                    },
                    {
                        "id": 117,
                        "string": "Crisis Word Embedding As mentioned, we used word embeddings that are pre-trained on a crisis dataset."
                    },
                    {
                        "id": 118,
                        "string": "To train the wordembedding model, we first pre-processed tweets collected using the AIDR system  during different events occurred between 2014 and 2016."
                    },
                    {
                        "id": 119,
                        "string": "In the preprocessing step, we lowercased the tweets and removed URLs, digit, time patterns, special characters, single character, username started with the @ symbol."
                    },
                    {
                        "id": 120,
                        "string": "After preprocessing, the resulting dataset contains about 364 million tweets and about 3 billion words."
                    },
                    {
                        "id": 121,
                        "string": "There are several approaches to train word embeddings such as continuous bag-of-words (CBOW) and skip-gram models of wrod2vec (Mikolov et al., 2013) , and Glove (Pennington et al., 2014) ."
                    },
                    {
                        "id": 122,
                        "string": "For our work, we trained the CBOW model from word2vec."
                    },
                    {
                        "id": 123,
                        "string": "While training CBOW, we filtered out words with a frequency less than or equal to 5, and we used a context window size of 5 and k = 5 negative samples."
                    },
                    {
                        "id": 124,
                        "string": "The resulting embedding model contains about 2 million words with vector dimensions of 300."
                    },
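                    {
                        "id": 124.1,
                        "string": "A sketch of this embedding-training step, assuming gensim 4.x; the corpus and file name below are placeholders:\n\nfrom gensim.models import Word2Vec\n\ntweets = [['nepal', 'earthquake', 'help'], ['queensland', 'flood', 'relief']]  # placeholder for the preprocessed 364M-tweet corpus\nmodel = Word2Vec(tweets, sg=0, vector_size=300, window=5, negative=5, min_count=6)  # sg=0 selects CBOW; min_count=6 drops words with frequency <= 5\nmodel.save('crisis_cbow.model')"
                    },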
                    {
                        "id": 125,
                        "string": "Experimental Settings In this section, we describe our experimental settings -datasets used, settings of our models, compared baselines, and evaluation metrics."
                    },
                    {
                        "id": 126,
                        "string": "Datasets To conduct the experiment and evaluate our system, we used two real-world Twitter datasets collected during the 2015 Nepal earthquake (NEQ) and the 2013 Queensland floods (QFL)."
                    },
                    {
                        "id": 127,
                        "string": "These datasets are comprised of millions of tweets collected through the Twitter streaming API 4 using event-specific keywords/hashtags."
                    },
                    {
                        "id": 128,
                        "string": "To obtain the labeled examples for our task we employed paid workers from the Crowdflower 5a crowdsourcing platform."
                    },
                    {
                        "id": 129,
                        "string": "The annotation consists of two classes relevant and non-relevant."
                    },
                    {
                        "id": 130,
                        "string": "For the annotation, we randomly sampled 11,670 and 10,033 tweets from the Nepal earthquake and the Queensland floods datasets, respectively."
                    },
                    {
                        "id": 131,
                        "string": "Given a  tweet, we asked crowdsourcing workers to assign the \"relevant\" label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the \"non-relevant\" label."
                    },
                    {
                        "id": 132,
                        "string": "We split the labeled data into 60% as training, 30% as test and 10% as development."
                    },
                    {
                        "id": 133,
                        "string": "Table 1 shows the resulting datasets with class-wise distributions."
                    },
                    {
                        "id": 134,
                        "string": "Data preprocessing was performed by following the same steps used to train the word2vec model (Subsection 2.5)."
                    },
                    {
                        "id": 135,
                        "string": "In all the experiments, the classification task consists of two classes: relevant and non-relevant."
                    },
                    {
                        "id": 136,
                        "string": "Model Settings and Baselines In order to demonstrate the effectiveness of our joint learning approach, we performed a series of experiments."
                    },
                    {
                        "id": 137,
                        "string": "To understand the contribution of different network components, we performed an ablation study showing how the model performs as a semi-supervised model alone and as a domain adaptation model alone, and then we compare them with the combined model that incorporates all the components."
                    },
                    {
                        "id": 138,
                        "string": "Settings for Semi-supervised Learning As a baseline for the semi-supervised experiments, we used the self-training approach (Scudder, 1965) ."
                    },
                    {
                        "id": 139,
                        "string": "For this purpose, we first trained a supervised model using the CNN architecture (i.e., shared components followed by the supervised part in Figure 1 )."
                    },
                    {
                        "id": 140,
                        "string": "The trained model was then used to automatically label the unlabeled data."
                    },
                    {
                        "id": 141,
                        "string": "Instances with a classifier confidence score ≥ 0.75 were then used to retrain a new model."
                    },
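                    {
                        "id": 141.1,
                        "string": "A sketch of this self-training baseline, assuming a scikit-learn-style classifier exposing fit and predict_proba:\n\nimport numpy as np\n\ndef self_train(model, X_lab, y_lab, X_unlab, threshold=0.75):\n    model.fit(X_lab, y_lab)  # initial supervised model\n    proba = model.predict_proba(X_unlab)\n    conf, pseudo = proba.max(axis=1), proba.argmax(axis=1)\n    keep = conf >= threshold  # keep only confident pseudo-labels\n    X_new = np.concatenate([X_lab, X_unlab[keep]])\n    y_new = np.concatenate([y_lab, pseudo[keep]])\n    model.fit(X_new, y_new)  # retrain on the augmented training set\n    return model"
                    },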
                    {
                        "id": 142,
                        "string": "Next, we run experiments using our graphbased semi-supervised approach (i.e., shared components followed by the supervised and semisupervised parts in Figure 1) , which exploits unlabeled data."
                    },
                    {
                        "id": 143,
                        "string": "For reducing the computational cost, we randomly selected 50K unlabeled instances from the same domain."
                    },
                    {
                        "id": 144,
                        "string": "For our semi-supervised setting, one of the main goals was to understand how much labeled data is sufficient to obtain a reasonable result."
                    },
                    {
                        "id": 145,
                        "string": "Therefore, we experimented our system by incrementally adding batches of instances, such as 100, 500, 2000, 5000, and all instances from the training set."
                    },
                    {
                        "id": 146,
                        "string": "Such an understanding can help us design the model at the onset of a crisis event with sufficient amount of labeled data."
                    },
                    {
                        "id": 147,
                        "string": "To demonstrate that the semi-supervised approach outperforms the supervised baseline, we run supervised experiments using the same number of labeled instances."
                    },
                    {
                        "id": 148,
                        "string": "In the supervised setting, only z c activations in Figure 1 are used for classification."
                    },
                    {
                        "id": 149,
                        "string": "Settings for Domain Adaptation To set a baseline for the domain adaptation experiments, we train a CNN model (i.e., shared components followed by the supervised part in Figure 1 ) on one event (source) and test it on another event (target)."
                    },
                    {
                        "id": 150,
                        "string": "We call this as transfer baseline."
                    },
                    {
                        "id": 151,
                        "string": "To assess the performance of our domain adaptation technique alone, we exclude the semisupervised component from the network."
                    },
                    {
                        "id": 152,
                        "string": "We train and evaluate models with this network configuration using different source and target domains."
                    },
                    {
                        "id": 153,
                        "string": "Finally, we integrate all the components of the network as shown in Figure 1 and run domain adaptation experiments using different source and target domains."
                    },
                    {
                        "id": 154,
                        "string": "In all our domain adaptation experiments, we only use unlabeled instances from the target domain."
                    },
                    {
                        "id": 155,
                        "string": "In domain adaption literature, this is known as unsupervised adaptation."
                    },
                    {
                        "id": 156,
                        "string": "Training Settings We use 100, 150, and 200 filters each having the window size of 2, 3, and 4, respectively, and pooling length of 2, 3, and 4, respectively."
                    },
                    {
                        "id": 157,
                        "string": "We do not tune these hyperparameters in any experimental setting since the goal was to have an end-to-end comparison with the same hyperparameter setting and understand whether our approach can outperform the baselines or not."
                    },
                    {
                        "id": 158,
                        "string": "Furthermore, we do not filter out any vocabulary item in any settings."
                    },
                    {
                        "id": 159,
                        "string": "As mentioned before in Subsection 2.4, we used AdaDelta (Zeiler, 2012) to update the model parameters in each SGD step."
                    },
                    {
                        "id": 160,
                        "string": "The learning rate was set to 0.1 when optimizing on the classification loss and to 0.001 when optimizing on the semisupervised loss."
                    },
                    {
                        "id": 161,
                        "string": "The learning rate for domain adversarial training was set to 1.0."
                    },
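                    {
                        "id": 161.1,
                        "string": "A sketch of the corresponding optimizer setup, assuming PyTorch and a model object; for simplicity each optimizer is given the full parameter set:\n\nimport torch\n\ndef make_optimizers(model):\n    return {\n        'supervised': torch.optim.Adadelta(model.parameters(), lr=0.1),\n        'graph': torch.optim.Adadelta(model.parameters(), lr=0.001),\n        'adversarial': torch.optim.Adadelta(model.parameters(), lr=1.0),\n    }"
                    },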
                    {
                        "id": 162,
                        "string": "The maximum number of epochs was set to 200, and dropout rate of 0.02 was used to avoid overfitting (Srivastava et al., 2014) ."
                    },
                    {
                        "id": 163,
                        "string": "We used validation-based early stopping using the F-measure with a patience of 25, Table 2 : Results using supervised, self-training, and graph-based semi-supervised approaches in terms of Weighted average AUC, precision (P), recall (R) and F-measure (F1)."
                    },
                    {
                        "id": 164,
                        "string": "i.e., we stop training if the score does not increase for 25 consecutive epochs."
                    },
                    {
                        "id": 165,
                        "string": "Evaluation Metrics To measure the performance of the trained models using different approaches described above, we use weighted average precision, recall, F-measure, and Area Under ROC-Curve (AUC), which are standard evaluation measures in the NLP and machine learning communities."
                    },
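                    {
                        "id": 165.1,
                        "string": "A sketch of computing these metrics, assuming scikit-learn; y_score holds the positive-class probabilities:\n\nfrom sklearn.metrics import precision_recall_fscore_support, roc_auc_score\n\ndef evaluate(y_true, y_pred, y_score):\n    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='weighted')  # weighted accounts for class imbalance\n    auc = roc_auc_score(y_true, y_score)\n    return auc, p, r, f1"
                    },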
                    {
                        "id": 166,
                        "string": "The rationale behind choosing the weighted metric is that it takes into account the class imbalance problem."
                    },
                    {
                        "id": 167,
                        "string": "Results and Discussion In this section, we present the experimental results and discuss our main findings."
                    },
                    {
                        "id": 168,
                        "string": "Semi-supervised Learning In Table 2 , we present the results obtained from the supervised, self-training based semi-supervised, and our graph-based semi-supervised experiments for the both datasets."
                    },
                    {
                        "id": 169,
                        "string": "It can be clearly observed that the graph-based semi-supervised approach outperforms the two baselines -supervised and self-training based semi-supervised."
                    },
                    {
                        "id": 170,
                        "string": "Specifically, the graph-based approach shows 4% to 13% absolute improvements in terms of F1 scores for the Nepal and Queensland datasets, respectively."
                    },
                    {
                        "id": 171,
                        "string": "To determine how the semi-supervised approach performs in the early hours of an event when only fewer labeled instances are available, we mimic a batch-wise (not to be confused with minibatch in SGD) learning setting."
                    },
                    {
                        "id": 172,
                        "string": "In Table 3 , we present the results using different batch sizes -100, 500, 1,000, 2,000, and all labels."
                    },
                    {
                        "id": 173,
                        "string": "From the results, we observe that models' performance improve as we include more labeled data Table 3 : Weighted average F-measure for the graph-based semi-supervised settings using different batch sizes."
                    },
                    {
                        "id": 174,
                        "string": "L refers to labeled data, U refers to unlabeled data, All L refers to all labeled instances for that particular dataset."
                    },
                    {
                        "id": 175,
                        "string": "-from 43.63 to 60.89 for NEQ and from 48.97 to 80.16 for QFL in the case of labeled only (L)."
                    },
                    {
                        "id": 176,
                        "string": "When we compare supervised vs. semi-supervised (L vs. L+U), we observe significant improvements in F1 scores for the semi-supervised model for all batches over the two datasets."
                    },
                    {
                        "id": 177,
                        "string": "As we include unlabeled instances with labeled instances from the same event, performance significantly improves in each experimental setting giving 5% to 26% absolute improvements over the supervised models."
                    },
                    {
                        "id": 178,
                        "string": "These improvements demonstrate the effectiveness of our approach."
                    },
                    {
                        "id": 179,
                        "string": "We also notice that our semi-supervised approach can perform above 90% depending on the event."
                    },
                    {
                        "id": 180,
                        "string": "Specifically, major improvements are observed from batch size 100 to 1,000, however, after that the performance improvements are comparatively minor."
                    },
                    {
                        "id": 181,
                        "string": "The results obtained using batch sizes 500 and 1,000 are reasonably in the acceptable range when labeled and unlabeled instances are combined (i.e., L+50kU for Nepal and L+∼21kU for Queensland), which is also a reasonable number of training examples to obtain at the onset of an event."
                    },
                    {
                        "id": 182,
                        "string": "Domain Adaptation In  The results with domain adversarial training show improvements across both events -from 1.8% to 4.1% absolute gains in F1."
                    },
                    {
                        "id": 183,
                        "string": "These results attest that adversarial training is an effective approach to induce domain invariant features in the internal representation as shown previously by Ganin et al."
                    },
                    {
                        "id": 184,
                        "string": "(2016) ."
                    },
                    {
                        "id": 185,
                        "string": "Finally, when we do both semi-supervised learning and unsupervised domain adaptation, we get further improvements in F1 scores ranging from 5% to 7% absolute gains."
                    },
                    {
                        "id": 186,
                        "string": "From these improvements, we can conclude that domain adaptation with adversarial training along with graphbased semi-supervised learning is an effective method to leverage unlabeled and labeled data from a different domain."
                    },
                    {
                        "id": 187,
                        "string": "Note that for our domain adaptation methods, we only use unlabeled data from the target domain."
                    },
                    {
                        "id": 188,
                        "string": "Hence, we foresee future improvements of this approach by utilizing a small amount of target domain labeled data."
                    },
                    {
                        "id": 189,
                        "string": "Related Work Two lines of research are directly related to our work: (i) semi-supervised learning and (ii) domain adaptation."
                    },
                    {
                        "id": 190,
                        "string": "Several models have been proposed for semi-supervised learning."
                    },
                    {
                        "id": 191,
                        "string": "The earliest approach is self-training (Scudder, 1965) , in which a trained model is first used to label unlabeled data instances followed by the model retraining with the most confident predicted labeled instances."
                    },
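For readers unfamiliar with the procedure, a minimal self-training loop looks roughly as follows; the classifier, confidence threshold, and round count are illustrative assumptions, not the baseline's exact configuration.

```python
# Minimal self-training sketch (Scudder, 1965-style). Inputs are
# assumed to be numpy arrays; the model choice is a placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold   # most confident predictions
        if not keep.any():
            break
        # Move confidently self-labeled instances into the labeled pool.
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, model.classes_[proba[keep].argmax(axis=1)]])
        X_unlab = X_unlab[~keep]
    return model
```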
                    {
                        "id": 192,
                        "string": "The co-training (Mitchell, 1999) approach assumes that features can be split into two sets and each subset is then used to train a classifier with an assumption that the two sets are conditionally independent."
                    },
                    {
                        "id": 193,
                        "string": "Then each classifier classifies the unlabeled data, and then most confident data instances are used to re-train the other classifier, this process repeats multiple times."
                    },
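A simplified sketch of that loop, assuming two pre-split feature views and scikit-learn classifiers (the view split, classifier choice, and k are illustrative, and both labeled pools receive the selected instances):

```python
# Simplified co-training sketch over two feature views of the same data.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1_lab, X2_lab, y_lab, X1_un, X2_un, rounds=3, k=5):
    c1, c2 = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        c1.fit(X1_lab, y_lab)
        c2.fit(X2_lab, y_lab)
        for clf in (c1, c2):
            if len(X1_un) == 0:
                break
            # Each classifier labels the pool from its own view; its k
            # most confident picks are added to the shared labeled set.
            proba = clf.predict_proba(X1_un if clf is c1 else X2_un)
            top = np.argsort(proba.max(axis=1))[-k:]
            y_new = clf.classes_[proba[top].argmax(axis=1)]
            X1_lab = np.vstack([X1_lab, X1_un[top]])
            X2_lab = np.vstack([X2_lab, X2_un[top]])
            y_lab = np.concatenate([y_lab, y_new])
            mask = np.ones(len(X1_un), dtype=bool)
            mask[top] = False
            X1_un, X2_un = X1_un[mask], X2_un[mask]
    return c1, c2
```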
                    {
                        "id": 194,
                        "string": "In the graph-based semi-supervised approach, nodes in a graph represent labeled and unlabeled instances and edge weights represent the similarity between them."
                    },
                    {
                        "id": 195,
                        "string": "The structural information encoded in the graph is then used to regularize a model (Zhu, 2005) ."
                    },
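One classical form of this regularization, sketched here with toy numbers rather than the paper's exact objective, is the graph-Laplacian penalty, which grows when strongly connected nodes receive different predictions:

```python
# Sketch of graph Laplacian regularization. For symmetric weights W,
# f^T L f = 1/2 * sum_ij w_ij * (f_i - f_j)^2, so the penalty encourages
# similar predictions on similar (strongly connected) nodes.
import numpy as np

W = np.array([[0.0, 1.0, 0.0],        # edge weights (similarities), toy values
              [1.0, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
f = np.array([[0.9], [0.8], [0.1]])   # model outputs, one per node

D = np.diag(W.sum(axis=1))            # degree matrix
L = D - W                             # unnormalized graph Laplacian
reg = (f.T @ L @ f).item()            # scalar regularization term
print(reg)                            # 1*(0.1)^2 + 0.5*(0.7)^2 = 0.255
```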
                    {
                        "id": 196,
                        "string": "There are two paradigms in semi-supervised learning: 1) inductive -learning a function with which predictions can be made on unobserved instances, 2) transductive -no explicit function is learned and predictions can only be made on observed instances."
                    },
                    {
                        "id": 197,
                        "string": "As mentioned before, inductive semi-supervised learning is preferable over the transductive approach since it avoids building the graph each time it needs to infer the labels for the unlabeled instances."
                    },
                    {
                        "id": 198,
                        "string": "In our work, we use a graph-based inductive deep learning approach proposed by Yang et al."
                    },
                    {
                        "id": 199,
                        "string": "(2016) to learn features in a deep learning model by predicting contextual (i.e., neighboring) nodes in the graph."
                    },
                    {
                        "id": 200,
                        "string": "However, our approach is different from Yang et al."
                    },
                    {
                        "id": 201,
                        "string": "(2016) in several ways."
                    },
                    {
                        "id": 202,
                        "string": "First, we construct the graph by computing the distance between tweets based on word embeddings."
                    },
                    {
                        "id": 203,
                        "string": "Second, instead of using count-based features, we use a convolutional neural network (CNN) to compose high-level features from the distributed representation of the words in a tweet."
                    },
                    {
                        "id": 204,
                        "string": "Finally, for context prediction, instead of performing a random walk, we select nodes based on their similarity in the graph."
                    },
                    {
                        "id": 205,
                        "string": "Similar similarity-based graph has shown impressive results in learning sentence representations (Saha et al., 2017) ."
                    },
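A rough sketch of such a similarity-based graph construction, under simplifying assumptions (tweets embedded by averaging word vectors rather than the paper's CNN composition; the function names and k are illustrative):

```python
# Sketch: build a similarity graph over tweets by (i) embedding each
# tweet as the mean of its word vectors and (ii) linking each node to
# its k most similar tweets by cosine similarity.
import numpy as np

def embed(tweet_tokens, word_vectors, dim=300):
    vecs = [word_vectors[w] for w in tweet_tokens if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def knn_graph(embeddings, k=10):
    X = np.asarray(embeddings, dtype=float)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    sims = X @ X.T                    # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)   # exclude self-edges
    # Each tweet's "context" nodes are its k nearest neighbours.
    return {i: list(np.argsort(row)[-k:][::-1]) for i, row in enumerate(sims)}
```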
                    {
                        "id": 206,
                        "string": "In the literature, the proposed approaches for domain adaptation include supervised, semisupervised and unsupervised."
                    },
                    {
                        "id": 207,
                        "string": "It also varies from linear kernelized approach (Blitzer et al., 2006) to non-linear deep neural network techniques (Glorot et al., 2011; Ganin et al., 2016) ."
                    },
                    {
                        "id": 208,
                        "string": "One direction of research is to focus on feature space distribution matching by reweighting the samples from the source domain (Gong et al., 2013) to map source into target."
                    },
                    {
                        "id": 209,
                        "string": "The overall idea is to learn a good feature representation that is invariant across domains."
                    },
                    {
                        "id": 210,
                        "string": "In the deep learning paradigm, Glorot et al."
                    },
                    {
                        "id": 211,
                        "string": "(Glorot et al., 2011) used Stacked Denoising Auto-Encoders (SDAs) for domain adaptation."
                    },
                    {
                        "id": 212,
                        "string": "SDAs learn a robust feature representation, which is artificially corrupted with small Gaussian noise."
                    },
                    {
                        "id": 213,
                        "string": "Adversarial training of neural networks has shown big impact recently, especially in areas such as computer vision, where generative unsupervised models have proved capable of synthesizing new images (Goodfellow et al., 2014; Radford et al., 2015; Makhzani et al., 2015) ."
                    },
                    {
                        "id": 214,
                        "string": "Ganin et al."
                    },
                    {
                        "id": 215,
                        "string": "(2016) proposed domain adversarial neural networks (DANN) to learn discriminative but at the same time domain-invariant representations, with domain adaptation as a target."
                    },
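The core of DANN is a gradient reversal layer between the shared features and the domain classifier; a standard PyTorch sketch of that layer (not the paper's code) is:

```python
# Sketch of the gradient reversal layer at the heart of DANN
# (Ganin et al., 2016); lambda_ scales the reversed gradient.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)              # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient's sign so the feature extractor is trained to
        # *confuse* the domain classifier, inducing domain-invariant features.
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)
```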
                    {
                        "id": 216,
                        "string": "We extend this work by combining with semi-supervised graph embedding for unsupervised domain adaptation."
                    },
                    {
                        "id": 217,
                        "string": "In a recent work, Kipf and Welling (2016) present CNN applied directly on graph-structured datasets -citation networks and on a knowledge graph dataset."
                    },
                    {
                        "id": 218,
                        "string": "Their study demonstrate that graph convolution network for semi-supervised classification performs better compared to other graph based approaches."
                    },
                    {
                        "id": 219,
                        "string": "Conclusions In this paper, we presented a deep learning framework that performs domain adaptation with adversarial training and graph-based semi-supervised learning to leverage labeled and unlabeled data from related events."
                    },
                    {
                        "id": 220,
                        "string": "We use a convolutional neural network to compose high-level representation from the input, which is then passed to three components that perform supervised training, semisupervised learning and domain adversarial training."
                    },
                    {
                        "id": 221,
                        "string": "For domain adaptation, we considered a scenario, where we have only unlabeled data in the target event."
                    },
                    {
                        "id": 222,
                        "string": "Our evaluation on two crisis-related tweet datasets demonstrates that by combining domain adversarial training with semi-supervised learning, our model gives significant improvements over their respective baselines."
                    },
                    {
                        "id": 223,
                        "string": "We have also presented results of batch-wise incremental training of the graph-based semi-supervised approach and show approximation regarding the number of labeled examples required to get an acceptable performance at the onset of an event."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 24
                    },
                    {
                        "section": "The Model",
                        "n": "2",
                        "start": 25,
                        "end": 58
                    },
                    {
                        "section": "Supervised Component",
                        "n": "2.1",
                        "start": 59,
                        "end": 67
                    },
                    {
                        "section": "Semi-supervised Component",
                        "n": "2.2",
                        "start": 68,
                        "end": 70
                    },
                    {
                        "section": "Learning Graph Embeddings",
                        "n": "2.2.1",
                        "start": 71,
                        "end": 76
                    },
                    {
                        "section": "Graph Construction",
                        "n": "2.2.2",
                        "start": 77,
                        "end": 87
                    },
                    {
                        "section": "Domain Adversarial Component",
                        "n": "2.3",
                        "start": 88,
                        "end": 98
                    },
                    {
                        "section": "Model Training",
                        "n": "2.4",
                        "start": 99,
                        "end": 116
                    },
                    {
                        "section": "Crisis Word Embedding",
                        "n": "2.5",
                        "start": 117,
                        "end": 123
                    },
                    {
                        "section": "Experimental Settings",
                        "n": "3",
                        "start": 124,
                        "end": 125
                    },
                    {
                        "section": "Datasets",
                        "n": "3.1",
                        "start": 126,
                        "end": 134
                    },
                    {
                        "section": "Model Settings and Baselines",
                        "n": "3.2",
                        "start": 135,
                        "end": 137
                    },
                    {
                        "section": "Settings for Semi-supervised Learning",
                        "n": "3.2.1",
                        "start": 138,
                        "end": 148
                    },
                    {
                        "section": "Settings for Domain Adaptation",
                        "n": "3.2.2",
                        "start": 149,
                        "end": 155
                    },
                    {
                        "section": "Training Settings",
                        "n": "3.2.3",
                        "start": 156,
                        "end": 163
                    },
                    {
                        "section": "Evaluation Metrics",
                        "n": "3.2.4",
                        "start": 164,
                        "end": 166
                    },
                    {
                        "section": "Results and Discussion",
                        "n": "4",
                        "start": 167,
                        "end": 167
                    },
                    {
                        "section": "Semi-supervised Learning",
                        "n": "4.1",
                        "start": 168,
                        "end": 181
                    },
                    {
                        "section": "Domain Adaptation",
                        "n": "4.2",
                        "start": 182,
                        "end": 188
                    },
                    {
                        "section": "Related Work",
                        "n": "5",
                        "start": 189,
                        "end": 218
                    },
                    {
                        "section": "Conclusions",
                        "n": "6",
                        "start": 219,
                        "end": 223
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/998-Figure1-1.png",
                        "caption": "Figure 1: The system architecture of the domain adversarial network with graph-based semi-supervised learning. The shared components part is shared by supervised, semi-supervised and domain classifier.",
                        "page": 2,
                        "bbox": {
                            "x1": 142.56,
                            "x2": 452.64,
                            "y1": 65.75999999999999,
                            "y2": 249.12
                        }
                    },
                    {
                        "filename": "../figure/image/998-Table1-1.png",
                        "caption": "Table 1: Distribution of labeled datasets for Nepal earthquake (NEQ) and Queensland flood (QFL).",
                        "page": 5,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 524.16,
                            "y1": 63.839999999999996,
                            "y2": 108.0
                        }
                    },
                    {
                        "filename": "../figure/image/998-Table4-1.png",
                        "caption": "Table 4: Domain adaptation experimental results. Weighted average AUC, precision (P), recall (R) and F-measure (F1).",
                        "page": 7,
                        "bbox": {
                            "x1": 316.8,
                            "x2": 516.0,
                            "y1": 88.8,
                            "y2": 269.28
                        }
                    },
                    {
                        "filename": "../figure/image/998-Table3-1.png",
                        "caption": "Table 3: Weighted average F-measure for the graph-based semi-supervised settings using different batch sizes. L refers to labeled data, U refers to unlabeled data, All L refers to all labeled instances for that particular dataset.",
                        "page": 7,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 273.12,
                            "y1": 131.51999999999998,
                            "y2": 161.28
                        }
                    },
                    {
                        "filename": "../figure/image/998-Table2-1.png",
                        "caption": "Table 2: Results using supervised, self-training, and graph-based semi-supervised approaches in terms of Weighted average AUC, precision (P), recall (R) and F-measure (F1).",
                        "page": 6,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 525.12,
                            "y1": 143.51999999999998,
                            "y2": 185.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-14"
        },
        {
            "slides": {
                "2": {
                    "title": "Comparable Corpora",
                    "text": [
                        "Problem No large collections of comparable texts for all domains and language pairs exist",
                        "Objective To extract high-quality comparable corpora on specific domains",
                        "Pilot language pair EnglishSpanish",
                        "Pilot domains Science, Computer Science, Sports",
                        "Currently experimenting on more than 700 domains and 10 languages"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "3": {
                    "title": "Comparable Corpora Characteristic Vocabulary",
                    "text": [
                        "Retrieve every article associated to the top category of the domain",
                        "Merge the articles contents and apply standard and ad-hoc pre-processing",
                        "Select the top-k tf-sorted tokens as the characteristic vocabulary",
                        "(we consider 10% of the tokens)",
                        "Articles Vocabulary en es en es"
                    ],
                    "page_nums": [
                        15,
                        16,
                        17,
                        18
                    ],
                    "images": []
                },
                "4": {
                    "title": "Comparable Corpora Graph exploration",
                    "text": [
                        "Slice of the Spanish Wikipedia category graph departing from categories",
                        "Sport and Science (as in Spring 2015)",
                        "Scientific Sport Science disciplines",
                        "Mountain Earth Sports sports sciencies",
                        "Geology Mountains Mountaineering Geology by country",
                        "Mountains by country Mountains of Andorra Mountain ran- ges of Spain Geology of Spain",
                        "Mountains of the Pyrenees",
                        "Perform a breadth-first search departing from the root category",
                        "Visit nodes only once to avoid loops and repeating traversed paths",
                        "Stop at the level when most categories do not belong to the domain",
                        "Heuristic A category belongs to the domain if its title contains at least one term from the characteristic vocabulary",
                        "Explore until a minimum percentage of the categories in a tree level belong to the domain",
                        "Category pato in Spanish -literally \"duck\"- refers to a sport rather than an animal!!!",
                        "Article pairs selected according to two criteria: 50% and 60%",
                        "Articles Distance from the root",
                        "en-es en-es en es en es"
                    ],
                    "page_nums": [
                        19,
                        20,
                        21,
                        22,
                        23
                    ],
                    "images": [
                        "figure/image/1004-Table3-1.png",
                        "figure/image/1004-Figure1-1.png"
                    ]
                },
                "5": {
                    "title": "Parallelisation Similarity Models",
                    "text": [
                        "Character 3-grams (cosine) [McNamee and Mayfield, 2004]",
                        "Translated word 1-grams in both directions (cosine)",
                        "Length factor [Pouliquen et al., 2003]",
                        "Probable lengths of translations of d"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "6": {
                    "title": "Parallelisation Corpus for Preliminary Evaluation",
                    "text": [
                        "30 article pairs (10 per domain)",
                        "Annotated at sentence level",
                        "Three classes: parallel, comparable, and other",
                        "Each pair was annotated by 2 volunteers mean Cohens"
                    ],
                    "page_nums": [
                        26
                    ],
                    "images": []
                },
                "7": {
                    "title": "Parallelisation Threshold Definition",
                    "text": [
                        "c3g cog monoen monoes len",
                        "S Slen S F1 S F1len"
                    ],
                    "page_nums": [
                        27,
                        28
                    ],
                    "images": [
                        "figure/image/1004-Table5-1.png"
                    ]
                },
                "9": {
                    "title": "Impact Corpora",
                    "text": [
                        "in domain out of domain",
                        "Generation of the Wikipedia dev and test sets",
                        "Select only sentences starting with a letter and longer than three tokens",
                        "Compute the perplexity of each sentence pair (with respect to a",
                        "Sort the pairs according to similarity and perplexity",
                        "Manually select the first k parallel sentences"
                    ],
                    "page_nums": [
                        31,
                        32
                    ],
                    "images": []
                },
                "10": {
                    "title": "Impact Corpora Statistics",
                    "text": [
                        "CS Sc Sp All"
                    ],
                    "page_nums": [
                        33
                    ],
                    "images": [
                        "figure/image/1004-Table7-1.png",
                        "figure/image/1004-Table9-1.png"
                    ]
                },
                "11": {
                    "title": "Impact Phrase based SMT System",
                    "text": [
                        "Language model 5-gram interpolated Kneser-Ney discounting, SRILM",
                        "Translation model Moses package",
                        "Weights optimization MERT against BLEU"
                    ],
                    "page_nums": [
                        34
                    ],
                    "images": []
                },
                "12": {
                    "title": "Impact Experiments definition",
                    "text": [
                        "Out of domain Training Wikipedia and Europarl",
                        "Test Wikipedia (+Gnome for CS)"
                    ],
                    "page_nums": [
                        35
                    ],
                    "images": []
                },
                "13": {
                    "title": "Impact Results on Wikipedia in domain",
                    "text": [
                        "CS Sc Sp Un"
                    ],
                    "page_nums": [
                        36
                    ],
                    "images": [
                        "figure/image/1004-Table11-1.png",
                        "figure/image/1004-Table8-1.png",
                        "figure/image/1004-Table12-1.png"
                    ]
                },
                "15": {
                    "title": "Impact Translation Instances",
                    "text": [
                        "Source All internet packets have a source IP address and a destination",
                        "EP Todos los paquetes de internet tienen un origen direccion IP y destino direccion IP.",
                        "EP+union-CS Todos los paquetes de internet tienen una direccion IP de origen y una direccion IP de destino.",
                        "Awareness of terms (possible overfitting?)",
                        "Source Attack of the Killer Tomatoes is a 2D platform video game developed by Imagineering and released in 1991 for the NES.",
                        "EP el ataque de los tomates es un asesino 2D plataforma video-juego desarrollados por Imagineering y liberados en",
                        "Reference Attack of the Killer Tomatoes es un videojuego de plataformas en 2D desarrollado por Imagineering y lanzado en 1991 para el NES.",
                        "Source Fractal compression is a lossy compression method for digital images, based on fractals.",
                        "EP Fractal compresion es un metodo para lossy compresion digital imagenes , basada en fractals.",
                        "EP+union-CS La compresion fractal es un metodo de compresion con perdida para imagenes digitales, basado en fractales."
                    ],
                    "page_nums": [
                        38,
                        39,
                        40
                    ],
                    "images": []
                },
                "16": {
                    "title": "Impact Results on News out of domain",
                    "text": [
                        "CS Sc Sp Un"
                    ],
                    "page_nums": [
                        41
                    ],
                    "images": [
                        "figure/image/1004-Table11-1.png",
                        "figure/image/1004-Table12-1.png",
                        "figure/image/1004-Table9-1.png"
                    ]
                },
                "17": {
                    "title": "Final Remarks",
                    "text": [
                        "A simple model to extract domain-specific comparable corpora from",
                        "The domain-specific corpora showed to be useful to feed SMT systems, but other tasks are possible",
                        "We are currently comparing our model against an IR-based system",
                        "The platform currently operates in more language pairs, including",
                        "French, Catalan, German, and Arabic; but it can operate in any language and domain",
                        "The prototype is coded in Java (and depends on JWPL). We plan to release it in short!"
                    ],
                    "page_nums": [
                        43,
                        44,
                        45
                    ],
                    "images": []
                }
            },
            "paper_title": "A Factory of Comparable Corpora from Wikipedia",
            "paper_id": "1004",
            "paper": {
                "title": "A Factory of Comparable Corpora from Wikipedia",
                "abstract": "Multiple approaches to grab comparable data from the Web have been developed up to date. Nevertheless, coming out with a high-quality comparable corpus of a specific topic is not straightforward. We present a model for the automatic extraction of comparable texts in multiple languages and on specific topics from Wikipedia. In order to prove the value of the model, we automatically extract parallel sentences from the comparable collections and use them to train statistical machine translation engines for specific domains. Our experiments on the English-Spanish pair in the domains of Computer Science, Science, and Sports show that our in-domain translator performs significantly better than a generic one when translating in-domain Wikipedia articles. Moreover, we show that these corpora can help when translating out-of-domain texts.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Multilingual corpora with different levels of comparability are useful for a range of natural language processing (NLP) tasks."
                    },
                    {
                        "id": 1,
                        "string": "Comparable corpora were first used for extracting parallel lexicons (Rapp, 1995; Fung, 1995) ."
                    },
                    {
                        "id": 2,
                        "string": "Later they were used for feeding statistical machine translation (SMT) systems (Uszkoreit et al., 2010) and in multilingual retrieval models (Schönhofen et al., 2007; Potthast et al., 2008) ."
                    },
                    {
                        "id": 3,
                        "string": "SMT systems estimate the statistical models from bilingual texts (Koehn, 2010) ."
                    },
                    {
                        "id": 4,
                        "string": "Since only the words that appear in the corpus can be translated, having a corpus of the right domain is important to have high coverage."
                    },
                    {
                        "id": 5,
                        "string": "However, it is evident that no large collections of parallel texts for all domains and language pairs exist."
                    },
                    {
                        "id": 6,
                        "string": "In some cases, only general-domain parallel corpora are available; in some others there are no parallel resources at all."
                    },
                    {
                        "id": 7,
                        "string": "One of the main sources of parallel data is the Web: websites in multiple languages are crawled and contents retrieved to obtain multilingual data."
                    },
                    {
                        "id": 8,
                        "string": "Wikipedia, an on-line community-curated encyclopaedia with editions in multiple languages, has been used as a source of data for these purposesfor instance, (Adafre and de Rijke, 2006; Potthast et al., 2008; Otero and López, 2010; Plamada and Volk, 2012) ."
                    },
                    {
                        "id": 9,
                        "string": "Due to its encyclopaedic nature, editors aim at organising its content within a dense taxonomy of categories."
                    },
                    {
                        "id": 10,
                        "string": "1 Such a taxonomy can be exploited to extract comparable and parallel corpora on specific topics and knowledge domains."
                    },
                    {
                        "id": 11,
                        "string": "This allows to study how different topics are analysed in different languages, extract multilingual lexicons, or train specialised machine translation systems, just to mention some instances."
                    },
                    {
                        "id": 12,
                        "string": "Nevertheless, the process is not straightforward."
                    },
                    {
                        "id": 13,
                        "string": "The community-generated nature of the Wikipedia has produced a reasonably good -yet chaotic-taxonomy in which categories are linked to each other at will, even if sometimes no relationship among them exists, and the borders dividing different areas are far from being clearly defined."
                    },
                    {
                        "id": 14,
                        "string": "The rest of the paper is distributed as follows."
                    },
                    {
                        "id": 15,
                        "string": "We briefly overview the definition of comparability levels in the literature and show the difficulties inherent to extracting comparable corpora from Wikipedia (Section 2)."
                    },
                    {
                        "id": 16,
                        "string": "We propose a simple and effective platform for the extraction of comparable corpora from Wikipedia (Section 3)."
                    },
                    {
                        "id": 17,
                        "string": "We describe a simple model for the extraction of parallel sentences from comparable corpora (Section 4) ."
                    },
                    {
                        "id": 18,
                        "string": "Experimental results are reported on each of these sub-tasks for three domains using the English and Spanish Wikipedia editions."
                    },
                    {
                        "id": 19,
                        "string": "We present an application-oriented evaluation of the comparable corpora by studying the impact of the extracted parallel sentences on a statistical machine translation system (Section 5)."
                    },
                    {
                        "id": 20,
                        "string": "Finally, we draw conclusions and outline ongoing work (Section 6)."
                    },
                    {
                        "id": 21,
                        "string": "Background Comparability in multilingual corpora is a fuzzy concept that has received alternative definitions without reaching an overall consensus (Rapp, 1995; Eagles Document Eag-Tcwg-Ctyp, 1996; Fung, 1998; Fung and Cheung, 2004; Wu and Fung, 2005; McEnery and Xiao, 2007; Sharoff et al., 2013) ."
                    },
                    {
                        "id": 22,
                        "string": "Ideally, a comparable corpus should contain texts in multiple languages which are similar in terms of form and content."
                    },
                    {
                        "id": 23,
                        "string": "Regarding content, they should observe similar structure, function, and a long list of characteristics: register, field, tenor, mode, time, and dialect (Maia, 2003) ."
                    },
                    {
                        "id": 24,
                        "string": "Nevertheless, finding these characteristics in real-life data collections is virtually impossible."
                    },
                    {
                        "id": 25,
                        "string": "Therefore, we attach to the following simpler four-class classification (Skadiņa et al., 2010) : (i) Parallel texts are true and accurate translations or approximate translations with minor languagespecific variations."
                    },
                    {
                        "id": 26,
                        "string": "(ii) Strongly comparable texts are closely related texts reporting the same event or describing the same subject."
                    },
                    {
                        "id": 27,
                        "string": "(iii) Weakly comparable texts include texts in the same narrow subject domain and genre, but describing different events, as well as texts within the same broader domain and genre, but varying in sub-domains and specific genres."
                    },
                    {
                        "id": 28,
                        "string": "(iv) Non-comparable texts are pairs of texts drawn at random from a pair of very large collections of texts in two or more languages."
                    },
                    {
                        "id": 29,
                        "string": "Wikipedia is a particularly suitable source of multilingual text with different levels of comparability, given that it covers a large amount of languages and topics."
                    },
                    {
                        "id": 30,
                        "string": "2 Articles can be connected via interlanguage links (i.e., a link from a page in one Wikipedia language to an equivalent page in another language)."
                    },
                    {
                        "id": 31,
                        "string": "Although there are some missing links and an article can be linked by two or more articles from the same language (Hecht and Gergle, 2010) , the number of available links allows to exploit the multilinguality of Wikipedia."
                    },
                    {
                        "id": 32,
                        "string": "Still, extracting a comparable corpus on a specific domain from Wikipedia is not so straightforward."
                    },
                    {
                        "id": 33,
                        "string": "One can take advantage of the usergenerated categories associated to most articles."
                    },
                    {
                        "id": 34,
                        "string": "Ideally, the categories and sub-categories would compose a hierarchically organized taxonomy, e.g., in the form of a category tree."
                    },
                    {
                        "id": 35,
                        "string": "Nevertheless, 2 Wikipedia contains 288 language editions out of which 277 are active and 12 have more than 1M articles at the time of writing, June 2015 (http://en.wikipedia.org/ wiki/List_of_Wikipedias)."
                    },
                    {
                        "id": 36,
                        "string": "Sport Sports Mountain sports  the categories in Wikipedia compose a denselyconnected graph with highly overlapping categories, cycles, etc."
                    },
                    {
                        "id": 37,
                        "string": "As they are manually-crafted, the categories are somehow arbitrary and, among other consequences, the potential categorisation of articles does not accomplish with the properties for representing the desirable -trusty enoughcategorisation of articles from different domains."
                    },
                    {
                        "id": 38,
                        "string": "Moreover, many articles are not associated to the categories they should belong to and there is a phenomenon of over-categorization."
                    },
                    {
                        "id": 39,
                        "string": "3 Figure 1 is an example of the complexity of Wikipedia's category graph topology."
                    },
                    {
                        "id": 40,
                        "string": "Although this particular example comes from the Wikipedia in Spanish, similar phenomena exist in other editions."
                    },
                    {
                        "id": 41,
                        "string": "Firstly, the paths from different apparently unrelated categories -Sport and Science-, converge in a common node soon in the graph (node Pyrenees)."
                    },
                    {
                        "id": 42,
                        "string": "As a result, not only Pyrenees could be considered as a sub-category of both Sport and Science, but all its descendants."
                    },
                    {
                        "id": 43,
                        "string": "Secondly, cycles exist among the different categories, as in the sequence Mountains of Andorra → Pyrenees → Mountains of the Pyrenees → Mountains of Andorra."
                    },
                    {
                        "id": 44,
                        "string": "Mountaineering Mountains Mountains of Andorra Ideally, every sub-category of a category should share the same attributes, since the \"failure to observe this principle reduces the predictability [of the taxonomy] and can lead to cross-classification\" (Rowley and Hartley, 2000, p. 196) ."
                    },
                    {
                        "id": 45,
                        "string": "Although fixing this issue -inherent to all the Wikipedia editions-falls out of the scope of our research, some heuristic strategies are necessary to diminish their impact in the domain definition process."
                    },
                    {
                        "id": 46,
                        "string": "Plamada and Volk (2012) dodge this issue by extracting a domain comparable corpus using IR techniques."
                    },
                    {
                        "id": 47,
                        "string": "They use the characteristic vocabulary of the domain (100 terms extracted from an external in-domain corpus) to query a Lucene search engine 4 over the whole encyclopaedia."
                    },
                    {
                        "id": 48,
                        "string": "Our approach is completely different: we try to get along with Wikipedia's structure with a strategy to walk through the category graph departing from a root or pseudo-root category, which defines our domain of interest."
                    },
                    {
                        "id": 49,
                        "string": "We empirically set a threshold to stop exploring the graph such that the included categories most likely represent an entire domain (cf."
                    },
                    {
                        "id": 50,
                        "string": "Section 3)."
                    },
                    {
                        "id": 51,
                        "string": "This approach is more similar to Cui et al."
                    },
                    {
                        "id": 52,
                        "string": "(2008) , who explore the Wiki-Graph and score every category in order to assess its likelihood of belonging to the domain."
                    },
                    {
                        "id": 53,
                        "string": "Other tools are being developed to extract corpora from Wikipedia."
                    },
                    {
                        "id": 54,
                        "string": "Linguatools 5 released a comparable corpus extracted from Wikipedias in 253 language pairs."
                    },
                    {
                        "id": 55,
                        "string": "Unfortunately, neither their tool nor the applied methodology description are available."
                    },
                    {
                        "id": 56,
                        "string": "CatScan2 6 is a tool that allows to explore and search categories recursively."
                    },
                    {
                        "id": 57,
                        "string": "The Accurat toolkit (Pinnis et al., 2012 ; Ş tefȃnescu, Dan and Ion, Radu and Hunsicker, Sabine, 2012) 7 aligns comparable documents and extracts parallel sentences, lexicons, and named entities."
                    },
                    {
                        "id": 58,
                        "string": "Finally, the most related tool to ours: CorpusPedia 8 extracts non-aligned, softly-aligned, and strongly-aligned comparable corpora from Wikipedia (Otero and López, 2010) ."
                    },
                    {
                        "id": 59,
                        "string": "The difference with respect to our model is that they only consider the articles associated to one specific category and not to an entire domain."
                    },
                    {
                        "id": 60,
                        "string": "The inter-connection among Wikipedia editions in different languages has been exploited for multiple tasks including lexicon induction (Erdmann et al., 2008) , extraction of bilingual dictionaries (Yu and Tsujii, 2009) , and identification of particular translations (Chu et al., 2014; Prochasson and Fung, 2011) ."
                    },
                    {
                        "id": 61,
                        "string": "Different cross-language NLP tasks have particularly taken advantage of Wikipedia."
                    },
                    {
                        "id": 62,
                        "string": "Articles have been used for query translation (Schönhofen et al., 2007) and crosslanguage semantic representations for similarity estimation (Cimiano et al., 2009; Potthast et al., 2008; Sorg and Cimiano, 2012) ."
                    },
                    {
                        "id": 63,
                        "string": "The extraction of parallel corpora from Wikipedia has been a hot topic during the last years (Adafre and de Rijke, 2006; Patry and Langlais, 2011; Plamada and Volk, 2012; Smith et al., 2010; Tomás et al., 2008; Yasuda and Sumita, 2008) ."
                    },
                    {
                        "id": 64,
                        "string": "Domain-Specific Comparable Corpora Extraction In this section we describe our proposal to extract domain-specific comparable corpora from Wikipedia."
                    },
                    {
                        "id": 65,
                        "string": "The input to the pipeline is the top category of the domain (e.g., Sport)."
                    },
                    {
                        "id": 66,
                        "string": "The terminology used in this description is as follows."
                    },
                    {
                        "id": 67,
                        "string": "Let c be a Wikipedia category and c * be the top category of a domain."
                    },
                    {
                        "id": 68,
                        "string": "Let a be a Wikipedia article; a ∈ c if a contains c among its categories."
                    },
                    {
                        "id": 69,
                        "string": "Let G be the Wikipedia category graph."
                    },
                    {
                        "id": 70,
                        "string": "Vocabulary definition."
                    },
                    {
                        "id": 71,
                        "string": "The domain vocabulary represents the set of terms that better characterises the domain."
                    },
                    {
                        "id": 72,
                        "string": "We do not expect to have at our disposal the vocabulary associated to every category."
                    },
                    {
                        "id": 73,
                        "string": "Therefore, we build it from the Wikipedia itself."
                    },
                    {
                        "id": 74,
                        "string": "We collect every article a ∈ c * and apply standard pre-processing; i.e., tokenisation, stopwording, numbers and punctuation marks filtering, and stemming (Porter, 1980) ."
                    },
                    {
                        "id": 75,
                        "string": "In order to reduce noise, tokens shorter than four characters are discarded as well."
                    },
                    {
                        "id": 76,
                        "string": "The vocabulary is then composed of the top n terms, ranked by term frequency."
                    },
                    {
                        "id": 77,
                        "string": "This value is empirically determined."
                    },
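A minimal sketch of this vocabulary-definition step, with a naive tokenizer and placeholder stopword list and stemmer standing in for the actual pre-processing pipeline:

```python
# Sketch: build the characteristic vocabulary by merging in-domain
# articles, filtering noise, and keeping the top terms by frequency.
from collections import Counter

def characteristic_vocabulary(articles, stopwords, stem, top_fraction=0.10):
    counts = Counter()
    for text in articles:
        for tok in text.lower().split():           # naive tokenisation
            tok = tok.strip('.,;:!?()"')
            if len(tok) < 4 or tok in stopwords:   # short-token/noise filter
                continue
            if tok.isdigit():                      # drop numbers
                continue
            counts[stem(tok)] += 1
    n = max(1, int(len(counts) * top_fraction))    # e.g. top 10% of terms
    return {term for term, _ in counts.most_common(n)}
```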
                    {
                        "id": 78,
                        "string": "Graph exploration."
                    },
                    {
                        "id": 79,
                        "string": "The input for this step is G, c * (i.e., the departing node in the graph), and the domain vocabulary."
                    },
                    {
                        "id": 80,
                        "string": "Departing from c * , we perform a breadth-first search, looking for all those categories which more likely belong to the required domain."
                    },
                    {
                        "id": 81,
                        "string": "Two constraints are applied in order to make a controlled exploration of the graph: (i) in order to avoid loops and exploring already traversed paths, a node can only be visited once, (ii) in order to avoid exploring the whole categories graph, a stopping criterion is pre-defined."
                    },
                    {
                        "id": 82,
                        "string": "Our stopping criterion is inspired by the classification tree-breadth first search algorithm (Cui et al., 2008) ."
                    },
                    {
                        "id": 83,
                        "string": "The core idea is scoring the explored cate- gories to determine if they belong to the domain."
                    },
                    {
                        "id": 84,
                        "string": "Our heuristic assumes that a category belongs to the domain if its title contains at least one of the terms in the characteristic vocabulary."
                    },
                    {
                        "id": 85,
                        "string": "Nevertheless, many categories exist that may not include any of the terms in the vocabulary."
                    },
                    {
                        "id": 86,
                        "string": "(e.g., consider category pato in Spanish -literally \"duck\" in English-which, somehow surprisingly, refers to a sport rather than an animal)."
                    },
                    {
                        "id": 87,
                        "string": "Our naïve solution to this issue is to consider subsets of categories according to their depth respect to the root."
                    },
                    {
                        "id": 88,
                        "string": "An entire level of categories is considered part of the domain if a minimum percentage of its elements include vocabulary terms."
                    },
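Putting the exploration together, a sketch of the breadth-first search with the level-wise stopping rule might look as follows; `children_of` is an assumed adjacency accessor, not the real JWPL API, and `min_ratio` corresponds to the 50%/60% criteria.

```python
# Sketch: BFS over the category graph from the root category, visiting
# each node once and stopping at the first level where fewer than
# `min_ratio` of the categories contain a characteristic-vocabulary term.
def explore_domain(root, children_of, vocabulary, min_ratio=0.5):
    visited = {root}
    selected = [root]
    level = [root]
    while level:
        next_level = []
        for cat in level:
            for child in children_of(cat):
                if child not in visited:          # visit each node only once
                    visited.add(child)
                    next_level.append(child)
        if not next_level:
            break
        hits = sum(any(t in c.lower() for t in vocabulary) for c in next_level)
        if hits / len(next_level) < min_ratio:
            break                                 # level has drifted off-domain
        selected.extend(next_level)               # keep the whole level
        level = next_level
    return selected
```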
                    {
                        "id": 89,
                        "string": "In our experiments we use the English and Spanish Wikipedia editions."
                    },
                    {
                        "id": 90,
                        "string": "9 Table 1 shows some statistics, after filtering disambiguation and redirect pages."
                    },
                    {
                        "id": 91,
                        "string": "The intersection of articles and categories between the two languages represents the ceiling for the amount of parallel corpora one can gather for this pair."
                    },
                    {
                        "id": 92,
                        "string": "We focus on three domains: Computer Science (CS), Science (Sc), and Sports (Sp) -the top categories c * from which the graph is explored in order to extract the corresponding comparable corpora."
                    },
                    {
                        "id": 93,
                        "string": "Table 2 shows the number of root articles associated to c * for each domain and language."
                    },
                    {
                        "id": 94,
                        "string": "From them, we obtain domain vocabularies with a size between 100 and 400 lemmas (right-side columns) when using the top 10% terms."
                    },
                    {
                        "id": 95,
                        "string": "We ran experiments using the top 10%, 15%, 20% and 100%."
                    },
                    {
                        "id": 96,
                        "string": "The relatively small size of these vocabularies allows to manually check that 10% is the best option to characterise the desired category, higher percentages add more noise than in-domain terms."
                    },
                    {
                        "id": 97,
                        "string": "The plots in Figure 2 show the percentage of categories with at least one domain term in the ti-9 Dumps downloaded from https://dumps."
                    },
                    {
                        "id": 98,
                        "string": "wikimedia.org in July 2013 and pre-processed with JWPL (Zesch et al., 2008) tle: the starting point for our graph-based method for selecting the in-domain articles."
                    },
                    {
                        "id": 99,
                        "string": "As expected, nearly 100% of the categories in the root include domain terms and this percentage decreases with increasing depth in the tree."
                    },
                    {
                        "id": 100,
                        "string": "When extracting the corpus, one must decide the adequate percentage of positive categories allowed."
                    },
                    {
                        "id": 101,
                        "string": "High thresholds lead to small corpora whereas low thresholds lead to larger -but noisier-corpora."
                    },
                    {
                        "id": 102,
                        "string": "As in many applications, this is a trade-off between precision and recall and depends on the intended use of the corpus."
                    },
                    {
                        "id": 103,
                        "string": "The stopping level is selected for every language independently, but in order to reduce noise, the comparable corpus is only built from those articles that appear in both languages and are related via an interlanguage link."
                    },
                    {
                        "id": 104,
                        "string": "We validate the quality in terms of application-based utility of the generated comparable corpora when used in a translation system (cf."
                    },
                    {
                        "id": 105,
                        "string": "Section 5)."
                    },
                    {
                        "id": 106,
                        "string": "Therefore, we choose to give more importance to recall and opt for the corpora obtained with a threshold of 50%."
                    },
                    {
                        "id": 107,
                        "string": "Parallel Sentence Extraction In this section we describe a simple technique for extracting parallel sentences from a comparable corpus."
                    },
                    {
                        "id": 108,
                        "string": "Given a pair of articles related by an interlanguage link, we estimate the similarity between all their pairs of cross-language sentences with different text similarity measures."
                    },
                    {
                        "id": 109,
                        "string": "We repeat the process for all the pairs of articles and rank the resulting sentence pairs according to its similarity."
                    },
                    {
                        "id": 110,
                        "string": "After defining a threshold for each measure, those sentence pairs with a similarity higher than the threshold are extracted as parallel sentences."
                    },
                    {
                        "id": 111,
                        "string": "This is a non-supervised method that generates a noisy parallel corpus."
                    },
                    {
                        "id": 112,
                        "string": "The quality of the similarity measures will then affect the purity of the parallel corpus and, therefore, the quality of the translator."
                    },
                    {
                        "id": 113,
                        "string": "However, we do not need to be very restrictive with the measures here and still favour a large corpus, since the word alignment process in the SMT system can take care of part of the noise."
                    },
                    {
                        "id": 114,
                        "string": "Similarity computation."
                    },
                    {
                        "id": 115,
                        "string": "We compute similarities between pairs of sentences by means of cosine and length factor measures."
                    },
                    {
                        "id": 116,
                        "string": "The cosine similarity is calculated on three well-known characterisations in cross-language information retrieval and parallel corpora alignment: (i) character ngrams (cng) (McNamee and Mayfield, 2004); (ii) pseudo-cognates (cog) (Simard et al., 1992) ; and (iii) word 1-grams, after translation into a common language, both from English to Spanish and vice versa (mono en , mono es )."
                    },
                    {
                        "id": 117,
                        "string": "We add the (iv) length factor (len) (Pouliquen et al., 2003) as an independent measure and as penalty (multiplicative factor) on the cosine similarity."
                    },
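As an illustration of these measures, the sketch below computes the cosine similarity over character 3-grams (the c3g characterisation) and multiplies it by a toy length factor. It is a minimal sketch under stated assumptions: the exponential decay in `length_factor` merely stands in for the formula of Pouliquen et al. (2003).

```python
from collections import Counter
from math import exp, sqrt

def char_ngrams(sentence, n=3):
    """Character n-gram profile of a sentence (n=3 mirrors the c3g measure)."""
    s = sentence.lower()
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(p, q):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * q[g] for g, v in p.items() if g in q)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def length_factor(src, tgt):
    """Toy stand-in for len: decays as the length ratio departs from 1."""
    ratio = len(tgt) / max(len(src), 1)
    return exp(-abs(1.0 - ratio))

def similarity(src, tgt, n=3):
    """Cosine over character n-grams, penalised by the length factor."""
    return cosine(char_ngrams(src, n), char_ngrams(tgt, n)) * length_factor(src, tgt)

print(similarity("The car exploded on Friday.", "El coche explotó el viernes."))
```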
                    {
                        "id": 118,
                        "string": "The threshold for each of the measures just introduced is empirically set in a manually annotated corpus."
                    },
                    {
                        "id": 119,
                        "string": "We define it as the value that maximises the F 1 score on this development set."
                    },
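Choosing each threshold then reduces to a sweep over candidate values on the annotated development pairs, keeping the value with the highest F1. A minimal sketch, assuming binary gold labels (parallel vs. not) and precomputed similarity scores:

```python
def f1_at_threshold(scores, labels, t):
    """P/R/F1 when pairs with score >= t are predicted to be parallel."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
    if tp == 0:
        return 0.0
    p, r = tp / (tp + fp), tp / (tp + fn)
    return 2 * p * r / (p + r)

def best_threshold(scores, labels):
    """Try every observed score as a candidate threshold; keep the best F1."""
    return max(set(scores), key=lambda t: f1_at_threshold(scores, labels, t))
```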
                    {
                        "id": 120,
                        "string": "To create this set, we manually annotated a corpus with 30 article pairs (10 per domain) at sentence level."
                    },
                    {
                        "id": 121,
                        "string": "We considered three sentence classes: parallel, comparable, and other."
                    },
                    {
                        "id": 122,
                        "string": "The volunteers of the exercise were given as guidelines the definitions by Skadiņa et al."
                    },
                    {
                        "id": 123,
                        "string": "(2010) of parallel text and strongly comparable text (cf."
                    },
                    {
                        "id": 124,
                        "string": "Section 2)."
                    },
                    {
                        "id": 125,
                        "string": "A pair that did not match any of these definitions had to be classified as other."
                    },
                    {
                        "id": 126,
                        "string": "Each article pair was annotated by two volunteers, native speakers of Spanish with high command of English (a total of nine volunteers participated in the process)."
                    },
                    {
                        "id": 127,
                        "string": "The mean agreement between annotators had a kappa coefficient (Cohen, 1960) of κ ∼ 0.7."
                    },
                    {
                        "id": 128,
                        "string": "A third annotator resolved disagreed sentences."
                    },
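For reference, Cohen's kappa corrects raw agreement for chance agreement, kappa = (p_o - p_e) / (1 - p_e). A small sketch for the two-annotator, three-class setting described above (the label lists are made-up examples, not the actual annotations):

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's (1960) kappa: observed agreement corrected for chance."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["parallel", "comparable", "other", "parallel"]
b = ["parallel", "other", "other", "parallel"]
print(cohen_kappa(a, b))  # 0.6: agreement beyond chance on the three classes
```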
                    {
                        "id": 129,
                        "string": "10 Table 4 shows the thresholds that obtain the maximum F 1 scores."
                    },
                    {
                        "id": 130,
                        "string": "It is worth noting that, even if the values of precision and recall are relatively low -the maximum recall is 0.57 for len-, our intention with these simple measures is not to obtain the highest performance in terms of retrieval, but injecting the most useful data to the translator, even at the cost of some noise."
                    },
                    {
                        "id": 131,
                        "string": "The performance with character 3-grams is the best one, comparable to that of mono, with an F 1 of 0.36."
                    },
                    {
                        "id": 132,
                        "string": "This suggests that a translator is not mandatory for performing the sentences selection."
                    },
                    {
                        "id": 133,
                        "string": "Len and 1-grams have no discriminating power and lead to the worse scores (F 1 of 0.14 and 0.21, respectively)."
                    },
                    {
                        "id": 134,
                        "string": "We ran a second set of experiments to explore the combination of the measures."
                    },
                    {
                        "id": 135,
                        "string": "the performance obtained by averaging all the similarities (S), also after multiplying them by the length factor and/or the observed F 1 obtained in the previous experiment."
                    },
                    {
                        "id": 136,
                        "string": "Even if the length factor had shown a poor performance in isolation, it helps to lift the F 1 figures consistently after affecting the similarities."
                    },
                    {
                        "id": 137,
                        "string": "In this case, F 1 grows up to 0.43."
                    },
                    {
                        "id": 138,
                        "string": "This impact is not so relevant when the individual F 1 is used for weightingS."
                    },
                    {
                        "id": 139,
                        "string": "We applied all the measures -both combined and in isolation-on the entire comparable corpora previously extracted."
                    },
                    {
                        "id": 140,
                        "string": "Table 6 shows the amount of parallel sentences extracted by applying the empirically defined thresholds of Tables 4 and 5."
                    },
                    {
                        "id": 141,
                        "string": "As expected, more flexible alternatives, such as low-level n-grams or length factor result in a higher amount of retrieved instances, but in all cases the size of the corpora is remarkable."
                    },
                    {
                        "id": 142,
                        "string": "For the most restricted domain, CS, we get around 200k parallel sentences for a given similarity measure."
                    },
                    {
                        "id": 143,
                        "string": "For the widest domain, SC, we surpass the 1M sentence pairs."
                    },
                    {
                        "id": 144,
                        "string": "As it will be shown in the following section, these sizes are already useful to be used for training SMT systems."
                    },
                    {
                        "id": 145,
                        "string": "Some standard parallel corpora have the same order of magnitude."
                    },
                    {
                        "id": 146,
                        "string": "For tasks other than MT, where the precision on the extracted pairs can be more important than the recall, one can obtain cleaner corpora by using a threshold that maximises precision instead of F 1 ."
                    },
                    {
                        "id": 147,
                        "string": "CS Evaluation: Statistical Machine Translation Task In this section we validate the quality of the obtained corpora by studying its impact on statistical machine translation."
                    },
                    {
                        "id": 148,
                        "string": "There are several parallel corpora for the English-Spanish language pair."
                    },
                    {
                        "id": 149,
                        "string": "We select as a general-purpose corpus Europarl v7 (Koehn, 2005) , with 1.97M parallel sentences."
                    },
                    {
                        "id": 150,
                        "string": "The order of magnitude is similar to the largest corpus we have extracted from Wikipedia, so we can compare the results in a size-independent way."
                    },
                    {
                        "id": 151,
                        "string": "If our corpus extracted from Wikipedia was made up with parallel fragments of the desired domain, it should be the most adequate to translate these domains."
                    },
                    {
                        "id": 152,
                        "string": "If the quality of the parallel fragments was acceptable, it should also help when translating out-of-domain texts."
                    },
                    {
                        "id": 153,
                        "string": "In order to test these hypotheses we analyse three settings: (i) train SMT systems only with Wikipedia (WP) or Europarl (EP) to translate domain-specific texts, (ii) train SMT systems with Wikipedia and Europarl to translate domain-specific texts, and (iii) train SMT systems with Wikipedia and Europarl to translate out-of-domain texts (news)."
                    },
                    {
                        "id": 154,
                        "string": "For the out-of-domain evaluation we use the News Commentaries 2011 test set and the News Commentaries 2009 for development."
                    },
                    {
                        "id": 155,
                        "string": "11 For the in-domain evaluation we build the test and development sets in a semiautomatic way."
                    },
                    {
                        "id": 156,
                        "string": "We depart from the parallel corpora gathered in Section 4 from which sentences with more than four tokens and beginning with a letter are selected."
                    },
                    {
                        "id": 157,
                        "string": "We estimate its perplexity with respect to a language model obtained with Europarl in order to select the most fluent sentences and then we rank the parallel sentences according to their similarity and perplexity."
                    },
                    {
                        "id": 158,
                        "string": "The top-n fragments were manually revised and extracted to build the Wikipedia test (WPtest) and development (WPdev) sets."
                    },
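Such perplexity-based ranking can be sketched with, for instance, the kenlm Python bindings; the model path, the choice of kenlm rather than SRILM, and the tie-breaking by similarity are assumptions of this sketch, not the paper's setup.

```python
import kenlm  # assumes the kenlm Python bindings are installed

# Hypothetical path to a language model trained on Europarl.
lm = kenlm.Model("europarl.en.binary")

def rank_sentences(pairs):
    """Rank (en, es, similarity) triples: the most fluent (lowest-perplexity)
    English sides first, ties broken by the extraction similarity score."""
    return sorted(pairs, key=lambda p: (lm.perplexity(p[0]), -p[2]))

pairs = [("The car exploded on Friday .", "El coche explotó el viernes .", 0.71)]
for en, es, sim in rank_sentences(pairs):
    print(f"{lm.perplexity(en):8.1f}  {sim:.2f}  {en}")
```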
                    {
                        "id": 159,
                        "string": "We repeated the process for the three studied domains and drew 300 parallel fragments for development for every domain and 500 for test."
                    },
                    {
                        "id": 160,
                        "string": "We removed these sentences from the corresponding training corpora."
                    },
                    {
                        "id": 161,
                        "string": "For one of the domains, CS, we also gathered a test set from a parallel corpus of GNOME localisation files (Tiedemann, 2012) ."
                    },
                    {
                        "id": 162,
                        "string": "Table 7 shows the size in number of sentences of these test sets and of the 20 Wikipedia training sets used for translation."
                    },
                    {
                        "id": 163,
                        "string": "Only one measure, that with the highest F 1 score, is selected from each family: c3g, cog, mono en andS·len (cf."
                    },
                    {
                        "id": 164,
                        "string": "Tables 4 and 5)."
                    },
                    {
                        "id": 165,
                        "string": "We also compile the corpus that results from the union of the previous four."
                    },
                    {
                        "id": 166,
                        "string": "Notice that, although we eliminate duplicates from this corpus, the size of the union is close to the sum of the individual corpora."
                    },
                    {
                        "id": 167,
                        "string": "This indicates that every similarity measure selects a different set of parallel fragments."
                    },
                    {
                        "id": 168,
                        "string": "Beside the specialised corpus for each domain, we build a larger corpus with all the data (Un)."
                    },
                    {
                        "id": 169,
                        "string": "Again, duplicate fragments coming from articles belonging to more than one domain are removed."
                    },
                    {
                        "id": 170,
                        "string": "SMT systems are trained using standard freely available software."
                    },
                    {
                        "id": 171,
                        "string": "We estimate a 5-gram language model using interpolated Kneser-Ney discounting with SRILM (Stolcke, 2002) ."
                    },
                    {
                        "id": 172,
                        "string": "Word alignment is done with GIZA++ (Och and Ney, 2003) and both phrase extraction and decoding are done with Moses (Koehn et al., 2007) ."
                    },
                    {
                        "id": 173,
                        "string": "We optimise the feature weights of the model with Minimum Error Rate Training (MERT) (Och, 2003) against the BLEU evaluation metric (Papineni et al., 2002) ."
                    },
                    {
                        "id": 174,
                        "string": "Our model considers the language model, direct and inverse phrase probabilities, direct and inverse lexical probabilities, phrase and word penalties, and a lexicalised reordering."
                    },
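For readers unfamiliar with Moses, the model just described is a log-linear combination: every translation hypothesis is scored as a weighted sum of feature values, and MERT tunes the weights by maximising BLEU on the development set. The sketch below is conceptual only; the feature values and weights are invented for illustration.

```python
from math import log

# Invented feature values for one translation hypothesis: log-probabilities
# for the model scores, negative counts for the penalties.
features = {
    "lm": log(1e-4),              # 5-gram language model
    "phrase_direct": log(0.30),   # direct phrase translation probability
    "phrase_inverse": log(0.20),  # inverse phrase translation probability
    "lex_direct": log(0.25),      # direct lexical weighting
    "lex_inverse": log(0.22),     # inverse lexical weighting
    "word_penalty": -5.0,         # one unit per output word
    "phrase_penalty": -2.0,       # one unit per phrase used
    "reordering": log(0.60),      # lexicalised reordering score
}

# Uniform placeholder weights; in practice these are the values MERT finds.
weights = {name: 1.0 for name in features}

score = sum(weights[f] * value for f, value in features.items())
print(score)  # the decoder keeps the hypothesis with the highest score
```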
                    {
                        "id": 175,
                        "string": "(i) Training systems with Wikipedia or Europarl for domain-specific translation."
                    },
                    {
                        "id": 176,
                        "string": "Table 8 shows the evaluation results on WPtest."
                    },
                    {
                        "id": 177,
                        "string": "All the specialised systems obtain significant improvements with respect to the Europarl system, regardless of their size."
                    },
                    {
                        "id": 178,
                        "string": "For instance, the worst specialised system (c3g with only 95,715 sentences for CS) outperforms by more than 10 points of BLEU the general Europarl translator."
                    },
                    {
                        "id": 179,
                        "string": "The most complete system (the union of the four representatives) doubles the BLEU score for all the domains with an impressive improvement of 30 points."
                    },
                    {
                        "id": 180,
                        "string": "This is of course possible due to the nature of the test set that has been extracted from the same collection as the training data and therefore shares its structure and vocabulary."
                    },
                    {
                        "id": 181,
                        "string": "To give perspective to these high numbers we evaluate the systems trained on the CS domain  against the GNOME dataset (Table 9) ."
                    },
                    {
                        "id": 182,
                        "string": "Except for c3g, the Wikipedia translators always outperform the baseline with EP; the union system improves it by 4 BLEU points (22.41 compared to 18.15) with a four times smaller corpus."
                    },
                    {
                        "id": 183,
                        "string": "This confirms that a corpus automatically extracted with an F 1 smaller than 0.5 is still useful for SMT."
                    },
                    {
                        "id": 184,
                        "string": "Notice also that using only the in-domain data (CS) is always better than using the whole WP corpus (Un) even if the former is in general ten times smaller (cf."
                    },
                    {
                        "id": 185,
                        "string": "Table 7 )."
                    },
                    {
                        "id": 186,
                        "string": "According to this indirect evaluation of the similarity measures, character n-grams (c3g) represent the worst alternative."
                    },
                    {
                        "id": 187,
                        "string": "These results contradict the direct evaluation, where c3g and mono en had the highest F 1 scores on the development set among the individual similarity measures."
                    },
                    {
                        "id": 188,
                        "string": "The size of the corpus is not relevant here: when we train all the systems with the same amount of data, the ranking in the quality of the measures remains the same."
                    },
                    {
                        "id": 189,
                        "string": "To see this, we trained four additional systems with the top m number of parallel fragments, where m is the size of the smallest corpus for the union of domains: Un-c3g."
                    },
                    {
                        "id": 190,
                        "string": "This new comparison is reported in columns \"Comp.\""
                    },
                    {
                        "id": 191,
                        "string": "in Tables 8 and 9."
                    },
                    {
                        "id": 192,
                        "string": "In this fair comparison c3g is still the worst measure andS·len the best one."
                    },
                    {
                        "id": 193,
                        "string": "The translator built from its associated corpus outperforms with less than half of the data used for training the general one (883,366 vs. 1,965,734 parallel fragments) both in WPtest (56.78 vs. 30.63) and GNOME (19.76 vs. 18.15 )."
                    },
                    {
                        "id": 194,
                        "string": "(ii) Training systems on Wikipedia and Europarl for domain-specific translation."
                    },
                    {
                        "id": 195,
                        "string": "Now we enrich the general translator with Wikipedia data or, equivalently, complement the Wikipedia translator with out-of-domain data."
                    },
                    {
                        "id": 196,
                        "string": "Table 10 shows the results."
                    },
                    {
                        "id": 197,
                        "string": "Augmenting the size of the indomain corpus by 2 million fragments improves the results even more, about 2 points of BLEU   when using all the union data."
                    },
                    {
                        "id": 198,
                        "string": "System c3g benefits the most of the inclusion of the Europarl data."
                    },
                    {
                        "id": 199,
                        "string": "The reason is that it is the individual system with less corpus available and the one obtaining the worst results."
                    },
                    {
                        "id": 200,
                        "string": "In fact, the better the Wikipedia system, the less important the contribution from Europarl is."
                    },
                    {
                        "id": 201,
                        "string": "For the independent test set GNOME, Table 11 shows that the union corpus on CS is better than any combination of Wikipedia and Europarl."
                    },
                    {
                        "id": 202,
                        "string": "Still, as aforementioned, the best performance on this test set is obtained with a pure in-domain system (cf."
                    },
                    {
                        "id": 203,
                        "string": "are controlled by the Europarl baseline."
                    },
                    {
                        "id": 204,
                        "string": "In general, systems in which we include only texts from an unrelated domain do not improve the performance of the Europarl system alone, results of the combined system are better when we use Wikipedia texts from all the domains together (column Un) for training."
                    },
                    {
                        "id": 205,
                        "string": "This suggests that, as expected, a general Wikipedia corpus is necessary to build a general translator."
                    },
                    {
                        "id": 206,
                        "string": "This is a different problem to deal with."
                    },
                    {
                        "id": 207,
                        "string": "Conclusions and Ongoing Work In this paper we presented a model for the automatic extraction of in-domain comparable corpora from Wikipedia."
                    },
                    {
                        "id": 208,
                        "string": "It makes possible the automatic extraction of monolingual and comparable article collections as well as a one-click parallel corpus generation for on-demand language pairs and domains."
                    },
                    {
                        "id": 209,
                        "string": "Given a pair of languages and a main category, the model explores the Wikipedia categories graph and identifies a subset of categories (and their associated articles) to generate a document-aligned comparable corpus."
                    },
                    {
                        "id": 210,
                        "string": "The resulting corpus can be exploited for multiple natural language processing tasks."
                    },
                    {
                        "id": 211,
                        "string": "Here we applied it as part of a pipeline for the extraction of domainspecific parallel sentences."
                    },
                    {
                        "id": 212,
                        "string": "These parallel instances allowed for a significant improvement in the machine translation quality when compared to a generic system and applied to a domain specific corpus (in-domain)."
                    },
                    {
                        "id": 213,
                        "string": "The experiments are shown for the English-Spanish language pair and the domains Computer Science, Science, and Sports."
                    },
                    {
                        "id": 214,
                        "string": "Still it can be applied to other language pairs and domains."
                    },
                    {
                        "id": 215,
                        "string": "The prototype is currently operating in other languages."
                    },
                    {
                        "id": 216,
                        "string": "The only prerequisite is the existence of the corresponding Wikipedia edition and some basic processing tools such as a tokeniser and a lemmatiser."
                    },
                    {
                        "id": 217,
                        "string": "Our current efforts intend to generate a more robust model for parallel sentences identification and the design of other indirect evaluation schemes to validate the model performance."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 20
                    },
                    {
                        "section": "Background",
                        "n": "2",
                        "start": 21,
                        "end": 63
                    },
                    {
                        "section": "Domain-Specific Comparable Corpora Extraction",
                        "n": "3",
                        "start": 64,
                        "end": 106
                    },
                    {
                        "section": "Parallel Sentence Extraction",
                        "n": "4",
                        "start": 107,
                        "end": 146
                    },
                    {
                        "section": "Evaluation: Statistical Machine Translation Task",
                        "n": "5",
                        "start": 147,
                        "end": 206
                    },
                    {
                        "section": "Conclusions and Ongoing Work",
                        "n": "6",
                        "start": 207,
                        "end": 217
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1004-Table6-1.png",
                        "caption": "Table 6: Size of the parallel corpora extracted with each similarity measure.",
                        "page": 5,
                        "bbox": {
                            "x1": 313.92,
                            "x2": 519.36,
                            "y1": 187.68,
                            "y2": 413.28
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table4-1.png",
                        "caption": "Table 4: Best thresholds and their associated Precision (P), recall (R) and F1.",
                        "page": 5,
                        "bbox": {
                            "x1": 127.67999999999999,
                            "x2": 469.44,
                            "y1": 62.4,
                            "y2": 148.32
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table5-1.png",
                        "caption": "Table 5: Precision, recall, and F1 for the average of the similarities weighted by length model (len) and/or their F1.",
                        "page": 5,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 280.32,
                            "y1": 187.68,
                            "y2": 274.08
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Figure1-1.png",
                        "caption": "Figure 1: Slice of the Spanish Wikipedia category graph (as in May 2015) departing from categories Sport and Science. Translated for clarity.",
                        "page": 1,
                        "bbox": {
                            "x1": 310.08,
                            "x2": 529.92,
                            "y1": 62.879999999999995,
                            "y2": 212.16
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table7-1.png",
                        "caption": "Table 7: Number of sentences of the Wikipedia parallel corpora used to train the SMT systems (top rows) and of the sets used for development and test.",
                        "page": 6,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 524.16,
                            "y1": 62.4,
                            "y2": 176.16
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table8-1.png",
                        "caption": "Table 8: BLEU scores obtained on the Wikipedia test sets for the 20 specialised systems described in Section 5. A comparison column (Comp.) where all the systems are trained with corpora of the same size is also included (see text).",
                        "page": 6,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 524.16,
                            "y1": 248.64,
                            "y2": 346.08
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table10-1.png",
                        "caption": "Table 10: BLEU scores obtained on the Wikipedia test set for the 20 systems trained with the combination of the Europarl (EP) and the Wikipedia corpora. The results with a Europarl system and the best one from Table 8 (union) shown for comparison.",
                        "page": 7,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 523.1999999999999,
                            "y1": 62.4,
                            "y2": 183.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table11-1.png",
                        "caption": "Table 11: BLEU scores obtained on the GNOME test set for systems trained with Europarl and Wikipedia. A system with Europarl achieves a score of 18.15.",
                        "page": 7,
                        "bbox": {
                            "x1": 346.56,
                            "x2": 486.24,
                            "y1": 284.64,
                            "y2": 378.24
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table9-1.png",
                        "caption": "Table 9: BLEU scores obtained on the GNOME test set for systems trained only with Wikipedia. A system with Europarl achieves a score of 18.15.",
                        "page": 7,
                        "bbox": {
                            "x1": 99.84,
                            "x2": 262.08,
                            "y1": 62.879999999999995,
                            "y2": 156.0
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table1-1.png",
                        "caption": "Table 1: Amount of articles and categories in the Wikipedia editions and in the intersection (i.e., pages linked across languages).",
                        "page": 3,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 62.4,
                            "y2": 129.12
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table2-1.png",
                        "caption": "Table 2: Number of articles in the root categories and size of the resulting domain vocabulary.",
                        "page": 3,
                        "bbox": {
                            "x1": 311.03999999999996,
                            "x2": 515.52,
                            "y1": 193.44,
                            "y2": 330.71999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table12-1.png",
                        "caption": "Table 12: BLEU scores for the out-of-domain evaluation on the News Commentaries 2011 test set. We show in boldface all the systems that improve the Europarl translator, which achieves a score of 27.02.",
                        "page": 8,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 287.03999999999996,
                            "y1": 62.4,
                            "y2": 175.2
                        }
                    },
                    {
                        "filename": "../figure/image/1004-Table3-1.png",
                        "caption": "Table 3: Number of article pairs according to the percentage of positive categories used to select the levels of the graph and distance from the root at which the percentage is smaller to the desired one.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 288.0,
                            "y1": 62.4,
                            "y2": 151.2
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-15"
        },
        {
            "slides": {
                "0": {
                    "title": "Time is important",
                    "text": [
                        "Understanding time is key to understanding events",
                        "Timelines (in stories, clinical records), time-slot filling, Q&A, common sense",
                        "[June, 1989] Chris Robin lives in England and he is the person that you read about in Winnie the Pooh. As a boy, Chris lived in",
                        "Cotchfield Farm. When he was three, his father wrote a poem about him. His father later wrote Winnie the Pooh in 1925.",
                        "Where did Chris Robin live? Clearly, time sensitive.",
                        "When was Chris Robin born? poem [Chris at age 3]",
                        "Requires identifying relations between events, and temporal reasoning.",
                        "Events are associated with time intervals:",
                        "A happens BEFORE/AFTER B; Time is often expressed implicitly",
                        "2 explicit time expressions per 100 tokens, but 12 temporal relations"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Example",
                    "text": [
                        "Friday in the middle of a group of men playing volleyball.",
                        "Temporal question: Which one happens first?",
                        "e1 appears first in text. Is it also earlier in time? e2 was on Friday, but we dont know when e1 happened.",
                        "No explicit lexical markers, e.g., before, since, or during."
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Example temporal determined by causal",
                    "text": [
                        "More than 10 people (e1: died), he said. A car (e2: exploded)",
                        "Friday in the middle of a group of men playing volleyball.",
                        "Temporal question: Which one happens first?",
                        "Obviously, e2:exploded is the cause and e1:died is the effect.",
                        "So, e2 happens first.",
                        "In this example, the temporal relation is determined by the causal relation.",
                        "Note also that the lexical information is important here; its likely that explode BERORE die, irrespective of the context."
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Example causal determined by temporal",
                    "text": [
                        "People raged and took to the street the government",
                        "Did the government stifle people because people raged?",
                        "Or, people raged because the government stifled people?",
                        "Both sound correct and we are not sure about the causality here.",
                        "People raged and took to the street (after) the government",
                        "Since stifled happened earlier, its obvious that the cause is stifled and the result is raged.",
                        "In this example, the causal relation is determined by the temporal relation."
                    ],
                    "page_nums": [
                        4,
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "This paper",
                    "text": [
                        "Event relations: an essential step of event understanding, which",
                        "supports applications such as story understanding/completion, summarization, and timeline construction.",
                        "[There has been a lot of work on this; see Ning et al. ACL18, presented yesterday. for a discussion of the literature and the challenges.]",
                        "This paper focuses on the joint extraction of temporal and",
                        "A temporal relation (T-Link) specifies the relation between two events along the temporal dimension.",
                        "A causal relation (C-Link) specifies the [cause effect] between two events."
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Temporal and casual relations",
                    "text": [
                        "T-Link Example: John worked out after finishing his work.",
                        "C-Link Example: He was released due to lack of evidence.",
                        "Temporal and causal relations interact with each other.",
                        "For example, there is also a T-Link between released and lack",
                        "The decisions on the T-Link type and the C-link type depend on each other, suggesting that joint reasoning could help."
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "7": {
                    "title": "Contributions",
                    "text": [
                        "1. Proposed a novel joint inference framework for temporal and causal reasoning",
                        "Assume the availability of a temporal extraction system and a causal extraction system",
                        "Enforce declarative constraints originating from the physical nature of causality",
                        "2. Constructed a new dataset with both temporal and causal relations.",
                        "We augmented the EventCausality dataset (Do et al., 2011), which comes with causal relations, with new temporal annotations."
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "8": {
                    "title": "Temporal relation extraction an ilp approach",
                    "text": [
                        "--Event node set. are events.",
                        "' --temporal relation label",
                        "-Boolean variable is there a of relation r between \" -./ $? (Y/N)",
                        "0*(+,)--score of event pair having relation",
                        "Global assignment of relations: scores in this document",
                        "'K--the relation dictated by 'F and 'G"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "9": {
                    "title": "Proposed joint approach",
                    "text": [
                        "--Event node set. are events.",
                        "' --temporal relation label",
                        "-Boolean variable is there a of relation r between \" -./ $? (Y/N)",
                        "0*(+,)--score of event pair having relation",
                        "3 4--causal relation; with corresponding variables and",
                        "T & C relations",
                        "Cause must be before effect"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "11": {
                    "title": "Back to the example temporal determined by causal",
                    "text": [
                        "More than 10 people (e1: died), he said. A car (e2: exploded)",
                        "Friday in the middle of a group of men playing volleyball.",
                        "Temporal question: Which one happens first?",
                        "Obviously, e2:exploded is the cause and e1:died is the effect.",
                        "So, e2 happens first.",
                        "In this example, the temporal relation is determined by the",
                        "Note also that the lexical information is important here; its",
                        "likely that explode BERORE die, irrespective of the context."
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "12": {
                    "title": "Temprob probabilistic knowledge base",
                    "text": [
                        "Preprocessing: Semantic Role Labeling & Temporal relations model",
                        "Result: 51K semantic frames, 80M relations",
                        "Then we simply count how many times one frame is before/after another frame, as follows. http://cogcomp.org/page/publication_view/830",
                        "Frame 1 Frame 2 Before After"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "15": {
                    "title": "Result on timebank dense",
                    "text": [
                        "TimeBank-Dense: A Benchmark Temporal Relation Dataset",
                        "The performance of temporal relation extraction:",
                        "CAEVO: the temporal system proposed along with TimeBank-Dense",
                        "CATENA: the aforementioned work post-editing temporal relations based on causal predictions, retrained on TimeBank-Dense.",
                        "System P R F1"
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": []
                },
                "16": {
                    "title": "A new joint dataset",
                    "text": [
                        "TimeBank-Dense has only temporal relation annotations, so in the evaluations above, we only evaluated our temporal performance.",
                        "EventCausality dataset has only causal relation annotations.",
                        "To get a dataset with both temporal and causal relation annotations, we choose to augment the EventCausality dataset with temporal relations, using the annotation scheme we proposed in our paper [Ning et al., ACL18. A multi-axis annotation scheme for",
                        "event temporal relation annotation.]",
                        "Doc Event T-Link C-Link",
                        "*due to re-definition of events"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "17": {
                    "title": "Result on our new joint dataset",
                    "text": [
                        "P R F Acc.",
                        "The temporal performance got strictly better in P, R, and F1.",
                        "The causal performance also got improved by a large margin.",
                        "Comparing to when gold temporal relations were used, we can see that theres still much room for causal improvement.",
                        "Comparing to when gold causal relations were used, we can see that the current joint algorithm is very close to its best."
                    ],
                    "page_nums": [
                        20
                    ],
                    "images": []
                }
            },
            "paper_title": "Joint Reasoning for Temporal and Causal Relations",
            "paper_id": "1010",
            "paper": {
                "title": "Joint Reasoning for Temporal and Causal Relations",
                "abstract": "Understanding temporal and causal relations between events is a fundamental natural language understanding task. Because a cause must occur earlier than its effect, temporal and causal relations are closely related and one relation often dictates the value of the other. However, limited attention has been paid to studying these two relations jointly. This paper presents a joint inference framework for them using constrained conditional models (CCMs). Specifically, we formulate the joint problem as an integer linear programming (ILP) problem, enforcing constraints that are inherent in the nature of time and causality. We show that the joint inference framework results in statistically significant improvement in the extraction of both temporal and causal relations from text. 1",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Understanding events is an important component of natural language understanding."
                    },
                    {
                        "id": 1,
                        "string": "An essential step in this process is identifying relations between events, which are needed in order to support applications such as story completion, summarization, and timeline construction."
                    },
                    {
                        "id": 2,
                        "string": "Among the many relation types that could exist between events, this paper focuses on the joint extraction of temporal and causal relations."
                    },
                    {
                        "id": 3,
                        "string": "It is well known that temporal and causal relations interact with each other and in many cases, the decision of one relation is made primarily based on evidence from the other."
                    },
                    {
                        "id": 4,
                        "string": "In Example 1, identifying the temporal relation between e1:died and e2:exploded is 1 The dataset and code used in this paper are available at http://cogcomp.org/page/publication_ view/835 in fact a very hard case: There are no explicit temporal markers (e.g., \"before\", \"after\", or \"since\"); the events are in separate sentences so their syntactic connection is weak; although the occurrence time of e2:exploded is given (i.e., Friday) in text, it is not given for e1:died."
                    },
                    {
                        "id": 5,
                        "string": "However, given the causal relation, e2:exploded caused e1:died,it is clear that e2:exploded happened before e1:died."
                    },
                    {
                        "id": 6,
                        "string": "The temporal relation is dictated by the causal relation."
                    },
                    {
                        "id": 7,
                        "string": "Ex 1: Temporal relation dictated by causal relation."
                    },
                    {
                        "id": 8,
                        "string": "More than 10 people (e1:died) on their way to the nearest hospital, police said."
                    },
                    {
                        "id": 9,
                        "string": "A suicide car bomb (e2:exploded) on Friday in the middle of a group of men playing volleyball in northwest Pakistan."
                    },
                    {
                        "id": 10,
                        "string": "Since e2:exploded is the reason of e1:died, the temporal relation is thus e2 being before e1."
                    },
                    {
                        "id": 11,
                        "string": "Ex 2: Causal relation dictated by temporal relation."
                    },
                    {
                        "id": 12,
                        "string": "Mir-Hossein Moussavi (e3:raged) after government's efforts to (e4:stifle) protesters."
                    },
                    {
                        "id": 13,
                        "string": "Since e3:raged is temporally after e4:stifle, e4 should be the cause of e3."
                    },
                    {
                        "id": 14,
                        "string": "On the other hand, causal relation extraction can also benefit from knowing temporal relations."
                    },
                    {
                        "id": 15,
                        "string": "In Example 2, it is unclear whether the government stifled people because people raged, or people raged because the government stifled people: both situations are logically reasonable."
                    },
                    {
                        "id": 16,
                        "string": "However, if we account for the temporal relation (that is, e4:stifle happened before e3:raged), it is clear that e4:stifle is the cause and e3:raged is the effect."
                    },
                    {
                        "id": 17,
                        "string": "In this case, the causal relation is dictated by the temporal relation."
                    },
                    {
                        "id": 18,
                        "string": "The first contribution of this work is proposing a joint framework for Temporal and Causal Reasoning (TCR), inspired by these examples."
                    },
                    {
                        "id": 19,
                        "string": "Assuming the availability of a temporal extraction system and a causal extraction system, the proposed joint framework combines these two using a constrained conditional model (CCM) (Chang et al., 2012) framework, with an integer linear pro-gramming (ILP) objective (Roth and Yih, 2004) that enforces declarative constraints during the inference phase."
                    },
                    {
                        "id": 20,
                        "string": "Specifically, these constraints include: (1) A cause must temporally precede its effect."
                    },
                    {
                        "id": 21,
                        "string": "(2) Symmetry constraints, i.e., when a pair of events, (A, B) , has a temporal relation r (e.g., before), then (B, A) must have the reverse relation of r (e.g., after)."
                    },
                    {
                        "id": 22,
                        "string": "(3) Transitivity constraints, i.e., the relation between (A, C) must be temporally consistent with the relation derived from (A, B) and (B, C)."
                    },
                    {
                        "id": 23,
                        "string": "These constraints originate from the one-dimensional nature of time and the physical nature of causality and build connections between temporal and causal relations, making CCM a natural choice for this problem."
                    },
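A minimal sketch of such a constrained inference with an off-the-shelf ILP solver (PuLP here, an illustrative choice rather than the authors' implementation), restricted to before/after labels and toy scores, with the symmetry, transitivity, and cause-precedes-effect constraints:

```python
import itertools
import pulp

events = ["e1_died", "e2_exploded", "e3_said"]
T = ["before", "after"]  # simplified temporal label set
pairs = list(itertools.permutations(events, 2))

# Toy local scores s_r(i,j) standing in for the classifiers' outputs.
s_temp = {(i, j, r): 0.5 for (i, j) in pairs for r in T}
s_temp[("e2_exploded", "e1_died", "before")] = 0.9
s_caus = {(i, j): 0.1 for (i, j) in pairs}
s_caus[("e2_exploded", "e1_died")] = 0.8  # exploded likely causes died

prob = pulp.LpProblem("joint_TCR", pulp.LpMaximize)
y = pulp.LpVariable.dicts("y", [(i, j, r) for (i, j) in pairs for r in T], cat="Binary")
c = pulp.LpVariable.dicts("c", pairs, cat="Binary")  # c[i,j]: i causes j

# Objective: total score of the chosen temporal and causal relations.
prob += (pulp.lpSum(s_temp[k] * y[k] for k in y) +
         pulp.lpSum(s_caus[k] * c[k] for k in c))

for (i, j) in pairs:
    # Exactly one temporal label per ordered pair (no vague/none label here).
    prob += pulp.lpSum(y[(i, j, r)] for r in T) == 1
    # Symmetry: before(i, j) if and only if after(j, i).
    prob += y[(i, j, "before")] == y[(j, i, "after")]
    # A cause must temporally precede its effect.
    prob += c[(i, j)] <= y[(i, j, "before")]

# Transitivity: before(i, j) and before(j, k) imply before(i, k).
for i, j, k in itertools.permutations(events, 3):
    prob += y[(i, j, "before")] + y[(j, k, "before")] - y[(i, k, "before")] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(y[("e2_exploded", "e1_died", "before")]))  # 1.0
```

The cause-precedes-effect constraint is what lets a confident causal score for (exploded, died) dictate the temporal label, mirroring Example 1.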
                    {
                        "id": 24,
                        "string": "As far as we know, very limited work has been done in joint extraction of both relations."
                    },
                    {
                        "id": 25,
                        "string": "Formulating the joint problem in the CCM framework is novel and thus the first contribution of this work."
                    },
                    {
                        "id": 26,
                        "string": "A key obstacle in jointly studying temporal and causal relations lies in the absence of jointly annotated data."
                    },
                    {
                        "id": 27,
                        "string": "The second contribution of this work is the development of such a jointly annotated dataset which we did by augmenting the Event-Causality dataset (Do et al., 2011) with dense temporal annotations."
                    },
                    {
                        "id": 28,
                        "string": "This dataset allows us to show statistically significant improvements on both relations via the proposed joint framework."
                    },
                    {
                        "id": 29,
                        "string": "This paper also presents an empirical result of improving the temporal extraction component."
                    },
                    {
                        "id": 30,
                        "string": "Specifically, we incorporate explicit time expressions present in the text and high-precision knowledge-based rules into the ILP objective."
                    },
                    {
                        "id": 31,
                        "string": "These sources of information have been successfully adopted by existing methods (Chambers et al., 2014; Mirza and Tonelli, 2016) , but were never used within a global ILP-based inference method."
                    },
                    {
                        "id": 32,
                        "string": "Results on TimeBank-Dense (Cassidy et al., 2014), a benchmark dataset with temporal relations only, show that these modifications can also be helpful within ILP-based methods."
                    },
                    {
                        "id": 33,
                        "string": "Related Work Temporal and causal relations can both be represented by directed acyclic graphs, where the nodes are events and the edges are labeled with either before, after, etc."
                    },
                    {
                        "id": 34,
                        "string": "(in temporal graphs), or causes and caused by (in causal graphs)."
                    },
                    {
                        "id": 35,
                        "string": "Existing work on temporal relation extraction was initiated by (Mani et al., 2006; Chambers et al., 2007; Bethard et al., 2007; Verhagen and Pustejovsky, 2008) , Ex 3: Global considerations are needed when making local decisions."
                    },
                    {
                        "id": 36,
                        "string": "The FAA on Friday (e5:announced) it will close 149 regional airport control towers because of forced spending cuts."
                    },
                    {
                        "id": 37,
                        "string": "Before Friday's (e6:announcement), it (e7:said) it would consider keeping a tower open if the airport convinces the agency it is in the \"national interest\" to do so."
                    },
                    {
                        "id": 38,
                        "string": "which formulated the problem as that of learning a classification model for determining the label of each edge locally (i.e., local methods)."
                    },
                    {
                        "id": 39,
                        "string": "A disadvantage of these early methods is that the resulting graph may break the symmetric and transitive constraints."
                    },
                    {
                        "id": 40,
                        "string": "There are conceptually two ways to enforce such graph constraints (i.e., global reasoning)."
                    },
                    {
                        "id": 41,
                        "string": "CAEVO (Chambers et al., 2014) grows the temporal graph in a multi-sieve manner, where predictions are added sieve-by-sieve."
                    },
                    {
                        "id": 42,
                        "string": "A graph closure operation had to be performed after each sieve to enforce constraints."
                    },
                    {
                        "id": 43,
                        "string": "This is solving the global inference problem greedily."
                    },
                    {
                        "id": 44,
                        "string": "A second way is to perform exact inference via ILP and the symmetry and transitivity requirements can be enforced as ILP constraints (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012; Ning et al., 2017) ."
                    },
                    {
                        "id": 45,
                        "string": "We adopt the ILP approach in the temporal component of this work for two reasons."
                    },
                    {
                        "id": 46,
                        "string": "First, as we show later, it is straightforward to build a joint framework with both temporal and causal relations as an extension of it."
                    },
                    {
                        "id": 47,
                        "string": "Second, the relation between a pair of events is often determined by the relations among other events."
                    },
                    {
                        "id": 48,
                        "string": "In Ex 3, if a system is unaware of (e5, e6)=simultaneously when trying to make a decision for (e5, e7), it is likely to predict that e5 is before e7 2 ; but, in fact, (e5, e7)=after given the existence of e6."
                    },
                    {
                        "id": 49,
                        "string": "Using global considerations is thus beneficial in this context not only for generating globally consistent temporal graphs, but also for making more reliable pairwise decisions."
                    },
                    {
                        "id": 50,
                        "string": "Prior work on causal relations in natural language text was relatively sparse."
                    },
                    {
                        "id": 51,
                        "string": "Many causal extraction work in other domains assumes the existence of ground truth timestamps (e.g., (Sun et al., 2007; Güler et al., 2016) ), but gold timestamps rarely exist in natural language text."
                    },
                    {
                        "id": 52,
                        "string": "In NLP, people have focused on causal relation identification using lexical features or discourse relations."
                    },
                    {
                        "id": 53,
                        "string": "For example, based on a set of explicit causal discourse markers (e.g., \"because\", \"due to\", and \"as a result\"), Hidey and McKeown (2016) built parallel Wikipedia articles and constructed an open set of implicit markers called AltLex."
                    },
                    {
                        "id": 54,
                        "string": "A classifier was then applied to identify causality."
                    },
                    {
                        "id": 55,
                        "string": "Dunietz et al."
                    },
                    {
                        "id": 56,
                        "string": "(2017) used the concept of construction grammar to tag causally related clauses or phrases."
                    },
                    {
                        "id": 57,
                        "string": "Do et al."
                    },
                    {
                        "id": 58,
                        "string": "(2011) considered global statistics over a large corpora, the cause-effect association (CEA) scores, and combined it with discourse relations using ILP to identify causal relations."
                    },
                    {
                        "id": 59,
                        "string": "These work only focused on the causality task and did not address the temporal aspect."
                    },
                    {
                        "id": 60,
                        "string": "However, as illustrated by Examples 1-2, temporal and causal relations are closely related, as assumed by many existing works (Bethard and Martin, 2008; Rink et al., 2010) ."
                    },
                    {
                        "id": 61,
                        "string": "Here we argue that being able to capture both aspects in a joint framework provides a more complete understanding of events in natural language documents."
                    },
                    {
                        "id": 62,
                        "string": "Researchers have started paying attention to this direction recently."
                    },
                    {
                        "id": 63,
                        "string": "For example, Mostafazadeh et al."
                    },
                    {
                        "id": 64,
                        "string": "(2016b) proposed an annotation framework, CaTeRs, which captured both temporal and causal aspects of event relations in common sense stories."
                    },
                    {
                        "id": 65,
                        "string": "CATENA (Mirza and Tonelli, 2016) extended the multi-sieve framework of CAEVO to extracting both temporal and causal relations and exploited their interaction through post-editing temporal relations based on causal predictions."
                    },
                    {
                        "id": 66,
                        "string": "In this paper, we push this idea forward and tackle the problem in a joint and more principled way, as shown next."
                    },
                    {
                        "id": 67,
                        "string": "Temporal and Causal Reasoning In this section, we explain the proposed joint inference framework, Temporal and Causal Reasoning (TCR)."
                    },
                    {
                        "id": 68,
                        "string": "To start with, we focus on introducing the temporal component, and clarify how to design the transitivity constraints and how to enforce other readily available prior knowledge to improve its performance."
                    },
                    {
                        "id": 69,
                        "string": "With this temporal component already explained, we further incorporate causal relations and complete the TCR joint inference framework."
                    },
                    {
                        "id": 70,
                        "string": "Finally, we transform the joint problem into an ILP so that it can be solved using offthe-shelf packages."
                    },
                    {
                        "id": 71,
                        "string": "Temporal Component Let R T be the label set of temporal relations and E and T be the set of all events and the set of all time expressions (a.k.a."
                    },
                    {
                        "id": 72,
                        "string": "timex) in a document."
                    },
                    {
                        "id": 73,
                        "string": "For notation convenience, we use EE to represent the set of all event-event pairs; then ET and T T have obvious definitions."
                    },
                    {
                        "id": 74,
                        "string": "Given a pair in EE or ET , assume for now that we have corresponding classifiers producing confidence scores for every temporal relation in R T ."
                    },
                    {
                        "id": 75,
                        "string": "Let them be s ee (·) and s et (·), respectively."
                    },
                    {
                        "id": 76,
                        "string": "Then the inference formulation for all the temporal relations within this document is: Y = arg max Y ∈Y ∑ i∈EE s ee {i → Yi} + ∑ j∈ET s et {j → Yj} (1) where Y k ∈ R T is the temporal label of pair k ∈ MM (Let M = E ∪ T be the set of all tem- poral nodes), \"k → Y k \" represents the case where the label of pair k is predicted to be Y k , Y is a vec- torization of all these Y k 's in one document, and Y is the constrained space that Y lies in."
                    },
                    {
                        "id": 77,
                        "string": "We do not include the scores for T T because the temporal relationship between timexes can be trivially determined using the normalized dates of these timexes, as was done in (Do et al., 2012; Chambers et al., 2014; Mirza and Tonelli, 2016) ."
                    },
                    {
                        "id": 78,
                        "string": "We impose these relations via equality constraints denoted as Y 0 ."
                    },
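To make the $\mathcal{Y}_0$ equality constraints concrete, here is a minimal sketch of fixing T-T relations from normalized timex values (illustrative only; tt_relation is a hypothetical helper, and the paper only states that normalized dates determine these relations):

```python
from datetime import date

# Sketch: the relation between two time expressions is read off their
# normalized dates, then imposed as an equality constraint (Y_0).
def tt_relation(t1: date, t2: date) -> str:
    if t1 < t2:
        return "b"  # before
    if t1 > t2:
        return "a"  # after
    return "s"      # simultaneously

print(tt_relation(date(2013, 3, 22), date(2013, 3, 24)))  # 'b'
```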
                    {
                        "id": 79,
                        "string": "In addition, we add symmetry and transitivity constraints dictated by the nature of time (denoted by Y 1 ), and other prior knowledge derived from linguistic rules (denoted by Y 2 ), which will be explained subsequently."
                    },
                    {
                        "id": 80,
                        "string": "Finally, we set Y = ∩ 2 i=0 Y i in Eq."
                    },
                    {
                        "id": 81,
                        "string": "(1)."
                    },
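As a minimal sketch of the objective in Eq. (1) (our own illustration; the score dictionaries and pair names are hypothetical): without the constraint set $\mathcal{Y}$, the argmax decomposes into independent per-pair decisions, which is exactly the local baseline the constraints are meant to improve upon.

```python
R_T = ["b", "a", "i", "ii", "s", "v"]  # before, after, includes, is included, simultaneously, vague

def unconstrained_argmax(s_ee, s_et):
    """Best label per pair; the constraints Y_0, Y_1, Y_2 are added later via ILP."""
    Y = {}
    for pair, scores in {**s_ee, **s_et}.items():
        Y[pair] = max(R_T, key=lambda r: scores[r])
    return Y

# toy scores for one event-event pair and one event-timex pair
s_ee = {("e5", "e7"): {"b": 0.6, "a": 0.2, "i": 0.05, "ii": 0.05, "s": 0.05, "v": 0.05}}
s_et = {("e5", "t1"): {"b": 0.1, "a": 0.7, "i": 0.05, "ii": 0.05, "s": 0.05, "v": 0.05}}
print(unconstrained_argmax(s_ee, s_et))  # {('e5', 'e7'): 'b', ('e5', 't1'): 'a'}
```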
                    {
                        "id": 82,
                        "string": "Transitivity Constraints."
                    },
                    {
                        "id": 83,
                        "string": "Let the dimension of Y be n. Then a standard way to construct the symmetry and transitivity constraints is shown in (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012; Ning et al., 2017 ) Y 1 = { Y ∈ R n T |∀m 1,2,3 ∈ M, Y (m1,m2) =Ȳ (m2,m1) , Y (m1,m3) ∈ Trans(Y (m1,m2) , Y (m2,m3) ) } where the bar sign is used to represent the reverse relation hereafter, and Trans(r 1 , r 2 ) is a set comprised of all the temporal relations from R T that do not conflict with r 1 and r 2 ."
                    },
                    {
                        "id": 84,
                        "string": "The construction of Trans(r 1 , r 2 ) necessitates a clearer definition of R T , the importance of which is often overlooked by existing methods."
                    },
                    {
                        "id": 85,
                        "string": "Existing approaches all followed the interval representation of events (Allen, 1984) , which yields 13 temporal relations (denoted byR T here) as shown in ."
                    },
                    {
                        "id": 86,
                        "string": "\"x\" means that the label is ignored."
                    },
                    {
                        "id": 87,
                        "string": "Brackets represent time intervals along the time axis."
                    },
                    {
                        "id": 88,
                        "string": "Scheme 2 is adopted consistently in this work."
                    },
                    {
                        "id": 89,
                        "string": "ample, {before, after, includes, is included, simultaneously, vague}."
                    },
                    {
                        "id": 90,
                        "string": "For notation convenience, we denote them R T = {b, a, i, ii, s, v}."
                    },
                    {
                        "id": 91,
                        "string": "Using a reduced set is more convenient in data annotation and leads to better performance in practice."
                    },
                    {
                        "id": 92,
                        "string": "However, there has been limited discussion in the literature on how to interpret the reduced relation types."
                    },
                    {
                        "id": 93,
                        "string": "For example, is the \"before\" in R T exactly the same as the \"before\" in the original set (R T ) (as shown on the left-hand-side of Fig."
                    },
                    {
                        "id": 94,
                        "string": "1 ), or is it a combination of multiple relations inR T (the right-hand-side of Fig."
                    },
                    {
                        "id": 95,
                        "string": "1) ?"
                    },
                    {
                        "id": 96,
                        "string": "We compare two reduction schemes in Fig."
                    },
                    {
                        "id": 97,
                        "string": "1 , where scheme 1 ignores low frequency labels directly and scheme 2 absorbs low frequency ones into their temporally closest labels."
                    },
                    {
                        "id": 98,
                        "string": "The two schemes barely have differences when a system only looks at a single pair of mentions at a time (this might explain the lack of discussion over this issue in the literature), but they lead to different Trans(r 1 , r 2 ) sets and this difference can be magnified when the problem is solved jointly and when the label distribution changes across domains."
                    },
                    {
                        "id": 99,
                        "string": "To completely cover the 13 relations, we adopt scheme 2 in this work."
                    },
                    {
                        "id": 100,
                        "string": "The resulting transitivity relations are shown in Table 1 ."
                    },
                    {
                        "id": 101,
                        "string": "The top part of Table 1 is a compact representation of three generic rules; for instance, Line 1 means that the labels themselves are transitive."
                    },
                    {
                        "id": 102,
                        "string": "Note that during human annotation, if an annotator looks at a pair of events and decides that multiple well-defined relations can exist, he/she labels it vague; also, when aggregating the labels from multiple annotators, a label will be changed to vague if the annotators disagree with each other."
                    },
                    {
                        "id": 103,
                        "string": "In either case, vague is chosen to be the label when a single well-defined relation cannot be uniquely determined by the contextual information."
                    },
                    {
                        "id": 104,
                        "string": "This explains why a vague relation (v) is always added in Table 1 if more than one label in Trans(r 1 , r 2 ) is possible."
                    },
                    {
                        "id": 105,
                        "string": "As for Lines 6, 9-11 in Table 1 (where vague appears in Column r 2 ), Column Trans(r 1 ,r 2 ) was designed in such a way that r 2 cannot be uniquely determined through r 1 and Trans(r 1 ,r 2 )."
                    },
                    {
                        "id": 106,
                        "string": "For instance, r 1 is after on Line 9, if we further put before into Trans(r 1 ,r 2 ), then r 2 would be uniquely determined to be before, conflicting with r 2 being vague, so before should not be in Trans(r 1 ,r 2 )."
                    },
                    {
                        "id": 107,
                        "string": "Enforcing Linguistic Rules."
                    },
                    {
                        "id": 108,
                        "string": "Besides the transitivity constraints represented by Y 1 above, we also propose to enforce prior knowledge to further constrain the search space for Y ."
                    },
                    {
                        "id": 109,
                        "string": "Specifically, linguistic insight has resulted in rules for predicting the temporal relations with special syntactic or semantic patterns, as was done in CAEVO (a state-of-the-art method)."
                    },
                    {
                        "id": 110,
                        "string": "Since these rule predictions often have high-precision, it is worthwhile incorporating them in global reasoning methods as well."
                    },
                    {
                        "id": 111,
                        "string": "No."
                    },
                    {
                        "id": 112,
                        "string": "r1 r2 Trans(r1, r2) 1 r r r 2 r s r 3 r1 r2 Trans(r2,r1) 4 b i b, i, v 5 b ii b, ii, v 6 b v b, i, ii, v 7 a i a, i, v 8 a ii a, ii, v 9 a v a, i, ii ,v 10 i v b, a, i, v 11 ii v b, a, ii, v In the CCM framework, these rules can be represented as hard constraints in the search space for Y ."
                    },
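Table 1 can be encoded as a small lookup, with Line 3 deriving unlisted combinations. The following is a minimal sketch (TRANS, REVERSE, and trans are our illustrative names, not released code); treating combinations not covered by the table as unconstrained is our reading, not an explicit statement in the text.

```python
# Sketch of Table 1. Lines 1-2 are expanded explicitly; Line 3 derives
# unlisted combinations by reversal; anything still unlisted is treated
# as unconstrained (our assumption).
R_T = ["b", "a", "i", "ii", "s", "v"]
REVERSE = {"b": "a", "a": "b", "i": "ii", "ii": "i", "s": "s", "v": "v"}
TRANS = {
    # Line 1: every label is transitive with itself
    **{(r, r): {r} for r in R_T},
    # Line 2: simultaneously preserves the other label
    **{(r, "s"): {r} for r in R_T},
    ("b", "i"): {"b", "i", "v"},        # Line 4
    ("b", "ii"): {"b", "ii", "v"},      # Line 5
    ("b", "v"): {"b", "i", "ii", "v"},  # Line 6
    ("a", "i"): {"a", "i", "v"},        # Line 7
    ("a", "ii"): {"a", "ii", "v"},      # Line 8
    ("a", "v"): {"a", "i", "ii", "v"},  # Line 9
    ("i", "v"): {"b", "a", "i", "v"},   # Line 10
    ("ii", "v"): {"b", "a", "ii", "v"}, # Line 11
}

def trans(r1, r2):
    """Trans(r1, r2); Line 3: Trans(rbar1, rbar2) = reverse of Trans(r2, r1)."""
    if (r1, r2) in TRANS:
        return TRANS[(r1, r2)]
    derived = (REVERSE[r2], REVERSE[r1])
    if derived in TRANS:
        return {REVERSE[r] for r in TRANS[derived]}
    return set(R_T)  # no constraint for combinations not covered by the table

print(trans("i", "b"))  # {'b', 'i', 'v'}, derived via Line 3 from Line 8
```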
                    {
                        "id": 113,
                        "string": "Specifically, Y2 = { Yj = rule(j), ∀j ∈ J (rule) } , (2) where J (rule) ⊆ MM is the set of pairs that can be determined by linguistic rules, and rule(j) ∈ R T is the corresponding decision for pair j according to these rules."
                    },
                    {
                        "id": 114,
                        "string": "In this work, we used the same set of rules designed by CAEVO for fair comparison."
                    },
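A minimal sketch of Eq. (2), assuming a CAEVO-style rule component whose outputs are pinned as hard decisions (J_rule and apply_rule_constraints are hypothetical names):

```python
# J_rule maps rule-determined pairs to their labels (Eq. 2); both the pair
# and the label below are toy values standing in for CAEVO-style rule output.
J_rule = {("e6", "e7"): "b"}

def apply_rule_constraints(Y, J_rule):
    """Pin Y_j = rule(j) for every pair j matched by a linguistic rule."""
    pinned = dict(Y)
    pinned.update(J_rule)
    return pinned
```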
                    {
                        "id": 115,
                        "string": "Full Model with Causal Relations Now we have presented the joint inference framework for temporal relations in Eq."
                    },
                    {
                        "id": 116,
                        "string": "(1)."
                    },
                    {
                        "id": 117,
                        "string": "It is easier to explain our complete TCR framework on top of it."
                    },
                    {
                        "id": 118,
                        "string": "Let W be the vectorization of all causal relations and add the scores from the scoring function for causality s c (·) to Eq."
                    },
                    {
                        "id": 119,
                        "string": "(1)."
                    },
                    {
                        "id": 120,
                        "string": "Specifically, the full inference formulation is now: Y ,Ŵ = arg max Y ∈Y,W ∈W Y ∑ i∈EE s ee {i → Y i } (3) + ∑ j∈ET s et {j → Y j } + ∑ k∈EE s c {k → W k } where W Y is the search space for W ."
                    },
                    {
                        "id": 121,
                        "string": "W Y depends on the temporal labels Y in the sense that W Y = {W ∈ R m C |∀i, j ∈ E, if W (i,j) = c, (4) then W (j,i) =c, and Y (i,j) = b} where m is the dimension of W (i.e., the total number of causal pairs), R C = {c,c, null} is the label set for causal relations (i.e., \"causes\", \"caused by\", and \"no relation\"), and W (i,j) is the causal label for pair (i, j)."
                    },
                    {
                        "id": 122,
                        "string": "The constraint represented by W Y means that if a pair of events i and j are labeled to be \"causes\", then the causal relation between j and i must be \"caused by\", and the temporal relation between i and j must be \"before\"."
                    },
                    {
                        "id": 123,
                        "string": "Scoring Functions In the above, we have built the joint framework on top of scoring functions s ee (·), s et (·) and s c (·)."
                    },
                    {
                        "id": 124,
                        "string": "To get s ee (·) and s et (·), we trained classifiers using the averaged perceptron algorithm (Freund and Schapire, 1998) and the same set of features used in (Do et al., 2012; Ning et al., 2017) , and then used the soft-max scores in those scoring functions."
                    },
                    {
                        "id": 125,
                        "string": "For example, that means s ee {i → r} = w T r ϕ(i) ∑ r ′ ∈RT w T r ′ ϕ(i) , i ∈ EE, r ∈ R T , where {w r } is the learned weight vector for relation r ∈ R T and ϕ(i) is the feature vector for pair i ∈ EE."
                    },
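A sketch of this soft-max scoring, assuming trained weight vectors and a feature map (names and toy values are illustrative):

```python
import math

def softmax_scores(w, phi):
    """w: {relation: weight vector}; phi: feature vector of one pair.
    Returns the soft-max score for each relation."""
    raw = {r: math.exp(sum(wi * fi for wi, fi in zip(wr, phi))) for r, wr in w.items()}
    z = sum(raw.values())
    return {r: v / z for r, v in raw.items()}

# toy weights for two of the relations and a two-dimensional feature vector
w = {"b": [0.5, -0.2], "a": [-0.3, 0.4]}
print(softmax_scores(w, [1.0, 2.0]))
```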
                    {
                        "id": 126,
                        "string": "Given a pair of ordered events, we need s c (·) to estimate the scores of them being \"causes\" or \"caused by\"."
                    },
                    {
                        "id": 127,
                        "string": "Since this scoring function has the same nature as s ee (·), we can reuse the features from s ee (·) and learn an averaged perceptron for s c (·)."
                    },
                    {
                        "id": 128,
                        "string": "In addition to these existing features, we also use prior statistics retrieved using our temporal system from a large corpus 3 , so as to know probabilistically which event happens before another event."
                    },
                    {
                        "id": 129,
                        "string": "For example, in Example 1, we have a pair of events, e1:died and e2:exploded."
                    },
                    {
                        "id": 130,
                        "string": "The prior knowledge we retrieved from that large corpus is that die happens before explode with probability 15% and happens after explode with probability 85%."
                    },
                    {
                        "id": 131,
                        "string": "We think this prior distribution is correlated with causal directionality, so it was also added as features when training s c (·)."
                    },
                    {
                        "id": 132,
                        "string": "Note that the scoring functions here are implementation choice."
                    },
                    {
                        "id": 133,
                        "string": "The TCR joint framework is fully extensible to other scoring functions."
                    },
                    {
                        "id": 134,
                        "string": "Convert the Joint Inference into an ILP Conveniently, the joint inference formulation in Eq."
                    },
                    {
                        "id": 135,
                        "string": "(3) can be rewritten into an ILP and solved using off-the-shelf optimization packages, e.g., (Gurobi Optimization, Inc., 2012) ."
                    },
                    {
                        "id": 136,
                        "string": "First, we define indicator variables y r i = I{Y i = r}, where I{·} is the indicator function, ∀i ∈ MM, ∀r ∈ R T ."
                    },
                    {
                        "id": 137,
                        "string": "Then let p r i = s ee {i → r} if i ∈ EE, or p r i = s et {i → r} if i ∈ ET ; similarly, let w r j = I{W i = r} be the indicator variables for W j and q r j be the score for W j = r ∈ R C ."
                    },
                    {
                        "id": 138,
                        "string": "Therefore, without constraints Y and W Y for now, Eq."
                    },
                    {
                        "id": 139,
                        "string": "(3) can be written as: y,ŵ = arg max ∑ i∈EE∪ET ∑ r∈R T p r i y r i + ∑ j∈EE ∑ r∈R C q r j w r j s.t."
                    },
                    {
                        "id": 140,
                        "string": "y r i , w r j ∈ {0, 1}, ∑ r∈R T y r i = ∑ r∈R C w r j = 1 The prior knowledge represented as Y and W Y can be conveniently converted into constraints for this optimization problem."
                    },
                    {
                        "id": 141,
                        "string": "Specifically, Y 1 has two components, symmetry and transitivity: Y1 : ∀i, j, k ∈ M, y r i,j = yr j,i , (symmetry) y r 1 i,j + y r 2 j,k − ∑ r 3 ∈Trans(r 1 ,r 2 ) y r 3 i,k ≤ 1 (transitivity) wherer is the reverse relation of r (i.e.,b = a, i = ii,s = s, andv = v), and Trans(r 1 , r 2 ) is defined in Table 1 ."
                    },
                    {
                        "id": 142,
                        "string": "As for the transitivity constraints, if both y r 1 i,j and y r 2 j,k are 1, then the constraint requires at least one of y r 3 i,k , r 3 ∈ Trans(r 1 , r 2 ) to be 1, which means the relation between i and k has to be chosen from Trans(r 1 , r 2 ), which is exactly what Y 1 is intended to do."
                    },
                    {
                        "id": 143,
                        "string": "The rules in Y 2 is written as Y 2 : y r j = I {rule(j)=r} , ∀j ∈ J (rule) (linguistic rules) where rule(j) and J (rule) have been defined in Eq."
                    },
                    {
                        "id": 144,
                        "string": "(2)."
                    },
                    {
                        "id": 145,
                        "string": "Converting the T T constraints, i.e., Y 0 , into constraints is as straightforward as Y 2 , so we omit it due to limited space."
                    },
                    {
                        "id": 146,
                        "string": "Last, converting the constraints W Y defined in Eq."
                    },
                    {
                        "id": 147,
                        "string": "(4) can be done as following: W Y : w c i,j = wc j,i ≤ y b i,j , ∀i, j ∈ E. The equality part, w c i,j = wc j,i represents the symmetry constraint of causal relations; the inequality part, w c i,j ≤ y b i,j represents that if event i causes event j, then i must be before j."
                    },
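Putting Sec. 3.4 together, here is a compact sketch of the joint ILP using the open-source PuLP modeling library rather than Gurobi (the paper only requires an off-the-shelf package; solve_tcr, the score dictionaries p and q, and the trans lookup are our assumptions for illustration):

```python
import itertools
import pulp  # open-source ILP modeling library; the paper used Gurobi

R_T = ["b", "a", "i", "ii", "s", "v"]
R_C = ["c", "cbar", "null"]
REVERSE = {"b": "a", "a": "b", "i": "ii", "ii": "i", "s": "s", "v": "v"}

def solve_tcr(events, p, q, trans):
    """Sketch of the joint ILP. p[(i,j)][r]: temporal scores over R_T;
    q[(i,j)][r]: causal scores over R_C; trans(r1, r2): Table 1 lookup.
    Scores are assumed to exist for all ordered event pairs."""
    prob = pulp.LpProblem("TCR", pulp.LpMaximize)
    pairs = list(itertools.permutations(events, 2))
    y = {(k, r): pulp.LpVariable(f"y_{k[0]}_{k[1]}_{r}", cat="Binary")
         for k in pairs for r in R_T}
    w = {(k, r): pulp.LpVariable(f"w_{k[0]}_{k[1]}_{r}", cat="Binary")
         for k in pairs for r in R_C}
    # objective: temporal scores plus causal scores
    prob += (pulp.lpSum(p[k][r] * y[(k, r)] for k in pairs for r in R_T)
             + pulp.lpSum(q[k][r] * w[(k, r)] for k in pairs for r in R_C))
    for k in pairs:  # each pair takes exactly one temporal and one causal label
        prob += pulp.lpSum(y[(k, r)] for r in R_T) == 1
        prob += pulp.lpSum(w[(k, r)] for r in R_C) == 1
    for i, j in pairs:
        for r in R_T:  # temporal symmetry
            prob += y[((i, j), r)] == y[((j, i), REVERSE[r])]
        # causal symmetry and the cause-before-effect coupling (W_Y)
        prob += w[((i, j), "c")] == w[((j, i), "cbar")]
        prob += w[((i, j), "c")] <= y[((i, j), "b")]
    for i, j, k in itertools.permutations(events, 3):  # transitivity (Y_1)
        for r1 in R_T:
            for r2 in R_T:
                prob += (y[((i, j), r1)] + y[((j, k), r2)]
                         - pulp.lpSum(y[((i, k), r3)] for r3 in trans(r1, r2)) <= 1)
    prob.solve()
    temporal = {k: r for k in pairs for r in R_T if y[(k, r)].value() > 0.5}
    causal = {k: r for k in pairs for r in R_C if w[(k, r)].value() > 0.5}
    return temporal, causal
```

The $\mathcal{Y}_0$ and $\mathcal{Y}_2$ constraints can be added analogously by pinning the corresponding binary variables to 1.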
                    {
                        "id": 148,
                        "string": "Experiments In this section, we first show on TimeBank-Dense (TB-Dense) (Cassidy et al., 2014) , that the proposed framework improves temporal relation identification."
                    },
                    {
                        "id": 149,
                        "string": "We then explain how our new dataset with both temporal and causal relations was collected, based on which the proposed method improves for both relations."
                    },
                    {
                        "id": 150,
                        "string": "Temporal Performance on TB-Dense Multiple datasets with temporal annotations are available thanks to the TempEval (TE) workshops (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) ."
                    },
                    {
                        "id": 151,
                        "string": "The dataset we used here to demonstrate our improved temporal component was TB-Dense, which was annotated on top of 36 documents out of the classic TimeBank dataset (Pustejovsky et al., 2003) ."
                    },
                    {
                        "id": 152,
                        "string": "The main purpose of TB-Dense was to alleviate the known issue of sparse annotations in the evaluation dataset provided with TE3 (Uz-Zaman et al., 2013) , as pointed out in many previous work (Chambers, 2013; Cassidy et al., 2014; Chambers et al., 2014; Ning et al., 2017) ."
                    },
                    {
                        "id": 153,
                        "string": "Annotators of TB-Dense were forced to look at each pair of events or timexes within the same sentence or contiguous sentences, so that much fewer links were missed."
                    },
                    {
                        "id": 154,
                        "string": "Since causal link annotation is not available on TB-Dense, we only show our improvement in terms of temporal performance on Table 2 : Ablation study of the proposed system in terms of the standard temporal awareness metric."
                    },
                    {
                        "id": 155,
                        "string": "The baseline system is to make inference locally for each event pair without looking at the decisions from others."
                    },
                    {
                        "id": 156,
                        "string": "The \"+\" signs on lines 2-5 refer to adding a new source of information on top of its preceding system, with which the inference has to be global and done via ILP."
                    },
                    {
                        "id": 157,
                        "string": "All systems are significantly different to its preceding one with p<0.05 (McNemar's test)."
                    },
                    {
                        "id": 158,
                        "string": "TB-Dense."
                    },
                    {
                        "id": 159,
                        "string": "The standard train/dev/test split of TB-Dense was used and parameters were tuned to optimize the F 1 performance on dev."
                    },
                    {
                        "id": 160,
                        "string": "Gold events and time expressions were also used as in existing systems."
                    },
                    {
                        "id": 161,
                        "string": "The contributions of each proposed information sources are analyzed in the ablation study shown in Table 2 , where we can see the F 1 score was improved step-by-step as new sources of information were added."
                    },
                    {
                        "id": 162,
                        "string": "Recall that Y 1 represents transitivity constraints, ET represents taking eventtimex pairs into consideration, and Y 2 represents rules from CAEVO (Chambers et al., 2014) ."
                    },
                    {
                        "id": 163,
                        "string": "System 1 is the baseline we are comparing to, which is a local method predicting temporal relations one at a time."
                    },
                    {
                        "id": 164,
                        "string": "System 2 only applied Y 1 via ILP on top of all EE pairs by removing the 2nd term in Eq."
                    },
                    {
                        "id": 165,
                        "string": "(1); for fair comparison with System 1, we added the same ET predictions from System 1."
                    },
                    {
                        "id": 166,
                        "string": "System 3 incorporated ET into the ILP and mainly contributed to an increase in precision (from 42.9 to 44.3); we think that there could be more gain if more time expressions existed in the testset."
                    },
                    {
                        "id": 167,
                        "string": "With the help of additional high-precision rules (Y 2 ), the temporal performance can further be improved, as shown by System 4."
                    },
                    {
                        "id": 168,
                        "string": "Finally, using the causal extraction obtained via (Do et al., 2011) in the joint framework, the proposed method achieved the best precision, recall, and F 1 scores in our ablation study (Systems 1-5)."
                    },
                    {
                        "id": 169,
                        "string": "According to the McNemar's test (Everitt, 1992; Dietterich, 1998) , all Systems 2-5 were significantly different to its preceding system with p<0.05."
                    },
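For reference, McNemar's test on paired system predictions can be run with statsmodels; the 2x2 contingency table below uses made-up counts purely for illustration:

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: System A correct / wrong; columns: System B correct / wrong.
table = [[60, 10],   # A correct & B correct | A correct & B wrong
         [25, 5]]    # A wrong & B correct   | A wrong & B wrong
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)
```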
                    {
                        "id": 170,
                        "string": "The second part of Table 2 compares several state-of-the-art systems on the same test set."
                    },
                    {
                        "id": 171,
                        "string": "ClearTK (Bethard, 2013) was the top performing system in TE3 in temporal relation extraction."
                    },
                    {
                        "id": 172,
                        "string": "Since it was designed for TE3 (not TB-Dense), it expectedly achieved a moderate recall on the test set of TB-Dense."
                    },
                    {
                        "id": 173,
                        "string": "CAEVO (Chambers et al., 2014) and Ning et al."
                    },
                    {
                        "id": 174,
                        "string": "(2017) were more recent methods and achieved better scores on TB-Dense."
                    },
                    {
                        "id": 175,
                        "string": "Compared with these state-of-the-art methods, the proposed joint system (System 5) achieved the best F 1 score with a major improvement in recall."
                    },
                    {
                        "id": 176,
                        "string": "We think the low precision compared to System 8 is due to the lack of structured learning, and the low precision compared to System 7 is propagated from the baseline (System 1), which was tuned to maximize its F 1 score."
                    },
                    {
                        "id": 177,
                        "string": "However, the effectiveness of the proposed information sources is already justified in Systems 1-5."
                    },
                    {
                        "id": 178,
                        "string": "Joint Performance on Our New Dataset Data Preparation TB-Dense only has temporal relation annotations, so in the evaluations above, we only evaluated our temporal performance."
                    },
                    {
                        "id": 179,
                        "string": "One existing dataset with both temporal and causal annotations available is the Causal-TimeBank dataset (Causal-TB) (Mirza and Tonelli, 2014) ."
                    },
                    {
                        "id": 180,
                        "string": "However, Causal-TB is sparse in temporal annotations and is even sparser in causal annotations: In Table 3 , we can see that with four times more documents, Causal-TB still has fewer temporal relations (denoted as T-Links therein), compared to TB-Dense; as for causal relations (C-Links), it has less than two causal relations in each document on average."
                    },
                    {
                        "id": 181,
                        "string": "Note that the T-Link sparsity of Causal-TB originates from TimeBank, which is known to have missing links (Cassidy et al., 2014; Ning et al., 2017) ."
                    },
                    {
                        "id": 182,
                        "string": "The C-Link sparsity was a design choice of Causal-TB in which C-Links were annotated based on only explicit causal markers (e.g., \"A happened because of B\")."
                    },
                    {
                        "id": 183,
                        "string": "Another dataset with parallel annotations is CaTeRs (Mostafazadeh et al., 2016b) , which was primarily designed for the Story Cloze Test (Mostafazadeh et al., 2016a) based on common (2014) and use this new dataset to showcase the proposed joint approach."
                    },
                    {
                        "id": 184,
                        "string": "The EventCausality dataset provides relatively dense causal annotations on 25 newswire articles collected from CNN in 2010."
                    },
                    {
                        "id": 185,
                        "string": "As shown in Table 3 , it has more than 20 C-Links annotated per document on average (10 times denser than Causal-TB)."
                    },
                    {
                        "id": 186,
                        "string": "However, one issue is that its notion for events is slightly different to that in the temporal relation extraction regime."
                    },
                    {
                        "id": 187,
                        "string": "To construct parallel annotations of both temporal and causal relations, we preprocessed all the articles in EventCausality using ClearTK to extract events and then manually removed some obvious errors in them."
                    },
                    {
                        "id": 188,
                        "string": "To annotate temporal relations among these events, we adopted the annotation scheme from TB-Dense given its success in mitigating the issue of missing annotations with the following modifications."
                    },
                    {
                        "id": 189,
                        "string": "First, we used a crowdsourcing platform, Crowd-Flower, to collect temporal relation annotations."
                    },
                    {
                        "id": 190,
                        "string": "For each decision of temporal relation, we asked 5 workers to annotate and chose the majority label as our final annotation."
                    },
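The majority-label aggregation is straightforward; a minimal sketch (illustrative only):

```python
from collections import Counter

def majority_label(worker_labels):
    """Aggregate one pair's annotations, e.g. 5 CrowdFlower judgments."""
    (label, _count), = Counter(worker_labels).most_common(1)
    return label

print(majority_label(["b", "b", "v", "b", "a"]))  # 'b'
```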
                    {
                        "id": 191,
                        "string": "Second, we discovered that comparisons involving ending points of events tend to be ambiguous and suffer from low inter-annotator agreement (IAA), so we asked the annotators to label relations based on the starting points of each event."
                    },
                    {
                        "id": 192,
                        "string": "This simplification does not change the nature of temporal relation extraction but leads to better annotation quality."
                    },
                    {
                        "id": 193,
                        "string": "For more details about this data collection scheme, please refer to (Ning et al., 2018b) for more details."
                    },
                    {
                        "id": 194,
                        "string": "Results Result on our new dataset jointly annotated with both temporal and causal relations is shown in Ta-  ble 4."
                    },
                    {
                        "id": 195,
                        "string": "We split the new dataset into 20 documents for training and 5 documents for testing."
                    },
                    {
                        "id": 196,
                        "string": "In the training phase, the training parameters were tuned via 5-fold cross validation on the training set."
                    },
                    {
                        "id": 197,
                        "string": "Table 4 demonstrates the improvement of the joint framework over individual components."
                    },
                    {
                        "id": 198,
                        "string": "The \"temporal only\" baseline is the improved temporal extraction system for which the joint inference with causal links has NOT been applied."
                    },
                    {
                        "id": 199,
                        "string": "The \"causal only\" baseline is to use s c (·) alone for the prediction of each pair."
                    },
                    {
                        "id": 200,
                        "string": "That is, for a pair i, if s c {i → causes} > s c {i → caused by}, we then assign \"causes\" to pair i; otherwise, we assign \"caused by\" to pair i."
                    },
                    {
                        "id": 201,
                        "string": "Note that the \"causal accuracy\" column in Table 4 was evaluated only on gold causal pairs."
                    },
                    {
                        "id": 202,
                        "string": "In the proposed joint system, the temporal and causal scores were added up for all event pairs."
                    },
                    {
                        "id": 203,
                        "string": "The temporal performance got strictly better in precision, recall, and F 1 , and the causal performance also got improved by a large margin from 70.5% to 77.3%, indicating that temporal signals and causal signals are helpful to each other."
                    },
                    {
                        "id": 204,
                        "string": "According to the McNemar's test, both improvements are significant with p<0.05."
                    },
                    {
                        "id": 205,
                        "string": "The second part of Table 4 shows that if gold relations were used, how well each component would possibly perform."
                    },
                    {
                        "id": 206,
                        "string": "Technically, these gold temporal/causal relations were enforced via adding extra constraints to ILP in Eq."
                    },
                    {
                        "id": 207,
                        "string": "(3) (imagine these gold relations as a special rule, and convert them into constraints like what we did in Eq."
                    },
                    {
                        "id": 208,
                        "string": "(2))."
                    },
                    {
                        "id": 209,
                        "string": "When using gold temporal relations, causal accuracy went up to 91.9%."
                    },
                    {
                        "id": 210,
                        "string": "That is, 91.9% of the C-Links satisfied the assumption that the cause is temporally before the effect."
                    },
                    {
                        "id": 211,
                        "string": "First, this number is much higher than the 77.3% on line 3, so there is still room for improvement."
                    },
                    {
                        "id": 212,
                        "string": "Second, it means in this dataset, there were 8.1% of the C-Links in which the cause is temporally after its effect."
                    },
                    {
                        "id": 213,
                        "string": "We will discuss this seemingly counter-intuitive phenomenon in the Discussion section."
                    },
                    {
                        "id": 214,
                        "string": "When gold causal relations were used (line 5), the temporal performance was slightly better than line 3 in terms of both precision and recall."
                    },
                    {
                        "id": 215,
                        "string": "The small difference means that the temporal performance on line 3 was already very close to its best."
                    },
                    {
                        "id": 216,
                        "string": "Compared with the first line, we can see that gold causal relations led to approximately 2% improvement in precision and recall in temporal performance, which is a reasonable margin given the fact that C-Links are often much sparser than T-Links in practice."
                    },
                    {
                        "id": 217,
                        "string": "Note that the temporal performance in Table 4 is consistently better than those in Table 2 because of the higher IAA in the new dataset."
                    },
                    {
                        "id": 218,
                        "string": "However, the improvement brought by joint reasoning with causal relations is the same, which further confirms the capability of the proposed approach."
                    },
                    {
                        "id": 219,
                        "string": "Discussion We have consistently observed that on the TB-Dense dataset, if automatically tuned to optimize its F 1 score, a system is very likely to have low precisions and high recall (e.g., Table 2 )."
                    },
                    {
                        "id": 220,
                        "string": "We notice that our system often predicts non-vague relations when the TB-Dense gold is vague, resulting in lower precision."
                    },
                    {
                        "id": 221,
                        "string": "However, on our new dataset, the same algorithm can achieve a more balanced precision and recall."
                    },
                    {
                        "id": 222,
                        "string": "This is an interesting phenomenon, possibly due to the annotation scheme difference which needs further investigation."
                    },
                    {
                        "id": 223,
                        "string": "The temporal improvements in both Table 2 and Table 4 are relatively small (although statistically significant)."
                    },
                    {
                        "id": 224,
                        "string": "This is actually not surprising because C-Links are much fewer than T-Links in newswires which focus more on the temporal development of stories."
                    },
                    {
                        "id": 225,
                        "string": "As a result, many T-Links are not accompanied with C-Links and the improvements are diluted."
                    },
                    {
                        "id": 226,
                        "string": "But for those event pairs having both T-Links and C-Links, the proposed joint framework is an important scheme to synthesize both signals and improve both."
                    },
                    {
                        "id": 227,
                        "string": "The comparison between Line 5 and Line 3 in Table 4 is a showcase of the effectiveness."
                    },
                    {
                        "id": 228,
                        "string": "We think that a deeper reason for the improvement achieved via a joint framework is that causality often encodes humans prior knowledge as global information (e.g., \"death\" is caused by \"explosion\" rather than causes \"explosion\", regardless of the local context), while temporality often focuses more on the local context."
                    },
                    {
                        "id": 229,
                        "string": "From this standpoint, temporal information and causal information are complementary and helpful to each other."
                    },
                    {
                        "id": 230,
                        "string": "When doing error analysis for the fourth line of Table 4 , we noticed some examples that break the commonly accepted temporal precedence assumption."
                    },
                    {
                        "id": 231,
                        "string": "It turns out that they are not annotation mistakes: In Example 4, e8:finished is obviously before e9:closed, but e9 is a cause of e8 since if the market did not close, the shares would not finish."
                    },
                    {
                        "id": 232,
                        "string": "In the other sentence of Example 4, she prepares before hosting her show, but e11:host is the cause of e10:prepares since if not for hosting, no preparation would be needed."
                    },
                    {
                        "id": 233,
                        "string": "In both cases, the cause is temporally after the effect because people are inclined to make projections for the future and change their behaviors before the future comes."
                    },
                    {
                        "id": 234,
                        "string": "The proposed system is currently unable to handle these examples and we believe that a better definition of what can be considered as events is needed, as part of further investigating how causality is expressed in natural language."
                    },
                    {
                        "id": 235,
                        "string": "Finally, the constraints connecting causal relations to temporal relations are designed in this paper as \"if A is the cause of B, then A must be before B\"."
                    },
                    {
                        "id": 236,
                        "string": "People have suggested other possibilities that involve the includes and simultaneously relations."
                    },
                    {
                        "id": 237,
                        "string": "While these other relations are simply different interpretations of temporal precedence (and can be easily incorporated in our framework), we find that they rarely happen in our dataset."
                    },
                    {
                        "id": 238,
                        "string": "Conclusion We presented a novel joint framework, Temporal and Causal Reasoning (TCR), using CCMs and ILP to the extraction problem of temporal and causal relations between events."
                    },
                    {
                        "id": 239,
                        "string": "To show the benefit of TCR, we have developed a new dataset that jointly annotates temporal and causal annotations, and then exhibited that TCR can improve both temporal and causal components."
                    },
                    {
                        "id": 240,
                        "string": "We hope that this notable improvement can foster more interest in jointly studying multiple aspects of events (e.g., event sequencing, coreference, parent-child relations) towards the goal of understanding events in natural language."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 32
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 33,
                        "end": 66
                    },
                    {
                        "section": "Temporal and Causal Reasoning",
                        "n": "3",
                        "start": 67,
                        "end": 70
                    },
                    {
                        "section": "Temporal Component",
                        "n": "3.1",
                        "start": 71,
                        "end": 114
                    },
                    {
                        "section": "Full Model with Causal Relations",
                        "n": "3.2",
                        "start": 115,
                        "end": 122
                    },
                    {
                        "section": "Scoring Functions",
                        "n": "3.3",
                        "start": 123,
                        "end": 133
                    },
                    {
                        "section": "Convert the Joint Inference into an ILP",
                        "n": "3.4",
                        "start": 134,
                        "end": 147
                    },
                    {
                        "section": "Experiments",
                        "n": "4",
                        "start": 148,
                        "end": 149
                    },
                    {
                        "section": "Temporal Performance on TB-Dense",
                        "n": "4.1",
                        "start": 150,
                        "end": 177
                    },
                    {
                        "section": "Data Preparation",
                        "n": "4.2.1",
                        "start": 178,
                        "end": 193
                    },
                    {
                        "section": "Results",
                        "n": "4.2.2",
                        "start": 194,
                        "end": 218
                    },
                    {
                        "section": "Discussion",
                        "n": "5",
                        "start": 219,
                        "end": 236
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 237,
                        "end": 240
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1010-Table4-1.png",
                        "caption": "Table 4: Comparison between the proposed method and existing ones, in terms of both temporal and causal performances. See Sec. 4.2.1 for description of our new dataset. Per the McNemar’s test, the joint system is significantly better than both baselines with p<0.05. Lines 4-5 provide the best possible performance the joint system could achieve if gold temporal/causal relations were given.",
                        "page": 7,
                        "bbox": {
                            "x1": 77.75999999999999,
                            "x2": 284.15999999999997,
                            "y1": 62.879999999999995,
                            "y2": 149.28
                        }
                    },
                    {
                        "filename": "../figure/image/1010-Table1-1.png",
                        "caption": "Table 1: Transitivity relations based on the label set reduction scheme 2 in Fig. 1. If (m1,m2) 7→ r1 and (m2,m3) 7→ r2, then the relation of (m1,m3) must be chosen from Trans(r1, r2), ∀m1, m2, m3 ∈ M. The top part of the table uses r to represent generic rules compactly. Notations: before (b), after (a), includes (i), is included (ii), simultaneously (s), vague (v), and r̄ represents the reverse relation of r.",
                        "page": 3,
                        "bbox": {
                            "x1": 352.8,
                            "x2": 480.47999999999996,
                            "y1": 62.879999999999995,
                            "y2": 188.16
                        }
                    },
                    {
                        "filename": "../figure/image/1010-Figure1-1.png",
                        "caption": "Figure 1: Two possible interpretations to the label set of RT = {b, a, i, ii, s, v} for the temporal relations between (A, B). “x” means that the label is ignored. Brackets represent time intervals along the time axis. Scheme 2 is adopted consistently in this work.",
                        "page": 3,
                        "bbox": {
                            "x1": 91.67999999999999,
                            "x2": 269.28,
                            "y1": 64.8,
                            "y2": 217.92
                        }
                    },
                    {
                        "filename": "../figure/image/1010-Table3-1.png",
                        "caption": "Table 3: Statistics of our new dataset with both temporal and causal relations annotated, compared with existing datasets. T-Link: Temporal relation. C-Link: Causal relation. The new dataset is much denser than Causal-TB in both T-Links and C-Links.",
                        "page": 6,
                        "bbox": {
                            "x1": 315.84,
                            "x2": 517.4399999999999,
                            "y1": 62.879999999999995,
                            "y2": 114.24
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-16"
        },
        {
            "slides": {
                "2": {
                    "title": "Updates in WMT19",
                    "text": [
                        "I reference-based human evaluation monolingual",
                        "I reference-free human evaluation bilingual",
                        "I standard reference-based metrics",
                        "I reference-less metrics QE as a Metric",
                        "I Hybrid supersampling was not needed for sys-level:",
                        "I Sufficiently large numbers of MT systems serve as datapoints."
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "3": {
                    "title": "System and Segment Level Evaluation",
                    "text": [
                        "I Participants compute one",
                        "score for the whole test set, as translated by each of the systems",
                        "The new in The company m From Friday's joi \"The unification Cermak, which New common D",
                        "I Segment Level Econo For exam The new in",
                        "score for each sentence of each systems translation"
                    ],
                    "page_nums": [
                        11,
                        12
                    ],
                    "images": []
                },
                "4": {
                    "title": "Past Metrics Tasks",
                    "text": [
                        "Rat. of Concord. Pairs",
                        "Pearson Corr Coeff based on",
                        "RR RR RR RR RR RR RR RR RR",
                        "main and secondary score reported for the system-level evaluation. and are slightly different variants regarding ties.",
                        "RR, DA, daRR are different golden truths.",
                        "Increase in number of participating teams?",
                        "I Baseline metrics: 9 + 2 reimplementations",
                        "I sacreBLEU-BLEU and sacreBLEU-chrF.",
                        "I Submitted metrics: 10 out of 24 are QE as a Metric."
                    ],
                    "page_nums": [
                        13,
                        14
                    ],
                    "images": []
                },
                "5": {
                    "title": "Data Overview This Year",
                    "text": [
                        "I Direct Assessment (DA) for sys-level.",
                        "I Derived relative ranking (daRR) for seg-level.",
                        "I Multiple languages (18 pairs):",
                        "I English (en) to/from Czech (cs), German (de), Finnish (fi),",
                        "Gujarati (gu), Kazakh (kk), Lithuanian (lt), Russian (ru), and",
                        "Chinese (zh), but excluding cs-en.",
                        "I German (de)Czech (cs) and German (de)French (fr)."
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "6": {
                    "title": "Baselines",
                    "text": [
                        "Metric Features Seg-L Sys-L sentBLEU",
                        "CDER chrF chrF+ sacreBLEU-BLEU sacreBLEU-chrF n-grams n-grams n-grams",
                        "Levenshtein distance edit distance, edit types edit distance, edit types edit distance, edit types character n-grams character n-grams n-grams n-grams",
                        "We average ( ) seg-level scores."
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "7": {
                    "title": "Participating Metrics",
                    "text": [
                        "Features char. n-grams, permutation trees contextual word embeddings char. edit distance, edit types char. edit distance, edit types learned neural representations surface linguistic features surface linguistic features word alignments",
                        "Meteor++ 2.0 (syntax+copy) word alignments",
                        "YiSi-1 srl psuedo-references, paraphrases word mover distance semantic similarity semantic similarity semantic similarity",
                        "Univ. of Amsterdam, ILCC",
                        "Dublin City University, ADA",
                        "We average ( ) their seg-level scores."
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "10": {
                    "title": "Golden Truth for Sys Level DA Pearson",
                    "text": [
                        "You have scored individual sentences: (Thank you!)",
                        "News Task has filtered and standardized this (Ave z).",
                        "We correlate it with the metric sys-level score.",
                        "Ave z BLEU CUNI-Transformer uedin online-B online-A online-G"
                    ],
                    "page_nums": [
                        20
                    ],
                    "images": []
                },
                "12": {
                    "title": "Segment Level News Task Evaluation",
                    "text": [
                        "You scored individual sentences: (Same data as above.)",
                        "Standardized, averaged seg-level golden truth score.",
                        "Could be correlated to metric seg-level scores.",
                        "but there are not enough judgements for indiv. sentences."
                    ],
                    "page_nums": [
                        22
                    ],
                    "images": []
                },
                "13": {
                    "title": "daRR Interpreting DA as RR",
                    "text": [
                        "I If score for candidate A better than B by more than 25 points",
                        "infer the pairwise comparison: A B.",
                        "I No ties in golden daRR.",
                        "I Evaluate with the known Kendalls",
                        "I On average, there are 319 of scored outputs per src segm.",
                        "I From these, we generate 4k327k daRR pairs."
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": []
                },
                "15": {
                    "title": "Sys Level into English Official",
                    "text": [
                        "de-en fi-en gu-en kk-en lt-en ru-en zh-en",
                        "chrF chrF+ EED ESIM hLEPORa baseline hLEPORb baseline Meteor++ 2.0(syntax) Meteor++ 2.0(syntax+copy) NIST PER PReP sacreBLEU.BLEU sacreBLEU.chrF TER WER WMDO YiSi-0 YiSi-1 YiSi-1 srl QE as a Metric: ibm1-morpheme ibm1-pos4gram LASIM LP UNI UNI+ YiSi-2 YiSi-2 srl newstest2019",
                        "I Top: Baselines and regular metrics. Bottom: QE as a metric.",
                        "I Bold: not significantly outperformed by any others."
                    ],
                    "page_nums": [
                        25,
                        26
                    ],
                    "images": []
                },
                "17": {
                    "title": "Summary of Sys Level Wins Metrics",
                    "text": [
                        "LPs LPs LPs Corr Wins Overall wins",
                        "BLEU PER sacreBLEU-BLEU BERTr Met++ 2.0(s.) Met++ 2.0(s.+copy) WMDO hLEPORb baseline PReP"
                    ],
                    "page_nums": [
                        28
                    ],
                    "images": []
                },
                "18": {
                    "title": "Summary of Sys Level Wins QE",
                    "text": [
                        "LPs LPs LPs Corr Wins ibm1-morpheme ibm1-pos4gram"
                    ],
                    "page_nums": [
                        29
                    ],
                    "images": []
                },
                "21": {
                    "title": "Summary of Seg Level Wins Metrics",
                    "text": [
                        "LPs LPs LPs Corr Wins Tot"
                    ],
                    "page_nums": [
                        32
                    ],
                    "images": []
                },
                "22": {
                    "title": "Summary of Seg Level Wins QE",
                    "text": [
                        "LPs LPs LPs Corr Wins ibm1-morpheme ibm1-pos4gram"
                    ],
                    "page_nums": [
                        33
                    ],
                    "images": []
                },
                "24": {
                    "title": "Overall Status of MT Metrics",
                    "text": [
                        "I Sys-level very good overall:",
                        "I Pearson Correlation >.90 mostly, best reach >95 or",
                        "I Low pearsons exist but not many.",
                        "I Correlations are heavily affected by the underlying set of MT",
                        "I System-level correlations are much worse when based on only the better",
                        "I No clear winners, but have a look at this years posters.",
                        "I Seg-level much worse:",
                        "I The top Kendalls only .59.",
                        "I standard metrics correlations varies between 0.03 and 0.59.",
                        "I QE a metric obtains even negative correlations.",
                        "I Methods using embeddings are better:",
                        "I YiSi-*: Word embeddings + other types of available resources.",
                        "I ESIM: Sentence embeddings."
                    ],
                    "page_nums": [
                        36,
                        37
                    ],
                    "images": []
                },
                "25": {
                    "title": "Next Metrics Task",
                    "text": [
                        "I Yes, we will run the task!",
                        "I Big Challenge remains: References possibly worse than MT.",
                        "I Yes, we like the QE as a metric track.",
                        "I We will report the top-N plots.",
                        "I We have to summarize them somehow, though.",
                        "I Doc-level golden truth did not seem different from sys-level.",
                        "I This may change We might run doc-level metrics."
                    ],
                    "page_nums": [
                        38
                    ],
                    "images": []
                }
            },
            "paper_title": "Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges",
            "paper_id": "1012",
            "paper": {
                "title": "Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges",
                "abstract": "This paper presents the results of the WMT19 Metrics Shared Task. Participants were asked to score the outputs of the translations systems competing in the WMT19 News Translation Task with automatic metrics. 13 research groups submitted 24 metrics, 10 of which are reference-less \"metrics\" and constitute submissions to the joint task with WMT19 Quality Estimation Task, \"QE as a Metric\". In addition, we computed 11 baseline metrics, with 8 commonly applied baselines (BLEU, SentBLEU, NIST, WER, PER, TER, CDER, and chrF) and 3 reimplementations (chrF+, sacreBLEU-BLEU, and sacreBLEU-chrF). Metrics were evaluated on the system level, how well a given metric correlates with the WMT19 official manual ranking, and segment level, how well the metric correlates with human judgements of segment quality. This year, we use direct assessment (DA) as our only form of manual evaluation.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction To determine system performance in machine translation (MT), it is often more practical to use an automatic evaluation, rather than a manual one."
                    },
                    {
                        "id": 1,
                        "string": "Manual/human evaluation can be costly and time consuming, and so an automatic evaluation metric, given that it sufficiently correlates with manual evaluation, can be useful in developmental cycles."
                    },
                    {
                        "id": 2,
                        "string": "In studies involving hyperparameter tuning or architecture search, automatic metrics are necessary as the amount of human effort implicated in manual evaluation is generally prohibitively large."
                    },
                    {
                        "id": 3,
                        "string": "As objective, reproducible quantities, metrics can also facilitate cross-paper compar-isons."
                    },
                    {
                        "id": 4,
                        "string": "The WMT Metrics Shared Task 1 annually serves as a venue to validate the use of existing metrics (including baselines such as BLEU), and to develop new ones; see Koehn and Monz (2006) through Ma et al."
                    },
                    {
                        "id": 5,
                        "string": "(2018) ."
                    },
                    {
                        "id": 6,
                        "string": "In the setup of our Metrics Shared Task, an automatic metric compares an MT system's output translations with manual reference translations to produce: either (a) system-level score, i.e."
                    },
                    {
                        "id": 7,
                        "string": "a single overall score for the given MT system, or (b) segment-level scores for each of the output translations, or both."
                    },
                    {
                        "id": 8,
                        "string": "This year we teamed up with the organizers of the QE Task and hosted \"QE as a Metric\" as a joint task."
                    },
                    {
                        "id": 9,
                        "string": "In the setup of the Quality Estimation Task (Fonseca et al., 2019) , no humanproduced translations are provided to estimate the quality of output translations."
                    },
                    {
                        "id": 10,
                        "string": "Quality estimation (QE) methods are built to assess MT output based on the source or based on the translation itself."
                    },
                    {
                        "id": 11,
                        "string": "In this task, QE developers were invited to perform the same scoring as standard metrics participants, with the exception that they refrain from using a reference translation in production of their scores."
                    },
                    {
                        "id": 12,
                        "string": "We then evaluate the QE submissions in exactly the same way as regular metrics are evaluated, see below."
                    },
                    {
                        "id": 13,
                        "string": "From the point of view of correlation with manual judgements, there is no difference in metrics using or not using references."
                    },
                    {
                        "id": 14,
                        "string": "The source, reference texts, and MT system outputs for the Metrics task come from the News Translation Task (Barrault et al., 2019 , which we denote as Findings 2019)."
                    },
                    {
                        "id": 15,
                        "string": "The texts were drawn from the news domain and involve translations of English (en) to/from Czech (cs), German (de), Finnish (fi), Gujarati (gu), Kazakh (kk), Lithuanian (lt), Russian (ru) , and Chinese (zh), but excluding csen (15 language pairs)."
                    },
                    {
                        "id": 16,
                        "string": "Three other language pairs not including English were also manually evaluated as part of the News Translation Task: German→Czech and German↔French."
                    },
                    {
                        "id": 17,
                        "string": "In total, metrics could participate in 18 language pairs, with 10 target languages."
                    },
                    {
                        "id": 18,
                        "string": "In the following, we first give an overview of the task (Section 2) and summarize the baseline (Section 3) and submitted (Section 4) metrics."
                    },
                    {
                        "id": 19,
                        "string": "The results for system-and segment-level evaluation are provided in Sections 5.1 and 5.2, respectively, followed by a joint discussion Section 6."
                    },
                    {
                        "id": 20,
                        "string": "Task Setup This year, we provided task participants with one test set for each examined language pair, i.e."
                    },
                    {
                        "id": 21,
                        "string": "a set of source texts (which are commonly ignored by MT metrics), corresponding MT outputs (these are the key inputs to be scored) and a reference translation (held out for the participants of \"QE as a Metric\" track)."
                    },
                    {
                        "id": 22,
                        "string": "In the system-level, metrics aim to correlate with a system's score which is an average over many human judgments of segment translation quality produced by the given system."
                    },
                    {
                        "id": 23,
                        "string": "In the segment-level, metrics aim to produce scores that correlate best with a human ranking judgment of two output translations for a given source segment (more on the manual quality assessment in Section 2.3)."
                    },
                    {
                        "id": 24,
                        "string": "Participants were free to choose which language pairs and tracks (system/segment and reference-based/reference-free) they wanted to take part in."
                    },
                    {
                        "id": 25,
                        "string": "Source and Reference Texts The source and reference texts we use are newstest2019 from this year's WMT News Translation Task (see Findings 2019)."
                    },
                    {
                        "id": 26,
                        "string": "This set contains approximately 2,000 sentences for each translation direction (except Gujarati, Kazakh and Lithuanian which have approximately 1,000 sentences each, and German to/from French which has 1701 sentences)."
                    },
                    {
                        "id": 27,
                        "string": "The reference translations provided in new-stest2019 were created in the same direction as the MT systems were translating."
                    },
                    {
                        "id": 28,
                        "string": "The exceptions are German→Czech where both sides are translations from English and German↔French which followed last years' practice."
                    },
                    {
                        "id": 29,
                        "string": "Last year and the years before, the dataset consisted of two halves, one originating in the source language and one in the target language."
                    },
                    {
                        "id": 30,
                        "string": "This however lead to adverse artifacts in MT evaluation."
                    },
                    {
                        "id": 31,
                        "string": "System Outputs The results of the Metrics Task are affected by the actual set of MT systems participating in a given translation direction."
                    },
                    {
                        "id": 32,
                        "string": "On one hand, if all systems are very close in their translation quality, then even humans will struggle to rank them."
                    },
                    {
                        "id": 33,
                        "string": "This in turn will make the task for MT metrics very hard."
                    },
                    {
                        "id": 34,
                        "string": "On the other hand, if the task includes a wide range of systems of varying quality, correlating with humans should be generally easier, see Section 6.1 for a discussion on this."
                    },
                    {
                        "id": 35,
                        "string": "One can also expect that if the evaluated systems are of different types, they will exhibit different error patterns and various MT metrics can be differently sensitive to these patterns."
                    },
                    {
                        "id": 36,
                        "string": "This year, all MT systems included in the Metrics Task come from the News Translation Task (see Findings 2019)."
                    },
                    {
                        "id": 37,
                        "string": "There are however still noticeable differences among the various language pairs."
                    },
                    {
                        "id": 38,
                        "string": "• Unsupervised MT Systems."
                    },
                    {
                        "id": 39,
                        "string": "The German→Czech research systems were trained in an unsupervised fashion, i.e."
                    },
                    {
                        "id": 40,
                        "string": "without the access to parallel Czech-German texts (except for a couple of thousand sentences used primarily for validation)."
                    },
                    {
                        "id": 41,
                        "string": "We thus expect the research German-Czech systems to be \"more creative\" and depart further away from the references."
                    },
                    {
                        "id": 42,
                        "string": "The online systems in this language directions are however standard MT systems so the German-Czech evaluation could be to some extent bimodal."
                    },
                    {
                        "id": 43,
                        "string": "• EU Election."
                    },
                    {
                        "id": 44,
                        "string": "The French↔German translation was focused on a sub-domain of news, namely texts related EU Election."
                    },
                    {
                        "id": 45,
                        "string": "Various MT system developers may have invested more or less time to the domain adaptation."
                    },
                    {
                        "id": 46,
                        "string": "• Regular News Tasks Systems."
                    },
                    {
                        "id": 47,
                        "string": "These are all the other MT systems in the evaluation; differing in whether they are trained only on WMT provided data (\"Constrained\", or \"Unconstrained\") as in the previous years."
                    },
                    {
                        "id": 48,
                        "string": "All the freely available web services (online MT systems) are deemed unconstrained."
                    },
                    {
                        "id": 49,
                        "string": "Overall, the results are based on 233 systems across 18 language pairs."
                    },
                    {
                        "id": 50,
                        "string": "2 Manual Quality Assessment Direct Assessment (DA, Graham et al., 2013 Graham et al., , 2014a was employed as the source of the \"golden truth\" to evaluate metrics again this year."
                    },
                    {
                        "id": 51,
                        "string": "The details of this method of human evaluation are provided in Findings 2019."
                    },
                    {
                        "id": 52,
                        "string": "The basis of DA is to collect a large number of quality assessments (a number on a scale of 1-100, i.e."
                    },
                    {
                        "id": 53,
                        "string": "effectively a continuous scale) for the outputs of all MT systems."
                    },
                    {
                        "id": 54,
                        "string": "These scores are then standardized per annotator."
                    },
                    {
                        "id": 55,
                        "string": "In the past years, the underlying manual scores were reference-based (human judges had access to the same reference translation as the MT quality metric)."
                    },
                    {
                        "id": 56,
                        "string": "This year, the official WMT19 scores are reference-based (or \"monolingual\") for some language pairs and reference-free (or \"bilingual\") for others."
                    },
                    {
                        "id": 57,
                        "string": "3 Due to these different types of golden truth collection, reference-based language pairs are in a closer match with the standard referencebased metrics, while the reference-free language pairs are better fit for the \"QE as a metric\" subtask."
                    },
                    {
                        "id": 58,
                        "string": "Note that system-level manual scores are different than those of the segment-level."
                    },
                    {
                        "id": 59,
                        "string": "Since for segment-level evaluation, collecting enough DA judgements for each segment is infeasible, so we resort to converting DA judgements to 2 This year, we do not use the artificially constructed \"hybrid systems\" (Graham and Liu, 2016) because the confidence on the ranking of system-level metrics is sufficient even without hybrids."
                    },
                    {
                        "id": 60,
                        "string": "3 Specifically, the reference-based language pairs were those where the anticipated translation quality was lower or where the manual judgements were obtained with the help of anonymous crowdsourcing."
                    },
                    {
                        "id": 61,
                        "string": "Most of these cases were translations into English (fien, gu-en, kk-en, lt-en, ru-en and zh-en) and then the language pairs not involving English (de-cs, de-fr and fr-de)."
                    },
                    {
                        "id": 62,
                        "string": "The reference-less (bilingual) evaluations were those where mainly MT researchers themselves were involved in the annotations: en-cs, en-de, en-fi, en-gu, en-kk, en-lt, en-ru, en-zh."
                    },
                    {
                        "id": 63,
                        "string": "golden truth expressed as relative rankings, see Section 2.3.2."
                    },
                    {
                        "id": 64,
                        "string": "The exact methods used to calculate correlations of participating metrics with the golden truth are described below, in the two sections for system-level evaluation (Section 5.1) and segment-level evaluation (Section 5.2)."
                    },
                    {
                        "id": 65,
                        "string": "System-level Golden Truth: DA For the system-level evaluation, the collected continuous DA scores, standardized for each annotator, are averaged across all assessed segments for each MT system to produce a scalar rating for the system's performance."
                    },
                    {
                        "id": 66,
                        "string": "The underlying set of assessed segments is different for each system."
                    },
                    {
                        "id": 67,
                        "string": "Thanks to the fact that the system-level DA score is an average over many judgments, mean scores are consistent and have been found to be reproducible (Graham et al., 2013) ."
                    },
                    {
                        "id": 68,
                        "string": "For more details see Findings 2019."
                    },
                    {
                        "id": 69,
                        "string": "Segment-level Golden Truth: daRR Starting from Bojar et al."
                    },
                    {
                        "id": 70,
                        "string": "(2017) , when WMT fully switched to DA, we had to come up with a solid golden standard for segment-level judgements."
                    },
                    {
                        "id": 71,
                        "string": "Standard DA scores are reliable only when averaged over sufficient number of judgments."
                    },
                    {
                        "id": 72,
                        "string": "4 Fortunately, when we have at least two DA scores for translations of the same source input, it is possible to convert those DA scores into a relative ranking judgement, if the difference in DA scores allows conclusion that one translation is better than the other."
                    },
                    {
                        "id": 73,
                        "string": "In the following, we denote these re-interpreted DA judgements as \"daRR\", to distinguish it clearly from the relative ranking (\"RR\") golden truth used in the past years."
                    },
                    {
                        "id": 74,
                        "string": "5 Table 1 : Number of judgements for DA converted to daRR data; \"DA>1\" is the number of source input sentences in the manual evaluation where at least two translations of that same source input segment received a DA judgement; \"Ave\" is the average number of translations with at least one DA judgement available for the same source input sentence; \"DA pairs\" is the number of all possible pairs of translations of the same source input resulting from \"DA>1\"; and \"daRR\" is the number of DA pairs with an absolute difference in DA scores greater than the 25 percentage point margin."
                    },
                    {
                        "id": 75,
                        "string": "From the complete set of human assessments collected for the News Translation Task, all possible pairs of DA judgements attributed to distinct translations of the same source were converted into daRR better/worse judgements."
                    },
                    {
                        "id": 76,
                        "string": "Distinct translations of the same source input whose DA scores fell within 25 percentage points (which could have been deemed equal quality) were omitted from the evaluation of segment-level metrics."
                    },
                    {
                        "id": 77,
                        "string": "Conversion of scores in this way produced a large set of daRR judgements for all language pairs, rely on judgements collected from known-reliable volunteers and crowd-sourced workers who passed DA's quality control mechanism."
                    },
                    {
                        "id": 78,
                        "string": "Any inconsistency that could arise from reliance on DA judgements collected from low quality crowd-sourcing is thus prevented."
                    },
                    {
                        "id": 79,
                        "string": "shown in Table 1 due to combinatorial advantage of extracting daRR judgements from all possible pairs of translations of the same source input."
                    },
                    {
                        "id": 80,
                        "string": "We see that only German-French and esp."
                    },
                    {
                        "id": 81,
                        "string": "French-German can suffer from insufficient number of these simulated pairwise comparisons."
                    },
                    {
                        "id": 82,
                        "string": "The daRR judgements serve as the golden standard for segment-level evaluation in WMT19."
                    },
                    {
                        "id": 83,
                        "string": "Baseline Metrics In addition to validating popular metrics, including baselines metrics serves as comparison and prevents \"loss of knowledge\" as mentioned by Bojar et al."
                    },
                    {
                        "id": 84,
                        "string": "(2016) ."
                    },
                    {
                        "id": 85,
                        "string": "Moses scorer 6 is one of the MT evaluation tools that aggregated several useful metrics over the time."
                    },
                    {
                        "id": 86,
                        "string": "Since Macháček and Bojar (2013) , we have been using Moses scorer to provide most of the baseline metrics and kept encouraging authors of well-performing MT metrics to include them in Moses scorer."
                    },
                    {
                        "id": 87,
                        "string": "7 The baselines we report are: BLEU and NIST The metrics BLEU (Papineni et al., 2002) and NIST (Doddington, 2002) were computed using mteval-v13a.pl 8 from the OpenMT Evaluation Campaign."
                    },
                    {
                        "id": 88,
                        "string": "The tool includes its own tokenization."
                    },
                    {
                        "id": 89,
                        "string": "We run mteval with the flag --international-tokenization."
                    },
                    {
                        "id": 90,
                        "string": "9 TER, WER, PER and CDER."
                    },
                    {
                        "id": 91,
                        "string": "The metrics TER (Snover et al., 2006) , WER, PER and CDER (Leusch et al., 2006) were produced by the Moses scorer, which is used in Moses model optimization."
                    },
                    {
                        "id": 92,
                        "string": "We used the standard tokenizer script as available in Moses toolkit for tokenization."
                    },
                    {
                        "id": 93,
                        "string": "(Han et al., 2012 (Han et al., , 2013 http://github.com/poethan/LEPOR LEPORb surface linguistic features • ⊘ Dublin City University, ADAPT (Han et al., 2012 (Han et al., , 2013 Table 2 : Participants of WMT19 Metrics Shared Task."
                    },
                    {
                        "id": 94,
                        "string": "\"•\" denotes that the metric took part in (some of the language pairs) of the segment-and/or system-level evaluation."
                    },
                    {
                        "id": 95,
                        "string": "\"⊘\" indicates that the system-level scores are implied, simply taking arithmetic (macro-)average of segment-level scores."
                    },
                    {
                        "id": 96,
                        "string": "\"−\" indicates that the metric didn't participate the track (Seg/Sys-level)."
                    },
                    {
                        "id": 97,
                        "string": "A metric is learned if it is trained on a QE or metric evaluation dataset (i.e."
                    },
                    {
                        "id": 98,
                        "string": "pretraining or parsers don't count, but training on WMT 2017 metrics task data does)."
                    },
                    {
                        "id": 99,
                        "string": "For the baseline metrics available in the Moses toolkit, paths are relative to http://github.com/moses-smt/ mosesdecoder/."
                    },
                    {
                        "id": 100,
                        "string": "smoothed version of BLEU for scoring at the segment-level."
                    },
                    {
                        "id": 101,
                        "string": "We used the standard tokenizer script as available in Moses toolkit for tokenization."
                    },
                    {
                        "id": 102,
                        "string": "chrF and chrF+."
                    },
                    {
                        "id": 103,
                        "string": "The metrics chrF and chrF+ (Popović, 2015 (Popović, , 2017 are computed using their original Python implementation, see Table 2 ."
                    },
                    {
                        "id": 104,
                        "string": "We ran chrF++.py with the parameters -nw 0 -b 3 to obtain the chrF score and with -nw 1 -b 3 to obtain the chrF+ score."
                    },
                    {
                        "id": 105,
                        "string": "Note that chrF intentionally removes all spaces before matching the n-grams, detokenizing the segments but also concatenating words."
                    },
                    {
                        "id": 106,
                        "string": "10 sacreBLEU-BLEU and sacreBLEU-chrF."
                    },
                    {
                        "id": 107,
                        "string": "The metrics sacreBLEU-BLEU and sacreBLEU-chrF (Post, 2018a) are re-implementation of BLEU and chrF respectively."
                    },
                    {
                        "id": 108,
                        "string": "We ran sacreBLEU-chrF with the same parameters as chrF, but their scores are slightly different."
                    },
                    {
                        "id": 109,
                        "string": "The signature strings produced by sacreBLEU for BLEU and chrF respectively are BLEU+case.lc+lang.de-en+numrefs.1+ smooth.exp+tok.intl+version.1.3.6 and chrF3+case.mixed+lang.de-en +numchars.6+numrefs.1+space.False+ tok.13a+version.1.3.6."
                    },
                    {
                        "id": 110,
                        "string": "The baselines serve in system and segmentlevel evaluations as customary: BLEU, TER, WER, PER, CDER, sacreBLEU-BLEU and sacreBLEU-chrF for system-level only; sentBLEU for segment-level only and chrF for both."
                    },
                    {
                        "id": 111,
                        "string": "Chinese word segmentation is unfortunately not supported by the tokenization scripts mentioned above."
                    },
                    {
                        "id": 112,
                        "string": "For scoring Chinese with baseline metrics, we thus pre-processed MT outputs and reference translations with the script tokenizeChinese.py 11 by Shujian Huang, which separates Chinese characters from each other and also from non-Chinese parts."
                    },
                    {
                        "id": 113,
                        "string": "Table 2 lists the participants of the WMT19 Shared Metrics Task, along with their metrics and links to the source code where available."
                    },
                    {
                        "id": 114,
                        "string": "We have collected 24 metrics from a total of 13 research groups, with 10 reference-less \"metrics\" submitted to the joint task \"QE as a Metrich\" with WMT19 Quality Estimation Task."
                    },
                    {
                        "id": 115,
                        "string": "Submitted Metrics The rest of this section provides a brief summary of all the metrics that participated."
                    },
                    {
                        "id": 116,
                        "string": "BEER BEER (Stanojević and Sima'an, 2015) is a trained evaluation metric with a linear model that combines sub-word feature indicators (character n-grams) and global word order features (skip bigrams) to achieve a language agnostic and fast to compute evaluation metric."
                    },
                    {
                        "id": 117,
                        "string": "BEER has participated in previous years of the evaluation task."
                    },
                    {
                        "id": 118,
                        "string": "BERTr BERTr (Mathur et al., 2019) uses contextual word embeddings to compare the MT output with the reference translation."
                    },
                    {
                        "id": 119,
                        "string": "The BERTr score of a translation is the average recall score over all tokens, using a relaxed version of token matching based on BERT embeddings: namely, computing the maximum cosine similarity between the embedding of a reference token against any token in the MT output."
                    },
                    {
                        "id": 120,
                        "string": "BERTr uses bert_base_uncased embeddings for the to-English language pairs, and bert_base_multilingual_cased embeddings for all other language pairs."
                    },
                    {
                        "id": 121,
                        "string": "CharacTER CharacTER (Wang et al., 2016b,a) , identical to the 2016 setup, is a character-level metric inspired by the commonly applied translation edit rate (TER)."
                    },
                    {
                        "id": 122,
                        "string": "It is defined as the minimum number of character edits required to adjust a hypothesis, until it completely matches the reference, normalized by the length of the hypothesis sentence."
                    },
                    {
                        "id": 123,
                        "string": "CharacTER calculates the character-level edit distance while performing the shift edit on word level."
                    },
                    {
                        "id": 124,
                        "string": "Unlike the strict matching criterion in TER, a hypothesis word is considered to match a reference word and could be shifted, if the edit dis-tance between them is below a threshold value."
                    },
                    {
                        "id": 125,
                        "string": "The Levenshtein distance between the reference and the shifted hypothesis sequence is computed on the character level."
                    },
                    {
                        "id": 126,
                        "string": "In addition, the lengths of hypothesis sequences instead of reference sequences are used for normalizing the edit distance, which effectively counters the issue that shorter translations normally achieve lower TER."
                    },
                    {
                        "id": 127,
                        "string": "Similarly to other character-level metrics, CharacTER is generally applied to nontokenized outputs and references, which also holds for this year's submission with one exception."
                    },
                    {
                        "id": 128,
                        "string": "This year tokenization was carried out for en-ru hypotheses and references before calculating the scores, since this results in large improvements in terms of correlations."
                    },
                    {
                        "id": 129,
                        "string": "For other language pairs, no tokenizer was used for pre-processing."
                    },
                    {
                        "id": 130,
                        "string": "EED EED (Stanchev et al., 2019 ) is a characterbased metric, which builds upon CDER."
                    },
                    {
                        "id": 131,
                        "string": "It is defined as the minimum number of operations of an extension to the conventional edit distance containing a \"jump\" operation."
                    },
                    {
                        "id": 132,
                        "string": "The edit distance operations (insertions, deletions and substitutions) are performed at the character level and jumps are performed when a blank space is reached."
                    },
                    {
                        "id": 133,
                        "string": "Furthermore, the coverage of multiple characters in the hypothesis is penalised by the introduction of a coverage penalty."
                    },
                    {
                        "id": 134,
                        "string": "The sum of the length of the reference and the coverage penalty is used as the normalisation term."
                    },
                    {
                        "id": 135,
                        "string": "ESIM Enhanced Sequential Inference Model (ESIM; Chen et al., 2017; Mathur et al., 2019 ) is a neural model proposed for Natural Language Inference that has been adapted for MT evaluation."
                    },
                    {
                        "id": 136,
                        "string": "It uses cross-sentence attention and sentence matching heuristics to generate a representation of the translation and the reference, which is fed to a feedforward regressor."
                    },
                    {
                        "id": 137,
                        "string": "The metric is trained on singly-annotated Direct Assessment data that has been collected for evaluating WMT systems: all WMT 2018 to-English data for the to-English language pairs, and all WMT 2018 data for all other language pairs."
                    },
                    {
                        "id": 138,
                        "string": "hLEPORb_baseline, hLEPORa_baseline The submitted metric hLEPOR_baseline is a metric based on the factor combination of length penalty, precision, recall, and position difference penalty."
                    },
                    {
                        "id": 139,
                        "string": "The weighted harmonic mean is applied to group the factors together with tunable weight parameters."
                    },
                    {
                        "id": 140,
                        "string": "The systemlevel score is calculated with the same formula but with each factor weighted using weight estimated at system-level and not at segmentlevel."
                    },
                    {
                        "id": 141,
                        "string": "In this submitted baseline version, hLE-POR_baseline was not tuned for each language pair separately but the default weights were applied across all submitted language pairs."
                    },
                    {
                        "id": 142,
                        "string": "Further improvements can be achieved by tuning the weights according to the development data, adding morphological information and applying n-gram factor scores into it (e.g."
                    },
                    {
                        "id": 143,
                        "string": "part-of-speech, n-gram precision and n-gram recall that were added into LEPOR in WMT13.)."
                    },
                    {
                        "id": 144,
                        "string": "The basic model factors and further development with parameters setting were described in the paper (Han et al., 2012) and (Han et al., 2013) ."
                    },
                    {
                        "id": 145,
                        "string": "For sentence-level score, only hLE-PORa_baseline was submitted with scores calculated as the weighted harmonic mean of all the designed factors using default parameters."
                    },
                    {
                        "id": 146,
                        "string": "For system-level score, both hLEPORa_baseline and hLE-PORb_baseline were submitted, where hLEPORa_baseline is the the average score of all sentence-level scores, and hLE-PORb_baseline is calculated via the same sentence-level hLEPOR equation but replacing each factor value with its system-level counterpart."
                    },
                    {
                        "id": 147,
                        "string": "PReP PReP (Yoshimura et al., 2019 ) is a method for filtering pseudo-references to achieve a good match with a gold reference."
                    },
                    {
                        "id": 148,
                        "string": "At the beginning, the source sentence is translated with some off-the-shelf MT systems to create a set of pseudo-references."
                    },
                    {
                        "id": 149,
                        "string": "(Here the MT systems were Google Translate and Microsoft Bing Translator.)"
                    },
                    {
                        "id": 150,
                        "string": "The pseudoreferences are then filtered using BERT (Devlin et al., 2019) fine-tuned on the MPRC corpus (Dolan and Brockett, 2005) , estimating the probability of the paraphrase between gold reference and pseudo-references."
                    },
                    {
                        "id": 151,
                        "string": "Thanks to the high quality of the underlying MT systems, a large portion of their outputs is indeed considered as a valid paraphrase."
                    },
                    {
                        "id": 152,
                        "string": "The final metric score is calculated simply with SentBLEU with these multiple references."
                    },
                    {
                        "id": 153,
                        "string": "WMDO WMDO (Chow et al., 2019b ) is a metric based on distance between distributions in the semantic vector space."
                    },
                    {
                        "id": 154,
                        "string": "Matching in the semantic space has been investigated for translation evaluation, but the constraints of a translation's word order have not been fully explored."
                    },
                    {
                        "id": 155,
                        "string": "Building on the Word Mover's Distance metric and various word embeddings, WMDO introduces a fragmentation penalty to account for fluency of a translation."
                    },
                    {
                        "id": 156,
                        "string": "This word order extension is shown to perform better than standard WMD, with promising results against other types of metrics."
                    },
                    {
                        "id": 157,
                        "string": "YiSi-0, YiSi-1, YiSi-1_srl, YiSi-2, YiSi-2_srl YiSi (Lo, 2019 ) is a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources."
                    },
                    {
                        "id": 158,
                        "string": "YiSi-1 is a MT evaluation metric that measures the semantic similarity between a machine translation and human references by aggregating the idf-weighted lexical semantic similarities based on the contextual embeddings extracted from BERT and optionally incorporating shallow semantic structures (denoted as YiSi-1_srl)."
                    },
                    {
                        "id": 159,
                        "string": "YiSi-0 is the degenerate version of YiSi-1 that is ready-to-deploy to any language."
                    },
                    {
                        "id": 160,
                        "string": "It uses longest common character substring to measure the lexical similarity."
                    },
                    {
                        "id": 161,
                        "string": "YiSi-2 is the bilingual, reference-less version for MT quality estimation, which uses the contextual embeddings extracted from BERT to evaluate the crosslingual lexical semantic similarity between the input and MT output."
                    },
                    {
                        "id": 162,
                        "string": "Like YiSi-1, YiSi-2 can exploit shallow semantic structures as well (denoted as YiSi-2_srl)."
                    },
                    {
                        "id": 163,
                        "string": "QE Systems In addition to the submitted standard metrics, 10 quality estimation systems were submitted to the \"QE as a Metric\" track."
                    },
                    {
                        "id": 164,
                        "string": "The submitted QE systems are evaluated in the same settings as metrics to facilitate comparison."
                    },
                    {
                        "id": 165,
                        "string": "Their descriptions can be found in the Findings of the WMT 2019 Shared Task on Quality Estimation (Fonseca et al., 2019) ."
                    },
                    {
                        "id": 166,
                        "string": "Results We discuss system-level results for news task systems in Section 5.1."
                    },
                    {
                        "id": 167,
                        "string": "The segment-level results are in Section 5.2."
                    },
                    {
                        "id": 168,
                        "string": "System-Level Evaluation As in previous years, we employ the Pearson correlation (r) as the main evaluation measure for system-level metrics."
                    },
                    {
                        "id": 169,
                        "string": "The Pearson correlation is as follows: r = ∑ n i=1 (Hi − H)(Mi − M ) √ ∑ n i=1 (Hi − H) 2 √ ∑ n i=1 (Mi − M ) 2 (1) where H i are human assessment scores of all systems in a given translation direction, M i are the corresponding scores as predicted by a given metric."
                    },
                    {
                        "id": 170,
                        "string": "H and M are their means, respectively."
                    },
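A minimal sketch of the system-level measure in Equation (1), in Python; the score lists below are illustrative placeholders, not WMT19 data.

```python
import math

def pearson_r(human, metric):
    """Pearson correlation between per-system human DA scores (H_i)
    and metric scores (M_i), as in Eq. (1)."""
    n = len(human)
    h_bar = sum(human) / n
    m_bar = sum(metric) / n
    cov = sum((h - h_bar) * (m - m_bar) for h, m in zip(human, metric))
    h_norm = math.sqrt(sum((h - h_bar) ** 2 for h in human))
    m_norm = math.sqrt(sum((m - m_bar) ** 2 for m in metric))
    return cov / (h_norm * m_norm)

# Error metrics such as TER aim for a negative r, so metrics are
# compared via |r|; these numbers are made up for illustration.
human = [0.71, 0.64, 0.58, 0.42, 0.30]
bleu_like = [34.1, 33.0, 31.2, 28.9, 25.4]
print(abs(pearson_r(human, bleu_like)))
```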
                    {
                        "id": 171,
                        "string": "Since some metrics, such as BLEU, aim to achieve a strong positive correlation with human assessment, while error metrics, such as TER, aim for a strong negative correlation we compare metrics via the absolute value |r| of a    YiSi.1 Figure 1 : System-level metric significance test results for DA human assessment for into English and out-of English language pairs (newstest2019): Green cells denote a statistically significant increase in correlation with human assessment for the metric in a given row over the metric in a given column according to Williams test."
                    },
                    {
                        "id": 172,
                        "string": "− − 0.487 − − ibm1-pos4gram 0.339 − − − − − − LASIM 0.247 − − − − 0.310 − LP 0.474 − − − − 0.488 − UNI 0.846 0.930 − − − 0.805 − UNI+ 0.850 0.924 − − − 0.808 − YiSi-2 0.796 0.642 0.566 0.324 0.442 0.339 0.940 YiSi-2_srl 0.804 − − − − − 0.947 newstest2019 − − 0.810 − − ibm1-pos4gram − 0.393 − − − − − − LASIM − 0.871 − − − − 0.823 − LP − 0.569 − − − − 0.661 − UNI 0.028 0.841 0.907 − − − 0.919 − UNI+ − − − − − − 0.918 − USFD − 0.224 − − − − 0.857 − USFD-TL − 0.091 − − − − 0.771 − YiSi-2 0. given metric's correlation with human assessment."
                    },
                    {
                        "id": 173,
                        "string": "System-Level Results Tables 3, 4 and 5 provide the system-level correlations of metrics evaluating translation of newstest2019."
                    },
                    {
                        "id": 174,
                        "string": "The underlying texts are part of the WMT19 News Translation test set (new-stest2019) and the underlying MT systems are all MT systems participating in the WMT19 News Translation Task."
                    },
                    {
                        "id": 175,
                        "string": "As recommended by Graham and Baldwin (2014), we employ Williams significance test (Williams, 1959) to identify differences in correlation that are statistically significant."
                    },
                    {
                        "id": 176,
                        "string": "Williams test is a test of significance of a difference in dependent correlations and therefore suitable for evaluation of metrics."
                    },
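A sketch of one standard formulation of the Williams test for a difference between dependent correlations (metric A vs. human, metric B vs. human, sharing the human scores); this is an illustrative implementation under that assumption, not the task's official code.

```python
import math
from scipy.stats import t as t_dist

def williams_test(r12, r13, r23, n):
    """r12: correlation of metric A with human scores; r13: metric B
    with human scores; r23: metric A with metric B; n: number of MT
    systems. Returns the t statistic and a one-sided p-value with
    n - 3 degrees of freedom."""
    K = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    num = (r12 - r13) * math.sqrt((n - 1) * (1 + r23))
    den = math.sqrt(2 * K * (n - 1) / (n - 3)
                    + ((r12 + r13) ** 2 / 4) * (1 - r23) ** 3)
    t = num / den
    return t, 1 - t_dist.cdf(t, df=n - 3)

# e.g. 20 systems, metric A correlating slightly better than metric B
print(williams_test(0.95, 0.92, 0.90, n=20))
```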
                    {
                        "id": 177,
                        "string": "Correlations not significantly outperformed by any other metric for the given language pair are highlighted in bold in Tables 3, 4 and 5."
                    },
                    {
                        "id": 178,
                        "string": "Since pairwise comparisons of metrics may be also of interest, e.g."
                    },
                    {
                        "id": 179,
                        "string": "to learn which metrics significantly outperform the most widely employed metric BLEU, we include significance test results for every competing pair of metrics including our baseline metrics in Figure 1 and Figure 2 ."
                    },
                    {
                        "id": 180,
                        "string": "This year, the increased number of systems participating in the news tasks has provided a larger sample of system scores for testing metrics."
                    },
                    {
                        "id": 181,
                        "string": "Since we already have sufficiently conclusive results on genuine MT systems, we do not need to generate hybrid system results as in Graham and Liu (2016) and past metrics tasks."
                    },
                    {
                        "id": 182,
                        "string": "Segment-Level Evaluation Segment-level evaluation relies on the manual judgements collected in the News Translation Task evaluation."
                    },
                    {
                        "id": 183,
                        "string": "This year, again we were unable to follow the methodology outlined in Graham et al."
                    },
                    {
                        "id": 184,
                        "string": "(2015) for evaluation of segment-level metrics because the sampling of sentences did not provide sufficient number of assessments of the same segment."
                    },
                    {
                        "id": 185,
                        "string": "We therefore convert pairs of DA scores for competing translations to daRR better/worse preferences as described in Section 2.3.2."
                    },
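A sketch of this DA-to-daRR conversion; the 25-percentage-point margin follows the description of daRR in Table 1, and the function name is a placeholder.

```python
def da_to_darr(da_pairs, margin=25.0):
    """da_pairs: iterable of (da_s1, da_s2), the DA scores of two
    distinct translations of the same source segment. A pair yields a
    daRR better/worse preference only if the absolute DA difference
    exceeds the margin; closer pairs produce no judgement."""
    prefs = []
    for da1, da2 in da_pairs:
        if abs(da1 - da2) > margin:
            prefs.append('<' if da1 < da2 else '>')
    return prefs

print(da_to_darr([(80.0, 40.0), (62.0, 55.0)]))  # ['>'], 2nd pair dropped
```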
                    {
                        "id": 186,
                        "string": "We measure the quality of metrics' segmentlevel scores against the daRR golden truth using a Kendall's Tau-like formulation, which is an adaptation of the conventional Kendall's Tau coefficient."
                    },
                    {
                        "id": 187,
                        "string": "Since we do not have a total order ranking of all translations, it is not possible to apply conventional Kendall's Tau (Graham et al., 2015) ."
                    },
                    {
                        "id": 188,
                        "string": "Our Kendall's Tau-like formulation, τ , is as follows: τ = |Concordant| − |Discordant| |Concordant| + |Discordant| (2) where Concordant is the set of all human comparisons for which a given metric suggests the same order and Discordant is the set of all human comparisons for which a given metric disagrees."
                    },
                    {
                        "id": 189,
                        "string": "The formula is not specific with respect to ties, i.e."
                    },
                    {
                        "id": 190,
                        "string": "cases where the annotation says that the two outputs are equally good."
                    },
                    {
                        "id": 191,
                        "string": "The way in which ties (both in human and metric judgement) were incorporated in computing Kendall τ has changed across the years of WMT Metrics Tasks."
                    },
                    {
                        "id": 192,
                        "string": "Here we adopt the version used in WMT17 daRR evaluation."
                    },
                    {
                        "id": 193,
                        "string": "For a detailed discussion on other options, see also Macháček and Bojar (2014) ."
                    },
                    {
                        "id": 194,
                        "string": "Whether or not a given comparison of a pair of distinct translations of the same source input, s 1 and s 2 , is counted as a concordant (Conc) or disconcordant (Disc) pair is defined by the following matrix: Metric s 1 < s 2 s 1 = s 2 s 1 > s 2 Human s 1 < s 2 Conc Disc Disc s 1 = s 2 − − − s 1 > s 2 Disc Disc Conc In the notation of Macháček and Bojar (2014) , this corresponds to the setup used in WMT12 (with a different underlying method of manual judgements, RR): Metric WMT12 < = > Human < 1 -1 -1 = X X X > -1 -1 1 The key differences between the evaluation used in WMT14-WMT16 and evaluation used in WMT17-WMT19 were (1) the move from RR to daRR and (2) the treatment of ties."
                    },
                    {
                        "id": 195,
                        "string": "In the years 2014-2016, ties in metrics scores were not penalized."
                    },
                    {
                        "id": 196,
                        "string": "With the move to daRR, where the quality of the two candidate translations Table 6 : Segment-level metric results for to-English language pairs in newstest2019: absolute Kendall's Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold."
                    },
                    {
                        "id": 197,
                        "string": "Table 7 : Segment-level metric results for out-of-English language pairs in newstest2019: absolute Kendall's Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold."
                    },
                    {
                        "id": 198,
                        "string": "is deemed substantially different and no ties in human judgements arise, it makes sense to penalize ties in metrics' predictions in order to promote discerning metrics."
                    },
                    {
                        "id": 199,
                        "string": "− − 0.069 − − ibm1-pos4gram −0.153 − − − − − − LASIM −0.024 − − − − 0.022 − LP −0.096 − − − − −0.035 − UNI 0.022 0.202 − − − 0.084 − UNI+ 0.015 0.211 − − − 0.089 − YiSi-2 0.068 0.126 −0.001 0.096 0.075 0.053 0.253 YiSi-2_srl 0.068 − − − − − 0.246 newstest2019 ibm1-morpheme −0.135 −0.003 −0.005 − − −0.165 − − ibm1-pos4gram − −0.123 − − − − − − LASIM − 0.147 − − − − −0.24 − LP − −0.119 − − − − −0.158 − UNI 0.060 0.129 0.351 − − − 0.226 − UNI+ − − − − − − 0.222 − USFD − −0.029 − − − − 0.136 − USFD-TL − −0.037 − − − − 0.191 − YiSi-2 0.069 0.212 0.239 0.147 0.187 0.003 −0.155 0.044 YiSi-2_srl − 0.236 − − − − − 0.034 newstest2019 Note that the penalization of ties makes our evaluation asymmetric, dependent on whether the metric predicted the tie for a pair where humans predicted <, or >."
                    },
                    {
                        "id": 200,
                        "string": "It is now important to interpret the meaning of the comparison identically for humans and metrics."
                    },
                    {
                        "id": 201,
                        "string": "For error metrics, we thus reverse the sign of the metric score prior to the comparison with human scores: higher scores have to indicate better translation quality."
                    },
                    {
                        "id": 202,
                        "string": "In WMT19, the original authors did this for CharacTER."
                    },
                    {
                        "id": 203,
                        "string": "To summarize, the WMT19 Metrics Task for segment-level evaluation: • ensures that error metrics are first converted to the same orientation as the human judgements, i.e."
                    },
                    {
                        "id": 204,
                        "string": "higher score indicating higher translation quality, • excludes all human ties (this is already implied by the construction of daRR from DA judgements), Figure 3 : daRR segment-level metric significance test results for into English and out-of English language pairs (newstest2019): Green cells denote a significant win for the metric in a given row over the metric in a given column according bootstrap resampling."
                    },
                    {
                        "id": 205,
                        "string": "Figure 4 : daRR segment-level metric significance test results for German to Czech, German to French and French to German (newstest2019): Green cells denote a significant win for the metric in a given row over the metric in a given column according bootstrap resampling."
                    },
                    {
                        "id": 206,
                        "string": "• counts metric's ties as a Discordant pairs."
                    },
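A sketch of the τ computation in Equation (2) under the convention just summarized: human ties are already excluded by the daRR construction, and a metric tie counts as Discordant. Error metrics are assumed to have their sign flipped beforehand.

```python
def tau_like(pairs):
    """pairs: iterable of (human_pref, m1, m2), where human_pref is
    '<' or '>' for translations s1 vs. s2, and m1, m2 are the metric's
    segment scores for s1 and s2 (higher = better)."""
    conc = disc = 0
    for human_pref, m1, m2 in pairs:
        if m1 == m2:                        # metric tie -> Discordant
            disc += 1
        elif (m1 < m2) == (human_pref == '<'):
            conc += 1                       # same order as the human
        else:
            disc += 1
    return (conc - disc) / (conc + disc)

# hypothetical daRR preferences with metric scores; the tie is penalized
print(tau_like([('<', 0.2, 0.8), ('>', 0.9, 0.4), ('<', 0.5, 0.5)]))
```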
                    {
                        "id": 207,
                        "string": "We employ bootstrap resampling (Koehn, 2004; Graham et al., 2014b) to estimate confidence intervals for our Kendall's Tau formulation, and metrics with non-overlapping 95% confidence intervals are identified as having statistically significant difference in performance."
                    },
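A sketch of these bootstrap confidence intervals over daRR pairs, reusing the tau_like function above; the resampling details here are illustrative assumptions rather than the task's exact implementation.

```python
import random

def bootstrap_ci(pairs, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the tau-like score: resample the
    daRR pairs with replacement and take empirical quantiles."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_boot):
        sample = [pairs[rng.randrange(len(pairs))] for _ in pairs]
        scores.append(tau_like(sample))
    scores.sort()
    lo = scores[int(n_boot * alpha / 2)]
    hi = scores[int(n_boot * (1 - alpha / 2)) - 1]
    # metrics whose 95% intervals do not overlap are declared
    # significantly different
    return lo, hi
```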
                    {
                        "id": 208,
                        "string": "Segment-Level Results Results of the segment-level human evaluation for translations sampled from the News Translation Task are shown in Tables 6, 7 Discussion This year, human data was collected from reference-based evaluations (or \"monolingual\") and reference-free evaluations (or \"bilingual\")."
                    },
                    {
                        "id": 209,
                        "string": "The reference-based (monolingual) evaluations were obtained with the help of anonymous crowdsourcing, while the reference-less (bilingual) evaluations were mainly from MT researchers who committed their time contribution to the manual evaluation for each submitted system."
                    },
                    {
                        "id": 210,
                        "string": "Stability across MT Systems The observed performance of metrics depends on the underlying texts and systems that participate in the News Translation Task (see Section 2)."
                    },
                    {
                        "id": 211,
                        "string": "For the strongest MT systems, distinguishing which system outputs are better is hard, even for human assessors."
                    },
                    {
                        "id": 212,
                        "string": "On the other hand, if the systems are spread across a wide performance range, it will be easier for metrics to correlate with human judgements."
                    },
                    {
                        "id": 213,
                        "string": "To provide a more reliable view, we created plots of Pearson correlation when the underlying set of MT systems is reduced to top n ones."
                    },
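A sketch of the top-n analysis described here, reusing pearson_r from the earlier sketch; systems are ranked by human score and the correlation is recomputed on each prefix.

```python
def topn_correlations(human, metric, min_n=4):
    """human, metric: parallel per-system score lists. Returns
    {n: Pearson r over the top-n systems ranked by human score}."""
    ranked = sorted(zip(human, metric), key=lambda hm: hm[0], reverse=True)
    out = {}
    for n in range(len(ranked), min_n - 1, -1):
        top = ranked[:n]
        out[n] = pearson_r([h for h, _ in top], [m for _, m in top])
    return out
```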
                    {
                        "id": 214,
                        "string": "One sample such plot is in Figure 5 , all language pairs and most of the metrics are in Appendix A."
                    },
                    {
                        "id": 215,
                        "string": "As the plot documents, the official correlations reported in Tables 3 to 5 can lead to wrong conclusions."
                    },
                    {
                        "id": 216,
                        "string": "sacreBLEU-BLEU correlates at .969 when all systems are considered, but as we start considering only the top n systems, the correlation falls relatively quickly."
                    },
                    {
                        "id": 217,
                        "string": "With 10 systems, we are below .5 and when only the top 6 or 4 systems are considered, the correlation falls even to the negave values."
                    },
                    {
                        "id": 218,
                        "string": "Note that correlations point estimates (the value in the y-axis) become noiser with the decreasing number of the underlying MT systems."
                    },
                    {
                        "id": 219,
                        "string": "Figure 6 explains the situation and illus- Top 8 Top 10 Top 12 Top 15 All systems Figure 6 trates the sensitivity of the observed correlations to the exact set of systems."
                    },
                    {
                        "id": 220,
                        "string": "On the full set of systems, the single outlier (the worstperforming system called en_de_task) helps to achieve a great positive correlation."
                    },
                    {
                        "id": 221,
                        "string": "The majority of MT systems however form a cloud with Pearson correlation around .5 and the top 4 systems actually exhibit a negative correlation of the human score and sacreBLEU-BLEU."
                    },
                    {
                        "id": 222,
                        "string": "In Appendix A, baseline metrics are plotted in grey in all the plots, so that their trends can be observed jointly."
                    },
                    {
                        "id": 223,
                        "string": "In general, most baselines have similar correlations, as most baselines use similar features (n-gram or word-level features, with the exception of chrF)."
                    },
                    {
                        "id": 224,
                        "string": "In a number of language pairs (de-en, de-fr, en-de, en-kk, lten, ru-en, zh-en), baseline correlations tend towards 0 (no correlation) or even negative Pearson correlation."
                    },
                    {
                        "id": 225,
                        "string": "For a widely applied metric such as sacreBLEU-BLEU, our analysis reveals weak correlation in comparing top stateof-the-art systems in these language pairs, especially in en-de, de-en, ru-en, and zh-en."
                    },
                    {
                        "id": 226,
                        "string": "We will restrict our analysis to those language pairs where the baseline metrics have an obvious downward trend (de-en, de-fr, en-de, en-kk, lt-en, ru-en, zh-en)."
                    },
                    {
                        "id": 227,
                        "string": "Examining the topn correlation in the submitted metrics (not including QE systems), most metrics show the same degredation in correlation as the baselines."
                    },
                    {
                        "id": 228,
                        "string": "We note BERTr as the one exception consistently degrading less and retaining positive correlation compared to other submitted metrics and baselines, in the language pairs where it participated."
                    },
                    {
                        "id": 229,
                        "string": "For QE systems, we noticed that in some instances, QE systems have upward correlation trends when other metrics and baselines have downward trends."
                    },
                    {
                        "id": 230,
                        "string": "For instance, LP, UNI, and UNI+ in the de-en language pair, YiSi-2 in en-kk, and UNI and UNI+ in ru-en."
                    },
                    {
                        "id": 231,
                        "string": "These results suggest that QE systems such as UNI and UNI+ perform worse on judging systems of wide ranging quality, but better for top performing systems, or perhaps for systems closer in quality."
                    },
                    {
                        "id": 232,
                        "string": "If our method of human assessment is sound, we should believe that BLEU, a widely applied metric, is no longer a reliable metric for judging our best systems."
                    },
                    {
                        "id": 233,
                        "string": "Future investigations are needed to understand when BLEU applies well, and why BLEU is not effective for output from our state of the art models."
                    },
                    {
                        "id": 234,
                        "string": "Metrics and QE systems such as BERTr, ESIM, YiSi that perform well at judging our best systems often use more semantic features compared to our n-gram/char-gram based baselines."
                    },
                    {
                        "id": 235,
                        "string": "Future metrics may want to explore a) whether semantic features such as contextual word embeddings are achieving semantic understanding and b) whether semantic understanding is the true source of a metric's performance gains."
                    },
                    {
                        "id": 236,
                        "string": "It should be noted that some language pairs do not show the strong degrading pattern with top-n systems this year, for instance en-cs, engu, en-ru, or kk-en."
                    },
                    {
                        "id": 237,
                        "string": "English-Chinese is particularly interesting because we see a clear trend towards better correlations as we reduce the set of underlying systems to the top scoring ones."
                    },
                    {
                        "id": 238,
                        "string": "Overall Metric Performance System-Level Evaluation In system-level evaluation, the series of YiSi metrics achieve the highest correlations in several language pairs and it is not significantly outperformed by any other metrics (denoted as a \"win\" in the following) for almost all language pairs."
                    },
                    {
                        "id": 239,
                        "string": "The new metric ESIM performs best on 5 language languages (18 language pairs) and obtains 11 \"wins\" out of 16 language pairs in which ESIM participated."
                    },
                    {
                        "id": 240,
                        "string": "The metric EED performs better for language pairs out-of English and excluding En-glish compared to into-English language pairs, achieving 7 out of 11 \"wins\" there."
                    },
                    {
                        "id": 241,
                        "string": "Segment-Level Evaluation For segment-level evaluation, most language pairs are quite discerning, with only one or two metrics taking the \"winner\" position (of not being significantly surpassed by others)."
                    },
                    {
                        "id": 242,
                        "string": "Only French-German differs, with all metrics performing similarly except the significantly worse sentBLEU."
                    },
                    {
                        "id": 243,
                        "string": "YiSi-1_srl stands out as the \"winner\" for all language pairs in which it participated."
                    },
                    {
                        "id": 244,
                        "string": "The excluded language pairs were probably due to the lack of semantic information required by YiSi-1_srl."
                    },
                    {
                        "id": 245,
                        "string": "YiSi-1 participated all language pairs and its correlations are comparable with those of YiSi-1_srl."
                    },
                    {
                        "id": 246,
                        "string": "ESIM obtain 6 \"winners\" out of all 18 languages pairs."
                    },
                    {
                        "id": 247,
                        "string": "Both YiSi and ESIM are based on neural networks (YiSi via word and phrase embeddings, as well as other types of available resources, ESIM via sentence embeddings)."
                    },
                    {
                        "id": 248,
                        "string": "This is a confirmation of a trend observed last year."
                    },
                    {
                        "id": 249,
                        "string": "QE Systems as Metrics Generally, correlations for the standard reference-based metrics are obviously better than those in \"QE as a Metric\" track, both when using monolingual and bilingual golden truth."
                    },
                    {
                        "id": 250,
                        "string": "In system-level evaluation, correlations for \"QE as a Metric\" range from 0.028 to 0.947 across all language pairs and all metrics but they are very unstable."
                    },
                    {
                        "id": 251,
                        "string": "Even for a single metric, take UNI for example, the correlations range from 0.028 to 0.930 across language pairs."
                    },
                    {
                        "id": 252,
                        "string": "In segment-level evaluation, correlations for QE metrics range from -0.153 to 0.351 across all language pairs and show the same instability across language pairs for a given metric."
                    },
                    {
                        "id": 253,
                        "string": "In either case, we do not see any pattern that could explain the behaviour, e.g."
                    },
                    {
                        "id": 254,
                        "string": "whether the manual evaluation was monolingual or bilingual, or the characteristics of the given language pair."
                    },
                    {
                        "id": 255,
                        "string": "Dependence on Implementation As it already happened in the past, we had multiple implementations for some metrics, BLEU and chrF in particular."
                    },
                    {
                        "id": 256,
                        "string": "The detailed configuration of BLEU and sacreBLEU-BLEU differ and hence their scores and correlation results are different."
                    },
                    {
                        "id": 257,
                        "string": "chrF and sacreBLEU-chrF use the same parameters and should thus deliver the same scores but we still observe some differences, leading to different correlations."
                    },
                    {
                        "id": 258,
                        "string": "For instance for German-French Pearson correlation, chrF obtains 0.931 (no win) but sacreBLEU-chrF reaches 0.952, tying for a win with other metrics."
                    },
                    {
                        "id": 259,
                        "string": "We thus fully support the call for clarity by Post (2018b) and invite authors of metrics to include their implementations either in Moses scorer or sacreBLEU to achieve a long-term assessment of their metric."
                    },
                    {
                        "id": 260,
                        "string": "Conclusion This paper summarizes the results of WMT19 shared task in machine translation evaluation, the Metrics Shared Task."
                    },
                    {
                        "id": 261,
                        "string": "Participating metrics were evaluated in terms of their correlation with human judgement at the level of the whole test set (system-level evaluation), as well as at the level of individual sentences (segment-level evaluation)."
                    },
                    {
                        "id": 262,
                        "string": "We reported scores for standard metrics requiring the reference as well as quality estimation systems which took part in the track \"QE as a metric\", joint with the Quality Estimation task."
                    },
                    {
                        "id": 263,
                        "string": "For system-level, best metrics reach over 0.95 Pearson correlation or better across several language pairs."
                    },
                    {
                        "id": 264,
                        "string": "As expected, QE systems are visibly in all language pairs but they can also reach high system-level correlations, up to .947 (Chinese-English) or .936 (English-German) by YiSi-1_srl or over .9 for multiple language pairs by UNI."
                    },
                    {
                        "id": 265,
                        "string": "An important caveat is that the correlations are heavily affected by the underlying set of MT systems."
                    },
                    {
                        "id": 266,
                        "string": "We explored this by reducing the set of systems to top-n ones for various ns and found out that for many language pairs, system-level correlations are much worse when based on only the better performing systems."
                    },
                    {
                        "id": 267,
                        "string": "With both good and bad MT systems partic-ipating in the news task, the metrics results can be overly optimistic compared to what we get when evaluating state-of-the-art systems."
                    },
                    {
                        "id": 268,
                        "string": "In terms of segment-level Kendall's τ results, the standard metrics correlations varied between 0.03 and 0.59, and QE systems obtained even negative correlations."
                    },
                    {
                        "id": 269,
                        "string": "The results confirm the observation from the last year, namely metrics based on word or sentence-level embeddings (YiSi and ESIM), achieve the highest performance."
                    },
                    {
                        "id": 270,
                        "string": "A Correlations for Top-N Systems"
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 19
                    },
                    {
                        "section": "Task Setup",
                        "n": "2",
                        "start": 20,
                        "end": 24
                    },
                    {
                        "section": "Source and Reference Texts",
                        "n": "2.1",
                        "start": 25,
                        "end": 30
                    },
                    {
                        "section": "System Outputs",
                        "n": "2.2",
                        "start": 31,
                        "end": 49
                    },
                    {
                        "section": "Manual Quality Assessment",
                        "n": "2.3",
                        "start": 50,
                        "end": 64
                    },
                    {
                        "section": "System-level Golden Truth: DA",
                        "n": "2.3.1",
                        "start": 65,
                        "end": 68
                    },
                    {
                        "section": "Segment-level Golden Truth: daRR",
                        "n": "2.3.2",
                        "start": 69,
                        "end": 82
                    },
                    {
                        "section": "Baseline Metrics",
                        "n": "3",
                        "start": 83,
                        "end": 114
                    },
                    {
                        "section": "Submitted Metrics",
                        "n": "4",
                        "start": 115,
                        "end": 115
                    },
                    {
                        "section": "BEER",
                        "n": "4.1",
                        "start": 115,
                        "end": 117
                    },
                    {
                        "section": "BERTr",
                        "n": "4.2",
                        "start": 118,
                        "end": 120
                    },
                    {
                        "section": "CharacTER",
                        "n": "4.3",
                        "start": 121,
                        "end": 129
                    },
                    {
                        "section": "EED",
                        "n": "4.4",
                        "start": 130,
                        "end": 134
                    },
                    {
                        "section": "ESIM",
                        "n": "4.5",
                        "start": 135,
                        "end": 137
                    },
                    {
                        "section": "hLEPORb_baseline, hLEPORa_baseline",
                        "n": "4.6",
                        "start": 138,
                        "end": 146
                    },
                    {
                        "section": "PReP",
                        "n": "4.8",
                        "start": 147,
                        "end": 152
                    },
                    {
                        "section": "WMDO",
                        "n": "4.9",
                        "start": 153,
                        "end": 156
                    },
                    {
                        "section": "YiSi-0, YiSi-1, YiSi-1_srl, YiSi-2, YiSi-2_srl",
                        "n": "4.10",
                        "start": 157,
                        "end": 162
                    },
                    {
                        "section": "QE Systems",
                        "n": "4.11",
                        "start": 163,
                        "end": 164
                    },
                    {
                        "section": "Results",
                        "n": "5",
                        "start": 165,
                        "end": 167
                    },
                    {
                        "section": "System-Level Evaluation",
                        "n": "5.1",
                        "start": 168,
                        "end": 172
                    },
                    {
                        "section": "System-Level Results",
                        "n": "5.1.1",
                        "start": 173,
                        "end": 181
                    },
                    {
                        "section": "Segment-Level Evaluation",
                        "n": "5.2",
                        "start": 182,
                        "end": 205
                    },
                    {
                        "section": "Segment-Level Results",
                        "n": "5.2.1",
                        "start": 206,
                        "end": 207
                    },
                    {
                        "section": "Discussion",
                        "n": "6",
                        "start": 208,
                        "end": 209
                    },
                    {
                        "section": "Stability across MT Systems",
                        "n": "6.1",
                        "start": 210,
                        "end": 237
                    },
                    {
                        "section": "System-Level Evaluation",
                        "n": "6.2.1",
                        "start": 238,
                        "end": 240
                    },
                    {
                        "section": "Segment-Level Evaluation",
                        "n": "6.2.2",
                        "start": 241,
                        "end": 248
                    },
                    {
                        "section": "QE Systems as Metrics",
                        "n": "6.2.3",
                        "start": 249,
                        "end": 254
                    },
                    {
                        "section": "Dependence on Implementation",
                        "n": "6.3",
                        "start": 255,
                        "end": 259
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 260,
                        "end": 270
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1012-Figure1-1.png",
                        "caption": "Figure 1: System-level metric significance test results for DA human assessment for into English and out-of English language pairs (newstest2019): Green cells denote a statistically significant increase in correlation with human assessment for the metric in a given row over the metric in a given column according to Williams test.",
                        "page": 10,
                        "bbox": {
                            "x1": 115.19999999999999,
                            "x2": 480.0,
                            "y1": 70.56,
                            "y2": 693.12
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Table7-1.png",
                        "caption": "Table 7: Segment-level metric results for out-of-English language pairs in newstest2019: absolute Kendall’s Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.",
                        "page": 14,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 505.44,
                            "y1": 62.4,
                            "y2": 347.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Table8-1.png",
                        "caption": "Table 8: Segment-level metric results for language pairs not involving English in newstest2019: absolute Kendall’s Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.",
                        "page": 14,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 292.32,
                            "y1": 439.68,
                            "y2": 655.1999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Table4-1.png",
                        "caption": "Table 4: Absolute Pearson correlation of out-of-English system-level metrics with DA human assessment in newstest2019; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.",
                        "page": 9,
                        "bbox": {
                            "x1": 99.84,
                            "x2": 497.28,
                            "y1": 203.04,
                            "y2": 572.16
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Table6-1.png",
                        "caption": "Table 6: Segment-level metric results for to-English language pairs in newstest2019: absolute Kendall’s Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.",
                        "page": 13,
                        "bbox": {
                            "x1": 85.92,
                            "x2": 508.32,
                            "y1": 231.84,
                            "y2": 543.36
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Table5-1.png",
                        "caption": "Table 5: Absolute Pearson correlation of system-level metrics for language pairs not involving English with DA human assessment in newstest2019; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.",
                        "page": 12,
                        "bbox": {
                            "x1": 192.95999999999998,
                            "x2": 404.15999999999997,
                            "y1": 101.75999999999999,
                            "y2": 400.32
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Figure2-1.png",
                        "caption": "Figure 2: System-level metric significance test results for DA human assessment in newstest2019 for German to Czech, German to French and French to German; green cells denote a statistically significant increase in correlation with human assessment for the metric in a given row over the metric in a given column according to Williams test.",
                        "page": 12,
                        "bbox": {
                            "x1": 115.19999999999999,
                            "x2": 480.0,
                            "y1": 537.12,
                            "y2": 661.4399999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Table1-1.png",
                        "caption": "Table 1: Number of judgements for DA converted to daRR data; “DA>1” is the number of source input sentences in the manual evaluation where at least two translations of that same source input segment received a DA judgement; “Ave” is the average number of translations with at least one DA judgement available for the same source input sentence; “DA pairs” is the number of all possible pairs of translations of the same source input resulting from “DA>1”; and “daRR” is the number of DA pairs with an absolute difference in DA scores greater than the 25 percentage point margin.",
                        "page": 3,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 62.4,
                            "y2": 360.0
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Figure4-1.png",
                        "caption": "Figure 4: daRR segment-level metric significance test results for German to Czech, German to French and French to German (newstest2019): Green cells denote a significant win for the metric in a given row over the metric in a given column according bootstrap resampling.",
                        "page": 16,
                        "bbox": {
                            "x1": 114.72,
                            "x2": 480.0,
                            "y1": 62.879999999999995,
                            "y2": 188.64
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Figure5-1.png",
                        "caption": "Figure 5: Pearson correlations of sacreBLEUBLEU for English-German system-level evaluation for all systems (left) down to only top 4 systems (right). The y-axis spans from -1 to +1, baseline metrics for the language pair in grey.",
                        "page": 16,
                        "bbox": {
                            "x1": 349.91999999999996,
                            "x2": 482.88,
                            "y1": 258.71999999999997,
                            "y2": 345.59999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Table3-1.png",
                        "caption": "Table 3: Absolute Pearson correlation of to-English system-level metrics with DA human assessment in newstest2019; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.",
                        "page": 8,
                        "bbox": {
                            "x1": 92.64,
                            "x2": 502.08,
                            "y1": 188.16,
                            "y2": 587.04
                        }
                    },
                    {
                        "filename": "../figure/image/1012-Figure3-1.png",
                        "caption": "Figure 3: daRR segment-level metric significance test results for into English and out-of English language pairs (newstest2019): Green cells denote a significant win for the metric in a given row over the metric in a given column according bootstrap resampling.",
                        "page": 15,
                        "bbox": {
                            "x1": 108.0,
                            "x2": 486.71999999999997,
                            "y1": 72.96,
                            "y2": 702.72
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-17"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivations",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "Insufficient or even unavailable training data of emerging classes is a big",
                        "challenge in real-world text classification.",
                        "Zero-shot text classification recognising text documents of classes that",
                        "have never been seen in the learning stage",
                        "In this paper, we propose a two-phase framework together with data",
                        "augmentation and feature augmentation to solve this problem."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Zero shot Text Classification",
                    "text": [
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "Let and be disjoint sets of seen and unseen classes of the classification",
                        "In the learning stage, a training set is given where",
                        "is the document containing a sequence of words",
                        "is the class of",
                        "In the inference stage, the goal is to predict the class of each document, , in",
                        "Supportive semantic knowledge is needed to generally infer the features of unseen classes using patterns learned from seen classes."
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Our Proposed Framework Overview",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "We integrate four kinds of semantic",
                        "knowledge into our framework:",
                        "Data augmentation technique helps the classifiers be aware of the existence of unseen classes without accessing their real data. Feature augmentation provides additional information which relates the document and the unseen classes to generalise the zero-shot reasoning."
                    ],
                    "page_nums": [
                        4,
                        5
                    ],
                    "images": [
                        "figure/image/1014-Figure1-1.png",
                        "figure/image/1014-Figure2-1.png"
                    ]
                },
                "3": {
                    "title": "Phase 1 Coarse grained Classification",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "Each seen class has its own CNN text classifier to predict",
                        "The classifier is trained with all documents of its class in the training set",
                        "as positive examples and the rest as negative examples.",
                        "For a test document , this phase computes for every seen",
                        "If there exists a class such that > , it predicts",
                        "is a classification threshold for the class , calculated based on the",
                        "threshold adaptation method from (Shu et al., 2017)"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
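The slide above describes Phase 1 as one binary CNN classifier per seen class with per-class thresholds. A minimal sketch of that decision rule, assuming hypothetical classifier objects with a predict_proba method and precomputed per-class thresholds:

```python
def coarse_grained_predict(doc, classifiers, thresholds):
    """classifiers: {class_name: binary model, predict_proba(doc) -> float}
    thresholds: {class_name: per-class threshold (Shu et al., 2017)}.
    Returns the most confident seen class whose confidence exceeds its
    threshold, or 'UNSEEN' so the document is routed to Phase 2."""
    best_class, best_conf = None, 0.0
    for c, clf in classifiers.items():
        conf = clf.predict_proba(doc)
        if conf > thresholds[c] and conf > best_conf:
            best_class, best_conf = c, conf
    return best_class if best_class is not None else "UNSEEN"
```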
                "5": {
                    "title": "Phase 2 Fine grained Classification",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "The traditional classifier is a multi-class classifier (|| classes) with a softmax",
                        "output, so it requires only the word embeddings as an input.",
                        "The zero-shot classifier is a binary classifier with a sigmoid output. It takes a text document and a class as inputs and predicts the confidence"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": [
                        "figure/image/1014-Figure1-1.png"
                    ]
                },
                "6": {
                    "title": "Phase 2 Zero shot Classifier",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "The zero-shot classifier predicts",
                        "shows how the word and",
                        "the class are related considering",
                        "the relations in a general",
                        "This classifier is trained with a training data from seen classes only."
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": [
                        "figure/image/1014-Figure2-1.png"
                    ]
                },
                "8": {
                    "title": "Experiments",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "DBpedia ontology : 14 classes",
                        "20newsgroups : 20 classes"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": [
                        "figure/image/1014-Table1-1.png"
                    ]
                },
                "9": {
                    "title": "An Experiments for Phase 1",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "Compare with DOC a",
                        "For seen classes, our",
                        "DOC on both datasets.",
                        "improved the accuracy of",
                        "unseen classes clearly and led to higher overall accuracy in every setting."
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "10": {
                    "title": "An Experiments for Phase 2",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "Using only could not find",
                        "out the correct unseen class",
                        "accuracy of predicting unseen",
                        "highest accuracy in all settings."
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": [
                        "figure/image/1014-Table6-1.png",
                        "figure/image/1014-Table5-1.png"
                    ]
                },
                "11": {
                    "title": "An Experiments for the Whole Framework",
                    "text": [
                        "Imperial College Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "London Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "Table 2: The accuracy of the whole framework compared with the baselines.",
                        "Label RNN + FC",
                        "Unseen / - Similarity RNN (Pushp and 5",
                        "Dataset rate Yi Count-based (Sappadla Autoencoder Srivastava, CNN + FC Ours"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": [
                        "figure/image/1014-Table2-1.png"
                    ]
                },
                "12": {
                    "title": "Conclusions",
                    "text": [
                        "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                        "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo",
                        "To tackle zero-shot text classification, we proposed a novel CNN-based two-",
                        "phase framework together with data augmentation and feature augmentation.",
                        "The experiments show that",
                        "data augmentation improved the accuracy in detecting instances from unseen",
                        "feature augmentation enabled knowledge transfer from seen to unseen classes",
                        "our work achieved the highest overall accuracy compared with all the baselines",
                        "and recent approaches in all settings.",
                        "multi-label classification with a larger amount of data",
                        "utilise semantic units defined by linguists in the zero-shot scenario"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                }
            },
            "paper_title": "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
            "paper_id": "1014",
            "paper": {
                "title": "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification",
                "abstract": "Insufficient or even unavailable training data of emerging classes is a big challenge of many classification tasks, including text classification. Recognising text documents of classes that have never been seen in the learning stage, so-called zero-shot text classification, is therefore difficult and only limited previous works tackled this problem. In this paper, we propose a two-phase framework together with data augmentation and feature augmentation to solve this problem. Four kinds of semantic knowledge (word embeddings, class descriptions, class hierarchy, and a general knowledge graph) are incorporated into the proposed framework to deal with instances of unseen classes effectively. Experimental results show that each and the combination of the two phases achieve the best overall accuracy compared with baselines and recent approaches in classifying real-world texts under the zeroshot scenario. * Piyawat Lertvittayakumjorn and Jingqing Zhang contributed equally to this project.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction As one of the most fundamental problems in machine learning, automatic classification has been widely studied in several domains."
                    },
                    {
                        "id": 1,
                        "string": "However, many approaches, proven to be effective in traditional classification tasks, cannot catch up with a dynamic and open environment where new classes can emerge after the learning stage (Romera-Paredes and Torr, 2015) ."
                    },
                    {
                        "id": 2,
                        "string": "For example, the number of topics on social media is growing rapidly, and the classification models are required to recognise the text of the new topics using only general information (e.g., descriptions of the topics) since labelled training instances are unfeasible to obtain for each new topic (Lee et al., 2011) ."
                    },
                    {
                        "id": 3,
                        "string": "This scenario holds in many real-world domains such as object recognition and medical diagnosis (Xian et al., 2017; World Health Organization, 1996) ."
                    },
                    {
                        "id": 4,
                        "string": "Zero-shot learning (ZSL) for text classification aims to classify documents of classes which are absent from the learning stage."
                    },
                    {
                        "id": 5,
                        "string": "Although it is challenging for a machine to achieve, humans are able to learn new concepts by transferring knowledge from known to unknown domains based on high-level descriptions and semantic representations (Thrun and Pratt, 1998) ."
                    },
                    {
                        "id": 6,
                        "string": "Therefore, without labelled data of unseen classes, a zero-shot learning framework is expected to exploit supportive semantic knowledge (e.g., class descriptions, relations among classes, and external domain knowledge) to generally infer the features of unseen classes using patterns learned from seen classes."
                    },
                    {
                        "id": 7,
                        "string": "So far, three main types of semantic knowledge have been employed in general zero-shot scenarios ."
                    },
                    {
                        "id": 8,
                        "string": "The most widely used one is semantic attributes of classes such as visual concepts (e.g., colours, shapes) and semantic properties (e.g., behaviours, functions) (Lampert et al., 2009; Zhao et al., 2018) ."
                    },
                    {
                        "id": 9,
                        "string": "The second type is concept ontology, including class hierarchy and knowledge graphs, which represents relationships among classes and features Fergus et al., 2010) ."
                    },
                    {
                        "id": 10,
                        "string": "The third type is semantic word embeddings which capture implicit relationships between words thanks to a large training text corpus (Socher et al., 2013; Norouzi et al., 2013) ."
                    },
                    {
                        "id": 11,
                        "string": "Nonetheless, concerning ZSL in text classification particularly, there are few studies exploiting one of these knowledge types and none has considered the combinations of them (Pushp and Srivastava, 2017; Dauphin et al., 2013) ."
                    },
                    {
                        "id": 12,
                        "string": "Moreover, some previous works used different datasets to train and test, but there is similarity between classes in the training and testing set."
                    },
                    {
                        "id": 13,
                        "string": "For example, in (Dauphin et al., 2013) , the class \"imdb.com\" in the training set naturally corresponds to the class \"Movies\" in the testing set."
                    },
                    {
                        "id": 14,
                        "string": "Hence, these methods are not working under a strict zero-shot scenario."
                    },
                    {
                        "id": 15,
                        "string": "To tackle the zero-shot text classification problem, this paper proposes a novel two-phase framework together with data augmentation and feature augmentation (Figure 1 )."
                    },
                    {
                        "id": 16,
                        "string": "In addition, four kinds of semantic knowledge including word embeddings, class descriptions, class hierarchy, and a general knowledge graph (ConceptNet) are exploited in the framework to effectively learn the unseen classes."
                    },
                    {
                        "id": 17,
                        "string": "Both of the two phases are based on convolutional neural networks (Kim, 2014) ."
                    },
                    {
                        "id": 18,
                        "string": "The first phase called coarse-grained classification judges if a document is from seen or unseen classes."
                    },
                    {
                        "id": 19,
                        "string": "Then, the second phase, named finegrained classification, finally decides its class."
                    },
                    {
                        "id": 20,
                        "string": "Note that all the classifiers in this framework are trained using labelled data of seen classes (and augmented text data) only."
                    },
                    {
                        "id": 21,
                        "string": "None of the steps learns from the labelled data of unseen classes."
                    },
                    {
                        "id": 22,
                        "string": "The contributions of our work can be summarised as follows."
                    },
                    {
                        "id": 23,
                        "string": "• We propose a novel deep learning based twophase framework, including coarse-grained and fine-grained classification, to tackle the zero-shot text classification problem."
                    },
                    {
                        "id": 24,
                        "string": "Unlike some previous works, our framework does not require semantic correspondence between classes in a training stage and classes in an inference stage."
                    },
                    {
                        "id": 25,
                        "string": "In other words, the seen and unseen classes can be clearly different."
                    },
                    {
                        "id": 26,
                        "string": "• We propose a novel data augmentation technique called topic translation to strengthen the capability of our framework to detect documents from unseen classes effectively."
                    },
                    {
                        "id": 27,
                        "string": "• We propose a method to perform feature augmentation by using integrated semantic knowledge to transfer the knowledge learned from seen to unseen classes in the zero-shot scenario."
                    },
                    {
                        "id": 28,
                        "string": "In the remainder of this paper, we firstly explain our proposed zero-shot text classification framework in section 2."
                    },
                    {
                        "id": 29,
                        "string": "Experiments and results, which demonstrate the performance of our framework, are presented in section 3."
                    },
                    {
                        "id": 30,
                        "string": "Related works are discussed in section 4."
                    },
                    {
                        "id": 31,
                        "string": "Finally, section 5 concludes our work and mentions possible future work."
                    },
                    {
                        "id": 32,
                        "string": "Methodology Problem Formulation Let C S and C U be disjoint sets of seen and unseen classes of the classification respectively."
                    },
                    {
                        "id": 33,
                        "string": "In the learning stage, a training set {(x_1, y_1), ..., (x_n, y_n)} is given, where x_i is the i-th document containing a sequence of words [w^i_1, w^i_2, ..., w^i_t] and y_i ∈ C_S is the class of x_i."
                    },
                    {
                        "id": 40,
                        "string": "In the inference stage, the goal is to predict the class of each document,ŷ i , in a testing set which has the same data format as the training set except that y i comes from C S ∪ C U ."
                    },
                    {
                        "id": 41,
                        "string": "Note that (i) every class comes with a class label and a class description ( Figure 2a ); (ii) a class hierarchy showing superclass-subclass relationships is also provided ( Figure 2b) ; (iii) the documents from unseen classes cannot be observed to train the framework."
                    },
                    {
                        "id": 42,
                        "string": "Overview and Notations As discussed in the Introduction, our proposed classification framework consists of two phases ( Figure 1 )."
                    },
                    {
                        "id": 43,
                        "string": "The first phase, coarse-grained classification, predicts whether an input document comes from seen or unseen classes."
                    },
                    {
                        "id": 44,
                        "string": "We also apply a data augmentation technique in this phase to help the classifiers be aware of the existence of unseen classes without accessing their real data."
                    },
                    {
                        "id": 45,
                        "string": "Then the second phase, fine-grained classification, finally specifies the class of the input document."
                    },
                    {
                        "id": 46,
                        "string": "It uses either a traditional classifier or a zero-shot classifier depending on the coarse-grained prediction given by Phase 1."
                    },
                    {
                        "id": 47,
                        "string": "Also, feature augmentation based on semantic knowledge is used to provide additional information which relates the document and the unseen classes to generalise the zero-shot reasoning."
                    },
                    {
                        "id": 48,
                        "string": "We use the following notations in Figure 1 and throughout this paper."
                    },
                    {
                        "id": 49,
                        "string": "• The list of embeddings of each word in the document x_i is denoted by v^i_w = [v^i_{w_1}, v^i_{w_2}, ..., v^i_{w_t}]."
                    },
                    {
                        "id": 53,
                        "string": "• The embedding of each class label c is denoted by v c , ∀c ∈ C S ∪ C U ."
                    },
                    {
                        "id": 54,
                        "string": "It is assumed that each class has a one-word class label."
                    },
                    {
                        "id": 55,
                        "string": "If the class label has more than one word, a similar one-word class label is provided to find v c ."
                    },
                    {
                        "id": 56,
                        "string": "• As augmented features, the relationship vec-tor v i w j ,c shows the degree of relatedness between the word w j and the class c according to semantic knowledge."
                    },
                    {
                        "id": 57,
                        "string": "Hence, the list of relationship vectors between each word in x_i and each class c ∈ C_S ∪ C_U is denoted by v^i_{w,c} = [v^i_{w_1,c}, v^i_{w_2,c}, ..., v^i_{w_t,c}]."
                    },
                    {
                        "id": 61,
                        "string": "We will explain the construction method in section 2.4.1."
                    },
                    {
                        "id": 62,
                        "string": "Given a document x_i, Phase 1 performs a binary classification to decide whether ŷ_i ∈ C_S or ŷ_i ∉ C_S."
                    },
                    {
                        "id": 63,
                        "string": "In this phase, each seen class c_s ∈ C_S has its own CNN classifier (with a subsequent dense layer and a sigmoid output) to predict the confidence that x_i comes from the class c_s, i.e., p(ŷ_i = c_s | x_i)."
                    },
                    {
                        "id": 64,
                        "string": "The classifier uses v^i_w as an input and it is trained using a binary cross entropy loss with all documents of its class in the training set as positive examples and the rest as negative examples."
                    },
                    {
                        "id": 65,
                        "string": "For a test document x_i, this phase computes p(ŷ_i = c_s | x_i) for every seen class c_s in C_S."
                    },
                    {
                        "id": 66,
                        "string": "If there exists a class c_s such that p(ŷ_i = c_s | x_i) > τ_s, it predicts ŷ_i ∈ C_S; otherwise, ŷ_i ∉ C_S."
                    },
                    {
                        "id": 67,
                        "string": "τ_s is a classification threshold for the class c_s, calculated based on the threshold adaptation method from Shu et al. (2017)."
                    },
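                    {
                        "id": 67.5,
                        "string": "A minimal Python sketch of the Phase 1 decision rule above, assuming the per-class sigmoid CNNs and the adapted thresholds τ_s are already trained (classifiers, thresholds, and predict_proba are hypothetical names, not from the paper):\n\ndef phase1_is_seen(x_doc, classifiers, thresholds):\n    # classifiers: dict mapping each seen class c_s to a model whose\n    # predict_proba(x) returns p(y_hat = c_s | x); thresholds: dict of tau_s.\n    scores = {c: clf.predict_proba(x_doc) for c, clf in classifiers.items()}\n    # Predict 'seen' iff some per-class confidence exceeds its own threshold.\n    return any(scores[c] > thresholds[c] for c in scores), scores"
                    },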
                    {
                        "id": 68,
                        "string": "Data Augmentation During the learning stage, the classifiers in Phase 1 use negative examples solely from seen classes, so they may not be able to differentiate the positive class from unseen classes."
                    },
                    {
                        "id": 69,
                        "string": "Hence, when the names of unseen classes are known in the inference stage, we try to introduce them to the classifiers in Phase 1 via augmented data so they can learn to reject the instances likely from unseen classes."
                    },
                    {
                        "id": 70,
                        "string": "We do data augmentation by translating a document from its original seen class to a new unseen class using analogy."
                    },
                    {
                        "id": 71,
                        "string": "We call this process topic translation."
                    },
                    {
                        "id": 72,
                        "string": "In the word level, we translate a word w in a document of class c to a corresponding word w in the context of a target class c by solving an analogy question \"c:w :: c :?\"."
                    },
                    {
                        "id": 73,
                        "string": "For example, solving the analogy \"company:firm :: village:?\""
                    },
                    {
                        "id": 74,
                        "string": "via word embeddings , we know that the word \"firm\" in a document of class \"company\" can be translated into the word \"hamlet\" in the context of class \"village\"."
                    },
                    {
                        "id": 75,
                        "string": "Our framework adopts the 3COSMUL method by Levy and Goldberg (2014) to solve the analogy question and find candidates of w : w = argmax x∈V cos(x, c ) cos(x, w) cos(x, c) + where V is a vocabulary set and cos(a, b) is a cosine similarity score between the vectors of word a and word b."
                    },
                    {
                        "id": 76,
                        "string": "Also, is a small number (i.e., 0.001) added to prevent division by zero."
                    },
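                    {
                        "id": 76.5,
                        "string": "A small sketch of 3COSMUL over a toy vocabulary (vecs is an assumed dict from word to numpy vector; the paper actually uses gensim's built-in implementation):\n\nimport numpy as np\n\ndef cos(a, b):\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef three_cosmul(vecs, c, w, c_target, eps=0.001):\n    # Solve the analogy 'c : w :: c_target : ?' by scoring every other word.\n    best, best_score = None, float('-inf')\n    for x, vx in vecs.items():\n        if x in (c, w, c_target):\n            continue  # exclude the query words themselves\n        score = cos(vx, vecs[c_target]) * cos(vx, vecs[w]) / (cos(vx, vecs[c]) + eps)\n        if score > best_score:\n            best, best_score = x, score\n    return best"
                    },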
                    {
                        "id": 77,
                        "string": "In the document level, we follow Algorithm 1 to translate a document of class c into the topic of another class c ."
                    },
                    {
                        "id": 78,
                        "string": "To explain, we translate all nouns, verbs, adjectives, and adverbs in the given document to the target class, word-by-word, using the word-level analogy."
                    },
                    {
                        "id": 79,
                        "string": "The word to replace must have the same part of speech as the original word and all the replacements in one document are 1-to-1 relations, enforced by replace dict in Algorithm 1."
                    },
                    {
                        "id": 80,
                        "string": "With this idea, we can create augmented documents for the unseen classes by topic-translation from the documents of seen classes in the training dataset."
                    },
                    {
                        "id": 81,
                        "string": "After that, we can use the augmented documents as additional negative examples for all the CNNs in Phase 1 to make them aware of the tone of unseen classes."
                    },
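                    {
                        "id": 81.5,
                        "string": "A rough sketch of the document-level topic translation loop (Algorithm 1 itself is not reproduced here; pos_tag and solve_analogy are assumed helpers, e.g. a per-token POS tagger and the 3COSMUL solver above):\n\nCONTENT_TAGS = {'NOUN', 'VERB', 'ADJ', 'ADV'}\n\ndef topic_translate(tokens, src_class, tgt_class, pos_tag, solve_analogy):\n    replace_dict = {}  # enforces 1-to-1 replacements within one document\n    used = set()\n    out = []\n    for w in tokens:\n        if pos_tag(w) not in CONTENT_TAGS:\n            out.append(w)\n            continue\n        if w not in replace_dict:\n            cand = solve_analogy(src_class, w, tgt_class)\n            # keep the original word if no fresh same-POS candidate is found\n            if cand is None or cand in used or pos_tag(cand) != pos_tag(w):\n                cand = w\n            replace_dict[w] = cand\n            used.add(cand)\n        out.append(replace_dict[w])\n    return out"
                    },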
                    {
                        "id": 82,
                        "string": "Phase 2 decides the most appropriate classŷ i for x i using two CNN classifiers: a traditional classifier and a zero-shot classifier as shown in Figure  1 ."
                    },
                    {
                        "id": 83,
                        "string": "Ifŷ i ∈ C S predicted by Phase 1, the traditional classifier will finally select a class c s ∈ C S asŷ i ."
                    },
                    {
                        "id": 84,
                        "string": "Otherwise, ifŷ i / ∈ C S , the zero-shot classifier will be used to select a class c u ∈ C U asŷ i ."
                    },
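                    {
                        "id": 84.5,
                        "string": "Putting the two phases together, inference could look roughly like this (phase1_is_seen is a partially applied version of the Phase 1 sketch above; traditional_clf and zero_shot_clf are placeholder callables for the two Phase 2 CNNs):\n\ndef classify(x_doc, phase1_is_seen, traditional_clf, zero_shot_clf, unseen_classes):\n    # phase1_is_seen: e.g. functools.partial over trained models and thresholds.\n    is_seen, _ = phase1_is_seen(x_doc)\n    if is_seen:\n        return traditional_clf(x_doc)  # argmax over the seen classes C_S\n    # otherwise pick the unseen class with the highest zero-shot confidence\n    return max(unseen_classes, key=lambda c: zero_shot_clf(x_doc, c))"
                    },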
                    {
                        "id": 85,
                        "string": "The traditional classifier and the zero-shot classifier have an identical CNN-based structure followed by two dense layers but their inputs and outputs are different."
                    },
                    {
                        "id": 86,
                        "string": "The traditional classifier is a multi-class classifier (|C S | classes) with a softmax output, so it requires only the word embeddings v i w as an input."
                    },
                    {
                        "id": 87,
                        "string": "This classifier is trained using a cross entropy loss with a training dataset whose examples are from seen classes only."
                    },
                    {
                        "id": 88,
                        "string": "In contrast, the zero-shot classifier is a binary classifier with a sigmoid output."
                    },
                    {
                        "id": 89,
                        "string": "Specifically, it takes a text document x i and a class c as inputs and predicts the confidence p(ŷ i = c|x i )."
                    },
                    {
                        "id": 90,
                        "string": "However, in practice, we utilise v i w to represent x i , v c to represent the class c, and also augmented features v i w,c to provide more information on how intimate the connections between words and the class c are."
                    },
                    {
                        "id": 91,
                        "string": "Altogether, for each word w j , the classifier receives the concatenation of three vectors (i.e., [v i w j ; v c ; v i w j ,c ]) as an input."
                    },
                    {
                        "id": 92,
                        "string": "This classifier is trained using a binary cross entropy loss with a training data from seen classes only, but we expect this classifier to work well on unseen classes thanks to the distinctive patterns of v i w,c in positive examples of every class."
                    },
                    {
                        "id": 93,
                        "string": "This is how we transfer knowledge from seen to unseen classes in ZSL."
                    },
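                    {
                        "id": 93.5,
                        "string": "For concreteness, the per-word input to the zero-shot classifier can be assembled as below (shapes assume the paper's 200-dim GloVe embeddings and K = 3, so each relationship vector has 3 × (3K + 1) = 30 dimensions; the function name is ours):\n\nimport numpy as np\n\ndef zero_shot_inputs(word_vecs, class_vec, rel_vecs):\n    # word_vecs: (t, 200) embeddings of the t words in the document;\n    # class_vec: (200,) embedding of the class label; rel_vecs: (t, 30).\n    t = word_vecs.shape[0]\n    tiled = np.tile(class_vec, (t, 1))  # repeat v_c for every word position\n    return np.concatenate([word_vecs, tiled, rel_vecs], axis=1)  # (t, 430)"
                    },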
                    {
                        "id": 94,
                        "string": "Feature Augmentation The relationship vector v w j ,c contains augmented features we input to the zero-shot classifier."
                    },
                    {
                        "id": 95,
                        "string": "v w j ,c shows how the word w j and the class c are related considering the relations in a general knowledge graph."
                    },
                    {
                        "id": 96,
                        "string": "In this work, we use ConceptNet providing general knowledge of natural language words and phrases (Speer and Havasi, 2013) ."
                    },
                    {
                        "id": 97,
                        "string": "A subgraph of ConceptNet is shown in Figure 2c as an illustration."
                    },
                    {
                        "id": 98,
                        "string": "Nodes in ConceptNet are words or phrases, while edges connecting two nodes show how they are related either syntactically or semantically."
                    },
                    {
                        "id": 99,
                        "string": "We firstly represent a class c as three sets of nodes in ConceptNet by processing the class hierarchy, class label, and class description of c. (1) the class nodes is a set of nodes of the class label c and any tokens inside c if c has more than one word."
                    },
                    {
                        "id": 100,
                        "string": "(2) superclass nodes is a set of nodes of all the superclasses of c according to the class hierarchy."
                    },
                    {
                        "id": 101,
                        "string": "(3) description nodes is a set of nodes of all nouns in the description of the class c. For example, if c is the class \"Educational Institution\", according to Figure 2a -2b, the three sets of Con-ceptNet nodes for this class are: (1) educational institution, educational, institution (2) organization, agent (3) place, people, ages, education."
                    },
                    {
                        "id": 102,
                        "string": "To construct v w j ,c , we consider whether the word w j is connected to the members of the three sets above within K hops by particular types of relations or not 1 ."
                    },
                    {
                        "id": 103,
                        "string": "For each of the three sets, we construct a vector with 3K + 1 dimensions."
                    },
                    {
                        "id": 104,
                        "string": "• v[0] = 1 if w_j is a node in that set; otherwise, v[0] = 0."
                    },
                    {
                        "id": 105,
                        "string": "• For k = 0, ..., K−1: v[3k+1] = 1 if there is a node in the set whose shortest path to w_j is k+1, and v[3k+1] = 0 otherwise; v[3k+2] is the number of nodes in the set whose shortest path to w_j is k+1; and v[3k+3] is v[3k+2] divided by the total number of nodes in the set."
                    },
                    {
                        "id": 112,
                        "string": "Thus, the vector associated to each set shows how w j is semantically close to that set."
                    },
                    {
                        "id": 113,
                        "string": "Finally, we concatenate the constructed vectors from the three sets to become v w j ,c with 3×(3K+1) dimensions."
                    },
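                    {
                        "id": 113.5,
                        "string": "A direct transcription of this construction, under the assumption of a helper shortest_path_len(w, n) that returns the ConceptNet hop count from word w to node n, or None when unreachable within K hops:\n\ndef set_vector(w, node_set, shortest_path_len, K=3):\n    # Build the (3K + 1)-dim vector described above for one node set.\n    v = [0.0] * (3 * K + 1)\n    v[0] = 1.0 if w in node_set else 0.0\n    dists = [shortest_path_len(w, n) for n in node_set]\n    for k in range(K):\n        count = sum(1 for d in dists if d == k + 1)\n        v[3 * k + 1] = 1.0 if count > 0 else 0.0\n        v[3 * k + 2] = float(count)\n        v[3 * k + 3] = count / len(node_set) if node_set else 0.0\n    return v\n\ndef relationship_vector(w, class_nodes, superclass_nodes, description_nodes, spl, K=3):\n    # Concatenating over the three sets gives the 3 * (3K + 1) dimensions.\n    return (set_vector(w, class_nodes, spl, K)\n            + set_vector(w, superclass_nodes, spl, K)\n            + set_vector(w, description_nodes, spl, K))"
                    },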
                    {
                        "id": 114,
                        "string": "Experiments Datasets We used two textual datasets for the experiments."
                    },
                    {
                        "id": 115,
                        "string": "The vocabulary size of each dataset was limited by 20,000 most frequent words and all numbers were excluded."
                    },
                    {
                        "id": 116,
                        "string": "(1) DBpedia ontology dataset  includes 14 non-overlapping classes and textual data collected from Wikipedia."
                    },
                    {
                        "id": 117,
                        "string": "Each class has 40,000 training and 5,000 testing samples."
                    },
                    {
                        "id": 118,
                        "string": "(2) The 20newsgroups dataset 2 has 20 topics each of which has approximately 1,000 documents."
                    },
                    {
                        "id": 119,
                        "string": "70% of the documents of each class were randomly selected for training, and the remaining 30% were used as a testing set."
                    },
                    {
                        "id": 120,
                        "string": "Implementation Details 3 In our experiments, two different rates of unseen classes, 50% and 25%, were chosen and the corresponding sizes of C S and C U are shown in Table 1 ."
                    },
                    {
                        "id": 121,
                        "string": "For each dataset and each unseen rate, the random selection of (C_S, C_U) was repeated ten times, and these ten groups were used by all the experiments with this setting for a fair comparison."
                    },
                    {
                        "id": 122,
                        "string": "(Footnote 1: In this paper, we only consider the most common types of positive relations, which are RelatedTo, IsA, PartOf, and AtLocation; they cover ∼60% of all edges in ConceptNet.)"
                    },
                    {
                        "id": 123,
                        "string": "(Footnote 2: http://qwone.com/~jason/20Newsgroups/. Footnote 3: Code at https://github.com/JingqingZ/KG4ZeroShotText.)"
                    },
                    {
                        "id": 125,
                        "string": "All documents from C U were removed from the training set accordingly."
                    },
                    {
                        "id": 126,
                        "string": "Finally, the results from all the ten groups were averaged."
                    },
                    {
                        "id": 127,
                        "string": "In Phase 1, the structure of each classifier was identical."
                    },
                    {
                        "id": 128,
                        "string": "The CNN layer had three filter sizes [3, 4, 5] with 400 filters for each filter size and the subsequent dense layer had 300 units."
                    },
                    {
                        "id": 129,
                        "string": "For data augmentation, we used gensim with an implementation of 3COSMUL (Řehůřek and Sojka, 2010) to solve the word-level analogy (line 5 in Algorithm 1)."
                    },
                    {
                        "id": 130,
                        "string": "Also, the numbers of augmented text documents per unseen class for every setting (if used) are indicated in Table 1 ."
                    },
                    {
                        "id": 131,
                        "string": "These numbers were set empirically considering the number of available training documents to be translated."
                    },
                    {
                        "id": 132,
                        "string": "In Phase 2, the traditional classifier and the zero-shot classifier had the same structure, in which the CNN layer had three filter sizes [2, 4, 8] with 600 filters for each filter size and the two intermediate dense layers had 400 and 100 units respectively."
                    },
                    {
                        "id": 133,
                        "string": "For feature augmentation, the maximum path length K in ConceptNet was set to 3 to create the relationship vectors 4 ."
                    },
                    {
                        "id": 134,
                        "string": "The DBpedia ontology 5 was used to construct a class hierarchy of the DBpedia dataset."
                    },
                    {
                        "id": 135,
                        "string": "The class hierarchy of the 20newsgroups dataset was constructed based on the namespaces initially provided by the dataset."
                    },
                    {
                        "id": 136,
                        "string": "Meanwhile, the classes descriptions of both datasets were picked from Macmillan Dictionary 6 as appropriate."
                    },
                    {
                        "id": 137,
                        "string": "For both phases, we used 200-dim GloVe vectors 7 for word embeddings v w and v c (Pennington et al., 2014)."
                    },
                    {
                        "id": 138,
                        "string": "All the deep neural networks were implemented with TensorLayer (Dong et al., 2017a) and TensorFlow (Abadi et al., 2016) ."
                    },
                    {
                        "id": 139,
                        "string": "Baselines and Evaluation Metrics We compared each phase and the overall framework with the following approaches and settings."
                    },
                    {
                        "id": 140,
                        "string": "Phase 1: Proposed by (Shu et al., 2017) , DOC is a state-of-the-art open-world text classification approach which classifies a new sample into a seen class or \"reject\" if the sample does not belong to any seen classes."
                    },
                    {
                        "id": 141,
                        "string": "The DOC uses a single CNN and a 1-vs-rest sigmoid output layer with threshold adjustment."
                    },
                    {
                        "id": 142,
                        "string": "Unlike DOC, the classifiers in the proposed Phase 1 work individually."
                    },
                    {
                        "id": 143,
                        "string": "However, for a fair comparison, we used DOC only as a binary classifier in this phase (ŷ i ∈ C S orŷ i / ∈ C S )."
                    },
                    {
                        "id": 144,
                        "string": "Phase 2: To see how well the augmented feature v w,c work in ZSL, we ran the zero-shot classifier with different combinations of inputs."
                    },
                    {
                        "id": 145,
                        "string": "Particularly, five combinations of v w , v c , and v w,c were tested with documents from unseen classes only (traditional ZSL)."
                    },
                    {
                        "id": 146,
                        "string": "The whole framework: (1) Count-based model selected the class whose label appears most frequently in the document asŷ i ."
                    },
                    {
                        "id": 147,
                        "string": "(2) Label similarity (Sappadla et al., 2016) is an unsupervised approach which calculates the cosine similarity between the sum of word embeddings of each class label and the sum of word embeddings of every n-gram (n = 1, 2, 3) in the document."
                    },
                    {
                        "id": 148,
                        "string": "We adopted this approach to do single-label classification by predicting the class that got the highest similarity score among all classes."
                    },
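                    {
                        "id": 148.5,
                        "string": "As one illustration, the label similarity baseline can be sketched as follows (our reading of Sappadla et al. (2016); emb is an assumed word-to-vector dict, and scoring each class by its best-matching n-gram is an interpretation, not the authors' exact code):\n\nimport numpy as np\n\ndef label_similarity_predict(doc_tokens, class_tokens, emb):\n    def vec(tokens):\n        vs = [emb[t] for t in tokens if t in emb]\n        return np.sum(vs, axis=0) if vs else None\n    def cos(a, b):\n        if a is None or b is None:\n            return 0.0\n        na, nb = np.linalg.norm(a), np.linalg.norm(b)\n        return float(np.dot(a, b) / (na * nb)) if na and nb else 0.0\n    scores = {}\n    for c, label_toks in class_tokens.items():\n        cv = vec(label_toks)\n        best = 0.0\n        for n in (1, 2, 3):  # every n-gram in the document\n            for i in range(len(doc_tokens) - n + 1):\n                best = max(best, cos(vec(doc_tokens[i:i + n]), cv))\n        scores[c] = best\n    return max(scores, key=scores.get)  # class with the highest similarity"
                    },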
                    {
                        "id": 149,
                        "string": "(3) RNN Au-toEncoder was built based on a Seq2Seq model with LSTM (512 hidden units), and it was trained to encode documents and class labels onto the same latent space."
                    },
                    {
                        "id": 150,
                        "string": "The cosine similarity was applied to select a class label closest to the document on the latent space."
                    },
                    {
                        "id": 151,
                        "string": "(4) RNN+FC refers to the architecture 2 proposed in (Pushp and Srivastava, 2017) ."
                    },
                    {
                        "id": 152,
                        "string": "It used an RNN layer with LSTM (512 hidden units) followed by two dense layers with 400 and 100 units respectively."
                    },
                    {
                        "id": 153,
                        "string": "(5) CNN+FC replaced the RNN in the previous model with a CNN, which has the identical structure as the zero-shot classifier in Phase 2."
                    },
                    {
                        "id": 154,
                        "string": "Both RNN+FC and CNN+FC predicted the confidence p(ŷ i = c|x i ) given v w and v c ."
                    },
                    {
                        "id": 155,
                        "string": "The class with the highest confidence was selected asŷ i ."
                    },
                    {
                        "id": 156,
                        "string": "For Phase 1, we used the accuracy for binary classification (y,ŷ i ∈ C S or y,ŷ i / ∈ C S ) as an evaluation metric."
                    },
                    {
                        "id": 157,
                        "string": "In contrast, for Phase 2 and the whole framework, we used the multi-class classification accuracy (ŷ i = y i ) as a metric."
                    },
                    {
                        "id": 158,
                        "string": "Results and Discussion The evaluation of Phase 1 (coarse-grained classification) checks if each x i was correctly delivered to the right classifier in Phase 2."
                    },
                    {
                        "id": 159,
                        "string": "Table 3 shows the performance of Phase 1 with and without augmented data compared with DOC."
                    },
                    {
                        "id": 160,
                        "string": "Considering test documents from seen classes only, our framework outperformed DOC on both datasets."
                    },
                    {
                        "id": 161,
                        "string": "In addition, the augmented data improved the accuracy of detecting documents from unseen classes clearly and led to higher overall accuracy in every setting."
                    },
                    {
                        "id": 162,
                        "string": "Despite no real labelled data from unseen classes, the augmented data generated by topic translation helped Phase 1 better detect documents from unseen classes."
                    },
                    {
                        "id": 163,
                        "string": "Table 4 shows some examples of augmented data from the DBpedia dataset."
                    },
                    {
                        "id": 164,
                        "string": "Even if they are not completely understandable, they contain the tone of the target classes."
                    },
                    {
                        "id": 165,
                        "string": "Although Phase 1 provided confidence scores for all seen classes, we could not use them to predictŷ i directly since the distribution of scores of positive examples from different CNNs are different."
                    },
                    {
                        "id": 166,
                        "string": "Figure 3 shows that the distribution of confidence scores of the class \"Artist\" had a noticeably larger variance and was clearly different from the class \"Building\"."
                    },
                    {
                        "id": 167,
                        "string": "Hence, even if p(ŷ i = \"Building\"|x i ) > p(ŷ i = \"Artist\"|x i ), we cannot conclude that x i is more likely to come from the class \"Building\"."
                    },
                    {
                        "id": 168,
                        "string": "This is why a traditional classifier in Phase 2 is necessary."
                    },
                    {
                        "id": 169,
                        "string": "Regarding Phase 2, fine-grained classification is in charge of predictingŷ i and it employs two classifiers which were tested separately."
                    },
                    {
                        "id": 170,
                        "string": "Assuming Phase 1 is perfect, the classifiers in Phase 2 should be able to find the right class."
                    },
                    {
                        "id": 171,
                        "string": "The purpose of Table 5 is to show that the traditional CNN classifier in Phase 2 was highly accurate."
                    },
                    {
                        "id": 172,
                        "string": "Mitra perdulca is a species of sea snail a marine gastropod mollusk in the family Mitridae the miters or miter snails."
                    },
                    {
                        "id": 173,
                        "string": "Animal → Plant Arecaceae perdulca is a flowering of port aster a naval mollusk gastropod in the fabaceae Clusiaceae the tiliaceae or rockery amaryllis."
                    },
                    {
                        "id": 174,
                        "string": "Animal → Athlete Mira perdulca is a swimmer of sailing sprinter an Olympian limpets gastropod in the basketball Middy the miters or miter skater."
                    },
                    {
                        "id": 175,
                        "string": "Table 4 : Examples of augmented data translated from a document of the original class \"Animal\" into two target classes \"Plant\" and \"Athlete\"."
                    },
                    {
                        "id": 176,
                        "string": "Besides, given test documents from unseen classes only, the performance of the zero-shot classifier in Phase 2 is shown in Table 6 ."
                    },
                    {
                        "id": 177,
                        "string": "Based on the construction method, v w,c quantified the relatedness between words and the class but, unlike v w and v c , it did not include detailed semantic meaning."
                    },
                    {
                        "id": 178,
                        "string": "Thus, the classifier using v w,c only could not find out the correct unseen class and neither   hand, the combination of [v w ; v c ], which included semantic embeddings of both words and the class label, increased the accuracy of predicting unseen classes clearly."
                    },
                    {
                        "id": 179,
                        "string": "However, the zero-shot classifier fed by the combination of all three types of inputs [v w ; v c ; v w,c ] achieved the highest accuracy in all settings."
                    },
                    {
                        "id": 180,
                        "string": "It asserts that the integration of semantic knowledge we proposed is an effective means for knowledge transfer from seen to unseen classes in the zero-shot scenario."
                    },
                    {
                        "id": 181,
                        "string": "Last but most importantly, we compared the whole framework with four baselines as shown in Table 2 ."
                    },
                    {
                        "id": 182,
                        "string": "First, the count-based model is a rulebased model so it failed to predict documents from seen classes accurately and resulted in unpleasant overall results."
                    },
                    {
                        "id": 183,
                        "string": "This was similar to the label similarity approach even though it had higher degree of flexibility."
                    },
                    {
                        "id": 184,
                        "string": "Next, the RNN Autoencoder was trained without any supervision sinceŷ i was predicted based on the cosine similarity."
                    },
                    {
                        "id": 185,
                        "string": "We believe the implicit semantic relatedness between classes caused the failure of the RNN Autoencoder."
                    },
                    {
                        "id": 186,
                        "string": "Besides, the CNN+FC and RNN+FC had same inputs and outputs and it was clear that CNN+FC performed better than RNN+FC in the experiment."
                    },
                    {
                        "id": 187,
                        "string": "However, neither CNN+FC nor RNN+FC was able to transfer the knowledge learned from seen to unseen classes."
                    },
                    {
                        "id": 188,
                        "string": "Finally, our two-phase framework has competitive prediction accuracy on unseen classes while maintaining the accuracy on seen classes."
                    },
                    {
                        "id": 189,
                        "string": "This made it achieve the highest overall accuracy on both datasets and both unseen rates."
                    },
                    {
                        "id": 190,
                        "string": "In conclusion, by using integrated semantic knowledge, the proposed two-phase framework with data and feature augmentation is a promising step to tackle this challenging zero-shot problem."
                    },
                    {
                        "id": 191,
                        "string": "[v w ; v w,c ] and [v c ; v w, Furthermore, another benefit of the framework is high flexibility."
                    },
                    {
                        "id": 192,
                        "string": "As the modules in Figure 1 has less coupling to one another, it is flexible to improve or customise each of them."
                    },
                    {
                        "id": 193,
                        "string": "For example, we can deploy an advanced language understanding model, e.g., BERT (Devlin et al., 2018) , as a traditional classifier."
                    },
                    {
                        "id": 194,
                        "string": "Moreover, we may replace Con-ceptNet with a domain-specific knowledge graph to deal with medical texts."
                    },
                    {
                        "id": 195,
                        "string": "Related Work Zero-shot Text Classification There are a few more related works to discuss besides recent approaches we compared with in the experiments (explained in section 3.3)."
                    },
                    {
                        "id": 196,
                        "string": "Dauphin et al."
                    },
                    {
                        "id": 197,
                        "string": "(2013) predicted semantic utterance of texts by mapping class labels and text samples into the same semantic space and classifying each sample to the closest class label."
                    },
                    {
                        "id": 198,
                        "string": "learned the embeddings of classes, documents, and words jointly in the learning stage."
                    },
                    {
                        "id": 199,
                        "string": "Hence, it can perform well in domain-specific classification, but this is possible only with a large amount of training data."
                    },
                    {
                        "id": 200,
                        "string": "Overall, most of the previous works exploited semantic relationships between classes and documents via embeddings."
                    },
                    {
                        "id": 201,
                        "string": "In contrast, our proposed framework leverages not only the word embeddings but also other semantic knowledge."
                    },
                    {
                        "id": 202,
                        "string": "While word embeddings are used to solve analogy for data augmentation in Phase 1, the other semantic knowledge sources (in Figure 2 ) are integrated into relationship vectors and used as augmented features in Phase 2."
                    },
                    {
                        "id": 203,
                        "string": "Furthermore, our framework does not require any semantic correspondences between seen and unseen classes."
                    },
                    {
                        "id": 204,
                        "string": "Data Augmentation in NLP In the face of insufficient data, data augmentation has been widely used to improve generalisation of deep neural networks especially in computer vision (Krizhevsky et al., 2012) and multimodality (Dong et al., 2017b) , but it is still not a common practice in natural language processing."
                    },
                    {
                        "id": 205,
                        "string": "Recent works have explored data augmentation in NLP tasks such as machine translation and text classification (Saito et al., 2017; Fadaee et al., 2017; Kobayashi, 2018) , and the algorithms were designed to preserve semantic meaning of an original document by using synonyms (Zhang and Le-Cun, 2015) or adding noises (Xie et al., 2017) , for example."
                    },
                    {
                        "id": 206,
                        "string": "In contrast, our proposed data augmentation technique translates a document from one meaning (its original class) to another meaning (an unseen class) by analogy in order to substitute unavailable labelled data of the unseen class."
                    },
                    {
                        "id": 207,
                        "string": "Feature Augmentation in NLP Apart from improving classification accuracy, feature augmentation is also used in domain adaptation to transfer knowledge between a source and a target domain (Pan et al., 2010b; Fang and Chiang, 2018; Chen et al., 2018 )."
                    },
                    {
                        "id": 208,
                        "string": "An early research paper applying feature augmentation in NLP is Daume III (2007) which targeted domain adaptation on sequence labelling tasks."
                    },
                    {
                        "id": 209,
                        "string": "After that, feature augmentation was used in several NLP tasks such as cross-domain sentiment classification (Pan et al., 2010a), multi-domain machine translation (Clark et al., 2012) , semantic argument classification (Batubara et al., 2018) , etc."
                    },
                    {
                        "id": 210,
                        "string": "Our work is different from previous works not only that we applied this technique to zero-shot text classification but also that we integrated many types of semantic knowledge to create the augmented features."
                    },
                    {
                        "id": 211,
                        "string": "Conclusion and Future Work To tackle zero-shot text classification, we proposed a novel CNN-based two-phase framework together with data augmentation and feature augmentation."
                    },
                    {
                        "id": 212,
                        "string": "The experiments show that data augmentation by topic translation improved the accuracy in detecting instances from unseen classes, while feature augmentation enabled knowledge transfer from seen to unseen classes for zero-shot learning."
                    },
                    {
                        "id": 213,
                        "string": "Thanks to the framework and the integrated semantic knowledge, our work achieved the highest overall accuracy compared with all the baselines and recent approaches in all settings."
                    },
                    {
                        "id": 214,
                        "string": "In the future, we plan to extend our framework to do multi-label classification with a larger amount of data, and also study how semantic units defined by linguists can be used in the zero-shot scenario."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 31
                    },
                    {
                        "section": "Problem Formulation",
                        "n": "2.1",
                        "start": 32,
                        "end": 41
                    },
                    {
                        "section": "Overview and Notations",
                        "n": "2.2",
                        "start": 42,
                        "end": 61
                    },
                    {
                        "section": "Phase 1: Coarse-grained Classification",
                        "n": "2.3",
                        "start": 62,
                        "end": 67
                    },
                    {
                        "section": "Data Augmentation",
                        "n": "2.3.1",
                        "start": 68,
                        "end": 93
                    },
                    {
                        "section": "Feature Augmentation",
                        "n": "2.4.1",
                        "start": 94,
                        "end": 113
                    },
                    {
                        "section": "Datasets",
                        "n": "3.1",
                        "start": 114,
                        "end": 119
                    },
                    {
                        "section": "Implementation Details 3",
                        "n": "3.2",
                        "start": 120,
                        "end": 138
                    },
                    {
                        "section": "Baselines and Evaluation Metrics",
                        "n": "3.3",
                        "start": 139,
                        "end": 157
                    },
                    {
                        "section": "Results and Discussion",
                        "n": "3.4",
                        "start": 158,
                        "end": 194
                    },
                    {
                        "section": "Zero-shot Text Classification",
                        "n": "4.1",
                        "start": 195,
                        "end": 203
                    },
                    {
                        "section": "Data Augmentation in NLP",
                        "n": "4.2",
                        "start": 204,
                        "end": 206
                    },
                    {
                        "section": "Feature Augmentation in NLP",
                        "n": "4.3",
                        "start": 207,
                        "end": 210
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "5",
                        "start": 211,
                        "end": 214
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1014-Figure3-1.png",
                        "caption": "Figure 3: The distributions of confidence scores of positive examples from four seen classes of DBpedia in Phase 1.",
                        "page": 5,
                        "bbox": {
                            "x1": 315.36,
                            "x2": 517.4399999999999,
                            "y1": 61.919999999999995,
                            "y2": 192.0
                        }
                    },
                    {
                        "filename": "../figure/image/1014-Figure1-1.png",
                        "caption": "Figure 1: The overview of the proposed framework with two phases. The coarse-grained phase judges if an input document xi comes from seen or unseen classes. The fine-grained phase finally decides the class ŷi. All notations are defined in section 2.1-2.2.",
                        "page": 1,
                        "bbox": {
                            "x1": 78.72,
                            "x2": 526.0799999999999,
                            "y1": 62.879999999999995,
                            "y2": 213.12
                        }
                    },
                    {
                        "filename": "../figure/image/1014-Table6-1.png",
                        "caption": "Table 6: The accuracy of the zero-shot classifier in Phase 2 given documents from unseen classes only.",
                        "page": 6,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 527.04,
                            "y1": 351.84,
                            "y2": 439.2
                        }
                    },
                    {
                        "filename": "../figure/image/1014-Table4-1.png",
                        "caption": "Table 4: Examples of augmented data translated from a document of the original class “Animal” into two target classes “Plant” and “Athlete”.",
                        "page": 6,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 289.44,
                            "y1": 462.71999999999997,
                            "y2": 575.04
                        }
                    },
                    {
                        "filename": "../figure/image/1014-Table2-1.png",
                        "caption": "Table 2: The accuracy of the whole framework compared with the baselines.",
                        "page": 6,
                        "bbox": {
                            "x1": 78.72,
                            "x2": 518.4,
                            "y1": 62.879999999999995,
                            "y2": 229.92
                        }
                    },
                    {
                        "filename": "../figure/image/1014-Table3-1.png",
                        "caption": "Table 3: The accuracy of Phase 1 with and without augmented data compared with DOC .",
                        "page": 6,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.08,
                            "y1": 270.71999999999997,
                            "y2": 415.2
                        }
                    },
                    {
                        "filename": "../figure/image/1014-Table5-1.png",
                        "caption": "Table 5: The accuracy of the traditional classifier in Phase 2 given documents from seen classes only.",
                        "page": 6,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 270.71999999999997,
                            "y2": 305.28
                        }
                    },
                    {
                        "filename": "../figure/image/1014-Figure2-1.png",
                        "caption": "Figure 2: Illustrations of semantic knowledge integrated into our framework: (a) class labels and class descriptions (b) class hierarchy and (c) a subgraph of the general knowledge graph (ConceptNet).",
                        "page": 2,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 290.4,
                            "y1": 63.839999999999996,
                            "y2": 312.0
                        }
                    },
                    {
                        "filename": "../figure/image/1014-Table1-1.png",
                        "caption": "Table 1: The rates of unseen classes and the numbers of augmented documents (per unseen class) in the experiments",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 534.24,
                            "y1": 591.84,
                            "y2": 655.1999999999999
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-18"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation",
                    "text": [
                        "Most high-performance data-driven models rely on a large amount of labeled training data. However, a model trained on one language usually performs poorly on another language.",
                        "Extend existing services to more languages:",
                        "Collect, select, and pre-process data",
                        "Compile guidelines for new languages",
                        "Train annotators to qualify for annotation tasks",
                        "Adjudicate annotations and assess the annotation quality and inter-annotator agreement",
                        "Adjudicate annotations and assess inter-annotator agreement",
                        "languages are spoken today",
                        "Rapid and low-cost development of capabilities for low-resource languages.",
                        "Disaster response and recovery"
                    ],
                    "page_nums": [
                        1,
                        2
                    ],
                    "images": []
                },
                "1": {
                    "title": "TRANSFER LEARNING and MULTI TASK LEARNING",
                    "text": [
                        "Leverage existing data of related languages and tasks and transfer knowledge to our target task.",
                        "The Tasman Sea lies between lAustralie est separee de lAsie par les mers dArafuraet",
                        "Australia and New Zealand. de Timor et de la Nouvelle-Zelande par la mer de Tasman",
                        "Multi-task Learning (MTL) is an effective solution for knowledge transfer across tasks.",
                        "In the context of neural network architectures, we usually perform MTL by sharing parameters across models.",
                        "Task A Data Parameter Sharing: When optimizing model A , we update",
                        "and hence . In this way, we can partially train model B as ."
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Sequence labeling",
                    "text": [
                        "To illustrate our idea, we take sequence labeling as a case study.",
                        "In the NLP context, the goal of sequence labeling is to assign a categorical label (e.g., Part-of-speech tag) to each token in a sentence.",
                        "It underlies a range of fundamental NLP tasks, including POS Tagging, Name Tagging, and Chunking.",
                        "Koalas are largely sedentary and sleep up to 20 hours a day.",
                        "NNS VBP RB JJ CC VB IN TO CD NNS DT NN",
                        "PER NAME TAGGING B-PER E-PER GPE GPE",
                        "Itamar Rabinovich, who as Israel's ambassador to Washington conducted unfruitful negotiations with",
                        "Syria, told Israel Radio it looked like Damascus wated to talk rather than fight.",
                        "B-, I-, E-, S-: beginning of a mention, inside of a mention, the end of a mention and a single-token mention",
                        "O: not part of any mention Although we only focus on sequence labeling in this work, our architecture can be adapted for many NLP tasks with slight modification."
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Base model lstm crf chiu and nichols 2016",
                    "text": [
                        "The CRF layer models the dependencies between labels.",
                        "The linear layer projects hidden states to label space.",
                        "The Bidirectional LSTM (long-short term memory) processes the input sentence from both directional, encodeing each token and its context into a vector",
                        "Input Sentence Each token in the given sentence is",
                        "represented as the combination of its word embedding and character feature vector.",
                        "Features Character- level CNN",
                        "Word Embedding Character Embedding"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Previous transfer models for sequence labeling",
                    "text": [
                        "T-A: Cross-domain transfer T-B: Cross-domain transfer With disparate label T-C: Cross-lingual Transfer sets",
                        "Yang et al. (2017) proposed three transfer learning architectures for different use cases.",
                        "* Above figures are adapted from (Yang et al., 2017)"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Our model multi lingual multi task architecture",
                    "text": [
                        "combines multi-lingual transfer and multi-task transfer is able to transfer knowledge from multiple sources"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": [
                        "figure/image/1017-Figure2-1.png"
                    ]
                },
                "6": {
                    "title": "Our model multi lingual multi task model",
                    "text": [
                        "Cross-task Transfer POS Tagging Name Tagging",
                        "Cross-lingual Transfer English Spanish",
                        "The bidirectional LSTM, character embeddings and character-level networks serve as the basis of the architecture. This level of parameter sharing aims to provide universal word representation and feature extraction capability for all tasks and languages"
                    ],
                    "page_nums": [
                        8,
                        9
                    ],
                    "images": []
                },
                "7": {
                    "title": "Our model multi lingual multi task model cross lingual transfer",
                    "text": [
                        "For the same task, most components are shared between languages.",
                        "Although our architecture does not require aligned cross-lingual word embeddings, we also evaluate it with aligned embeddings generated using MUSEs unsupervised model (Conneau et al. 2017)."
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": [
                        "figure/image/1017-Figure2-1.png"
                    ]
                },
                "8": {
                    "title": "Our model multi lingual multi task model linear layer",
                    "text": [
                        "English: improvement, development, payment,",
                        "French: vraiment, completement, immediatement",
                        "We combine the output of the shared linear layer and the output of the language-specific linear layer using",
                        "where . and are optimized during training. is the LSTM hidden states. As is a square matrix, , , and have the same dimension",
                        "We add a language-specific linear layer to allow the model to behave differently towards some features for different languages."
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": [
                        "figure/image/1017-Figure2-1.png"
                    ]
                },
                "9": {
                    "title": "Our model multi lingual multi task model cross task transfer",
                    "text": [
                        "Linear layers and CRF layers are not shared between different tasks.",
                        "Tasks of the same language use the same embedding matrix: mutually enhance word representations"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": [
                        "figure/image/1017-Figure2-1.png"
                    ]
                },
                "10": {
                    "title": "Alternating training",
                    "text": [
                        "To optimize multiple tasks within one model, we adopt the alternating training approach in (Luong et",
                        "At each training step, we sample a task with probability:",
                        "In our experiments, instead of tuning mixing rate , we estimate it by:",
                        "where is the task coefficient, is the language coefficient, and is the number of training examples. (or ) takes the value 1 if the task (or language) of is the same as that of the target task; Otherwise it takes the value 0.1."
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "12": {
                    "title": "Experiments setup",
                    "text": [
                        "50-dimensional pre-trained word embeddings",
                        "English, Spanish and Dutch: Wikipedia",
                        "Chechen: TAC KBP 2017 10-Language EDL Pilot Evaluation Source Corpus",
                        "Cross-lingual word embedding: we aligned mono-lingual pre-trained word embeddings with MUSE",
                        "50-dimensional randomly initialized character embeddings",
                        "Optimization: SGD with momentum (), gradient clipping (threshold: 5.0) and exponential learning rate decay.",
                        "Highway Activation Function SeLU",
                        "LSTM Hidden State Size"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "14": {
                    "title": "Experiments comparison with state of the art models",
                    "text": [
                        "Our Model We also compared our model with state-of-the-art models with all training data."
                    ],
                    "page_nums": [
                        19,
                        20
                    ],
                    "images": []
                },
                "15": {
                    "title": "Experiments cross task transfer vs cross lingual transfer",
                    "text": [
                        "With 100 Dutch training sentences:",
                        "The baseline model misses the name",
                        "The cross-task transfer model finds the name but assigns a wrong tag to Marx.",
                        "The cross-lingual transfer model correctly identifies the whole name.",
                        "The task-specific knowledge that B-PER",
                        "S-PER is an invalid transition will not be learned in the POS Tagging model.",
                        "The cross-lingual transfer model transfers such knowledge through the shared CRF layer."
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": [
                        "figure/image/1017-Table5-1.png"
                    ]
                }
            },
            "paper_title": "A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling",
            "paper_id": "1017",
            "paper": {
                "title": "A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling",
                "abstract": "We propose a multi-lingual multi-task architecture to develop supervised models with a minimal amount of labeled data for sequence labeling. In this new architecture, we combine various transfer models using two layers of parameter sharing. On the first layer, we construct the basis of the architecture to provide universal word representation and feature extraction capability for all models. On the second level, we adopt different parameter sharing strategies for different transfer schemes. This architecture proves to be particularly effective for low-resource settings, when there are less than 200 training sentences for the target task. Using Name Tagging as a target task, our approach achieved 4.3%-50.5% absolute Fscore gains compared to the mono-lingual single-task baseline model. 1 #1 [DUTCH]: If a Palestinian State is, however, the first thing the Palestinians will do. ⋆ [B] Als er een Palestijnse staat komt, is dat echter het eerste wat de Palestijnen zullen doen ⋆ [A] Als er een [S-MISC Palestijnse] staat komt, is dat echter het eerste wat de [S-MISC Palestijnen] zullen doen #2 [DUTCH]: That also frustrates the Muscovites, who still live in the proud capital of Russia but can not look at the soaps that the stupid farmers can see on the outside. ⋆ [B] Ook dat frustreert de Moskovieten , die toch in de fiere hoofdstad van Rusland wonen maar niet naar de soaps kunnen kijken die de domme boeren op de buiten wel kunnen zien ⋆ [A] Ook dat frustreert de [S-MISC Moskovieten] , die toch in de fiere hoofdstad van [S-LOC Rusland] wonen maar niet naar de soaps kunnen kijken die de domme boeren op de buiten wel kunnen zien #3 [DUTCH]: And the PMS centers are merging with the centers for school supervision, the MSTs.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction When we use supervised learning to solve Natural Language Processing (NLP) problems, we typically train an individual model for each task with task-specific labeled data."
                    },
                    {
                        "id": 1,
                        "string": "However, our target task may be intrinsically linked to other tasks."
                    },
                    {
                        "id": 2,
                        "string": "For example, Part-of-speech (POS) tagging and Name Tagging can both be considered as sequence labeling; Machine Translation (MT) and Abstractive Text Summarization both require the ability to understand the source text and generate natural language sentences."
                    },
                    {
                        "id": 3,
                        "string": "Therefore, it is valuable to transfer knowledge from related tasks to the target task."
                    },
                    {
                        "id": 4,
                        "string": "Multi-task Learning (MTL) is one of * * Part of this work was done when the first author was on an internship at Facebook."
                    },
                    {
                        "id": 5,
                        "string": "1 The code of our model is available at https://github."
                    },
                    {
                        "id": 6,
                        "string": "com/limteng-rpi/mlmt the most effective solutions for knowledge transfer across tasks."
                    },
                    {
                        "id": 7,
                        "string": "In the context of neural network architectures, we usually perform MTL by sharing parameters across models (Ruder, 2017) ."
                    },
                    {
                        "id": 8,
                        "string": "Previous studies (Collobert and Weston, 2008; Dong et al., 2015; Luong et al., 2016; Liu et al., 2018; Yang et al., 2017) have proven that MTL is an effective approach to boost the performance of related tasks such as MT and parsing."
                    },
                    {
                        "id": 9,
                        "string": "However, most of these previous efforts focused on tasks and languages which have sufficient labeled data but hit a performance ceiling on each task alone."
                    },
                    {
                        "id": 10,
                        "string": "Most NLP tasks, including some well-studied ones such as POS tagging, still suffer from the lack of training data for many low-resource languages."
                    },
                    {
                        "id": 11,
                        "string": "According to Ethnologue 2 , there are 7, 099 living languages in the world."
                    },
                    {
                        "id": 12,
                        "string": "It is an unattainable goal to annotate data in all languages, especially for tasks with complicated annotation requirements."
                    },
                    {
                        "id": 13,
                        "string": "Furthermore, some special applications (e.g., disaster response and recovery) require rapid development of NLP systems for extremely low-resource languages."
                    },
                    {
                        "id": 14,
                        "string": "Therefore, in this paper, we concentrate on enhancing supervised models in low-resource settings by borrowing knowledge learned from related high-resource languages and tasks."
                    },
                    {
                        "id": 15,
                        "string": "In (Yang et al., 2017) , the authors simulated a low-resource setting for English and Spanish by downsampling the training data for the target task."
                    },
                    {
                        "id": 16,
                        "string": "However, for most low-resource languages, the data sparsity problem also lies in related tasks and languages."
                    },
                    {
                        "id": 17,
                        "string": "Under such circumstances, a single transfer model can only bring limited improvement."
                    },
                    {
                        "id": 18,
                        "string": "To tackle this issue, we propose a multi-lingual multi-task architecture which combines different transfer models within a unified architecture through two levels of parameter sharing."
                    },
                    {
                        "id": 19,
                        "string": "In the first level, we share character embeddings, character-level convolutional neural networks, and word-level long-short term memory layer across all models."
                    },
                    {
                        "id": 20,
                        "string": "These components serve as a basis to connect multiple models and transfer universal knowledge among them."
                    },
                    {
                        "id": 21,
                        "string": "In the second level, we adopt different sharing strategies for different transfer schemes."
                    },
                    {
                        "id": 22,
                        "string": "For example, we use the same output layer for all Name Tagging tasks to share task-specific knowledge (e.g., I-PER 3 should not be assigned to the first word in a sentence)."
                    },
                    {
                        "id": 23,
                        "string": "To illustrate our idea, we take sequence labeling as a case study."
                    },
                    {
                        "id": 24,
                        "string": "In the NLP context, the goal of sequence labeling is to assign a categorical label (e.g., POS tag) to each token in a sentence."
                    },
                    {
                        "id": 25,
                        "string": "It underlies a range of fundamental NLP tasks, including POS Tagging, Name Tagging, and chunking."
                    },
                    {
                        "id": 26,
                        "string": "Experiments show that our model can effectively transfer various types of knowledge from different auxiliary tasks and obtains up to 50.5% absolute F-score gains on Name Tagging compared to the mono-lingual single-task baseline."
                    },
                    {
                        "id": 27,
                        "string": "Additionally, our approach does not rely on a large amount of auxiliary task data to achieve the improvement."
                    },
                    {
                        "id": 28,
                        "string": "Using merely 1% auxiliary data, we already obtain up to 9.7% absolute gains in Fscore."
                    },
                    {
                        "id": 29,
                        "string": "Model Basic Architecture The goal of sequence labeling is to assign a categorical label to each token in a given sentence."
                    },
                    {
                        "id": 30,
                        "string": "Though traditional methods such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs) (Lafferty et al., 2001; Ratinov and Roth, 2009; Passos et al., 2014) achieved high performance on sequence labeling tasks, they typically relied on hand-crafted features, therefore it is difficult to adapt them to new tasks or languages."
                    },
                    {
                        "id": 31,
                        "string": "To avoid task-specific engineering, (Collobert et al., 2011) proposed a feed-forward neural network model that only requires word embeddings trained on a large scale corpus as features."
                    },
                    {
                        "id": 32,
                        "string": "After that, several neural models based on the combination of long-short term memory (LSTM) and CRFs (Ma and Hovy, 2016; Lample et al., 2016; Chiu and Nichols, 2016) were proposed and 3 We adopt the BIOES annotation scheme."
                    },
                    {
                        "id": 33,
                        "string": "Prefixes B-, I-, E-, and S-represent the beginning of a mention, inside of a mention, the end of a mention and a single-token mention respectively."
                    },
                    {
                        "id": 34,
                        "string": "The O tag is assigned to a word which is not part of any mention."
                    },
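As an illustration of the BIOES scheme just described, here is a minimal sketch in Python (a hypothetical helper, not part of the paper's released code) that emits the tag sequence for a mention of a given length:

```python
# A minimal sketch of BIOES encoding as described above: B-/I-/E- mark the
# beginning, inside, and end of a multi-token mention, S- marks a
# single-token mention, and O marks tokens outside any mention.
def bioes_tags(mention_length, mention_type):
    if mention_length == 1:
        return ["S-" + mention_type]
    return (["B-" + mention_type]
            + ["I-" + mention_type] * (mention_length - 2)
            + ["E-" + mention_type])

print(bioes_tags(1, "LOC"))  # ['S-LOC']
print(bioes_tags(2, "PER"))  # ['B-PER', 'E-PER'], e.g., "Itamar Rabinovich"
```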
                    {
                        "id": 35,
                        "string": "achieved better performance on sequence labeling tasks."
                    },
                    {
                        "id": 36,
                        "string": "Figure 1: LSTM-CNNs: an LSTM-CRFs-based model for Sequence Labeling LSTM-CRFs-based models are well-suited for multi-lingual multi-task learning for three reasons: (1) They learn features from word and character embeddings and therefore require little feature engineering; (2) As the input and output of each layer in a neural network are abstracted as vectors, it is fairly straightforward to share components between neural models; (3) Character embeddings can serve as a bridge to transfer morphological and semantic information between languages with identical or similar scripts, without requiring cross-lingual dictionaries or parallel sentences."
                    },
                    {
                        "id": 37,
                        "string": "Therefore, we design our multi-task multilingual architecture based on the LSTM-CNNs model proposed in (Chiu and Nichols, 2016) ."
                    },
                    {
                        "id": 38,
                        "string": "The overall framework is illustrated in Figure 1 ."
                    },
                    {
                        "id": 39,
                        "string": "First, each word w i is represented as the combination x i of two parts, word embedding and character feature vector, which is extracted from character embeddings of the characters in w i using convolutional neural networks (CharCNN)."
                    },
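A minimal sketch of this token representation, assuming PyTorch; the sizes and the `TokenEncoder` name are illustrative, not the paper's, and the highway transformation the paper applies to character features is omitted for brevity:

```python
# Token representation x_i: concatenation of a word embedding and a
# character feature vector produced by a CNN over character embeddings.
import torch
import torch.nn as nn

class TokenEncoder(nn.Module):
    def __init__(self, vocab_size, num_chars, word_dim=50, char_dim=50, char_filters=50):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(num_chars, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)

    def forward(self, word_ids, char_ids):
        # word_ids: (sent_len,); char_ids: (sent_len, max_word_len)
        w = self.word_emb(word_ids)                          # (sent_len, word_dim)
        c = self.char_emb(char_ids).transpose(1, 2)          # (sent_len, char_dim, max_word_len)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values   # max-pool over characters
        return torch.cat([w, c], dim=-1)                     # x_i for each token
```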
                    {
                        "id": 40,
                        "string": "On top of that, a bidirectional LSTM processes the sequence x = {x 1 , x 2 , ...} in both directions and encodes each word and its context into a fixed-size vector h i ."
                    },
                    {
                        "id": 41,
                        "string": "Next, a linear layer converts h i to a score vector y i , in which each component represents the predicted score of a target tag."
                    },
                    {
                        "id": 42,
                        "string": "In order to model correlations between tags, a CRFs layer is added at the top to generate the best tagging path for the whole sequence."
                    },
                    {
                        "id": 43,
                        "string": "In the CRFs layer, given an input sentence x of length L and the output of the linear layer y, the score of a sequence of tags z is defined as: S(x, y, z) = L ∑ t=1 (A z t−1 ,zt + y t,zt ), where A is a transition matrix in which A p,q represents the binary score of transitioning from tag p to tag q, and y t,z represents the unary score of assigning tag z to the t-th word."
                    },
                    {
                        "id": 44,
                        "string": "Given the ground truth sequence of tags z, we maximize the following objective function during the training phase: O = log P (z|x) = S(x, y, z) − log ∑ z∈Z e S(x,y,z) , where Z is the set of all possible tagging paths."
                    },
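The score S and objective O above can be made concrete with a short NumPy sketch (not the authors' implementation; the start-transition term is omitted for brevity):

```python
import numpy as np

def sequence_score(emissions, transitions, tags):
    # S(x, y, z) = sum_t (A[z_{t-1}, z_t] + y[t, z_t]); the first tag's
    # score is taken without a start transition for brevity.
    score = emissions[0, tags[0]]
    for t in range(1, len(tags)):
        score += transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
    return score

def log_partition(emissions, transitions):
    # log sum over all tagging paths z in Z of exp(S(x, y, z)), computed
    # with the forward algorithm in log space.
    alpha = emissions[0]
    for t in range(1, emissions.shape[0]):
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

def crf_objective(emissions, transitions, tags):
    # O = S(x, y, z) - log sum_{z'} exp(S(x, y, z')), maximized in training.
    return sequence_score(emissions, transitions, tags) - log_partition(emissions, transitions)
```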
                    {
                        "id": 45,
                        "string": "We emphasize that our actual implementation differs slightly from the LSTM-CNNs model."
                    },
                    {
                        "id": 46,
                        "string": "We do not use additional word-and characterlevel explicit symbolic features (e.g., capitalization and lexicon) as they may require additional language-specific knowledge."
                    },
                    {
                        "id": 47,
                        "string": "Additionally, we transform character feature vectors using highway networks (Srivastava et al., 2015) , which is reported to enhance the overall performance by (Kim et al., 2016) and (Liu et al., 2018) ."
                    },
                    {
                        "id": 48,
                        "string": "Highway networks is a type of neural network that can smoothly switch its behavior between transforming and carrying information."
                    },
                    {
                        "id": 49,
                        "string": "Multi-task Multi-lingual Architecture MTL can be employed to improve performance on multiple tasks at the same time, such as MT and parsing in (Luong et al., 2016) ."
                    },
                    {
                        "id": 50,
                        "string": "However, in our scenario, we only focused on enhancing the performance of a low-resource task, which is our target task or main task."
                    },
                    {
                        "id": 51,
                        "string": "Our proposed architecture aims to transfer knowledge from a set of auxiliary tasks to the main task."
                    },
                    {
                        "id": 52,
                        "string": "For simplicity, we refer to a model of a main (auxiliary) task as a main (auxiliary) model."
                    },
                    {
                        "id": 53,
                        "string": "To jointly train multiple models, we perform multi-task learning using parameter sharing."
                    },
                    {
                        "id": 54,
                        "string": "Let Θ i be the set of parameters for model m i and Θ i,j = Θ i ∩ Θ j be the shared parameters between m i and m j ."
                    },
                    {
                        "id": 55,
                        "string": "When optimizing model m i , we update Θ i and hence Θ i,j ."
                    },
                    {
                        "id": 56,
                        "string": "In this way, we can partially train model m j as Θ i,j ⊆ Θ j ."
                    },
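A minimal sketch of this parameter-sharing idea, assuming PyTorch (not the paper's released code; module names and sizes are illustrative): two task models hold references to the same LSTM, so optimizing one partially trains the other.

```python
import torch.nn as nn

shared_lstm = nn.LSTM(input_size=100, hidden_size=100, bidirectional=True)

class TaskModel(nn.Module):
    def __init__(self, shared_encoder, num_tags):
        super().__init__()
        self.encoder = shared_encoder           # shared parameters (Theta_{i,j})
        self.linear = nn.Linear(200, num_tags)  # task-specific parameters

    def forward(self, x):
        h, _ = self.encoder(x)
        return self.linear(h)

main_model = TaskModel(shared_lstm, num_tags=17)  # e.g., Name Tagging
aux_model = TaskModel(shared_lstm, num_tags=18)   # e.g., POS Tagging
# A gradient step on aux_model's loss updates shared_lstm, and hence
# partially trains main_model as well.
```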
                    {
                        "id": 57,
                        "string": "Previously, each MTL model generally uses a single transfer scheme."
                    },
                    {
                        "id": 58,
                        "string": "In order to merge different transfer models into a unified architecture, we employ two levels of parameter sharing as follows."
                    },
                    {
                        "id": 59,
                        "string": "On the first level, we construct the basis of the architecture by sharing character embeddings, CharCNN and bidirectional LSTM among all models."
                    },
                    {
                        "id": 60,
                        "string": "This level of parameter sharing aims to provide universal word representation and feature extraction capability for all tasks and languages."
                    },
                    {
                        "id": 61,
                        "string": "Character Embeddings and Character-level CNNs."
                    },
                    {
                        "id": 62,
                        "string": "Character features can represent morphological and semantic information; e.g., the English morpheme dis-usually indicates negation and reversal as in \"disagree\" and \"disapproval\"."
                    },
                    {
                        "id": 63,
                        "string": "For low-resource languages lacking in data to suffice the training of high-quality word embeddings, character embeddings learned from other languages may provide crucial information for labeling, especially for rare and out-of-vocabulary words."
                    },
                    {
                        "id": 64,
                        "string": "Take the English word \"overflying\" (flying over) as an example."
                    },
                    {
                        "id": 65,
                        "string": "Even if it is rare or absent in the corpus, we can still infer the word meaning from its suffix over-(above), root fly, and prefix -ing (present participle form)."
                    },
                    {
                        "id": 66,
                        "string": "In our architecture, we share character embeddings and the CharCNN between languages with identical or similar scripts to enhance word representation for low-resource languages."
                    },
                    {
                        "id": 67,
                        "string": "Bidirectional LSTM."
                    },
                    {
                        "id": 68,
                        "string": "The bidirectional LSTM layer is essential to extract character, word, and contextual information from a sentence."
                    },
                    {
                        "id": 69,
                        "string": "However, with a large number of parameters, it cannot be fully trained only using the low-resource task data."
                    },
                    {
                        "id": 70,
                        "string": "To tackle this issue, we share the bidirectional LSTM layer across all models."
                    },
                    {
                        "id": 71,
                        "string": "Bear in mind that because our architecture does not require aligned cross-lingual word embeddings, sharing this layer across languages may confuse the model as it equally handles embeddings in different spaces."
                    },
                    {
                        "id": 72,
                        "string": "Nevertheless, under low-resource circumstances, data sparsity is the most critical factor that affects the performance."
                    },
                    {
                        "id": 73,
                        "string": "On top of this basis, we adopt different parameter sharing strategies for different transfer schemes."
                    },
                    {
                        "id": 74,
                        "string": "For cross-task transfer, we use the same word embedding matrix across tasks so that they can mutually enhance word representations."
                    },
                    {
                        "id": 75,
                        "string": "For cross-lingual transfer, we share the linear layer and CRFs layer among languages to transfer taskspecific knowledge, such as the transition score between two tags."
                    },
                    {
                        "id": 76,
                        "string": "Word Embeddings."
                    },
                    {
                        "id": 77,
                        "string": "For most words, in addition to character embeddings, word embeddings are still crucial to represent semantic informa-Figure 2: Multi-task Multi-lingual Architecture tion."
                    },
                    {
                        "id": 78,
                        "string": "We use the same word embedding matrix for tasks in the same language."
                    },
                    {
                        "id": 79,
                        "string": "The matrix is initialized with pre-trained embeddings and optimized as parameters during training."
                    },
                    {
                        "id": 80,
                        "string": "Thus, task-specific knowledge can be encoded into the word embeddings by one task and subsequently utilized by another one."
                    },
                    {
                        "id": 81,
                        "string": "For a low-resource language even without sufficient raw text, we mix its data with a related high-resource language to train word embeddings."
                    },
                    {
                        "id": 82,
                        "string": "In this way, we merge both corpora and hence their vocabularies."
                    },
                    {
                        "id": 83,
                        "string": "Recently, Conneau et al."
                    },
                    {
                        "id": 84,
                        "string": "(2017) proposed a domain-adversarial method to align two monolingual word embedding matrices without crosslingual supervision such as a bilingual dictionary."
                    },
                    {
                        "id": 85,
                        "string": "Although cross-lingual word embeddings are not required, we evaluate our framework with aligned embeddings generated using this method."
                    },
                    {
                        "id": 86,
                        "string": "Experiment results show that the incorporation of crosslingual embeddings substantially boosts the performance under low-resource settings."
                    },
                    {
                        "id": 87,
                        "string": "Linear Layer and CRFs."
                    },
                    {
                        "id": 88,
                        "string": "As the tag set varies from task to task, the linear layer and CRFs can only be shared across languages."
                    },
                    {
                        "id": 89,
                        "string": "We share these layers to transfer task-specific knowledge to the main model."
                    },
                    {
                        "id": 90,
                        "string": "For example, our model corrects [S-PER Charles] [S-PER Picqué] to [B-PER Charles] [E-PER Picqué] because the CRFs layer fully trained on other languages assigns a low score to the rare transition S-PER→S-PER and promotes B-PER→E-PER."
                    },
                    {
                        "id": 91,
                        "string": "In addition to the shared linear layer, we add an unshared language-specific linear layer to allow the model to behave differently toward some features for different languages."
                    },
                    {
                        "id": 92,
                        "string": "For example, the suffix -ment usually indicates nouns in English whereas indicates adverbs in French."
                    },
                    {
                        "id": 93,
                        "string": "We combine the output of the shared linear layer y u and the output of the language-specific linear layer y s using: y = g ⊙ y s + (1 − g) ⊙ y u , where g = σ(W g h + b g )."
                    },
                    {
                        "id": 94,
                        "string": "W g and b g are optimized during training."
                    },
                    {
                        "id": 95,
                        "string": "h is the LSTM hidden states."
                    },
                    {
                        "id": 96,
                        "string": "As W g is a square matrix, y, y s , and y u have the same dimension."
                    },
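A minimal sketch of this gated combination, assuming PyTorch; the `GatedOutputLayer` name is illustrative. Per the text, W_g is square, so g, y_s, y_u, and y all share the dimension of h:

```python
import torch
import torch.nn as nn

class GatedOutputLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.shared = nn.Linear(dim, dim)    # y_u: shared across languages
        self.specific = nn.Linear(dim, dim)  # y_s: language-specific
        self.gate = nn.Linear(dim, dim)      # W_g (square matrix) and b_g

    def forward(self, h):
        g = torch.sigmoid(self.gate(h))      # g = sigmoid(W_g h + b_g)
        return g * self.specific(h) + (1 - g) * self.shared(h)
```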
                    {
                        "id": 97,
                        "string": "Although we only focus on sequence labeling in this work, our architecture can be adapted for many NLP tasks with slight modification."
                    },
                    {
                        "id": 98,
                        "string": "For example, for text classification tasks, we can take the last hidden state of the forward LSTM as the sentence representation and replace the CRFs layer with a Softmax layer."
                    },
                    {
                        "id": 99,
                        "string": "In our model, each task has a separate object function."
                    },
                    {
                        "id": 100,
                        "string": "To optimize multiple tasks within one model, we adopt the alternating training approach in (Luong et al., 2016) ."
                    },
                    {
                        "id": 101,
                        "string": "At each training step, we sample a task d i with probability r i ∑ j r j , where r i is the mixing rate value assigned to d i ."
                    },
                    {
                        "id": 102,
                        "string": "In our experiments, instead of tuning r i , we estimate it by: r i = µ i ζ i √ N i , where µ i is the task coefficient, ζ i is the language coefficient, and N i is the number of training examples."
                    },
                    {
                        "id": 103,
                        "string": "µ i (or ζ i ) takes the value 1 if the task (or language) of d i is the same as that of the target task; Otherwise it takes the value 0.1."
                    },
                    {
                        "id": 104,
                        "string": "For example, given English Name Tagging as the target task, the task coefficient µ and language coefficient ζ of Spanish Name Tagging are 0.1 and 1 respectively."
                    },
                    {
                        "id": 105,
                        "string": "While assigning lower mixing rate values to auxiliary tasks, this formula also takes the amount of data into consideration."
                    },
                    {
                        "id": 106,
                        "string": "Thus, auxiliary tasks receive higher probabilities to reduce overfitting when we have a smaller amount of main task data."
                    },
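A minimal sketch of this sampling scheme in plain Python; the task records and example counts below are illustrative, not the paper's statistics:

```python
# Alternating-training sampler: r_i = mu_i * zeta_i * sqrt(N_i), and task
# d_i is drawn with probability r_i / sum_j r_j at each training step.
import math
import random

def mixing_rate(task, target):
    mu = 1.0 if task["task"] == target["task"] else 0.1    # task coefficient
    zeta = 1.0 if task["lang"] == target["lang"] else 0.1  # language coefficient
    return mu * zeta * math.sqrt(task["num_examples"])     # r_i

def sample_task(tasks, target):
    rates = [mixing_rate(t, target) for t in tasks]
    return random.choices(tasks, weights=rates, k=1)[0]

# Illustrative records (counts are made up). With English Name Tagging as
# the target, Spanish Name Tagging gets mu = 1 (same task) and zeta = 0.1
# (different language).
tasks = [
    {"task": "ner", "lang": "eng", "num_examples": 200},
    {"task": "ner", "lang": "esp", "num_examples": 8000},
    {"task": "pos", "lang": "eng", "num_examples": 12000},
]
next_task = sample_task(tasks, target=tasks[0])
```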
                    {
                        "id": 107,
                        "string": "Experiments Data Sets For Name Tagging, we use the following data sets: Dutch (NLD) and Spanish (ESP) data from the CoNLL 2002 shared task (Tjong Kim Sang, 2002) , English (ENG) data from the CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003) , Russian (RUS) data from LDC2016E95 (Russian Representative Language Pack), and Chechen (CHE) data from TAC KBP 2017 10-Language EDL Pilot Evaluation Source Corpus 4 ."
                    },
                    {
                        "id": 108,
                        "string": "We select Chechen as another target language in addition to Dutch and Spanish because it is a truly under-resourced language and its related language, Russian, also lacks NLP resources."
                    },
                    {
                        "id": 109,
                        "string": "For POS Tagging, we use English, Dutch, Spanish, and Russian data from the CoNLL 2017 shared task (Zeman et al., 2017; Nivre et al., 2017) ."
                    },
                    {
                        "id": 110,
                        "string": "In this data set, each token is annotated with two POS tags, UPOS (universal POS tag) and XPOS (language-specific POS tag)."
                    },
                    {
                        "id": 111,
                        "string": "We use UPOS because it is consistent throughout all languages."
                    },
                    {
                        "id": 112,
                        "string": "Experimental Setup We use 50-dimensional pre-trained word embeddings and 50-dimensional randomly initialized character embeddings."
                    },
                    {
                        "id": 113,
                        "string": "We train word embeddings using the word2vec package 5 ."
                    },
                    {
                        "id": 114,
                        "string": "English, Span-ish, and Dutch embeddings are trained on corresponding Wikipedia articles (2017-12-20 dumps) ."
                    },
                    {
                        "id": 115,
                        "string": "Russian embeddings are trained on documents in LDC2016E95."
                    },
                    {
                        "id": 116,
                        "string": "Chechen embeddings are trained on documents in TAC KBP 2017 10-Language EDL Pilot Evaluation Source Corpus."
                    },
                    {
                        "id": 117,
                        "string": "To learn a mapping between mono-lingual word embeddings and obtain cross-lingual embeddings, we use the unsupervised model in the MUSE library 6 (Conneau et al., 2017) ."
                    },
                    {
                        "id": 118,
                        "string": "Although word embeddings are fine-tuned during training, we update the embedding matrix in a sparse way and thus do not have to update a large number of parameters."
                    },
                    {
                        "id": 119,
                        "string": "We optimize parameters using Stochastic Gradient Descent with momentum, gradient clipping and exponential learning rate decay."
                    },
                    {
                        "id": 120,
                        "string": "At step t, the learning rate α t is updated using α t = α 0 * ρ t/T , where α 0 is the initial learning rate, ρ is the decay rate, and T is the decay step."
                    },
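As a one-line sketch of this decay schedule (the default values below are illustrative, not the paper's tuned hyper-parameters):

```python
# Exponential learning-rate decay: alpha_t = alpha_0 * rho**(t / T).
def learning_rate(step, alpha0=0.02, rho=0.9, T=1000):
    return alpha0 * rho ** (step / T)
```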
                    {
                        "id": 121,
                        "string": "7 To reduce overfitting, we apply Dropout (Srivastava et al., 2014) to the output of the LSTM layer."
                    },
                    {
                        "id": 122,
                        "string": "We conduct hyper-parameter optimization by exploring the space of parameters shown in Table 2 using random search (Bergstra and Bengio, 2012) ."
                    },
                    {
                        "id": 123,
                        "string": "Due to time constraints, we only perform parameter sweeping on the Dutch Name Tagging task with 200 training examples."
                    },
                    {
                        "id": 124,
                        "string": "We select the set of parameters that achieves the best performance on the development set and apply it to all models."
                    },
                    {
                        "id": 125,
                        "string": "Comparison of Different Models In Figure 3 , 4, and 5, we compare our model with the mono-lingual single-task LSTM-CNNs model (denoted as baseline), cross-task transfer model, and cross-lingual transfer model in low-resource settings with Dutch, Spanish, and Chechen Name Tagging as the main task respectively."
                    },
                    {
                        "id": 126,
                        "string": "We use English as the related language for Dutch and Spanish, and use Russian as the related language for Chechen."
                    },
                    {
                        "id": 127,
                        "string": "For cross-task transfer, we take POS Tagging as the auxiliary task."
                    },
                    {
                        "id": 128,
                        "string": "Because the CoNLL 2017 data does not include Chechen, we only use Russian POS Tagging and Russian Name Tagging as auxiliary tasks for Chechen Name Tagging."
                    },
                    {
                        "id": 129,
                        "string": "We take Name Tagging as the target task for three reasons: (1) POS Tagging has a much lower requirement for the amount of training data."
                    },
                    {
                        "id": 130,
                        "string": "For example, using only 10 training sentences, our baseline model achieves 75.5% and 82.9% prediction accuracy on Dutch and Spanish; (2) Compared to POS Tagging, Name Tagging has been considered as a more challenging task; (3) Existing POS Tagging resources are relatively richer than Name Tagging ones; e.g., the CoNLL 2017 data set provides POS Tagging training data for 45 languages."
                    },
                    {
                        "id": 131,
                        "string": "Name Tagging also has a higher annotation cost as its annotation guidelines are usually more complicated."
                    },
                    {
                        "id": 132,
                        "string": "We can see that our model substantially outperforms the mono-lingual single-task baseline model and obtains visible gains over single transfer models."
                    },
                    {
                        "id": 133,
                        "string": "When trained with less than 50 main tasks training sentences, cross-lingual transfer consistently surpasses cross-task transfer, which is not surprising because in the latter scheme, the linear layer and CRFs layer of the main model are not shared with other models and thus cannot be fully trained with little data."
                    },
                    {
                        "id": 134,
                        "string": "Because there are only 20,400 sentences in Chechen documents, we also experiment with the data augmentation method described in Section 2.2 by training word embeddings on a mixture of Russian and Chechen data."
                    },
                    {
                        "id": 135,
                        "string": "This method yields additional 3.5%-10.0% absolute F-score gains."
                    },
                    {
                        "id": 136,
                        "string": "We also experiment with transferring from English to Chechen."
                    },
                    {
                        "id": 137,
                        "string": "Because Chechen uses Cyrillic alphabet , we convert its data set to Latin script."
                    },
                    {
                        "id": 138,
                        "string": "Surprisingly, although these two languages are not close, we get more improvement by using English as the auxiliary language."
                    },
                    {
                        "id": 139,
                        "string": "In Table 3 , we compare our model with state-ofthe-art models using all Dutch or Spanish Name Tagging data."
                    },
                    {
                        "id": 140,
                        "string": "Results show that although we design this architecture for low-resource settings, it also achieves good performance in high-resource settings."
                    },
                    {
                        "id": 141,
                        "string": "In this experiment, with sufficient training data for the target task, we perform another round of parameter sweeping."
                    },
                    {
                        "id": 142,
                        "string": "We increase the embedding sizes and LSTM hidden state size to 100 and 225 respectively."
                    },
                    {
                        "id": 143,
                        "string": "Qualitative Analysis In Table 4 , we compare Name Tagging results from the baseline model and our model, both trained with 100 main task sentences."
                    },
                    {
                        "id": 144,
                        "string": "The first three examples show that shared character-level networks can transfer different levels of morphological and semantic information."
                    },
                    {
                        "id": 145,
                        "string": "Table 3 : Comparison with state-of-the-art models."
                    },
                    {
                        "id": 146,
                        "string": "In example #1, the baseline model fails to identify \"Palestijnen\", an unseen word in the Dutch data, while our model can recognize it because the shared CharCNN represents it in a way similar to its corresponding English word \"Palestinians\", which occurs 20 times."
                    },
                    {
                        "id": 147,
                        "string": "In addition to mentions, the shared CharCNN can also improve representations of context words, such as \"staat\" (state) in the example."
                    },
                    {
                        "id": 148,
                        "string": "For some words dissimilar to corresponding English words, the CharCNN may enhance their word representations by transferring morpheme-level knowledge."
                    },
                    {
                        "id": 149,
                        "string": "For example, in sentence #2, our model is able to identify \"Rusland\" (Russia) as the suffix -land is usually associated with location names in the English data; e.g., Finland."
                    },
                    {
                        "id": 150,
                        "string": "Furthermore, the CharCNN is capable of capturing some word-level patterns, such as capitalized hyphenated compound and acronym as example #3 shows."
                    },
                    {
                        "id": 151,
                        "string": "In this sentence, neither \"PMScentra\" nor \"MST\" can be found in auxiliary task data, while we observe a number of similar expressions, such as American-style and LDP."
                    },
                    {
                        "id": 152,
                        "string": "The transferred knowledge also helps reduce overfitting."
                    },
                    {
                        "id": 153,
                        "string": "For example, in sentence #4, the baseline model mistakenly tags \"sección\" (section) and \"consellería\" (department) as organizations because their capitalized forms usually appear in Spanish organization names."
                    },
                    {
                        "id": 154,
                        "string": "With knowledge learned in auxiliary tasks that a lowercased word is rarely tagged as a proper noun, our model is able to avoid overfitting and correct these errors."
                    },
                    {
                        "id": 155,
                        "string": "Sentence #5 shows an opposite situation, where the capitalized word \"campesinos\" (farm worker) never appears in Spanish names."
                    },
                    {
                        "id": 156,
                        "string": "In Table 5 , we show differences between cross-lingual transfer and cross-task transfer."
                    },
                    {
                        "id": 157,
                        "string": "Although the cross-task transfer model recognizes \"Ingeborg Marx\" missed by the baseline model, it mistakenly assigns an S-PER tag to \"Marx\"."
                    },
                    {
                        "id": 158,
                        "string": "Instead, from English Name Tagging, the cross-lingual transfer model borrows task-specific knowledge through the shared CRFs layer that (1) B-PER→S-PER is an invalid transition, and (2) even if we assign S-PER to \"Ingeborg\", it is rare to have continuous person names without any conjunction or punctuation."
                    },
                    {
                        "id": 159,
                        "string": "Thus, the cross-lingual model promotes the sequence B-PER→E-PER."
                    },
                    {
                        "id": 160,
                        "string": "In Figure 6 , we depict the change of tag distribution with the number of training sentences."
                    },
                    {
                        "id": 161,
                        "string": "When trained with less than 100 sentences, the baseline model only correctly predicts a few tags dominated by frequent types."
                    },
                    {
                        "id": 162,
                        "string": "By contrast, our model has a visibly higher recall and better predicts infrequent tags, which can be attributed to the implicit data augmentation and inductive bias introduced by MTL (Ruder, 2017) ."
                    },
                    {
                        "id": 163,
                        "string": "For example, if all location names in the Dutch training data are single-token ones, the baseline model will inevitably overfit to the tag S-LOC and possibly label \"Caldera de Taburiente\" as [S-LOC Caldera] [S-LOC de] [S-LOC Taburiente], whereas with the shared CRFs layer fully trained on English Name Tagging, our model prefers B-LOC→I-LOC→E-LOC, which receives a higher transition score."
                    },
                    {
                        "id": 164,
                        "string": "Ablation Studies In order to quantify the contributions of individual components, we conduct ablation studies on Dutch Name Tagging with different numbers of training sentences for the target task."
                    },
                    {
                        "id": 165,
                        "string": "For the basic model, we we use separate LSTM layers and  remove the character embeddings, highway networks, language-specific layer, and Dropout layer."
                    },
                    {
                        "id": 166,
                        "string": "As Table 6 shows, adding each component usually enhances the performance (F-score, %), while the impact also depends on the size of the target task data."
                    },
                    {
                        "id": 167,
                        "string": "For example, the language-specific layer slightly impairs the performance with only 10 training sentences."
                    },
                    {
                        "id": 168,
                        "string": "However, this is unsurpris-ing as it introduces additional parameters that are only trained by the target task data."
                    },
                    {
                        "id": 169,
                        "string": "Table 6 : Performance comparison between models with different components (C: character embedding; L: shared LSTM; S: language-specific layer; H: highway networks; D: dropout)."
                    },
                    {
                        "id": 170,
                        "string": "Effect of the Amount of Auxiliary Task Data For many low-resource languages, their related languages are also low-resource."
                    },
                    {
                        "id": 171,
                        "string": "To evaluate our model's sensitivity to the amount of auxiliary task data, we fix the size of main task data and downsample all auxiliary task data with sample rates from 1% to 50%."
                    },
                    {
                        "id": 172,
                        "string": "As Figure 7 shows, the performance goes up when we raise the sample rate from 1% to 20%."
                    },
                    {
                        "id": 173,
                        "string": "However, we do not observe significant improvement when we further increase the sample rate."
                    },
                    {
                        "id": 174,
                        "string": "By comparing scores in Figure 3 and Figure 7 , we can see that using only 1% auxiliary data, our model already obtains 3.7%-9.7% absolute F-score gains."
                    },
                    {
                        "id": 175,
                        "string": "Due to space limitations, we only show curves for Dutch Name Tagging, while we observe similar results on other tasks."
                    },
                    {
                        "id": 176,
                        "string": "Therefore, we may conclude that our model does not heavily rely on the amount of auxiliary task data."
                    },
                    {
                        "id": 177,
                        "string": "Related Work Multi-task Learning has been applied in different NLP areas, such as machine translation (Luong et al., 2016; Dong et al., 2015; Domhan and Hieber, 2017 ), text classification (Liu et al., 2017) , dependency parsing , textual entailment (Hashimoto et al., 2017) , text summarization (Isonuma et al., 2017) and sequence labeling (Collobert and Weston, 2008; Søgaard and Goldberg, 2016; Rei, 2017; Peng and Dredze, 2017; Yang et al., 2017; von Däniken and Cieliebak, 2017; Aguilar et al., 2017; Liu et al., 2018) Collobert and Weston (2008) is an early attempt that applies MTL to sequence labeling."
                    },
                    {
                        "id": 178,
                        "string": "The authors train a CNN model jointly on POS Tagging, Semantic Role Labeling, Name Tagging, chunking, and language modeling using parameter sharing."
                    },
                    {
                        "id": 179,
                        "string": "Instead of using other sequence labeling tasks, Rei (2017) and Liu et al."
                    },
                    {
                        "id": 180,
                        "string": "(2018) take language modeling as the secondary training objective to extract semantic and syntactic knowledge from large scale raw text without additional supervision."
                    },
                    {
                        "id": 181,
                        "string": "In (Yang et al., 2017) , the authors propose three transfer models for crossdomain, cross-application, and cross-lingual trans-fer for sequence labeling, and also simulate a lowresource setting by downsampling the training data."
                    },
                    {
                        "id": 182,
                        "string": "By contrast, we combine cross-task transfer and cross-lingual transfer within a unified architecture to transfer different types of knowledge from multiple auxiliary tasks simultaneously."
                    },
                    {
                        "id": 183,
                        "string": "In addition, because our model is designed for lowresource settings, we share components among models in a different way (e.g., the LSTM layer is shared across all models)."
                    },
                    {
                        "id": 184,
                        "string": "Differing from most MTL models, which perform supervisions for all tasks on the outermost layer, (Søgaard and Goldberg, 2016) proposes an MTL model which supervised tasks at different levels."
                    },
                    {
                        "id": 185,
                        "string": "It shows that supervising low-level tasks such as POS Tagging at lower layer obtains better performance."
                    },
                    {
                        "id": 186,
                        "string": "Conclusions and Future Work We design a multi-lingual multi-task architecture for low-resource settings."
                    },
                    {
                        "id": 187,
                        "string": "We evaluate the model on sequence labeling tasks with three language pairs."
                    },
                    {
                        "id": 188,
                        "string": "Experiments show that our model can effectively transfer different types of knowledge to improve the main model."
                    },
                    {
                        "id": 189,
                        "string": "It substantially outperforms the mono-lingual single-task baseline model, cross-lingual transfer model, and crosstask transfer model."
                    },
                    {
                        "id": 190,
                        "string": "The next step of this research is to apply this architecture to other types of tasks, such as Event Extract and Semantic Role Labeling that involve structure prediction."
                    },
                    {
                        "id": 191,
                        "string": "We also plan to explore the possibility of integrating incremental learning into this architecture to adapt a trained model for new tasks rapidly."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 28
                    },
                    {
                        "section": "Basic Architecture",
                        "n": "2.1",
                        "start": 29,
                        "end": 48
                    },
                    {
                        "section": "Multi-task Multi-lingual Architecture",
                        "n": "2.2",
                        "start": 49,
                        "end": 106
                    },
                    {
                        "section": "Data Sets",
                        "n": "3.1",
                        "start": 107,
                        "end": 111
                    },
                    {
                        "section": "Experimental Setup",
                        "n": "3.2",
                        "start": 112,
                        "end": 124
                    },
                    {
                        "section": "Comparison of Different Models",
                        "n": "3.3",
                        "start": 125,
                        "end": 142
                    },
                    {
                        "section": "Qualitative Analysis",
                        "n": "3.4",
                        "start": 143,
                        "end": 163
                    },
                    {
                        "section": "Ablation Studies",
                        "n": "3.5",
                        "start": 164,
                        "end": 169
                    },
                    {
                        "section": "Effect of the Amount of Auxiliary Task Data",
                        "n": "3.6",
                        "start": 170,
                        "end": 176
                    },
                    {
                        "section": "Related Work",
                        "n": "4",
                        "start": 177,
                        "end": 185
                    },
                    {
                        "section": "Conclusions and Future Work",
                        "n": "5",
                        "start": 186,
                        "end": 191
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1017-Figure4-1.png",
                        "caption": "Figure 4: Performance on Spanish Name Tagging.",
                        "page": 5,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 526.56,
                            "y1": 282.71999999999997,
                            "y2": 423.35999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Figure5-1.png",
                        "caption": "Figure 5: Performance on Chechen Name Tagging.",
                        "page": 5,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 525.6,
                            "y1": 463.68,
                            "y2": 603.36
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Figure3-1.png",
                        "caption": "Figure 3: Performance on Dutch Name Tagging. We scale the horizontal axis to show more details under 100 sentences. Our Model*: our model with MUSE cross-lingual embeddings.",
                        "page": 5,
                        "bbox": {
                            "x1": 308.15999999999997,
                            "x2": 525.6,
                            "y1": 62.4,
                            "y2": 202.56
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Figure1-1.png",
                        "caption": "Figure 1: LSTM-CNNs: an LSTM-CRFs-based model for Sequence Labeling",
                        "page": 1,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 101.75999999999999,
                            "y2": 264.96
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Table3-1.png",
                        "caption": "Table 3: Comparison with state-of-the-art models.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 61.44,
                            "y2": 244.32
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Figure6-1.png",
                        "caption": "Figure 6: The distribution of correctly predicted tags on Dutch Name Tagging. The height of each stack indicates the number of a certain tag.",
                        "page": 6,
                        "bbox": {
                            "x1": 308.15999999999997,
                            "x2": 524.64,
                            "y1": 466.56,
                            "y2": 604.8
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Table6-1.png",
                        "caption": "Table 6: Performance comparison between models with different components (C: character embedding; L: shared LSTM; S: language-specific layer; H: highway networks; D: dropout).",
                        "page": 7,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 471.84,
                            "y2": 557.28
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Table4-1.png",
                        "caption": "Table 4: Name Tagging results, each of which contains an English translation, result of the baseline",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 62.4,
                            "y2": 379.2
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Table5-1.png",
                        "caption": "Table 5: Comparing cross-task transfer and crosslingual transfer on Dutch Name Tagging with 100 training sentences.",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 432.47999999999996,
                            "y2": 585.12
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Figure2-1.png",
                        "caption": "Figure 2: Multi-task Multi-lingual Architecture",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 524.16,
                            "y1": 62.879999999999995,
                            "y2": 271.2
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Figure7-1.png",
                        "caption": "Figure 7: The effect of the amount of auxiliary task data on Dutch Name Tagging.",
                        "page": 8,
                        "bbox": {
                            "x1": 72.48,
                            "x2": 289.44,
                            "y1": 210.72,
                            "y2": 337.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Table1-1.png",
                        "caption": "Table 1: Name Tagging data set statistics: #token and #name (between parentheses).",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 446.88,
                            "y2": 514.0799999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1017-Table2-1.png",
                        "caption": "Table 2: Hyper-parameter search space.",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 464.64,
                            "y2": 563.04
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-19"
        },
        {
            "slides": {
                "2": {
                    "title": "Discourse Marker",
                    "text": [
                        "A discourse marker is a word or a phrase that plays a role in managing the flow and structure of discourse.",
                        "Examples: so, because, and, but, or"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "3": {
                    "title": "Discourse Marker and NLI",
                    "text": [
                        "But Because If Although And So"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "4": {
                    "title": "Related Works",
                    "text": [
                        "SOTA Neural Network Models",
                        "Transfer Learning for NLI"
                    ],
                    "page_nums": [
                        7,
                        8
                    ],
                    "images": []
                },
                "5": {
                    "title": "Discourse Marker Prediction DMP",
                    "text": [
                        "Its rainy outside But + We will not take the umbrella",
                        "(S1, S2) Neural Networks M",
                        "Max pooling over all the hidden states Prediction"
                    ],
                    "page_nums": [
                        9,
                        10
                    ],
                    "images": []
                },
                "10": {
                    "title": "Experiments Analysis",
                    "text": [
                        "Premise: 3 young man in hoods standing in the middle of a quiet street facing the camera. Hypothesis: Three people sit by a busy street bare-headed."
                    ],
                    "page_nums": [
                        19,
                        20
                    ],
                    "images": [
                        "figure/image/1023-Table5-1.png"
                    ]
                }
            },
            "paper_title": "Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference",
            "paper_id": "1023",
            "paper": {
                "title": "Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference",
                "abstract": "Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is one of the most important problems in natural language processing. It requires to infer the logical relationship between two given sentences. While current approaches mostly focus on the interaction architectures of the sentences, in this paper, we propose to transfer knowledge from some important discourse markers to augment the quality of the NLI model. We observe that people usually use some discourse markers such as \"so\" or \"but\" to represent the logical relationship between two sentences. These words potentially have deep connections with the meanings of the sentences, thus can be utilized to help improve the representations of them. Moreover, we use reinforcement learning to optimize a new objective function with a reward defined by the property of the NLI datasets to make full use of the labels information. Experiments show that our method achieves the state-of-the-art performance on several large-scale datasets. 1 Here sentences mean either the whole sentences or the main clauses of a compound sentence.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction In this paper, we focus on the task of Natural Language Inference (NLI), which is known as a significant yet challenging task for natural language understanding."
                    },
                    {
                        "id": 1,
                        "string": "In this task, we are given two sentences which are respectively called premise and hypothesis."
                    },
                    {
                        "id": 2,
                        "string": "The goal is to determine whether the logical relationship between them is entailment, neutral, or contradiction."
                    },
                    {
                        "id": 3,
                        "string": "Recently, performance on NLI (Chen et al., 2017b; Gong et al., 2018; Chen et al., 2017c) has been significantly boosted since the release of some high quality large-scale benchmark datasets such as SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017)."
                    },
                    {
                        "id": 4,
                        "string": "Table 1 examples: Premise: A soccer game with multiple males playing."
                    },
                    {
                        "id": 5,
                        "string": "Hypothesis: Some men are playing a sport. Label: Entailment"
                    },
                    {
                        "id": 6,
                        "string": "Premise: An older and younger man smiling. Hypothesis: Two men are smiling and laughing at the cats playing on the floor. Label: Neutral"
                    },
                    {
                        "id": 7,
                        "string": "Premise: A black race car starts up in front of a crowd of people. Hypothesis: A man is driving down a lonely road."
                    },
                    {
                        "id": 8,
                        "string": "Label: Contradiction"
                    },
                    {
                        "id": 9,
                        "string": "Table 1 shows some examples in SNLI."
                    },
                    {
                        "id": 10,
                        "string": "Most state-of-the-art works focus on the interaction architectures between the premise and the hypothesis, while they rarely concerned the discourse relations of the sentences, which is a core issue in natural language understanding."
                    },
                    {
                        "id": 11,
                        "string": "People usually use some certain set of words to express the discourse relation between two sentences 1 ."
                    },
                    {
                        "id": 12,
                        "string": "These words, such as \"but\" or \"and\", are denoted as discourse markers."
                    },
                    {
                        "id": 13,
                        "string": "These discourse markers have deep connections with the intrinsic relations of two sentences and intuitively correspond to the intent of NLI, such as \"but\" to \"contradiction\", \"so\" to \"entailment\", etc."
                    },
                    {
                        "id": 14,
                        "string": "Very few NLI works utilize this information revealed by discourse markers."
                    },
                    {
                        "id": 15,
                        "string": "proposed to use discourse markers to help rep-resent the meanings of the sentences."
                    },
                    {
                        "id": 16,
                        "string": "However, they represent each sentence by a single vector and directly concatenate them to predict the answer, which is too simple and not ideal for the largescale datasets."
                    },
                    {
                        "id": 17,
                        "string": "In this paper, we propose a Discourse Marker Augmented Network for natural language inference, where we transfer the knowledge from the existing supervised task: Discourse Marker Prediction (DMP) , to an integrated NLI model."
                    },
                    {
                        "id": 18,
                        "string": "We first propose a sentence encoder model that learns the representations of the sentences from the DMP task and then inject the encoder to the NLI network."
                    },
                    {
                        "id": 19,
                        "string": "Moreover, because our NLI datasets are manually annotated, each example from the datasets might get several different labels from the annotators although they will finally come to a consensus and also provide a certain label."
                    },
                    {
                        "id": 20,
                        "string": "In consideration of that different confidence level of the final labels should be discriminated, we employ reinforcement learning with a reward defined by the uniformity extent of the original labels to train the model."
                    },
                    {
                        "id": 21,
                        "string": "The contributions of this paper can be summarized as follows."
                    },
                    {
                        "id": 22,
                        "string": "• Unlike previous studies, we solve the task of the natural language inference via transferring knowledge from another supervised task."
                    },
                    {
                        "id": 23,
                        "string": "We propose the Discourse Marker Augmented Network to combine the learned encoder of the sentences with the integrated NLI model."
                    },
                    {
                        "id": 24,
                        "string": "• According to the property of the datasets, we incorporate reinforcement learning to optimize a new objective function to make full use of the labels' information."
                    },
                    {
                        "id": 25,
                        "string": "• We conduct extensive experiments on two large-scale datasets to show that our method achieves better performance than other stateof-the-art solutions to the problem."
                    },
                    {
                        "id": 26,
                        "string": "Task Description Natural Language Inference (NLI) In the natural language inference tasks, we are given a pair of sentences (P, H), which respectively means the premise and hypothesis."
                    },
                    {
                        "id": 27,
                        "string": "Our goal is to judge whether their logical relationship between their meanings by picking a label from a small set: entailment (The hypothesis is definitely a true description of the premise), neutral (The hypothesis might be a true description of the premise), and contradiction (The hypothesis is definitely a false description of the premise)."
                    },
                    {
                        "id": 28,
                        "string": "Discourse Marker Prediction (DMP) For DMP, we are given a pair of sentences (S 1 , S 2 ), which is originally the first half and second half of a complete sentence."
                    },
                    {
                        "id": 29,
                        "string": "The model must predict which discourse marker was used by the author to link the two ideas from a set of candidates."
                    },
                    {
                        "id": 30,
                        "string": "Sentence Encoder Model Following , we use BookCorpus  as our training data for discourse marker prediction, which is a dataset of text from unpublished novels, and it is large enough to avoid bias towards any particular domain or application."
                    },
                    {
                        "id": 31,
                        "string": "After preprocessing, we obtain a dataset with the form (S 1 , S 2 , m), which means the first half sentence, the last half sentence, and the discourse marker that connected them in the original text."
                    },
                    {
                        "id": 32,
                        "string": "Our goal is to predict the m given S 1 and S 2 ."
                    },
                    {
                        "id": 33,
                        "string": "We first use Glove (Pennington et al., 2014) to transform {S t } 2 t=1 into vectors word by word and subsequently input them to a bi-directional LSTM: − → h i t = − −−− → LSTM(Glove(S i t )), i = 1, ..., |S t | ← − h i t = ← −−− − LSTM(Glove(S i t )), i = |S t |, ..., 1 (1) where Glove(w) is the embedding vector of the word w from the Glove lookup table, |S t | is the length of the sentence S t ."
                    },
                    {
                        "id": 34,
                        "string": "We apply max pooling on the concatenation of the hidden states from both directions, which provides regularization and shorter back-propagation paths (Collobert and Weston, 2008) , to extract the features of the whole sequences of vectors: − → r t = Max dim ([ − → h 1 t ; − → h 2 t ; ...; − − → h |St| t ]) ← − r t = Max dim ([ ← − h 1 t ; ← − h 2 t ; ...; ← − − h |St| t ]) (2) where Max dim means that the max pooling is performed across each dimension of the concatenated vectors, [; ] denotes concatenation."
                    },
                    {
                        "id": 35,
                        "string": "Moreover, we combine the last hidden state from both directions and the results of max pooling to represent our sentences: where r t is the representation vector of the sentence S t ."
                    },
                    {
                        "id": 36,
                        "string": "To predict the discource marker between S 1 and S 2 , we combine the representations of them with some linear operation: r t = [ − → r t ; ← − r t ; − − → h |St| t ; ← − h 1 t ] (3) r = [r 1 ; r 2 ; r 1 + r 2 ; r 1 r 2 ] (4) where is elementwise product."
                    },
                    {
                        "id": 37,
                        "string": "Finally we project r to a vector of label size (the total number of discourse markers in the dataset) and use softmax function to normalize the probability distribution."
                    },
                    {
                        "id": 38,
                        "string": "Discourse Marker Augmented Network As presented in Figure 1 , we show how our Discourse Marker Augmented Network incorporates the learned encoder into the NLI model."
                    },
                    {
                        "id": 39,
                        "string": "Encoding Layer We denote the premise as P and the hypothesis as H. To encode the words, we use the concatenation of following parts: Word Embedding: Similar to the previous section, we map each word to a vector space by using pre-trained word vectors GloVe."
                    },
                    {
                        "id": 40,
                        "string": "Character Embedding: We apply Convolutional Neural Networks (CNN) over the characters of each word."
                    },
                    {
                        "id": 41,
                        "string": "This approach is proved to be helpful in handling out-of-vocab (OOV) words (Yang et al., 2017) ."
                    },
                    {
                        "id": 42,
                        "string": "POS and NER tags: We use the part-of-speech (POS) tags and named-entity recognition (NER) tags to get syntactic information and entity label of the words."
                    },
                    {
                        "id": 43,
                        "string": "Following (Pan et al., 2017b) , we apply the skip-gram model (Mikolov et al., 2013) to train two new lookup tables of POS tags and NER tags respectively."
                    },
                    {
                        "id": 44,
                        "string": "Each word can get its own POS embedding and NER embedding by these lookup tables."
                    },
                    {
                        "id": 45,
                        "string": "This approach represents much better geometrical features than common used one-hot vectors."
                    },
                    {
                        "id": 46,
                        "string": "Exact Match: Inspired by the machine comprehension tasks (Chen et al., 2017a) , we want to know whether every word in P is in H (and H in P )."
                    },
                    {
                        "id": 47,
                        "string": "We use three binary features to indicate whether the word can be exactly matched to any question word, which respectively means original form, lowercase and lemma form."
                    },
                    {
                        "id": 48,
                        "string": "For encoding, we pass all sequences of vectors into a bi-directional LSTM and obtain: p i = BiLSTM(f rep (P i ), p i−1 ), i = 1, ..., n u j = BiLSTM(f rep (H j ), u j−1 ), j = 1, ..., m (5) where f rep (x) = [Glove(x); Char(x); POS(x); NER(x); EM(x)] is the concatenation of the embedding vectors and the feature vectors of the word x, n = |P |, m = |H|."
                    },
                    {
                        "id": 49,
                        "string": "Interaction Layer In this section, we feed the results of the encoding layer and the learned sentence encoder into the attention mechanism, which is responsible for linking and fusing information from the premise and the hypothesis words."
                    },
                    {
                        "id": 50,
                        "string": "We first obtain a similarity matrix A ∈ R n×m between the premise and hypothesis by A ij = v 1 [p i ; u j ; p i • u j ; r p ; r h ] (6) where v 1 is the trainable parameter, r p and r h are sentences representations from the equation (3) learned in the Section 3, which denote the premise and hypothesis respectively."
                    },
                    {
                        "id": 51,
                        "string": "In addition to previous popular similarity matrix, we incorporate the relevance of each word of P (H) to the whole sentence of H(P )."
                    },
                    {
                        "id": 52,
                        "string": "Now we use A to obtain the attentions and the attended vectors in both directions."
                    },
                    {
                        "id": 53,
                        "string": "To signify the attention of the i-th word of P to every word of H, we use the weighted sum of u j by A i: :ũ i = j A ij · u j (7) whereũ i is the attention vector of the i-th word of P for the entire H. In the same way, thep j is obtained via:p j = i A ij · p i (8) To model the local inference between aligned word pairs, we integrate the attention vectors with the representation vectors via: p i = f ([p i ;ũ i ; p i −ũ i ; p i ũ i ]) u j = f ([u j ;p j ; u j −p j ; u j p j ]) (9) where f is a 1-layer feed-forward neural network with the ReLU activation function,p i andû j are local inference vectors."
                    },
                    {
                        "id": 54,
                        "string": "Inspired by (Seo et al., 2016) and (Chen et al., 2017b) , we use a modeling layer to capture the interaction between the premise and the hypothesis."
                    },
                    {
                        "id": 55,
                        "string": "Specifically, we use bi-directional LSTMs as building blocks: p M i = BiLSTM(p i , p M i−1 ) u M j = BiLSTM(û j , u M j−1 ) (10) Here, p M i and u M j are the modeling vectors which contain the crucial information and relationship among the sentences."
                    },
                    {
                        "id": 56,
                        "string": "We compute the representation of the whole sentence by the weighted average of each word:  where v 2 , v 3 are trainable vectors."
                    },
                    {
                        "id": 57,
                        "string": "We don't share these parameter vectors in this seemingly parallel strucuture because there is some subtle difference between the premise and hypothesis, which will be discussed later in Section 5. p M = i exp(v 2 p M i ) i exp(v 2 p M i ) p M i u M = j exp(v 3 u M j ) j exp(v 3 u M j ) u M j (11) Output Layer The NLI task requires the model to predict the logical relation from the given set: entailment, neutral or contradiction."
                    },
                    {
                        "id": 58,
                        "string": "We obtain the probability distribution by a linear function with softmax function: d = softmax(W[p M ; u M ; p M u M ; r p r h ]) (12) where W is a trainable parameter."
                    },
                    {
                        "id": 59,
                        "string": "We combine the representations of the sentences computed above with the representations learned from DMP to obtain the final prediction."
                    },
                    {
                        "id": 60,
                        "string": "Training As shown in Table 2 , many examples from our datasets are labeled by several people, and the choices of the annotators are not always consistent."
                    },
                    {
                        "id": 61,
                        "string": "For instance, when the label number is 3 in SNLI, \"total=0\" means that no examples have 3 annotators (maybe more or less); \"correct=8748\" means that there are 8748 examples whose number of correct labels is 3 (the number of annotators maybe 4 or 5, but some provided wrong labels)."
                    },
                    {
                        "id": 62,
                        "string": "Although all the labels for each example will be unified to a final (correct) label, diversity of the labels for a single example indicates the low confidence of the result, which is not ideal to only use the final label to optimize the model."
                    },
                    {
                        "id": 63,
                        "string": "We propose a new objective function that combines both the log probabilities of the ground-truth label and a reward defined by the property of the datasets for the reinforcement learning."
                    },
                    {
                        "id": 64,
                        "string": "The most widely used objective function for the natural language inference is to minimize the negative log cross-entropy loss: J CE (Θ) = − 1 N N k log(d k l ) (13) where Θ are all the parameters to optimize, N is the number of examples in the dataset, d l is the probability of the ground-truth label l. However, directly using the final label to train the model might be difficult in some situations, where the example is confusing and the labels from the annotators are different."
                    },
                    {
                        "id": 65,
                        "string": "For instance, consider an example from the SNLI dataset: • P : \"A smiling costumed woman is holding an umbrella.\""
                    },
                    {
                        "id": 66,
                        "string": "• H: \"A happy woman in a fairy costume holds an umbrella.\""
                    },
                    {
                        "id": 67,
                        "string": "The final label is neutral, but the original labels from the five annotators are neural, neural, entailment, contradiction, neural, in which case the relation between \"smiling\" and \"happy\" might be under different comprehension."
                    },
                    {
                        "id": 68,
                        "string": "The final label's confidence of this example is obviously lower than an example that all of its labels are the same."
                    },
                    {
                        "id": 69,
                        "string": "To simulate the thought of human being more closely, in this paper, we tackle this problem by using the REINFORCE algorithm (Williams, 1992) to minimize the negative expected reward, which is defined as: J RL (Θ) = −E l∼π(l|P,H) [R(l, {l * })] (14) where π(l|P, H) is the previous action policy that predicts the label given P and H, {l * } is the set of annotated labels, and R(l, {l * }) = number of l in {l * } |{l * }| (15) is the reward function defined to measure the distance to all the ideas of the annotators."
                    },
                    {
                        "id": 70,
                        "string": "To avoid of overwriting its earlier results and further stabilize training, we use a linear function to integrate the above two objective functions: J(Θ) = λJ CE (Θ) + (1 − λ)J RL (Θ) (16) where λ is a tunable hyperparameter."
                    },
                    {
                        "id": 71,
                        "string": "Experiments Datasets BookCorpus: We use the dataset from BookCorpus  to pre-train our sentence encoder model."
                    },
                    {
                        "id": 72,
                        "string": "We preprocessed and collected discourse markers from BookCorpus as ."
                    },
                    {
                        "id": 73,
                        "string": "We finally curated a dataset of 6527128 pairs of sentences for 8 discourse markers, whose statistics are shown in Table 3 ."
                    },
                    {
                        "id": 74,
                        "string": "SNLI: Stanford Natural Language Inference(Bowman et al., 2015) is a collection of more than 570k human annotated sentence pairs labeled for entailment, contradiction, and semantic independence."
                    },
                    {
                        "id": 75,
                        "string": "SNLI is two orders of magnitude larger than all other resources of its type."
                    },
                    {
                        "id": 76,
                        "string": "The premise data is extracted from the captions of the Flickr30k corpus (Young et al., 2014) , the hypothesis data and the labels are manually annotated."
                    },
                    {
                        "id": 77,
                        "string": "The original SNLI corpus contains also the other category, which includes the sentence pairs lacking consensus among multiple human annotators."
                    },
                    {
                        "id": 78,
                        "string": "We remove this category and use the same split as in (Bowman et al., 2015) and other previous work."
                    },
                    {
                        "id": 79,
                        "string": "MultiNLI: Multi-Genre Natural Language Inference (Williams et al., 2017) is another large-scale corpus for the task of NLI."
                    },
                    {
                        "id": 80,
                        "string": "MultiNLI has 433k sentences pairs and is in the same format as SNLI, but it includes a more diverse range of text, as well as an auxiliary test set for cross-genre transfer evaluation."
                    },
                    {
                        "id": 81,
                        "string": "Half of these selected genres appear in training set while the rest are not, creating in-domain (matched) and cross-domain (mismatched) development/test sets."
                    },
                    {
                        "id": 82,
                        "string": "Method SNLI MultiNLI Matched Mismatched 300D LSTM encoders (Bowman et al., 2016) 80.6 --300D Tree-based CNN encoders (Mou et al., 2016) 82.1 --4096D BiLSTM with max-pooling (Conneau et al., 2017) 84.5 --600D Gumbel TreeLSTM encoders (Choi et al., 2017) 86.0 --600D Residual stacked encoders (Nie and Bansal, 2017) 86.0 74.6 73.6 Gated-Att BiLSTM (Chen et al., 2017d) -73.2 73.6 100D LSTMs with attention (Rocktäschel et al., 2016) 83.5 --300D re-read LSTM (Sha et al., 2016) 87.5 --DIIN (Gong et al., 2018) 88.0 78.8 77.8 Biattentive Classification Network (McCann et al., 2017) 88.1 --300D CAFE (Tay et al., 2017) 88.5 78.7 77.9 KIM (Chen et al., 2017b) 88.6 --600D ESIM + 300D Syntactic TreeLSTM (Chen et al., 2017c) 88.8 --DIIN(Ensemble) (Gong et al., 2018) 88.9 80.0 78.7 KIM(Ensemble) (Chen et al., 2017b) 89.1 --300D CAFE(Ensemble) (Tay et al., 2017) 89 Implementation Details We use the Stanford CoreNLP toolkit  to tokenize the words and generate POS and NER tags."
                    },
                    {
                        "id": 83,
                        "string": "The word embeddings are initialized by 300d Glove (Pennington et al., 2014) , the dimensions of POS and NER embeddings are 30 and 10."
                    },
                    {
                        "id": 84,
                        "string": "The dataset we use to train the embeddings of POS tags and NER tags are the training set given by SNLI."
                    },
                    {
                        "id": 85,
                        "string": "We apply Tensorflow r1.3 as our neural network framework."
                    },
                    {
                        "id": 86,
                        "string": "We set the hidden size as 300 for all the LSTM layers and apply dropout (Srivastava et al., 2014) between layers with an initial ratio of 0.9, the decay rate as 0.97 for every 5000 step."
                    },
                    {
                        "id": 87,
                        "string": "We use the AdaDelta for optimization as described in (Zeiler, 2012) with ρ as 0.95 and as 1e-8."
                    },
                    {
                        "id": 88,
                        "string": "We set our batch size as 36 and the initial learning rate as 0.6."
                    },
                    {
                        "id": 89,
                        "string": "The parameter λ in the objective function is set to be 0.2."
                    },
                    {
                        "id": 90,
                        "string": "For DMP task, we use stochastic gradient descent with initial learning rate as 0.1, and we anneal by half each time the validation accuracy is lower than the previous epoch."
                    },
                    {
                        "id": 91,
                        "string": "The number of epochs is set to be 10, and the feedforward dropout rate is 0.2."
                    },
                    {
                        "id": 92,
                        "string": "The learned encoder in subsequent NLI task is trainable."
                    },
                    {
                        "id": 93,
                        "string": "Results In (2016) proposed a simple baseline that uses LSTM to encode the whole sentences and feed them into a MLP classifier to predict the final inference relationship, they achieve an accuracy of 80.6% on SNLI."
                    },
                    {
                        "id": 94,
                        "string": "Nie and Bansal (2017) test their model on both SNLI and MiltiNLI, and achieves competitive results."
                    },
                    {
                        "id": 95,
                        "string": "In the medium part, we show the results of other neural network models."
                    },
                    {
                        "id": 96,
                        "string": "Obviously, the performance of most of the integrated methods are better than the sentence encoding based models above."
                    },
                    {
                        "id": 97,
                        "string": "Both DIIN (Gong et al., 2018) and  We present the ensemble results on both datasets in the bottom part of the table 4."
                    },
                    {
                        "id": 98,
                        "string": "We build an ensemble model which consists of 10 single models with the same architecture but initialized with different parameters."
                    },
                    {
                        "id": 99,
                        "string": "The performance of our model achieves 89.6% on SNLI, 80.3% on matched MultiNLI and 79.4% on mismatched MultiNLI, which are all state-of-the-art results."
                    },
                    {
                        "id": 100,
                        "string": "Ablation Analysis As shown in Table 5 , we conduct an ablation experiment on SNLI development dataset to evaluate the individual contribution of each component of our model."
                    },
                    {
                        "id": 101,
                        "string": "Firstly we only use the results of the sentence encoder model to predict the answer, in other words, we represent each sentence by a single vector and use dot product with a linear function to do the classification."
                    },
                    {
                        "id": 102,
                        "string": "The result is obviously not satisfactory, which indicates that only using sentence embedding from discourse markers to predict the answer is not ideal in large-scale datasets."
                    },
                    {
                        "id": 103,
                        "string": "We then remove the sentence encoder model, which means we don't use the knowledge transferred from the DMP task and thus the representations r p and r h are set to be zero vectors in the equation (6) and the equation (12)."
                    },
                    {
                        "id": 104,
                        "string": "We observe that the performance drops significantly to 87.24%, which is nearly 1.5% to our DMAN model, which indicates that the discourse markers have deep connections with the logical relations between two sentences they links."
                    },
                    {
                        "id": 105,
                        "string": "When Figure 2 : Performance when the sentence encoder is pretrained on different discourse markers sets."
                    },
                    {
                        "id": 106,
                        "string": "\"NONE\" means the model doesn't use any discourse markers; \"ALL\" means the model use all the discourse markers."
                    },
                    {
                        "id": 107,
                        "string": "we remove the character-level embedding and the POS and NER features, the performance drops a lot."
                    },
                    {
                        "id": 108,
                        "string": "We conjecture that those feature tags help the model represent the words as a whole while the char-level embedding can better handle the outof-vocab (OOV) or rare words."
                    },
                    {
                        "id": 109,
                        "string": "The exact match feature also demonstrates its effectiveness in the ablation result."
                    },
                    {
                        "id": 110,
                        "string": "Finally, we ablate the reinforcement learning part, in other words, we only use the original loss function to optimize the model (set λ = 1)."
                    },
                    {
                        "id": 111,
                        "string": "The result drops about 0.5%, which proves that it is helpful to utilize all the information from the annotators."
                    },
                    {
                        "id": 112,
                        "string": "Semantic Analysis In Figure 2 , we show the performance on the three relation labels when the model is pre-trained on different discourse markers sets."
                    },
                    {
                        "id": 113,
                        "string": "In other words, we removed discourse marker from the original set each time and use the rest 7 discourse markers to pre-train the sentence encoder in the DMP task and then train the DMAN."
                    },
                    {
                        "id": 114,
                        "string": "As we can see, there is a sharp decline of accuracy when removing \"but\", \"because\" and \"although\"."
                    },
                    {
                        "id": 115,
                        "string": "We can intuitively speculate that \"but\" and \"although\" have direct connections with the contradiction label (which drops most significantly) while \"because\" has some links with the entailment label."
                    },
                    {
                        "id": 116,
                        "string": "We observe that some discourse markers such as \"if\" or \"before\" contribute much less than other words which have strong logical hints, although they  actually improve the performance of the model."
                    },
                    {
                        "id": 117,
                        "string": "Compared to the other two categories, the \"contradiction\" label examples seem to benefit the most from the pre-trained sentence encoder."
                    },
                    {
                        "id": 118,
                        "string": "Visualization In Figure 3 , we also provide a visualized analysis of the hidden representation from similarity matrix A (computed in the equation (6) ) in the situations that whether we use the discourse markers or not."
                    },
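                    {
                        "id": "118-editor-note",
                        "string": "[Editor's sketch, not from the paper: a minimal way to compute and display a word-level similarity matrix such as A; the exact form of Equation (6) may differ, e.g. it may use learned projections. import numpy as np\nimport matplotlib.pyplot as plt\ndef similarity_matrix(P, H):\n    # P: (m, d) premise word vectors; H: (n, d) hypothesis word vectors\n    return P @ H.T\nA = similarity_matrix(np.random.rand(9, 300), np.random.rand(7, 300))\nplt.imshow(A)\nplt.colorbar()\nplt.show()]"
                    },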
                    {
                        "id": 119,
                        "string": "We pick a sentence pair whose premise is \"3 young man in hoods standing in the middle of a quiet street facing the camera.\""
                    },
                    {
                        "id": 120,
                        "string": "and hypothesis is \"Three people sit by a busy street bareheaded.\""
                    },
                    {
                        "id": 121,
                        "string": "We observe that the values are highly correlated among the synonyms like \"people\" with \"man\", \"three\" with \"3\" in both situations."
                    },
                    {
                        "id": 122,
                        "string": "However, words that might have contradictory meanings like \"hoods\" with \"bareheaded\", \"quiet\" with \"busy\" perform worse without the discourse markers augmentation, which conforms to the conclusion that the \"contradiction\" label examples benefit a lot which is observed in the Section 5.5."
                    },
                    {
                        "id": 123,
                        "string": "6 Related Work Discourse Marker Applications This work is inspired most directly by the DisSent model and Discourse Prediction Task of , which introduce the use of the discourse markers information for the pretraining of sentence encoders."
                    },
                    {
                        "id": 124,
                        "string": "They follow  to collect a large sentence pairs corpus from Book-Corpus  and propose a sentence representation based on that."
                    },
                    {
                        "id": 125,
                        "string": "They also apply their pre-trained sentence encoder to a series of natural language understanding tasks such as sentiment analysis, question-type, entailment, and relatedness."
                    },
                    {
                        "id": 126,
                        "string": "However, all those datasets are provided by Conneau et al."
                    },
                    {
                        "id": 127,
                        "string": "(2017) for evaluating sentence embeddings and are almost all small-scale and are not able to support more complex neural network."
                    },
                    {
                        "id": 128,
                        "string": "Moreover, they represent each sentence by a single vector and directly combine them to predict the answer, which is not able to interact among the words level."
                    },
                    {
                        "id": 129,
                        "string": "In closely related work, Jernite et al."
                    },
                    {
                        "id": 130,
                        "string": "(2017) propose a model that also leverage discourse relations."
                    },
                    {
                        "id": 131,
                        "string": "However, they manually group the discourse markers into several categories based on human knowledge and predict the category instead of the explicit discourse marker phrase."
                    },
                    {
                        "id": 132,
                        "string": "However, the size of their dataset is much smaller than that in , and sometimes there has been disagreement among annotators about what exactly is the correct categorization of discourse relations (Hobbs, 1990) ."
                    },
                    {
                        "id": 133,
                        "string": "Unlike previous works, we insert the sentence encoder into an integrated network to augment the semantic representation for NLI tasks rather than directly combining the sentence embeddings to predict the relations."
                    },
                    {
                        "id": 134,
                        "string": "Natural Language Inference Earlier research on the natural language inference was based on small-scale datasets (Marelli et al., 2014) , which relied on traditional methods such as shallow methods (Glickman et al., 2005) , natural logic methods(MacCartney and Manning, 2007) , etc."
                    },
                    {
                        "id": 135,
                        "string": "These datasets are either not large enough to support complex deep neural network models or too easy to challenge natural language."
                    },
                    {
                        "id": 136,
                        "string": "Large and complicated networks have been successful in many natural language processing tasks (Zhu et al., 2017; Chen et al., 2017e; Pan et al., 2017a) ."
                    },
                    {
                        "id": 137,
                        "string": "Recently, Bowman et al."
                    },
                    {
                        "id": 138,
                        "string": "(2015) released Stanford Natural language Inference (SNLI) dataset, which is a high-quality and large-scale benchmark, thus inspired many significant works (Bowman et al., 2016; Mou et al., 2016; Vendrov et al., 2016; Conneau et al., 2017; Gong et al., 2018; McCann et al., 2017; Chen et al., 2017b; Choi et al., 2017; Tay et al., 2017) ."
                    },
                    {
                        "id": 139,
                        "string": "Most of them focus on the improvement of the interaction architectures and obtain competitive results, while transfer learning from external knowledge is popular as well."
                    },
                    {
                        "id": 140,
                        "string": "Vendrov et al."
                    },
                    {
                        "id": 141,
                        "string": "(2016) incorpated Skipthought , which is an unsupervised sequence model that has been proven to generate useful sentence embedding."
                    },
                    {
                        "id": 142,
                        "string": "McCann et al."
                    },
                    {
                        "id": 143,
                        "string": "(2017) proposed to transfer the pre-trained encoder from the neural machine translation (NMT) to the NLI tasks."
                    },
                    {
                        "id": 144,
                        "string": "Our method combines a pre-trained sentence encoder from the DMP task with an integrated NLI model to compose a novel framework."
                    },
                    {
                        "id": 145,
                        "string": "Furthermore, unlike previous studies, we make full use of the labels provided by the annotators and employ policy gradient to optimize a new objective function in order to simulate the thought of human being."
                    },
                    {
                        "id": 146,
                        "string": "Conclusion In this paper, we propose Discourse Marker Augmented Network for the task of the natural language inference."
                    },
                    {
                        "id": 147,
                        "string": "We transfer the knowledge learned from the discourse marker prediction task to the NLI task to augment the semantic representation of the model."
                    },
                    {
                        "id": 148,
                        "string": "Moreover, we take the various views of the annotators into consideration and employ reinforcement learning to help optimize the model."
                    },
                    {
                        "id": 149,
                        "string": "The experimental evaluation shows that our model achieves the state-of-the-art results on SNLI and MultiNLI datasets."
                    },
                    {
                        "id": 150,
                        "string": "Future works involve the choice of discourse markers and some other transfer learning sources."
                    },
                    {
                        "id": 151,
                        "string": "Acknowledgements This work was supported in part by the National Nature Science Foundation of China (Grant Nos: 61751307), in part by the grant ZJU Research 083650 of the ZJUI Research Program from Zhejiang University and in part by the National Youth Top-notch Talent Support Program."
                    },
                    {
                        "id": 152,
                        "string": "The experiments are supported by Chengwei Yao in the Experiment Center of the College of Computer Science and Technology, Zhejiang university."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 24
                    },
                    {
                        "section": "Natural Language Inference (NLI)",
                        "n": "2.1",
                        "start": 25,
                        "end": 27
                    },
                    {
                        "section": "Discourse Marker Prediction (DMP)",
                        "n": "2.2",
                        "start": 28,
                        "end": 29
                    },
                    {
                        "section": "Sentence Encoder Model",
                        "n": "3",
                        "start": 30,
                        "end": 37
                    },
                    {
                        "section": "Discourse Marker Augmented Network",
                        "n": "4",
                        "start": 38,
                        "end": 38
                    },
                    {
                        "section": "Encoding Layer",
                        "n": "4.1",
                        "start": 39,
                        "end": 48
                    },
                    {
                        "section": "Interaction Layer",
                        "n": "4.2",
                        "start": 49,
                        "end": 57
                    },
                    {
                        "section": "Output Layer",
                        "n": "4.3",
                        "start": 58,
                        "end": 59
                    },
                    {
                        "section": "Training",
                        "n": "4.4",
                        "start": 60,
                        "end": 70
                    },
                    {
                        "section": "Datasets",
                        "n": "5.1",
                        "start": 71,
                        "end": 82
                    },
                    {
                        "section": "Implementation Details",
                        "n": "5.2",
                        "start": 83,
                        "end": 92
                    },
                    {
                        "section": "Results",
                        "n": "5.3",
                        "start": 93,
                        "end": 99
                    },
                    {
                        "section": "Ablation Analysis",
                        "n": "5.4",
                        "start": 100,
                        "end": 111
                    },
                    {
                        "section": "Semantic Analysis",
                        "n": "5.5",
                        "start": 112,
                        "end": 117
                    },
                    {
                        "section": "Visualization",
                        "n": "5.6",
                        "start": 118,
                        "end": 122
                    },
                    {
                        "section": "Discourse Marker Applications",
                        "n": "6.1",
                        "start": 123,
                        "end": 133
                    },
                    {
                        "section": "Natural Language Inference",
                        "n": "6.2",
                        "start": 134,
                        "end": 145
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 146,
                        "end": 148
                    },
                    {
                        "section": "Acknowledgements",
                        "n": "8",
                        "start": 149,
                        "end": 152
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1023-Table1-1.png",
                        "caption": "Table 1: Three examples in SNLI dataset.",
                        "page": 0,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 529.4399999999999,
                            "y1": 223.2,
                            "y2": 407.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1023-Table4-1.png",
                        "caption": "Table 4: Performance on the SNLI dataset and the MultiNLI dataset. In the top part, we show sentence encoding-based models; In the medium part, we present the performance of integrated neural network models; In the bottom part, we show the results of ensemble models.",
                        "page": 5,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 520.3199999999999,
                            "y1": 68.64,
                            "y2": 360.0
                        }
                    },
                    {
                        "filename": "../figure/image/1023-Table5-1.png",
                        "caption": "Table 5: Ablations on the SNLI development dataset.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 62.4,
                            "y2": 197.28
                        }
                    },
                    {
                        "filename": "../figure/image/1023-Figure2-1.png",
                        "caption": "Figure 2: Performance when the sentence encoder is pretrained on different discourse markers sets. “NONE” means the model doesn’t use any discourse markers; “ALL” means the model use all the discourse markers.",
                        "page": 6,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 544.3199999999999,
                            "y1": 61.44,
                            "y2": 240.0
                        }
                    },
                    {
                        "filename": "../figure/image/1023-Figure1-1.png",
                        "caption": "Figure 1: Overview of our Discource Marker Augmented Network, comprising the part of Discourse Marker Prediction (upper) for pre-training and Natural Language Inferance (bottom) to which the learned knowledge will be transferred.",
                        "page": 2,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 524.16,
                            "y1": 66.72,
                            "y2": 239.51999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1023-Figure3-1.png",
                        "caption": "Figure 3: Comparison of the visualized similarity relations.",
                        "page": 7,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 278.4,
                            "y1": 68.16,
                            "y2": 310.08
                        }
                    },
                    {
                        "filename": "../figure/image/1023-Table2-1.png",
                        "caption": "Table 2: Statistics of the labels of SNLI and MuliNLI. Total means the number of examples whose number of annotators is in the left column. Correct means the number of examples whose number of correct labels from the annotators is in the left column.",
                        "page": 3,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 536.16,
                            "y1": 62.879999999999995,
                            "y2": 159.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1023-Table3-1.png",
                        "caption": "Table 3: Statistics of discouse markers in our dataset from BookCorpus.",
                        "page": 4,
                        "bbox": {
                            "x1": 325.92,
                            "x2": 505.44,
                            "y1": 62.4,
                            "y2": 197.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-20"
        },
        {
            "slides": {
                "0": {
                    "title": "Executing Context Dependent Instructions",
                    "text": [
                        "Task: map a sequence of instructions to actions",
                        "Modeling Context Learning from"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Executing a Sequence of Instructions",
                    "text": [
                        "Empty out the leftmost beaker of purple chemical",
                        "Then, add the contents of the first beaker to the second",
                        "Then, drain 1 unit from it",
                        "Same for 1 more unit"
                    ],
                    "page_nums": [
                        2,
                        3,
                        4,
                        5,
                        6,
                        7,
                        8,
                        9
                    ],
                    "images": []
                },
                "2": {
                    "title": "Problem Setup",
                    "text": [
                        "Task: follow sequence of instructions",
                        "Learning from instructions and corresponding world states",
                        "Empty out the leftmost beaker of purple chemical",
                        "Then, add the contents of the first beaker to the second",
                        "Then, drain 1 unit from it",
                        "Same for 1 more unit"
                    ],
                    "page_nums": [
                        10,
                        11,
                        12,
                        13,
                        14,
                        15
                    ],
                    "images": []
                },
                "4": {
                    "title": "Today",
                    "text": [
                        "1. Attention-based model for generating sequences of system actions that modify the environment",
                        "2. Exploration-based learning procedure that avoids biases learned early in training"
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "5": {
                    "title": "System Actions",
                    "text": [
                        "Each beaker is a stack",
                        "Actions are pop and push",
                        "pop pop pop push brown; push brown; push brown;"
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": []
                },
                "6": {
                    "title": "Meaning Representation",
                    "text": [
                        "push brown; push brown; push brown;"
                    ],
                    "page_nums": [
                        19,
                        20
                    ],
                    "images": []
                },
                "9": {
                    "title": "Reward Function",
                    "text": [
                        "Source state s s0 Target state",
                        "if if a stops the sequence and a stops the sequence and s0 s0 is the goal state is not the goal state",
                        "is closer to the goal state than is closer to the goal state than s0"
                    ],
                    "page_nums": [
                        39,
                        40,
                        41
                    ],
                    "images": []
                },
                "11": {
                    "title": "Learned Biases",
                    "text": [
                        "Early during learning, model learns it can get positive reward by predicting the pop actions",
                        "Less likely to get positive reward with push action",
                        "Becomes biased against push - during later exploration, push is never sampled!",
                        "Compounding effect: never learns to generate push actions"
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": []
                },
                "12": {
                    "title": "Single step Reward Observation",
                    "text": [
                        "Our approach: observe reward of all actions by looking one step ahead during exploration",
                        "Observe reward for actions like push"
                    ],
                    "page_nums": [
                        50
                    ],
                    "images": []
                },
                "14": {
                    "title": "Simple Exploration",
                    "text": [
                        "Only observe states along sampled trajectory",
                        "Observe sampled states and single-step ahead"
                    ],
                    "page_nums": [
                        52,
                        53,
                        54,
                        55,
                        56,
                        57,
                        58
                    ],
                    "images": []
                },
                "15": {
                    "title": "Single step Observation",
                    "text": [
                        "Add the third beaker to the first",
                        "push 1 orange push 1 yellow"
                    ],
                    "page_nums": [
                        59,
                        60,
                        61,
                        62,
                        63,
                        64,
                        65,
                        66,
                        67,
                        68,
                        69,
                        70
                    ],
                    "images": []
                },
                "17": {
                    "title": "Alchemy",
                    "text": [
                        "pop pop pop push brown; push brown; push brown;"
                    ],
                    "page_nums": [
                        72
                    ],
                    "images": []
                },
                "18": {
                    "title": "Scene",
                    "text": [
                        "T he person with a red shirt and a blue hat moves t o the right end",
                        "remove_person remove_hat add_person red add_hat blue"
                    ],
                    "page_nums": [
                        73
                    ],
                    "images": []
                },
                "19": {
                    "title": "Tangrams",
                    "text": [
                        "Swap the third and fourth figures",
                        "remove 4 insert 3 boat"
                    ],
                    "page_nums": [
                        74
                    ],
                    "images": []
                },
                "22": {
                    "title": "Ablations",
                    "text": [
                        "Without World State Context",
                        "Need access to previous instructions",
                        "Need access to world state"
                    ],
                    "page_nums": [
                        78
                    ],
                    "images": []
                }
            },
            "paper_title": "Situated Mapping of Sequential Instructions to Actions with Single-step Reward Observation",
            "paper_id": "1032",
            "paper": {
                "title": "Situated Mapping of Sequential Instructions to Actions with Single-step Reward Observation",
                "abstract": "We propose a learning approach for mapping context-dependent sequential instructions to actions. We address the problem of discourse and state dependencies with an attention-based model that considers both the history of the interaction and the state of the world. To train from start and goal states without access to demonstrations, we propose SESTRA, a learning algorithm that takes advantage of singlestep reward observations and immediate expected reward maximization. We evaluate on the SCONE domains, and show absolute accuracy improvements of 9.8%-25.3% across the domains over approaches that use high-level logical representations.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction An agent executing a sequence of instructions must address multiple challenges, including grounding the language to its observed environment, reasoning about discourse dependencies, and generating actions to complete high-level goals."
                    },
                    {
                        "id": 1,
                        "string": "For example, consider the environment and instructions in Figure 1 , in which a user describes moving chemicals between beakers and mixing chemicals together."
                    },
                    {
                        "id": 2,
                        "string": "To execute the second instruction, the agent needs to resolve sixth beaker and last one to objects in the environment."
                    },
                    {
                        "id": 3,
                        "string": "The third instruction requires resolving it to the rightmost beaker mentioned in the second instruction, and reasoning about the set of actions required to mix the colors in the beaker to brown."
                    },
                    {
                        "id": 4,
                        "string": "In this paper, we describe a model and learning approach to map sequences of instructions to actions."
                    },
                    {
                        "id": 5,
                        "string": "Our model considers previous utterances and the world state to select actions, learns to combine simple actions to achieve complex goals, and can be trained using goal states without access to demonstrations."
                    },
                    {
                        "id": 6,
                        "string": "[Figure 1: The ALCHEMY domain (Long et al., 2016), including a start state (top), a sequence of instructions, and a goal state (bottom).]"
                    },
                    {
                        "id": 7,
                        "string": "[Figure 1, cont.: Each instruction is annotated with a sequence of actions from the set of actions we define for ALCHEMY.]"
                    },
                    {
                        "id": 8,
                        "string": "The majority of work on executing sequences of instructions focuses on mapping instructions to high-level formal representations, which are then evaluated to generate actions (e.g., Chen and Mooney, 2011; Long et al., 2016) ."
                    },
                    {
                        "id": 9,
                        "string": "For example, the third instruction in Figure 1 will be mapped to mix(prev_arg1), indicating that the mix action should be applied to first argument of the previous action (Long et al., 2016; Guu et al., 2017) ."
                    },
                    {
                        "id": 10,
                        "string": "In contrast, we focus on directly generating the sequence of actions."
                    },
                    {
                        "id": 11,
                        "string": "This requires resolving references without explicitly modeling them, and learning the sequences of actions required to complete high-level actions; for example, that mixing requires removing everything in the beaker and replacing with the same number of brown items."
                    },
                    {
                        "id": 12,
                        "string": "A key challenge in executing sequences of instructions is considering contextual cues from both the history of the interaction and the state of the world."
                    },
                    {
                        "id": 13,
                        "string": "Instructions often refer to previously mentioned objects (e.g., it in Figure 1 ) or actions (e.g., do it again)."
                    },
                    {
                        "id": 14,
                        "string": "The world state provides the set of objects the instruction may refer to, and implicitly determines the available actions."
                    },
                    {
                        "id": 15,
                        "string": "For example, liquid can not be removed from an empty beaker."
                    },
                    {
                        "id": 16,
                        "string": "Both types of contexts continuously change during an interaction."
                    },
                    {
                        "id": 17,
                        "string": "As new instructions are given, the instruction history expands, and as the agent acts the world state changes."
                    },
                    {
                        "id": 18,
                        "string": "We propose an attentionbased model that takes as input the current instruction, previous instructions, the initial world state, and the current state."
                    },
                    {
                        "id": 19,
                        "string": "At each step, the model computes attention encodings of the different inputs, and predicts the next action to execute."
                    },
                    {
                        "id": 20,
                        "string": "We train the model given instructions paired with start and goal states without access to the correct sequence of actions."
                    },
                    {
                        "id": 21,
                        "string": "During training, the agent learns from rewards received through exploring the environment with the learned policy by mapping instructions to sequences of actions."
                    },
                    {
                        "id": 22,
                        "string": "In practice, the agent learns to execute instructions gradually, slowly correctly predicting prefixes of the correct sequences of increasing length as learning progress."
                    },
                    {
                        "id": 23,
                        "string": "A key challenge is learning to correctly select actions that are only required later in execution sequences."
                    },
                    {
                        "id": 24,
                        "string": "Early during learning, these actions receive negative updates, and the agent learns to assign them low probabilities."
                    },
                    {
                        "id": 25,
                        "string": "This results in an exploration problem in later stages, where actions that are only required later are not sampled during exploration."
                    },
                    {
                        "id": 26,
                        "string": "For example, in the ALCHEMY domain shown in Figure 1 , the agent behavior early during execution of instructions can be accomplished by only using POP actions."
                    },
                    {
                        "id": 27,
                        "string": "As a result, the agent quickly learns a strong bias against PUSH actions, which in practice prevents the policy from exploring them again."
                    },
                    {
                        "id": 28,
                        "string": "We address this with a learning algorithm that observes the reward for all possible actions for each visited state, and maximizes the immediate expected reward."
                    },
                    {
                        "id": 29,
                        "string": "We evaluate our approach on SCONE (Long et al., 2016) , which includes three domains, and is used to study recovering predicate logic meaning representations for sequential instructions."
                    },
                    {
                        "id": 30,
                        "string": "We study the problem of generating a sequence of low-level actions, and re-define the set of actions for each domain."
                    },
                    {
                        "id": 31,
                        "string": "For example, we treat the beakers in the ALCHEMY domain as stacks and use only POP and PUSH actions."
                    },
                    {
                        "id": 32,
                        "string": "Our approach robustly learns to execute sequential instructions with up to 89.1% task-completion accuracy for single instruction, and 62.7% for complete sequences."
                    },
                    {
                        "id": 33,
                        "string": "Our code is available at https://github.com/clic-lab/scone."
                    },
                    {
                        "id": 34,
                        "string": "Technical Overview Task and Notation Let S be the set of all possible world states, X be the set of all natural language instructions, and A be the set of all actions."
                    },
                    {
                        "id": 35,
                        "string": "An instructionx ∈ X of length |x| is a sequence of tokens x 1 , ...x |x| ."
                    },
                    {
                        "id": 36,
                        "string": "Executing an action modifies the world state following a transition function T : S × A → S. For example, the ALCHEMY domain includes seven beakers that contain colored liquids."
                    },
                    {
                        "id": 37,
                        "string": "The world state defines the content of each beaker."
                    },
                    {
                        "id": 38,
                        "string": "We treat each beaker as a stack."
                    },
                    {
                        "id": 39,
                        "string": "The actions are POP N and PUSH N C, where 1 ≤ N ≤ 7 is the beaker number and C is one of six colors."
                    },
                    {
                        "id": 40,
                        "string": "There are a total of 50 actions, including the STOP action."
                    },
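                    {
                        "id": "40-editor-note",
                        "string": "[Editor's sketch, not from the paper: the stack-based ALCHEMY transition function T described above, with the world state represented as a list of seven color stacks; the concrete representation is an assumption. def transition(state, action):\n    # state: list of 7 stacks (lists of color strings); returns a new state\n    s = [list(b) for b in state]\n    if action[0] == 'POP':\n        _, n = action\n        if s[n - 1]:\n            s[n - 1].pop()\n    elif action[0] == 'PUSH':\n        _, n, c = action\n        s[n - 1].append(c)\n    # STOP (and any other action) leaves the state unchanged\n    return s]"
                    },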
                    {
                        "id": 41,
                        "string": "Section 6 describes the domains in detail."
                    },
                    {
                        "id": 42,
                        "string": "Given a start state s_1 and a sequence of instructions x̄_1, ..., x̄_n, our goal is to generate the sequence of actions specified by the instructions starting from s_1."
                    },
                    {
                        "id": 46,
                        "string": "We treat the execution of a sequence of instructions as executing each instruction in turn."
                    },
                    {
                        "id": 47,
                        "string": "The execution ē of an instruction x̄_i, starting at a state s_1 and given the history of the instruction sequence x̄_1, ..., x̄_{i-1}, is a sequence of state-action pairs ē = ⟨(s_1, a_1), ..., (s_m, a_m)⟩, where a_k ∈ A and s_{k+1} = T(s_k, a_k)."
                    },
                    {
                        "id": 51,
                        "string": "The final action a m is the special action STOP, which indicates the execution has terminated."
                    },
                    {
                        "id": 52,
                        "string": "The final state is then s m , as T (s k , STOP) = s k ."
                    },
                    {
                        "id": 53,
                        "string": "Executing a sequence of instructions in order generates a sequence ē 1 , ...,ē n , whereē i is the execution of instructionx i ."
                    },
                    {
                        "id": 54,
                        "string": "When referring to states and actions in an indexed executionē i , the k-th state and action are s i,k and a i,k ."
                    },
                    {
                        "id": 55,
                        "string": "We execute instructions one after the other:ē 1 starts at the interaction initial state s 1 and s i+1,1 = s i,|ē i | , where s i+1,1 is the start state ofē i+1 and s i,|ē i | is the final state ofē i ."
                    },
                    {
                        "id": 56,
                        "string": "Model We model the agent with a neural network policy (Section 4)."
                    },
                    {
                        "id": 57,
                        "string": "At step k of executing the i-th instruction, the model input is the current instruction x̄_i, the previous instructions x̄_1, ..., x̄_{i-1}, the world state s_1 at the beginning of executing x̄_i, and the current state s_k."
                    },
                    {
                        "id": 61,
                        "string": "The model predicts the next action a k to execute."
                    },
                    {
                        "id": 62,
                        "string": "If a k = STOP, we switch to the next instruction, or if at the end of the instruction sequence, terminate."
                    },
                    {
                        "id": 63,
                        "string": "Otherwise, we update the state to s k+1 = T (s k , a k )."
                    },
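                    {
                        "id": "63-editor-note",
                        "string": "[Editor's sketch, not from the paper: the greedy execution loop implied by the preceding sentences; the policy interface is an assumption. def execute(policy, transition, instructions, s):\n    executions = []\n    for i, x in enumerate(instructions):\n        s1, e = s, []\n        while True:\n            a = policy(x, instructions[:i], s1, s)  # current instruction, history, initial and current states\n            e.append((s, a))\n            if a == 'STOP':\n                break  # switch to the next instruction, or terminate at the end of the sequence\n            s = transition(s, a)\n        executions.append(e)\n    return executions]"
                    },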
                    {
                        "id": 64,
                        "string": "The model uses attention to process the different inputs and a recurrent neural network (RNN) decoder to generate actions (Bahdanau et al., 2015) ."
                    },
                    {
                        "id": 65,
                        "string": "Learning We assume access to a set of N instruction sequences, where each instruction in each sequence is paired with its start and goal states."
                    },
                    {
                        "id": 66,
                        "string": "During training, we create an example for each instruction."
                    },
                    {
                        "id": 67,
                        "string": "Formally, the training set is {(x̄_i^(j), s_{i,1}^(j), ⟨x̄_1^(j), ..., x̄_{i-1}^(j)⟩, g_i^(j))} for j = 1, ..., N and i = 1, ..., n^(j), where x̄_i^(j) is an instruction, s_{i,1}^(j) is a start state, ⟨x̄_1^(j), ..., x̄_{i-1}^(j)⟩ is the instruction history, g_i^(j) is the goal state, and n^(j) is the length of the j-th instruction sequence."
                    },
                    {
                        "id": 74,
                        "string": "This training data contains no evidence about the actions and intermediate states required to execute each instruction."
                    },
                    {
                        "id": 75,
                        "string": "1 We use a learning method that maximizes the expected immediate reward for a given state (Section 5)."
                    },
                    {
                        "id": 76,
                        "string": "The reward accounts for task-completion and distance to the goal via potential-based reward shaping."
                    },
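                    {
                        "id": "76-editor-note",
                        "string": "[Editor's sketch, not from the paper's released code: the single-step reward observation and immediate expected reward maximization described above, where the reward of every action is observed with one-step lookahead; all names are assumptions. import torch\ndef expected_reward_loss(logits, state, actions, transition, reward):\n    # observe the reward of every action via one-step lookahead;\n    # reward combines task completion with potential-based shaping, e.g. phi(s2) - phi(s)\n    r = torch.tensor([reward(state, a, transition(state, a)) for a in actions])\n    probs = torch.softmax(logits, dim=-1)\n    # maximize the immediate expected reward over all observed actions\n    return -(probs * r).sum()]"
                    },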
                    {
                        "id": 77,
                        "string": "Evaluation: We evaluate exact task completion for sequences of instructions on a test set {(s_1^(j), ⟨x̄_1^(j), ..., x̄_{n_j}^(j)⟩, g^(j))} for j = 1, ..., N, where g^(j) is the oracle goal state of executing the instructions x̄_1^(j), ..., x̄_{n_j}^(j) in order starting from s_1^(j)."
                    },
                    {
                        "id": 84,
                        "string": "We also evaluate single-instruction task completion using per-instruction annotated start and goal states."
                    },
                    {
                        "id": 85,
                        "string": "Related Work Executing instructions has been studied using the SAIL corpus (MacMahon et al., 2006) with focus on navigation using high-level logical representations (Chen and Mooney, 2011; Chen, 2012; Artzi et al., 2014) and lowlevel actions (Mei et al., 2016) ."
                    },
                    {
                        "id": 86,
                        "string": "While SAIL includes sequences of instructions, the data demonstrates limited discourse phenomena, and instructions are often processed in isolation."
                    },
                    {
                        "id": 87,
                        "string": "Approaches that consider as input the entire sequence focused on segmentation (Andreas and Klein, 2015) ."
                    },
                    {
                        "id": 88,
                        "string": "Recently, other navigation tasks were proposed with focus on single instructions (Anderson et al., 2018; Janner et al., 2018) ."
                    },
                    {
                        "id": 89,
                        "string": "We focus on sequences of environment manipulation instructions and modeling contextual cues from both the changing environment and instruction history."
                    },
                    {
                        "id": 90,
                        "string": "Manipulation using single-sentence instructions has been stud-ied using the Blocks domain (Bisk et al., 2016 (Bisk et al., , 2018 Misra et al., 2017; Tan and Bansal, 2018) ."
                    },
                    {
                        "id": 91,
                        "string": "Our work is related to the work of Branavan et al."
                    },
                    {
                        "id": 92,
                        "string": "(2009) and Vogel and Jurafsky (2010) ."
                    },
                    {
                        "id": 93,
                        "string": "While both study executing sequences of instructions, similar to SAIL, the data includes limited discourse dependencies."
                    },
                    {
                        "id": 94,
                        "string": "In addition, both learn with rewards computed from surface-form similarity between text in the environment and the instruction."
                    },
                    {
                        "id": 95,
                        "string": "We do not rely on such similarities, but instead use a state distance metric."
                    },
                    {
                        "id": 96,
                        "string": "Language understanding in interactive scenarios that include multiple turns has been studied with focus on dialogue for querying database systems using the ATIS corpus (Hemphill et al., 1990; Dahl et al., 1994) ."
                    },
                    {
                        "id": 97,
                        "string": "Tür et al."
                    },
                    {
                        "id": 98,
                        "string": "(2010) surveys work on ATIS."
                    },
                    {
                        "id": 99,
                        "string": "Miller et al."
                    },
                    {
                        "id": 100,
                        "string": "(1996) , Collins (2009), and Suhr et al."
                    },
                    {
                        "id": 101,
                        "string": "(2018) modeled context dependence in ATIS for generating formal representations."
                    },
                    {
                        "id": 102,
                        "string": "In contrast, we focus on environments that change during execution and directly generating environment actions, a scenario that is more related to robotic agents than database query."
                    },
                    {
                        "id": 103,
                        "string": "The SCONE corpus (Long et al., 2016) was designed to reflect a broad set of discourse context-dependence phenomena."
                    },
                    {
                        "id": 104,
                        "string": "It was studied extensively using logical meaning representations (Long et al., 2016; Guu et al., 2017; Fried et al., 2018) ."
                    },
                    {
                        "id": 105,
                        "string": "In contrast, we are interested in directly generating actions that modify the environment."
                    },
                    {
                        "id": 106,
                        "string": "This requires generating lower-level actions and learning procedures that are otherwise hardcoded in the logic (e.g., mixing action in Figure 1) ."
                    },
                    {
                        "id": 107,
                        "string": "Except for Fried et al."
                    },
                    {
                        "id": 108,
                        "string": "(2018) , previous work on SCONE assumes access only to the initial and final states during training."
                    },
                    {
                        "id": 109,
                        "string": "This form of supervision does not require operating the agent manually to acquire the correct sequence of actions, a difficult task in robotic agents with complex control."
                    },
                    {
                        "id": 110,
                        "string": "Goal state supervision has been studied for instructional language (e.g., Branavan et al., 2009; Bisk et al., 2016) , and more extensively in question answering when learning with answer annotations only (e.g., Clarke et al., 2010; Liang et al., 2011; Kwiatkowski et al., 2013; Berant et al., 2013; Liang, 2014, 2015; ."
                    },
                    {
                        "id": 111,
                        "string": "Model: We map sequences of instructions x̄_1, ..., x̄_n to actions by executing the instructions in order."
                    },
                    {
                        "id": 114,
                        "string": "[Figure 2: Illustration of the model architecture while generating the third action a_3 in the third utterance x̄_3 from Figure 1.]"
                    },
                    {
                        "id": 115,
                        "string": "Context vectors computed using attention are highlighted in blue."
                    },
                    {
                        "id": 116,
                        "string": "The model takes as input vector encodings from the current and previous instructionsx 1 ,x 2 , andx 3 , the initial state s 1 , the current state s 3 , and the previous action a 2 ."
                    },
                    {
                        "id": 117,
                        "string": "Instruction encodings are computed with a bidirectional RNN."
                    },
                    {
                        "id": 118,
                        "string": "We attend over the previous and current instructions and the initial and current states."
                    },
                    {
                        "id": 119,
                        "string": "We use an MLP to select the next action."
                    },
                    {
                        "id": 121,
                        "string": "The model generates an execution ē = ⟨(s_1, a_1), ..., (s_{m_i}, a_{m_i})⟩ for each instruction x̄_i."
                    },
                    {
                        "id": 125,
                        "string": "The agent context, the information available to the agent at step k, iss k = (x i , x 1 , ."
                    },
                    {
                        "id": 126,
                        "string": "."
                    },
                    {
                        "id": 127,
                        "string": "."
                    },
                    {
                        "id": 128,
                        "string": ",x i−1 , s k ,ē[: k]), whereē[: k] is the execution up until but not including step k. In contrast to the world state, the agent context also includes instructions and the execution so far."
                    },
                    {
                        "id": 129,
                        "string": "The agent policy π θ (s k , a) is modeled as a probabilistic neural network parametrized by θ, wheres k is the agent context at step k and a is an action."
                    },
                    {
                        "id": 130,
                        "string": "To generate executions, we generate one action at a time, execute the action, and observe the new world state."
                    },
                    {
                        "id": 131,
                        "string": "In step k of executing the i-th instruction, the network inputs are the current utterancex i , the previous instructions x 1 , ."
                    },
                    {
                        "id": 132,
                        "string": "."
                    },
                    {
                        "id": 133,
                        "string": "."
                    },
                    {
                        "id": 134,
                        "string": ",x i−1 , the initial state s 1 at beginning of executingx i , and the current state s k ."
                    },
                    {
                        "id": 135,
                        "string": "When executing a sequence of instructions, the initial state s 1 is either the state at the beginning of executing the sequence or the final state of the execution of the previous instruction."
                    },
                    {
                        "id": 136,
                        "string": "Figure 2 illustrates our architecture."
                    },
                    {
                        "id": 137,
                        "string": "W f j Z U N / x C J 0 = \" > A A A C Q X i c b V D L T h t B E J w l 4 b U 8 Y o c j l 1 E M i J O 1 i y I B N 0 v h w J F I G C x 5 V 9 b s b B t G z G M 1 0 w t Y q / 0 B v i Z X c s 5 P 8 A u c o l x z Y W y M l N i U 1 F K p q n t 6 u r J C C o d R 9 B Q s f P i 4 u L S 8 s h q u r W 9 s f m o 0 P 1 8 4 U 1 o O X W 6 k s b 2 M O Z B C Q x c F S u g V F p j K J F x m N 9 / G / u U t W C e M P s d R A a l i V 1 o M B W f o p U F j Z 4 8 m C P d Y n Q A 3 O d i a J g l 9 0 5 w W R Q F Y D x q t q B 1 N Q O d J P C U t M s X Z o B k s J b n h p Q K N X D L n + n F U Y F o x i 4 J L q M O k d F A w f s O u o O + p Z g p c W k 3 O q b d l d W 1 9 Y 1 K d f P K p L n m 0 O K p T P V N x A x I k U A L B U q 4 y T Q w F U m 4 j u 7 O B v 7 1 P W g j 0 u Q S + x m E i v U S 0 R W c o Z U 6 l e 0 A 4 R G L s 1 x r S J A a Z A i 0 p K Z z 2 K n U v L o 3 B J 0 m / p j U y B j N T t V Z C O K U 5 8 o O 4 p I Z 0 / a 9 D M O C a R R c Q u k G u Y G M 8 T v W g 7 a l C V N g w m L 4 h 5 L u W S W m 3 V T b Z w 8 Z q r 8 7 C q a M 6 a v I V i q G t 2 b S G 4 j / e r E Z D J z Y j t 2 T s B B J l i M k f L S 8 m 0 u K K R 2 E R G O h g a P s W 8 K 4 F v Z + y m + Z Z h x t l K 4 b 2 K z g g a d K s S Q u A l 6 2 / b A o A q 1 o z S 9 L 1 y b n T + Y 0 T V o H 9 d O 6 d 3 F U b d l d W 1 9 Y 1 K d f P K p L n m 0 O K p T P V N x A x I k U A L B U q 4 y T Q w F U m 4 j u 7 O B v 7 1 P W g j 0 u Q S + x m E i v U S 0 R W c o Z U 6 l e 0 A 4 R G L s 1 x r S J A a Z A i 0 p K Z z 2 K n U v L o 3 B J 0 m / p j U y B j N T t V Z C O K U 5 8 o O 4 p I Z 0 / a 9 D M O C a R R c Q u k G u Y G M 8 T v W g 7 a l C V N g w m L 4 h 5 L u W S W m 3 V T b Z w 8 Z q r 8 7 C q a M 6 a v I V i q G t 2 b S G 4 j / e r E Z D J z Y j t 2 T s B B J l i M k f L S 8 m 0 u K K R 2 E R G O h g a P s W 8 K 4 F v Z + y m + Z Z h x t l K 4 b 2 K z g g a d K s S Q u A l 6 2 / b A o A q 1 o z S 9 L 1 y b n T + Y 0 T V o H 9 d O 6 d 3 F U b d l d W 1 9 Y 1 K d f P K p L n m 0 O K p T P V N x A x I k U A L B U q 4 y T Q w F U m 4 j u 7 O B v 7 1 P W g j 0 u Q S + x m E i v U S 0 R W c o Z U 6 l e 0 A 4 R G L s 1 x r S J A a Z A i 0 p K Z z 2 K n U v L o 3 B J 0 m / p j U y B j N T t V Z C O K U 5 8 o O 4 p I Z 0 / a 9 D M O C a R R c Q u k G u Y G M 8 T v W g 7 a l C V N g w m L 4 h 5 L u W S W m 3 V T b Z w 8 Z q r 8 7 C q a M 6 a v I V i q G t 2 b S G 4 j / e r E Z D J z Y j t 2 T s B B J l i M k f L S 8 m 0 u K K R 2 E R G O h g a P s W 8 K 4 F v Z + y m + Z Z h x t l K 4 b 2 K z g g a d K s S Q u A l 6 2 / b A o A q 1 o z S 9 L 1 y b n T + Y 0 T V o H 9 d O 6 d 3 F U a 3 j j C J f I D t k l + 8 Q n x 6 R B z k m T t A g n T + S Z v J B X 5 8 3 5 c D 6 d r 1 H p j D P u 2 S J / 4 H z / A G + O q t s = < / l a t e x i t > a2 < l a t e x i t s h a 1 _ b a s e 6 4 = \" S U b k Z W m j 3 P o R h h s H Q X 1 z V F H C x 3 E = \" > A A A C H n i c b V D L S s N A F J 3 4 q D W + W l 2 6 G S y C q 5 I U Q d 0 V 3 L i s a G 2 h C W U y u W 2 H z k z C z E Q p I Z / g V t d + j S t x q 3 / j 9 L H Q 1 g M X D u f c F y d K O d P G 8 7 6 d t f W N z d J W e d v d 2 d 3 b P 6 h U D x 9 0 k i k K b Z r w R H U j o o E z C W 3 D D I d u q o C I i E M n G l 9 P / c 4 j K M 0 S e W 8 m K Y S C D C U b M E q M l e 5 I v 9 G v 1 L y 6 N w N e J f 6 C 1 N A C r X 7 V K Q V x Q j M B 0 l B O t O 7 5 X m r C n C j D K I f C D T I N K a F j M o S e p Z I I 0 G E + + 7 X A p 1 a J 8 S B R t q T B M / X 3 R E 6 E 1 h M R 2 U 5 B z E g v e 1 P x X y / W 0 4 V L 1 8 3 g M s y Z T D M D k s 6 P D z K O T Y K n Y e C Y K a C 
G T y w h V D H 7 P 6 Y j o g g 1 N j L X D R R I e K K J E E T G e U C L Q o T 4 = < / l a t e x i t > < l a t e x i t s h a _ b a s e 6 4 = \" S U b k Z W m j 3 P o R h h s H Q X z V F H C x 3 E = \" > A A A C H n i c b V D L S s N A F J 3 4 q D W + W l 2 6 G S y C q 5 I U Q d 0 V 3 L i s a G 2 h C W U y u W 2 H z k z C z E Q p I Z / g V t d + j S t x q 3 / j 9 L H Q 1 g M X D u f c F y d K O d P G 8 7 6 d t f W N z d J W e d v d 2 d 3 b P 6 h U D x 9 0 k i k K b Z r w R H U j o o E z C W 3 D D I d u q o C I i E M n G l 9 P / c 4 j K M 0 S e W 8 m K Y S C D C U b M E q M l e 5 I v 9 G v 1 L y 6 N w N e J f 6 C 1 N A C r X 7 V K Q V x Q j M B 0 l B O t O 7 5 X m r C n C j D K I f C D T I N K a F j M o S e p Z I I 0 G E + + 7 X A p 1 a J 8 S B R t q T B M / X 3 R E 6 E 1 h M R 2 U 5 B z E g v e 1 P x X y / W 0 4 V L 1 8 3 g M s y Z T D M D k s 6 P D z K O T Y K n Y e C Y K a C G T y w h V D H 7 P 6 Y j o g g 1 N j L X D R R I e K K J E E T G e U C L Q o T 4 = < / l a t e x i t > < l a t e x i t s h a _ b a s e 6 4 = \" S U b k Z W m j 3 P o R h h s H Q X z V F H C x 3 E = \" > A A A C H n i c b V D L S s N A F J 3 4 q D W + W l 2 6 G S y C q 5 I U Q d 0 V 3 L i s a G 2 h C W U y u W 2 H z k z C z E Q p I Z / g V t d + j S t x q 3 / j 9 L H Q 1 g M X D u f c F y d K O d P G 8 7 6 d t f W N z d J W e d v d 2 d 3 b P 6 h U D x 9 0 k i k K b Z r w R H U j o o E z C W 3 D D I d u q o C I i E M n G l 9 P / c 4 j K M 0 S e W 8 m K Y S C D C U b M E q M l e 5 I v 9 G v 1 L y 6 N w N e J f 6 C 1 N A C r X 7 V K Q V x Q j M B 0 l B O t O 7 5 X m r C n C j D K I f C D T I N K a F j M o S e p Z I I 0 G E + + 7 X A p 1 a J 8 S B R t q T B M / X 3 R E 6 E 1 h M R 2 U 5 B z E g v e 1 P x X y / W 0 4 V L 1 8 3 g M s y Z T D M D k s 6 P D z K O T Y K n Y e C Y K a C G T y w h V D H 7 P 6 Y j o g g 1 N j L X D R R I e K K J E E T G e U C L U n d Y n l j j F o d c = \" > A A A C K X i c b V A 9 T 8 M w F H T K d / g q M L J Y V E h M V Q J I w F a J h b F I l F Z q Q u U 4 L 2 D V d i L b A Z U o / 4 M V Z n 4 N E 7 D y R 3 D a D l A 4 y d L p 7 j 2 / 0 0 U Z Z 9 p 4 3 o d T m 5 t f W F x a X n F X 1 9 Y 3 N u t b 2 9 c 6 z R W F D k 1 5 q n o R 0 c C Z h I 5 h h k M v U 0 B E x K E b D c 8 r v 3 s P S r N U X p l R B q E g t 5 I l j B J j p Z t A E H M X J c V j e U M H R 4 N 6 w 2 t 6 Y + C / x J + S B p q i P d h y F o M 4 p b k A a S g n W v d 9 L z N h Q Z R h l E P p B r m G j N A h u Y W + p Z I I 0 G E x j l 3 i f a v E O E m V f d L g s f p z o y B C 6 5 G I 7 G Q V U 8 9 6 l f i v F + v q w 5 n r J j k N C y a z 3 I C k k + N J z r F J c d U L j p k C a v j I E k I V s / k x v S O K U G P b c 9 1 A g Y Q H m g p B Z F w E t O z 7 Y V E E S u C G X U n d Y n l j j F o d c = \" > A A A C K X i c b V A 9 T 8 M w F H T K d / g q M L J Y V E h M V Q J I w F a J h b F I l F Z q Q u U 4 L 2 D V d i L b A Z U o / 4 M V Z n 4 N E 7 D y R 3 D a D l A 4 y d L p 7 j 2 / 0 0 U Z Z 9 p 4 3 o d T m 5 t f W F x a X n F X 1 9 Y 3 N u t b 2 9 c 6 z R W F D k 1 5 q n o R 0 c C Z h I 5 h h k M v U 0 B E x K E b D c 8 r v 3 s P S r N U X p l R B q E g t 5 I l j B J j p Z t A E H M X J c V j e U M H R 4 N 6 w 2 t 6 Y + C / x J + S B p q i P d h y F o M 4 p b k A a S g n W v d 9 L z N h Q Z R h l E P p B r m G j N A h u Y W + p Z I I 0 G E x j l 3 i f a v E O E m V f d L g s f p z o y B C 6 5 G I 7 G Q V U 8 9 6 l f i v F + v q w 5 n r J j k N C y a z 3 I C k k + N J z r F J c d U L j p k C a v j I E k I V s / k x v S O K U G P b c 9 1 A g Y Q H m g p B Z F w E t O z 7 Y V E E S u C G X U n d Y n l j j F o d c = \" > A A 
A C K X i c b V A 9 T 8 M w F H T K d / g q M L J Y V E h M V Q J I w F a J h b F I l F Z q Q u U 4 L 2 D V d i L b A Z U o / 4 M V Z n 4 N E 7 D y R 3 D a D l A 4 y d L p 7 j 2 / 0 0 U Z Z 9 p 4 3 o d T m 5 t f W F x a X n F X 1 9 Y 3 N u t b 2 9 c 6 z R W F D k 1 5 q n o R 0 c C Z h I 5 h h k M v U 0 B E x K E b D c 8 r v 3 s P S r N U X p l R B q E g t 5 I l j B J j p Z t A E H M X J c V j e U M H R 4 N 6 w 2 t 6 Y + C / x J + S B p q i P d h y F o M 4 p b k A a S g n W v d 9 L z N h Q Z R h l E P p B r m G j N A h u Y W + p Z I I 0 G E x j l 3 i f a v E O E m V f d L g s f p z o y B C 6 5 G I 7 G Q V U 8 9 6 l f i v F + v q w 5 n r J j k N C y a z 3 I C k k + N J z r F J c d U L j p k C a v j I E k I V s / k x v S O K U G P b c 9 1 A g Y Q H m g p B Z F w E t O z 7 Y V E E S u C G X V w A I A u e t Y 5 K 9 K 0 r 4 A S s = \" > A A A C K X i c b V A 9 T 8 M w F H T 4 J n z D y G J R I T F V C S A B G x I L Y 5 E I r d S E y n F e W g v b i W w H V K L 8 D 1 a Y + T V M w M o f w W k 7 Q O E k a O z Q l E I a M Y z 1 Y m J B s 4 k B I Y Z D p 1 c A R E x h 3 Z 8 d 1 H 7 7 X t Q m m X y 2 g x z i A T p S 5 Y y S o y V b k N B z C B O y 8 f q N u 8 d 9 T Y b X t M b A f 8 l / o Q 0 0 A S t 3 p a z E C Y Z L Q R I Q z n R u u t 7 u Y l K o g y j H C o 3 L D T k h N 6 R P n Q t l U S A j s p R 7 A r v W y X B a a b s k w a P 1 J 8 b J R F a D 0 V s J + u Y e t q r x X + 9 R N c f T l 0 3 6 W l U M p k X B i Q d H 0 8 L j k 2 G 6 1 5 w w h R Q w 4 e W E K q Y z Y / p g C h C j W 3 P d U M F E h 5 o J g S R S R n S q u t H Z R k q g R t + V V w A I A u e t Y 5 K 9 K 0 r 4 A S s = \" > A A A C K X i c b V A 9 T 8 M w F H T 4 J n z D y G J R I T F V C S A B G x I L Y 5 E I r d S E y n F e W g v b i W w H V K L 8 D 1 a Y + T V M w M o f w W k 7 Q O E k a O z Q l E I a M Y z 1 Y m J B s 4 k B I Y Z D p 1 c A R E x h 3 Z 8 d 1 H 7 7 X t Q m m X y 2 g x z i A T p S 5 Y y S o y V b k N B z C B O y 8 f q N u 8 d 9 T Y b X t M b A f 8 l / o Q 0 0 A S t 3 p a z E C Y Z L Q R I Q z n R u u t 7 u Y l K o g y j H C o 3 L D T k h N 6 R P n Q t l U S A j s p R 7 A r v W y X B a a b s k w a P 1 J 8 b J R F a D 0 V s J + u Y e t q r x X + 9 R N c f T l 0 3 6 W l U M p k X B i Q d H 0 8 L j k 2 G 6 1 5 w w h R Q w 4 e W E K q Y z Y / p g C h C j W 3 P d U M F E h 5 o J g S R S R n S q u t H Z R k q g R t + V V w A I A u e t Y 5 K 9 K 0 r 4 A S s = \" > A A A C K X i c b V A 9 T 8 M w F H T 4 J n z D y G J R I T F V C S A B G x I L Y 5 E I r d S E y n F e W g v b i W w H V K L 8 D 1 a Y + T V M w M o f w W k 7 Q O E k a O z Q l E I a M Y z 1 Y m J B s 4 k B I Y Z D p 1 c A R E x h 3 Z 8 d 1 H 7 7 X t Q m m X y 2 g x z i A T p S 5 Y y S o y V b k N B z C B O y 8 f q N u 8 d 9 T Y b X t M b A f 8 l / o Q 0 0 A S t 3 p a z E C Y Z L Q R I Q z n R u u t 7 u Y l K o g y j H C o 3 L D T k h N 6 R P n Q t l U S A j s p R 7 A r v W y X B a a b s k w a P 1 J 8 b J R F a D 0 V s J + u Y e t q r x X + 9 R N c f T l 0 3 6 W l U M p k X B i Q d H 0 8 L j k 2 G 6 1 5 w w h R Q w 4 e W E K q Y z Y / p g C h C j W 3 P d U M F E h 5 o J g S R S R n S q u t H Z R k q g R t + V b m 2 O X + 6 p 7 8 k O G y e N b 2 r 4 8 a 5 N 6 l w C e 2 i P X S A f H S C z t E l a q E A U a T Q E 3 p G L 8 6 r 8 + a 8 O 5 / j 0 R l n s r O D f s H 5 + g Z W B a Z a < / l a t e x i t > z s 1,3 < l a t e x i t s h a 1 _ b a s e 6 4 = \" n 3 E k G 0 a 5 j S i H q V L m z t o d l U t a w 1 s = \" > A A A C L 3 i c b V B N S 8 Q w F E z 9 t n 6 t e v Q S X A Q P s r Q q q L c F L x 4 V X B W 2 d U n T V w 0 m a U l S d Q 3 9 K 1 7 1 7 K / R i 3 j 1 X 5 
i u e 9 D V g c A w 8 1 7 e M E n B m T Z B 8 O a N j U 9 M T k 3 P z P p z 8 w u L S 4 3 l l T O d l 4 p C h + Y 8 V x c J 0 c C Z h I 5 h h s N F o Y C I h M N 5 c n N Y + + e 3 o D T L 5 a n p F x A L c i V Z x i g x T u o 1 V i J B z H W S 2 Y f q U v d s u L V T 9 R r N o B U M g P + S c E i a a I j j 3 r I 3 F a U 5 L Q V I Q z n R u h s G h Y k t U Y Z R D p U f l R o K Q m / I F X Q d l U S A j u 0 g f I U 3 n J L i L F f u S Y M H 6 s 8 N S 4 T W f Z G 4 y T q q H v V q 8 V 8 v 1 f W H I 9 d N t h 9 b J o v S g K T f x 7 O S We generate continuous vector representations for all inputs."
                    },
                    {
                        "id": 138,
                        "string": "Each input is represented as a set of vectors that are then processed with an attention function to generate a single vector representation (Luong et al., 2015) ."
                    },
                    {
                        "id": 139,
                        "string": "We assume access to a domain-specific encoding function ENC(s) that, given a state s, generates a set of vectors S representing the objects in the state."
                    },
                    {
                        "id": 140,
                        "string": "For example, in the ALCHEMY domain, a vector is generated for each beaker using an RNN."
                    },
                    {
                        "id": 141,
                        "string": "Section 6 describes the different domains and their encoding functions."
                    },
                    {
                        "id": 142,
                        "string": "We use a single bidirectional RNN with a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) recurrence to encode the instructions."
                    },
                    {
                        "id": 143,
                        "string": "All instructionsx 1 ,."
                    },
                    {
                        "id": 144,
                        "string": "."
                    },
                    {
                        "id": 145,
                        "string": "."
                    },
                    {
                        "id": 146,
                        "string": ",x i are encoded with a single RNN by concatenating them tox ."
                    },
                    {
                        "id": 147,
                        "string": "We use two delimiter tokens: one separates previous instructions, and the other separates the previous instructions from the current one."
                    },
                    {
                        "id": 148,
                        "string": "The forward LSTM RNN hidden states are computed as: 2 −−→ hj+1 = − −−−− → LSTM E φ I (x j+1 ); − → hj , where φ I is a learned word embedding function and − −−−− → LSTM E is the forward LSTM recurrence function."
                    },
                    {
                        "id": 149,
                        "string": "We use a similar computation to compute the backward hidden states ← − h j ."
                    },
                    {
                        "id": 150,
                        "string": "For each token x j inx , a vector representation h j = − → h j ; ← − h j is computed."
                    },
                    {
                        "id": 151,
                        "string": "We then create two sets of vectors, one for all the vectors of the current instruction and one for the previous instructions: X c = {h j } J+|x i | j=J X p = {h j } j<J j=0 where J is the index inx where the current instructionx i begins."
                    },
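                    {
                        "id": "151-code-note",
                        "string": "As a concrete illustration, a minimal PyTorch sketch of this encoding step follows; the vocabulary size, dimensions, token ids, and the delimiter id are made-up assumptions, not values from the paper."
                    },
                    {
                        "id": "151-code",
                        "string": "import torch\nimport torch.nn as nn\n\n# Hedged sketch: all sizes and token ids below are illustrative assumptions.\nvocab_size, embed_dim, hidden_dim = 100, 32, 64\nphi_I = nn.Embedding(vocab_size, embed_dim)            # learned word embedding function\nencoder = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)\n\nprev_tokens = torch.tensor([[4, 8, 15, 2]])            # previous instructions + delimiter (id 2 assumed)\ncurr_tokens = torch.tensor([[16, 23, 42]])             # current instruction\nx_bar = torch.cat([prev_tokens, curr_tokens], dim=1)   # concatenated sequence x-bar\n\nh, _ = encoder(phi_I(x_bar))                           # h_j = [forward h_j ; backward h_j]\nJ = prev_tokens.shape[1]                               # index where the current instruction begins\nX_p, X_c = h[:, :J, :], h[:, J:, :]                    # previous vs. current instruction vector sets"
                    },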
                    {
                        "id": 152,
                        "string": "Separating the vectors to two sets will allows computing separate attention on the current instruction and previous ones."
                    },
                    {
                        "id": 153,
                        "string": "To compute each input representation during decoding, we use a bi-linear attention function (Luong et al., 2015) ."
                    },
                    {
                        "id": 154,
                        "string": "Given a set of vectors H, a query vector h q , and a weight matrix W, the attention function ATTEND(H, h q , W) computes a context vector z: αi ∝ exp(h i Wh q ) : i = 0, ."
                    },
                    {
                        "id": 155,
                        "string": "."
                    },
                    {
                        "id": 156,
                        "string": "."
                    },
                    {
                        "id": 157,
                        "string": ", |H| z = |H| i=1 αihi ."
                    },
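                    {
                        "id": "157-code-note",
                        "string": "A minimal sketch of the bi-linear attention function as defined above, assuming PyTorch; the vector sizes are made up for illustration."
                    },
                    {
                        "id": "157-code",
                        "string": "import torch\n\ndef attend(H, h_q, W):\n    # H: (n, d_h) set of vectors, h_q: (d_q,) query, W: (d_h, d_q) weight matrix.\n    scores = H @ W @ h_q               # bi-linear scores h_i W h_q, shape (n,)\n    alpha = torch.softmax(scores, dim=0)\n    return alpha @ H                   # context vector z = sum_i alpha_i h_i\n\nH = torch.randn(5, 8)                  # five vectors of size 8 (made-up sizes)\nh_q = torch.randn(6)\nW = torch.randn(8, 6)\nz = attend(H, h_q, W)                  # z has size 8"
                    },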
                    {
                        "id": 158,
                        "string": "We use a decoder to generate actions."
                    },
                    {
                        "id": 159,
                        "string": "At each time step k, we compute an input representation using the attention function, update the decoder state, and compute the next action to execute."
                    },
                    {
                        "id": 160,
                        "string": "Attention is first computed over the vectors of the current instruction, which is then used to attend over the other inputs."
                    },
                    {
                        "id": 161,
                        "string": "We compute the context vectors z c k and z p k for the current instruction and previous instructions: z c k = ATTEND(X c , h d k−1 , W c ) z p k = ATTEND(X p , [h d k−1 , z c k ], W p ) , where h d k−1 is the decoder hidden state for step k − 1, and X c and X p are the sets of vector representations for the current instruction and previous instructions."
                    },
                    {
                        "id": 162,
                        "string": "Two attention heads are used over both the initial and current states."
                    },
                    {
                        "id": 163,
                        "string": "This allows the model to attend to more than one location in a state at once, for example when transferring items from one beaker to another in ALCHEMY."
                    },
                    {
                        "id": 164,
                        "string": "The current state is computed by the transition function s k = T (s k−1 , a k−1 ), where s k−1 and a k−1 are the state and action at step k − 1."
                    },
                    {
                        "id": 165,
                        "string": "The context vectors for the initial state s 1 and the current state s k are: z s 1,k = [ATTEND(ENC(s1), [h d k−1 , z c k ], W s b ,1 ); ATTEND(ENC(s1), [h d k−1 , z c k ], W s b ,2 )] z s k,k = [ATTEND(ENC(s k ), [h d k−1 , z c k ], W sc,1 ); ATTEND(ENC(s k ), [h d k−1 , z c k ], W sc,2 )] , where all W * , * are learned weight matrices."
                    },
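                    {
                        "id": "165-code-note",
                        "string": "Reusing the ATTEND sketch from above, the double-headed state attention could look as follows; the ENC output and all sizes are illustrative assumptions."
                    },
                    {
                        "id": "165-code",
                        "string": "import torch\n\ndef attend(H, h_q, W):\n    alpha = torch.softmax(H @ W @ h_q, dim=0)\n    return alpha @ H\n\nS1 = torch.randn(7, 8)                       # ENC(s_1), e.g. one vector per beaker (assumption)\nquery = torch.randn(14)                      # [h^d_{k-1}; z^c_k], size made up\nW_sb1, W_sb2 = torch.randn(8, 14), torch.randn(8, 14)\nz_s1_k = torch.cat([attend(S1, query, W_sb1),\n                    attend(S1, query, W_sb2)])   # two heads, so two state locations at once"
                    },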
                    {
                        "id": 166,
                        "string": "We concatenate all computed context vectors with an embedding of the previous action a k−1 to create the input for the decoder: h k = tanh([z c k ; z p k ; z s 1,k ; z s k,k ; φ O (a k−1 )]W d + b d ) h d k = LSTM D h k ; h d k−1 , where φ O is a learned action embedding function and LSTM D is the LSTM decoder recurrence."
                    },
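                    {
                        "id": "166-code-note",
                        "string": "A minimal PyTorch sketch of assembling the decoder input; all dimensions and the action inventory are assumptions, and W^d and b^d are folded into a single linear layer."
                    },
                    {
                        "id": "166-code",
                        "string": "import torch\nimport torch.nn as nn\n\nd = 16                                           # made-up context vector size\nz_c, z_p, z_s1, z_sk = (torch.randn(d) for _ in range(4))\nphi_O = nn.Embedding(10, d)                      # learned action embedding (10 actions assumed)\na_prev = torch.tensor(3)                         # index of a_{k-1}\n\nW_d = nn.Linear(5 * d, d)                        # folds W^d and b^d into one layer\ndecoder = nn.LSTMCell(d, d)\n\nh_k = torch.tanh(W_d(torch.cat([z_c, z_p, z_s1, z_sk, phi_O(a_prev)])))\nh_d_k, c_d_k = decoder(h_k.unsqueeze(0))         # one decoder step; hidden state h^d_k"
                    },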
                    {
                        "id": 167,
                        "string": "Given the decoder state h d k , the next action a k is predicted with a multi-layer perceptron (MLP)."
                    },
                    {
                        "id": 168,
                        "string": "The actions in our domains decompose to an action type and at most two arguments."
                    },
                    {
                        "id": 169,
                        "string": "3 For example, the action PUSH 1 B in ALCHEMY has the type PUSH and two arguments: a beaker number and a color."
                    },
                    {
                        "id": 170,
                        "string": "Section 6 describes the actions of each domain."
                    },
                    {
                        "id": 171,
                        "string": "The probability of an action is: 3 We use a NULL argument for unused arguments."
                    },
                    {
                        "id": 172,
                        "string": "h a k = tanh(h d k W a ) s k,a T = h a k ba T s k,a 1 = h a k ba 1 s k,a 2 = h a k ba 2 p(a k = aT (a1, a2) |s k ; θ) ∝ exp(s k,a T + s k,a 1 + s k,a 2 ) , where a T , a 1 , and a 2 are an action type, first argument, and second argument."
                    },
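                    {
                        "id": "172-code-note",
                        "string": "A sketch of this factored action distribution in PyTorch; the action-inventory sizes are assumptions, not the paper's."
                    },
                    {
                        "id": "172-code",
                        "string": "import torch\nimport torch.nn as nn\n\nd, n_types, n_arg1, n_arg2 = 16, 3, 8, 7         # made-up inventory sizes\nh_d = torch.randn(d)\nW_a = nn.Linear(d, d, bias=False)\nb_T, b_1, b_2 = nn.Linear(d, n_types), nn.Linear(d, n_arg1), nn.Linear(d, n_arg2)\n\nh_a = torch.tanh(W_a(h_d))\n# Sum type and argument scores with broadcasting, then softmax over all triples.\nlogits = (b_T(h_a).view(-1, 1, 1)\n          + b_1(h_a).view(1, -1, 1)\n          + b_2(h_a).view(1, 1, -1))             # shape (n_types, n_arg1, n_arg2)\nprobs = torch.softmax(logits.flatten(), dim=0).view(n_types, n_arg1, n_arg2)"
                    },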
                    {
                        "id": 173,
                        "string": "If the predicted action is STOP, the execution is complete."
                    },
                    {
                        "id": 174,
                        "string": "Otherwise, we execute the action a k to generate the next state s k+1 , and update the agent contexts k tos k+1 by appending the pair (s k , a k ) to the executionē and replacing the current state with s k+1 ."
                    },
                    {
                        "id": 175,
                        "string": "The model parameters θ include: the embedding functions φ I and φ O ; the recurrence param- eters for − −−−− → LSTM E , ← −−−− − LSTM E , and LSTM D ; W C , W P , W s b ,1 , W s b ,2 , W sc,1 , W sc,2 , W d , W a , and b d ; and the domain dependent parameters, including the parameters of the encoding function ENC and the action type, first argument, and second argument weights b a T , b a 1 , and b a 2 ."
                    },
                    {
                        "id": 176,
                        "string": "Learning We estimate the policy parameters θ using an exploration-based learning algorithm that maximizes the immediate expected reward."
                    },
                    {
                        "id": 177,
                        "string": "Broadly speaking, during learning, we observe the agent behavior given the current policy, and for each visited state compute the expected immediate reward by observing rewards for all actions."
                    },
                    {
                        "id": 178,
                        "string": "We assume access to a set of training examples {(x (j) i , s (j) i,1 , x (j) 1 , ."
                    },
                    {
                        "id": 179,
                        "string": "."
                    },
                    {
                        "id": 180,
                        "string": "."
                    },
                    {
                        "id": 181,
                        "string": ",x (j) i−1 , g (j) i )} N,n (j) j=1,i=1 , where each instructionx (j) i is paired with a start state s (j) i,1 , the previous instructions in the sequence x Reward The reward R (j) i : S × S × A → R is defined for each example j and instruction i: R (j) i (s, a, s ) = P (j) i (s, a, s ) + φ (j) i (s ) − φ (j) i (s) , where s is a source state, a is an action, and s is a target state."
                    },
                    {
                        "id": 182,
                        "string": "4 P   i and negative for stopping in an incorrect Algorithm 1 SESTRA: Single-step Reward Observation."
                    },
                    {
                        "id": 183,
                        "string": "Input: Training data {(x (j) i , s (j) i,1 , x (j) 1 , ."
                    },
                    {
                        "id": 184,
                        "string": "."
                    },
                    {
                        "id": 185,
                        "string": "."
                    },
                    {
                        "id": 186,
                        "string": ",x (j) i−1 , g (j) i )} N,n (j) j=1,i=1 , learning rate µ, entropy regularization coefficient λ, episode limit horizon M ."
                    },
                    {
                        "id": 187,
                        "string": "Definitions: π θ is a policy parameterized by θ, BEG is a special action to use for the first decoder step, and STOP indicates end of an execution."
                    },
                    {
                        "id": 188,
                        "string": "T (s, a) is the state transition function, H is an entropy function, R i (s, a, s ) is the reward function for example j and instruction i, and RMSPROP divides each weight by a running average of its squared gradient (Tieleman and Hinton, 2012) ."
                    },
                    {
                        "id": 189,
                        "string": "Output: Parameters θ defining a learned policy π θ ."
                    },
                    {
                        "id": 190,
                        "string": "1: for t = 1, ."
                    },
                    {
                        "id": 191,
                        "string": "."
                    },
                    {
                        "id": 192,
                        "string": "."
                    },
                    {
                        "id": 193,
                        "string": ", T, j = 1, ."
                    },
                    {
                        "id": 194,
                        "string": "."
                    },
                    {
                        "id": 195,
                        "string": "."
                    },
                    {
                        "id": 196,
                        "string": ", N do 2: for i = 1, ."
                    },
                    {
                        "id": 197,
                        "string": "."
                    },
                    {
                        "id": 198,
                        "string": "."
                    },
                    {
                        "id": 199,
                        "string": ", n (j) do 3:ē ← , k ← 0, a0 ← BEG 4: » Rollout up to STOP or episode limit."
                    },
                    {
                        "id": 200,
                        "string": "5: while a k = STOP ∧ k < M do 6: k ← k + 1 7:s k ← (xi, x1, ."
                    },
                    {
                        "id": 201,
                        "string": "."
                    },
                    {
                        "id": 202,
                        "string": "."
                    },
                    {
                        "id": 203,
                        "string": ",xi−1 , s k ,ē[: k]) 8: » Sample an action from policy."
                    },
                    {
                        "id": 204,
                        "string": "9: a k ∼ π θ (s k , ·) 10: s k+1 ← T (s k , a k ) 11:ē ← [ē; (s k , a k ) ] 12: ∆ ←0 13: for k = 1, ."
                    },
                    {
                        "id": 205,
                        "string": "."
                    },
                    {
                        "id": 206,
                        "string": "."
                    },
                    {
                        "id": 207,
                        "string": ", k do 14: » Compute the entropy of π θ (s k , ·)."
                    },
                    {
                        "id": 208,
                        "string": "15: ∆ ← ∆ + λ∇ θ H(π θ (s k , ·)) 16: for a ∈ A do 17: s ← T (s k , a) 18: » Compute gradient for action a."
                    },
                    {
                        "id": 209,
                        "string": "19: ∆ ← ∆ + R (j) i (s k , a, s )∇ θ π θ (s k , a) 20: θ ← θ + µRMSPROP ∆ k 21: return θ state or taking an invalid action: P (j) i (s, a, s ) =          1.0 a = STOP ∧ s = g (j) i −1.0 a = STOP ∧ s = g (j) i −1.0 − δ s = s −δ otherwise , where δ is a verbosity penalty."
                    },
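                    {
                        "id": "183-code-note",
                        "string": "A schematic Python rendering of the gradient accumulation in lines 12-19 of Algorithm 1; the linear policy, the transition function, and the reward here are toy placeholders, not the paper's model or environments."
                    },
                    {
                        "id": "183-code",
                        "string": "import torch\n\n# Toy stand-ins (assumptions): a linear policy, a trivial transition, a made-up reward.\nn_actions, d, lam = 4, 8, 0.1\npolicy = torch.nn.Linear(d, n_actions)               # plays the role of pi_theta\nT = lambda s, a: s + 0.1 * (a - 1)                   # toy transition function\nR = lambda s, a, s_next: -float(s_next.abs().sum())  # toy immediate reward\n\ncontexts = [torch.randn(d) for _ in range(3)]        # agent contexts visited in a rollout\nloss = torch.zeros(())\nfor s_bar in contexts:\n    pi = torch.softmax(policy(s_bar), dim=0)\n    entropy = -(pi * pi.log()).sum()                 # H(pi_theta(s_bar, .))\n    obj = lam * entropy\n    for a in range(n_actions):\n        obj = obj + R(s_bar, a, T(s_bar, a)) * pi[a] # reward observed for every action\n    loss = loss - obj                                # maximize obj by minimizing -obj\n(loss / len(contexts)).backward()                    # gradients ready for an RMSProp update"
                    },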
                    {
                        "id": 210,
                        "string": "The case s = s indicates that a was invalid in state s, as in this domain, all valid actions except STOP modify the state."
                    },
                    {
                        "id": 211,
                        "string": "We use a potential-based shaping term φ (Ng et al., 1999) , where φ (j) i (s ) − φ (j) i (s) (j) i (s) = −||s − g (j) i || computes the edit distance between the state s and the goal, measured over the objects in each state."
                    },
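                    {
                        "id": "211-code-note",
                        "string": "A toy illustration of the reward decomposition R = P + shaping; the list-based states, the value of δ, and the simplified per-position distance are all assumptions for the example."
                    },
                    {
                        "id": "211-code",
                        "string": "# Hedged toy illustration; not the paper's state representation or distance.\nSTOP, DELTA = 'STOP', 0.1\ngoal = ['r', 'g', None]\n\ndef phi(s):\n    # Negative per-position mismatch count, a simplified stand-in for -||s - g||.\n    return -sum(1 for a, b in zip(s, goal) if a != b)\n\ndef P(s, a, s_next):\n    if a == STOP:\n        return 1.0 if s == goal else -1.0\n    if s_next == s:                  # invalid action: the state did not change\n        return -1.0 - DELTA\n    return -DELTA\n\ndef R(s, a, s_next):\n    return P(s, a, s_next) + phi(s_next) - phi(s)\n\nprint(R(['r', None, None], 'push', ['r', 'g', None]))   # progress toward goal: 0.9"
                    },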
                    {
                        "id": 212,
                        "string": "The shaping term densifies the reward, providing a meaningful signal for learning in nonterminal states."
                    },
                    {
                        "id": 213,
                        "string": "Objective We maximize the immediate expected reward over all actions and use entropy regularization."
                    },
                    {
                        "id": 214,
                        "string": "The gradient is approximated by sampling an executionē = (s 1 , a 1 ), ."
                    },
                    {
                        "id": 215,
                        "string": "."
                    },
                    {
                        "id": 216,
                        "string": "."
                    },
                    {
                        "id": 217,
                        "string": ", (s k , a k ) using our current policy: ∇ θ J = 1 k k k =1 a∈A R (s k , a, T (s k , a)) ∇ θ π(s k , a) +λ∇ θ H(π(s k , ·)) , where H(π(s k , ·) is the entropy term."
                    },
                    {
                        "id": 218,
                        "string": "Algorithm Algorithm 1 shows the Single-step Reward Observation (SESTRA) learning algorithm."
                    },
                    {
                        "id": 219,
                        "string": "We iterate over the training data T times (line 1)."
                    },
                    {
                        "id": 220,
                        "string": "For each example j and turn i, we first perform a rollout by sampling an executionē from π θ with at most M actions (lines 5-11)."
                    },
                    {
                        "id": 221,
                        "string": "If the rollout reaches the horizon without predicting STOP, we set the problem reward P (j) i to −1.0 for the last step."
                    },
                    {
                        "id": 222,
                        "string": "Given the sampled states visited, we compute the entropy (line 15) and observe the immediate reward for all actions (line 19) for each step."
                    },
                    {
                        "id": 223,
                        "string": "Entropy and rewards are used to accumulate the gradient, which is applied to the parameters using RMSPROP (Dauphin et al., 2015) (line 20)."
                    },
                    {
                        "id": 224,
                        "string": "Discussion Observing the rewards for all actions for each visited state addresses an on-policy learning exploration problem."
                    },
                    {
                        "id": 225,
                        "string": "Actions that consistently receive negative reward early during learning will be visited with very low probability later on, and in practice, often not explored at all."
                    },
                    {
                        "id": 226,
                        "string": "Because the network is randomly initialized, these early negative rewards are translated into strong general biases that are not grounded well in the observed context."
                    },
                    {
                        "id": 227,
                        "string": "Our algorithm exposes the agent to such actions later on when they receive positive rewards even though the agent does not explore them during rollout."
                    },
                    {
                        "id": 228,
                        "string": "For example, in ALCHEMY, POP actions are sufficient to complete the first steps of good executions."
                    },
                    {
                        "id": 229,
                        "string": "As a result, early during learning, the agent learns a strong bias against PUSH actions."
                    },
                    {
                        "id": 230,
                        "string": "In practice, the agent then will not explore PUSH actions again."
                    },
                    {
                        "id": 231,
                        "string": "In our algorithm, as the agent learns to roll out the correct POP prefix, it is then exposed to the reward for the first PUSH even though it likely sampled another POP."
                    },
                    {
                        "id": 232,
                        "string": "It then unlearns its bias towards predicting POP."
                    },
                    {
                        "id": 233,
                        "string": "Our learning algorithm can be viewed as a costsensitive variant of the oracle in DAGGER (Ross et al., 2011) , where it provides the rewards for all actions instead of an oracle action."
                    },
                    {
                        "id": 234,
                        "string": "It is also related to Locally Optimal Learning to Search (LOLS; Chang et al., 2015) with two key distinctions: (a) instead of using different roll-in and roll-out policies, we use the model policy; and (b) we branch at each step, instead of once, but do not rollout Chang et al., 2015) and our learning algorithm (SESTRA, right We count occurrences of coreference between instructions (e.g., he leaves in SCENE) and ellipsis (e.g., then, drain 2 units in ALCHEMY), when the last explicit mention of the referent was 1, 2, 3, or 4 turns in the past."
                    },
                    {
                        "id": 237,
                        "string": "Figure 3 illustrates the comparison."
                    },
                    {
                        "id": 238,
                        "string": "Our summation over immediate rewards for all actions is related the summation of estimated Q-values for all actions in the Mean Actor-Critic algorithm (Asadi et al., 2017) ."
                    },
                    {
                        "id": 239,
                        "string": "Finally, our approach is related to Misra et al."
                    },
                    {
                        "id": 240,
                        "string": "(2017) , who also maximize the immediate reward, but do not observe rewards for all actions for each state."
                    },
                    {
                        "id": 241,
                        "string": "SCONE Domains and Data SCONE has three domains: ALCHEMY, SCENE, and TANGRAMS."
                    },
                    {
                        "id": 242,
                        "string": "Each interaction contains five instructions."
                    },
                    {
                        "id": 243,
                        "string": "Table 1 shows data statistics."
                    },
                    {
                        "id": 244,
                        "string": "Table 2 shows discourse reference analysis."
                    },
                    {
                        "id": 245,
                        "string": "State encodings are detailed in the Supplementary Material."
                    },
                    {
                        "id": 246,
                        "string": "ALCHEMY Each environment in ALCHEMY contains seven numbered beakers, each containing up to four colored chemicals in order."
                    },
                    {
                        "id": 247,
                        "string": "Figure 1 shows an example."
                    },
                    {
                        "id": 248,
                        "string": "Instructions describe pouring chemicals between and out of beakers, and mixing beakers."
                    },
                    {
                        "id": 249,
                        "string": "We treat all beakers as stacks."
                    },
                    {
                        "id": 250,
                        "string": "There are two action types: PUSH and POP."
                    },
                    {
                        "id": 251,
                        "string": "POP takes a beaker index, and removes the top color."
                    },
                    {
                        "id": 252,
                        "string": "PUSH takes a beaker index and a color, and adds the color at the top of the beaker."
                    },
                    {
                        "id": 253,
                        "string": "To encode a state, we encode each beaker with an RNN, and concatenate the last output with the beaker index embedding."
                    },
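                    {
                        "id": "253-code-note",
                        "string": "A minimal PyTorch sketch of this beaker encoding; the color inventory and dimensions are made up, and empty beakers are skipped here for brevity."
                    },
                    {
                        "id": "253-code",
                        "string": "import torch\nimport torch.nn as nn\n\nn_colors, n_beakers, d = 6, 7, 16                # made-up sizes\ncolor_emb = nn.Embedding(n_colors, d)\nindex_emb = nn.Embedding(n_beakers, d)\nrnn = nn.LSTM(d, d, batch_first=True)\n\ndef enc_beaker(colors, idx):\n    out, _ = rnn(color_emb(torch.tensor([colors])))          # encode chemicals bottom-to-top\n    return torch.cat([out[0, -1], index_emb(torch.tensor(idx))])\n\nstate = [[0, 2], [1], [3, 3, 1]]                 # toy beaker contents as color ids\nS = [enc_beaker(c, i) for i, c in enumerate(state) if c]     # non-empty beakers only (simplification)"
                    },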
                    {
                        "id": 254,
                        "string": "The set of vectors is the state embedding."
                    },
                    {
                        "id": 255,
                        "string": "SCENE Each environment in SCENE contains ten positions, each containing at most one person defined by a shirt color and an optional hat color."
                    },
                    {
                        "id": 256,
                        "string": "Instructions describe adding or removing people, moving a person to another position, and moving a person's hat to another person."
                    },
                    {
                        "id": 257,
                        "string": "There are four action types: ADD_PERSON, ADD_HAT, REMOVE_PERSON, and REMOVE_HAT."
                    },
                    {
                        "id": 258,
                        "string": "ADD_PERSON and ADD_HAT take a position to place the person or hat and the color of the person's shirt or hat."
                    },
                    {
                        "id": 259,
                        "string": "REMOVE_PERSON and REMOVE_HAT take the position to remove a person or hat from."
                    },
                    {
                        "id": 260,
                        "string": "To encode a state, we use a bidirectional RNN over the ordered positions."
                    },
                    {
                        "id": 261,
                        "string": "The input for each position is a concatenation of the color embeddings for the person and hat."
                    },
                    {
                        "id": 262,
                        "string": "The set of RNN hidden states is the state embedding."
                    },
                    {
                        "id": 263,
                        "string": "TANGRAMS Each environment in TANGRAMS is a list containing at most five unique objects."
                    },
                    {
                        "id": 264,
                        "string": "Instructions describe removing or inserting an object into a position in the list, or swapping the positions of two items."
                    },
                    {
                        "id": 265,
                        "string": "There are two action types: INSERT and REMOVE."
                    },
                    {
                        "id": 266,
                        "string": "INSERT takes the position to insert an object, and the object identifier."
                    },
                    {
                        "id": 267,
                        "string": "REMOVE takes an object position."
                    },
                    {
                        "id": 268,
                        "string": "We embed each object by concatenating embeddings for its type and position."
                    },
                    {
                        "id": 269,
                        "string": "The resulting set is the state embedding."
                    },
                    {
                        "id": 270,
                        "string": "Experimental Setup Evaluation Following Long et al."
                    },
                    {
                        "id": 271,
                        "string": "(2016) , we evaluate task completion accuracy using exact match between the final state and the annotated goal state."
                    },
                    {
                        "id": 272,
                        "string": "We report accuracy for complete interactions (5utts), the first three utterances of each interaction (3utts), and single instructions (Inst)."
                    },
                    {
                        "id": 273,
                        "string": "For single instructions, execution starts from the annotated start state of the instruction."
                    },
                    {
                        "id": 274,
                        "string": "Systems We report performance of ablations and two baseline systems: POLICYGRADIENT: policy gradient with cumulative episodic reward without a baseline, and CONTEXTUALBANDIT: the contextual bandit approach of Misra et al."
                    },
                    {
                        "id": 275,
                        "string": "(2017) ."
                    },
                    {
                        "id": 276,
                        "string": "Both systems use the reward with the shaping term and our model."
                    },
                    {
                        "id": 277,
                        "string": "We also report supervised learning results (SUPERVISED) by heuristically generating correct executions and computing maximum-likelihood estimate using contextaction demonstration pairs."
                    },
                    {
                        "id": 278,
                        "string": "Only the supervised approach uses the heuristically generated labels."
                    },
                    {
                        "id": 279,
                        "string": "Although the results are not comparable, we also report the performance of previous approaches to SCONE."
                    },
                    {
                        "id": 280,
                        "string": "All three approaches generate logical representations based on lambda calculus."
                    },
                    {
                        "id": 281,
                        "string": "In contrast to our approach, this requires an ontology of hand built symbols and rules to evaluate the logical forms."
                    },
                    {
                        "id": 282,
                        "string": "Fried et al."
                    },
                    {
                        "id": 283,
                        "string": "(2018) uses supervised learning with annotated logical forms."
                    },
                    {
                        "id": 284,
                        "string": "Training Details For test results, we run each experiment five times and report results for the model with best validation interaction accuracy."
                    },
                    {
                        "id": 285,
                        "string": "For ablations, we do the same with three experiments."
                    },
                    {
                        "id": 286,
                        "string": "We use a batch size of 20."
                    },
                    {
                        "id": 287,
                        "string": "We stop training using a validation set sampled from the training data."
                    },
                    {
                        "id": 288,
                        "string": "We hold the validation set constant for each domain for all experiments."
                    },
                    {
                        "id": 289,
                        "string": "We use patience over the average reward, and select the best model using interaction-level (5utts) validation accuracy."
                    },
                    {
                        "id": 290,
                        "string": "We tune λ, δ, and M on the development set."
                    },
                    {
                        "id": 291,
                        "string": "The selected values and other implementation details are described in the Supplementary Material."
                    },
                    {
                        "id": 292,
                        "string": "Table 3 shows test results."
                    },
                    {
                        "id": 293,
                        "string": "Our approach significantly outperforms POLICYGRADIENT and CON-TEXTUALBANDIT, both of which suffer due to biases learned early during learning, hindering later exploration."
                    },
                    {
                        "id": 294,
                        "string": "This problem does not appear in TANGRAMS, where no action type is dominant at the beginning of executions, and all methods perform well."
                    },
                    {
                        "id": 295,
                        "string": "POLICYGRADIENT completely fails to learn ALCHEMY and SCENE due to observing only negative total rewards early during learning."
                    },
                    {
                        "id": 296,
                        "string": "Results Using a baseline, for example with an actor-critic method, will potentially close the gap to CONTEX-TUALBANDIT."
                    },
                    {
                        "id": 297,
                        "string": "However, it is unlikely to address the on-policy exploration problem."
                    },
                    {
                        "id": 298,
                        "string": "Table 4 shows development results, including model ablation studies."
                    },
                    {
                        "id": 299,
                        "string": "Removing previous instructions (-previous instructions) or both states (-current and initial state) reduces performance across all domains."
                    },
                    {
                        "id": 300,
                        "string": "Removing only the initial state (-initial state) or the current state (-current state) shows mixed results across the domains."
                    },
                    {
                        "id": 301,
                        "string": "Providing access to both initial and current states increases performance for ALCHEMY, but reduces performance on the other domains."
                    },
                    {
                        "id": 302,
                        "string": "We hypothesize that this is due to the increase in the number of parameters outweighing what is relatively marginal information for these domains."
                    },
                    {
                        "id": 303,
                        "string": "In our development and test results we use a single architecture across the three domains, the full approach, which has the highest interactive-level accuracy when averaged across the three domains (62.7 5utts)."
                    },
                    {
                        "id": 304,
                        "string": "We also report mean and standard deviation for our approach over five trials."
                    },
                    {
                        "id": 305,
                        "string": "We observe exceptionally high variance in performance on SCENE, where some experiments fail to learn and training performance remains exceptionally low (Figure 4) ."
                    },
                    {
                        "id": 306,
                        "string": "This highlights the sensitivity of the model to the random effects of initialization, dropout, and ordering of training examples."
                    },
                    {
                        "id": 307,
                        "string": "We analyze the instruction-level errors made by our best models when the agent is provided the correct initial state for the instruction."
                    },
                    {
                        "id": 308,
                        "string": "We study fifty examples in each domain to identify the type of failures."
                    },
                    {
                        "id": 309,
                        "string": "Table 5 shows the counts of major error categories."
                    },
                    {
                        "id": 310,
                        "string": "We consider multiple reference resolution errors."
                    },
                    {
                        "id": 311,
                        "string": "State reference errors indicate a failure to resolve a reference to the world state."
                    },
                    {
                        "id": 312,
                        "string": "For example, in ALCHEMY, the phrase leftmost red beaker specifies a beaker in the environment."
                    },
                    {
                        "id": 313,
                        "string": "If the model picked the correct action, but the wrong beaker, we count it as a state reference."
                    },
                    {
                        "id": 314,
                        "string": "We distinguish between multi-turn reference errors that should be feasible, and these that that are impossible to solve without access to states before executing previous utterances, which are not provided to our model."
                    },
                    {
                        "id": 315,
                        "string": "For example, in TANGRAMS, the instruction put it back in the same place refers to a previouslyremoved item."
                    },
                    {
                        "id": 316,
                        "string": "Because the agent only has access to the world state after following this instruction, it does not observe what kind of item was previously removed, and cannot identify the item to add."
                    },
                    {
                        "id": 317,
                        "string": "We    also find a significant number of errors due to ambiguous or incorrect instructions."
                    },
                    {
                        "id": 318,
                        "string": "For example, the SCENE instruction person in green appears on the right end is ambiguous."
                    },
                    {
                        "id": 319,
                        "string": "In the annotated goal, it is interpreted as referring to a person already in the environment, who moves to the 10th position."
                    },
                    {
                        "id": 320,
                        "string": "However, it can also be interpreted as a new person in green appearing in the 10th position."
                    },
                    {
                        "id": 321,
                        "string": "We also study performance with respect to multi-turn coreference by observing whether the model was able to identify the correct referent for each occurrence included in the analysis in Table 2 ."
                    },
                    {
                        "id": 322,
                        "string": "The models were able to correctly resolve 92.3%, 88.7%, and 76.0% of references in ALCHEMY, SCENE, and TANGRAMS respectively."
                    },
                    {
                        "id": 323,
                        "string": "Finally, we include attention visualization for examples from the three domains in the Supplementary Material."
                    },
                    {
                        "id": 324,
                        "string": "Discussion We propose a model to reason about contextdependent instructional language that display strong dependencies both on the history of the interaction and the state of the world."
                    },
                    {
                        "id": 325,
                        "string": "Future modeling work may include using intermediate world states from previous turns in the interaction, which is required for some of the most complex references in the data."
                    },
                    {
                        "id": 326,
                        "string": "We propose to train our model using SESTRA, a learning algorithm that takes advantage of single-step reward observations to overcome learned biases in on-policy learning."
                    },
                    {
                        "id": 327,
                        "string": "Our learning approach requires additional reward observations in comparison to conventional reinforcement learning."
                    },
                    {
                        "id": 328,
                        "string": "However, it is particularly suitable to recovering from biases acquired early during learning, for example due to biased action spaces, which is likely to lead to incorrect blame assignment in neural network policies."
                    },
                    {
                        "id": 329,
                        "string": "When the domain and model are less susceptible to such biases, the benefit of the additional reward observations is less pronounced."
                    },
                    {
                        "id": 330,
                        "string": "One possible direction for future work is to use an estimator to predict rewards for all actions, rather than observing them."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 33
                    },
                    {
                        "section": "Technical Overview",
                        "n": "2",
                        "start": 34,
                        "end": 84
                    },
                    {
                        "section": "Related Work",
                        "n": "3",
                        "start": 85,
                        "end": 110
                    },
                    {
                        "section": "Model",
                        "n": "4",
                        "start": 111,
                        "end": 175
                    },
                    {
                        "section": "Learning",
                        "n": "5",
                        "start": 176,
                        "end": 240
                    },
                    {
                        "section": "SCONE Domains and Data",
                        "n": "6",
                        "start": 241,
                        "end": 269
                    },
                    {
                        "section": "Experimental Setup",
                        "n": "7",
                        "start": 270,
                        "end": 295
                    },
                    {
                        "section": "Results",
                        "n": "8",
                        "start": 296,
                        "end": 323
                    },
                    {
                        "section": "Discussion",
                        "n": "9",
                        "start": 324,
                        "end": 330
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1032-Figure1-1.png",
                        "caption": "Figure 1: Example from the SCONE (Long et al., 2016) ALCHEMY domain, including a start state (top), sequence of instructions, and a goal state (bottom). Each instruction is annotated with a sequence of actions from the set of actions we define for ALCHEMY.",
                        "page": 0,
                        "bbox": {
                            "x1": 287.52,
                            "x2": 525.12,
                            "y1": 221.76,
                            "y2": 408.0
                        }
                    },
                    {
                        "filename": "../figure/image/1032-Table1-1.png",
                        "caption": "Table 1: Data statistics for ALCHEMY (ALC), SCENE (SCE), and TANGRAMS (TAN).",
                        "page": 6,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 282.24,
                            "y1": 188.64,
                            "y2": 261.12
                        }
                    },
                    {
                        "filename": "../figure/image/1032-Table2-1.png",
                        "caption": "Table 2: Counts of discourse phenomena in SCONE from 30 randomly selected development interactions for each domain. We count occurrences of coreference between instructions (e.g., he leaves in SCENE) and ellipsis (e.g., then, drain 2 units in ALCHEMY), when the last explicit mention of the referent was 1, 2, 3, or 4 turns in the past. We also report the average number of multi-turn references per interaction (Refs/Ex).",
                        "page": 6,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 289.44,
                            "y1": 289.44,
                            "y2": 361.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1032-Figure3-1.png",
                        "caption": "Figure 3: Illustration of LOLS (left; Chang et al., 2015) and our learning algorithm (SESTRA, right). LOLS branches a single time, and samples complete rollout for each branch to obtain the trajectory loss. SESTRA uses a complete on-policy rollout and singlestep branching for all actions in each sample state.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 289.44,
                            "y1": 62.879999999999995,
                            "y2": 118.08
                        }
                    },
                    {
                        "filename": "../figure/image/1032-Figure4-1.png",
                        "caption": "Figure 4: Instruction-level training accuracy per epoch when training five models on SCENE, demonstrating the effect of randomization in the learning method. Three of five experiments fail to learn effective models. The red and blue learning trajectories are overlapping.",
                        "page": 7,
                        "bbox": {
                            "x1": 82.08,
                            "x2": 279.84,
                            "y1": 66.24,
                            "y2": 155.51999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1032-Figure2-1.png",
                        "caption": "Figure 2: Illustration of the model architecture while generating the third action a3 in the third utterance x̄3 from Figure 1. Context vectors computed using attention are highlighted in blue. The model takes as input vector encodings from the current and previous instructions x̄1, x̄2, and x̄3, the initial state s1, the current state s3, and the previous action a2. Instruction encodings are computed with a bidirectional RNN. We attend over the previous and current instructions and the initial and current states. We use an MLP to select the next action.",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 62.879999999999995,
                            "y2": 203.04
                        }
                    },
                    {
                        "filename": "../figure/image/1032-Table4-1.png",
                        "caption": "Table 4: Development results, including model ablations. We also report mean µ and standard deviation σ for all metrics for our approach across five experiments. We bold the best performing variations of our model.",
                        "page": 8,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 509.28,
                            "y1": 192.48,
                            "y2": 324.96
                        }
                    },
                    {
                        "filename": "../figure/image/1032-Table3-1.png",
                        "caption": "Table 3: Test accuracies for single instructions (Inst), first-three instructions (3utts), and full interactions (5utts).",
                        "page": 8,
                        "bbox": {
                            "x1": 116.64,
                            "x2": 480.47999999999996,
                            "y1": 61.44,
                            "y2": 166.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1032-Table5-1.png",
                        "caption": "Table 5: Common error counts in the three domains.",
                        "page": 8,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 288.0,
                            "y1": 365.76,
                            "y2": 420.0
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-21"
        },
        {
            "slides": {
                "0": {
                    "title": "Semantic Role Labelling",
                    "text": [
                        "Subject Manner Verb Object Time",
                        "John surreptitiously ate the burrito at 2am.",
                        "Applied to improve state-of-the-art in NLP tasks such as Question Answering",
                        "Commonly used interface to facilitate Data Exploration and Information Extraction [Stanovsky et al 2018] [Chiticariu et al. 2018]",
                        "Considerable interest in general-purpose SRL parsers"
                    ],
                    "page_nums": [
                        1,
                        2,
                        3,
                        4
                    ],
                    "images": []
                },
                "1": {
                    "title": "Qa srl",
                    "text": [
                        "Subject Manner Verb Object Time",
                        "John surreptitiously ate the burrito at 2am.",
                        "ng ing ea ten",
                        "teso methi aten? thi ng ea ten",
                        "Who a was s ometh",
                        "How When was some",
                        "Wh Aux Subj Verb Obj Prep Obj2",
                        "How did didnt might will someone something stem past past participle present someone something on to by from someone something",
                        "didnt might will stem someone What Where When Why How someone something past something on to by from someone something past participle present",
                        "What did someone eat? the burrito",
                        "Who someone stem What did",
                        "Where When Why How didnt might will something past someone something on to by from someone something past participle present",
                        "Who ate something? John"
                    ],
                    "page_nums": [
                        5,
                        6,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23
                    ],
                    "images": []
                },
                "2": {
                    "title": "Goal",
                    "text": [
                        "A high-quality, large-scale parser for QA-SRL"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "3": {
                    "title": "Challenges",
                    "text": [
                        "1. Scale up QA-SRL data annotation",
                        "75k sentence dataset in 9 days",
                        "2. Train a QA-SRL Parser",
                        "A much larger super eruption in Colorado produced over 5,000 cubic kilometers of material.",
                        "What produced something? A much larger super eruption Produced Where did something produce something? in Colorado What did something produce? over 5,000 cubic kilometers of material Where didnt someone appear to do something? In the video Who didnt appear to do something? the perpetrators appeared When did someone appear? never In the video, the perpetrators never appeared to look at the camera. look at the camera What didnt someone appear to do? to look at the camera Where didn't someone look at something? In the video look Who didnt look? the perpetrators What didnt someone look at? the camera Some of the vegetarians Who met someone? vegetarians met Some of the vegetarians he met were members of the Theosophical Society, which had been founded in 1875 to further universal brotherhood, and which was devoted to the study of Buddhist and Hindu literature.",
                        "Who met? he What did someone meet? members of the Theosophical Society members of the Theosophical Society What had been founded? the Theosophical Society founded in 1875 When was something founded? Why has something been founded? to further universal brotherhood What was devoted to something? members of the Theosophical Society"
                    ],
                    "page_nums": [
                        9,
                        10,
                        11,
                        12,
                        13
                    ],
                    "images": [
                        "figure/image/1037-Table8-1.png"
                    ]
                },
                "4": {
                    "title": "Large scale QA SRL",
                    "text": [
                        "1. Scale up QA-SRL data annotation",
                        "2. Train a QA-SRL Parser"
                    ],
                    "page_nums": [
                        15,
                        37
                    ],
                    "images": []
                },
                "5": {
                    "title": "Easier Annotation",
                    "text": [
                        "UCCA ~6k sentences 4 Trained Annotators",
                        "Semantic Proto-roles ~7k sentences MTurk [Reisinger et al. 2015]",
                        "Groningen Meaning Bank ~40k sentences [Basile et al. 2012]",
                        "QASRL 1.0 ~3k sentences Trained annotators [He et al. 2015]",
                        "QA-SRL 2.0 75k sentences MTurk"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "6": {
                    "title": "Annotation Pipeline",
                    "text": [
                        "x John surreptitiously ate the burrito at 2am",
                        "Predicate detection Identify verbs with POS + heuristics",
                        "One worker writes as many QA-SRL questions as possible, and provides the answer",
                        "Validation 2 workers are shows questions, provide answers or mark as invalid"
                    ],
                    "page_nums": [
                        24,
                        25,
                        26,
                        77,
                        78
                    ],
                    "images": []
                },
                "9": {
                    "title": "Dataset",
                    "text": [
                        "1 annotator provides questions",
                        "2 annotators validate -> 3 spans / question",
                        "Question invalid if any annotator marks invalid",
                        "Additional 3 validators for small dense dev and test set",
                        "3000 sentences 75k sentences",
                        "Several weeks 9 days",
                        "2.43 questions / verb questions / verb"
                    ],
                    "page_nums": [
                        32,
                        33
                    ],
                    "images": [
                        "figure/image/1037-Table2-1.png"
                    ]
                },
                "11": {
                    "title": "QA SRL Parsing",
                    "text": [
                        "Argument detection John surreptitiously the burrito at 2pm",
                        "Local Question generation Sequential Who ate something? How did someone eat something? What did someone eat? When did someone eat something?",
                        "Span-based Model John surreptitiously the burrito at 2pm"
                    ],
                    "page_nums": [
                        41,
                        42,
                        43,
                        44,
                        45,
                        46,
                        47
                    ],
                    "images": []
                },
                "12": {
                    "title": "Argument Detection BIO Model",
                    "text": [
                        "Alternating Bi-LSTM with Highway Connections and Recurrent Dropout [He et al 2017]",
                        "Input includes predicate indicator",
                        "John surreptitiously ate the burrito at 2pm",
                        "B B O B I B I",
                        "John surreptitiously the burrito at 2pm"
                    ],
                    "page_nums": [
                        48,
                        49,
                        50
                    ],
                    "images": []
                },
                "13": {
                    "title": "Argument Detection Span Model",
                    "text": [
                        "Form a representation of every possible span",
                        "John surreptitiously ate the burrito at 2pm",
                        "John John surreptitiously surreptitiously ate the the burrito the burrito at at 2pm"
                    ],
                    "page_nums": [
                        51,
                        52,
                        53,
                        54,
                        55,
                        56
                    ],
                    "images": []
                },
                "14": {
                    "title": "Argument Detection",
                    "text": [
                        "4 layer Alternating Bi-LSTM with Highway Connections and",
                        "Recurrent Dropout [He et al 2017]",
                        "Trained to maximize log-likelihood"
                    ],
                    "page_nums": [
                        57,
                        58
                    ],
                    "images": []
                },
                "18": {
                    "title": "Evaluation Questions",
                    "text": [
                        "Who ate something? Who eaten was something by?",
                        "Exact Match (full question)"
                    ],
                    "page_nums": [
                        69,
                        70
                    ],
                    "images": []
                },
                "19": {
                    "title": "Full Parsing Accuracy",
                    "text": [
                        "Exact match f-score (Span & Question)"
                    ],
                    "page_nums": [
                        72
                    ],
                    "images": []
                },
                "20": {
                    "title": "Large scale QA SRL Parsing",
                    "text": [
                        "1. Scale up QA-SRL data annotation",
                        "2. Train a QA-SRL Parser"
                    ],
                    "page_nums": [
                        73
                    ],
                    "images": []
                },
                "22": {
                    "title": "Evaluation",
                    "text": [
                        "Exact Match for Question Generation is overly harsh",
                        "Who ate something? Who eaten was something by?",
                        "Penalizes correct predictions missing from data"
                    ],
                    "page_nums": [
                        81
                    ],
                    "images": []
                },
                "23": {
                    "title": "Human Evaluation",
                    "text": [
                        "Validate model predictions with 6 annotators",
                        "Generated Question valid if 5 out of 6 annotators provided answers",
                        "Predicted span correct if exactly matches any annotators answer"
                    ],
                    "page_nums": [
                        82
                    ],
                    "images": []
                },
                "26": {
                    "title": "Example Output",
                    "text": [
                        "Some of the vegetarians he met were members of the Theosophical Society, which had been founded in 1875 to further universal brotherhood, and which was devoted to the study of Buddhist and Hindu literature.",
                        "Who met someone? Some of the vegetarians met Who met? he",
                        "What did someone meet? members of the Theosophical Society members of the Theosophical Society What had been founded? the Theosophical Society founded When was something founded? in 1875",
                        "Why has something been founded? to further universal brotherhood",
                        "What was devoted to something? members of the Theosophical Society devoted What was something devoted to? the study of Buddhist and Hindu literature"
                    ],
                    "page_nums": [
                        86
                    ],
                    "images": []
                }
            },
            "paper_title": "Large-Scale QA-SRL Parsing",
            "paper_id": "1037",
            "paper": {
                "title": "Large-Scale QA-SRL Parsing",
                "abstract": "We present a new large-scale corpus of Question-Answer driven Semantic Role Labeling (QA-SRL) annotations, and the first high-quality QA-SRL parser. Our corpus, QA-SRL Bank 2.0, consists of over 250,000 question-answer pairs for over 64,000 sentences across 3 domains and was gathered with a new crowd-sourcing scheme that we show has high precision and good recall at modest cost. We also present neural models for two QA-SRL subtasks: detecting argument spans for a predicate and generating questions to label the semantic relationship. The best models achieve question accuracy of 82.6% and span-level accuracy of 77.6% (under human evaluation) on the full pipelined QA-SRL prediction task. They can also, as we show, be used to gather additional annotations at low cost.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Learning semantic parsers to predict the predicateargument structures of a sentence is a long standing, open challenge (Palmer et al., 2005; Baker et al., 1998) ."
                    },
                    {
                        "id": 1,
                        "string": "Such systems are typically trained from datasets that are difficult to gather, 1 but recent research has explored training nonexperts to provide this style of semantic supervision (Abend and Rappoport, 2013; Basile et al., 2012; Reisinger et al., 2015; He et al., 2015) ."
                    },
                    {
                        "id": 2,
                        "string": "In this paper, we show for the first time that it is possible to go even further by crowdsourcing a large * Much of this work was done while these authors were at the Allen Institute for Artificial Intelligence."
                    },
                    {
                        "id": 3,
                        "string": "1 The PropBank (Bonial et al., 2010) and FrameNet (Ruppenhofer et al., 2016) annotation guides are 89 and 119 pages, respectively."
                    },
                    {
                        "id": 4,
                        "string": "In 1950 Alan M. Turing published \"Computing machinery and intelligence\" in Mind, in which he proposed that machines could be tested for intelligence using questions and answers."
                    },
                    {
                        "id": 5,
                        "string": "Why was something being used?"
                    },
                    {
                        "id": 6,
                        "string": "tested for intelligence Figure 1 : An annotated sentence from our dataset."
                    },
                    {
                        "id": 7,
                        "string": "Question 6 was not produced by crowd workers in the initial collection, but was produced by our parser as part of Data Expansion (see Section 5.)"
                    },
                    {
                        "id": 8,
                        "string": "scale dataset that can be used to train high quality parsers at modest cost."
                    },
                    {
                        "id": 9,
                        "string": "We adopt the Question-Answer-driven Semantic Role Labeling (QA-SRL) (He et al., 2015) annotation scheme."
                    },
                    {
                        "id": 10,
                        "string": "QA-SRL is appealing because it is intuitive to non-experts, has been shown to closely match the structure of traditional predicate-argument structure annotation schemes (He et al., 2015) , and has been used for end tasks such as Open IE (Stanovsky and Dagan, 2016) ."
                    },
                    {
                        "id": 11,
                        "string": "In QA-SRL, each predicate-argument relationship is labeled with a question-answer pair (see Figure 1 )."
                    },
                    {
                        "id": 12,
                        "string": "He et al."
                    },
                    {
                        "id": 13,
                        "string": "(2015) showed that high precision QA-SRL annotations can be gathered with limited training but that high recall is challenging to achieve; it is relatively easy to gather answerable questions, but difficult to ensure that every possible question is labeled for every verb."
                    },
                    {
                        "id": 14,
                        "string": "For this reason, they hired and trained hourly annotators and only labeled a relatively small dataset (3000 sentences)."
                    },
                    {
                        "id": 15,
                        "string": "Our first contribution is a new, scalable approach for crowdsourcing QA-SRL."
                    },
                    {
                        "id": 16,
                        "string": "We introduce a streamlined web interface (including an autosuggest mechanism and automatic quality control to boost recall) and use a validation stage to en-sure high precision (i.e."
                    },
                    {
                        "id": 17,
                        "string": "all the questions must be answerable)."
                    },
                    {
                        "id": 18,
                        "string": "With this approach, we produce QA-SRL Bank 2.0, a dataset with 133,479 verbs from 64,018 sentences across 3 domains, totaling 265,140 question-answer pairs, in just 9 days."
                    },
                    {
                        "id": 19,
                        "string": "Our analysis shows that the data has high precision with good recall, although it does not cover every possible question."
                    },
                    {
                        "id": 20,
                        "string": "Figure 1 shows example annotations."
                    },
                    {
                        "id": 21,
                        "string": "Using this data, our second contribution is a comparison of several new models for learning a QA-SRL parser."
                    },
                    {
                        "id": 22,
                        "string": "We follow a pipeline approach where the parser does (1) unlabeled span detection to determine the arguments of a given verb, and (2) question generation to label the relationship between the predicate and each detected span."
                    },
                    {
                        "id": 23,
                        "string": "Our best model uses a span-based representation similar to that introduced by Lee et al."
                    },
                    {
                        "id": 24,
                        "string": "(2016) and a custom LSTM to decode questions from a learned span encoding."
                    },
                    {
                        "id": 25,
                        "string": "Our model does not require syntactic information and can be trained directly from the crowdsourced span labels."
                    },
                    {
                        "id": 26,
                        "string": "Experiments demonstrate that the model does well on our new data, achieving up to 82.2% spandetection F1 and 47.2% exact-match question accuracy relative to the human annotations."
                    },
                    {
                        "id": 27,
                        "string": "We also demonstrate the utility of learning to predict easily interpretable QA-SRL structures, using a simple data bootstrapping approach to expand our dataset further."
                    },
                    {
                        "id": 28,
                        "string": "By tuning our model to favor recall, we over-generate questions which can be validated using our annotation pipeline, allowing for greater recall without requiring costly redundant annotations in the question writing step."
                    },
                    {
                        "id": 29,
                        "string": "Performing this procedure on the training and development sets grows them by 20% and leads to improvements when retraining our models."
                    },
                    {
                        "id": 30,
                        "string": "Our final parser is highly accurate, achieving 82.6% question accuracy and 77.6% span-level precision in an human evaluation."
                    },
                    {
                        "id": 31,
                        "string": "Our data, code, and trained models will be made publicly available."
                    },
                    {
                        "id": 32,
                        "string": "2 Data Annotation A QA-SRL annotation consists of a set of question-answer pairs for each verbal predicate in a sentence, where each answer is a set of contiguous spans from the sentence."
                    },
                    {
                        "id": 33,
                        "string": "QA-SRL questions are defined by a 7-slot template shown in Table 1 ."
                    },
                    {
                        "id": 34,
                        "string": "We introduce a crowdsourcing pipeline to collect annotations rapidly, cheaply, and at large scale."
                    },
                    {
                        "id": 35,
                        "string": "2 http://qasrl.org Figure 2 : Interface for the generation step."
                    },
                    {
                        "id": 36,
                        "string": "Autocomplete shows completions of the current QA-SRL slot, and auto-suggest shows fully-formed questions (highlighted green) based on the previous questions."
                    },
                    {
                        "id": 37,
                        "string": "Pipeline Our crowdsourcing pipeline consists of a generation and validation step."
                    },
                    {
                        "id": 38,
                        "string": "In the generation step, a sentence with one of its verbs marked is shown to a single worker, who must write QA-SRL questions for the verb and highlight their answers in the sentence."
                    },
                    {
                        "id": 39,
                        "string": "The questions are passed to the validation step, where n workers answer each question or mark it as invalid."
                    },
                    {
                        "id": 40,
                        "string": "In each step, no two answers to distinct questions may overlap with each other, to prevent redundancy."
                    },
                    {
                        "id": 41,
                        "string": "Instructions Workers are instructed that a valid question-answer pair must satisfy three criteria: 1) the question is grammatical, 2) the questionanswer pair is asking about the time, place, participants, etc., of the target verb, and 3) all correct answers to each question are given."
                    },
                    {
                        "id": 42,
                        "string": "Autocomplete We provide an autocomplete drop-down to streamline question writing."
                    },
                    {
                        "id": 43,
                        "string": "Autocomplete is implemented as a Non-deterministic Finite Automaton (NFA) whose states correspond to the 7 QA-SRL slots paired with a partial representation of the question's syntax."
                    },
                    {
                        "id": 44,
                        "string": "We use the NFA to make the menu more compact by disallowing obviously ungrammatical combinations (e.g., What did been appeared?"
                    },
                    {
                        "id": 45,
                        "string": "), and the syntactic representation to auto-suggest complete questions about arguments that have not yet been covered (see Figure 2 )."
                    },
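                    {
                        "id": "editor-sketch-autocomplete",
                        "string": "Editor's sketch (not from the paper): a minimal Python illustration of slot-based autocomplete over the 7-slot template described above. The slot options and the single pruning rule are illustrative assumptions, not the authors' actual NFA, which also tracks partial question syntax.",
                        "code": [
                            "# Toy autocomplete over the 7 QA-SRL slots. Each state is the index",
                            "# of the next empty slot; transitions are the words allowed there.",
                            "SLOTS = ['Wh', 'Aux', 'Subj', 'Verb', 'Obj', 'Prep', 'Obj2']",
                            "OPTIONS = {",
                            "    'Wh': ['Who', 'What', 'Where', 'When', 'Why', 'How'],",
                            "    'Aux': ['', 'did', \"didn't\", 'might', 'will', 'was'],",
                            "    'Subj': ['', 'someone', 'something'],",
                            "    'Verb': ['stem', 'past', 'pastParticiple', 'present'],",
                            "    'Obj': ['', 'someone', 'something'],",
                            "    'Prep': ['', 'on', 'to', 'by', 'from'],",
                            "    'Obj2': ['', 'someone', 'something'],",
                            "}",
                            "",
                            "def next_options(prefix):",
                            "    # prefix: words already chosen for slots 0..k-1",
                            "    k = len(prefix)",
                            "    if k >= len(SLOTS):",
                            "        return []",
                            "    options = list(OPTIONS[SLOTS[k]])",
                            "    # e.g. passive auxiliary 'was' requires a past participle,",
                            "    # pruning items like 'What did been appeared?'",
                            "    if SLOTS[k] == 'Verb' and prefix[1] == 'was':",
                            "        options = ['pastParticiple']",
                            "    return options"
                        ]
                    },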
                    {
                        "id": 46,
                        "string": "The auto-suggest feature significantly reduces the number of keystrokes required to enter new questions after the first one, speeding up the annotation process and making it easier for annotators to provide higher recall."
                    },
                    {
                        "id": 47,
                        "string": "He et al."
                    },
                    {
                        "id": 48,
                        "string": "(2015) interviewed and hired contractors to annotate data at much smaller scale for a cost of about 50c per verb."
                    },
                    {
                        "id": 49,
                        "string": "Our annotation scheme is cheaper, far more scalable, and provides more (though noisier) supervision for answer spans."
                    },
                    {
                        "id": 50,
                        "string": "To allow for more careful evaluation, we validated 5,205 sentences at a higher density (up to 1,000 for each domain in dev and test), re-running the generated questions through validation with n = 3 for a total of 6 answer annotations for each question."
                    },
                    {
                        "id": 51,
                        "string": "Quality Judgments of question validity had moderate agreement."
                    },
                    {
                        "id": 52,
                        "string": "About 89.5% of validator judgments rated a question as valid, and the agreement rate between judgments of the same question on whether the question is invalid is 90.9%."
                    },
                    {
                        "id": 53,
                        "string": "This gives a Fleiss's Kappa of 0.51."
                    },
                    {
                        "id": 54,
                        "string": "In the higherdensity re-run, validators were primed to be more critical: 76.5% of judgments considered a question valid, and agreement was at 83.7%, giving a Fleiss's Kappa of 0.55."
                    },
                    {
                        "id": 55,
                        "string": "Despite being more critical in the denser annotation round, questions marked valid in the original dataset were marked valid by the new annotators in 86% of cases, showing our data's relatively high precision."
                    },
                    {
                        "id": 56,
                        "string": "The high precision of our annotation pipeline is also backed up by our small-scale manual evaluation (see Coverage below)."
                    },
                    {
                        "id": 57,
                        "string": "Answer spans for each question also exhibit 4 www.mturk.com Table 3 : Precision and recall of our annotation pipeline on a merged and validated subset of 100 verbs."
                    },
                    {
                        "id": 58,
                        "string": "The unfiltered number represents relaxing the restriction that none of 2 validators marked the question as invalid."
                    },
                    {
                        "id": 59,
                        "string": "good agreement."
                    },
                    {
                        "id": 60,
                        "string": "On the original dataset, each answer span has a 74.8% chance to exactly match one provided by another annotator (up to two), and on the densely annotated subset, each answer span has an 83.1% chance to exactly match one provided by another annotator (up to five)."
                    },
                    {
                        "id": 61,
                        "string": "Coverage Accurately measuring recall for QA-SRL annotations is an open challenge."
                    },
                    {
                        "id": 62,
                        "string": "For example, question 6 in Figure 1 reveals an inferred temporal relation that would not be annotated as part of traditional SRL."
                    },
                    {
                        "id": 63,
                        "string": "Exhaustively enumerating the full set of such questions is difficult, even for experts."
                    },
                    {
                        "id": 64,
                        "string": "However, we can compare to the original QA-SRL dataset (He et al., 2015) , where Wikipedia sentences were annotated with 2.43 questions per verb."
                    },
                    {
                        "id": 65,
                        "string": "Our data has lower-but loosely comparable-recall, with 2.05 questions per verb in Wikipedia."
                    },
                    {
                        "id": 66,
                        "string": "In order to further analyze the quality of our annotations relative to (He et al., 2015) , we reannotate a 100-verb subset of their data both manually (aiming for exhaustivity) and with our crowdsourcing pipeline."
                    },
                    {
                        "id": 67,
                        "string": "We merge the three sets of annotations, manually remove bad questions (and their answers), and calculate the precision and recall of the crowdsourced annotations and those of He et al."
                    },
                    {
                        "id": 68,
                        "string": "(2015) against this pooled, filtered dataset (using the span detection metrics described in Section 4)."
                    },
                    {
                        "id": 69,
                        "string": "Results, shown in Table 3 , show that our pipeline produces comparable precision with only a modest decrease in recall."
                    },
                    {
                        "id": 70,
                        "string": "Interestingly, readding the questions rejected in the validation step greatly increases recall with only a small decrease in precision, showing that validators sometimes rejected questions considered valid by the authors."
                    },
                    {
                        "id": 71,
                        "string": "However, we use the filtered dataset for our experiments, and in Section 5, we show how another crowdsourcing step can further improve recall."
                    },
                    {
                        "id": 72,
                        "string": "Models Given a sentence X = x 0 , ."
                    },
                    {
                        "id": 73,
                        "string": "."
                    },
                    {
                        "id": 74,
                        "string": "."
                    },
                    {
                        "id": 75,
                        "string": ", x n , the goal of a QA-SRL parser is to produce a set of tuples (v i , Q i , S i ), where v ∈ {0, ."
                    },
                    {
                        "id": 76,
                        "string": "."
                    },
                    {
                        "id": 77,
                        "string": "."
                    },
                    {
                        "id": 78,
                        "string": ", n} is the index of a verbal predicate, Q i is a question, and S i ∈ {(i, j) | i, j ∈ [0, n], j ≥ i} is a set of spans which are valid answers."
                    },
                    {
                        "id": 79,
                        "string": "Our proposed parsers construct these tuples in a three-step pipeline: 1."
                    },
                    {
                        "id": 80,
                        "string": "Verbal predicates are identified using the same POS-tags and heuristics as in data collection (see Section 2)."
                    },
                    {
                        "id": 81,
                        "string": "2."
                    },
                    {
                        "id": 82,
                        "string": "Unlabeled span detection selects a set S v of spans as arguments for a given verb v. 3."
                    },
                    {
                        "id": 83,
                        "string": "Question generation predicts a question for each span in S v ."
                    },
                    {
                        "id": 84,
                        "string": "Spans are then grouped by question, giving each question a set of answers."
                    },
                    {
                        "id": 85,
                        "string": "We describe two models for unlabeled span detection in section 3.1, followed by question generation in section 3.2."
                    },
                    {
                        "id": 86,
                        "string": "All models are built on an LSTM encoding of the sentence."
                    },
                    {
                        "id": 87,
                        "string": "Like , we start with an input X v = {x 0 ."
                    },
                    {
                        "id": 88,
                        "string": "."
                    },
                    {
                        "id": 89,
                        "string": "."
                    },
                    {
                        "id": 90,
                        "string": "x n }, where the representation x i at each time step is a concatenation of the token w i 's embedding and an embedded binary feature (i = v) which indicates whether w i is the predicate under consideration."
                    },
                    {
                        "id": 91,
                        "string": "We then compute the output representation H v = BILSTM(X v ) using a stacked alternating LSTM (Zhou and Xu, 2015) with highway connections (Srivastava et al., 2015) and recurrent dropout (Gal and Ghahramani, 2016) ."
                    },
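                    {
                        "id": "editor-sketch-encoder",
                        "string": "Editor's sketch (not from the paper): the predicate-indicator input encoding in PyTorch. A plain nn.LSTM stands in for the stacked alternating LSTM with highway connections and recurrent dropout; all module and parameter names here are hypothetical.",
                        "code": [
                            "import torch",
                            "import torch.nn as nn",
                            "",
                            "class PredicateAwareEncoder(nn.Module):",
                            "    def __init__(self, vocab, emb=100, ind=100, hidden=300, layers=4):",
                            "        super().__init__()",
                            "        self.word_emb = nn.Embedding(vocab, emb)",
                            "        self.ind_emb = nn.Embedding(2, ind)  # binary feature (i = v)",
                            "        # stand-in for the stacked alternating highway LSTM",
                            "        self.encoder = nn.LSTM(emb + ind, hidden, num_layers=layers,",
                            "                               bidirectional=True, batch_first=True)",
                            "",
                            "    def forward(self, tokens, v):",
                            "        # tokens: (batch, n) token ids; v: (batch,) predicate positions",
                            "        positions = torch.arange(tokens.size(1), device=tokens.device)",
                            "        ind = (positions == v.unsqueeze(1)).long()",
                            "        x = torch.cat([self.word_emb(tokens), self.ind_emb(ind)], -1)",
                            "        h, _ = self.encoder(x)  # H_v: (batch, n, 2 * hidden)",
                            "        return h"
                        ]
                    },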
                    {
                        "id": 92,
                        "string": "Since the span detection and question generation models both use an LSTM encoding, this component could in principle be shared between them."
                    },
                    {
                        "id": 93,
                        "string": "However, in preliminary experiments we found that sharing hurt performance, so for the remainder of this work each model is trained independently."
                    },
                    {
                        "id": 94,
                        "string": "Span Detection Given an encoded sentence H v , the goal of span detection is to select the spans S v that correspond to arguments of the given predicate."
                    },
                    {
                        "id": 95,
                        "string": "We explore two models: a sequence-tagging model with BIO encoding, and a span-based model which assigns a probability to every possible span."
                    },
                    {
                        "id": 96,
                        "string": "BIO Sequence Model Our BIO model predicts a set of spans via a sequence y where each y i ∈ {B, I, O}, representing a token at the beginning, interior, or outside of any span, respectively."
                    },
                    {
                        "id": 97,
                        "string": "Similar to , we make independent predictions for each token at training time, and use Viterbi decoding to enforce hard BIO-constraints 5 at test time."
                    },
                    {
                        "id": 98,
                        "string": "The resulting sequences are in one-to-one correspondence with sets S v of spans which are pairwise non-overlapping."
                    },
                    {
                        "id": 99,
                        "string": "The locally-normalized BIO-tag distributions are computed from the BiLSTM outputs H v = {h v0 , ."
                    },
                    {
                        "id": 100,
                        "string": "."
                    },
                    {
                        "id": 101,
                        "string": "."
                    },
                    {
                        "id": 102,
                        "string": ", h vn }: p(y t | x) ∝ exp(w tag MLP(h vt ) + b tag ) (1) Span-based Model Our span-based model makes independent binary decisions for all O(n 2 ) spans in the sentence."
                    },
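                    {
                        "id": "editor-sketch-viterbi",
                        "string": "Editor's sketch (not from the paper): Viterbi decoding of the locally-normalized BIO scores under the hard constraint of footnote 5 (an I-tag may only follow a B- or I-tag); a minimal NumPy version with names of our choosing.",
                        "code": [
                            "import numpy as np",
                            "",
                            "TAGS = ['B', 'I', 'O']",
                            "B, I, O = 0, 1, 2",
                            "NEG_INF = -1e9",
                            "",
                            "def viterbi_bio(log_probs):",
                            "    # log_probs: (n, 3) per-token log-probabilities over B, I, O",
                            "    n = log_probs.shape[0]",
                            "    trans = np.zeros((3, 3))",
                            "    trans[O, I] = NEG_INF  # O -> I is forbidden",
                            "    score = log_probs[0].copy()",
                            "    score[I] = NEG_INF  # a sequence cannot start with I",
                            "    back = np.zeros((n, 3), dtype=int)",
                            "    for t in range(1, n):",
                            "        cand = score[:, None] + trans + log_probs[t][None, :]",
                            "        back[t] = cand.argmax(axis=0)",
                            "        score = cand.max(axis=0)",
                            "    tags = [int(score.argmax())]",
                            "    for t in range(n - 1, 0, -1):",
                            "        tags.append(int(back[t, tags[-1]]))",
                            "    return [TAGS[i] for i in reversed(tags)]"
                        ]
                    },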
                    {
                        "id": 103,
                        "string": "Following Lee et al."
                    },
                    {
                        "id": 104,
                        "string": "(2016) , the representation of a span (i, j) is the concatenation of the BiLSTM output at each endpoint: s vij = [h vi , h vj ]."
                    },
                    {
                        "id": 105,
                        "string": "(2) The probability that the span is an argument of predicate v is computed by the sigmoid function: p(y ij | X v ) = σ(w span MLP(s vij ) + b span ) (3) At training time, we minimize the binary cross entropy summed over all n 2 possible spans, counting a span as a positive example if it appears as an answer to any question."
                    },
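                    {
                        "id": "editor-sketch-span-scorer",
                        "string": "Editor's sketch (not from the paper): the span-based scorer of Eqs. 2-3 in PyTorch. SpanScorer and its dimensions are hypothetical names, not the authors' code.",
                        "code": [
                            "import torch",
                            "import torch.nn as nn",
                            "",
                            "class SpanScorer(nn.Module):",
                            "    def __init__(self, enc_dim, hidden=150):",
                            "        super().__init__()",
                            "        self.mlp = nn.Sequential(nn.Linear(2 * enc_dim, hidden),",
                            "                                 nn.ReLU(), nn.Linear(hidden, 1))",
                            "",
                            "    def forward(self, h):",
                            "        # h: (n, enc_dim) encoder outputs for one predicate",
                            "        n = h.size(0)",
                            "        starts = h.unsqueeze(1).expand(n, n, -1)  # h_vi per (i, j)",
                            "        ends = h.unsqueeze(0).expand(n, n, -1)    # h_vj per (i, j)",
                            "        s = torch.cat([starts, ends], -1)         # s_vij = [h_vi; h_vj]",
                            "        return torch.sigmoid(self.mlp(s)).squeeze(-1)  # p(y_ij | X_v)",
                            "",
                            "# At test time, keep spans with j >= i whose probability exceeds a",
                            "# threshold tau, trading off precision against recall."
                        ]
                    },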
                    {
                        "id": 106,
                        "string": "At test time, we choose a threshold τ and select every span that the model assigns probability greater than τ , allowing us to trade off precision and recall."
                    },
                    {
                        "id": 107,
                        "string": "Question Generation We introduce two question generation models."
                    },
                    {
                        "id": 108,
                        "string": "Given a span representation s vij defined in subsubsection 3.1.2, our models generate questions by picking a word for each question slot (see Section 2)."
                    },
                    {
                        "id": 109,
                        "string": "Each model calculates a joint distribution p(y | X v , s vij ) over values y = (y 1 , ."
                    },
                    {
                        "id": 110,
                        "string": "."
                    },
                    {
                        "id": 111,
                        "string": "."
                    },
                    {
                        "id": 112,
                        "string": ", y 7 ) for the question slots given a span s vij , and is trained to minimize the negative log-likelihood of gold slot values."
                    },
                    {
                        "id": 113,
                        "string": "Local Model The local model predicts the words for each slot independently: p(y k | X v , s vij ) ∝ exp(w k MLP(s vij ) + b k )."
                    },
                    {
                        "id": 114,
                        "string": "(4) 5 E.g., an I-tag should only follow a B-tag."
                    },
                    {
                        "id": 115,
                        "string": "Sequence Model The sequence model uses the machinery of an RNN to share information between slots."
                    },
                    {
                        "id": 116,
                        "string": "At each slot k, we apply a multiple layers of LSTM cells: h l,k , c l,k = LSTMCELL l,k (h l−1,k , h l,k−1 , c l,k−1 ) (5) where the initial input at each slot is a concatenation of the span representation and the embedding of the previous word of the question: h 0,k = [s vij ; y k−1 ]."
                    },
                    {
                        "id": 117,
                        "string": "Since each question slot predicts from a different set of words, we found it beneficial to use separate weights for the LSTM cells at each slot k. During training, we feed in the gold token at the previous slot, while at test time, we use the predicted token."
                    },
                    {
                        "id": 118,
                        "string": "The output distribution at slot k is computed via the final layers' output vector h Lk : p(y k | X v , s vij ) ∝ exp(w k MLP(h Lk ) + b k ) (6) Initial Results Automatic evaluation for QA-SRL parsing presents multiple challenges."
                    },
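                    {
                        "id": "editor-sketch-slot-decoder",
                        "string": "Editor's sketch (not from the paper): the sequential question generator of Eqs. 5-6, with separate LSTM-cell weights per slot and teacher forcing during training. A single layer stands in for the paper's multi-layer recurrence; all names are hypothetical.",
                        "code": [
                            "import torch",
                            "import torch.nn as nn",
                            "",
                            "class SlotSequenceDecoder(nn.Module):",
                            "    def __init__(self, span_dim, emb_dim, hidden, slot_vocabs):",
                            "        super().__init__()",
                            "        # one LSTM cell, output layer, and embedding table per slot",
                            "        self.cells = nn.ModuleList(",
                            "            [nn.LSTMCell(span_dim + emb_dim, hidden) for _ in slot_vocabs])",
                            "        self.outs = nn.ModuleList(",
                            "            [nn.Linear(hidden, v) for v in slot_vocabs])",
                            "        self.embs = nn.ModuleList(",
                            "            [nn.Embedding(v, emb_dim) for v in slot_vocabs])",
                            "",
                            "    def forward(self, span_rep, gold=None):",
                            "        # span_rep: (batch, span_dim); gold: list of (batch,) slot labels",
                            "        batch = span_rep.size(0)",
                            "        h = span_rep.new_zeros(batch, self.cells[0].hidden_size)",
                            "        c = torch.zeros_like(h)",
                            "        prev = span_rep.new_zeros(batch, self.embs[0].embedding_dim)",
                            "        logits = []",
                            "        for k, (cell, out) in enumerate(zip(self.cells, self.outs)):",
                            "            h, c = cell(torch.cat([span_rep, prev], -1), (h, c))",
                            "            logits.append(out(h))",
                            "            # gold token at training time, predicted token at test time",
                            "            y_k = gold[k] if gold is not None else logits[-1].argmax(-1)",
                            "            prev = self.embs[k](y_k)",
                            "        return logits  # one (batch, vocab_k) tensor per slot"
                        ]
                    },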
                    {
                        "id": 119,
                        "string": "In this section, we introduce automatic metrics that can help us compare models."
                    },
                    {
                        "id": 120,
                        "string": "In Section 6, we will report human evaluation results for our final system."
                    },
                    {
                        "id": 121,
                        "string": "Span Detection Metrics We evaluate span detection using a modified notion of precision and recall."
                    },
                    {
                        "id": 122,
                        "string": "We count predicted spans as correct if they match any of the labeled spans in the dataset."
                    },
                    {
                        "id": 123,
                        "string": "Since each predicted span could potentially be a match to multiple questions (due to overlapping annotations) we map each predicted span to one matching question in the way that maximizes measured recall using maximum bipartite matching."
                    },
                    {
                        "id": 124,
                        "string": "We use both exact match and intersection-over-union (IOU) greater than 0.5 as matching criteria."
                    },
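                    {
                        "id": "editor-sketch-matching-metric",
                        "string": "Editor's sketch (not from the paper): the modified precision/recall computation, matching each predicted span to at most one gold question via maximum bipartite matching (a hand-rolled augmenting-path version) under exact match or IOU >= 0.5. Function names are ours.",
                        "code": [
                            "def iou(a, b):",
                            "    # token spans a = (i, j), b = (k, l), inclusive endpoints",
                            "    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)",
                            "    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter",
                            "    return inter / union",
                            "",
                            "def count_matches(pred, gold, criterion=lambda p, g: iou(p, g) >= 0.5):",
                            "    # pred: list of spans; gold: list of answer-span lists per question",
                            "    edges = [[q for q, spans in enumerate(gold)",
                            "              if any(criterion(p, g) for g in spans)] for p in pred]",
                            "    match = {}  # gold question index -> predicted span index",
                            "    def augment(p, seen):",
                            "        for q in edges[p]:",
                            "            if q not in seen:",
                            "                seen.add(q)",
                            "                if q not in match or augment(match[q], seen):",
                            "                    match[q] = p",
                            "                    return True",
                            "        return False",
                            "    matched = sum(augment(p, set()) for p in range(len(pred)))",
                            "    return matched  # precision = matched/|pred|, recall = matched/|gold|"
                        ]
                    },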
                    {
                        "id": 125,
                        "string": "Results Table 4 shows span detection results on the development set."
                    },
                    {
                        "id": 126,
                        "string": "We report results for the span-based models at two threshold values τ : τ = 0.5, and τ = τ * maximizing F1."
                    },
                    {
                        "id": 127,
                        "string": "The span-based model significantly improves over the BIO model in both precision and recall, although the difference is less pronounced under IOU matching."
                    },
                    {
                        "id": 128,
                        "string": "Question Generation Metrics Like all generation tasks, evaluation metrics for question generation must contend with Table 5 : Question Generation results on the dense development set."
                    },
                    {
                        "id": 129,
                        "string": "EM -Exact Match accuracy, PM -Partial Match Accuracy, SA -Slot-level accuracy the fact that there are in general multiple possible valid questions for a given predicate-argument pair."
                    },
                    {
                        "id": 130,
                        "string": "For instance, the question \"Who did someone blame something on?\""
                    },
                    {
                        "id": 131,
                        "string": "may be rephrased as \"Who was blamed for something?\""
                    },
                    {
                        "id": 132,
                        "string": "However, due to the constrained space of possible questions defined by QA-SRL's slot format, accuracy-based metrics can still be informative."
                    },
                    {
                        "id": 133,
                        "string": "In particular, we report the rate at which the predicted question exactly matches the gold question, as well as a relaxed match where we only count the question word (WH), subject (SBJ), object (OBJ) and Miscellaneous (Misc) slots (see Table 1 )."
                    },
                    {
                        "id": 134,
                        "string": "Finally, we report average slot-level accuracy."
                    },
                    {
                        "id": 135,
                        "string": "Results Table 5 shows the results for question generation on the development set."
                    },
                    {
                        "id": 136,
                        "string": "The sequential model's exact match accuracy is significantly higher, while word-level accuracy is roughly comparable, reflecting the fact that the local model learns the slot-level posteriors."
                    },
                    {
                        "id": 137,
                        "string": "Table 6 : Joint span detection and question generation results on the dense development set, using exact-match for both spans and questions."
                    },
                    {
                        "id": 138,
                        "string": "match for both."
                    },
                    {
                        "id": 139,
                        "string": "This metric is exceedingly hard, but it shows that almost 40% of predictions are exactly correct in both span and question."
                    },
                    {
                        "id": 140,
                        "string": "In Section 6, we use human evaluation to get a more accurate assessment of our model's accuracy."
                    },
                    {
                        "id": 141,
                        "string": "Joint results Data Expansion Since our trained parser can produce full QA-SRL annotations, its predictions can be validated by the same process as in our original annotation pipeline, allowing us to focus annotation efforts towards filling potential data gaps."
                    },
                    {
                        "id": 142,
                        "string": "By detecting spans at a low probability cutoff, we over-generate QA pairs for already-annotated sentences."
                    },
                    {
                        "id": 143,
                        "string": "Then, we filter out QA pairs whose answers overlap with answer spans in the existing annotations, or whose questions match existing questions."
                    },
                    {
                        "id": 144,
                        "string": "What remains are candidate QA pairs which fill gaps in the original annotation."
                    },
                    {
                        "id": 145,
                        "string": "We pass these questions to the validation step of our crowdsourcing pipeline with n = 3 validators, resulting in new labels."
                    },
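                    {
                        "id": "editor-sketch-expansion-filter",
                        "string": "Editor's sketch (not from the paper): the filtering step of data expansion, keeping only over-generated QA pairs whose answers do not overlap existing answer spans and whose questions are new; survivors go to crowd validation with n = 3. Helper names are ours.",
                        "code": [
                            "def overlaps(a, b):",
                            "    # inclusive token spans a = (i, j), b = (k, l)",
                            "    return not (a[1] < b[0] or b[1] < a[0])",
                            "",
                            "def expansion_candidates(generated, existing):",
                            "    # generated/existing: lists of (question, [answer spans]) pairs",
                            "    old_questions = {q for q, _ in existing}",
                            "    old_spans = [s for _, spans in existing for s in spans]",
                            "    keep = []",
                            "    for question, spans in generated:",
                            "        if question in old_questions:",
                            "            continue  # question already labeled",
                            "        if any(overlaps(s, e) for s in spans for e in old_spans):",
                            "            continue  # answer already covered",
                            "        keep.append((question, spans))",
                            "    return keep  # candidate QA pairs that fill annotation gaps"
                        ]
                    },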
                    {
                        "id": 146,
                        "string": "We run this process on the training and development partitions of our dataset."
                    },
                    {
                        "id": 147,
                        "string": "For the development set, we use the trained model described in the previous section."
                    },
                    {
                        "id": 148,
                        "string": "For the training set, we use a relaxed version of jackknifing, training 5 models over 5 different folds."
                    },
                    {
                        "id": 149,
                        "string": "We generate 92,080 questions at a threshold of τ = 0.2."
                    },
                    {
                        "id": 150,
                        "string": "Since in this case many sentences have only one question, we restructure the pay to a 2c base rate with a 2c bonus per question after the first (still paying no less than 2c per question)."
                    },
                    {
                        "id": 151,
                        "string": "Data statistics 46,017 (50%) of questions run through the expansion step were considered valid by all three annotators."
                    },
                    {
                        "id": 152,
                        "string": "In total, after filtering, the expansion step increased the number of valid questions in the train and dev partitions by 20%."
                    },
                    {
                        "id": 153,
                        "string": "However, for evaluation, since our recall metric identifies a single question for each answer span (via bipartite matching), we filter out likely question paraphrases by removing questions in the ex-  panded development set whose answer spans have two overlaps with the answer spans of one question in the original annotations."
                    },
                    {
                        "id": 154,
                        "string": "After this filtering, the expanded development set we use for evaluation has 11.5% more questions than the original development set."
                    },
                    {
                        "id": 155,
                        "string": "The total cost including MTurk fees was $8,210.66, for a cost of 8.9c per question, or 17.8c per valid question."
                    },
                    {
                        "id": 156,
                        "string": "While the cost per valid question was comparable to the initial annotation, we gathered many more negative examples (which may serve useful in future work), and this method allowed us to focus on questions that were missed in the first round and improve the exhaustiveness of the annotation (whereas it is not obvious how to make fully crowdsourced annotation more exhaustive at a comparable cost per question)."
                    },
                    {
                        "id": 157,
                        "string": "Retrained model We retrained our final model on the training set extended with the new valid questions, yielding modest improvements on both span detection and question generation in the development set (see Table 7 )."
                    },
                    {
                        "id": 158,
                        "string": "The span detection numbers are higher than on the original dataset, because the expanded development data captures true positives produced by the original model (and the resulting increase in precision can be traded off for recall as well)."
                    },
                    {
                        "id": 159,
                        "string": "Final Evaluation We use the crowdsourced validation step to do a final human evaluation of our models."
                    },
                    {
                        "id": 160,
                        "string": "We test 3 parsers: the span-based span detection model paired with each of the local and sequential question generation models trained on the initial dataset, and our final model (span-based span detection and sequential question generation) trained with the expanded data."
                    },
                    {
                        "id": 161,
                        "string": "Methodology On the 5,205 sentence densely annotated subset of dev and test, we generate QA-SRL labels with all of the models using a span detection threshold of τ = 0.2 and combine the questions with the existing data."
                    },
                    {
                        "id": 162,
                        "string": "We filter out questions that fail the autocomplete grammaticality check (counting them invalid) and pass the data into the validation step, annotating each question to a total of 6 validator judgments."
                    },
                    {
                        "id": 163,
                        "string": "We then compute question and span accuracy as follows: A question is considered correct if 5 out of 6 annotators consider it valid, and a span is considered correct if its generated question is correct and the span is among those selected for the question by validators."
                    },
                    {
                        "id": 164,
                        "string": "We rank all questions and spans by the threshold at which they are generated, which allows us to compute accuracy at different levels of recall."
                    },
                    {
                        "id": 165,
                        "string": "Results Figure 3 shows the results."
                    },
                    {
                        "id": 166,
                        "string": "As expected, the sequence-based question generation models are much more accurate than the local model; this is largely because the local model generated many questions that failed the grammaticality check."
                    },
                    {
                        "id": 167,
                        "string": "Furthermore, training with our expanded data results in more questions and spans generated at the same threshold."
                    },
                    {
                        "id": 168,
                        "string": "If we choose a threshold value which gives a similar number of questions per sentence as were labeled in the original data annotation (2 questions / verb), question and span accuracy are 82.64% and 77.61%, respectively."
                    },
                    {
                        "id": 169,
                        "string": "Table 8 shows the output of our best system on 3 randomly selected sentences from our development set (one from each domain)."
                    },
                    {
                        "id": 170,
                        "string": "The model was overall highly accurate-only one question and 3 spans are considered incorrect, and each mistake is nearly correct, 6 even when the sentence contains a negation."
                    },
                    {
                        "id": 171,
                        "string": "Related Work Resources and formalisms for semantics often require expert annotation and underlying syntax (Palmer et al., 2005; Baker et al., 1998; Banarescu et al., 2013) ."
                    },
                    {
                        "id": 172,
                        "string": "Some more recent semantic resources require less annotator training, or can be crowdsourced (Abend and Rappoport, 2013; Reisinger et al., 2015; Basile et al., 2012; Michael et al., 2018) ."
                    },
                    {
                        "id": 173,
                        "string": "In particular, the original QA-SRL (He et al., 2015) dataset is annotated by freelancers, while we developed streamlined crowdsourcing approaches for more scalable annotation."
                    },
                    {
                        "id": 174,
                        "string": "Crowdsourcing has also been used for indirectly annotating syntax (He et al., 2016; Duan et al., 2016) , and to complement expert annotation of SRL (Wang et al., 2018) ."
                    },
                    {
                        "id": 175,
                        "string": "Our crowdsourcing approach draws heavily on that of Michael et al."
                    },
                    {
                        "id": 176,
                        "string": "(2018) , with automatic two-stage validation for the collected question-answer pairs."
                    },
                    {
                        "id": 177,
                        "string": "More recently, models have been developed for these newer semantic resources, such as UCCA (Teichert et al., 2017) and Semantic Proto-Roles (White et al., 2017) ."
                    },
                    {
                        "id": 178,
                        "string": "Our work is the first highquality parser for QA-SRL, which has several unique modeling challenges, such as its highly structured nature and the noise in crowdsourcing."
                    },
                    {
                        "id": 179,
                        "string": "Several recent works have explored neural models for SRL tasks (Collobert and Weston, 2007; FitzGerald et al., 2015; Swayamdipta et al., 2017; Yang and Mitchell, 2017) , many of which employ a BIO encoding (Zhou and Xu, 2015; ."
                    },
                    {
                        "id": 180,
                        "string": "Recently, span-based models have proven to be useful for question answering (Lee et al., 2016) and coreference resolution , and PropBank SRL ."
                    },
                    {
                        "id": 181,
                        "string": "Table 8 : System output on 3 randomly sampled sentences from the development set (1 from each of the 3 domains)."
                    },
                    {
                        "id": 182,
                        "string": "Spans were selected with τ = 0.5."
                    },
                    {
                        "id": 183,
                        "string": "Questions and spans with a red background were marked incorrect during human evaluation."
                    },
                    {
                        "id": 184,
                        "string": "Conclusion In this paper, we demonstrated that QA-SRL can be scaled to large datasets, enabling a new methodology for labeling and producing predicate-argument structures at a large scale."
                    },
                    {
                        "id": 185,
                        "string": "We presented a new, scalable approach for crowdsourcing QA-SRL, which allowed us to collect QA-SRL Bank 2.0, a new dataset covering over 250,000 question-answer pairs from over 64,000 sentences, in just 9 days."
                    },
                    {
                        "id": 186,
                        "string": "We demonstrated the utility of this data by training the first parser which is able to produce high-quality QA-SRL structures."
                    },
                    {
                        "id": 187,
                        "string": "Finally, we demonstrated that the validation stage of our crowdsourcing pipeline, in combination with our parser tuned for recall, can be used to add new annotations to the dataset, increasing recall."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 31
                    },
                    {
                        "section": "Data Annotation",
                        "n": "2",
                        "start": 32,
                        "end": 71
                    },
                    {
                        "section": "Models",
                        "n": "3",
                        "start": 72,
                        "end": 92
                    },
                    {
                        "section": "Span Detection",
                        "n": "3.1",
                        "start": 93,
                        "end": 95
                    },
                    {
                        "section": "BIO Sequence Model",
                        "n": "3.1.1",
                        "start": 96,
                        "end": 101
                    },
                    {
                        "section": "Span-based Model",
                        "n": "3.1.2",
                        "start": 102,
                        "end": 106
                    },
                    {
                        "section": "Question Generation",
                        "n": "3.2",
                        "start": 107,
                        "end": 112
                    },
                    {
                        "section": "Local Model",
                        "n": "3.2.1",
                        "start": 113,
                        "end": 114
                    },
                    {
                        "section": "Sequence Model",
                        "n": "3.2.2",
                        "start": 115,
                        "end": 118
                    },
                    {
                        "section": "Initial Results",
                        "n": "4",
                        "start": 119,
                        "end": 120
                    },
                    {
                        "section": "Span Detection",
                        "n": "4.1",
                        "start": 121,
                        "end": 127
                    },
                    {
                        "section": "Question Generation",
                        "n": "4.2",
                        "start": 128,
                        "end": 140
                    },
                    {
                        "section": "Data Expansion",
                        "n": "5",
                        "start": 141,
                        "end": 158
                    },
                    {
                        "section": "Final Evaluation",
                        "n": "6",
                        "start": 159,
                        "end": 170
                    },
                    {
                        "section": "Related Work",
                        "n": "7",
                        "start": 171,
                        "end": 183
                    },
                    {
                        "section": "Conclusion",
                        "n": "8",
                        "start": 184,
                        "end": 187
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1037-Figure1-1.png",
                        "caption": "Figure 1: An annotated sentence from our dataset. Question 6 was not produced by crowd workers in the initial collection, but was produced by our parser as part of Data Expansion (see Section 5.)",
                        "page": 0,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 222.23999999999998,
                            "y2": 349.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Table6-1.png",
                        "caption": "Table 6: Joint span detection and question generation results on the dense development set, using exact-match for both spans and questions.",
                        "page": 5,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 513.12,
                            "y1": 62.879999999999995,
                            "y2": 105.11999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Table4-1.png",
                        "caption": "Table 4: Results for Span Detection on the dense development dataset. Span detection results are given with the cutoff threshold τ at 0.5, and at the value which maximizes F-score. The top chart lists precision, recall and F-score with exact span match, while the bottom reports matches where the intersection over union (IOU) is ≥ 0.5.",
                        "page": 5,
                        "bbox": {
                            "x1": 93.6,
                            "x2": 268.32,
                            "y1": 62.879999999999995,
                            "y2": 201.12
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Table5-1.png",
                        "caption": "Table 5: Question Generation results on the dense development set. EM - Exact Match accuracy, PM - Partial Match Accuracy, SA - Slot-level accuracy",
                        "page": 5,
                        "bbox": {
                            "x1": 114.72,
                            "x2": 248.16,
                            "y1": 317.76,
                            "y2": 360.96
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Figure2-1.png",
                        "caption": "Figure 2: Interface for the generation step. Autocomplete shows completions of the current QASRL slot, and auto-suggest shows fully-formed questions (highlighted green) based on the previous questions.",
                        "page": 1,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 69.6,
                            "y2": 224.16
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Table7-1.png",
                        "caption": "Table 7: Results on the expanded development set comparing the full model trained on the original data, and with the expanded data.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 290.4,
                            "y1": 62.879999999999995,
                            "y2": 314.88
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Table1-1.png",
                        "caption": "Table 1: Example QA-SRL questions, decomposed into their slot-based representation. See He et al. (2015) for the full details. All slots draw from a small, deterministic set of options, including verb tense (present, pastparticiple, etc.) Here we have replaced the verb-tense slot with its conjugated form.",
                        "page": 2,
                        "bbox": {
                            "x1": 158.88,
                            "x2": 438.71999999999997,
                            "y1": 63.36,
                            "y2": 131.04
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Table2-1.png",
                        "caption": "Table 2: Statistics for the dataset with questions written by workers across three domains.",
                        "page": 2,
                        "bbox": {
                            "x1": 84.0,
                            "x2": 278.4,
                            "y1": 202.56,
                            "y2": 253.92
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Figure3-1.png",
                        "caption": "Figure 3: Human evaluation accuracy for questions and spans, as each model’s span detection threshold is varied. Questions are considered correct if 5 out of 6 annotators consider it valid. Spans are considered correct if their question was valid, and the span was among those labeled by human annotators for that question. The vertical line indicates a threshold value where the number of questions per sentence matches that of the original labeled data (2 questions / verb).",
                        "page": 7,
                        "bbox": {
                            "x1": 78.72,
                            "x2": 519.36,
                            "y1": 61.44,
                            "y2": 421.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Table3-1.png",
                        "caption": "Table 3: Precision and recall of our annotation pipeline on a merged and validated subset of 100 verbs. The unfiltered number represents relaxing the restriction that none of 2 validators marked the question as invalid.",
                        "page": 3,
                        "bbox": {
                            "x1": 78.72,
                            "x2": 283.2,
                            "y1": 62.879999999999995,
                            "y2": 119.03999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1037-Table8-1.png",
                        "caption": "Table 8: System output on 3 randomly sampled sentences from the development set (1 from each of the 3 domains). Spans were selected with τ = 0.5. Questions and spans with a red background were marked incorrect during human evaluation.",
                        "page": 8,
                        "bbox": {
                            "x1": 117.11999999999999,
                            "x2": 480.47999999999996,
                            "y1": 62.879999999999995,
                            "y2": 351.36
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-22"
        },
        {
            "slides": {
                "0": {
                    "title": "Introduction",
                    "text": [
                        "Chinese spelling checkers are difficult",
                        "No word delimiters exist among Chinese words",
                        "A Chinese word can contain only",
                        "character or mulGple characters",
                        "More than 13 thousand characters",
                        "The spelling checker is expected to idenGfy all possible spelling errors, highlight their locaGons and suggest possible correcGons",
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Chinese Spelling Check Evaluations",
                    "text": [
                        "The 1st Chinese Spelling Check Bake-off",
                        "SIGHAN-2013 workshop @ Nagoya, Japan",
                        "Chinese as a foreign language learners",
                        "CIPS-SIGHAN joint CLP-2014 conference @ Wuhan",
                        "SIGHAN-2015 workshop @ Beijing, China",
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "3": {
                    "title": "Testing Examples",
                    "text": [
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "5": {
                    "title": "Training Set",
                    "text": [
                        "This set included selected essays with a total of spelling errors.",
                        "Each essay is shown in terms of format",
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": [
                        "figure/image/1040-Figure1-1.png"
                    ]
                },
                "6": {
                    "title": "Dryrun Set",
                    "text": [
                        "A total of 39 passages were given to parGcipants to familiarize themselves with the f inal tesGng process.",
                        "The purpose is to validate the submiked output format only, and no dryrun outcomes were considered in the official evaluaGon",
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "7": {
                    "title": "Test Set",
                    "text": [
                        "This set consists of 1,100 tesGng passages. Half of these passages contained no spelling errors, while the other half included at least one spelling error",
                        "Open test policy: employing any linguisGc and computaGonal resources to detect and correct spelling errors are allowed.",
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "9": {
                    "title": "Evaluation Examples",
                    "text": [
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "10": {
                    "title": "9 Participants and 15 Runs",
                    "text": [
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": [
                        "figure/image/1040-Table2-1.png"
                    ]
                },
                "11": {
                    "title": "Testing Results",
                    "text": [
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "12": {
                    "title": "A Summary of Developed Systems",
                    "text": [
                        "SIGHAN 2015 @ Beijing, China"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": [
                        "figure/image/1040-Table4-1.png"
                    ]
                }
            },
            "paper_title": "Introduction to SIGHAN 2015 Bake-off for Chinese Spelling Check",
            "paper_id": "1040",
            "paper": {
                "title": "Introduction to SIGHAN 2015 Bake-off for Chinese Spelling Check",
                "abstract": "This paper introduces the SIGHAN 2015 Bake-off for Chinese Spelling Check, including task description, data preparation, performance metrics, and evaluation results. The competition reveals current state-of-the-art NLP techniques in dealing with Chinese spelling checking. All data sets with gold standards and evaluation tool used in this bake-off are publicly available for future research.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction to SIGHAN 2015 Bake-off for Chinese Spelling Check 1 Introduction Chinese spelling checkers are relatively difficult to develop, partly because no word delimiters exist among Chinese words and a Chinese word can contain only a single character or multiple characters."
                    },
                    {
                        "id": 1,
                        "string": "Furthermore, there are more than 13 thousand Chinese characters, instead of only 26 letters in English, and each with its own context to constitute a meaningful Chinese word."
                    },
                    {
                        "id": 2,
                        "string": "All these make Chinese spell checking a challengeable task."
                    },
                    {
                        "id": 3,
                        "string": "An empirical analysis indicated that Chinese spelling errors frequently arise from confusion among multiple-character words, which are phonologically and visually similar, but semantically distinct (Liu et al., 2011) ."
                    },
                    {
                        "id": 4,
                        "string": "The automatic spelling checker should have both capabilities of identifying the spelling errors and suggesting the correct characters of erroneous usages."
                    },
                    {
                        "id": 5,
                        "string": "The SIGHAN 2013 Bake-off for Chinese Spelling Check was the first campaign to provide data sets as benchmarks for the performance evaluation of Chinese spelling checkers (Wu et al., 2013) ."
                    },
                    {
                        "id": 6,
                        "string": "The data in SIGHAN 2013 originated from the essays written by native Chinese speakers."
                    },
                    {
                        "id": 7,
                        "string": "Following the experience of the first evaluation, the second bake-off was held in CIPS-SIGHAN Joint CLP-2014 conference, which focuses on the essays written by learners of Chinese as a Foreign Language (CFL) (Yu et al., 2014) ."
                    },
                    {
                        "id": 8,
                        "string": "Due to the greater challenge in detecting and correcting spelling errors in CFL leaners' written essays, SIGHAN 2015 Bake-off, again features a Chinese Spelling Check task, providing an evaluation platform for the development and implementation of automatic Chinese spelling checkers."
                    },
                    {
                        "id": 9,
                        "string": "Given a passage composed of several sentences, the checker is expected to identify all possible spelling errors, highlight their locations, and suggest possible corrections."
                    },
                    {
                        "id": 10,
                        "string": "The rest of this article is organized as follows."
                    },
                    {
                        "id": 11,
                        "string": "Section 2 provides an overview of the SIGHAN 2015 Bake-off for Chinese Spelling Check."
                    },
                    {
                        "id": 12,
                        "string": "Section 3 introduces the developed data sets."
                    },
                    {
                        "id": 13,
                        "string": "Section 4 proposes the evaluation metrics."
                    },
                    {
                        "id": 14,
                        "string": "Section 5 compares results from the various contestants."
                    },
                    {
                        "id": 15,
                        "string": "Finally, we conclude this paper with findings and offer future research directions in Section 6."
                    },
                    {
                        "id": 16,
                        "string": "Task Description The goal of this task is to evaluate the capability of a Chinese spelling checker."
                    },
                    {
                        "id": 17,
                        "string": "A passage consisting of several sentences with/without spelling errors is given as the input."
                    },
                    {
                        "id": 18,
                        "string": "The checker should return the locations of incorrect characters and suggest the correct characters."
                    },
                    {
                        "id": 19,
                        "string": "Each character or punctuation mark occupies 1 spot for counting location."
                    },
                    {
                        "id": 20,
                        "string": "The input instance is given a unique passage number pid."
                    },
                    {
                        "id": 21,
                        "string": "If the sentence contains no spelling errors, the checker should return \"pid, 0\"."
                    },
                    {
                        "id": 22,
                        "string": "If an input passage contains at least one spelling error, the output format is \"pid [, location, correction]+\", where the symbol \"+\" indicates there is one or more instance of the predicted element \"[, location, correction]\"."
                    },
                    {
                        "id": 23,
                        "string": "\"Location\" and \"correction\", respectively, denote the location of incorrect character and its correct version."
                    },
                    {
                        "id": 24,
                        "string": "Examples are given as follows."
                    },
                    {
                        "id": 25,
                        "string": "There are 2 wrong characters in Ex."
                    },
                    {
                        "id": 26,
                        "string": "1, and correct characters \"希,\" and \"望\" should be used in locations 4, and 5, respectively."
                    },
                    {
                        "id": 27,
                        "string": "In Ex."
                    },
                    {
                        "id": 28,
                        "string": "2, the 17 th character \"偏\" is wrong, and should be \"遍\"."
                    },
                    {
                        "id": 29,
                        "string": "Location \"0\" denotes that there is no spelling error in Ex."
                    },
                    {
                        "id": 30,
                        "string": "3 Data Preparation The learner corpus used in our task was collected from the essay section of the computer-based Test of Chinese as a Foreign Language (TOCFL), administered in Taiwan."
                    },
                    {
                        "id": 31,
                        "string": "The spelling errors were manually annotated by trained native Chinese speakers, who also provided corrections corresponding to each error."
                    },
                    {
                        "id": 32,
                        "string": "The essays were then split into three sets as follows (1) Training Set: this set included 970 selected essays with a total of 3,143 spelling errors."
                    },
                    {
                        "id": 33,
                        "string": "Each essay is represented in SGML format shown in Fig."
                    },
                    {
                        "id": 34,
                        "string": "1 ."
                    },
                    {
                        "id": 35,
                        "string": "The title attribute is used to describe the essay topic."
                    },
                    {
                        "id": 36,
                        "string": "Each passage is composed of several sentences, and each passage contains at least one spelling error, and the data indicates both the error's location and corresponding correction."
                    },
                    {
                        "id": 37,
                        "string": "All essays in this set are used to train the developed spelling checker."
                    },
                    {
                        "id": 38,
                        "string": "(2) Dryrun Set: a total of 39 passages were given to participants to familiarize themselves with the final testing process."
                    },
                    {
                        "id": 39,
                        "string": "Each participant can submit several runs generated using different models with different parameter settings of their checkers."
                    },
                    {
                        "id": 40,
                        "string": "In addition to make sure that the submitted results can be correctly evaluated, participants can fine-tune their developed models in the dryrun phase."
                    },
                    {
                        "id": 41,
                        "string": "The purpose of dryrun is to validate the submitted output format only, and no dryrun outcomes were considered in the official evaluation (3) Test Set: this set consists of 1,100 testing passages."
                    },
                    {
                        "id": 42,
                        "string": "Half of these passages contained no spelling errors, while the other half included at least one spelling error."
                    },
                    {
                        "id": 43,
                        "string": "The evaluation was con- ducted as an open test."
                    },
                    {
                        "id": 44,
                        "string": "In addition to the data sets provided, registered participant teams were allowed to employ any linguistic and computational resources to detect and correct spelling errors."
                    },
                    {
                        "id": 45,
                        "string": "Besides, passages written by CFL learners may yield grammatical errors, missing or redundant words, poor word selection, or word ordering problems."
                    },
                    {
                        "id": 46,
                        "string": "The task in question focuses exclusively on spelling error correction."
                    },
                    {
                        "id": 47,
                        "string": "<ESSAY title=\"學中文的第一天\"> <TEXT> <PASSAGE id=\"A2-0521-1\"> 這位小姐說:你應 該一直走到十只路口,再右磚一直走經過一家銀 行就到了。</PASSAGE> <PASSAGE id=\"A2-0521-2\">應為今天是第一天, 老師先請學生自己給介紹。</PASSAGE> </TEXT> <MISTAKE id=\"A2-0521-1\" location=\"15\"> <WRONG>十只路口</WRONG> <CORRECTION>十字路口</CORRECTION> </MISTAKE> <MISTAKE id=\"A2-0521-1\" location=\"21\"> <WRONG>右磚</WRONG> <CORRECTION>右轉</CORRECTION> </MISTAKE> <MISTAKE id=\"A2-0521-2\" location=\"1\"> <WRONG>應為</WRONG> <CORRECTION>因為</CORRECTION> </MISTAKE> </ESSAY> Figure 1 ."
                    },
                    {
                        "id": 48,
                        "string": "An essay represented in SGML format Table 1 shows the confusion matrix used for performance evaluation."
                    },
                    {
                        "id": 49,
                        "string": "In the matrix, TP (True Positive) is the number of passages with spelling errors that are correctly identified by the spelling checker; FP (False Positive) is the number of passages in which non-existent errors are identified; TN (True Negative) is the number of passages without spelling errors which are correctly identified as such; FN (False Negative) is the number of passages with spelling errors for which no errors are detected."
                    },
                    {
                        "id": 50,
                        "string": "Performance Metrics The criteria for judging correctness are determined at two levels as follows."
                    },
                    {
                        "id": 51,
                        "string": "(1) Detection level: all locations of incorrect characters in a given passage should be completely identical with the gold standard."
                    },
                    {
                        "id": 52,
                        "string": "(2) Correction level: all locations and corresponding corrections of incorrect characters should be completely identical with the gold standard."
                    },
                    {
                        "id": 53,
                        "string": "In addition to achieve satisfactory detection/correction performance, reducing the false positive rate, that is the mistaken identification of errors where none exist, is also important (Wu et al., 2010) ."
                    },
                    {
                        "id": 54,
                        "string": "The following metrics are measured at both levels with the help of the confusion matrix."
                    },
                    {
                        "id": 55,
                        "string": "For example, if 5 testing inputs with gold standards are \"A2-0092-2, 0\", \"A2-0243-1, 3, 健, 4, 康\", \"B2-1923-2, 8, 誤, 41, 情\", \"B2-2731-1, 0\", and \"B2-3754-3, 10, 觀\", and the system outputs the result as \"A2-0092-2, 5, 玩\", \"A2-0243-1, 3, 件, 4, 康\", \"B2-1923-2, 8, 誤, 41, 情\", \"B2-2731-1, 0\", and \"B2-3754-3, 11, 觀\", the evaluation tool will yield the following performance: Table 3 shows the task testing results."
                    },
                    {
                        "id": 56,
                        "string": "The research team NCTU&NTUT achieved the lowest false positive rate at 0.0509."
                    },
                    {
                        "id": 57,
                        "string": "For the detectionlevel evaluations, according to the test data distribution, a baseline system can achieve an accuracy level of 0.5 by always reporting all testing cases as correct without errors."
                    },
                    {
                        "id": 58,
                        "string": "The system result submitted by CAS achieved promising performance exceeding 0.7."
                    },
                    {
                        "id": 59,
                        "string": "We used the F1 score to reflect the tradeoff between precision and recall."
                    },
                    {
                        "id": 60,
                        "string": "As shown in the testing results, CAS provided the best error detection results, achieving a high F1 score of 0.6404."
                    },
                    {
                        "id": 61,
                        "string": "For correction-level evaluations, the correction accuracy provided by the CAS system (0.6918) significantly outperformed the other teams."
                    },
                    {
                        "id": 62,
                        "string": "Besides, in terms of correction precision and recall, the spelling checker developed by CAS also outperforms the others, which in turn has the highest F1 score of 0.6254."
                    },
                    {
                        "id": 63,
                        "string": "Note that it is difficult to correct all spelling errors found in the input passages, since some sentences contain multiple errors and only correcting some of them are regarded as a wrong case in our evaluation."
                    },
                    {
                        "id": 64,
                        "string": "Table 4 summarizes the participants' developed approaches and the usages of linguistic resources."
                    },
                    {
                        "id": 65,
                        "string": "Among 6 teams that submitted the official testing results, NCYU did not submit the report of its developed method."
                    },
                    {
                        "id": 66,
                        "string": "None of the submitted systems provided superior performance in all metrics, though those submitted by CAS and NCTU&NTUT provided relatively best overall performance when different metric is considered."
                    },
                    {
                        "id": 67,
                        "string": "The CAS team proposes a unified framework for Chinese spelling correction."
                    },
                    {
                        "id": 68,
                        "string": "They used HMM-based approach to segment sentences and generate correction candidates."
                    },
                    {
                        "id": 69,
                        "string": "Then, a twostage filter process is applied to re-ranking the candidates for choosing the most promising candidates."
                    },
                    {
                        "id": 70,
                        "string": "The NCTU&NTUT team proposes a word vector/conditional random field based spelling error detector."
                    },
                    {
                        "id": 71,
                        "string": "They utilize the error detection results to guide and speed up the timeconsuming language model rescoring procedure."
                    },
                    {
                        "id": 72,
                        "string": "By this way, potential Chinese spelling errors could be detected and corrected in a modified sentence with the maximum language model score."
                    },
                    {
                        "id": 73,
                        "string": "Evaluation Results Participant (Ordered by abbreviations of names) #Runs Conclusions and Future Work This paper provides an overview of SIGHAN 2015 Bake-off for Chinese spelling check, including task design, data preparation, evaluation metrics, performance evaluation results and the approaches used by the participant teams."
                    },
                    {
                        "id": 74,
                        "string": "Regardless of actual performance, all submissions contribute to the knowledge in search for an effective Chinese spell checker, and the individual reports in the Bake-off proceedings provide useful insight into Chinese language processing."
                    },
                    {
                        "id": 75,
                        "string": "We hope the data sets collected for this Bakeoff can facilitate and expedite future development of effective Chinese spelling checkers."
                    },
                    {
                        "id": 76,
                        "string": "Therefore, all data sets with gold standards and evaluation tool are made publicly available at http://ir.itc.ntnu.edu.tw/lre/sighan8csc.html."
                    },
                    {
                        "id": 77,
                        "string": "The future direction focuses on the development of Chinese grammatical error correction."
                    },
                    {
                        "id": 78,
                        "string": "We plan to build new language resources to help improve existing techniques for computer-aided Chinese language learning."
                    },
                    {
                        "id": 79,
                        "string": "In addition, new data sets obtained from CFL learners will be investigated for the future enrichment of this research topic."
                    }
                ],
                "headers": [
                    {
                        "section": "Task Description",
                        "n": "2",
                        "start": 16,
                        "end": 29
                    },
                    {
                        "section": "Data Preparation",
                        "n": "3",
                        "start": 30,
                        "end": 49
                    },
                    {
                        "section": "Performance Metrics",
                        "n": "4",
                        "start": 50,
                        "end": 72
                    },
                    {
                        "section": "Conclusions and Future Work",
                        "n": "6",
                        "start": 73,
                        "end": 79
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1040-Table1-1.png",
                        "caption": "Table 1. Confusion matrix for evaluation.",
                        "page": 2,
                        "bbox": {
                            "x1": 69.6,
                            "x2": 292.32,
                            "y1": 247.67999999999998,
                            "y2": 340.32
                        }
                    },
                    {
                        "filename": "../figure/image/1040-Table4-1.png",
                        "caption": "Table 4. A summary of participants’ developed systems",
                        "page": 4,
                        "bbox": {
                            "x1": 70.56,
                            "x2": 524.16,
                            "y1": 69.6,
                            "y2": 440.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1040-Figure1-1.png",
                        "caption": "Figure 1. An essay represented in SGML format",
                        "page": 1,
                        "bbox": {
                            "x1": 305.76,
                            "x2": 523.1999999999999,
                            "y1": 196.79999999999998,
                            "y2": 445.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1040-Table2-1.png",
                        "caption": "Table 2. Submission statistics for all participants",
                        "page": 3,
                        "bbox": {
                            "x1": 69.6,
                            "x2": 525.12,
                            "y1": 272.64,
                            "y2": 444.0
                        }
                    },
                    {
                        "filename": "../figure/image/1040-Table3-1.png",
                        "caption": "Table 3. Testing results of our Chinese spelling check task.",
                        "page": 3,
                        "bbox": {
                            "x1": 69.6,
                            "x2": 525.12,
                            "y1": 496.79999999999995,
                            "y2": 684.9599999999999
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-23"
        },
        {
            "slides": {
                "0": {
                    "title": "Task definition",
                    "text": [
                        "Given a name, what is its language?",
                        "Same script (no diacritics)"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "1": {
                    "title": "Motivation",
                    "text": [
                        "Improving letter-to-phoneme performance (Font",
                        "Improving machine transliteration performance",
                        "Adjusting for different semantic transliteration rules"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Previous approaches",
                    "text": [
                        "Character language models (Cavnar and Trenkle, 1994)",
                        "Construct models for each language, then choose the language with the most similar model to the test data",
                        "accuracy given >300 characters & 14 languages",
                        "Given 50 bytes (and 17 languages), language models give",
                        "Between 13 languages, average F1 on last names is full names gives (Konstantopoulos, 2007)",
                        "Easier with more dissimilar languages: English vs.",
                        "Chinese vs. Japanese (same script) gives (Li et al.,"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Using SVMs",
                    "text": [
                        "Substrings (n-grams) of length n for n=1 to 5",
                        "Include special characters at the beginning and the end to account for prefixes and suffixes",
                        "Other kernels (polynomial, string kernels) did not work well"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "8": {
                    "title": "Future work",
                    "text": [
                        "Other ways of incorporating language information for machine transliteration",
                        "Direct use as a feature"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                }
            },
            "paper_title": "Language identification of names with SVMs",
            "paper_id": "1051",
            "paper": {
                "title": "Language identification of names with SVMs",
                "abstract": "The task of identifying the language of text or utterances has a number of applications in natural language processing. Language identification has traditionally been approached with character-level language models. However, the language model approach crucially depends on the length of the text in question. In this paper, we consider the problem of language identification of names. We show that an approach based on SVMs with n-gram counts as features performs much better than language models. We also experiment with applying the method to pre-process transliteration data for the training of separate models.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The task of identifying the language of text or utterances has a number of applications in natural language processing."
                    },
                    {
                        "id": 1,
                        "string": "Font Llitjós and Black (2001) show that language identification can improve the accuracy of letter-to-phoneme conversion."
                    },
                    {
                        "id": 2,
                        "string": "Li et al."
                    },
                    {
                        "id": 3,
                        "string": "(2007) use language identification in a transliteration system to account for different semantic transliteration rules between languages when the target language is Chinese."
                    },
                    {
                        "id": 4,
                        "string": "Huang (2005) improves the accuracy of machine transliteration by clustering his training data according to the source language."
                    },
                    {
                        "id": 5,
                        "string": "Language identification has traditionally been approached using character-level n-gram language models."
                    },
                    {
                        "id": 6,
                        "string": "In this paper, we propose the use of support vector machines (SVMs) for the language identification of very short texts such as proper nouns."
                    },
                    {
                        "id": 7,
                        "string": "We show that SVMs outperform language models on two different data sets consisting of personal names."
                    },
                    {
                        "id": 8,
                        "string": "Furthermore, we test the hypothesis that language identification can improve transliteration by pre-processing the source data and training separate models using a state-of-the-art transliteration system."
                    },
                    {
                        "id": 9,
                        "string": "Previous work N -gram approaches have proven very popular for language identification in general."
                    },
                    {
                        "id": 10,
                        "string": "Cavnar and Trenkle (1994) apply n-gram language models to general text categorization."
                    },
                    {
                        "id": 11,
                        "string": "They construct character-level language models using n-grams up to a certain maximum length from each class in their training corpora."
                    },
                    {
                        "id": 12,
                        "string": "To classify new text, they generate an n-gram frequency profile from the text and then assign it to the class having the most similar language model, which is determined by summing the differences in n-gram ranks."
                    },
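An editorial sketch (not part of the paper) of the rank-profile comparison just described, i.e., Cavnar and Trenkle's "out-of-place" measure: build a ranked n-gram profile per class, then sum rank differences against a document's profile. The toy strings, default profile sizes, and function names here are illustrative assumptions only.

```python
from collections import Counter

def ngram_profile(text, max_n=5, top_k=400):
    """Rank profile of the top_k most frequent character n-grams (n = 1..max_n)."""
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return {g: rank for rank, (g, _) in enumerate(counts.most_common(top_k))}

def out_of_place(doc_profile, lang_profile):
    """Sum of rank differences; n-grams missing from the language profile
    receive the maximum penalty."""
    max_penalty = len(lang_profile)
    return sum(abs(rank - lang_profile.get(g, max_penalty))
               for g, rank in doc_profile.items())

# Classify by picking the language whose profile is closest (toy data).
langs = {"en": ngram_profile("the quick brown fox jumps over the lazy dog"),
         "de": ngram_profile("der schnelle braune fuchs springt ueber den hund")}
doc = ngram_profile("the lazy hound")
print(min(langs, key=lambda lang: out_of_place(doc, langs[lang])))
```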
                    {
                        "id": 13,
                        "string": "Given 14 languages, text of 300 characters or more, and retaining the 400 most common n-grams up to length 5, they achieve an overall accuracy of 99.8%."
                    },
                    {
                        "id": 14,
                        "string": "However, the accuracy of the n-gram approach strongly depends on the length of the texts."
                    },
                    {
                        "id": 15,
                        "string": "Kruengkrai et al."
                    },
                    {
                        "id": 16,
                        "string": "(2005) report that, on a language identification task of 17 languages with average text length 50 bytes, the accuracy drops to 90.2%."
                    },
                    {
                        "id": 17,
                        "string": "When SVMs were used for the same task, they achieved 99.7% accuracy."
                    },
                    {
                        "id": 18,
                        "string": "Konstantopoulos (2007) looks particularly at the task of identifying the language of proper nouns."
                    },
                    {
                        "id": 19,
                        "string": "He focuses on a data set of soccer player names coming from 13 possible national languages."
                    },
                    {
                        "id": 20,
                        "string": "He finds that using general n-gram language models yields an average F 1 score of only 27%, but training the models specifically to these smaller data gives significantly better results: 50% average F 1 score for last names only, and 60% for full names."
                    },
                    {
                        "id": 21,
                        "string": "On the other hand, Li et al."
                    },
                    {
                        "id": 22,
                        "string": "(2007) report some good results for single-name language identification using n-gram language models."
                    },
                    {
                        "id": 23,
                        "string": "For the task of separating single Chinese, English, and Japanese names, they achieve an overall accuracy of 94.8%."
                    },
                    {
                        "id": 24,
                        "string": "One reason that they do better is because of the smaller number of classes."
                    },
                    {
                        "id": 25,
                        "string": "We can further see that the languages in question are very dissimilar, making the problem easier; for example, the character \"x\" appears only in the list of Chinese names, and the bigram \"kl\" appears only in the list of English names."
                    },
                    {
                        "id": 26,
                        "string": "Language identification with SVMs Rather than using language models to determine the language of a name, we propose to count character n-gram occurrences in the given name, for n up to some maximum length, and use these counts as the features in an SVM."
                    },
                    {
                        "id": 27,
                        "string": "We choose SVMs because they can take a large number of features and learn to weigh them appropriately."
                    },
                    {
                        "id": 28,
                        "string": "When counting n-grams, we include space characters at the beginning and end of each word, so that prefixes and suffixes are counted appropriately."
                    },
                    {
                        "id": 29,
                        "string": "In addition to n-gram counts, we also include word length as a feature."
                    },
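A minimal sketch of this feature set (an editorial illustration, not the authors' code): scikit-learn's char_wb analyzer pads words with spaces, mirroring the boundary handling described above, and LinearSVC wraps LIBLINEAR's linear-kernel SVM. The toy names, labels, and the way the word-length column is appended are assumptions for illustration.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

names = ["schmidt", "dupont", "rossi", "kowalski"]   # toy examples
labels = ["de", "fr", "it", "pl"]

# Character n-grams up to length 4, padded with spaces at word boundaries.
vec = CountVectorizer(analyzer="char_wb", ngram_range=(1, 4), lowercase=True)
X_ngrams = vec.fit_transform(names)

# Append word length as one extra feature column.
lengths = csr_matrix(np.array([[len(n)] for n in names], dtype=float))
X = hstack([X_ngrams, lengths])

clf = LinearSVC().fit(X, labels)   # linear kernel, via LIBLINEAR
test = hstack([vec.transform(["nowak"]), csr_matrix([[5.0]])])
print(clf.predict(test))
```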
                    {
                        "id": 30,
                        "string": "In our initial experiments, we tested several different kernels."
                    },
                    {
                        "id": 31,
                        "string": "The kernels that performed the best were the linear, sigmoid, and radial basis function (RBF) kernels."
                    },
                    {
                        "id": 32,
                        "string": "We tested various maximum n-gram lengths; Figure 1 shows the accuracy of the linear kernel as a function of maximum n-gram length."
                    },
                    {
                        "id": 33,
                        "string": "Polynomial kernels, a substring match-count string kernel, and a string kernel based on the edit distance all performed poorly in comparison."
                    },
                    {
                        "id": 34,
                        "string": "We also experimented with other modifications such as normalizing the feature vectors, and decreasing the weights of frequent n-gram counts to avoid larger counts dominating smaller counts."
                    },
                    {
                        "id": 35,
                        "string": "Since the effects were negligible, we exclude these results from this paper."
                    },
                    {
                        "id": 36,
                        "string": "In our experiments, we used the LIBLINEAR (Fan et al., 2008) package for the linear kernel and the LIBSVM (Chang and Lin, 2001 ) package for the RBF and sigmoid kernels."
                    },
                    {
                        "id": 37,
                        "string": "We discarded any periods and parentheses, but kept apostrophes and hyphens, and we converted all letters to lower case."
                    },
                    {
                        "id": 38,
                        "string": "We removed very short names of length less than two."
                    },
                    {
                        "id": 39,
                        "string": "For all data sets, we held out 10% of the data as the test set."
                    },
                    {
                        "id": 40,
                        "string": "We then found optimal parameters for each kernel type using 10-fold cross-validation on the remaining training set."
                    },
                    {
                        "id": 41,
                        "string": "This yielded optimum maximum n-gram lengths of four for single names and five for full names."
                    },
                    {
                        "id": 42,
                        "string": "Using the optimal parameters, we constructed models from the entire training data and then tested the models on the held-out test set."
                    },
                    {
                        "id": 43,
                        "string": "Intrinsic evaluation We used two corpora to test our SVM-based approach: the Transfermarkt corpus of soccer player names, and the Chinese-English-Japanese (CEJ) corpus of first names and surnames."
                    },
                    {
                        "id": 44,
                        "string": "These corpora are described in further detail below."
                    },
                    {
                        "id": 45,
                        "string": "Transfermarkt corpus The Transfermarkt corpus (Konstantopoulos, 2007) consists of European soccer player names annotated with one of 13 possible national languages, with separate lists provided for last names and full names."
                    },
                    {
                        "id": 46,
                        "string": "Diacritics were removed in order to avoid trivializing the task."
                    },
                    {
                        "id": 47,
                        "string": "There are 14914 full names, with average length 14.8, and 12051 last names, with average length 7.8."
                    },
                    {
                        "id": 48,
                        "string": "It should be noted that these data are noisy; the fact that a player plays for a certain nation's team does not necessarily indicate that his or her name is of that nation's language."
                    },
                    {
                        "id": 49,
                        "string": "For example, Dario Dakovic was born in Bosnia but plays for the Austrian national team; his name is therefore annotated as German."
                    },
                    {
                        "id": 50,
                        "string": "ing SVMs clearly outperforms using language models on the Transfermarkt corpus; in fact, SVMs yield better accuracy on last names than language models on full names."
                    },
                    {
                        "id": 51,
                        "string": "Differences between kernels are not statistically significant."
                    },
                    {
                        "id": 52,
                        "string": "CEJ corpus The CEJ corpus (Li et al., 2007) provides a combined list of first names and surnames, each classified as Chinese, English, or Japanese."
                    },
                    {
                        "id": 53,
                        "string": "There are a total of 97115 names with an average length of 7.6 characters."
                    },
                    {
                        "id": 54,
                        "string": "This corpus was used for the semantic transliteration of personal names into Chinese."
                    },
                    {
                        "id": 55,
                        "string": "We found that the RBF and sigmoid kernels were very slow-presumably due to the large size of the corpus-so we tested only the linear kernel."
                    },
                    {
                        "id": 56,
                        "string": "Table 2 shows our results in comparison to those of language models reported in (Li et al., 2007) ; we reduce the error rate by over 50%."
                    },
                    {
                        "id": 57,
                        "string": "Application to machine transliteration Machine transliteration is one of the primary potential applications of language identification because the language of a word often determines its pronunciation."
                    },
                    {
                        "id": 58,
                        "string": "We therefore tested language identification to see if results could indeed be improved by using language identification as a pre-processing step."
                    },
                    {
                        "id": 59,
                        "string": "Data The English-Hindi corpus of names (Li et al., 2009; MSRI, 2009 ) contains a test set of 1000 names represented in both the Latin and Devanagari scripts."
                    },
                    {
                        "id": 60,
                        "string": "We manually classified these names as being of either Indian or non-Indian origin, occasionally resorting to web searches to help disambiguate them."
                    },
                    {
                        "id": 61,
                        "string": "We discarded those names that fell into both categories"
                    },
                    {
                        "id": 62,
                        "string": "(e.g. \"Maya\")"
                    },
                    {
                        "id": 63,
                        "string": "as well as those that we could not confidently classify."
                    },
                    {
                        "id": 64,
                        "string": "Our tagged data are available online at http://www.cs.ualberta.ca/~ab31/langid/."
                    },
                    {
                        "id": 65,
                        "string": "In total, we discarded 95 of these names, and randomly selected 95 names from the training set that we could confidently classify to complete our corpus of 1000 names."
                    },
                    {
                        "id": 66,
                        "string": "Of the 1000 names, 546 were classified as being of Indian origin and the remaining 454 were classified as being of non-Indian origin; the names have an average length of 7.0 characters."
                    },
                    {
                        "id": 67,
                        "string": "We trained our language identification approach on 900 names, with the remaining 100 names serving as the test set."
                    },
                    {
                        "id": 68,
                        "string": "The resulting accuracy was 80% with the linear kernel, 84% with the RBF kernel, and 83% with the sigmoid kernel."
                    },
                    {
                        "id": 69,
                        "string": "In this case, the performance of the RBF kernel was found to be significantly better than that of the linear kernel according to the McNemar test with p < 0.05."
                    },
                    {
                        "id": 70,
                        "string": "Experimental setup We tested a simple method of combining language identification with transliteration."
                    },
                    {
                        "id": 71,
                        "string": "We use a language identification model to split the training, development, and test sets into disjoint classes."
                    },
                    {
                        "id": 72,
                        "string": "We train a transliteration model on each separate class, and then combine the results."
                    },
                    {
                        "id": 73,
                        "string": "Our transliteration system was DIRECTL (Jiampojamarn et al., 2009) ."
                    },
                    {
                        "id": 74,
                        "string": "We trained the language identification model over the entire set of 1000 tagged names using the parameters from above."
                    },
                    {
                        "id": 75,
                        "string": "Because these names comprised most of the test set and were now being used as the training set for the language identification model, we swapped various names between sets such that none of the words used for training the language identification model were in the final transliteration test set."
                    },
                    {
                        "id": 76,
                        "string": "Using this language identification model, we split the data."
                    },
                    {
                        "id": 77,
                        "string": "After splitting, the \"Indian\" training, development, and testing sets had 5032, 575, and 483 words respectively while the \"non-Indian\" sets had 11081, 993, and 517 words respectively."
                    },
                    {
                        "id": 78,
                        "string": "Results Splitting the data and training two separate models yielded a combined top-1 accuracy of 46.0%, as compared to 47.0% achieved by a single transliteration model trained over the full data; this difference is not statistically significant."
                    },
                    {
                        "id": 79,
                        "string": "Somewhat counterintuitively, using language identification as a preprocessing step for machine transliteration yields no improvement in performance for our particular data and transliteration system."
                    },
                    {
                        "id": 80,
                        "string": "While it could be argued that our language identification accuracy of 84% is too low to be useful here, we believe that the principal reason for this performance decrease is the reduction in the amount of data available for the training of the separate models."
                    },
                    {
                        "id": 81,
                        "string": "We performed an experiment to confirm this hypothesis: we randomly split the full data into two sets, matching the sizes of the Indian and non-Indian sets."
                    },
                    {
                        "id": 82,
                        "string": "We then trained two separate models and combined the results; this yielded a top-1 accuracy of 41.5%."
                    },
                    {
                        "id": 83,
                        "string": "The difference between this and the 46.0% result above is statistically significant with p < 0.01."
                    },
                    {
                        "id": 84,
                        "string": "From this we conclude that the reduction in data size was a significant factor in the previously described null result, and that language identification does provide useful information to the transliteration system."
                    },
                    {
                        "id": 85,
                        "string": "In addition, we believe that the transliteration system may implicitly leverage the language origin information."
                    },
                    {
                        "id": 86,
                        "string": "Whether a closer coupling of the two modules could produce an increase in accuracy remains an open question."
                    },
                    {
                        "id": 87,
                        "string": "Conclusion We have proposed a novel approach to the task of language identification of names."
                    },
                    {
                        "id": 88,
                        "string": "We have shown that applying SVMs with n-gram counts as features outperforms the predominant approach based on language models."
                    },
                    {
                        "id": 89,
                        "string": "We also tested language identification in one of its potential applications, machine transliteration, and found that a simple method of splitting the data by language yields no significant change in accuracy, although there is an improvement in comparison to a random split."
                    },
                    {
                        "id": 90,
                        "string": "In the future, we plan to investigate other methods of incorporating language identification in machine transliteration."
                    },
                    {
                        "id": 91,
                        "string": "Options to explore include the use of language identification probabilities as features in the transliteration system (Li et al., 2007) , as well as splitting the data into sets that are not necessarily disjoint, allowing separate transliteration models to learn from potentially useful common information."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 8
                    },
                    {
                        "section": "Previous work",
                        "n": "2",
                        "start": 9,
                        "end": 25
                    },
                    {
                        "section": "Language identification with SVMs",
                        "n": "3",
                        "start": 26,
                        "end": 41
                    },
                    {
                        "section": "Intrinsic evaluation",
                        "n": "4",
                        "start": 42,
                        "end": 44
                    },
                    {
                        "section": "Transfermarkt corpus",
                        "n": "4.1",
                        "start": 45,
                        "end": 51
                    },
                    {
                        "section": "CEJ corpus",
                        "n": "4.2",
                        "start": 52,
                        "end": 55
                    },
                    {
                        "section": "Application to machine transliteration",
                        "n": "5",
                        "start": 56,
                        "end": 58
                    },
                    {
                        "section": "Data",
                        "n": "5.1",
                        "start": 59,
                        "end": 69
                    },
                    {
                        "section": "Experimental setup",
                        "n": "5.2",
                        "start": 70,
                        "end": 77
                    },
                    {
                        "section": "Results",
                        "n": "5.3",
                        "start": 78,
                        "end": 86
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 87,
                        "end": 91
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1051-Table2-1.png",
                        "caption": "Table 2: Language identification accuracy on the CEJ corpus. Language models have n = 4.",
                        "page": 2,
                        "bbox": {
                            "x1": 327.84,
                            "x2": 525.12,
                            "y1": 60.0,
                            "y2": 103.2
                        }
                    },
                    {
                        "filename": "../figure/image/1051-Table1-1.png",
                        "caption": "Table 1: Language identification accuracy on the Transfermarkt corpus. Language models have n = 5.",
                        "page": 2,
                        "bbox": {
                            "x1": 79.67999999999999,
                            "x2": 291.36,
                            "y1": 60.0,
                            "y2": 130.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1051-Figure1-1.png",
                        "caption": "Figure 1: Cross-validation accuracy of the linear kernel on the Transfermarkt full names corpus.",
                        "page": 1,
                        "bbox": {
                            "x1": 317.76,
                            "x2": 536.64,
                            "y1": 63.36,
                            "y2": 140.64
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-24"
        },
        {
            "slides": {
                "1": {
                    "title": "Background",
                    "text": [
                        "There are two types of citations to retracted articles: Citations that a retracted article received prior to its retraction and citations that are received post retraction and despite retraction notices.",
                        "In this study we sought out to find the context around post-retraction citations with the main purpose of finding out whether they are negatively, positively or neutrally mentioned."
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Data Collection",
                    "text": [
                        "ScienceDirect, Elseviers full text database was accessed in October 2014. The database was queried for the term RETRACTED in the article title and its retraction notice.",
                        "For this study we selected the five top articles that were found to be highly cited since 2015. This ensured that the papers all cite retracted articles",
                        "A total of 1,203 results retrieved from which 988 were retracted articles. The results excluded were retraction notices, duplicates and papers whose original titles included the word \"retracted\"."
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Case study",
                    "text": [
                        "For each article we extracted the citing documents and analyzed the ones appearing in 2015 and 2016. Overall, we analyzed 120 citing documents.",
                        "Each mention was categorized as follows:",
                        "Positive: A positive citation indicates that the retracted article was cited as legitimate prior work and its findings used to corroborate the author/s current study.",
                        "Negative: A negative citations indicates that the authors mentioned the retracted article as such and its findings as inappropriate.",
                        "Neutral: A neutral citation indicates that the retracted article was mentioned as a publication that appears in the literature and does not include judgement on its validity."
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Citations in Context",
                    "text": [
                        "This article was cited 109 times since its publication in 2012 with",
                        "28 citations tracked after 2014",
                        "More citations are seen to be negative, the positive and neutral ones are also present",
                        "The negative citations mostly point to the media frenzy around the results.",
                        "The study was republished in 2014 by Environmental Sciences Europe. The republished article received 17 citations in 2015 and 2016. The vast majority of them being positive mentions",
                        "NA Negative Neutral Positive"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "5": {
                    "title": "Conclusions",
                    "text": [
                        "Retracted articles continue to be cited years after retraction and despite retraction notices being posted on publishers platforms.",
                        "This could be the result of general interest by the public or media.",
                        "In other cases, the reason for retraction does not deter others from citing the article.",
                        "We recommend that publishers use reference checks to all submitted articles to detect citations of retracted articles and remove them or at least request an explanation from the authors for citing a retracted paper in a positive or neutral manner"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                }
            },
            "paper_title": "Post Retraction Citations in Context",
            "paper_id": "1052",
            "paper": {
                "title": "Post Retraction Citations in Context",
                "abstract": "In this paper we explore post retraction citations to retracted papers. The reasons for retractions in our sample were data manipulation, small sample size, scientific misconduct, and duplicate publication by the authors. We found, that the huge majority of the citations are positive, and the citing papers usually fail to mention that the cited article was retracted. Retracted articles are freely available at the publishers' site, which is probably a catalyst for receiving more citations.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Studies on retracted articles show that the amount of retracted articles has increased in relative measure to the overall increase in scientific publications [1, 2] ."
                    },
                    {
                        "id": 1,
                        "string": "Although retracting articles helps purge the scientific literature of erroneous or unethical research, citations to such research present a real challenge."
                    },
                    {
                        "id": 2,
                        "string": "Citing articles that were retracted especially due to plagiarism, data falsification or any other unethical practices interferes with the process of eliminating such studies form the literature and research."
                    },
                    {
                        "id": 3,
                        "string": "There are two types of retraction citations; citations that a retracted article received prior to its retraction and citations that are received post retraction and despite retraction notices [3, 4] ."
                    },
                    {
                        "id": 4,
                        "string": "Both types of citations put the scientific process in jeopardy, especially when they are cited as legitimate references to previous work."
                    },
                    {
                        "id": 5,
                        "string": "Some studies on retracted articles have shown that retracted articles that received a high number of citations preretraction are more likely to occur additional citations post-retraction [4, 5] ."
                    },
                    {
                        "id": 6,
                        "string": "A good example is described in a study by [6] who studied the case of Scott S. Reuben who was convicted of fabricating data in 25 of his studies which resulted in mass retractions of his articles."
                    },
                    {
                        "id": 7,
                        "string": "The authors of the study have shown that the popularity of Reuben's articles did not diminish post-retraction even 5 years after the retractions have been made."
                    },
                    {
                        "id": 8,
                        "string": "Another phenomenon that was identified in the literature is of authors' selfciting their retracted articles and thus contributing to the perception that their retracted work is valid [7] ."
                    },
                    {
                        "id": 9,
                        "string": "In this study we sought out to find the context around post-retraction citations with the main purpose of finding out whether they are negatively, positively or neutrally mentioned."
                    },
                    {
                        "id": 10,
                        "string": "In this case study we present a sample of five retracted articles that have post-retraction citations tracked in 2015 and 2016."
                    },
                    {
                        "id": 11,
                        "string": "2 Data collection ScienceDirect, Elsevier's full text database was accessed in October 2014."
                    },
                    {
                        "id": 12,
                        "string": "The database was queried for the term \"RETRACTED\" in the article title and its retraction notice."
                    },
                    {
                        "id": 13,
                        "string": "In ScienceDirect, each retracted article is preceded with the word \"RETRACTED\"."
                    },
                    {
                        "id": 14,
                        "string": "In addition, each Elsevier journal incorporates a retraction notice which explains who retracted article and the reason for retraction."
                    },
                    {
                        "id": 15,
                        "string": "This allowed us to manually code each article in our dataset with an additional field \"retracted by\" that represented the person/s requesting the retraction."
                    },
                    {
                        "id": 16,
                        "string": "A total of 1,203 results retrieved from which 988 were retracted articles."
                    },
                    {
                        "id": 17,
                        "string": "The results excluded were retraction notices, duplicates and papers whose original titles included the word \"retracted\"."
                    },
                    {
                        "id": 18,
                        "string": "For this study we selected the five top articles that were cited most (more than 20 times) since 2015."
                    },
                    {
                        "id": 19,
                        "string": "This way we made sure that the papers all cite retracted articles (since they were all retracted before October 2014)."
                    },
                    {
                        "id": 20,
                        "string": "The reason for this decision is that the retraction date of many of the retracted articles is unknown."
                    },
                    {
                        "id": 21,
                        "string": "For each article we extracted the citing documents and analyzed the ones appearing in 2015 and 2016."
                    },
                    {
                        "id": 22,
                        "string": "Overall, we analyzed located 125 citing documents and analyzed 109 of them; 16 documents were unavailable to us mostly because they appear in books to which we did not have access."
                    },
                    {
                        "id": 23,
                        "string": "Each citing document was inspected to identify the precise mention of the retracted article within the text."
                    },
                    {
                        "id": 24,
                        "string": "Each mention was categorized as follows: Positive: A positive citation indicates that the retracted article was cited as legitimate prior work and its findings used to corroborate the author/s current study."
                    },
                    {
                        "id": 25,
                        "string": "Negative: A negative citations indicates that the authors mentioned the retracted article as such and its findings as inappropriate."
                    },
                    {
                        "id": 26,
                        "string": "Neutral: A neutral citation indicates that the retracted article was mentioned as a publication that appears in the literature and does not include judgement on its validity."
                    },
                    {
                        "id": 27,
                        "string": "This article was published in 2010 in Cell and retracted in 2014 due to irregularities in graphs and data misrepresentation in the images."
                    },
                    {
                        "id": 28,
                        "string": "Although the graphs and images did not have any bearing on the validity of the results, according to the retraction notice, the editors stated that \"…the level of care in figure preparation in Donmez et al."
                    },
                    {
                        "id": 29,
                        "string": "falls well below the standard that we expect, and we are therefore retracting the paper\"."
                    },
                    {
                        "id": 30,
                        "string": "We conducted an individual content analysis of the most recent 36 citations which were tracked in 2015 and 2016."
                    },
                    {
                        "id": 31,
                        "string": "We were able to analyze 32 citing articles in context."
                    },
                    {
                        "id": 32,
                        "string": "Our results show that the citations are mostly positive (see Fig."
                    },
                    {
                        "id": 33,
                        "string": "1 )."
                    },
                    {
                        "id": 34,
                        "string": "One negative mention was found in a letter to the editor of Journal of Korean Medical Science written \"giving the above article as an example of how altered graphics are causing bias in the biomedical field and result in numerous articles being retracted\" [8] ."
                    },
                    {
                        "id": 35,
                        "string": "In this case, the editor indicated that the actual results of the study were valid, and this could be the reason for the continuous citations of the article."
                    },
                    {
                        "id": 36,
                        "string": "In one other case, although the article was cited positively in the paper, in the reference list it was noted that the article was retracted."
                    },
                    {
                        "id": 37,
                        "string": "This article, published in 2012 was the subject of a debate surrounding the validity of the findings, use of animals and even accusations of fraud."
                    },
                    {
                        "id": 38,
                        "string": "Its publication and retraction process have resulted in the \"Séralini affair\" which became a big media news item."
                    },
                    {
                        "id": 39,
                        "string": "The article described a 2-year study of rats which were fed genetically modified (GM) crops and showed increased tumors."
                    },
                    {
                        "id": 40,
                        "string": "The study, which was also scrutinized by government agencies, received major media attention that resulted in the creation of a social movement against GM food."
                    },
                    {
                        "id": 41,
                        "string": "The demand to label of all GM foods is still underway."
                    },
                    {
                        "id": 42,
                        "string": "Despite the accusation of fraud and fabrication of results, the editors found no such evidence to that effect."
                    },
                    {
                        "id": 43,
                        "string": "However, the article was retracted because of the \"low number of animals\" used in this study which lead to the conclusion that \"no definitive conclusions can be reached with this small sample size\"."
                    },
                    {
                        "id": 44,
                        "string": "This article was cited 109 times since its publication in 2012 with 23 citations tracked after its retraction (2015-2016) out of which 18 citing articles were accessible to us."
                    },
                    {
                        "id": 45,
                        "string": "As can be seen in Fig."
                    },
                    {
                        "id": 46,
                        "string": "2 post-retraction citations are divided."
                    },
                    {
                        "id": 47,
                        "string": "Although more citations are seen to be negative, the positive and neutral ones are also present."
                    },
                    {
                        "id": 48,
                        "string": "The negative citations mostly point to the media frenzy around the results."
                    },
                    {
                        "id": 49,
                        "string": "Positive mentions appear in similar studies which claim that concerns raised by the GM study are valid and the dangers of GM foods to humans should be studied further."
                    },
                    {
                        "id": 50,
                        "string": "The study was republished in 2014 by Environmental Sciences Europe [9] ."
                    },
                    {
                        "id": 51,
                        "string": "The republication of the study stirred another controversial discussion in the scientific community with several scientists writing letters expressing their concerns regarding the appearance of the same study in another journal [10] ."
                    },
                    {
                        "id": 52,
                        "string": "The republished article received 17 citations in 2015 and 2016."
                    },
                    {
                        "id": 53,
                        "string": "The vast majority of them being positive mentions (see Fig."
                    },
                    {
                        "id": 54,
                        "string": "3 )."
                    },
                    {
                        "id": 55,
                        "string": "In addition, some criticism towards the peer-review practices of the retracting editors were also detected [10] ."
                    },
                    {
                        "id": 56,
                        "string": "The one negative mention of the re-published article was criticism towards the media frenzy around the topic and the inability of the scientific community to refute invalid results."
                    },
                    {
                        "id": 57,
                        "string": "The authors state that \"Although scientists have investigated each GMO crisis and reached scientific and rational conclusions, they have less ability to disseminate information than the media, so the public is not promptly informed of their rational and objective viewpoints as experts\" [11, p.134 ]."
                    },
                    {
                        "id": 58,
                        "string": "The leading author of the paper, Dipak Das and his lab at the University of Connecticut Health Sciences Center were the subject of an ethical investigation by the university."
                    },
                    {
                        "id": 59,
                        "string": "The results of the university's investigation led to the retraction of all of Dr. Das' papers due to scientific misconduct and data manipulation."
                    },
                    {
                        "id": 60,
                        "string": "This particular paper was investigated by the journal's ethics committee along with an additional paper that appeared in This article was retracted in 2014 due to serious data manipulation and falsification."
                    },
                    {
                        "id": 61,
                        "string": "In the retraction notice of this article, the editors of the journal went to great lengths to examine and re-examine the statistical claims made by the authors using the services of three separate methodologists."
                    },
                    {
                        "id": 62,
                        "string": "Following the methodologists' findings of irregularities in the reported data and falsification of results, and the authors' lack of proper response to their findings, the article was retracted from the journal."
                    },
                    {
                        "id": 63,
                        "string": "However, the article continued to be cited despite the lengthy and detailed retraction notice."
                    },
                    {
                        "id": 64,
                        "string": "A close examination of the post retraction citations (2015-March 2016 -24 citations of which 23 were analyzed) shows that all citations were positive citations, meaning that the citing authors used findings from this article to support their findings."
                    },
                    {
                        "id": 65,
                        "string": "The subject of \"authentic leadership\" is popular in management studies and has seen a surge in publications since 2012."
                    },
                    {
                        "id": 66,
                        "string": "This could explain the overall positive citations of the article."
                    },
                    {
                        "id": 67,
                        "string": "This article, published in 1999 was retracted due to an identical version which was published 2 years earlier."
                    },
                    {
                        "id": 68,
                        "string": "In the retraction notice the editors state that \"The article duplicates significant parts of a paper that had already appeared in [J China Textil Univ, 1997, 14(3), [8] [9] [10] [11] [12] [13] \"."
                    },
                    {
                        "id": 69,
                        "string": "The authors in this case re-used data they already published on and re-published it in a different journal."
                    },
                    {
                        "id": 70,
                        "string": "However, this article has been cited even in recent years despite being retracted for many years."
                    },
                    {
                        "id": 71,
                        "string": "A content analysis of the 18 out of the 21 recently citing articles from 2015 and 2016 shows that this article is being referred to mostly in positive context or mentioned as a legitimate piece in the literature."
                    },
                    {
                        "id": 72,
                        "string": "Here too, there is one paper that cites the article positively in the text, but in the references it appears as retracted."
                    },
                    {
                        "id": 73,
                        "string": "4 Discussion and Conclusions As can be seen from the examples above, retracted articles continue to be cited years after retraction and despite retraction notices being posted on publishers' platforms."
                    },
                    {
                        "id": 74,
                        "string": "In some cases, the continuous citations rates could be the result of general interest by the public or media."
                    },
                    {
                        "id": 75,
                        "string": "For example, the Séralini article evoked an ongoing public debate regarding the safety of GM foods which resulted in a call to label all such food."
                    },
                    {
                        "id": 76,
                        "string": "This could explain the continuing interest in the study and its citations."
                    },
                    {
                        "id": 77,
                        "string": "The article was also republished and thus continues to be cited despite of the fact that the authors did not modify it."
                    },
                    {
                        "id": 78,
                        "string": "In the case of the Mukherjee article, again, public interest could explain its continuing citations."
                    },
                    {
                        "id": 79,
                        "string": "Resveratrol was hailed by the media as an important supplement that could ensure longevity and good health and is an off the counter supplement available in vitamin shops."
                    },
                    {
                        "id": 80,
                        "string": "Finally, the Walumbwa article which describes 'authentic leadership' and followers' dynamic is also a topic of media and business management interest."
                    },
                    {
                        "id": 81,
                        "string": "With numerous management books published on this topic it has been accepted as a management style encouraged by corporations."
                    },
                    {
                        "id": 82,
                        "string": "In other cases, the reason for retraction does not deter others from citing the article."
                    },
                    {
                        "id": 83,
                        "string": "For example, the Donmez article (case study 1 above) was retracted because of poor graphing and data representation."
                    },
                    {
                        "id": 84,
                        "string": "However, the editors do state in the retraction notice that these faults do not apply to the results of the study, even though on PubPeer [14] there was an extensive discussion on problems with the article."
                    },
                    {
                        "id": 85,
                        "string": "The editors' approval of the results could be the reason for the continuing citations to the article."
                    },
                    {
                        "id": 86,
                        "string": "The Li article, as another example, re-used data and thus violated the originality rule of scientific publishing."
                    },
                    {
                        "id": 87,
                        "string": "However, the data itself was not refuted by the editors and the article that was published first seems to be inaccessible."
                    },
                    {
                        "id": 88,
                        "string": "Regardless of the reasons speculated for the post-retractions citations, the fact that invalid and falsified research is continuing to appear as valid research is concerning."
                    },
                    {
                        "id": 89,
                        "string": "We recommend that publishers use reference checks to all submitted articles to detect citations of retracted articles and remove them or at least request an explanation from the authors for citing a retracted paper in a positive or neutral manner."
                    },
                    {
                        "id": 90,
                        "string": "This explanation should clearly appear in the paper."
                    },
                    {
                        "id": 91,
                        "string": "In addition, we would recommend the deletion of retracted articles from publishers' websites."
                    },
                    {
                        "id": 92,
                        "string": "Currently, at least for the major publishers: Elsevier, Springer Nature and Wiley, but possibly a general practice, retracted articles are not only available on the publishers' site, but they are freely available, without the need for a subscription or for a one-time payment."
                    },
                    {
                        "id": 93,
                        "string": "While leaving a retraction notice, the article itself should not appear on platforms such as ScienceDirect or others."
                    },
                    {
                        "id": 94,
                        "string": "Although versions of these articles may appear elsewhere, the journal websites should not carry these versions and make it difficult for authors to download, read and consequently cite retracted articles."
                    },
                    {
                        "id": 95,
                        "string": "Acknowledgement The first author was supported by EU COST Actions PEERE (TD1306) and KnowEscape (TD1210)."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 47
                    },
                    {
                        "section": "Acknowledgement",
                        "n": "5",
                        "start": 48,
                        "end": 95
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1052-Figure1-1.png",
                        "caption": "Fig. 1. Citations in context for the Donmez et al. article",
                        "page": 2,
                        "bbox": {
                            "x1": 135.35999999999999,
                            "x2": 438.24,
                            "y1": 230.88,
                            "y2": 323.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1052-Figure3-1.png",
                        "caption": "Fig. 3. Citations in context for the republished Séralini et al. article",
                        "page": 3,
                        "bbox": {
                            "x1": 135.35999999999999,
                            "x2": 410.4,
                            "y1": 430.56,
                            "y2": 520.3199999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1052-Figure2-1.png",
                        "caption": "Fig. 2. Citations in context for the Séralini et al. article",
                        "page": 3,
                        "bbox": {
                            "x1": 135.84,
                            "x2": 400.32,
                            "y1": 146.88,
                            "y2": 240.95999999999998
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-25"
        },
        {
            "slides": {
                "0": {
                    "title": "A NLG system Architecture",
                    "text": [
                        "Communicative Goal Document Plans Sentence Plans Surface Text",
                        "Ehud Reiter and Robert Dale, Building Natural Language",
                        "Generation Systems, Cambridge University Press, 2000.",
                        "In this paper, we study surface realization, i.e. mapping meaning representations to natural language sentences."
                    ],
                    "page_nums": [
                        3,
                        4
                    ],
                    "images": []
                },
                "1": {
                    "title": "Meaning Representation",
                    "text": [
                        "Logic form, e.g. lambda calculus"
                    ],
                    "page_nums": [
                        5,
                        6,
                        7
                    ],
                    "images": []
                },
                "2": {
                    "title": "Graph Structured Meaning Representation",
                    "text": [
                        "Different kinds of graph-structured semantic representations:",
                        "Semantic Dependency Graphs (SDP)",
                        "Abstract Meaning Representations (AMR)",
                        "Dependency-based Minimal Recursion Semantics (DMRS)",
                        "Elementary Dependency Structures (EDS)",
                        "BV ARG1 ARG2 BV"
                    ],
                    "page_nums": [
                        8,
                        9,
                        10,
                        11
                    ],
                    "images": []
                },
                "3": {
                    "title": "Type Logical Semantic Graph",
                    "text": [
                        "EDS graphs are grounded under type-logical semantics. They are usually very flat and multi-rooted graphs.",
                        "BV ARG1 ARG2 BV",
                        "The boy wants the girl to believe him."
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "5": {
                    "title": "Formalisms for Strings Trees and Graphs",
                    "text": [
                        "Chomsky hierarchy Grammar Abstract machines",
                        "Manipulating Graphs: Graph Grammar and DAG Automata."
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "6": {
                    "title": "Existing System",
                    "text": [
                        "David Chiang, Frank Drewes, Daniel Gildea, Adam Lopez and",
                        "Giorgio Satta. Weighted DAG Automata for Semantic Graphs.",
                        "the longest NLP paper that Ive ever read",
                        "Daniel Quernheim and Kevin Knight. 2012. Towards probabilistic acceptors and transducers for feature structures"
                    ],
                    "page_nums": [
                        18,
                        29
                    ],
                    "images": []
                },
                "7": {
                    "title": "DAG Automata",
                    "text": [
                        "A weighted DAG automaton is a tuple",
                        "A run of M on DAG D V,E, is an edge labeling function",
                        "The weight of is the product of all weight of local transitions:"
                    ],
                    "page_nums": [
                        19,
                        20
                    ],
                    "images": []
                },
                "8": {
                    "title": "DAG Automata Toy Example",
                    "text": [
                        "John wants to go. {} _want_v_",
                        "named(John) } named(John Failed !",
                        "Accept ! } named(John"
                    ],
                    "page_nums": [
                        21,
                        22,
                        23,
                        24,
                        25,
                        26,
                        27,
                        28
                    ],
                    "images": [
                        "figure/image/1060-Figure2-1.png"
                    ]
                },
                "9": {
                    "title": "DAG to Tree Transducer",
                    "text": [
                        "WANT qnomb want s qinfb qnomb want s INF INF",
                        "BOY GIRL BOY GIRL",
                        "qnomb want s INF INF",
                        "BE LIE VE BE LIE VE qaccg to believe qaccb NP NP NP",
                        "qaccg qaccb NP NP NP to believe",
                        "t he b oy want s t he gir l t o b elieve him BOY G IR L BOY G IR L BOY G IR L",
                        "q S S S",
                        "Challenges for DAG-to-tree transduction on EDS graphs:",
                        "Cannot easily reverse the directions of edges",
                        "Cannot easily handle multiple roots"
                    ],
                    "page_nums": [
                        30,
                        31,
                        32,
                        33,
                        34
                    ],
                    "images": []
                },
                "10": {
                    "title": "Our DAG to program transducer",
                    "text": [
                        "Rewritting: directly generating a new data structure piece by piece, during recognizing an input DAG.",
                        "Obtaining target structures based on side effects of the",
                        "States: The output of our transducer is a program:",
                        "John wants to go.",
                        "S John want to go"
                    ],
                    "page_nums": [
                        36,
                        37,
                        38,
                        39,
                        40,
                        41,
                        42
                    ],
                    "images": [
                        "figure/image/1060-Figure2-1.png"
                    ]
                },
                "11": {
                    "title": "Transducation Rules",
                    "text": [
                        "A valid DAG Automata transition",
                        "We use parameterized states:",
                        "The range of direction: unchanged, empty, reversed."
                    ],
                    "page_nums": [
                        43,
                        44,
                        45
                    ],
                    "images": []
                },
                "12": {
                    "title": "Toy Example",
                    "text": [
                        "Rule For Recognition For Generation",
                        "Recognition: To find an edge labeling function . The red dashed edges make up an intermediate graph T().",
                        "of edge ei with variable xij and L with the output string in the statement templates."
                    ],
                    "page_nums": [
                        46,
                        47,
                        48,
                        49,
                        50,
                        51
                    ],
                    "images": [
                        "figure/image/1060-Figure2-1.png",
                        "figure/image/1060-Figure3-1.png",
                        "figure/image/1060-Table1-1.png"
                    ]
                },
                "13": {
                    "title": "DAG Transduction based NLG",
                    "text": [
                        "DAG Transducer Seq2seq Model",
                        "Semantic Graph Sequential Lemmas Surface string"
                    ],
                    "page_nums": [
                        52
                    ],
                    "images": []
                },
                "15": {
                    "title": "NLG via DAG transduction",
                    "text": [
                        "Data: DeepBank + Wikiwoods",
                        "Decoder: Beam search (beam size = 128)",
                        "About 37,000 induced rules are directly obtained from",
                        "DeepBank training dataset by a group of heuristic rules.",
                        "Disambiguation: global linear model",
                        "Transducer Lemmas Sentences Coverage induced rules induced and exteneded rules induced, exteneded and dynamic rules"
                    ],
                    "page_nums": [
                        61,
                        64,
                        66
                    ],
                    "images": []
                },
                "16": {
                    "title": "Fine to coarse Transduction",
                    "text": [
                        "To deal with data sparseness problem, we use some heuristic rules to generate extened rules by slightly changing an induced rule.",
                        "Given a induced rule:",
                        "New rule generated by deleting:",
                        "New rule generated by copying:"
                    ],
                    "page_nums": [
                        62,
                        63
                    ],
                    "images": []
                },
                "17": {
                    "title": "Fine to coarse transduction",
                    "text": [
                        "During decoding, when neither induced nor extended rule is applicable, we use markov model to create a dynamic rule",
                        "C {q1, qm},D represents the context. r1, rn denotes the outgoing states."
                    ],
                    "page_nums": [
                        65
                    ],
                    "images": []
                }
            },
            "paper_title": "Language Generation via DAG Transduction",
            "paper_id": "1060",
            "paper": {
                "title": "Language Generation via DAG Transduction",
                "abstract": "A DAG automaton is a formal device for manipulating graphs. By augmenting a DAG automaton with transduction rules, a DAG transducer has potential applications in fundamental NLP tasks. In this paper, we propose a novel DAG transducer to perform graph-to-program transformation. The target structure of our transducer is a program licensed by a declarative programming language rather than linguistic structures. By executing such a program, we can easily get a surface string. Our transducer is designed especially for natural language generation (NLG) from type-logical semantic graphs. Taking Elementary Dependency Structures, a format of English Resource Semantics, as input, our NLG system achieves a BLEU-4 score of 68.07. This remarkable result demonstrates the feasibility of applying a DAG transducer to resolve NLG, as well as the effectiveness of our design.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The recent years have seen an increased interest as well as rapid progress in semantic parsing and surface realization based on graph-structured semantic representations, e.g."
                    },
                    {
                        "id": 1,
                        "string": "Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Elementary Dependency Structure (EDS; Oepen and Lønning, 2006) and Depedendency-based Minimal Recursion Semantics (DMRS; Copestake, 2009) ."
                    },
                    {
                        "id": 2,
                        "string": "Still underexploited is a formal framework for manipulating graphs that parallels automata, tranducers or formal grammars for strings and trees."
                    },
                    {
                        "id": 3,
                        "string": "Two such formalisms have recently been proposed and applied for NLP."
                    },
                    {
                        "id": 4,
                        "string": "One is graph grammar, e.g."
                    },
                    {
                        "id": 5,
                        "string": "Hyperedge Replacement Gram-mar (HRG; Ehrig et al., 1999) ."
                    },
                    {
                        "id": 6,
                        "string": "The other is DAG automata, originally studied by Kamimura and Slutzki (1982) and extended by Chiang et al."
                    },
                    {
                        "id": 7,
                        "string": "(2018) ."
                    },
                    {
                        "id": 8,
                        "string": "In this paper, we study DAG transducers in depth, with the goal of building accurate, efficient yet robust natural language generation (NLG) systems."
                    },
                    {
                        "id": 9,
                        "string": "The meaning representation studied in this work is what we call type-logical semantic graphs, i.e."
                    },
                    {
                        "id": 10,
                        "string": "semantic graphs grounded under type-logical semantics (Carpenter, 1997) , one dominant theoretical framework for modeling natural language semantics."
                    },
                    {
                        "id": 11,
                        "string": "In this framework, adjuncts, such as adjective and adverbal phrases, are analyzed as (higher-order) functors, the function of which is to consume complex arguments (Kratzer and Heim, 1998) ."
                    },
                    {
                        "id": 12,
                        "string": "In the same spirit, generalized quantifiers, prepositions and function words in many languages other than English are also analyzed as higher-order functions."
                    },
                    {
                        "id": 13,
                        "string": "Accordingly, all the linguistic elements are treated as roots in type-logical semantic graphs, such as EDS and DMRS."
                    },
                    {
                        "id": 14,
                        "string": "This makes the typological structure quite flat rather than hierachical, which is an essential distinction between natural language semantics and syntax."
                    },
                    {
                        "id": 15,
                        "string": "To the best of our knowledge, the only existing DAG transducer for NLG is the one proposed by Quernheim and Knight (2012) ."
                    },
                    {
                        "id": 16,
                        "string": "Quernheim and Knight introduced a DAG-to-tree transducer that can be applied to AMR-to-text generation."
                    },
                    {
                        "id": 17,
                        "string": "This transducer is designed to handle hierarchical structures with limited reentrencies, and it is unsuitable for meaning graphs transformed from type-logical semantics."
                    },
                    {
                        "id": 18,
                        "string": "Furthermore, Quernheim and Knight did not describe how to acquire graph recognition and transduction rules from linguistic data, and reported no result of practical generation."
                    },
                    {
                        "id": 19,
                        "string": "It is still unknown to what extent a DAG transducer suits realistic NLG."
                    },
                    {
                        "id": 20,
                        "string": "The design for string and tree transducers (Comon et al., 1997) focuses on not only the logic of the computation for a new data structure, but also the corresponding control flow."
                    },
                    {
                        "id": 21,
                        "string": "This is very similar the imperative programming paradigm: implementing algorithms with exact details in explicit steps."
                    },
                    {
                        "id": 22,
                        "string": "This design makes it very difficult to transform a type-logical semantic graph into a string, due to the fact their internal structures are highly diverse."
                    },
                    {
                        "id": 23,
                        "string": "We borrow ideas from declarative programming, another programming paradigm, which describes what a program must accomplish, rather than how to accomplish it."
                    },
                    {
                        "id": 24,
                        "string": "We propose a novel DAG transducer to perform graphto-program transformation ( §3)."
                    },
                    {
                        "id": 25,
                        "string": "The input of our transducer is a semantic graph, while the output is a program licensed by a declarative programming language rather than linguistic structures."
                    },
                    {
                        "id": 26,
                        "string": "By executing such a program, we can easily get a surface string."
                    },
                    {
                        "id": 27,
                        "string": "This idea can be extended to other types of linguistic structures, e.g."
                    },
                    {
                        "id": 28,
                        "string": "syntactic trees or semantic representations of another language."
                    },
                    {
                        "id": 29,
                        "string": "We conduct experiments on richly detailed semantic annotations licensed by English Resource Grammar (ERG; Flickinger, 2000) ."
                    },
                    {
                        "id": 30,
                        "string": "We introduce a principled method to derive transduction rules from DeepBank (Flickinger et al., 2012) ."
                    },
                    {
                        "id": 31,
                        "string": "Furthermore, we introduce a fine-to-coarse strategy to ensure that at least one sentence is generated for any input graph."
                    },
                    {
                        "id": 32,
                        "string": "Taking EDS graphs, a variable-free ERS format, as input, our NLG system achieves a BLEU-4 score of 68.07."
                    },
                    {
                        "id": 33,
                        "string": "On average, it produces more than 5 sentences in a second on an x86 64 GNU/Linux platform with two Intel Xeon E5-2620 CPUs."
                    },
                    {
                        "id": 34,
                        "string": "Since the data for experiments is newswire data, i.e."
                    },
                    {
                        "id": 35,
                        "string": "WSJ sentences from PTB (Marcus et al., 1993) , the input graphs are quite large on average."
                    },
                    {
                        "id": 36,
                        "string": "The remarkable accuracy, efficiency and robustness demonstrate the feasibility of applying a DAG transducer to resolve NLG, as well as the effectiveness of our transducer design."
                    },
                    {
                        "id": 37,
                        "string": "Previous Work and Challenges Preliminaries A node-labeled simple graph over alphabet Σ is a triple G = (V, E, ℓ), where V is a finite set of nodes, E ⊆ V × V is an finite set of edges and ℓ : V → Σ is a labeling function."
                    },
                    {
                        "id": 38,
                        "string": "For a node v ∈ V , sets of its incoming and outgoing edges are denoted by in(v) and out(v) respectively."
                    },
                    {
                        "id": 39,
                        "string": "For an edge e ∈ E, its source node and target node are denoted by src(e) and tar(e) respectively."
                    },
                    {
                        "id": 40,
                        "string": "Gen-erally speaking, a DAG is a directed acyclic simple graph."
                    },
                    {
                        "id": 41,
                        "string": "Different from trees, a DAG allows nodes to have multiple incoming edges."
                    },
                    {
                        "id": 42,
                        "string": "In this paper, we only consider DAGs that are unordered, node-labeled, multi-rooted 1 and connected."
                    },
                    {
                        "id": 43,
                        "string": "Conceptual graphs, including AMR and EDS, are both node-labeled and edge-labeled."
                    },
                    {
                        "id": 44,
                        "string": "It seems that without edge labels, a DAG is inadequate, but this problem can be solved easily by using the strategies introduced in (Chiang et al., 2018) ."
                    },
                    {
                        "id": 45,
                        "string": "Take a labeled edge proper q BV −→ named for example 2 ."
                    },
                    {
                        "id": 46,
                        "string": "We can represent the same information by replacing it with two unlabeled edges and a new labeled node: proper q → BV → named."
                    },
                    {
                        "id": 47,
                        "string": "Previous Work DAG automata are the core engines of graph transducers (Bohnet and Wanner, 2010; Quernheim and Knight, 2012) ."
                    },
                    {
                        "id": 48,
                        "string": "In this work, we adopt Chiang et al."
                    },
                    {
                        "id": 49,
                        "string": "(2018) 's design and define a weighted DAG automaton as a tuple M = ⟨Σ, Q, δ, K⟩: • Σ is an alphabet of node labels."
                    },
                    {
                        "id": 50,
                        "string": "• Q is a finite set of states."
                    },
                    {
                        "id": 51,
                        "string": "• (K, ⊕, ⊗, 0, 1) is a semiring of weights."
                    },
                    {
                        "id": 52,
                        "string": "• δ : Θ → K\\{0} is a weight function that assigns nonzero weights to a finite transition set Θ."
                    },
                    {
                        "id": 53,
                        "string": "Every transition t ∈ Θ is of the form {q 1 , · · · , q m } σ − → {r 1 , · · · , r n } where q i and r j are states in Q."
                    },
                    {
                        "id": 54,
                        "string": "A transition t gets m states on the incoming edges of a node and puts n states on the outgoing edges."
                    },
                    {
                        "id": 55,
                        "string": "A transition that does not belong to Θ recieves a weight of zero."
                    },
                    {
                        "id": 56,
                        "string": "A run of M on a DAG D = ⟨V, E, ℓ⟩ is an edge labeling function ρ : E → Q."
                    },
                    {
                        "id": 57,
                        "string": "The weight of a run ρ (denoted as δ ′ (ρ)) is the product of all weights of local transitions: δ ′ (ρ) = ⊗ v∈V δ ( ρ(in(v)) ℓ(v) − − → ρ(out(v)) ) Here, for a function f , we use f ({a 1 , · · · , a n }) to represent {f (a 1 ), · · · , f (a n )}."
                    },
                    {
                        "id": 58,
                        "string": "If K is a boolean semiring, the automata fall backs to an unweighted DAG automata or DAG acceptor."
                    },
                    {
                        "id": 59,
                        "string": "A accepting run or recognition is a run, the weight of which is 1, meaning true."
                    },
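                    {
                        "string": "A minimal Python sketch of the weight computation above (an editor's illustration under assumed data structures, not code from the paper); it collects the states on a node's incoming and outgoing edges as sorted multisets and multiplies the local transition weights over the real semiring:\n\ndef run_weight(labels, in_edges, out_edges, rho, delta):\n    # labels: {node: label}; in_edges/out_edges: {node: [edge ids]}\n    # rho: {edge id: state}; delta: {(in_states, label, out_states): weight}\n    # over the real semiring, so 1.0 is the semiring one and 0.0 the zero\n    w = 1.0\n    for v, label in labels.items():\n        t = (tuple(sorted(rho[e] for e in in_edges.get(v, []))),\n             label,\n             tuple(sorted(rho[e] for e in out_edges.get(v, []))))\n        w *= delta.get(t, 0.0)  # transitions outside the finite set get weight zero\n    return w"
                    },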
                    {
                        "id": 60,
                        "string": "Challenges The DAG automata defined above can only be used for recognition."
                    },
                    {
                        "id": 61,
                        "string": "In order to generate sentences from semantic graphs, we need DAG transducers."
                    },
                    {
                        "id": 62,
                        "string": "A DAG transducer is a DAG automata-augmented computation model for transducing well-formed DAGs to other data structures."
                    },
                    {
                        "id": 63,
                        "string": "Quernheim and Knight (2012) focused on feature structures and introduced a DAG-to-Tree transducer to perform graph-to-tree transformation."
                    },
                    {
                        "id": 64,
                        "string": "The input of their transducer is limited to single-rooted DAGs."
                    },
                    {
                        "id": 65,
                        "string": "When the labels of the leaves of an output tree in order are interpreted as words, this transducer can be applied to generate natural language sentences."
                    },
                    {
                        "id": 66,
                        "string": "When applying Quernheim and Knight's DAGto-Tree transducer on type-logic semantic graphs, e.g."
                    },
                    {
                        "id": 67,
                        "string": "ERS, there are some significant problems."
                    },
                    {
                        "id": 68,
                        "string": "First, it lacks the ability to reverse the direction of edges during transduction because it is difficult to keep acyclicy anymore if edge reversing is allowed."
                    },
                    {
                        "id": 69,
                        "string": "Second, it cannot handle multiple roots."
                    },
                    {
                        "id": 70,
                        "string": "But we have discussed and reached the conclusion that multi-rootedness is a necessary requirement for representing type-logical semantic graphs."
                    },
                    {
                        "id": 71,
                        "string": "It is difficult to decide which node should be the tree root during a 'top-down' transduction and it is also difficult to merge multiple unconnected nodes into one during a 'bottom-up' transduction."
                    },
                    {
                        "id": 72,
                        "string": "At the risk of oversimplifying, we argue that the function of the existing DAG-to-Tree transducer is to transform a hierachical structure into another hierarchical structure."
                    },
                    {
                        "id": 73,
                        "string": "Since the type-local semantic graphs are so flat, it is extremely difficult to adopt Quernheim and Knight's design to handle such graphs."
                    },
                    {
                        "id": 74,
                        "string": "Third, there are unconnected nodes with direct dependencies, meaning that their correpsonding surface expressions appear to be very close."
                    },
                    {
                        "id": 75,
                        "string": "The conceptual nodes even x deg and steep a 1 in Figure 4 are an example."
                    },
                    {
                        "id": 76,
                        "string": "It is extremely difficult for the DAG-to-Tree transducer to handle this situation."
                    },
                    {
                        "id": 77,
                        "string": "A New DAG Transducer Basic Idea In this paper, we introduce a design of transducers that can perform structure transformation towards many data structures, including but not limited to trees."
                    },
                    {
                        "id": 78,
                        "string": "The basic idea is to give up the rewritting method to directly generate a new data structure piece by piece, while recognizing an input DAG."
                    },
                    {
                        "id": 79,
                        "string": "Instead, our transducer obtains target structures based on side effects of DAG recognition."
                    },
                    {
                        "id": 80,
                        "string": "The output of our transducer is no longer the target data structure itself, e.g."
                    },
                    {
                        "id": 81,
                        "string": "a tree or another DAG, and is now a program, i.e."
                    },
                    {
                        "id": 82,
                        "string": "a bunch of statements licensed by a particular declarative programming language."
                    },
                    {
                        "id": 83,
                        "string": "The target structures are constructed by executing such programs."
                    },
                    {
                        "id": 84,
                        "string": "Since our main concern of this paper is natural language generation, we take strings, namely sequences of words, as our target structures."
                    },
                    {
                        "id": 85,
                        "string": "In this section, we introduce an extremely simple programming language for string concatenation and then details about how to leverage the power of declarative programming to perform DAG-tostring transformation."
                    },
                    {
                        "id": 86,
                        "string": "A Declarative Programming Language The syntax in the BNF format of our declarative programming language, denoted as L c , for string calculation is: ⟨program⟩ ::= ⟨statement⟩ * ⟨statement⟩ ::= ⟨variable⟩ = ⟨expr⟩ ⟨expr⟩ ::= ⟨variable⟩ | ⟨string⟩ | ⟨expr⟩ + ⟨expr⟩ Here a string is a sequence of characters selected from an alphabet (denoted as Σ out ) and can be empty (denoted as ϵ)."
                    },
                    {
                        "id": 87,
                        "string": "The semantics of '=' is value assignment, while the semantics of '+' is string concatenation."
                    },
                    {
                        "id": 88,
                        "string": "The value of variables are strings."
                    },
                    {
                        "id": 89,
                        "string": "For every statement, the left hand side is a variable and the right hand side is a sequence of string literals and variables that are combined through '+'."
                    },
                    {
                        "id": 90,
                        "string": "Equation (1) presents an exmaple program licensed by this language."
                    },
                    {
                        "id": 91,
                        "string": "S = x 21 + want + x 11 x 11 = to + go x 21 = x 41 + John x 41 = ϵ (1) After solving these statements, we can query the values of all variables."
                    },
                    {
                        "id": 92,
                        "string": "In particular, we are interested in S, which is related to the desired natural language expression John want to go 3 ."
                    },
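                    {
                        "string": "The statements in (1) can be solved by recursively resolving variables, which is what the compiler's internal tree in Figure 1 amounts to. A minimal sketch, assuming an acyclic program; the dict representation and the literal-vs-variable test are assumptions for illustration:\n\ndef value(var, stmts, memo=None):\n    # stmts maps a variable to the list of summands on its right-hand side;\n    # tokens that are not variables are treated as string literals\n    memo = {} if memo is None else memo\n    if var not in memo:\n        parts = [value(t, stmts, memo) if t in stmts else t for t in stmts[var]]\n        memo[var] = ' '.join(p for p in parts if p)\n    return memo[var]\n\nstmts = {'S': ['x21', 'want', 'x11'], 'x11': ['to', 'go'],\n         'x21': ['x41', 'John'], 'x41': []}\nprint(value('S', stmts))  # -> John want to go"
                    },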
                    {
                        "id": 93,
                        "string": "Using the relation between the variables, we can easily convert the statements in (1) to a rooted tree."
                    },
                    {
                        "id": 94,
                        "string": "The result is shown in Figure 1 ."
                    },
                    {
                        "id": 95,
                        "string": "This tree is significantly different from the target structures discussed by Quernheim and Knight (2012) or other normal tree transducers (Comon et al., 1997) ."
                    },
                    {
                        "id": 96,
                        "string": "This tree represents calculation to solve the program."
                    },
                    {
                        "id": 97,
                        "string": "Constructing such internal trees is an essential function of the compiler of our programming language."
                    },
                    {
                        "id": 98,
                        "string": "Informal Illustration We introduce our DAG transducer using a simple example."
                    },
                    {
                        "id": 99,
                        "string": "Figure 2 shows the original input graph D = (V, E, ℓ)."
                    },
                    {
                        "id": 100,
                        "string": "Without any loss of generality, we remove edge labels."
                    },
                    {
                        "id": 101,
                        "string": "Table 1 lists the rule set-R-for this example."
                    },
                    {
                        "id": 102,
                        "string": "Every row represents an applicable transduction rule that consists of two parts."
                    },
                    {
                        "id": 103,
                        "string": "The left column is the recognition part displayed in the form I σ − → O, where I, O and σ decode the state set of incoming edges, the state set of outgoing edges and the node label respectively."
                    },
                    {
                        "id": 104,
                        "string": "The right column is the generation part which consists of (multiple) templates of statements licensed by the programming language defined in the previous section."
                    },
                    {
                        "id": 105,
                        "string": "In practice, two different rules may have a same recognition part but different generation parts."
                    },
                    {
                        "id": 106,
                        "string": "Every state q is of the form l(n, d) where l is the finite state label, n is the count of possible variables related to q, and d denotes the direction."
                    },
                    {
                        "id": 107,
                        "string": "The value of d can only be r (reversed), u (unchanged) or e(empty)."
                    },
                    {
                        "id": 108,
                        "string": "Variable v l(j,d) represents the jth (1 ≤ j ≤ n) variable related to state q."
                    },
                    {
                        "id": 109,
                        "string": "For example, v X(2,r) means the second variable of state X(3,r)."
                    },
                    {
                        "id": 110,
                        "string": "There are two special variables: S, which corresponds to the whole sentence and L, which corresponds to the output string associated to current node label."
                    },
                    {
                        "id": 111,
                        "string": "It is reasonable to assume that there exists a function ψ : Σ → Σ * out that maps a particular node label, i.e."
                    },
                    {
                        "id": 112,
                        "string": "concept, to a surface string."
                    },
                    {
                        "id": 113,
                        "string": "Therefore L is determined by ψ."
                    },
                    {
                        "id": 114,
                        "string": "Now we are ready to apply transduction rules to translate D into a string."
                    },
                    {
                        "id": 115,
                        "string": "The transduction consists of two steps: Recognition The goal of this step is to find an edge labeling function ρ : E → Q which satisfies that for every node v, ρ(in(v)) ℓ(v) − − → ρ(out(v)) matches the recognition part of a rule in R. The recognition result is shown in Figure 3 ."
                    },
                    {
                        "id": 116,
                        "string": "The red dashed edges in Figure 3 make up an intermediate graph T (ρ), which is a subgraph of D if edge direction is not taken into account."
                    },
                    {
                        "id": 117,
                        "string": "Sometimes, T (ρ) paralles the syntactic structure of an output sentence."
                    },
                    {
                        "id": 118,
                        "string": "For a labeling function ρ, we can construct intermediate graph T (ρ) by checking the direction parameter of every edge state."
                    },
                    {
                        "id": 119,
                        "string": "For an u) is included."
                    },
                    {
                        "id": 120,
                        "string": "The recognition process is slightly different from the one in Chiang et al."
                    },
                    {
                        "id": 121,
                        "string": "(2018) ."
                    },
                    {
                        "id": 122,
                        "string": "Since incoming edges with an Empty(0,e) state carry no semantic information, they will be ignored during recognition."
                    },
                    {
                        "id": 123,
                        "string": "For example, in Figure 3 , we will only use e 2 and e 4 to match transducation rules for node named(John)."
                    },
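                    {
                        "string": "A small sketch of the T(ρ) construction just described (the edge and state representation is an editor's assumption): each edge is kept, reversed, or dropped according to the direction parameter of its state.\n\ndef intermediate_graph(edges, rho):\n    # edges: list of (u, v) pairs; rho: {(u, v): (label, n, d)}, d in {'r', 'u', 'e'}\n    T = []\n    for u, v in edges:\n        _label, _n, d = rho[(u, v)]\n        if d == 'u':\n            T.append((u, v))  # direction unchanged\n        elif d == 'r':\n            T.append((v, u))  # direction reversed\n        # d == 'e': Empty edges carry no semantics and are dropped\n    return T"
                    },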
                    {
                        "id": 126,
                        "string": "If the direction is e, neither (u, v) nor (v, Instantiation We use rule(v) to denotes the rule used on node v. Assume s is the generation part of rule(v)."
                    },
                    {
                        "id": 127,
                        "string": "For every edge e i adjacent to v, assume ρ(e i ) = l (n, d) ."
                    },
                    {
                        "id": 128,
                        "string": "We replace L with ψ(ℓ(v)) and replace every occurrence of v l (j,d) in s with a new variable x ij (1 ≤ j ≤ n)."
                    },
                    {
                        "id": 129,
                        "string": "Then we Q = {DET(1,r) , Empty(0,e), VP(1,u), NP(1,u)} Rule For Recognition For Table 1 : Sets of states (Q) and rules (R) that can be used to process the graph in Figure 2 ."
                    },
                    {
                        "id": 130,
                        "string": "Generation 1 {} proper q − −−−−− → {DET(1,r)} v DET(1,r) = ϵ 2 {} want v 1 − −−−−− → {VP(1,u), NP(1,u)} S = v NP(1,u) + L + v VP(1,u) 3 {VP(1,u)} go v 1 −−−−→ {Empty(0,e)} v VP(1,u) = to + L 4 {NP(1,u), DET(1,r)} named − −−− → {} v NP(1,u) = v DET(1,r) + L get a newly generated expression for v. For example, node want v 1 is recognized using Rule 2, so we replace v NP(1,u) with x 21 , v VP(1,u) with x 11 and L with want."
                    },
                    {
                        "id": 131,
                        "string": "After instantiation, we get all the statements in Equation (1) ."
                    },
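                    {
                        "string": "The instantiation step can be sketched as plain string substitution (an illustration; the textual placeholder convention v_label(j,d), the helper names and ψ as a Python callable are assumptions, and the corner case of two adjacent edges sharing a state is ignored):\n\nimport re\n\ndef instantiate(node_label, adj_edges, rho, templates, psi):\n    # adj_edges: edges adjacent to the node, indexed 1..k as e_i\n    # templates: statement strings such as 'S = v_NP(1,u) + L + v_VP(1,u)'\n    out = []\n    for s in templates:\n        for i, e in enumerate(adj_edges, 1):\n            lab, n, d = rho[e]\n            for j in range(1, n + 1):\n                s = s.replace(f'v_{lab}({j},{d})', f'x{i}{j}')\n        out.append(re.sub(r'\\bL\\b', psi(node_label), s))  # L -> surface string\n    return out\n\nOn the running example, instantiating Rule 2 at node want_v_1 with edges e_1 (VP(1,u)) and e_2 (NP(1,u)) yields S = x21 + want + x11, matching Equation (1)."
                    },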
                    {
                        "id": 132,
                        "string": "Our transducer is suitable for type-logical semantic graphs."
                    },
                    {
                        "id": 133,
                        "string": "Because declarative programming brings in more freedom for graph transduction."
                    },
                    {
                        "id": 134,
                        "string": "We can arrange the variables in almost any order without regard to the edge directions in original graphs."
                    },
                    {
                        "id": 135,
                        "string": "Meanwhile, the multi-rooted problem can be solved easily because the generation is based on side effects."
                    },
                    {
                        "id": 136,
                        "string": "We do not need to decide which node is the tree root."
                    },
                    {
                        "id": 137,
                        "string": "Definition The formal definition of our DAG transducer described above is a tuple M = (Σ, Q, R, w, V, S) where: • Σ is an alphabet of node labels."
                    },
                    {
                        "id": 138,
                        "string": "• Q is a finite set of edge states."
                    },
                    {
                        "id": 139,
                        "string": "Every state q ∈ Q is of the form l(n, d) where l is the state label, n is the variable count and d is the direction of state which can be r, u or e. • R is a finite set of rules."
                    },
                    {
                        "id": 140,
                        "string": "Every rule is of the form I σ − → ⟨O, E⟩."
                    },
                    {
                        "id": 141,
                        "string": "E can be any kind of statement in a declarative programming language."
                    },
                    {
                        "id": 142,
                        "string": "It is called the generation part."
                    },
                    {
                        "id": 143,
                        "string": "I, σ and O have the same meanings as they do in the previous section and they are called the recognition part."
                    },
                    {
                        "id": 144,
                        "string": "• w is a score function."
                    },
                    {
                        "id": 145,
                        "string": "Given a particular run and an anchor node, w assigns a score to measure the preference for a particular rule at this anchor node."
                    },
                    {
                        "id": 146,
                        "string": "• V is the set of parameterized variables that can be used in every expression."
                    },
                    {
                        "id": 147,
                        "string": "• S ∈ V is a distinguished, global variable."
                    },
                    {
                        "id": 148,
                        "string": "It is like the 'goal' of a program."
                    },
                    {
                        "id": 149,
                        "string": "DAG Transduction-based NLG Different languages exhibit different morphosyntactic and syntactico-semantic properties."
                    },
                    {
                        "id": 150,
                        "string": "For example, Russian and Arabic are morphologically-rich languages and heavily utilize grammatical markers to indicate grammatical as well as semantic functions."
                    },
                    {
                        "id": 151,
                        "string": "On the contrary, Chinese, as an analytic language, encodes grammatical and semantic information in a highly configurational rather than either inflectional or derivational way."
                    },
                    {
                        "id": 152,
                        "string": "Such differences affects NLG significantly."
                    },
                    {
                        "id": 153,
                        "string": "Considering generating Chinese sentences, it seems sufficient to employ our DAG transducer to obtain a sequence of lemmas, since no morpholical production is needed."
                    },
                    {
                        "id": 154,
                        "string": "But for morphologically-rich languages, we do need to model complex morphological changes."
                    },
                    {
                        "id": 155,
                        "string": "To unify a general framework for DAG transduction-based NLG, we propose a two-step strategy to achive meaning-to-text transformation."
                    },
                    {
                        "id": 156,
                        "string": "• In the first phase, we are concerned with syntactico-semantic properties and utilize our DAG transducer to translate a semantic graph into sequential lemmas."
                    },
                    {
                        "id": 157,
                        "string": "Information such as tense, apsects, gender, etc."
                    },
                    {
                        "id": 158,
                        "string": "is attached to anchor lemmas."
                    },
                    {
                        "id": 159,
                        "string": "Actually, our transducer generates \"want.PRES\" rather than \"wants\"."
                    },
                    {
                        "id": 160,
                        "string": "Here, \"PRES\" indicates a particular tense."
                    },
                    {
                        "id": 161,
                        "string": "• In the second phase, we are concerned with morpho-syntactic properties and utilize a neural sequence-to-sequence model to obtain final surface strings from the outputs of the DAG transducer."
                    },
                    {
                        "id": 162,
                        "string": "Inducing Transduction Rules We present an empirical study on the feasibility of DAG transduction-based NLG."
                    },
                    {
                        "id": 163,
                        "string": "We focus on Figure 4 : An example graph."
                    },
                    {
                        "id": 164,
                        "string": "The intended reading is \"the decline is even steeper than in September\", he said."
                    },
                    {
                        "id": 165,
                        "string": "Original edge labels are removed for clarity."
                    },
                    {
                        "id": 166,
                        "string": "Every edge is associated with a span list, and spans are written in the form label<begin:end>."
                    },
                    {
                        "id": 167,
                        "string": "The red dashed edges belong to the intermediate graph T ."
                    },
                    {
                        "id": 168,
                        "string": "variable-free MRS representations, namely EDS (Oepen and Lønning, 2006) ."
                    },
                    {
                        "id": 169,
                        "string": "The data set used in this work is DeepBank 1.1 (Flickinger et al., 2012) ."
                    },
                    {
                        "id": 170,
                        "string": "EDS-specific Constraints In order to generate reasonable strings, three constraints must be kept during transduction."
                    },
                    {
                        "id": 171,
                        "string": "First, for a rule I σ − → ⟨O, E⟩, a state with direction u in I or a state with direction r in O is called head state and its variables are called head variables."
                    },
                    {
                        "id": 172,
                        "string": "For example, the head state of rule 3 in Table 1 is VP(1,u) and the head state of rule 2 is DET(1,r)."
                    },
                    {
                        "id": 173,
                        "string": "There is at most one head state in a rule and only head variables or S can be the left sides of statements."
                    },
                    {
                        "id": 174,
                        "string": "If there is no head state, we assign the global S as its head."
                    },
                    {
                        "id": 175,
                        "string": "Otherwise, the number of statements is equal to the number of head variables and each statement has a distinguished left side variable."
                    },
                    {
                        "id": 176,
                        "string": "An empty state does not have any variables."
                    },
                    {
                        "id": 177,
                        "string": "Second, every rule has no-copying, no-deleting statements."
                    },
                    {
                        "id": 178,
                        "string": "In other words, all variables must be used exactly once in a statement."
                    },
                    {
                        "id": 179,
                        "string": "Third, during recognition, a labeling function ρ is valid only if T (ρ) is a rooted tree."
                    },
                    {
                        "id": 180,
                        "string": "After transduction, we get result ρ * ."
                    },
                    {
                        "id": 181,
                        "string": "The first and second constraints ensure that for all nodes, there is at most one incoming red dashed edge in T (ρ * ) and 'data' carried by variables of the only incoming red dashed edge or S is separated into variables of outgoing red dashed edges."
                    },
                    {
                        "id": 182,
                        "string": "The last constraint ensures that we can solve all statements by a bottom-up process on tree T (ρ * )."
                    },
                    {
                        "id": 183,
                        "string": "Fine-to-Coarse Transduction Almost all NLG systems that heavily utilize a symbolic system to encode deep syntacticosemantic information lack some robustness, meaning that some input graphs may not be successfully processed."
                    },
                    {
                        "id": 184,
                        "string": "There are two reasons: (1) some explicit linguistic constraints are not included; (2) exact decoding is too time-consuming while inexact decoding cannot cover the whole search space."
                    },
                    {
                        "id": 185,
                        "string": "To solve the robustness problem, we introduce a fine-to-coarse strategy to ensure that at least one sentence is generated for any input graph."
                    },
                    {
                        "id": 186,
                        "string": "There are three types of rules in our system, namely induced rules, extended rules and dynamic rules."
                    },
                    {
                        "id": 187,
                        "string": "The most fine-grained rules are applied to bring us precision, while the most coarse-grained rules are for robustness."
                    },
                    {
                        "id": 188,
                        "string": "In order to extract reasonable rules, we will use both EDS graphs and the corresponding derivation trees provided by ERG."
                    },
                    {
                        "id": 189,
                        "string": "The details will be described step-by-step in the following sections."
                    },
                    {
                        "id": 190,
                        "string": "Figure 4 shows an example for obtaining induced rules."
                    },
                    {
                        "id": 191,
                        "string": "The induced rules are directly obtained by following three steps: Induced Rules Finding intermediate tree T EDS graphs are highly regular semantic graphs."
                    },
                    {
                        "id": 192,
                        "string": "It is not difficult to generate T based on a highly customized 'breadthfirst' search."
                    },
                    {
                        "id": 193,
                        "string": "The generation starts from the 'top' node ( say v to in Figure 4) given by the EDS graph and traverse the whole graph."
                    },
                    {
                        "id": 194,
                        "string": "No more than thirty heuristic rules are used to decide the visiting order of nodes."
                    },
                    {
                        "id": 195,
                        "string": "Assigning states EDS graphs also provide span information for nodes."
                    },
                    {
                        "id": 196,
                        "string": "We select a group of lexical nodes which have corresponding substrings in the original sentence."
                    },
                    {
                        "id": 197,
                        "string": "In Figure 4 , these nodes are in bold font and directly followed by a span."
                    },
                    {
                        "id": 198,
                        "string": "Then we merge spans from the bottom of T to the top to assign each red edge a span list."
                    },
                    {
                        "id": 199,
                        "string": "For each node n in T , we collect spans of every outgoing dashed edge of n into a list s. Some additional spans may be inserted into s. These spans do not occur in the EDS graph but they do occur in the sentence, i.e."
                    },
                    {
                        "id": 200,
                        "string": "than<29:33>."
                    },
                    {
                        "id": 201,
                        "string": "Then we merge continuous spans in s and assign the remaining spans in s to the incoming red dashed edge of n. We apply a similar method to the derivation tree."
                    },
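                    {
                        "string": "Merging continuous spans can be sketched as below (an editor's illustration assuming half-open (begin, end) character offsets; not code from the paper):\n\ndef merge_continuous(spans):\n    # spans: list of (begin, end) offsets, e.g. [(29, 33), (33, 40)]\n    if not spans:\n        return []\n    spans = sorted(spans)\n    merged = [spans[0]]\n    for b, e in spans[1:]:\n        pb, pe = merged[-1]\n        if b <= pe:  # touching or overlapping the previous span: merge\n            merged[-1] = (pb, max(pe, e))\n        else:\n            merged.append((b, e))\n    return merged"
                    },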
                    {
                        "id": 202,
                        "string": "As a result, every inner node of the derivation tree is associated with a span."
                    },
                    {
                        "id": 203,
                        "string": "Then we align the edges in T to nodes of the inner derivation tree by comparing their spans."
                    },
                    {
                        "id": 204,
                        "string": "Finally edge labels in Figure 4 are generated."
                    },
                    {
                        "id": 205,
                        "string": "We use the concatenation of the edge labels in a span list as the state label."
                    },
                    {
                        "id": 206,
                        "string": "The edge labels are joined in order with ' '."
                    },
                    {
                        "id": 207,
                        "string": "Empty(0,e) is the state of the edges that do not belong to T (ignoring direction), such as e 12 ."
                    },
                    {
                        "id": 208,
                        "string": "The variable count of a state is equal to the size of the span list and the direction of a state is decided by whether the edge in T related to the state and its corresponding edge in D have different directions."
                    },
                    {
                        "id": 209,
                        "string": "For example, the state of e 5 should be ADV PP(2,r)."
                    },
                    {
                        "id": 210,
                        "string": "Generating statements After the above two steps, we are ready to generate statements according to how spans are merged."
                    },
                    {
                        "id": 211,
                        "string": "For all nodes, spans of the incoming edges represent the left hand side and the outgoing edges represent the right hand side."
                    },
                    {
                        "id": 212,
                        "string": "For example, the rule for node comp will be: {ADV(1,r)} comp − −− → {PP(1,u), ADV PP(2,r)} v ADV PP(1,r) = v ADV(1,r) v ADV PP(2,r) = than + v PP(1,u) Extended Rules Extended rules are used when no induced rules can cover a given node."
                    },
                    {
                        "id": 213,
                        "string": "In theory, there can be unlimited modifier nodes pointing to a given node, such as PP and ADJ."
                    },
                    {
                        "id": 214,
                        "string": "We use some manually written rules to slightly change an induced rule (prototype) by addition or deletion to generate a group of extended rules."
                    },
                    {
                        "id": 215,
                        "string": "The motivation here is to deal with the data sparseness problem."
                    },
                    {
                        "id": 216,
                        "string": "For a group of selected non-head states in I, such as PP and ADJ."
                    },
                    {
                        "id": 217,
                        "string": "We can produce new rules by removing or duplicating more of them."
                    },
                    {
                        "id": 218,
                        "string": "For example: {NP(1,u), ADJ(1,r)} X n 1 − −−− → {} v NP(1,u) = v ADJ(1,r) + L As a result, we get the two rules below: {NP(1,u)} X n 1 − −−− → {} v NP(1,u) = L {NP(1,u), ADJ(1,r) 1 , ADJ(1,r) 2 } X n 1 − −−− → {} v NP(1,u) = v ADJ(1,r) 1 + v ADJ(1,r) 2 + L Dynamic Rules During decoding, when neither induced nor extended rule is applicable, we create a dynamic rule on-the-fly."
                    },
                    {
                        "id": 219,
                        "string": "Our rule creator builds a new rule following the Markov assumption: P (O|C) = P (q 1 |C) n ∏ i=2 P (q i |C)P (q i |q i−1 , C) C = ⟨I, D⟩ represents the context."
                    },
                    {
                        "id": 220,
                        "string": "O = {q 1 , · · · , q n } denotes the outgoing states and I, D have the same meaning as before."
                    },
                    {
                        "id": 221,
                        "string": "Though they are unordered multisets, we can give them an explicit alphabet order by their edge labels."
                    },
                    {
                        "id": 222,
                        "string": "There is also a group of hard constraints to make sure that the predicted rules are well-formed as the definition in §5 requires."
                    },
                    {
                        "id": 223,
                        "string": "This Markovization strategy is widely utilized by lexicalized and unlexicalized PCFG parsers (Collins, 1997; Klein and Manning, 2003) ."
                    },
                    {
                        "id": 224,
                        "string": "For a dynamic rule, all variables in this rule will appear in the statement."
                    },
                    {
                        "id": 225,
                        "string": "We use a simple perceptron-based scorer to assign every variable a score and arrange them in an decreasing order."
                    },
                    {
                        "id": 226,
                        "string": "6 Evaluation and Analysis 6.1 Set-up We use DeepBank 1.1 (Flickinger et al., 2012) , i.e."
                    },
                    {
                        "id": 227,
                        "string": "gold-standard ERS annotations, as our main experimental data set to train a DAG transducer as well as a sequence-to-sequence morpholyzer, and wikiwoods (Flickinger et al., 2010) , i.e."
                    },
                    {
                        "id": 228,
                        "string": "automatically-generated ERS annotations by ERG, as additional data set to enhance the sequence-to-sequence morpholyzer."
                    },
                    {
                        "id": 229,
                        "string": "The training, development and test data sets are from DeepBank and split according to DeepBank's recommendation."
                    },
                    {
                        "id": 230,
                        "string": "There are 34,505, 1,758 and 1,444 sentences (all disconnected graphs as well as their associated sentences are removed) in the training, development and test data sets."
                    },
                    {
                        "id": 231,
                        "string": "We use a small portion of wikiwoods data, c.a."
                    },
                    {
                        "id": 232,
                        "string": "300K sentences, for experiments."
                    },
                    {
                        "id": 233,
                        "string": "37,537 induced rules are directly extracted from the training data set, and 447,602 extended rules are obtained."
                    },
                    {
                        "id": 234,
                        "string": "For DAG recognition, at one particular position, there may be more than one rule applicable."
                    },
                    {
                        "id": 235,
                        "string": "In this case, we need a disambiguation model as well as a decoder to search for a globally optimal solution."
                    },
                    {
                        "id": 236,
                        "string": "In this work, we train a structured perceptron model (Collins, 2002) for disambiguation and employ a beam decoder."
                    },
                    {
                        "id": 237,
                        "string": "The perceptron model used by our dynamic rule generator are trained with the induced rules."
                    },
                    {
                        "id": 238,
                        "string": "To get a sequence-to-sequence model, we use the open source tool-OpenNMT 4 ."
                    },
                    {
                        "id": 239,
                        "string": "The Decoder We implement a fine-to-coarse beam search decoder."
                    },
                    {
                        "id": 240,
                        "string": "Given a DAG D, our goal is to find the highest scored labeling function ρ: ρ = arg max ρ n ∏ i=1 ∑ j w j · f j (rule(v i ), D) s.t."
                    },
                    {
                        "id": 241,
                        "string": "rule(v i ) = ρ(in(v i )) ℓ(v i ) − −− → ⟨ρ(out(v i )), E i ⟩ where n is the node count and f j (·, ·) and w j represent a feature and the corresponding weight, respectively."
                    },
                    {
                        "id": 242,
                        "string": "The features are chosen from the context of the given node v i ."
                    },
                    {
                        "id": 243,
                        "string": "We perform 'topdown' search to translate an input DAG into a morphology-function-enhanced lemma sequence."
                    },
                    {
                        "id": 244,
                        "string": "Each hypothesis consists of the current DAG graph, the partial labeling function, the current hypothesis score and other graph information used to perform rule selection."
                    },
                    {
                        "id": 245,
                        "string": "The decoder will keep the corresponding partial intermediate graph T acyclic when decoding."
                    },
                    {
                        "id": 246,
                        "string": "The algorithm used by our decoder is displayed in Algorithm 1."
                    },
                    {
                        "id": 247,
                        "string": "Function FindRules(h, n, R) will use hard constraints to select rules from the rule set R according to the contextual information."
                    },
                    {
                        "id": 248,
                        "string": "It will also perform an acyclic check on T ."
                    },
                    {
                        "id": 249,
                        "string": "Function Insert(h, r, n, B) will create and score a new hypothesis made from the given context and then insert it into beam B. E ← E ∪ {e} 23 if in(tar(e)) ⊆ E: 24 Q ← Q ∪ {tar(e)} 25 Extract ρ from best hypothesis in B1 Accuracy In order to evaluate the effectiveness of our transducer for NLG, we try a group of tests showed in Table 2 ."
                    },
                    {
                        "id": 250,
                        "string": "All sequence-to-sequence models (either from lemma sequences to lemma sequences or lemma sequences to sentences) are trained on DeepBank and wikiwoods data set and tuned on the development data."
                    },
                    {
                        "id": 251,
                        "string": "The second column shows the BLEU-4 scores between generated lemma sequences and golden sequences of lemmas."
                    },
                    {
                        "id": 252,
                        "string": "The third column shows the BLEU-4 scores between generated sentences and golden sentences."
                    },
                    {
                        "id": 253,
                        "string": "The fourth column shows the fraction of graphs in the test data set that can reach output sentences."
                    },
                    {
                        "id": 254,
                        "string": "(Song et al., 2017) ."
                    },
                    {
                        "id": 255,
                        "string": "The graphs that cannot received any natural language sentences are removed while conducting the BLEU evaluation."
                    },
                    {
                        "id": 256,
                        "string": "As we can conclude from Table 2 , using only induced rules achieves the highest accuracy but the coverage is not satisfactory."
                    },
                    {
                        "id": 257,
                        "string": "Extended rules lead to a slight accuracy drop but with a great improvement of coverage (c.a."
                    },
                    {
                        "id": 258,
                        "string": "10%)."
                    },
                    {
                        "id": 259,
                        "string": "Using dynamic rules, we observe a significant accuracy drop."
                    },
                    {
                        "id": 260,
                        "string": "Nevertheless, we are able to handle all EDS graphs."
                    },
                    {
                        "id": 261,
                        "string": "The full-coverage robustness may benefit many NLP applications."
                    },
                    {
                        "id": 262,
                        "string": "The lemma sequences generated by our transducer are really close to the golden one."
                    },
                    {
                        "id": 263,
                        "string": "This means that our model actually works and most reordering patterns are handled well by induced rules."
                    },
                    {
                        "id": 264,
                        "string": "Compared to the AMR generation task, our transducer on EDS graphs achieves much higher accuracies."
                    },
                    {
                        "id": 265,
                        "string": "To make clear how much improvement is from the data and how much is from our DAG transducer, we implement a purely neural baseline."
                    },
                    {
                        "id": 266,
                        "string": "The baseline converts a DAG into a concept sequence by a pre-order DFS traversal on the intermediate tree of this DAG."
                    },
                    {
                        "id": 267,
                        "string": "Then we use a sequenceto-sequence model to transform this concept sequence to the lemma sequence for comparison."
                    },
                    {
                        "id": 268,
                        "string": "This is a kind of implementation of Konstas et al."
                    },
                    {
                        "id": 269,
                        "string": "'s model but evaluated on the EDS data."
                    },
                    {
                        "id": 270,
                        "string": "We can see that on this task, our transducer is much better than a pure sequence-to-sequence model on DeepBank data."
                    },
                    {
                        "id": 271,
                        "string": "Table 3 : Efficiency of our NL generator."
                    },
                    {
                        "id": 272,
                        "string": "Table 3 shows the efficiency of the beam search decoder with a beam size of 128."
                    },
                    {
                        "id": 273,
                        "string": "The platform for this experiment is x86 64 GNU/Linux with two Intel Xeon E5-2620 CPUs."
                    },
                    {
                        "id": 274,
                        "string": "The second and third columns represent the average and the maximal time (in seconds) to translate an EDS graph."
                    },
                    {
                        "id": 275,
                        "string": "Using dynamic rules slow down the decoder to a great degree."
                    },
                    {
                        "id": 276,
                        "string": "Since the data for experiments is newswire data, i.e."
                    },
                    {
                        "id": 277,
                        "string": "WSJ sentences from PTB (Marcus et al., 1993) , the input graphs are quite large on average."
                    },
                    {
                        "id": 278,
                        "string": "On average, it produces more than 5 sentences per second on CPU."
                    },
                    {
                        "id": 279,
                        "string": "We consider this a promising speed."
                    },
                    {
                        "id": 280,
                        "string": "Efficiency Conclusion We extend the work on DAG automata in Chiang et al."
                    },
                    {
                        "id": 281,
                        "string": "(2018) and propose a general method to build flexible DAG transducer."
                    },
                    {
                        "id": 282,
                        "string": "The key idea is to leverage a declarative programming language to minimize the computation burden of a graph transducer."
                    },
                    {
                        "id": 283,
                        "string": "We think may NLP tasks that involve graph manipulation may benefit from this design."
                    },
                    {
                        "id": 284,
                        "string": "To exemplify our design, we develop a practical system for the semantic-graph-to-string task."
                    },
                    {
                        "id": 285,
                        "string": "Our system is accurate (BLEU 68.07), efficient (more than 5 sentences per second on a CPU) and robust (fullcoverage)."
                    },
                    {
                        "id": 286,
                        "string": "The empirical evaluation confirms the usefulness a DAG transducer to resolve NLG, as well as the effectiveness of our design."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 36
                    },
                    {
                        "section": "Preliminaries",
                        "n": "2.1",
                        "start": 37,
                        "end": 46
                    },
                    {
                        "section": "Previous Work",
                        "n": "2.2",
                        "start": 47,
                        "end": 59
                    },
                    {
                        "section": "Challenges",
                        "n": "2.3",
                        "start": 60,
                        "end": 76
                    },
                    {
                        "section": "Basic Idea",
                        "n": "3.1",
                        "start": 77,
                        "end": 85
                    },
                    {
                        "section": "A Declarative Programming Language",
                        "n": "3.2",
                        "start": 86,
                        "end": 97
                    },
                    {
                        "section": "Informal Illustration",
                        "n": "3.3",
                        "start": 98,
                        "end": 136
                    },
                    {
                        "section": "Definition",
                        "n": "3.4",
                        "start": 137,
                        "end": 148
                    },
                    {
                        "section": "DAG Transduction-based NLG",
                        "n": "4",
                        "start": 149,
                        "end": 161
                    },
                    {
                        "section": "Inducing Transduction Rules",
                        "n": "5",
                        "start": 162,
                        "end": 169
                    },
                    {
                        "section": "EDS-specific Constraints",
                        "n": "5.1",
                        "start": 170,
                        "end": 182
                    },
                    {
                        "section": "Fine-to-Coarse Transduction",
                        "n": "5.2",
                        "start": 183,
                        "end": 190
                    },
                    {
                        "section": "Induced Rules",
                        "n": "5.3",
                        "start": 191,
                        "end": 212
                    },
                    {
                        "section": "Extended Rules",
                        "n": "5.4",
                        "start": 213,
                        "end": 218
                    },
                    {
                        "section": "Dynamic Rules",
                        "n": "5.5",
                        "start": 219,
                        "end": 238
                    },
                    {
                        "section": "The Decoder",
                        "n": "6.2",
                        "start": 239,
                        "end": 249
                    },
                    {
                        "section": "Accuracy",
                        "n": "6.3",
                        "start": 250,
                        "end": 279
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 280,
                        "end": 286
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1060-Table2-1.png",
                        "caption": "Table 2: Accuracy (BLEU-4 score) and coverage of different systems. I denotes transduction only using induced rules; I+E denotes transduction using both induced and extended rules; I+E+D denotes transduction using all kinds of rules. DFSNN is a rough implementation of Konstas et al. (2017) but with the EDS data, while AMR-NN includes the results originally reported by Konstas et al., which are evaluated on the AMR data. AMR-NRG includes the results obtained by a synchronous graph grammar (Song et al., 2017).",
                        "page": 8,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 295.2,
                            "y1": 62.879999999999995,
                            "y2": 160.32
                        }
                    },
                    {
                        "filename": "../figure/image/1060-Table3-1.png",
                        "caption": "Table 3: Efficiency of our NL generator.",
                        "page": 8,
                        "bbox": {
                            "x1": 321.59999999999997,
                            "x2": 511.2,
                            "y1": 62.879999999999995,
                            "y2": 119.03999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1060-Figure4-1.png",
                        "caption": "Figure 4: An example graph. The intended reading is “the decline is even steeper than in September”, he said. Original edge labels are removed for clarity. Every edge is associated with a span list, and spans are written in the form label<begin:end>. The red dashed edges belong to the intermediate graph T .",
                        "page": 5,
                        "bbox": {
                            "x1": 110.88,
                            "x2": 485.28,
                            "y1": 65.28,
                            "y2": 192.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1060-Table1-1.png",
                        "caption": "Table 1: Sets of states (Q) and rules (R) that can be used to process the graph in Figure 2.",
                        "page": 4,
                        "bbox": {
                            "x1": 115.67999999999999,
                            "x2": 481.44,
                            "y1": 62.879999999999995,
                            "y2": 156.0
                        }
                    },
                    {
                        "filename": "../figure/image/1060-Figure1-1.png",
                        "caption": "Figure 1: Variable relation tree.",
                        "page": 3,
                        "bbox": {
                            "x1": 106.08,
                            "x2": 253.92,
                            "y1": 214.07999999999998,
                            "y2": 274.08
                        }
                    },
                    {
                        "filename": "../figure/image/1060-Figure3-1.png",
                        "caption": "Figure 3: A run of the graph in Figure 2.",
                        "page": 3,
                        "bbox": {
                            "x1": 332.64,
                            "x2": 497.76,
                            "y1": 181.44,
                            "y2": 270.71999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1060-Figure2-1.png",
                        "caption": "Figure 2: An input graph. The intended reading is John wants to go.",
                        "page": 3,
                        "bbox": {
                            "x1": 346.56,
                            "x2": 484.32,
                            "y1": 62.879999999999995,
                            "y2": 128.64
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-26"
        },
        {
            "slides": {
                "1": {
                    "title": "Search Engine QA Engine",
                    "text": [
                        "who was Katy Perry's husband second tallest mountain in england|",
                        "who was tom cruise's first wife",
                        "Wet Web Shopping Maps News Images More + Search tools",
                        "Web News Images Shopping",
                        "C7 when did minnesota becor",
                        "Minnesota - Scafell Pike"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Generic Semantic Parsing eg Kwiatkowski 13",
                    "text": [
                        "Who is Justin Biebers sister?",
                        ". sibling_of(justin_bieber, x) gender(x, female)"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "KB Specitic Semantic Parsing eg Berant 13",
                    "text": [
                        "Who is Justin Biebers sister?",
                        ". sibling_of(justin_bieber, x) gender(x, female)"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Key Challenges",
                    "text": [
                        "What was the date that Minnesota became a state?",
                        "When was the state Minnesota created?",
                        "Minnesota's date it entered the union?"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "9": {
                    "title": "Identity Core Inferential Chain",
                    "text": [
                        "Family Guy cast y actor x",
                        "Family Guy Family Guy writer y start x",
                        "Family Guy genre x",
                        "Who first voiced Meg on Family Guy?"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": [
                        "figure/image/1072-Figure5-1.png",
                        "figure/image/1072-Figure4-1.png"
                    ]
                },
                "11": {
                    "title": "Augment Constraints",
                    "text": [
                        "Family Guy cast y actor x Family Guy y x",
                        "Who voiced Family Guy",
                        "One or more constraint nodes can be added to or",
                        ": Additional property of this event (e.g., character MegGriffin",
                        ": Additional property of the answer entity (e.g., gender)"
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": [
                        "figure/image/1072-Figure5-1.png",
                        "figure/image/1072-Figure2-1.png",
                        "figure/image/1072-Figure7-1.png"
                    ]
                },
                "14": {
                    "title": "WebQuestions Dataset Berant 13",
                    "text": [
                        "What character did Natalie Portman play in Star Wars? Padme Amidala",
                        "What currency do you use in Costa Rica? Costa Rican colon",
                        "What did Obama study in school? political science",
                        "What do Michelle Obama do for a living? writer, lawyer",
                        "What killed Sammy Davis Jr? throat cancer [Examples from Berant]"
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": []
                },
                "15": {
                    "title": "Creating Training Data trom Q A Pairs",
                    "text": [
                        "Relation Matching (Identifying Core Inferential Chain)",
                        "what was <e> known for people.person.profession",
                        "what kind of government does <e> have location.country.form_of_government",
                        "what year were the <e> established sports.sports_team.founded",
                        "what city was <e> born in people.person.place_of_birth",
                        "what did <e> die from people.deceased_person.cause_of_death",
                        "who married <e> people.person.spouse_s"
                    ],
                    "page_nums": [
                        24,
                        25
                    ],
                    "images": []
                },
                "17": {
                    "title": "Contribution from Entity Linking",
                    "text": [
                        "Method #Entities Covered Ques. Labeled Ent."
                    ],
                    "page_nums": [
                        27
                    ],
                    "images": []
                },
                "18": {
                    "title": "Contribution from Relation Matching",
                    "text": [
                        "Fy score of query graphs that have only a core inferential",
                        "Questions trom search engine users are short & simple",
                        "Even if the correct parse requires more constraints, the less constrained graph still gets a partial score"
                    ],
                    "page_nums": [
                        28
                    ],
                    "images": []
                }
            },
            "paper_title": "Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base",
            "paper_id": "1072",
            "paper": {
                "title": "Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base",
                "abstract": "We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F 1 measure of 52.5% on the WEBQUESTIONS dataset.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Organizing the world's facts and storing them in a structured database, large-scale knowledge bases (KB) like DBPedia (Auer et al., 2007) and Freebase (Bollacker et al., 2008) have become important resources for supporting open-domain question answering (QA)."
                    },
                    {
                        "id": 1,
                        "string": "Most state-of-the-art approaches to KB-QA are based on semantic parsing, where a question (utterance) is mapped to its formal meaning representation (e.g., logical form) and then translated to a KB query."
                    },
                    {
                        "id": 2,
                        "string": "The answers to the question can then be retrieved simply by executing the query."
                    },
                    {
                        "id": 3,
                        "string": "The semantic parse also provides a deeper understanding of the question, which can be used to justify the answer to users, as well as to provide easily interpretable information to developers for error analysis."
                    },
                    {
                        "id": 4,
                        "string": "However, most traditional approaches for semantic parsing are largely decoupled from the knowledge base, and thus are faced with several challenges when adapted to applications like QA."
                    },
                    {
                        "id": 5,
                        "string": "For instance, a generic meaning representation may have the ontology matching problem when the logical form uses predicates that differ from those defined in the KB (Kwiatkowski et al., 2013) ."
                    },
                    {
                        "id": 6,
                        "string": "Even when the representation language is closely related to the knowledge base schema, finding the correct predicates from the large vocabulary in the KB to relations described in the utterance remains a difficult problem (Berant and Liang, 2014) ."
                    },
                    {
                        "id": 7,
                        "string": "Inspired by (Yao and Van Durme, 2014; Bao et al., 2014) , we propose a semantic parsing framework that leverages the knowledge base more tightly when forming the parse for an input question."
                    },
                    {
                        "id": 8,
                        "string": "We first define a query graph that can be straightforwardly mapped to a logical form in λcalculus and is semantically closely related to λ-DCS (Liang, 2013) ."
                    },
                    {
                        "id": 9,
                        "string": "Semantic parsing is then reduced to query graph generation, formulated as a search problem with staged states and actions."
                    },
                    {
                        "id": 10,
                        "string": "Each state is a candidate parse in the query graph representation and each action defines a way to grow the graph."
                    },
                    {
                        "id": 11,
                        "string": "The representation power of the semantic parse is thus controlled by the set of legitimate actions applicable to each state."
                    },
                    {
                        "id": 12,
                        "string": "In particular, we stage the actions into three main steps: locating the topic entity in the question, finding the main relationship between the answer and the topic entity, and expanding the query graph with additional constraints that describe properties the answer needs to have, or relationships between the answer and other entities in the question."
                    },
                    {
                        "id": 13,
                        "string": "One key advantage of this staged design is that through grounding partially the utterance to some entities and predicates in the KB, we make the search far more efficient by focusing on the promising areas in the space that most likely lead to the correct query graph, before the full parse is determined."
                    },
                    {
                        "id": 14,
                        "string": "For example, after linking \"Fam-ily Guy\" in the question \"Who first voiced Meg on Family Guy?\""
                    },
                    {
                        "id": 15,
                        "string": "to FamilyGuy (the TV show) in the knowledge base, the procedure needs only to examine the predicates that can be applied to FamilyGuy instead of all the predicates in the KB."
                    },
                    {
                        "id": 16,
                        "string": "Resolving other entities also becomes easy, as given the context, it is clear that Meg refers to MegGriffin (the character in Family Guy)."
                    },
                    {
                        "id": 17,
                        "string": "Our design divides this particular semantic parsing problem into several sub-problems, such as entity linking and relation matching."
                    },
                    {
                        "id": 18,
                        "string": "With this integrated framework, best solutions to each subproblem can be easily combined and help produce the correct semantic parse."
                    },
                    {
                        "id": 19,
                        "string": "For instance, an advanced entity linking system that we employ outputs candidate entities for each question with both high precision and recall."
                    },
                    {
                        "id": 20,
                        "string": "In addition, by leveraging a recently developed semantic matching framework based on convolutional networks, we present better relation matching models using continuous-space representations instead of pure lexical matching."
                    },
                    {
                        "id": 21,
                        "string": "Our semantic parsing approach improves the state-of-the-art result on the WEBQUESTIONS dataset (Berant et al., 2013) to 52.5% in F 1 , a 7.2% absolute gain compared to the best existing method."
                    },
                    {
                        "id": 22,
                        "string": "The rest of this paper is structured as follows."
                    },
                    {
                        "id": 23,
                        "string": "Sec."
                    },
                    {
                        "id": 24,
                        "string": "2 introduces the basic notion of the graph knowledge base and the design of our query graph."
                    },
                    {
                        "id": 25,
                        "string": "Sec."
                    },
                    {
                        "id": 26,
                        "string": "3 presents our search-based approach for generating the query graph."
                    },
                    {
                        "id": 27,
                        "string": "The experimental results are shown in Sec."
                    },
                    {
                        "id": 28,
                        "string": "4, and the discussion of our approach and the comparisons to related work are in Sec."
                    },
                    {
                        "id": 29,
                        "string": "5."
                    },
                    {
                        "id": 30,
                        "string": "Finally, Sec."
                    },
                    {
                        "id": 31,
                        "string": "6 concludes the paper."
                    },
                    {
                        "id": 32,
                        "string": "Background In this work, we aim to learn a semantic parser that maps a natural language question to a logical form query q, which can be executed against a knowledge base K to retrieve the answers."
                    },
                    {
                        "id": 33,
                        "string": "Our approach takes a graphical view of both K and q, and reduces semantic parsing to mapping questions to query graphs."
                    },
                    {
                        "id": 34,
                        "string": "We describe the basic design below."
                    },
                    {
                        "id": 35,
                        "string": "Knowledge base The knowledge base K considered in this work is a collection of subject-predicate-object triples (e 1 , p, e 2 ), where e 1 , e 2 ∈ E are the entities (e.g., FamilyGuy or MegGriffin) and p ∈ P is a binary predicate like character."
                    },
                    {
                        "id": 36,
                        "string": "A knowledge base in this form is often called a knowledge graph because of its straightforward graphical representation -each entity is a node and two related entities are linked by a directed edge labeled by the predicate, from the subject to the object entity."
                    },
                    {
                        "id": 37,
                        "string": "To compare our approach to existing methods, we use Freebase, which is a large database with more than 46 million topics and 2.6 billion facts."
                    },
                    {
                        "id": 38,
                        "string": "In Freebase's design, there is a special entity category called compound value type (CVT), which is not a real-world entity, but is used to collect multiple fields of an event or a special relationship."
                    },
                    {
                        "id": 39,
                        "string": "Fig."
                    },
                    {
                        "id": 40,
                        "string": "1 shows a small subgraph of Freebase related to the TV show Family Guy."
                    },
                    {
                        "id": 41,
                        "string": "Nodes are the entities, including some dates and special CVT entities 1 ."
                    },
                    {
                        "id": 42,
                        "string": "A directed edge describes the relation between two entities, labeled by the predicate."
                    },
                    {
                        "id": 43,
                        "string": "Query graph Given the knowledge graph, executing a logicalform query is equivalent to finding a subgraph that can be mapped to the query and then resolving the binding of the variables."
                    },
                    {
                        "id": 44,
                        "string": "To capture this intuition, we describe a restricted subset of λ-calculus in a graph representation as our query graph."
                    },
                    {
                        "id": 45,
                        "string": "Our query graph consists of four types of nodes: grounded entity (rounded rectangle), existential variable (circle), lambda variable (shaded circle), aggregation function (diamond)."
                    },
                    {
                        "id": 46,
                        "string": "Grounded entities are existing entities in the knowledge base K. Existential variables and lambda variables are un- grounded entities."
                    },
                    {
                        "id": 47,
                        "string": "In particular, we would like to retrieve all the entities that can map to the lambda variables in the end as the answers."
                    },
                    {
                        "id": 48,
                        "string": "Aggregation function is designed to operate on a specific entity, which typically captures some numerical properties."
                    },
                    {
                        "id": 49,
                        "string": "Just like in the knowledge graph, related nodes in the query graph are connected by directed edges, labeled with predicates in K. To demonstrate this design, Fig."
                    },
                    {
                        "id": 50,
                        "string": "2 shows one possible query graph for the question \"Who first voiced Meg on Family Guy?\""
                    },
                    {
                        "id": 51,
                        "string": "using Freebase."
                    },
                    {
                        "id": 52,
                        "string": "The two entities, MegGriffin and FamilyGuy are represented by two rounded rectangle nodes."
                    },
                    {
                        "id": 53,
                        "string": "The circle node y means that there should exist an entity describing some casting relations like the character, actor and the time she started the role 2 ."
                    },
                    {
                        "id": 54,
                        "string": "The shaded circle node x is also called the answer node, and is used to map entities retrieved by the query."
                    },
                    {
                        "id": 55,
                        "string": "The diamond node arg min constrains that the answer needs to be the earliest actor for this role."
                    },
                    {
                        "id": 56,
                        "string": "Equivalently, the logical form query in λ-calculus without the aggregation function is: λx.∃y.cast(FamilyGuy, y) ∧ actor(y, x) ∧ character(y, MegGriffin) Running this query graph against K as in Fig."
                    },
                    {
                        "id": 57,
                        "string": "1 will match both LaceyChabert and MilaKunis before applying the aggregation function, but only LaceyChabert is the correct answer as she started this role earlier (by checking the from property of the grounded CVT node)."
                    },
                    {
                        "id": 58,
                        "string": "Our query graph design is inspired by (Reddy et al., 2014) , but with some key differences."
                    },
                    {
                        "id": 59,
                        "string": "The nodes and edges in our query graph closely resemble the exact entities and predicates from the knowledge base."
                    },
                    {
                        "id": 60,
                        "string": "As a result, the graph can be straightforwardly translated to a logical form query that is directly executable."
                    },
                    {
                        "id": 61,
                        "string": "In contrast, the query graph in (Reddy et al., 2014) is mapped from the CCG parse of the question, and needs further transformations before mapping to subgraphs 2 y should be grounded to a CVT entity in this case."
                    },
                    {
                        "id": 62,
                        "string": "Figure 3 : The legitimate actions to grow a query graph."
                    },
                    {
                        "id": 63,
                        "string": "See text for detail."
                    },
                    {
                        "id": 64,
                        "string": "f S e S p S c A e A p A a /A c A a /A c of the target knowledge base to retrieve answers."
                    },
                    {
                        "id": 65,
                        "string": "Semantically, our query graph is more related to simple λ-DCS (Berant et al., 2013; Liang, 2013) , which is a syntactic simplification of λ-calculus when applied to graph databases."
                    },
                    {
                        "id": 66,
                        "string": "A query graph can be viewed as the tree-like graph pattern of a logical form in λ-DCS."
                    },
                    {
                        "id": 67,
                        "string": "For instance, the path from the answer node to an entity node can be described using a series of join operations in λ-DCS."
                    },
                    {
                        "id": 68,
                        "string": "Different paths of the tree graph are combined via the intersection operators."
                    },
                    {
                        "id": 69,
                        "string": "Staged Query Graph Generation We focus on generating query graphs with the following properties."
                    },
                    {
                        "id": 70,
                        "string": "First, the tree graph consists of one entity node as the root, referred as the topic entity."
                    },
                    {
                        "id": 71,
                        "string": "Second, there exists only one lambda variable x as the answer node, with a directed path from the root to it, and has zero or more existential variables in-between."
                    },
                    {
                        "id": 72,
                        "string": "We call this path the core inferential chain of the graph, as it describes the main relationship between the answer and topic entity."
                    },
                    {
                        "id": 73,
                        "string": "Variables can only occur in this chain, and the chain only has variable nodes except the root."
                    },
                    {
                        "id": 74,
                        "string": "Finally, zero or more entity or aggregation nodes can be attached to each variable node, including the answer node."
                    },
                    {
                        "id": 75,
                        "string": "These branches are the additional constraints that the answers need to satisfy."
                    },
                    {
                        "id": 76,
                        "string": "For example, in Fig."
                    },
                    {
                        "id": 77,
                        "string": "2 , FamilyGuy is the root and FamilyGuy → y → x is the core inferential chain."
                    },
                    {
                        "id": 78,
                        "string": "The branch y → MegGriffin specifies the character and y → arg min constrains that the answer needs to be the earliest actor for this role."
                    },
                    {
                        "id": 79,
                        "string": "Given a question, we formalize the query graph generation process as a search problem, with staged states and actions."
                    },
                    {
                        "id": 80,
                        "string": "Let S = {φ, S e , S p , S c } be the set of states, where each state could be an empty graph (φ), a singlenode graph with the topic entity (S e ), a core inferential chain (S p ), or a more complex query graph with additional constraints (S c )."
                    },
                    {
                        "id": 81,
                        "string": "Let A = {A e , A p , A c , A a } be the set of actions."
                    },
                    {
                        "id": 82,
                        "string": "An action grows a given graph by adding some edges and nodes."
                    },
                    {
                        "id": 83,
                        "string": "In particular, A e picks an entity node; A p determines the core inferential chain; A c and A a add constraints and aggregation nodes, respectively."
                    },
                    {
                        "id": 84,
                        "string": "Given a state, the valid action set can be defined by the finite state diagram in Fig."
                    },
                    {
                        "id": 85,
                        "string": "3 ."
                    },
                    {
                        "id": 86,
                        "string": "Notice that the order of possible actions is chosen for the convenience of implementation."
                    },
                    {
                        "id": 87,
                        "string": "In principle, we could choose a different order, such as matching the core inferential chain first and then resolving the topic entity linking."
                    },
                    {
                        "id": 88,
                        "string": "However, since we will consider multiple hypotheses during search, the order of the staged actions can simply be viewed as a different way to prune the search space or to bias the exploration order."
                    },
                    {
                        "id": 89,
                        "string": "We define the reward function on the state space using a log-linear model."
                    },
                    {
                        "id": 90,
                        "string": "The reward basically estimates the likelihood that a query graph correctly parses the question."
                    },
                    {
                        "id": 91,
                        "string": "Search is done using the best-first strategy with a priority queue, which is formally defined in Appendix A."
                    },
                    {
                        "id": 92,
                        "string": "In the following subsections, we use a running example of finding the semantic parse of question q ex = \"Who first voiced Meg of Family Guy?\""
                    },
                    {
                        "id": 93,
                        "string": "to describe the sequence of actions."
                    },
                    {
                        "id": 94,
                        "string": "Linking Topic Entity Starting from the initial state s 0 , the valid actions are to create a single-node graph that corresponds to the topic entity found in the given question."
                    },
                    {
                        "id": 95,
                        "string": "For instance, possible topic entities in q ex can either be FamilyGuy or MegGriffin, shown in Fig."
                    },
                    {
                        "id": 96,
                        "string": "4 ."
                    },
                    {
                        "id": 97,
                        "string": "We use an entity linking system that is designed for short and noisy text (Yang and Chang, 2015) ."
                    },
                    {
                        "id": 98,
                        "string": "For each entity e in the knowledge base, the system first prepares a surface-form lexicon that lists all possible ways that e can be mentioned in text."
                    },
                    {
                        "id": 99,
                        "string": "This lexicon is created using various data sources, such as names and aliases of the entities, the anchor text in Web documents and the Wikipedia redirect table."
                    },
                    {
                        "id": 100,
                        "string": "Given a question, it considers all the consecutive word sequences that have occurred in the lexicon as possible mentions, paired with their possible entities."
                    },
                    {
                        "id": 101,
                        "string": "Each pair is then scored by a statistical model based on its frequency counts in the surface-form lexicon."
                    },
                    {
                        "id": 102,
                        "string": "To tolerate potential mistakes of the entity linking system, as well as exploring more possible query graphs, up to 10 topranked entities are considered as the topic entity."
                    },
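As a concrete illustration of this candidate generation, here is a minimal sketch; `surface_lexicon` and the frequency-based score are simplified stand-ins for the trained statistical model of Yang and Chang (2015):

```python
def link_topic_entities(question_tokens, surface_lexicon, top_k=10):
    """Enumerate consecutive word sequences found in a surface-form
    lexicon and score each (entity, mention) pair.

    surface_lexicon: hypothetical dict mapping a surface string to a
    list of (entity_id, frequency) pairs.
    """
    candidates = []
    n = len(question_tokens)
    for i in range(n):
        for j in range(i + 1, n + 1):
            mention = " ".join(question_tokens[i:j])
            entries = surface_lexicon.get(mention, [])
            total = float(sum(freq for _, freq in entries))
            for entity, freq in entries:
                candidates.append((entity, mention, freq / total))
    candidates.sort(key=lambda c: c[2], reverse=True)
    return candidates[:top_k]  # up to 10 top-ranked entities are kept
```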
                    {
                        "id": 103,
                        "string": "The linking score will also be used as a feature for the reward function."
                    },
                    {
                        "id": 104,
                        "string": "Identifying Core Inferential Chain Given a state s that corresponds to a single-node graph with the topic entity e, valid actions to extend this graph is to identify the core inferential chain; namely, the relationship between the topic entity and the answer."
                    },
                    {
                        "id": 105,
                        "string": "For example, Fig."
                    },
                    {
                        "id": 106,
                        "string": "5 shows three possible chains that expand the single-node graph in s 1 ."
                    },
                    {
                        "id": 107,
                        "string": "Because the topic entity e is given, we only need to explore legitimate predicate sequences that can start from e. Specifically, to restrict the search space, we explore all paths of length 2 when the middle existential variable can be grounded to a CVT node and paths of length 1 if not."
                    },
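A sketch of this restricted path enumeration, assuming a toy adjacency-map KB and a hypothetical `is_cvt` predicate:

```python
def candidate_core_chains(kb, topic_entity, is_cvt):
    """Enumerate candidate core inferential chains from the topic entity.

    kb: hypothetical adjacency map, kb[node] -> list of (predicate,
    neighbor) pairs.  Length-2 chains are explored only when the middle
    existential variable grounds to a CVT node; length-1 chains otherwise.
    """
    chains = set()
    for p1, mid in kb.get(topic_entity, []):
        if is_cvt(mid):
            for p2, _ in kb.get(mid, []):
                chains.add((p1, p2))   # e.g. (cast, actor)
        else:
            chains.add((p1,))          # e.g. (genre,)
    return chains
```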
                    {
                        "id": 108,
                        "string": "We also consider longer predicate sequences if the combinations are observed in training data 3 ."
                    },
                    {
                        "id": 109,
                        "string": "Analogous to the entity linking problem, where the goal is to find the mapping of mentions to entities in K, identifying the core inferential chain is to map the natural utterance of the question to the correct predicate sequence."
                    },
                    {
                        "id": 110,
                        "string": "For question \"Who first voiced Meg on [Family Guy]?\""
                    },
                    {
                        "id": 111,
                        "string": "we need to measure the likelihood that each of the sequences in {cast-actor, writer-start, genre} correctly captures the relationship between Family Guy and Who."
                    },
                    {
                        "id": 112,
                        "string": "We reduce this problem to measuring semantic similarity using neural networks."
                    },
                    {
                        "id": 113,
                        "string": "Figure 6 : The architecture of the convolutional neural networks (CNN) used in this work."
                    },
                    {
                        "id": 114,
                        "string": "The CNN model maps a variable-length word sequence (e.g., a pattern or predicate sequence) to a low-dimensional vector in a latent semantic space."
                    },
                    {
                        "id": 115,
                        "string": "See text for the description of each layer."
                    },
                    {
                        "id": 116,
                        "string": "Deep Convolutional Neural Networks To handle the huge variety of the semantically equivalent ways of stating the same question, as well as the mismatch of the natural language utterances and predicates in the knowledge base, we propose using Siamese neural networks (Bromley et al., 1993) for identifying the core inferential chain."
                    },
                    {
                        "id": 117,
                        "string": "For instance, one of our constructions maps the question to a pattern by replacing the entity mention with a generic symbol <e> and then compares it with a candidate chain, such as \"who first voiced meg on <e>\" vs. cast-actor."
                    },
                    {
                        "id": 118,
                        "string": "The model consists of two neural networks, one for the pattern and the other for the inferential chain."
                    },
                    {
                        "id": 119,
                        "string": "Both are mapped to k-dimensional vectors as the output of the networks."
                    },
                    {
                        "id": 120,
                        "string": "Their semantic similarity is then computed using some distance function, such as cosine."
                    },
                    {
                        "id": 121,
                        "string": "This continuous-space representation approach has been proposed recently for semantic parsing and question answering (Bordes et al., 2014a; Yih et al., 2014) and has shown better results compared to lexical matching approaches (e.g., word-alignment models)."
                    },
                    {
                        "id": 122,
                        "string": "In this work, we adapt a convolutional neural network (CNN) framework (Shen et al., 2014b; Shen et al., 2014a; Gao et al., 2014) to this matching problem."
                    },
                    {
                        "id": 123,
                        "string": "The network architecture is illustrated in Fig."
                    },
                    {
                        "id": 124,
                        "string": "6 ."
                    },
                    {
                        "id": 125,
                        "string": "The CNN model first applies a word hashing technique (Huang et al., 2013) that breaks a word into a vector of letter-trigrams (x t → f t in Fig."
                    },
                    {
                        "id": 126,
                        "string": "6 )."
                    },
                    {
                        "id": 127,
                        "string": "For example, the bag of letter-trigrams of the word \"who\" are #-w-h, w-h-o, h-o-# after adding the Figure 7 : Extending an inferential chain with constraints and aggregation functions."
                    },
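The word-hashing step is easy to reproduce; a minimal sketch:

```python
def letter_trigrams(word):
    """Bag of letter-trigrams of a word after adding the boundary symbol #."""
    padded = "#" + word + "#"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

# letter_trigrams("who") -> ["#wh", "who", "ho#"], i.e. #-w-h, w-h-o, h-o-#
```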
                    {
                        "id": 128,
                        "string": "word boundary symbol #."
                    },
                    {
                        "id": 129,
                        "string": "Then, it uses a convolutional layer to project the letter-trigram vectors of words within a context window of 3 words to a local contextual feature vector (f t → h t ), followed by a max pooling layer that extracts the most salient local features to form a fixed-length global feature vector (v)."
                    },
                    {
                        "id": 130,
                        "string": "The global feature vector is then fed to feed-forward neural network layers to output the final non-linear semantic features (y), as the vector representation of either the pattern or the inferential chain."
                    },
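The forward pass of this encoder can be sketched in plain numpy; this is an illustrative reconstruction of the description above, not the authors' implementation, and the weight matrices are assumed to be learned elsewhere:

```python
import numpy as np

def cnn_semantic_vector(trigram_vecs, W_conv, W_sem):
    """Map a word sequence to its semantic vector y.

    trigram_vecs: (T, d) letter-trigram count vectors f_t, one per word.
    W_conv:       (3 * d, h) convolution weights over a 3-word window.
    W_sem:        (h, k) weights of the semantic layer.
    """
    T, d = trigram_vecs.shape
    padded = np.vstack([np.zeros((1, d)), trigram_vecs, np.zeros((1, d))])
    # local contextual feature vectors over 3-word windows (f_t -> h_t)
    h = np.tanh(np.stack([padded[t:t + 3].reshape(-1) @ W_conv
                          for t in range(T)]))
    v = h.max(axis=0)           # max pooling -> fixed-length global vector v
    return np.tanh(v @ W_sem)   # non-linear semantic features y

def cosine(y1, y2):
    """Semantic similarity of a pattern vector and a chain vector."""
    return float(y1 @ y2 / (np.linalg.norm(y1) * np.linalg.norm(y2)))
```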
                    {
                        "id": 131,
                        "string": "Training the model needs positive pairs, such as a pattern like \"who first voiced meg on <e>\" and an inferential chain like cast-actor."
                    },
                    {
                        "id": 132,
                        "string": "These pairs can be extracted from the full semantic parses when provided in the training data."
                    },
                    {
                        "id": 133,
                        "string": "If the correct semantic parses are latent and only the pairs of questions and answers are available, such as the case in the WEBQUESTIONS dataset, we can still hypothesize possible inferential chains by traversing the paths in the knowledge base that connect the topic entity and the answer."
                    },
                    {
                        "id": 134,
                        "string": "Sec."
                    },
                    {
                        "id": 135,
                        "string": "4.1 will illustrate this data generation process in detail."
                    },
                    {
                        "id": 136,
                        "string": "Our model has two advantages over the embedding approach (Bordes et al., 2014a) ."
                    },
                    {
                        "id": 137,
                        "string": "First, the word hashing layer helps control the dimensionality of the input space and can easily scale to large vocabulary."
                    },
                    {
                        "id": 138,
                        "string": "The letter-trigrams also capture some sub-word semantics (e.g., words with minor typos have almost identical letter-trigram vectors), which makes it especially suitable for questions from real-world users, such as those issued to a search engine."
                    },
                    {
                        "id": 139,
                        "string": "Second, it uses a deeper architecture with convolution and max-pooling layers, which has more representation power."
                    },
                    {
                        "id": 140,
                        "string": "Augmenting Constraints & Aggregations A graph with just the inferential chain forms the simplest legitimate query graph and can be executed against the knowledge base K to retrieve the answers; namely, all the entities that x can be grounded to."
                    },
                    {
                        "id": 141,
                        "string": "For instance, the graph in s 3 in Fig."
                    },
                    {
                        "id": 142,
                        "string": "7 will retrieve all the actors who have been on FamilyGuy."
                    },
                    {
                        "id": 143,
                        "string": "Although this set of entities obviously contains the correct answer to the question (assuming the topic entity FamilyGuy is correct), it also includes incorrect entities that do not satisfy additional constraints implicitly or explicitly mentioned in the question."
                    },
                    {
                        "id": 144,
                        "string": "To further restrict the set of answer entities, the graph with only the core inferential chain can be expanded by two types of actions: A c and A a ."
                    },
                    {
                        "id": 145,
                        "string": "A c is the set of possible ways to attach an entity to a variable node, where the edge denotes one of the valid predicates that can link the variable to the entity."
                    },
                    {
                        "id": 146,
                        "string": "For instance, in Fig."
                    },
                    {
                        "id": 147,
                        "string": "7 , s 6 is created by attaching MegGriffin to y with the predicate character."
                    },
                    {
                        "id": 148,
                        "string": "This is equivalent to the last conjunctive term in the corresponding λ-expression: λx.∃y.cast(FamilyGuy, y) ∧ actor(y, x) ∧ character(y, MegGriffin)."
                    },
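Executing this constrained graph against a toy adjacency-map KB makes the correspondence with the λ-expression explicit; the predicate and entity names below are illustrative, not actual Freebase identifiers:

```python
def execute_s6(kb):
    """Ground lambda x. exists y. cast(FamilyGuy, y) AND actor(y, x)
    AND character(y, MegGriffin) over kb[node] -> [(predicate, neighbor)]."""
    answers = set()
    for p1, y in kb.get("FamilyGuy", []):
        if p1 != "cast":
            continue
        neighbors = kb.get(y, [])
        if ("character", "MegGriffin") not in neighbors:  # constraint on y
            continue
        answers |= {n for p, n in neighbors if p == "actor"}
    return answers
```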
                    {
                        "id": 149,
                        "string": "Sometimes, the constraints are described over the entire answer set through the aggregation function, such as the word \"first\" in our example question q ex ."
                    },
                    {
                        "id": 150,
                        "string": "This is handled similarly by actions A a , which attach an aggregation node on a variable node."
                    },
                    {
                        "id": 151,
                        "string": "For example, the arg min node of s 7 in Fig."
                    },
                    {
                        "id": 152,
                        "string": "7 chooses the grounding with the smallest from attribute of y."
                    },
                    {
                        "id": 153,
                        "string": "The full possible constraint set can be derived by first issuing the core inferential chain as a query to the knowledge base to find the bindings of variables y's and x, and then enumerating all neighboring nodes of these entities."
                    },
                    {
                        "id": 154,
                        "string": "This, however, often results in an unnecessarily large constraint pool."
                    },
                    {
                        "id": 155,
                        "string": "In this work, we employ simple rules to retain only the nodes that have some possibility to be legitimate constraints."
                    },
                    {
                        "id": 156,
                        "string": "For instance, a constraint node can be an entity that also appears in the question (detected by our entity linking component), or an aggregation constraint can only be added if certain keywords like \"first\" or \"latest\" occur in the question."
                    },
                    {
                        "id": 157,
                        "string": "The complete set of these rules can be found in Appendix B."
                    },
                    {
                        "id": 158,
                        "string": "Learning the reward function Given a state s, the reward function γ(s) basically judges whether the query graph represented by s is the correct semantic parse of the input question q."
                    },
                    {
                        "id": 159,
                        "string": "We use a log-linear model to learn the reward function."
                    },
                    {
                        "id": 160,
                        "string": "Below we describe the features and the learning process."
                    },
                    {
                        "id": 161,
                        "string": "Features The features we designed essentially match specific portions of the graph to the question, and generally correspond to the staged actions described previously, including: Topic Entity The score returned by the entity linking system is directly used as a feature."
                    },
                    {
                        "id": 162,
                        "string": "Core Inferential Chain We use similarity scores of different CNN models described in Sec."
                    },
                    {
                        "id": 163,
                        "string": "3.2.1 to measure the quality of the core inferential chain."
                    },
                    {
                        "id": 164,
                        "string": "PatChain compares the pattern (replacing the topic entity with an entity symbol) and the predicate sequence."
                    },
                    {
                        "id": 165,
                        "string": "QuesEP concatenates the canonical name of the topic entity and the predicate sequence, and compares it with the question."
                    },
                    {
                        "id": 166,
                        "string": "This feature conceptually tries to verify the entity linking suggestion."
                    },
                    {
                        "id": 167,
                        "string": "These two CNN models are learned using pairs of the question and the inferential chain of the parse in the training data."
                    },
                    {
                        "id": 168,
                        "string": "In addition to the in-domain similarity features, we also train a ClueWeb model using the Freebase annotation of ClueWeb corpora (Gabrilovich et al., 2013) ."
                    },
                    {
                        "id": 169,
                        "string": "For two entities in a sentence that can be linked by one or two predicates, we pair the sentences and predicates to form a parallel corpus to train the CNN model."
                    },
                    {
                        "id": 170,
                        "string": "Constraints & Aggregations When a constraint node is present in the graph, we use some simple features to check whether there are words in the question that can be associated with the constraint entity or property."
                    },
                    {
                        "id": 171,
                        "string": "Examples of such features include whether a mention in the question can be linked to this entity, and the percentage of the words in the name of the constraint entity appear in the question."
                    },
                    {
                        "id": 172,
                        "string": "Similarly, we check the existence of some keywords in a pre-compiled list, such as \"first\", \"current\" or \"latest\" as features for aggregation nodes such as arg min."
                    },
                    {
                        "id": 173,
                        "string": "The complete list of these simple word matching features can also be found in Appendix B."
                    },
                    {
                        "id": 174,
                        "string": "Overall The number of the answer entities retrieved when issuing the query to the knowledge base and the number of nodes in the query graph are both included as features."
                    },
                    {
                        "id": 175,
                        "string": "(1) EntityLinkingScore(FamilyGuy, Family Guy ) = 0.9 (2) PatChain( who first voiced meg on <e> , cast-actor) = 0.7 (3) QuesEP(q, family guy cast-actor ) = 0.6 (4) ClueWeb( who first voiced meg on <e> , cast-actor) = 0.2 (5) ConstraintEntityWord( Meg Griffin , q) = 0.5 (6) ConstraintEntityInQ( Meg Griffin , q) = 1 (7) AggregationKeyword(argmin, q) = 1 (8) NumNodes(s) = 5 (9) NumAns(s) = 1 s Figure 8 : Active features of a query graph s. (1) is the entity linking score of the topic entity."
                    },
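Since the reward is log-linear, scoring a state reduces to a dot product over its active features; a minimal sketch using the feature values of Fig. 8 (the weight vector itself would come from the ranker training described below):

```python
def reward(features, weights):
    """Log-linear reward: for ranking, the unnormalized linear score
    over active features is all that is needed."""
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())

fig8_features = {
    "EntityLinkingScore": 0.9, "PatChain": 0.7, "QuesEP": 0.6,
    "ClueWeb": 0.2, "ConstraintEntityWord": 0.5, "ConstraintEntityInQ": 1.0,
    "AggregationKeyword": 1.0, "NumNodes": 5.0, "NumAns": 1.0,
}
```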
                    {
                        "id": 176,
                        "string": "(2)-(4) are different model scores of the core chain."
                    },
                    {
                        "id": 177,
                        "string": "(5) indicates 50% of the words in \"Meg Griffin\" appear in the question q."
                    },
                    {
                        "id": 178,
                        "string": "(6) is 1 when the mention \"Meg\" in q is correctly linked to MegGriffin by the entity linking component."
                    },
                    {
                        "id": 179,
                        "string": "(8) is the number of nodes in s. The knowledge base returns only 1 entity when issuing this query, so (9) is 1."
                    },
                    {
                        "id": 180,
                        "string": "To illustrate our feature design, Fig."
                    },
                    {
                        "id": 181,
                        "string": "8 presents the active features of an example query graph."
                    },
                    {
                        "id": 182,
                        "string": "Learning In principle, once the features are extracted, the model can be trained using any standard off-theshelf learning algorithm."
                    },
                    {
                        "id": 183,
                        "string": "Instead of treating it as a binary classification problem, where only the correct query graphs are labeled as positive, we view it as a ranking problem."
                    },
                    {
                        "id": 184,
                        "string": "Suppose we have several candidate query graphs for each question 4 ."
                    },
                    {
                        "id": 185,
                        "string": "Let g a and g b be the query graphs described in states s a and s b for the same question q, and the entity sets A a and A b be those retrieved by executing g a and g b , respectively."
                    },
                    {
                        "id": 186,
                        "string": "Suppose that A is the labeled answers to q."
                    },
                    {
                        "id": 187,
                        "string": "We first compute the precision, recall and F 1 score of A a and A b , compared with the gold answer set A."
                    },
                    {
                        "id": 188,
                        "string": "We then rank s a and s b by their F 1 scores 5 ."
                    },
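Computing the ranking label is straightforward; a minimal sketch of the per-graph F1 used to order candidate states:

```python
def f1_against_gold(predicted, gold):
    """F1 of the retrieved entity set vs. the labeled answer set A;
    candidate query graphs are ranked by this score."""
    predicted, gold = set(predicted), set(gold)
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    prec = overlap / len(predicted)
    rec = overlap / len(gold)
    return 2 * prec * rec / (prec + rec)
```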
                    {
                        "id": 189,
                        "string": "The intuition behind is that even if a query is not completely correct, it is still preferred than some other totally incorrect queries."
                    },
                    {
                        "id": 190,
                        "string": "In this work, we use a one-layer neural network model based on lambda-rank (Burges, 2010) for training the ranker."
                    },
                    {
                        "id": 191,
                        "string": "Experiments We first introduce the dataset and evaluation metric, followed by the main experimental results and some analysis."
                    },
                    {
                        "id": 192,
                        "string": "Data & evaluation metric We use the WEBQUESTIONS dataset (Berant et al., 2013) , which consists of 5,810 question/answer pairs."
                    },
                    {
                        "id": 193,
                        "string": "These questions were collected using Google Suggest API and the answers were obtained from Freebase with the help of Amazon MTurk."
                    },
                    {
                        "id": 194,
                        "string": "The questions are split into training and testing sets, which contain 3,778 questions (65%) and 2,032 questions (35%), respectively."
                    },
                    {
                        "id": 195,
                        "string": "This dataset has several unique properties that make it appealing and was used in several recent papers on semantic parsing and question answering."
                    },
                    {
                        "id": 196,
                        "string": "For instance, although the questions are not directly sampled from search query logs, the selection process was still biased to commonly asked questions on a search engine."
                    },
                    {
                        "id": 197,
                        "string": "The distribution of this question set is thus closer to the \"real\" information need of search users than that of a small number of human editors."
                    },
                    {
                        "id": 198,
                        "string": "The system performance is basically measured by the ratio of questions that are answered correctly."
                    },
                    {
                        "id": 199,
                        "string": "Because there can be more than one answer to a question, precision, recall and F 1 are computed based on the system output for each individual question."
                    },
                    {
                        "id": 200,
                        "string": "The average F 1 score is reported as the main evaluation metric 6 ."
                    },
                    {
                        "id": 201,
                        "string": "Because this dataset contains only question and answer pairs, we use essentially the same search procedure to simulate the semantic parses for training the CNN models and the overall reward function."
                    },
                    {
                        "id": 202,
                        "string": "Candidate topic entities are first generated using the same entity linking system for each question in the training data."
                    },
                    {
                        "id": 203,
                        "string": "Paths on the Freebase knowledge graph that connect a candidate entity to at least one answer entity are identified as the core inferential chains 7 ."
                    },
                    {
                        "id": 204,
                        "string": "If an inferentialchain query returns more entities than the correct answers, we explore adding constraint and aggregation nodes, until the entities retrieved by the query graph are identical to the labeled answers, or the F 1 score cannot be increased further."
                    },
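This data-generation loop is essentially a greedy hill-climb on F1; a minimal sketch reusing `f1_against_gold` from the earlier snippet, with hypothetical `extensions` (one-step constraint/aggregation additions) and `execute` helpers:

```python
def simulate_parse(chain_graph, extensions, execute, gold):
    """Grow a chain-only graph until its retrieved entities match the
    labeled answers or F1 cannot be increased further."""
    best_g = chain_graph
    best_f1 = f1_against_gold(execute(best_g), gold)
    improved = True
    while improved and best_f1 < 1.0:
        improved = False
        for g in extensions(best_g):
            f1 = f1_against_gold(execute(g), gold)
            if f1 > best_f1:
                best_g, best_f1, improved = g, f1, True
    return best_g, best_f1
```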
                    {
                        "id": 205,
                        "string": "Negative examples are sampled from of the incorrect candidate graphs generated during the search process."
                    },
                    {
                        "id": 206,
                        "string": "Method Prec."
                    },
                    {
                        "id": 207,
                        "string": "Rec."
                    },
                    {
                        "id": 208,
                        "string": "F1 (Berant et al., 2013) 48.0 41.3 35.7 (Bordes et al., 2014b) --29.7 (Yao and Van Durme, 2014) --33.0 (Berant and Liang, 2014) 40.5 46.6 39.9 (Bao et al., 2014) --37.5 (Bordes et al., 2014a) --39.2 (Yang et al., 2014) --41.3 (Wang et al., 2014) --45.3 Our approach -STAGG 52.8 60.7 52.5 Table 1 : The results of our approach compared to existing work."
                    },
                    {
                        "id": 209,
                        "string": "The numbers of other systems are either from the original papers or derived from the evaluation script, when the output is available."
                    },
                    {
                        "id": 210,
                        "string": "In the end, we produce 17,277 query graphs with none-zero F 1 scores from the training set questions and about 1.7M completely incorrect ones."
                    },
                    {
                        "id": 211,
                        "string": "For training the CNN models to identify the core inferential chain (Sec."
                    },
                    {
                        "id": 212,
                        "string": "3.2.1), we only use 4,058 chain-only query graphs that achieve F 1 = 0.5 to form the parallel question and predicate sequence pairs."
                    },
                    {
                        "id": 213,
                        "string": "The hyper-parameters in CNN, such as the learning rate and the numbers of hidden nodes at the convolutional and semantic layers were chosen via cross-validation."
                    },
                    {
                        "id": 214,
                        "string": "We reserved 684 pairs of patterns and inference-chains from the whole training examples as the held-out set, and the rest as the initial training set."
                    },
                    {
                        "id": 215,
                        "string": "The optimal hyper-parameters were determined by the performance of models trained on the initial training set when applied to the held-out data."
                    },
                    {
                        "id": 216,
                        "string": "We then fixed the hyper-parameters and retrained the CNN models using the whole training set."
                    },
                    {
                        "id": 217,
                        "string": "The performance of CNN is insensitive to the hyperparameters as long as they are in a reasonable range (e.g., 1000 ± 200 nodes in the convolutional layer, 300 ± 100 nodes in the semantic layer, and learning rate 0.05 ∼ 0.005) and the training process often converges after ∼ 800 epochs."
                    },
                    {
                        "id": 218,
                        "string": "When training the reward function, we created up to 4,000 examples for each question that contain all the positive query graphs and randomly selected negative examples."
                    },
                    {
                        "id": 219,
                        "string": "The model is trained as a ranker, where example query graphs are ranked by their F 1 scores."
                    },
                    {
                        "id": 220,
                        "string": "Results Tab."
                    },
                    {
                        "id": 221,
                        "string": "1 shows the results of our system, STAGG (Staged query graph generation), compared to existing work 8 ."
                    },
                    {
                        "id": 222,
                        "string": "As can be seen from the table, our 8 We do not include results of (Reddy et al., 2014) system outperforms the previous state-of-the-art method by a large margin -7.2% absolute gain."
                    },
                    {
                        "id": 223,
                        "string": "Given the staged design of our approach, it is thus interesting to examine the contributions of each component."
                    },
                    {
                        "id": 224,
                        "string": "Because topic entity linking is the very first stage, the quality of the entities found in the questions, both in precision and recall, affects the final results significantly."
                    },
                    {
                        "id": 225,
                        "string": "To get some insight about how our topic entity linking component performs, we also experimented with applying Freebase Search API to suggest entities for possible mentions in a question."
                    },
                    {
                        "id": 226,
                        "string": "As can be observed in Tab."
                    },
                    {
                        "id": 227,
                        "string": "2, to cover most of the training questions, we only need half of the number of suggestions when using our entity linking component, compared to Freebase API."
                    },
                    {
                        "id": 228,
                        "string": "Moreover, they also cover more entities that were selected as the topic entities in the original dataset."
                    },
                    {
                        "id": 229,
                        "string": "Starting from those 9,147 entities output by our component, answers of 3,453 questions (91.4%) can be found in their neighboring nodes."
                    },
                    {
                        "id": 230,
                        "string": "When replacing our entity linking component with the results from Freebase API, we also observed a significant performance degradation."
                    },
                    {
                        "id": 231,
                        "string": "The overall system performance drops from 52.5% to 48.4% in F 1 (Prec = 49.8%, Rec = 55.7%), which is 4.1 points lower."
                    },
                    {
                        "id": 232,
                        "string": "Next we test the system performance when the query graph has just the core inferential chain."
                    },
                    {
                        "id": 233,
                        "string": "Tab."
                    },
                    {
                        "id": 234,
                        "string": "3 summarizes the results."
                    },
                    {
                        "id": 235,
                        "string": "When only the PatChain CNN model is used, the performance is already very strong, outperforming all existing work."
                    },
                    {
                        "id": 236,
                        "string": "Adding the other CNN models boosts the performance further, reaching 51.8% and is only slightly lower than the full system performance."
                    },
                    {
                        "id": 237,
                        "string": "This may be due to two reasons."
                    },
                    {
                        "id": 238,
                        "string": "First, the questions from search engine users are often short and a large portion of them simply ask about properties of an entity."
                    },
                    {
                        "id": 239,
                        "string": "Examining the query graphs generated for training set questions, we found that 1,888 directly comparable to results from other work."
                    },
                    {
                        "id": 240,
                        "string": "On these 570 questions, our system achieves 67.0% in F1."
                    },
                    {
                        "id": 241,
                        "string": "(50.0%) can be answered exactly (i.e., F 1 = 1) using a chain-only query graph."
                    },
                    {
                        "id": 242,
                        "string": "Second, even if the correct parse requires more constraints, the less constrained graph still gets a partial score, as its results cover the correct answers."
                    },
                    {
                        "id": 243,
                        "string": "Error Analysis Although our approach substantially outperforms existing methods, the room for improvement seems big."
                    },
                    {
                        "id": 244,
                        "string": "After all, the accuracy for the intended application, question answering, is still low and only slightly above 50%."
                    },
                    {
                        "id": 245,
                        "string": "We randomly sampled 100 questions that our system did not generate the completely correct query graphs, and categorized the errors."
                    },
                    {
                        "id": 246,
                        "string": "About one third of errors are in fact due to label issues and are not real mistakes."
                    },
                    {
                        "id": 247,
                        "string": "This includes label error (2%), incomplete labels (17%, e.g., only one song is labeled as the answer to \"What songs did Bob Dylan write?\")"
                    },
                    {
                        "id": 248,
                        "string": "and acceptable answers (15%, e.g., \"Time in China\" vs. \"UTC+8\")."
                    },
                    {
                        "id": 249,
                        "string": "8% of the errors are due to incorrect entity linking; however, sometimes the mention is inherently ambiguous (e.g., AFL in \"Who founded the AFL?\""
                    },
                    {
                        "id": 250,
                        "string": "could mean either \"American Football League\" or \"American Federation of Labor\")."
                    },
                    {
                        "id": 251,
                        "string": "35% of the errors are because of the incorrect inferential chains; 23% are due to incorrect or missing constraints."
                    },
                    {
                        "id": 252,
                        "string": "Related Work and Discussion Several semantic parsing methods use a domainindependent meaning representation derived from the combinatory categorial grammar (CCG) parses (e.g., (Cai and Yates, 2013; Kwiatkowski et al., 2013; Reddy et al., 2014) )."
                    },
                    {
                        "id": 253,
                        "string": "In contrast, our query graph design matches closely the graph knowledge base."
                    },
                    {
                        "id": 254,
                        "string": "Although not fully demonstrated in this paper, the query graph can in fact be fairly expressive."
                    },
                    {
                        "id": 255,
                        "string": "For instance, negations can be handled by adding tags to the constraint nodes indicating that certain conditions cannot be satisfied."
                    },
                    {
                        "id": 256,
                        "string": "Our graph generation method is inspired by (Yao and Van Durme, 2014; Bao et al., 2014) ."
                    },
                    {
                        "id": 257,
                        "string": "Unlike traditional semantic parsing approaches, it uses the knowledge base to help prune the search space when forming the parse."
                    },
                    {
                        "id": 258,
                        "string": "Similar ideas have also been explored in (Poon, 2013) ."
                    },
                    {
                        "id": 259,
                        "string": "Empirically, our results suggest that it is crucial to identify the core inferential chain, which matches the relationship between the topic entity in the question and the answer."
                    },
                    {
                        "id": 260,
                        "string": "Our CNN models can be analogous to the embedding approaches (Bordes et al., 2014a; Yang et al., 2014) , but are more sophisticated."
                    },
                    {
                        "id": 261,
                        "string": "By allowing parameter sharing among different question-pattern and KB predicate pairs, the matching score of a rare or even unseen pair in the training data can still be predicted precisely."
                    },
                    {
                        "id": 262,
                        "string": "This is due to the fact that the prediction is based on the shared model parameters (i.e., projection matrices) that are estimated using all training pairs."
                    },
                    {
                        "id": 263,
                        "string": "Conclusion In this paper, we present a semantic parsing framework for question answering using a knowledge base."
                    },
                    {
                        "id": 264,
                        "string": "We define a query graph as the meaning representation that can be directly mapped to a logical form."
                    },
                    {
                        "id": 265,
                        "string": "Semantic parsing is reduced to query graph generation, formulated as a staged search problem."
                    },
                    {
                        "id": 266,
                        "string": "With the help of an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially on the WEBQUESTIONS dataset."
                    },
                    {
                        "id": 267,
                        "string": "In the future, we would like to extend our query graph to represent more complicated questions, and explore more features and models for matching constraints and aggregation functions."
                    },
                    {
                        "id": 268,
                        "string": "Applying other structured-output prediction methods to graph generation will also be investigated."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 31
                    },
                    {
                        "section": "Background",
                        "n": "2",
                        "start": 32,
                        "end": 34
                    },
                    {
                        "section": "Knowledge base",
                        "n": "2.1",
                        "start": 35,
                        "end": 42
                    },
                    {
                        "section": "Query graph",
                        "n": "2.2",
                        "start": 43,
                        "end": 68
                    },
                    {
                        "section": "Staged Query Graph Generation",
                        "n": "3",
                        "start": 69,
                        "end": 93
                    },
                    {
                        "section": "Linking Topic Entity",
                        "n": "3.1",
                        "start": 94,
                        "end": 103
                    },
                    {
                        "section": "Identifying Core Inferential Chain",
                        "n": "3.2",
                        "start": 104,
                        "end": 115
                    },
                    {
                        "section": "Deep Convolutional Neural Networks",
                        "n": "3.2.1",
                        "start": 116,
                        "end": 139
                    },
                    {
                        "section": "Augmenting Constraints & Aggregations",
                        "n": "3.3",
                        "start": 140,
                        "end": 157
                    },
                    {
                        "section": "Learning the reward function",
                        "n": "3.4",
                        "start": 158,
                        "end": 160
                    },
                    {
                        "section": "Features",
                        "n": "3.4.1",
                        "start": 161,
                        "end": 181
                    },
                    {
                        "section": "Learning",
                        "n": "3.4.2",
                        "start": 182,
                        "end": 189
                    },
                    {
                        "section": "Experiments",
                        "n": "4",
                        "start": 190,
                        "end": 191
                    },
                    {
                        "section": "Data & evaluation metric",
                        "n": "4.1",
                        "start": 192,
                        "end": 219
                    },
                    {
                        "section": "Results",
                        "n": "4.2",
                        "start": 220,
                        "end": 242
                    },
                    {
                        "section": "Error Analysis",
                        "n": "4.3",
                        "start": 243,
                        "end": 251
                    },
                    {
                        "section": "Related Work and Discussion",
                        "n": "5",
                        "start": 252,
                        "end": 262
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 263,
                        "end": 268
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1072-Figure1-1.png",
                        "caption": "Figure 1: Freebase subgraph of Family Guy",
                        "page": 1,
                        "bbox": {
                            "x1": 322.56,
                            "x2": 515.04,
                            "y1": 81.6,
                            "y2": 264.0
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Figure8-1.png",
                        "caption": "Figure 8: Active features of a query graph s. (1) is the entity linking score of the topic entity. (2)- (4) are different model scores of the core chain. (5) indicates 50% of the words in “Meg Griffin” appear in the question q. (6) is 1 when the mention “Meg” in q is correctly linked to MegGriffin by the entity linking component. (8) is the number of nodes in s. The knowledge base returns only 1 entity when issuing this query, so (9) is 1.",
                        "page": 6,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 287.52,
                            "y1": 67.2,
                            "y2": 233.76
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Figure3-1.png",
                        "caption": "Figure 3: The legitimate actions to grow a query graph. See text for detail.",
                        "page": 2,
                        "bbox": {
                            "x1": 312.96,
                            "x2": 517.92,
                            "y1": 73.44,
                            "y2": 110.39999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Figure2-1.png",
                        "caption": "Figure 2: Query graph that represents the question “Who first voiced Meg on Family Guy?”",
                        "page": 2,
                        "bbox": {
                            "x1": 86.88,
                            "x2": 276.0,
                            "y1": 80.64,
                            "y2": 133.92
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Table1-1.png",
                        "caption": "Table 1: The results of our approach compared to existing work. The numbers of other systems are either from the original papers or derived from the evaluation script, when the output is available.",
                        "page": 7,
                        "bbox": {
                            "x1": 87.84,
                            "x2": 274.08,
                            "y1": 65.75999999999999,
                            "y2": 169.92
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Table2-1.png",
                        "caption": "Table 2: Statistics of entity linking results on training set questions. Both methods cover roughly the same number of questions, but Freebase API suggests twice the number of entities output by our entity linking system and covers fewer topic entities labeled in the original data.",
                        "page": 7,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 65.75999999999999,
                            "y2": 100.32
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Figure4-1.png",
                        "caption": "Figure 4: Two possible topic entity linking actions applied to an empty graph, for question “Who first voiced [Meg] on [Family Guy]?”",
                        "page": 3,
                        "bbox": {
                            "x1": 104.64,
                            "x2": 258.24,
                            "y1": 71.52,
                            "y2": 144.0
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Figure5-1.png",
                        "caption": "Figure 5: Candidate core inferential chains start from the entity FamilyGuy.",
                        "page": 3,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 526.0799999999999,
                            "y1": 67.67999999999999,
                            "y2": 147.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Table3-1.png",
                        "caption": "Table 3: The system results when only the inferential-chain query graphs are generated. We started with the PatChain CNN model and then added QuesEP and ClueWeb sequentially. See Sec. 3.4 for the description of these models.",
                        "page": 8,
                        "bbox": {
                            "x1": 111.83999999999999,
                            "x2": 251.04,
                            "y1": 65.75999999999999,
                            "y2": 109.92
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Figure6-1.png",
                        "caption": "Figure 6: The architecture of the convolutional neural networks (CNN) used in this work. The CNN model maps a variable-length word sequence (e.g., a pattern or predicate sequence) to a low-dimensional vector in a latent semantic space. See text for the description of each layer.",
                        "page": 4,
                        "bbox": {
                            "x1": 82.56,
                            "x2": 282.24,
                            "y1": 76.8,
                            "y2": 208.79999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1072-Figure7-1.png",
                        "caption": "Figure 7: Extending an inferential chain with constraints and aggregation functions.",
                        "page": 4,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 526.0799999999999,
                            "y1": 68.64,
                            "y2": 199.2
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-27"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation",
                    "text": [
                        "User attribute prediction from text is successful:",
                        "I Gender (Burger et al. 2011 EMNLP)",
                        "I Location (Eisenstein et al. 2011 EMNLP)",
                        "I Personality (Schwartz et al. 2013 PLoS One)",
                        "I Impact (Lampos et al. 2014 EACL)",
                        "I Political orientation (Volkova et al. 2014 ACL)",
                        "I Mental illness (Coppersmith et al. 2014 ACL)",
                        "Downstream applications are benefiting from this:",
                        "I Sentiment analysis (Volkova et al. 2013 EMNLP)",
                        "I Text classification (Hovy 2015 ACL)"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "However",
                    "text": [
                        "Socio-economic factors (occupation, social class, education, income) play a vital role in language use",
                        "No large scale user level dataset to date",
                        "I sociological analysis of language use",
                        "I embedding to downstream tasks (e.g. controlling for"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "At a Glance",
                    "text": [
                        "I Predicting new user attribute: occupation",
                        "I New dataset: user occupation",
                        "I Gaussian Process classification for NLP tasks",
                        "I Feature ranking and analysis using non-linear methods"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "8": {
                    "title": "Gaussian Processes",
                    "text": [
                        "Brings together several key ideas in one framework:",
                        "Elegant and powerful framework, with growing popularity in machine learning and application domains"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "9": {
                    "title": "Gaussian Process Graphical Model View",
                    "text": [
                        "I f RD R is a latent",
                        "I y is a noisy realisation",
                        "I k is the covariance",
                        "I m and are learnt"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "10": {
                    "title": "Gaussian Process Classification",
                    "text": [
                        "Pass latent function through logistic function to squash the input from (,) to obtain probability, (x) p(yi fi)",
                        "(similar to logistic regression)",
                        "The likelihood is non-Gaussian and solution is not analytical",
                        "Inference using Expectation propagation (EP)",
                        "FITC approximation for large data",
                        "ARD kernel learns feature importance features most discriminative between classes",
                        "We learn 9 one-vs-all binary classifiers",
                        "This way, we find the most predictive features consistent for all classes"
                    ],
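A minimal GPy sketch of this setup (one-vs-all GP classification with an ARD RBF kernel, using the GPy toolkit the slides point to); the toy data shapes are placeholders, and details such as the FITC approximation are elided:

```python
import numpy as np
import GPy

# Toy stand-ins: X is an (n, d) user feature matrix, y holds the
# integer occupational class labels 0..8.
X = np.random.randn(100, 20)
y = np.random.randint(0, 9, size=100)

lengthscales = []
for c in range(9):                                  # one-vs-all classifiers
    Y = (y == c).astype(float)[:, None]
    kern = GPy.kern.RBF(X.shape[1], ARD=True)       # per-feature lengthscales
    model = GPy.models.GPClassification(X, Y, kernel=kern)
    model.optimize()
    lengthscales.append(np.asarray(kern.lengthscale))

# ARD: a small lengthscale means the feature matters; rank features by
# inverse lengthscale averaged over the nine classifiers.
importance = 1.0 / np.mean(lengthscales, axis=0)
ranking = np.argsort(-importance)
```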
                    "page_nums": [
                        16,
                        17
                    ],
                    "images": []
                },
                "11": {
                    "title": "Gaussian Process Resources",
                    "text": [
                        "I GPs for Natural Language Processing tutorial (ACL 2014)",
                        "I GP Schools in Sheffield and roadshows in Kampala,",
                        "I Annotated bibliography and other materials",
                        "I GPy Toolkit (Python)"
                    ],
                    "page_nums": [
                        18,
                        19
                    ],
                    "images": []
                },
                "12": {
                    "title": "Prediction",
                    "text": [
                        "LR SVM-RBF GP Baseline",
                        "Stratified 10 fold cross-validation"
                    ],
                    "page_nums": [
                        20,
                        21,
                        22,
                        23,
                        24
                    ],
                    "images": []
                },
                "13": {
                    "title": "Prediction Analysis",
                    "text": [
                        "User level features have no predictive value",
                        "Word2Vec features are better than SVD/NPMI for prediction",
                        "Non-linear methods (SVM-RBF and GP) significantly outperform linear methods",
                        "52.7% accuracy for 9-class classification is decent"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "14": {
                    "title": "Class Comparison",
                    "text": [
                        "Jensen-Shannon Divergence between topic distributions across occupational classes",
                        "Some clusters of occupations are observable"
                    ],
                    "page_nums": [
                        26
                    ],
                    "images": [
                        "figure/image/1074-Figure3-1.png"
                    ]
                },
                "15": {
                    "title": "Feature Analysis",
                    "text": [
                        "Rank Manual Label Topic (most frequent words)",
                        "Arts art, design, print, collection, poster, painting, custom, logo, printing, drawing",
                        "Health risk, cancer, mental, stress, pa- tients, treatment, surgery, dis- ease, drugs, doctor",
                        "Beauty Care beauty, natural, dry, skin, mas- sage, plastic, spray, facial, treat- ments, soap",
                        "Higher Education students, research, board, stu- dent, college, education, library, schools, teaching, teachers",
                        "Software Engineering service, data, system, services, access, security, development, software, testing, standard",
                        "Most predictive Word2Vec 200 clusters as given by Gaussian",
                        "Football van, foster, cole, winger, terry, reckons, youngster, f ielding, kenny rooney,",
                        "Corporate patent, industry, reports, global,",
                        "Cooking recipe, meat, salad, egg, soup, sauce, beef, served, pork, rice",
                        "Elongated Words wait, till, til, yay, ahhh, hoo, woo, woot, whoop, woohoo",
                        "Politics human, culture, justice, religion, democracy, religious, humanity, tradition, ancient, racism",
                        "Comparison of mean topic usage between supersets of"
                    ],
                    "page_nums": [
                        27,
                        28,
                        32
                    ],
                    "images": []
                },
                "16": {
                    "title": "Feature Analysis Cumulative density functions",
                    "text": [
                        "Topic more prevalent CDF line closer to bottom-right corner"
                    ],
                    "page_nums": [
                        29,
                        30,
                        31
                    ],
                    "images": []
                },
                "17": {
                    "title": "Take Aways",
                    "text": [
                        "User occupation influences language use in social media",
                        "Non-linear methods (Gaussian Processes) obtain significant gains over linear methods",
                        "Topic (clusters) features are both predictive and interpretable",
                        "New dataset available for research"
                    ],
                    "page_nums": [
                        33
                    ],
                    "images": []
                }
            },
            "paper_title": "An analysis of the user occupational class through Twitter content",
            "paper_id": "1074",
            "paper": {
                "title": "An analysis of the user occupational class through Twitter content",
                "abstract": "Social media content can be used as a complementary source to the traditional methods for extracting and studying collective social attributes. This study focuses on the prediction of the occupational class for a public user profile. Our analysis is conducted on a new annotated corpus of Twitter users, their respective job titles, posted textual content and platform-related attributes. We frame our task as classification using latent feature representations such as word clusters and embeddings. The employed linear and, especially, non-linear methods can predict a user's occupational class with strong accuracy for the coarsest level of a standard occupation taxonomy which includes nine classes. Combined with a qualitative assessment, the derived results confirm the feasibility of our approach in inferring a new user attribute that can be embedded in a multitude of downstream applications.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The growth of online social networks provides the opportunity to analyse user text in a broader context (Tumasjan et al., 2010; Bollen et al., 2011; Lampos and Cristianini, 2012) ."
                    },
                    {
                        "id": 1,
                        "string": "This includes the social network (Sadilek et al., 2012) , spatio-temporal information (Lampos and Cristianini, 2010) and personal attributes (Al Zamal et al., 2012) ."
                    },
                    {
                        "id": 2,
                        "string": "Previous research has analysed language differences in user attributes like location (Cheng et al., 2010) , gender (Burger et al., 2011) , impact (Lampos et al., 2014) and age (Rao et al., 2010) , showing that language use is influenced by them."
                    },
                    {
                        "id": 3,
                        "string": "Therefore, user text allows us to infer these properties."
                    },
                    {
                        "id": 4,
                        "string": "This user profiling is important not only for sociolinguistic studies, but also for other applications: recommender systems to provide targeted advertising, analysts who study different opinions in each social class or integration in text regression tasks such as voting intention (Lampos et al., 2013) ."
                    },
                    {
                        "id": 5,
                        "string": "Social status reflected through a person's occupation is a factor which influences language use (Bernstein, 1960; Bernstein, 2003; Labov, 2006) ."
                    },
                    {
                        "id": 6,
                        "string": "Therefore, our hypothesis is that language use in social media can be indicative of a user's occupational class."
                    },
                    {
                        "id": 7,
                        "string": "For example, executives may write more frequently about business or financial news, while people in manufacturing positions could refer more to their personal interests and less to job related activities."
                    },
                    {
                        "id": 8,
                        "string": "Similarly, we expect some categories of people, like those working in sales and customer services, to be more social or to use more informal language."
                    },
                    {
                        "id": 9,
                        "string": "Focusing on the microblogging platform of Twitter, we explore our hypothesis by studying the task of predicting a user's occupational class given platform-related attributes and generated content, i.e."
                    },
                    {
                        "id": 10,
                        "string": "tweets."
                    },
                    {
                        "id": 11,
                        "string": "That has direct applicability in a broad range of areas from sociological studies, which analyse the behaviour of different occupations, to recruiting companies that target people for new job opportunities."
                    },
                    {
                        "id": 12,
                        "string": "For this study, we created a publicly available data set of users, including their profile information and historical text content as well as a label to an occupational class from the \"Standard Occupational Classification\" taxonomy (see Section 2)."
                    },
                    {
                        "id": 13,
                        "string": "We frame our task as classification, aiming to identify the most likely job class for a given user based on profile and a variety of textual features: general word embeddings and clusters (or 'topics')."
                    },
                    {
                        "id": 14,
                        "string": "Both linear and non-linear classification methods are applied with a focus on those that can assist interpretation and offer qualitative insights."
                    },
                    {
                        "id": 15,
                        "string": "We find that text features, especially word clusters, lead to good predictive performance."
                    },
                    {
                        "id": 16,
                        "string": "Accuracy for our best model is well above 50% for 9-way classifi-cation, outperforming competitive methods."
                    },
                    {
                        "id": 17,
                        "string": "The best results are obtained using the Bayesian nonparametric framework of Gaussian Processes (Rasmussen and Williams, 2006) , which also accommodates feature interpretation via the Automatic Relevance Determination."
                    },
                    {
                        "id": 18,
                        "string": "This allows us to get insight into differences in language use across job classes and, finally, assess our original hypothesis about the thematic divergence across them."
                    },
                    {
                        "id": 19,
                        "string": "Standard Occupational Classification To enable the user occupation study, we adopt a standardised job classification taxonomy for mapping Twitter users to occupations."
                    },
                    {
                        "id": 20,
                        "string": "The Standard Occupational Classification (SOC) 1 is a UK government system developed by the Office of National Statistics for classifying occupations."
                    },
                    {
                        "id": 21,
                        "string": "Jobs are categorised hierarchically based on skill requirements and content."
                    },
                    {
                        "id": 22,
                        "string": "The SOC scheme includes nine major groups coded with a digit from 1 to 9."
                    },
                    {
                        "id": 23,
                        "string": "Each major group is divided into sub-major groups coded with 2 digits, where the first digit indicates the major group."
                    },
                    {
                        "id": 24,
                        "string": "Each sub-major group is further divided into minor groups coded with 3 digits and finally, minor groups are divided into unit groups, coded with 4 digits."
                    },
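Since the SOC coding is a pure digit-prefix hierarchy, rolling a specific occupation up to coarser groups is a string-slicing exercise; a minimal sketch in Python (the code '2221' is an illustrative unit group, not taken from the paper):

```python
# Roll a 4-digit SOC unit group up to its minor, sub-major and major groups.
# The prefix property of SOC codes makes this a simple string slice.

def soc_levels(unit_group: str) -> dict:
    """Map a 4-digit SOC code to every coarser level of the taxonomy."""
    assert len(unit_group) == 4 and unit_group.isdigit()
    return {
        "major": unit_group[:1],      # 9 groups,   1 digit
        "sub_major": unit_group[:2],  # 25 groups,  2 digits
        "minor": unit_group[:3],      # 90 groups,  3 digits
        "unit": unit_group,           # 369 groups, 4 digits
    }

# Aggregating users at the 3-digit level, as done later in the paper,
# just means grouping accounts on code[:3].
print(soc_levels("2221"))
# {'major': '2', 'sub_major': '22', 'minor': '222', 'unit': '2221'}
```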
                    {
                        "id": 25,
                        "string": "The unit groups are the leaves of the hierarchy and represent specific jobs related to the group."
                    },
                    {
                        "id": 26,
                        "string": "Table 1 shows a part of the SOC hierarchy."
                    },
                    {
                        "id": 27,
                        "string": "In total, there are 9 major groups, 25 sub-major groups, 90 minor groups and 369 unit groups."
                    },
                    {
                        "id": 28,
                        "string": "Although other hierarchies exist, we use the SOC because it has been published recently (in 2010), includes newly introduced jobs, has a balanced hierarchy and offers a wide variety of job titles that were crucial in our data set creation."
                    },
                    {
                        "id": 29,
                        "string": "Data To the best of our knowledge there are no publicly available data sets suitable for the task we aim to investigate."
                    },
                    {
                        "id": 30,
                        "string": "Thus, we have created a new one consisting of Twitter users mapped to their occupation, together with their profile information and historical tweets."
                    },
                    {
                        "id": 31,
                        "string": "We use the account's profile information to capture users with self-disclosed occupations."
                    },
                    {
                        "id": 32,
                        "string": "The potential self-selection bias is acknowledged, but filtering content via self disclosure is widespread when extracting large-scale data for user attribute inference (Pennacchiotti and Popescu, 2011; Coppersmith et al., 2014) ."
                    },
                    {
                        "id": 33,
                        "string": "Similarly to Hecht et al."
                    },
                    {
                        "id": 34,
                        "string": "(2011) , we first assess the proportion of Twitter accounts with a clear mention to their occupation by annotating the user description field of a random set of 500 users."
                    },
                    {
                        "id": 35,
                        "string": "There were chosen from the random 1% sample, having at least 200 tweets in their history and with a majority of English tweets."
                    },
                    {
                        "id": 36,
                        "string": "There, we can identify the following categories: no description (12.2%), random information (22%), user information but not occupation related (45.8%), and job related information (20%)."
                    },
                    {
                        "id": 37,
                        "string": "To create our data set, we thus use the user description field to search for self-disclosed job titles provided by the 4-digit SOC unit groups, since they contain specific job titles."
                    },
                    {
                        "id": 38,
                        "string": "We queried Twitter's Search API to retrieve for each job title a maximum of 200 accounts which best matched occupation keywords."
                    },
                    {
                        "id": 39,
                        "string": "Then, we aggregated the accounts into the 3-digit (minor) categories."
                    },
                    {
                        "id": 40,
                        "string": "To remove potential ambiguity in the retrieved set, we manually inspected accounts in each minor category and filtered out those that belong to companies, contain no description or the description provided does not indicate that the user has a job corresponding to the minor category."
                    },
                    {
                        "id": 41,
                        "string": "In total, around 50% of the accounts were removed by manual inspection per-formed by the authors."
                    },
                    {
                        "id": 42,
                        "string": "We also removed users in multiple categories and or users that have tweeted less than 50 times in their history."
                    },
                    {
                        "id": 43,
                        "string": "Finally, we eliminated all 3-digit categories that contained less than 45 user accounts after this filtering."
                    },
                    {
                        "id": 44,
                        "string": "This process produced a total number of 5,191 users from 55 minor groups (22 sub-major groups), spread across all nine major SOC groups."
                    },
                    {
                        "id": 45,
                        "string": "The distribution of users across these nine groups is: 9.7%, 34.5%, 20.6%, 3.8%, 16.7%, 6.1%, 1.4%, 4.2%, and 3% (following the ordering of Table 1 )."
                    },
                    {
                        "id": 46,
                        "string": "In our data set the most well represented minor occupational groups are 'Functional Managers and Directors' (184 users -code 113), 'Therapy Professionals' (159 userscode 222) and 'Quality and Regulatory Professionals' (158 users -code 246), whereas the least represented ones are 'Textile and Garment Trades' (45 users -code 541), 'Elementary Security Occupations' (46 users -code 924), 'Elementary Cleaning Occupations' (47 users -code 923)."
                    },
                    {
                        "id": 47,
                        "string": "The mean number of users in the minor classes is equal to 94.4 with a standard deviation of 35.6."
                    },
                    {
                        "id": 48,
                        "string": "For these users, we have collected all their tweets, going as far back as the latest 3,200, and their profile information."
                    },
                    {
                        "id": 49,
                        "string": "The final data set consists of 10,796,836 tweets collected around 5 August 2014 and is openly available."
                    },
                    {
                        "id": 50,
                        "string": "2 A separate Twitter data set is used as a reference corpus in order to build the feature representations detailed in Section 4."
                    },
                    {
                        "id": 51,
                        "string": "This data set is an extract from the Twitter Gardenhose stream (a 10% representative sample of the entire Twitter stream) from 2 January to 28 February 2011."
                    },
                    {
                        "id": 52,
                        "string": "Based on this content, we also build the vocabulary for the text features, containing the most frequent 71,555 words."
                    },
                    {
                        "id": 53,
                        "string": "We tokenise and filter for English using the Trendminer preprocessing pipeline (Preoţiuc-Pietro et al., 2012) ."
                    },
                    {
                        "id": 54,
                        "string": "Features In this section, we overview the features used in the occupational class prediction task."
                    },
                    {
                        "id": 55,
                        "string": "They are divided into two types: (1) user level features, (2) textual features."
                    },
                    {
                        "id": 56,
                        "string": "User Level Features (UserLevel) The user level features are based on the general user information or aggregated statistics about the tweets."
                    },
                    {
                        "id": 57,
                        "string": "Table 2 introduces the 18 features in this u1 number of followers u2 number of friends u3 number of times listed u4 follower/friend ratio u5 proportion of non-duplicate tweets u6 proportion of retweeted tweets u7 average no."
                    },
                    {
                        "id": 58,
                        "string": "of retweets/tweet u8 proportion of retweets done u9 proportion of hashtags u10 proportion of tweets with hashtags u11 proportion of tweets with @-mentions u12 proportion of @-replies u13 no."
                    },
                    {
                        "id": 59,
                        "string": "of unique @-mentions in tweets u14 proportion of tweets with links u15 no."
                    },
                    {
                        "id": 60,
                        "string": "of favourites the account made u16 avg."
                    },
                    {
                        "id": 61,
                        "string": "number of tweets/day u17 total number of tweets u18 proportion of tweets in English Textual Features The textual features are derived from the aggregated set of user's tweets."
                    },
                    {
                        "id": 62,
                        "string": "We use our reference corpus to represent each user as a distribution over these features."
                    },
                    {
                        "id": 63,
                        "string": "We ignore the bio field from building textual features to avoid introducing biases from our data collection method."
                    },
                    {
                        "id": 64,
                        "string": "While this is a restriction, our analysis showed that in less than 20% of the cases the information in the bio is directly relevant to the occupation."
                    },
                    {
                        "id": 65,
                        "string": "SVD Word Embeddings (SVD-E) We use a more abstract representation of words than simple unigram counts in order to aid interpretability of our analysis."
                    },
                    {
                        "id": 66,
                        "string": "We compute a word to word similarity matrix from our reference corpus."
                    },
                    {
                        "id": 67,
                        "string": "Normalised Pointwise Mutual Information (NPMI) (Bouma, 2009 ) is used to compute word to word similarity."
                    },
                    {
                        "id": 68,
                        "string": "NPMI is an information theoretic measure indicating which words co-occur in the same context, where the context is represented by a whole tweet: NPMI(x, y) = − log P(x, y) · log P(x, y) P(x) · P(y) ."
                    },
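A minimal sketch of Eq. (1) computed from tweet-level co-occurrence counts (NumPy; the toy tweets and variable names are illustrative):

```python
import numpy as np
from collections import Counter
from itertools import combinations

# Toy corpus: each tweet is one context window, as in the paper.
tweets = [["gp", "kernel", "prior"], ["gp", "prior"], ["tweet", "kernel"]]

word_counts, pair_counts = Counter(), Counter()
for tweet in tweets:
    vocab_in_tweet = set(tweet)
    word_counts.update(vocab_in_tweet)
    pair_counts.update(frozenset(p) for p in combinations(sorted(vocab_in_tweet), 2))

n = len(tweets)

def npmi(x: str, y: str) -> float:
    """Normalised PMI in [-1, 1]; 1 means perfect co-occurrence."""
    p_xy = pair_counts[frozenset((x, y))] / n
    p_x, p_y = word_counts[x] / n, word_counts[y] / n
    if p_xy == 0:
        return -1.0
    return np.log(p_xy / (p_x * p_y)) / -np.log(p_xy)

print(npmi("gp", "prior"))  # 1.0: the pair appears in every tweet containing either word
```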
                    {
                        "id": 69,
                        "string": "(1) We then perform singular value decomposition (SVD) on the word to word similarity matrix and obtain an embedding of words into a low dimensional space."
                    },
                    {
                        "id": 70,
                        "string": "In our experiments we tried the following dimensionalities: 30, 50, 100 and 200."
                    },
                    {
                        "id": 71,
                        "string": "The feature representation for each user is obtained summing over each of the embedding dimensions across all words."
                    },
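A minimal sketch of the SVD-E construction: factorise the symmetric NPMI matrix, keep the top-d directions as word embeddings, and sum them over a user's words (NumPy; the random matrix stands in for the real NPMI matrix, and d = 50 is one of the dimensionalities the paper tries):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 50                      # vocabulary size, embedding dimension

# Stand-in for the symmetric word-to-word NPMI matrix built above.
S = rng.random((V, V)); S = (S + S.T) / 2

# Truncated SVD: keep the top-d singular directions as word embeddings.
U, sigma, _ = np.linalg.svd(S)
word_emb = U[:, :d] * sigma[:d]      # one d-dimensional row per word

# User representation: sum embeddings over all of the user's word occurrences.
user_word_counts = rng.integers(0, 3, size=V)   # toy bag of words
user_features = user_word_counts @ word_emb     # shape (d,)
print(user_features.shape)
```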
                    {
                        "id": 72,
                        "string": "NPMI Clusters (SVD-C) We use the NPMI matrix described in the previous paragraph to create hard clusters of words."
                    },
                    {
                        "id": 73,
                        "string": "These clusters can be thought as 'topics', i.e."
                    },
                    {
                        "id": 74,
                        "string": "words that are semantically similar."
                    },
                    {
                        "id": 75,
                        "string": "From a variety of clustering techniques we choose spectral clustering (Shi and Malik, 2000; Ng et al., 2002) , a hard-clustering approach which deals well with high-dimensional and non-convex data (von Luxburg, 2007) ."
                    },
                    {
                        "id": 76,
                        "string": "Spectral clustering is based on applying SVD to the graph Laplacian and aims to perform an optimal graph partitioning on the NPMI similarity matrix."
                    },
                    {
                        "id": 77,
                        "string": "The number of clusters needs to be pre-specified."
                    },
                    {
                        "id": 78,
                        "string": "We use 30, 50, 100 and 200 clusters -numbers were chosen a priori based on previous work (Lampos et al., 2014) ."
                    },
                    {
                        "id": 79,
                        "string": "The feature representation is the standardised number of words from each cluster."
                    },
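A minimal sketch of the clustering step with scikit-learn's SpectralClustering on a precomputed affinity matrix; the paper does not name its implementation, so this choice is an assumption, and the NPMI values are shifted to be non-negative since an affinity matrix must be:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
V = 300
npmi = rng.uniform(-1, 1, size=(V, V)); npmi = (npmi + npmi.T) / 2

# Shift NPMI from [-1, 1] to [0, 2] so it is a valid affinity matrix.
affinity = npmi + 1.0
np.fill_diagonal(affinity, 2.0)

clusterer = SpectralClustering(n_clusters=50, affinity="precomputed", random_state=0)
labels = clusterer.fit_predict(affinity)   # one cluster id ('topic') per word

# Per-user feature: (standardised) count of words falling in each cluster.
print(np.bincount(labels, minlength=50)[:10])
```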
                    {
                        "id": 80,
                        "string": "Although there is a loss of information compared to the original representation, the clusters are very useful in the model analysis step."
                    },
                    {
                        "id": 81,
                        "string": "Embeddings are hard to interpret because each dimension is an abstract notion, while the clusters can be interpreted by presenting a list of the most frequent or representative words."
                    },
                    {
                        "id": 82,
                        "string": "The latter are identified using the following centrality metric: C w = x∈c NPMI(w, x) |c| − 1 , (2) where c denotes the cluster and w the target word."
                    },
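A minimal sketch of the centrality metric of Eq. (2) for picking a cluster's most representative words (NumPy; `npmi` and `labels` are illustrative stand-ins for the similarity matrix and the spectral-cluster assignments above):

```python
import numpy as np

def central_words(npmi: np.ndarray, labels: np.ndarray, cluster: int, top_k: int = 10):
    """Rank the words of a cluster by mean NPMI to its other members (Eq. 2)."""
    members = np.flatnonzero(labels == cluster)
    sub = npmi[np.ix_(members, members)]
    # Exclude self-similarity on the diagonal; divide by |c| - 1 as in Eq. (2).
    centrality = (sub.sum(axis=1) - np.diag(sub)) / (len(members) - 1)
    return members[np.argsort(centrality)[::-1][:top_k]]

rng = np.random.default_rng(0)
npmi = rng.uniform(-1, 1, size=(300, 300)); npmi = (npmi + npmi.T) / 2
labels = rng.integers(0, 50, size=300)                  # toy cluster assignments
print(central_words(npmi, labels, cluster=0, top_k=5))  # most central word ids
```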
                    {
                        "id": 83,
                        "string": "Neural Embeddings (W2V-E) Recently, there has been a growing interest in neural language models, where the words are projected into a lower dimensional dense vector space via a hidden layer (Mikolov et al., 2013b) ."
                    },
                    {
                        "id": 84,
                        "string": "These models showed they can provide a better representation of words compared to traditional language models (Mikolov et al., 2013c) because they capture syntactic information rather than just bag-of-context, handling non-linear transformations."
                    },
                    {
                        "id": 85,
                        "string": "In this low dimensional vector space, words with a small distance are considered semantically similar."
                    },
                    {
                        "id": 86,
                        "string": "We use the skipgram model with negative sampling (Mikolov et al., 2013a) to learn word embeddings on the Twitter reference corpus."
                    },
                    {
                        "id": 87,
                        "string": "In that case, the skip-gram model is factorising a word-context PMI matrix (Levy and Goldberg, 2014) ."
                    },
                    {
                        "id": 88,
                        "string": "We use a layer size of 50 and the Gensim implementation."
                    },
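A minimal sketch of the W2V-E training run with Gensim, which the paper states it uses (skip-gram with negative sampling, layer size 50; the keyword `vector_size` follows Gensim 4.x, where older releases call it `size`):

```python
from gensim.models import Word2Vec

# Each tweet is one tokenised sentence; toy data stands in for the
# Gardenhose reference corpus used in the paper.
tweets = [["gp", "kernel", "prior"], ["recipe", "meat", "salad"]] * 100

model = Word2Vec(
    sentences=tweets,
    vector_size=50,   # layer size used in the paper
    sg=1,             # skip-gram (rather than CBOW)
    negative=5,       # negative sampling
    min_count=1,
    seed=0,
)
print(model.wv["kernel"].shape)   # (50,)
```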
                    {
                        "id": 89,
                        "string": "3 Neural Clusters (W2V-C) Similar to the NPMI cluster, we use the neural embeddings in order to obtain clusters of related words, i.e."
                    },
                    {
                        "id": 90,
                        "string": "'topics'."
                    },
                    {
                        "id": 91,
                        "string": "We derive a word to word similarity matrix using cosine similarity on the neural embeddings."
                    },
                    {
                        "id": 92,
                        "string": "We apply spectral clustering on this matrix to obtain 30, 50, 100 and 200 word clusters."
                    },
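A minimal sketch of the W2V-C step: cosine similarities between the learned vectors form the affinity matrix for the same spectral-clustering routine (scikit-learn; the random `vectors` stand in for `model.wv.vectors` from the previous sketch, and 30 clusters is one of the sizes the paper tries):

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
vectors = rng.normal(size=(300, 50))      # stand-in for model.wv.vectors

# Cosine similarity lies in [-1, 1]; shift to [0, 2] for a valid affinity.
affinity = cosine_similarity(vectors) + 1.0

labels = SpectralClustering(
    n_clusters=30, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(np.bincount(labels))                # word count per neural 'topic'
```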
                    {
                        "id": 93,
                        "string": "Classification with Gaussian Processes In this section, we briefly overview Gaussian Process (GP) for classification, highlighting our motivation for using this method."
                    },
                    {
                        "id": 94,
                        "string": "GPs formulate a Bayesian non-parametric machine learning framework which defines a prior on functions (Rasmussen and Williams, 2006) ."
                    },
                    {
                        "id": 95,
                        "string": "The properties of the functions are given by a kernel which models the covariance in the response values as a function of its inputs."
                    },
                    {
                        "id": 96,
                        "string": "Although GPs form a powerful learning tool, they have only recently been used in NLP research (Cohn and Specia, 2013; with classification applications limited to (Polajnar et al., 2011) ."
                    },
                    {
                        "id": 97,
                        "string": "Formally, GP methods aim to learn a function f : R d → R drawn from a GP prior given the inputs x x x ∈ R d : f (x x x) ∼ GP(m(x x x), k(x x x, x x x )) , (3) where m(·) is the mean function (here 0) and k(·, ·) is the covariance kernel."
                    },
                    {
                        "id": 98,
                        "string": "Usually, the Squared Exponential (SE) kernel (a.k.a."
                    },
                    {
                        "id": 99,
                        "string": "RBF or Gaussian) is used to encourage smooth functions."
                    },
                    {
                        "id": 100,
                        "string": "For the multidimensional pair of inputs (x x x, x x x ), this is: k ard (x x x, x x x ) = σ 2 exp d i − (x i − x i ) 2 2l 2 i , (4) where l i are lengthscale parameters learnt only using training data by performing gradient ascent on the type-II marginal likelihood."
                    },
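A minimal NumPy sketch of the ARD squared-exponential kernel of Eq. (4); a small lengthscale l_i lets dimension i drive the covariance, which is exactly what the later feature ranking exploits (`ard_se` is an illustrative name):

```python
import numpy as np

def ard_se(X1: np.ndarray, X2: np.ndarray, sigma2: float, lengthscales: np.ndarray):
    """k(x, x') = sigma^2 * exp(-sum_i (x_i - x'_i)^2 / (2 * l_i^2))."""
    diff = X1[:, None, :] - X2[None, :, :]              # (n1, n2, d)
    sq = (diff / lengthscales) ** 2
    return sigma2 * np.exp(-0.5 * sq.sum(axis=-1))      # (n1, n2)

X = np.random.default_rng(0).normal(size=(5, 3))
l = np.array([0.5, 1.0, 10.0])   # dimension 2 is nearly irrelevant
K = ard_se(X, X, sigma2=1.0, lengthscales=l)
print(np.round(K, 3))
```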
                    {
                        "id": 101,
                        "string": "Intuitively, the lengthscale parameter l i controls the variation along the i input dimension, i.e."
                    },
                    {
                        "id": 102,
                        "string": "a low value makes the output very sensitive to input data, thus making that input more useful for the prediction."
                    },
                    {
                        "id": 103,
                        "string": "If the lengthscales are learnt separately for each input dimension the kernel is named SE with Automatic Relevance Determination (ARD) (Neal, 1996) ."
                    },
                    {
                        "id": 104,
                        "string": "Binary classification using GPs 'squashes' the real valued latent function f (x) output through a logistic function: π(x x x) P(y = 1|x x x) = σ(f (x x x)) in a similar way to logistic regression classification."
                    },
                    {
                        "id": 105,
                        "string": "The object of the GP inference is the distribution of the latent variable corresponding to a test case x * : P(f * |x x x, y y y, x * ) = P(f * |x x x, x * , f )P(f |x x x, y y y)df , (5) where P(f |x x x, y y y) = P(y y y|f )P(f |x x x)/P(y y y|x x x) is the posterior over the latent variables."
                    },
                    {
                        "id": 106,
                        "string": "If the likelihood P(y y y|f ) is Gaussian, the combination with a GP prior P(f |x x x) gives a posterior GP over functions."
                    },
                    {
                        "id": 107,
                        "string": "In binary classification, the distribution over the latent f * is combined with the logistic function to produce the prediction: π * = σ(f * )P(f * |x x x, y y y, x * )df * ."
                    },
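Eq. (6) has no closed form, but once EP supplies a Gaussian approximation N(mu, s2) to the latent posterior of Eq. (5), the predictive probability can be estimated by simple Monte Carlo; a minimal sketch with illustrative values:

```python
import numpy as np

def predictive_prob(mu: float, s2: float, n_samples: int = 100_000) -> float:
    """Estimate pi_* = E[sigmoid(f_*)] under f_* ~ N(mu, s2)."""
    f = np.random.default_rng(0).normal(mu, np.sqrt(s2), size=n_samples)
    return float(np.mean(1.0 / (1.0 + np.exp(-f))))

print(predictive_prob(mu=1.0, s2=4.0))  # pulled towards 0.5 by the latent variance
```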
                    {
                        "id": 108,
                        "string": "(6) This results in a non-Gaussian likelihood in the posterior formulation and therefore, exact inference is infeasible for classification models."
                    },
                    {
                        "id": 109,
                        "string": "Multiple approximations exist that make the computation tractable (Gibbs and Mackay, 1997; Williams and Barber, 1998; Neal, 1999) ."
                    },
                    {
                        "id": 110,
                        "string": "In our experiments we opt to use the Expectation Propagation (EP) method (Minka, 2001) which approximates the non-Gaussian joint posterior with a Gaussian one."
                    },
                    {
                        "id": 111,
                        "string": "EP offers very good empirical results for many different likelihoods, although it has no proof of convergence."
                    },
                    {
                        "id": 112,
                        "string": "The complexity for the inference step is O(n 3 )."
                    },
                    {
                        "id": 113,
                        "string": "Given that our data set is very large and the number of features is high, we conduct inference using the fully independent training conditional (FITC) approximation (Snelson and Ghahramani, 2006) with 500 random inducing points."
                    },
                    {
                        "id": 114,
                        "string": "We refer the interested reader to Rasmussen and Williams (2006) for further information on GP classification."
                    },
                    {
                        "id": 115,
                        "string": "Although we could use multi-class classification methods, in order to provide insight, we perform a separate one-vs-all classification for each class and then determine a label through the occupational class that has the highest likelihood."
                    },
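As a minimal sketch of this one-vs-all setup, scikit-learn's GaussianProcessClassifier is a named substitution for the paper's toolchain: it uses the Laplace approximation rather than EP and has no FITC sparsification, but an anisotropic RBF kernel yields the same per-feature ARD lengthscales:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # e.g. 10 topic features per user
y = rng.integers(1, 10, size=200)     # toy labels: SOC major groups 1-9

# One lengthscale per feature (anisotropic RBF) = ARD.
kernel = 1.0 * RBF(length_scale=np.ones(X.shape[1]))

clf = GaussianProcessClassifier(
    kernel=kernel, multi_class="one_vs_rest", random_state=0
)
clf.fit(X, y)

# Predicted class = the one-vs-all classifier with the highest likelihood.
print(clf.predict(X[:5]), clf.predict_proba(X[:5]).shape)  # (5,), (5, 9)
```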
                    {
                        "id": 116,
                        "string": "Experiments This section presents the experimental results for our task."
                    },
                    {
                        "id": 117,
                        "string": "We first compare the accuracy of our classification methods on held out data using each feature set and conduct a standard error analysis."
                    },
                    {
                        "id": 118,
                        "string": "We then use the interpretability of the ARD lengthscales from the GP classifier to further analyse the relevant features."
                    },
                    {
                        "id": 119,
                        "string": "Predictive Accuracy We assign users to one of nine possible classes (see the 'Major Groups' on  features at a time."
                    },
                    {
                        "id": 120,
                        "string": "Experiments combining features yielded only minor improvements."
                    },
                    {
                        "id": 121,
                        "string": "We apply common linear and non-linear methods together with our proposed GP classifier."
                    },
                    {
                        "id": 122,
                        "string": "The linear method is logistic regression (LR) with Elastic Net regularisation (Freedman, 2009 ) and the non-linear one is formulated by a Support Vector Machine (SVM) with an RBF kernel (Vapnik, 1998) ."
                    },
                    {
                        "id": 123,
                        "string": "The accuracy of our classifiers is measured on held-out data."
                    },
                    {
                        "id": 124,
                        "string": "Our data set is divided into stratified training (80%), validation (10%) and testing (10%) sets."
                    },
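A minimal sketch of the stratified 80/10/10 split with scikit-learn, chaining two calls to `train_test_split` (array names and sizes are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5191, 200))
y = rng.integers(1, 10, size=5191)      # nine SOC major groups

# 80% train vs. 20% held out, preserving class proportions.
X_tr, X_held, y_tr, y_held = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
# Split the held-out 20% evenly into validation and test (10% each overall).
X_val, X_te, y_val, y_te = train_test_split(
    X_held, y_held, test_size=0.5, stratify=y_held, random_state=0
)
print(len(X_tr), len(X_val), len(X_te))
```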
                    {
                        "id": 125,
                        "string": "The validation set was used to learn the LR and SVM hyperparameters, while the GP did not use this set at all."
                    },
                    {
                        "id": 126,
                        "string": "We report results using all three methods and all feature sets in Table 3 ."
                    },
                    {
                        "id": 127,
                        "string": "We first observe that user level features (User-Level; see Section 4.1) are not useful for predicting the job class."
                    },
                    {
                        "id": 128,
                        "string": "This finding indicates that general social behaviour or user impact are likely to be spread evenly across classes."
                    },
                    {
                        "id": 129,
                        "string": "It also highlights the difficulty of the task and motivates the use of deeper textual features."
                    },
                    {
                        "id": 130,
                        "string": "The textual features (see Section 4.2) improve performance as compared to the most frequent class baseline."
                    },
                    {
                        "id": 131,
                        "string": "We also notice that the embeddings (SVD-E and W2V-E) have lower performance than the clusters (SVD-C and W2V-C) in most of the cases."
                    },
                    {
                        "id": 132,
                        "string": "This is expected, as adding word vectors to represent a user's text may overemphasise common words."
                    },
                    {
                        "id": 133,
                        "string": "The size of the embedding also increases performance."
                    },
                    {
                        "id": 134,
                        "string": "The W2V features show better ac-   curacy than the SVD on the NPMI matrix."
                    },
                    {
                        "id": 135,
                        "string": "This is consistent with previous work that showed the efficiency of word2vec and the ability of those embeddings to capture non-linear relationships and syntactic features (Mikolov et al., 2013a; Mikolov et al., 2013b; Mikolov et al., 2013c) ."
                    },
                    {
                        "id": 136,
                        "string": "LR has a lower performance than the non-linear methods, especially when using clusters as features."
                    },
                    {
                        "id": 137,
                        "string": "GPs usually outperform SVMs by a small margin."
                    },
                    {
                        "id": 138,
                        "string": "However, these offer the advantages of not using the validation set and the interpretability properties we highlight in the next section."
                    },
                    {
                        "id": 139,
                        "string": "Although we only draw our focus on major occupational classes, the data set allows the study of finer granularities of occupation classes in future work."
                    },
                    {
                        "id": 140,
                        "string": "For example, prediction performance for sub-major groups reaches 33.9% accuracy (15.6% majority class, 22 classes) and 29.2% accuracy for minor groups (3.4% majority class, 55 classes)."
                    },
                    {
                        "id": 141,
                        "string": "Error Analysis To illustrate the errors made by our classifiers, Figure 1 shows the confusion matrix of the classification results."
                    },
                    {
                        "id": 142,
                        "string": "First, we observe that class 4 is many times classified as class 2 or 3."
                    },
                    {
                        "id": 143,
                        "string": "This can be explained by the fact that classes 2, 3 and 4 contain similar types of occupations, e.g."
                    },
                    {
                        "id": 144,
                        "string": "doctors and nurses or accountants and assistant accountants."
                    },
                    {
                        "id": 145,
                        "string": "However, with very few exceptions, we notice that only adjacent classes get misclassified, suggesting that our model captures the general user skill level."
                    },
                    {
                        "id": 146,
                        "string": "Qualitative Analysis The word clusters that were built from a reference corpus and then used as features in the GP classifier, give us the opportunity to extract some qualitative derivations from our predictive task."
                    },
                    {
                        "id": 147,
                        "string": "For the rest of the section we use the best performing model of this type (W2V-C-200) in order to analyse the results."
                    },
                    {
                        "id": 148,
                        "string": "Our main assumption is that there might be a divergence of language and topic usage across occupational classes following previous studies in sociology (Bernstein, 1960; Bernstein, 2003) ."
                    },
                    {
                        "id": 149,
                        "string": "Knowing that the inferred GP lengthscale hyperparameters are inversely proportional to feature (i.e."
                    },
                    {
                        "id": 150,
                        "string": "topic) relevance (see Section 5), we can use them to rank the topic importance and give answers to our hypothesis."
                    },
                    {
                        "id": 151,
                        "string": "Table 4 shows 10 of the most informative topics (represented by the top 10 most central and frequent words) sorted by their ARD lengthscale Mean Reciprocal Rank (MRR) (Manning et al., 2008) across the nine classifiers."
                    },
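A minimal sketch of this topic ranking: relevance is inverse to the learnt lengthscale, so topics are ranked within each classifier and scored by Mean Reciprocal Rank across the nine classifiers (NumPy; the random `lengthscales` array is an illustrative stand-in for the learnt values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classifiers, n_topics = 9, 200
lengthscales = rng.uniform(0.1, 10.0, size=(n_classifiers, n_topics))

# Within each classifier, rank topics by relevance (small lengthscale = rank 1).
order = np.argsort(lengthscales, axis=1)          # indices, most relevant first
ranks = np.empty_like(order)
rows = np.arange(n_classifiers)[:, None]
ranks[rows, order] = np.arange(1, n_topics + 1)   # rank position of each topic

mrr = (1.0 / ranks).mean(axis=0)                  # one MRR score per topic
top_topics = np.argsort(mrr)[::-1][:10]
print(top_topics)                                 # indices of the 10 best topics
```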
                    {
                        "id": 152,
                        "string": "Evidently, they cover a broad range of thematic subjects, including potentially work specific topics in different domains such as 'Corporate' (Topic #124), 'Software Engineering' (#158), 'Health' (#105), 'Higher Education' (#21) and 'Arts' (#116), as well as topics covering recreational interests such as 'Football' (#186), 'Cooking' (#96) and 'Beauty Care' (#153)."
                    },
                    {
                        "id": 153,
                        "string": "The highest ranked MRR GP lengthscales only highlight the topics that are the most discriminative of the particular learning task, i.e."
                    },
                    {
                        "id": 154,
                        "string": "which topic used alone would have had the best performance."
                    },
                    {
                        "id": 155,
                        "string": "To examine the difference in topic usage across occupations, we illustrate how six topics are covered by the users of each class."
                    },
                    {
                        "id": 156,
                        "string": "Figure 2 shows the Cumulative Distribution Functions (CDFs) across the nine different occupational classes for these six topics."
                    },
                    {
                        "id": 157,
                        "string": "CDFs indicate the fraction of users having at least a certain topic proportion in their tweets."
                    },
                    {
                        "id": 158,
                        "string": "A topic is more prevalent in a class, if the CDF line leans towards the bottom-right corner of the plot."
                    },
                    {
                        "id": 159,
                        "string": "'Higher Education' (#21) is more prevalent in classes 1 and 2, but is also discriminative for classes 3 and 4 compared to the rest."
                    },
                    {
                        "id": 160,
                        "string": "This is expected because the vast majority of jobs in these classes require a university degree (holds for all of the jobs in classes 2 and 3) or are actually jobs in higher education."
                    },
                    {
                        "id": 161,
                        "string": "On the other hand, classes 5 to 9 have a similar behaviour, tweeting less on this topic."
                    },
                    {
                        "id": 162,
                        "string": "We also observe that words in 'Corporate' (#124) are used more as the skill required for a job gets higher."
                    },
                    {
                        "id": 163,
                        "string": "This topic is mainly used by people in classes 1 and 2 and with less extent in classes 3 and 4, indicating that people in these occupational classes are more likely to use social media for discussions about corporate business."
                    },
                    {
                        "id": 164,
                        "string": "There is a clear trend of people with more skilled jobs to talk about 'Politics' (#176)."
                    },
                    {
                        "id": 165,
                        "string": "Indeed, highly ranked politicians and political philosophers are parts of classes 1 and 2 respectively."
                    },
                    {
                        "id": 166,
                        "string": "Nevertheless, this pattern expands to the entire spectrum of the investigated occupational classes, providing further proof-of-concept for our methodology, under the assumption that the theme of politics is more attractive to the higher skilled classes rather than the lower skilled occupations."
                    },
                    {
                        "id": 167,
                        "string": "By examining 'Arts' (#116), we see that it clearly separates class 5, which includes artists, from all others."
                    },
                    {
                        "id": 168,
                        "string": "This topic appears to be relevant to most of the classification tasks and it is ranked first according to the MRR metric."
                    },
                    {
                        "id": 169,
                        "string": "Moreover, we observe that people with higher skilled jobs and education (classes 1-3) post more content about arts."
                    },
                    {
                        "id": 170,
                        "string": "Finally, we examine two topics containing words that can be used in more informal occasions, i.e."
                    },
                    {
                        "id": 171,
                        "string": "'Elongated Words' (#164) and 'Beauty Care' (#153)."
                    },
                    {
                        "id": 172,
                        "string": "We observe a similar pattern in both topics by which users with lower skilled jobs tweet more often."
                    },
                    {
                        "id": 173,
                        "string": "The main conclusion we draw from Figure 2 is that there exists a topic divergence between users in the lower vs. higher skilled occupational classes."
                    },
                    {
                        "id": 174,
                        "string": "To examine this distinction better, we use the Jensen-Shannon divergence (JSD) to quantify the difference between the topic distributions across every Figure 3 visualises these differences."
                    },
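A minimal sketch of the Jensen-Shannon divergence between the mean topic distributions of two classes (SciPy's `entropy` computes the KL divergence; the random vectors are illustrative):

```python
import numpy as np
from scipy.stats import entropy

def jsd(p: np.ndarray, q: np.ndarray) -> float:
    """JSD(P || Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M = (P + Q) / 2."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)

rng = np.random.default_rng(0)
p, q = rng.random(200), rng.random(200)    # mean topic usage of two classes
print(jsd(p, q))                           # 0 = identical, ln(2) = disjoint
```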
                    {
                        "id": 175,
                        "string": "There, we confirm that adjacent classes use similar topics of discussion."
                    },
                    {
                        "id": 176,
                        "string": "We also notice that JSD increases as the classes are further apart."
                    },
                    {
                        "id": 177,
                        "string": "Two main groups of related classes, with a clear separation from the rest, are identified: classes 1-2 and 6-9."
                    },
                    {
                        "id": 178,
                        "string": "For the users belonging to these two groups, we compute their topic usage distribution (for the top topics listed in Table 4 )."
                    },
                    {
                        "id": 179,
                        "string": "Then, we assess whether the topic usage distributions of those super-classes of occupations have a statistically significant dif-ference by performing a two-sample Kolmogorov-Smirnov test."
                    },
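A minimal sketch of the two-sample Kolmogorov-Smirnov test on per-user usage of one topic for the two super-classes (SciPy; the beta-distributed samples are illustrative):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
high_skill = rng.beta(2, 5, size=500)   # topic proportion per user, classes 1-2
low_skill = rng.beta(1, 8, size=500)    # topic proportion per user, classes 6-9

stat, p_value = ks_2samp(high_skill, low_skill)
print(stat, p_value)                    # a small p-value -> distributions differ
```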
                    {
                        "id": 180,
                        "string": "We enumerate the group topic usage means in Table 5 ; all differences were indeed statistically significant (p < 10 −5 )."
                    },
                    {
                        "id": 181,
                        "string": "From this comparison, we conclude that users in the higher skilled classes have a higher representation in all top topics but 'Beauty Care' and 'Elongated Words'."
                    },
                    {
                        "id": 182,
                        "string": "Hence, the original hypothesis about the difference in the usage of language between upper and lower occupational classes is reconfirmed in this more generic testing."
                    },
                    {
                        "id": 183,
                        "string": "A very noticeable difference occurs for the Related Work Occupational class prediction has been studied in the past in the areas of psychology and economics."
                    },
                    {
                        "id": 184,
                        "string": "French (1959) investigated the relation between various measures on 232 undergraduate students and their future occupations."
                    },
                    {
                        "id": 185,
                        "string": "This study concluded that occupational membership can be predicted from variables such as the ability of subjects in using mathematical and verbal symbols, their family economic status, body-build and personality components."
                    },
                    {
                        "id": 186,
                        "string": "Schmidt and Strauss (1975) also studied the relationship between job types (five classes) and certain demographic attributes (gender, race, experience, education, location)."
                    },
                    {
                        "id": 187,
                        "string": "Their analysis identified biases or discrimination which possibly exist in different types of jobs."
                    },
                    {
                        "id": 188,
                        "string": "Sociolinguistic and sociology studies deduct that social status is an important factor in determining the use of language (Bernstein, 1960; Bernstein, 2003; Labov, 2006) ."
                    },
                    {
                        "id": 189,
                        "string": "Differences arise either due to language use or due to the topics people discuss as parts of various social domains."
                    },
                    {
                        "id": 190,
                        "string": "However, a large scale investigation of this hypothesis has never been attempted."
                    },
                    {
                        "id": 191,
                        "string": "Relevant to our task is a relation extraction approach proposed by Li et al."
                    },
                    {
                        "id": 192,
                        "string": "(2014) aiming to extract user profile information on Twitter."
                    },
                    {
                        "id": 193,
                        "string": "They used a weakly supervised approach to obtain information for job, education and spouse."
                    },
                    {
                        "id": 194,
                        "string": "Nonetheless, the information relevant to the job attribute re-gards the employer of a user (i.e."
                    },
                    {
                        "id": 195,
                        "string": "the name of a company) rather than the type of occupation."
                    },
                    {
                        "id": 196,
                        "string": "In addition, Huang et al."
                    },
                    {
                        "id": 197,
                        "string": "(2014) proposed a method to classify Sina Weibo users to twelve predefined occupations using content based and network features."
                    },
                    {
                        "id": 198,
                        "string": "However, there exist significant differences from our task since this inference is based on a distinct platform, with an ambiguous distribution over occupations (e.g."
                    },
                    {
                        "id": 199,
                        "string": "more than 25% related to media), while the occupational classes are not generic (e.g."
                    },
                    {
                        "id": 200,
                        "string": "media, welfare and electronic are three of the twelve categories)."
                    },
                    {
                        "id": 201,
                        "string": "Most importantly, the applied model did not allow for a qualitative interpretation."
                    },
                    {
                        "id": 202,
                        "string": "Filho et al."
                    },
                    {
                        "id": 203,
                        "string": "(2014) inferred the social class of social media users by combining geolocation information derived from Foursquare and Twitter posts."
                    },
                    {
                        "id": 204,
                        "string": "Recently, Sloan et al."
                    },
                    {
                        "id": 205,
                        "string": "(2015) introduced tools for the automated extraction of demographic data (age, occupation and social class) from the profile descriptions of Twitter users using a similar method to our data set extraction approach."
                    },
                    {
                        "id": 206,
                        "string": "They showed that it is feasible to build a data set that matches the real-world UK occupation distribution as given by the SOC."
                    },
                    {
                        "id": 207,
                        "string": "Conclusions Our paper presents the first large-scale systematic study on language use on social media as a factor for inferring a user's occupational class."
                    },
                    {
                        "id": 208,
                        "string": "To address this problem, we have also introduced an extensive labelled data set extracted from Twitter."
                    },
                    {
                        "id": 209,
                        "string": "We have framed prediction as a classification task and, to this end, we used the powerful, non-linear GP framework that combines strong predictive performance with feature interpretability."
                    },
                    {
                        "id": 210,
                        "string": "Results show that we can achieve a good predictive accuracy, highlighting that the occupation of a user influences text use."
                    },
                    {
                        "id": 211,
                        "string": "Through a qualitative analysis, we have shown that the derived topics capture both occupation specific interests as well as general class-based behaviours."
                    },
                    {
                        "id": 212,
                        "string": "We acknowledge that the derivations of this study, similarly to other studies in the field, are reflecting the Twitter population and may experience a bias introduced by users self-mentioning their occupations."
                    },
                    {
                        "id": 213,
                        "string": "However, the magnitude, occupational diversity and face validity of our conclusions suggest that the presented approach is useful for future downstream applications."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 18
                    },
                    {
                        "section": "Standard Occupational Classification",
                        "n": "2",
                        "start": 19,
                        "end": 28
                    },
                    {
                        "section": "Data",
                        "n": "3",
                        "start": 29,
                        "end": 52
                    },
                    {
                        "section": "Features",
                        "n": "4",
                        "start": 53,
                        "end": 55
                    },
                    {
                        "section": "User Level Features (UserLevel)",
                        "n": "4.1",
                        "start": 56,
                        "end": 60
                    },
                    {
                        "section": "Textual Features",
                        "n": "4.2",
                        "start": 61,
                        "end": 64
                    },
                    {
                        "section": "SVD Word Embeddings (SVD-E)",
                        "n": "4.2.1",
                        "start": 65,
                        "end": 71
                    },
                    {
                        "section": "NPMI Clusters (SVD-C)",
                        "n": "4.2.2",
                        "start": 72,
                        "end": 82
                    },
                    {
                        "section": "Neural Embeddings (W2V-E)",
                        "n": "4.2.3",
                        "start": 83,
                        "end": 88
                    },
                    {
                        "section": "Neural Clusters (W2V-C)",
                        "n": "4.2.4",
                        "start": 89,
                        "end": 92
                    },
                    {
                        "section": "Classification with Gaussian Processes",
                        "n": "5",
                        "start": 93,
                        "end": 115
                    },
                    {
                        "section": "Experiments",
                        "n": "6",
                        "start": 116,
                        "end": 118
                    },
                    {
                        "section": "Predictive Accuracy",
                        "n": "6.1",
                        "start": 119,
                        "end": 140
                    },
                    {
                        "section": "Error Analysis",
                        "n": "6.2",
                        "start": 141,
                        "end": 145
                    },
                    {
                        "section": "Qualitative Analysis",
                        "n": "6.3",
                        "start": 146,
                        "end": 182
                    },
                    {
                        "section": "Related Work",
                        "n": "7",
                        "start": 183,
                        "end": 206
                    },
                    {
                        "section": "Conclusions",
                        "n": "8",
                        "start": 207,
                        "end": 213
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1074-Table4-1.png",
                        "caption": "Table 4: Topics, represented by their most central and most frequent 10 words, sorted by their ARD lengthscale MRR across the nine GP-based occupation classifiers. µ(l) denotes the average lengthscale for a topic across these classifiers. Topic labels are manually created.",
                        "page": 5,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 524.16,
                            "y1": 61.44,
                            "y2": 376.32
                        }
                    },
                    {
                        "filename": "../figure/image/1074-Figure1-1.png",
                        "caption": "Figure 1: Confusion matrix of the prediction results. Rows represent the actual occupational class (C 1– 9) and columns the predicted class.",
                        "page": 5,
                        "bbox": {
                            "x1": 89.28,
                            "x2": 265.44,
                            "y1": 447.84,
                            "y2": 590.4
                        }
                    },
                    {
                        "filename": "../figure/image/1074-Table1-1.png",
                        "caption": "Table 1: Subset of the SOC classification hierarchy.",
                        "page": 1,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 524.16,
                            "y1": 62.879999999999995,
                            "y2": 319.2
                        }
                    },
                    {
                        "filename": "../figure/image/1074-Figure3-1.png",
                        "caption": "Figure 3: Jensen-Shannon divergence in the topic distributions between the different occupational classes (C 1–9).",
                        "page": 6,
                        "bbox": {
                            "x1": 324.47999999999996,
                            "x2": 505.44,
                            "y1": 473.76,
                            "y2": 616.8
                        }
                    },
                    {
                        "filename": "../figure/image/1074-Table2-1.png",
                        "caption": "Table 2: User level attributes for a Twitter user.",
                        "page": 2,
                        "bbox": {
                            "x1": 327.84,
                            "x2": 505.44,
                            "y1": 62.879999999999995,
                            "y2": 243.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1074-Figure2-1.png",
                        "caption": "Figure 2: CDFs for six of the most important topics; the x-axis is on the log-scale for display purposes. A point on a CDF line indicates the fraction of users (y-axis point) with a topic proportion in their tweets lower or equal to the corresponding x-axis point. The topic is more prevalent in a class, if the CDF line leans closer to the bottom-right corner of the plot.",
                        "page": 7,
                        "bbox": {
                            "x1": 117.6,
                            "x2": 480.0,
                            "y1": 70.56,
                            "y2": 521.76
                        }
                    },
                    {
                        "filename": "../figure/image/1074-Table5-1.png",
                        "caption": "Table 5: Comparison of mean topic usage for super-sets (classes 1–2 vs. 6–9) of the occupational classes; all values were multiplied by 103. The difference between the topic usage distributions was statistically significant (p < 10−5).",
                        "page": 8,
                        "bbox": {
                            "x1": 87.84,
                            "x2": 275.03999999999996,
                            "y1": 61.44,
                            "y2": 228.0
                        }
                    },
                    {
                        "filename": "../figure/image/1074-Table3-1.png",
                        "caption": "Table 3: 9-way classification accuracy on held-out data for our 3 methods. Textual features are obtained using SVD or Word2Vec (W2V). E represents embeddings, C clusters. The final number denotes the amount of clusters or the size of the embedding.",
                        "page": 4,
                        "bbox": {
                            "x1": 318.71999999999997,
                            "x2": 514.0799999999999,
                            "y1": 62.879999999999995,
                            "y2": 224.16
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-28"
        },
        {
            "slides": {
                "0": {
                    "title": "What is this presentation about",
                    "text": [
                        "Summarize the history and current state of efforts related to the",
                        "Illustrate the challenges of maintaining a community Project",
                        "Invite the community to extend the capabilities of the Anthology",
                        "Call you to join the Anthology team",
                        "Summary History Future-proofing Upcoming Future"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "The Anthology in summary",
                    "text": [
                        "Open access service for all",
                        "Also hosts posters and additional data",
                        "Paper search and author pages",
                        "45K papers and 4.5K daily hits",
                        "New papers added in collaboration with proceedings editors",
                        "History Future-proofing Upcoming Future"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "A brief History of the Anthology",
                    "text": [
                        "Proposed in 2001 by Steven Bird",
                        "First version online in 2002, with Steven Bird as editor",
                        "Min-Yen Kan becomes the",
                        "A new version of the Anthology with extra functionality is released in 2012",
                        "Steven Bird Min-Yen Kan",
                        "Hosting of the Anthology moves from the National University of Singapore to Saarland University",
                        "Summary Future-proofing Upcoming Future"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "How to Future proof the Anthology",
                    "text": [
                        "Limited resources for day-to-day code maintenance",
                        "Docker container for easier set-up and sandboxing",
                        "Collaborative documentation efforts to ease onboarding",
                        "Migration plan on the pipeline, including upgrades and test cases",
                        "Summary History Upcoming Future"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Upcoming major steps",
                    "text": [
                        "Hosting the Anthology within the main ACL website",
                        "Recruit a new Anthology editor",
                        "(possibly) pay for extra support for the Anthology",
                        "Summary History Future-proofing Future"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "5": {
                    "title": "Exercise Importing of your slides",
                    "text": [
                        "We import slides, datasets, videos from your own",
                        "Currently done by email",
                        "(try it yourself! yes, now)",
                        "Better workflow: pull request against the",
                        "Anthology XML (a la csrankings.org)",
                        "Summary History Future-proofing Future"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "Possible future directions",
                    "text": [
                        "Contains useful information both for CL researchers and about CL researchers. Useful for identifying suitable reviewers.",
                        "Move focus from day-to-day operations towards development",
                        "Establish a network of mirrors",
                        "Summary History Future-proofing Upcoming"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                }
            },
            "paper_title": "The ACL Anthology: Current State and Future Directions",
            "paper_id": "1083",
            "paper": {
                "title": "The ACL Anthology: Current State and Future Directions",
                "abstract": "The Association of Computational Linguistic's Anthology is the open source archive, and the main source for computational linguistics and natural language processing's scientific literature. The ACL Anthology is currently maintained exclusively by community volunteers and has to be available and up-to-date at all times. We first discuss the current, open source approach used to achieve this, and then discuss how the planned use of Docker images will improve the Anthology's longterm stability. This change will make it easier for researchers to utilize Anthology data for experimentation. We believe the ACL community can directly benefit from the extension-friendly architecture of the Anthology. We end by issuing an open challenge of reviewer matching we encourage the community to rally towards. 2 https://creativecommons.org/licenses/ by-nc-sa/3.0/ 3 https://creativecommons.org/licenses/ by/4.0/ 5 https://worksheets.codalab.org/ 6 https://arxiv.org/",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The ACL Anthology 1 is a service offered by the Association for Computational Linguistics (ACL) allowing open access to the proceedings of all ACL sponsored conferences and journal articles."
                    },
                    {
                        "id": 1,
                        "string": "As a community goodwill gesture, it also hosts third-party computational linguistics literature from sister organizations and their national venues."
                    },
                    {
                        "id": 2,
                        "string": "It offers both text and faceted search of the indexed papers, author-specific pages, and can incorporate third-party metadata and services that can be embedded within pages (Bysani and Kan, 2012) ."
                    },
                    {
                        "id": 3,
                        "string": "As of this paper, it hosts over 1 https://aclanthology.info/ 43,000 computational linguistics and natural language processing papers, along with their metadata."
                    },
                    {
                        "id": 4,
                        "string": "Over 4,500 daily requests are served by the Anthology."
                    },
                    {
                        "id": 5,
                        "string": "The code for the Anthology is available at https://github.com/acl-org/ acl-anthology under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License 2 ."
                    },
                    {
                        "id": 6,
                        "string": "Slightly different from the Anthology source code, ACL also licenses its papers with a more liberal license, supporting Creative Commons Attribution 4.0 International License 3 , supporting liberal re-use of papers published with the ACL."
                    },
                    {
                        "id": 7,
                        "string": "The maintenance of the code and the website is handled through volunteer efforts coordinated by the Anthology editor."
                    },
                    {
                        "id": 8,
                        "string": "Running a key service for the computational linguistics community that needs to be continuously available and updated frequently is one of the main issues in administering the Anthology."
                    },
                    {
                        "id": 9,
                        "string": "We discuss this issue along with the challenges of running a large scale project on a volunteer basis and its resulting technical debt."
                    },
                    {
                        "id": 10,
                        "string": "As we look towards the future, previous research has shown that it can also be used as a data source to characterize the work and workings of the ACL community (Bird et al., 2008; Vogel and Jurafsky, 2012; Anderson et al., 2012) ."
                    },
                    {
                        "id": 11,
                        "string": "Extensions to the Anthology that build on this information could make the Anthology an even more valuable resource for the community."
                    },
                    {
                        "id": 12,
                        "string": "We will discuss two possibl eextensions -anonymous pre-prints and support for finding relevant submission reviewers by linking au-thors in the Anthology with their research interests and community connections."
                    },
                    {
                        "id": 13,
                        "string": "Beyond being useful in itself, work on such challenges has the potential to motivate the ACL community to further support the Anthology."
                    },
                    {
                        "id": 14,
                        "string": "Current State of the Anthology The ACL Anthology was proposed as a project to the ACL Executive by Steven Bird at the 2001 ACL conference and first launched in 2002, with a second version developed in 2012, commissioned by the ACL committee."
                    },
                    {
                        "id": 15,
                        "string": "Steven Bird also served as the first editor of the anthology from 2002 to 2007, a post which Min-Yen Kan took over in 2008 and continues to fill as of today."
                    },
                    {
                        "id": 16,
                        "string": "The Anthology provides access to papers in Portable Document Format (PDF) as well as the associated metadata in multiple formats (e.g., BIBT E X and Endnote)."
                    },
                    {
                        "id": 17,
                        "string": "For recent papers, authors can also opt include data, notes and open-source software, and may provide Digital Object Identifiers (DOIs) for permalinking the citations within their papers."
                    },
                    {
                        "id": 18,
                        "string": "The technology behind the current version is detailed in Table 1 ."
                    },
                    {
                        "id": 19,
                        "string": "As a community project, daily administration and development is handled by volunteers."
                    },
                    {
                        "id": 20,
                        "string": "However, to tackle larger problems with the Anthology which require a more focused effort, the ACL committee has solicited paid assistance."
                    },
                    {
                        "id": 21,
                        "string": "Hosting and bandwidth for the Anthology has historically been provided by universities free of charge."
                    },
                    {
                        "id": 22,
                        "string": "It was hosted at the National University of Singapore until the spring of 2017, when it was migrated to its current home at Saarland University."
                    },
                    {
                        "id": 23,
                        "string": "In the future, hosting duties are planned to fall under the umbrella of the ACL itself, unifying all services under https://www.aclweb."
                    },
                    {
                        "id": 24,
                        "string": "org/portal/."
                    },
                    {
                        "id": 25,
                        "string": "Framework Ruby on Rails Search engine Solr Database PostgreSQL Web server (Prod./Test) Nginx / Jetty Operating System Debian GNU-Linux Table 1 : Tech stack for the ACL Anthology."
                    },
                    {
                        "id": 26,
                        "string": "The most important task is the importing, indexing and provisioning of newly accepted papers from recent conference proceedings and journal issues."
                    },
                    {
                        "id": 27,
                        "string": "The original Anthology defined an XML format for simple bibliographic metadata, which has been extended to support the more recent fea-tures of associated software, posters, videos and datasets that accompany the scholarly publications."
                    },
                    {
                        "id": 28,
                        "string": "Providing the XML for new materials is an semi-automated process that is largely integrated with the various mechanisms for managing ACL conference submissions and printed proceedings."
                    },
                    {
                        "id": 29,
                        "string": "It is straightforward for ACL events that utilize the licensed START conference management software 4 , as an established software pipeline builds upon the artefacts used for creation of the final publications themselves."
                    },
                    {
                        "id": 30,
                        "string": "After the accepted papers are finalized, START produces an archive file of camera-ready PDF files and author-provided metadata such as the title, author list, and abstract for each paper."
                    },
                    {
                        "id": 31,
                        "string": "These files are processed by a set of scripts in START maintained by ACL publication chairs in order to assign page numbers to papers, and to produce a PDF proceedings volume for each conference complete with a table of contents, author index, and other front matter."
                    },
                    {
                        "id": 32,
                        "string": "These scripts also produce bibliographic information that are programmatically transformed into the ACL Anthology's XML format."
                    },
                    {
                        "id": 33,
                        "string": "The Anthology is then updated with the author-provided PDFs and the XML metadata."
                    },
                    {
                        "id": 34,
                        "string": "For importing journal articles and venues not using the START submission system, additional manual work is necessary to construct the Anthology XML."
                    },
                    {
                        "id": 35,
                        "string": "Sanity checks and some manual curation is also necessary to deal with issues such as character encodings and accents in names, multipart family names, and so on."
                    },
                    {
                        "id": 36,
                        "string": "This pipeline has reached a point of high efficiency, but may need to be adapted if the ACL ever considers it necessary to integrate with a different service for conference organization."
                    },
                    {
                        "id": 37,
                        "string": "Running the Anthology as a Community Project Since the Anthology is not tied to a specific research project or institution, contributors that work on Anthology-related system administration and development tasks have been recruited in response to calls for volunteers at the main ACL conferences."
                    },
                    {
                        "id": 38,
                        "string": "In contrast, new features have been developed by researchers using the ACL Anthology as a resource in their own work, unconnected with the daily operation of the Anthology."
                    },
                    {
                        "id": 39,
                        "string": "Such research deliverables include, for example, the creation of a corpus of research papers (Bird et al., 2008) , an author citation network (Radev et al., 2013) or a faceted search engine (Schäfer et al., 2012; Buitelaar et al., 2014) ."
                    },
                    {
                        "id": 40,
                        "string": "These factors, in combination with the multiple, changing responsibilities and shifting research interests of community members, mean that new volunteers join and leave the Anthology team in unpredictable and sporadic patterns."
                    },
                    {
                        "id": 41,
                        "string": "Preserving knowledge about the Anthology's operational workflow is thus one of the most important challenges for the Anthology."
                    },
                    {
                        "id": 42,
                        "string": "The Anthology editor has played a key role ensuring the continuity of the entire project."
                    },
                    {
                        "id": 43,
                        "string": "This position has so far always been filled for multiple years, longer than the normal time frame for an ACL officer."
                    },
                    {
                        "id": 44,
                        "string": "The role has been critical in ensuring a smooth transition between volunteers, at the cost of a long term with a heavy workload and a potential single point of failure."
                    },
                    {
                        "id": 45,
                        "string": "In order to tackle both issues, there is currently a concerted effort to improve the documentation of all tasks related to maintaining the Anthology."
                    },
                    {
                        "id": 46,
                        "string": "As the ACL community and its publishing needs continue to grow, the ACL Executive is considering commercial support for publishing."
                    },
                    {
                        "id": 47,
                        "string": "While this may be suitable for help with daily operations, we strongly advocate the continuation and promotion of a closely-knit volunteer group for development."
                    },
                    {
                        "id": 48,
                        "string": "Passing the responsibilities for the Anthology to a commercial devoid who has no intrinsic interest in the Anthology's scientific contents may end up poorly."
                    },
                    {
                        "id": 49,
                        "string": "Future Proofing the Anthology All code, documentation, bug reports, and feature requests are hosted at https://github."
                    },
                    {
                        "id": 50,
                        "string": "com/acl-org/acl-anthology, along with instructions detailing the steps required to set up an instance of the Anthology and keep it updated with proceedings for new conferences."
                    },
                    {
                        "id": 51,
                        "string": "These instructions have been verified and updated using test builds."
                    },
                    {
                        "id": 52,
                        "string": "We began with the initial documentation provided by experienced contributors to the project and the original developer."
                    },
                    {
                        "id": 53,
                        "string": "New volunteers were then asked to set up and update a new instance of the Anthology on a new server while communicating with more experienced contributors."
                    },
                    {
                        "id": 54,
                        "string": "The documentation was expanded and updated based on the problems and questions encountered during this process."
                    },
                    {
                        "id": 55,
                        "string": "The resulting documentation will likely reduce the learning curve for new volunteers and will make their recruitment easier."
                    },
                    {
                        "id": 56,
                        "string": "It will also make it easier to migrate the An-thology to new servers when the hosting arrangement changes or to create mirrors."
                    },
                    {
                        "id": 57,
                        "string": "The latter is an important future task for the Anthology in order to ensure that alternatives are available if the main Anthology server experiences any downtime."
                    },
                    {
                        "id": 58,
                        "string": "The current implementation of the Anthology has been extended over the years with minor improvements to functionality and bug fixes."
                    },
                    {
                        "id": 59,
                        "string": "The core code has remained mostly intact from its original version and has proved to be robust and reliable."
                    },
                    {
                        "id": 60,
                        "string": "However, fearing the introduction of bugs and instability (Spolsky, 2000) , the maintainers chose to keep the software working in its current state for as long as the technology would allow it, and focus their resources instead on features that would help the community with their research and publication efforts."
                    },
                    {
                        "id": 61,
                        "string": "This choice is not without its drawbacks."
                    },
                    {
                        "id": 62,
                        "string": "One key problem is the deprecation of dependencies with time."
                    },
                    {
                        "id": 63,
                        "string": "For example, Ruby 2.0 is no longer available in Debian repositories, and SSL support no longer compiles against it by default."
                    },
                    {
                        "id": 64,
                        "string": "These problems can be seen as indicators that delaying upgrades might not be feasible for much longer."
                    },
                    {
                        "id": 65,
                        "string": "Where possible, deprecated libraries are replaced with newer versions."
                    },
                    {
                        "id": 66,
                        "string": "This is the case for the database, web server, and the Java interpreter, all of which have been replaced with little extra effort."
                    },
                    {
                        "id": 67,
                        "string": "When a new version of a library breaks backwards compatibility, the software is either upgraded or frozen in its current version."
                    },
                    {
                        "id": 68,
                        "string": "Ruby (frozen at 2.0.0-p353 via RVM) and Solr are both examples of the latter, with detailed documented instructions to replicate the software environment."
                    },
                    {
                        "id": 69,
                        "string": "In addition to the production Anthology site, a second version is kept on low-cost cloud servers for testing purposes."
                    },
                    {
                        "id": 70,
                        "string": "This copy has proven useful for testing step-by-step instructions, since rolling back the server to a clean state requires neither authorization nor downtime."
                    },
                    {
                        "id": 71,
                        "string": "It is also used as a staging area, and to do trial imports of new proceedings and for volunteer training."
                    },
                    {
                        "id": 72,
                        "string": "Security is another major concern: older dependencies increase exposure to unpatched bugs."
                    },
                    {
                        "id": 73,
                        "string": "The Anthology currently does not collect or store personal data, rendering the consequences of a data breach modest."
                    },
                    {
                        "id": 74,
                        "string": "A compromised server, however, presents not only a risk for the maintainers (service downtime, unauthorized applications) but for the community at large, due to the large number of researchers who could be exposed to malicious scripts."
                    },
                    {
                        "id": 75,
                        "string": "While the former puts the goodwill of the hosting institution at risk, the latter would affect a large portion of the ACL community."
                    },
                    {
                        "id": 76,
                        "string": "To tackle issues with outdated software, the Anthology volunteer group is working on making the entire Anthology available via a Docker image (Matthias and Kane, 2015) ."
                    },
                    {
                        "id": 77,
                        "string": "Docker provides a virtualized environment (also known as a container) in which software can be run but where, unlike a virtual machine, the underlying operating system resources can be used directly."
                    },
                    {
                        "id": 78,
                        "string": "Containers are typically stateless, allowing system administrators to add and restart services with minimum friction."
                    },
                    {
                        "id": 79,
                        "string": "Hosting a mirror of the Anthology with Docker containers abstracts away the relatively complex server setup and makes it easier to tackle dependency problems independently from future mirror deployments."
                    },
                    {
                        "id": 80,
                        "string": "As a result, hosting institutions can apply their own internal security policies, and the community can benefit from the added robustness via a larger network of mirrors."
                    },
                    {
                        "id": 81,
                        "string": "Development versions of this image are already available at https://github.com/ acl-org/anthology-docker."
                    },
                    {
                        "id": 82,
                        "string": "When an instance of this Docker container is started, it first downloads all the data necessary to run the Anthology, inclusive of the metadata and source publications (PDF files) for all proceedings hosted within the Anthology."
                    },
                    {
                        "id": 83,
                        "string": "The resulting Anthology instance is a peer of the production site, but completely independent."
                    },
                    {
                        "id": 84,
                        "string": "This makes it possible for member institutions and even interested individual members to easily provide a mirror or experiment with the data in the Anthology."
                    },
                    {
                        "id": 85,
                        "string": "Freezing software versions has proven useful to keep stability under control, improve documentation practices, and implement long-requested features like search engine indexing."
                    },
                    {
                        "id": 86,
                        "string": "This does not preclude a full software upgrade from being part of our development roadmap."
                    },
                    {
                        "id": 87,
                        "string": "With better test coverage and expanded consistency checks in place, we expect the first successful upgrade tests to be within our reach in the near future."
                    },
                    {
                        "id": 88,
                        "string": "Docker containers and temporary servers also show great promise for researchers."
                    },
                    {
                        "id": 89,
                        "string": "An isolated, easy-to-replicate software environment reduces friction in transferring tools between researchers usually caused by incompatible software, simplifies the replication of experiments, and limits the data loss due to software bugs."
                    },
                    {
                        "id": 90,
                        "string": "A container-like approach specifying complete envi-ronments can also help in distributing code and general research within the community (e.g., Co-daLab 5 as used in SemEval competitions)."
                    },
                    {
                        "id": 91,
                        "string": "In the future, best practices within the community may encourage researchers to program and experiment within Docker images to aid reproducibility."
                    },
                    {
                        "id": 92,
                        "string": "The Anthology is currently stable and supports its current, intended use."
                    },
                    {
                        "id": 93,
                        "string": "However, to ensure that the ACL Anthology continues fulfilling its key roles, we call on the members of the ACL to help with both its operational and development goals: • hosting mirrors of the Anthology and developing policy for mirror management; • adding and indexing new publications to the Anthology; • maintaining and updating the code underlying the Anthology; • extending the capabilities of the Anthology to help tackle new challenges facing the ACL."
                    },
                    {
                        "id": 94,
                        "string": "Challenges for the Anthology Maintaining community buy-in for the Anthology is necessary to ensure its future."
                    },
                    {
                        "id": 95,
                        "string": "This is best assured by extending the Anthology with useful capabilities that align with research efforts."
                    },
                    {
                        "id": 96,
                        "string": "This is crucially enabled by the liberal licensing scheme that the ACL employs for the publications to empower end users."
                    },
                    {
                        "id": 97,
                        "string": "Research on the history and structure of the NLP community based on this data has already been undertaken (Anderson et al., 2012; Vogel and Jurafsky, 2012) ."
                    },
                    {
                        "id": 98,
                        "string": "Anonymous Pre-prints."
                    },
                    {
                        "id": 99,
                        "string": "A current challenge needing attention is the result of the increasing popularity of pre-prints and their role in promoting scientific progress."
                    },
                    {
                        "id": 100,
                        "string": "However, such pre-print systems are not anonymous, interfering with the well-documented gains that author-blinded publications help in combating bias."
                    },
                    {
                        "id": 101,
                        "string": "Through membership polls and subcommitee study, the ACL executive has adopted a recent set of guidelines upholding the value of double-blinded submissions (ACL Executive Committee, 2017)."
                    },
                    {
                        "id": 102,
                        "string": "One solution would be the use of anonymous pre-prints as an option for authors."
                    },
                    {
                        "id": 103,
                        "string": "Currently two ways of implementing this have been discussed: as a collaboration with an existing pre-print service such as arXiv 6 or through hosting pre-prints directly within the Anthology."
                    },
                    {
                        "id": 104,
                        "string": "While the latter option would be a challenge to the Anthologyrequiring increased resources both for monitoring the submissions and for scaling the system architecture to a larger and less controlled inflow of papers -but could result in better community control of the process, and a greater awareness and feeling of co-ownership of the Anthology and its data among ACL members."
                    },
                    {
                        "id": 105,
                        "string": "Reviewer Matching."
                    },
                    {
                        "id": 106,
                        "string": "One key problem with scientific conference and journal organization is in finding suitable reviewers for the peer review process, which is also a key problem for ACL."
                    },
                    {
                        "id": 107,
                        "string": "7 We believe that we can leverage the ACL Anthology data to support conference organizers in the assignment of potential peer reviewers."
                    },
                    {
                        "id": 108,
                        "string": "There has been a substantial growth in the number of submissions to the main ACL conferences in recent years (Barzilay, 2017) , and the ACL has been active in supporting automated approaches to solve the problem (Stent and Ji, 2018) such as the Toronto Paper Matching System (TPMS) (Charlin and Zemel, 2013) ."
                    },
                    {
                        "id": 109,
                        "string": "However, data for judging the fit between a reviewer and submitted papers are available in the Anthology; i.e., a reviewer's interests and expertise as encoded in their previous publications."
                    },
                    {
                        "id": 110,
                        "string": "Mining and representing such information directly from the Anthology, where data about potential reviewers is already available, makes it unnecessary to upload papers to an external platform, mitigating current low response rates."
                    },
                    {
                        "id": 111,
                        "string": "Measuring overlap between reviewer interests and a submitted paper, based on the reviewer's previous publications, is a problem that the NLP community is ideally suited to solve."
                    },
                    {
                        "id": 112,
                        "string": "Furthermore, the information generated by such a tool could serve conference chairs and journal editors when considering how much weight to assign to a review from specific reviewers."
                    },
                    {
                        "id": 113,
                        "string": "The data required for building such a tool would be both the text and metadata from every submitted paper."
                    },
                    {
                        "id": 114,
                        "string": "While some metadata is already accessible within the Anthology, clean textual content of papers would need to be harvested from the source PDF files, which currently has been partially achieved."
                    },
                    {
                        "id": 115,
                        "string": "(Bird et al., 2008) suggests that the text can generally be extracted using standard tools, with additional processing only necessary for a small fraction of the 7 As intimated through internal discussions with the ACL executive committee."
                    },
                    {
                        "id": 116,
                        "string": "data."
                    },
                    {
                        "id": 117,
                        "string": "We are aware that clean textual data from the Anthology archives is current on-going interest being investigated by a number of NLP/CL teams within the community."
                    },
                    {
                        "id": 118,
                        "string": "If such a solution were to be implemented, it would be in the interest of the entire community to have the Anthology maintainers integrate it directly into the Anthology, with support from the original implementers."
                    },
                    {
                        "id": 119,
                        "string": "This has been a problem in the past, where attempts to extend the capabilities of the Anthology with more detailed search and annotation (Schäfer et al., 2011 (Schäfer et al., , 2012 were spun off as independent systems to start with and have still not become part of the Anthology service."
                    },
                    {
                        "id": 120,
                        "string": "We note that these two challenges are synergistically solved."
                    },
                    {
                        "id": 121,
                        "string": "Solving the first challenge will provide the submissions' source text within the Anthology framework and promote better coupling for the second challenge of reviewer matching."
                    },
                    {
                        "id": 122,
                        "string": "Conclusion The ACL Anthology is a key resource for researchers in the NLP community."
                    },
                    {
                        "id": 123,
                        "string": "We have described the software engineering and maintenance work that goes on behind-the-scenes in order for the Anthology to serve its purpose."
                    },
                    {
                        "id": 124,
                        "string": "This includes ingestion of new papers, maintenance of the Anthology codebase, and the social aspects of recruiting volunteers for this work."
                    },
                    {
                        "id": 125,
                        "string": "The task of training future volunteers and ensuring Anthology uptime is likely to become easier due to improved documentation and simplified server set-up."
                    },
                    {
                        "id": 126,
                        "string": "However, recruitment of new volunteers continues to be an issue."
                    },
                    {
                        "id": 127,
                        "string": "We invite all community members to download the Anthology images for experimentation, not only for the challenge of automated reviewer assignment, but also for other use cases based on their own research interests."
                    },
                    {
                        "id": 128,
                        "string": "We hope that open challenges and the tasks associated with extending the usefulness of the Anthology will motivate more community members to take interest and become and familiar with its inner workings."
                    },
                    {
                        "id": 129,
                        "string": "We extend an open invitation to anyone interested in the Anthology to get in touch with the members of the team."
                    },
                    {
                        "id": 130,
                        "string": "Our current needs are focused on system administration, software development, database management, and Docker integration, but any kind of experience is welcome."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 13
                    },
                    {
                        "section": "Current State of the Anthology",
                        "n": "2",
                        "start": 14,
                        "end": 36
                    },
                    {
                        "section": "Running the Anthology as a Community Project",
                        "n": "3",
                        "start": 37,
                        "end": 48
                    },
                    {
                        "section": "Future Proofing the Anthology",
                        "n": "4",
                        "start": 49,
                        "end": 93
                    },
                    {
                        "section": "Challenges for the Anthology",
                        "n": "5",
                        "start": 94,
                        "end": 121
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 122,
                        "end": 130
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1083-Table1-1.png",
                        "caption": "Table 1: Tech stack for the ACL Anthology.",
                        "page": 1,
                        "bbox": {
                            "x1": 77.75999999999999,
                            "x2": 288.0,
                            "y1": 580.8,
                            "y2": 644.16
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-29"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivations",
                    "text": [
                        "Rule-Based Machine Translation (RBMT)",
                        "We have been developed RBMT for more than 30 years.",
                        "Large technical dictionaries and translation rules",
                        "Pre-ordering SMT and Tree/Forest to String",
                        "Effective solutions for Asian language translation (WAT2014)",
                        "But, pre-ordering rules and parsers are needed.",
                        "Statistical Post Editing (SPE) (same as WAT2014)",
                        "Verify effectiveness in all tasks",
                        "System combination between SPE and SMT (new in WAT2015)"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "2": {
                    "title": "Features of SPE",
                    "text": [
                        "Correct mistranslations / Translate unknown words",
                        "Phrase-level correction (domain adaptation)",
                        "Use of more fluent expressions",
                        "From SMTs standpoint SPE:",
                        "Reduction of NULL alignment (subject/particle)",
                        "Use of syntax information (polarity/aspect)"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "SPR for Patent Translation",
                    "text": [
                        "Corpus: JPO-NICT patent corpus",
                        "# of automatic evaluation: 2,000",
                        "# of human evaluation: 200",
                        "RBMT SMT SPE RBMT SMT SPE RBMT SMT SPE",
                        "en-ja en-ja zh-ja zh-ja ko-ja ko-ja",
                        "SPE shows: Automatic evaluation for en-ja/zh-ja/ko-ja",
                        "- Better scores than PB-SMT in automatic evaluation",
                        "- Improvements of understandable level (>=C in acceptability)",
                        "A AA RBMT SMT SPE RBMT SMT SPE Human evaluation for zh-ja"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "System Combination",
                    "text": [
                        "Selection based on SMT scores and/or other features.",
                        "Selection based on estimated score (Adequacy? Fluency? )",
                        "Need data to learn the relationship",
                        "Our approach in WAT2015:",
                        "Merge n-best candidates and rescore them.",
                        "We used RNNLM for reranking.",
                        "SPE Merge and Rescore Final translation"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "7": {
                    "title": "Which systems did the combination selected",
                    "text": [
                        "SAME SAME SMT SAME SAME",
                        "ja-en en-ja ja-zh zh-ja",
                        "ja-en/en-ja/zh-ja: about 80% translations come from SPE.",
                        "ja-zh and JPCzh-ja: COMB selected SPE and SMT, equivalently.",
                        "(Because RBMT couldnt translate well, % of SMT increased. )",
                        "SPE SPE SMT SMT",
                        "JPCzh-ja JPCko-ja same means that COMB results were included both SMT and SPE. 2015 Toshiba Corporation"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "8": {
                    "title": "Toshiba MT system of WAT2015",
                    "text": [
                        "We additionally applied some pre/post processing.",
                        "Technical Term English Word KATAKANA",
                        "frequent notations for .",
                        "+ JPO patent dictionary",
                        "for JPCzh-ja) continous -> continuous"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "10": {
                    "title": "Crowdsourcing Evaluation",
                    "text": [
                        "Analysis of JPCko-ja result (COMB vs Online A)",
                        "In in-house evaluation, COMB is better than Online A.",
                        "Baseline COMB Online A",
                        "Effected by differences in number expressions !?",
                        "SRC : Online A:",
                        "Equally evaluated in-house evaluation.",
                        "Crowd-workers should be provided an evaluation guideline by",
                        "which such a difference is considered."
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "11": {
                    "title": "Summary",
                    "text": [
                        "Toshiba MT system achieved a combination method",
                        "between SMT and SPE by RNNLM reranking.",
                        "Our system ranked the top 3 HUMAN score in ja-en/ja-",
                        "We will aim for practical MT system by more effective",
                        "combination systems (SMT, SPE , RBMT and more...)"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                }
            },
            "paper_title": "Toshiba MT System Description for the WAT2015 Workshop",
            "paper_id": "1085",
            "paper": {
                "title": "Toshiba MT System Description for the WAT2015 Workshop",
                "abstract": "This paper provides the system description of Toshiba Machine Translation System for the 2nd Workshop on Asian Translation (WAT2015). We participated in all tasks that consist of \"scientific papers subtask\" and \"patents subtask\". We submitted statistically post edited translation (SPE) results based on our rule based translation system and SMT for each language pair. In addition, we submitted system combination results between SPE and SMT with a recurrent neural language model (RNNLM). In experimental results, the system combination achieved higher BLEU scores than single system with reranking. We also obtained improvements in Chinese translation in crowdsourcing evaluations.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Recently, statistical machine translation (SMT) has been broadly developed and successfully used in the portion of practicable systems."
                    },
                    {
                        "id": 1,
                        "string": "However, it is costly to make a large volume of parallel corpora in a wide range of domains for commercial use."
                    },
                    {
                        "id": 2,
                        "string": "For this reason, we have developed rule based machine translation (RBMT) system using a monolingual corpus in the target language."
                    },
                    {
                        "id": 3,
                        "string": "For example, target word selection is possible based on co-occurrence relationship extracted from a monolingual corpus (Suzuki et al., 2005) ."
                    },
                    {
                        "id": 4,
                        "string": "Furthermore, we have developed a word sense disambiguation based on a monolingual corpus in the target domain, and it has been applied to Japanese-Korean and Korean-Japanese translation systems (Kumano 2013, ."
                    },
                    {
                        "id": 5,
                        "string": "On the other hand, open Asian parallel corpora including ASPEC 1 , NTCIR PatentMT 2 and JPO Patent Corpus 3 are available for the research of machine translation systems."
                    },
                    {
                        "id": 6,
                        "string": "By using the parallel corpora, we have confirmed advantages which apply statistical post editing (SPE) to RBMT in domain adaptation (Suzuki, 2011) ."
                    },
                    {
                        "id": 7,
                        "string": "In the last workshop (Nakazawa et al., 2014) , we participated in Japanese-English and Japanese-Chinese tasks with SPE approach and obtained higher evaluation results than RBMT."
                    },
                    {
                        "id": 8,
                        "string": "Meanwhile, RBMT showed better performance than SPE in the direct and relative comparison ."
                    },
                    {
                        "id": 9,
                        "string": "In this workshop (WAT2015), we participated in all tasks including Japanese-English (ja-en), English-Japanese (en-ja), Japanese-Chinese (ja-zh) and Chinese-Japanese (zh-ja) for \"scientific paper subtask\", and Chinese-Japanese (JPCzh-ja) and Korean-Japanese (JPCko-ja) for \"patents subtask\"."
                    },
                    {
                        "id": 10,
                        "string": "Patents subtask is newly added, and its parallel corpus has 4 sections (Chemistry, Electricity, Mechanical Engineering and Physics)."
                    },
                    {
                        "id": 11,
                        "string": "In all the tasks, we submitted SPE translation results based on our RBMT and SMT."
                    },
                    {
                        "id": 12,
                        "string": "In addition, we submitted system combination results between SPE and SMT with recurrent neural language model (RNNLM; Mikolov el al., 2010) ."
                    },
                    {
                        "id": 13,
                        "string": "Section 2 and 3 describe the overview of our systems and some pre/post processing."
                    },
                    {
                        "id": 14,
                        "string": "The experimental results and official results are shown in Section 4 and 5."
                    },
                    {
                        "id": 15,
                        "string": "The analysis for the official results is discussed in Section 6 and finally, Section 7 concludes this paper."
                    },
                    {
                        "id": 16,
                        "string": "As for a contextaware translation, the description was omitted because our baseline system is the same as the last workshop (see ."
                    },
                    {
                        "id": 17,
                        "string": "Overview of Toshiba System RBMT System Our RBMT system is basically a transfer-based machine translation (Izuha et al., 2008) ."
                    },
                    {
                        "id": 18,
                        "string": "The core framework consists of morphological analysis, syntactic/semantic analysis, target word selection, structural transfer, syntactic generation and morphological generation."
                    },
                    {
                        "id": 19,
                        "string": "Furthermore, huge amount of rules as translation knowledge including word dictionaries can realize both high translation performance and flexibility of customization."
                    },
                    {
                        "id": 20,
                        "string": "As for Japanese-Korean translation, syntactic analysis and transfer are omitted because the languages are grammatically similar."
                    },
                    {
                        "id": 21,
                        "string": "Statistical Post Editing SPE using phrase-based SMT has been proposed and it is an efficient framework which is able to adapt translation output to target domains (Michel et al., 2007) ."
                    },
                    {
                        "id": 22,
                        "string": "We first translated source sentences of training data in ASPEC and JPO Patent Corpus by RBMT."
                    },
                    {
                        "id": 23,
                        "string": "Then we trained phrase-based model between translated sentences and reference sentences using Moses toolkit (Kohen et al., 2007) ."
                    },
                    {
                        "id": 24,
                        "string": "In the training, we used 1M sentences for ja-en, en-ja, JPCzh-ja and JPCko-ja, 0.67M for ja-zh and zh-ja in the training data."
                    },
                    {
                        "id": 25,
                        "string": "Japanese sentences were tokenized by JUMAN 4 , and Moses tokenizer for English, and Kytea (Neubig et."
                    },
                    {
                        "id": 26,
                        "string": "al, 2011) for Chinese."
                    },
                    {
                        "id": 27,
                        "string": "We also trained 5-gram language models using KenLM (Heafield et al., 2013) ."
                    },
                    {
                        "id": 28,
                        "string": "In tuning and decoding, we set distortion limit to 0 for JPOko-ja in consideration of grammatical similarity and 6 for other language pairs."
                    },
                    {
                        "id": 29,
                        "string": "System Combination using RNNLM Although both SPE and SMT are based on a statistical model from the given corpora, they generate different translation candidates because SPE has some features from RBMT."
                    },
                    {
                        "id": 30,
                        "string": "If a better system can be selected from the candidates in each translation, we can get a better translation result."
                    },
                    {
                        "id": 31,
                        "string": "Thus, we realized a system combination between SPE and SMT as n-best reranking using a RNNLM."
                    },
                    {
                        "id": 32,
                        "string": "The n-best reranking can be achieved using both basic features and RNNLM score."
                    },
                    {
                        "id": 33,
                        "string": "In tuning, we combined 100-best candidates of both SPE and SMT for dev-set, and ran MERT tuning by adding the RNNLM score to the basic features."
                    },
                    {
                        "id": 34,
                        "string": "In decoding, we re-ranked combined candidates by product-sum of the features including RNNLM score and tuned weights."
                    },
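                    {
                        "id": "34a",
                        "string": "A minimal illustrative sketch of this reranking step in Python (the names rerank, weights and rnnlm_score are hypothetical; real feature weights come from the MERT tuning described above):\n\ndef rerank(candidates, weights, rnnlm_score):\n    # candidates: list of (sentence, {feature_name: value}) merged from the SPE and SMT n-best lists\n    def total(cand):\n        sent, feats = cand\n        score = sum(weights[f] * v for f, v in feats.items())\n        # add the RNNLM score with its tuned weight\n        return score + weights['rnnlm'] * rnnlm_score(sent)\n    # return the candidate sentence with the highest combined score\n    return max(candidates, key=total)[0]"
                    },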
                    {
                        "id": 35,
                        "string": "For ja-en, en-ja, ja-zh and zh-ja, we used RNNLMs trained by the first 500k sentences in the training data of ASPEC."
                    },
                    {
                        "id": 36,
                        "string": "For JPOzh-ja and JPOko-ja, we used 500k sentences which were evenly extracted from 4 sections in JPO Patent Corpus."
                    },
                    {
                        "id": 37,
                        "string": "All RNNLMs were trained with 500 hidden layers and 50 classes by RNNLM toolkit 5 ."
                    },
                    {
                        "id": 38,
                        "string": "Tuning RBMT and pre/postprocessing Technical Term Dictionaries As the preparation for each task, we selected technical term dictionaries by the same principle in the last workshop (Sonoh el al., 2014) ."
                    },
                    {
                        "id": 39,
                        "string": "For JPOzh-ja, we used an additional patent dictionary, which is extracted from JPO Chinese-Japanese dictionary 6 ."
                    },
                    {
                        "id": 40,
                        "string": "Furthermore, for JPOko-ja, we used n-gram probability dictionary, which was made from monolingual patent resources, in order to resolve word sense disambiguation ."
                    },
                    {
                        "id": 41,
                        "string": "English Word Correction To improve translation of sentences including misspelled words in English, we applied correction processing based on an edited distance."
                    },
                    {
                        "id": 42,
                        "string": "We replaced the word considered as misspelling with a word which had the smallest edited distance in the training data."
                    },
                    {
                        "id": 43,
                        "string": "However, because SMT and SPE basically have robustness to the misspelling, we confined words to be replaced to words which remain as unknown words in SMT and SPE results."
                    },
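                    {
                        "id": "43a",
                        "string": "A minimal illustrative sketch of this correction step (edit_distance and correct are hypothetical names); a standard dynamic-programming Levenshtein distance selects the closest word from the training-data vocabulary:\n\ndef edit_distance(a, b):\n    # classic Levenshtein distance, O(len(a) * len(b))\n    prev = list(range(len(b) + 1))\n    for i, ca in enumerate(a, 1):\n        cur = [i]\n        for j, cb in enumerate(b, 1):\n            cur.append(min(prev[j] + 1,                 # deletion\n                           cur[j - 1] + 1,              # insertion\n                           prev[j - 1] + (ca != cb)))   # substitution\n        prev = cur\n    return prev[-1]\n\ndef correct(word, vocabulary):\n    # replace a word left unknown by SMT/SPE with the closest in-vocabulary word\n    return min(vocabulary, key=lambda v: edit_distance(word, v))\n\n# e.g. correct('continous', {'continuous', 'continents'}) -> 'continuous'"
                    },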
                    {
                        "id": 44,
                        "string": "Japanese KATAKANA Normalization In the case where a target language is Japanese; we applied normalization of KATAKANA notation."
                    },
                    {
                        "id": 45,
                        "string": "In advance of translation, we counted the frequency of KATAKANA notation, which has fluctuations of prolonged sound mark, in the target sentences of the training data."
                    },
                    {
                        "id": 46,
                        "string": "In the translation results, KATAKANA fluctuations were replaced with those of highly-frequent notations, such as \"from スクリュ to スクリュー\" and \"from サーバー to サーバ\"."
                    },
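                    {
                        "id": "46a",
                        "string": "A minimal illustrative sketch of this normalization (build_normalizer and tokenize are hypothetical names; for simplicity it only handles fluctuations of the trailing prolonged sound mark and considers every token, where a real system would restrict itself to KATAKANA):\n\nfrom collections import Counter\n\ndef build_normalizer(target_sentences, tokenize):\n    # count token frequencies on the target side of the training data\n    counts = Counter(t for s in target_sentences for t in tokenize(s))\n    norm = {}\n    for token in counts:\n        stem = token.rstrip('ー')  # drop trailing prolonged sound marks\n        variants = [token, stem, stem + 'ー']\n        # map each token to its most frequent variant, e.g. スクリュ -> スクリュー\n        norm[token] = max(variants, key=lambda v: counts.get(v, 0))\n    return norm"
                    },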
                    {
                        "id": 47,
                        "string": "By applying normalization, we got improvements of about 0.5 BLEU in RBMT."
                    },
                    {
                        "id": 48,
                        "string": "Furthermore, we replaced the ideographic comma \"、\" in number expression with a normal comma \",\" for translation results in Japanese."
                    },
                    {
                        "id": 49,
                        "string": "Other Post Processing In order to reduce unknown words in SMT, we applied RBMT to SMT results."
                    },
                    {
                        "id": 50,
                        "string": "For example, in ja-zh, we translated KATAKANA words, which remain in SMT results, into Chinese or English words, if the words were found in RBMT dictionaries."
                    },
                    {
                        "id": 51,
                        "string": "Also, Hangul words in SMT results of JPOko-ja were translated into Japanese words."
                    },
                    {
                        "id": 52,
                        "string": "Experimental Results This section shows experimental results of our translation systems."
                    },
                    {
                        "id": 53,
                        "string": "Table 1 and 2 show the overall BLEU and RIBES scores for \"scientific papers subtask\" and \"patents subtask\", respectively."
                    },
                    {
                        "id": 54,
                        "string": "COMB means results of the system combination and Rerank means results of reranking using RNNLM (100best for SMT and SPE, 200-best for COMB)."
                    },
                    {
                        "id": 55,
                        "string": "In all tasks, SPE improves translation results of RBMT on the BLEU and RIBES."
                    },
                    {
                        "id": 56,
                        "string": "In tasks except JPOko-ja, SPE achieves performance equal to or better than phrase-based SMT."
                    },
                    {
                        "id": 57,
                        "string": "Moreover, in most tasks, Rerank improves about 0.3-0.5 BLEU score, and COMB shows better performance than other systems."
                    },
                    {
                        "id": 58,
                        "string": "In JPOko-ja, SMT, SPE and COMB show very high performances which are close to 70 BLEU, and SMT with reranking achieves the highest BLEU and RIBES scores."
                    },
                    {
                        "id": 59,
                        "string": "In ja-en, en-ja, ja-zh and zh-ja, more than half of translations selected from SPE and the others selected from SMT."
                    },
                    {
                        "id": 60,
                        "string": "In particular SPE accounted for about 80% translations in ja-en, en-ja and zh-ja."
                    },
                    {
                        "id": 61,
                        "string": "On the other hand, more than half of translations selected from SMT in JPOzh-ja and JPOko-ja."
                    },
                    {
                        "id": 62,
                        "string": "Table 3 shows the translation examples that COMB achieves better results than SPE with reranking in sentence-level BLEU."
                    },
                    {
                        "id": 63,
                        "string": "Finally, we compared between phrase-based model and hierarchical phrase-based model."
                    },
                    {
                        "id": 64,
                        "string": "Table 4 shows comparison in ja-zh task."
                    },
                    {
                        "id": 65,
                        "string": "In all systems including SPE, hierarchical phrase-based model improves about 0.4 BLEU."
                    },
                    {
                        "id": 66,
                        "string": "We applied hierarchical phrase-based model to ja-zh only, because significant improvements were not confirmed in other language pairs."
                    },
                    {
                        "id": 67,
                        "string": ".716 31.82 0.770 29.60 0.810 37 .47 0.827 Official Results This section shows official results of our translation systems."
                    },
                    {
                        "id": 68,
                        "string": "We basically submitted two results, one is SPE 7 and the other is the system combination between SPE and SMT."
                    },
                    {
                        "id": 69,
                        "string": "Furthermore, top two systems on the BLEU scores were evaluated by the crowdsourcing."
                    },
                    {
                        "id": 70,
                        "string": "In the crowdsourcing evaluation, pair-wise evaluation against the baseline system (phrase-based SMT) was performed by 5 evaluators, and HUMAN score was calculated 7 In JPOko-ja, because SMT showed higher BLEU score than SPE, we submitted SMT result."
                    },
                    {
                        "id": 71,
                        "string": "(Nakazawa et al., 2014) ."
                    },
                    {
                        "id": 72,
                        "string": "In WAT2015 results (Nakazawa et al., 2015) , we note that Toshiba systems were ranked as one of the top three systems in human evaluation in ja-en, ja-zh and JPOzh-ja."
                    },
                    {
                        "id": 73,
                        "string": "Especially, ja-zh achieved the highest score although the BLEU score is lower than other systems."
                    },
                    {
                        "id": 74,
                        "string": "On the other hand, as for JPOko-ja, we got a comparatively high BLEU score, but were disappointed by its low HUMAN score."
                    },
                    {
                        "id": 75,
                        "string": "Table 5 and 6 are the overall official results for each task, respectively."
                    },
                    {
                        "id": 76,
                        "string": "In ja-zh and zh-ja, COMB shows higher HUMAN score than SPE."
                    },
                    {
                        "id": 77,
                        "string": "On the other hand, SPE or SMT is higher than COMB in ja-en, JPOzh-ja and JPOko-ja."
                    },
                    {
                        "id": 78,
                        "string": "These results indicate that the system combination improves human evaluation of Chinese translation in the scientific documents, at least."
                    },
                    {
                        "id": 79,
                        "string": "We guess that the system combination between equivalent systems achieves complementary translation to improve human evaluations."
                    },
                    {
                        "id": 80,
                        "string": "For example, BLEU scores of SPE and SMT are nearly equal in ja-zh and zh-ja (shown in Table 1 )."
                    },
                    {
                        "id": 81,
                        "string": "Discussion On receiving the crowdsourcing results, we analyzed differences between our system and Online A, which obtained the highest HUMAN score in JPOko-ja."
                    },
                    {
                        "id": 82,
                        "string": "Table 7 shows the comparison between our system (COMB) and Online A."
                    },
                    {
                        "id": 83,
                        "string": "Here, 'Baseline' column is the HUMAN score in the result of crowdsourcing (official results) and the other was evaluated by inner evaluators."
                    },
                    {
                        "id": 84,
                        "string": "The inner evaluation was conducted excluding expressional differences as described in detail below."
                    },
                    {
                        "id": 85,
                        "string": "Although Online A achieves a very high HUMAN score to the baseline system, superior results of COMB over Online A are shown in the pair-wise evaluation."
                    },
                    {
                        "id": 86,
                        "string": "We hypothesize that the significant difference between the crowdsourcing and the inner evaluators occurs from the evaluation of the number expressions, such as \"システム(100)\" and \"システム 100\"."
                    },
                    {
                        "id": 87,
                        "string": "In the training data of JPOko-ja, a lot of brackets of numbers in the source sentences disappear in the target sentences."
                    },
                    {
                        "id": 88,
                        "string": "Thus, brackets are dropped in SPE and SMT."
                    },
                    {
                        "id": 89,
                        "string": "As for well-translated target sentences such as JPOko-ja, it is possible that evaluators in the crowdsourcing judged faithful translation as better by focusing on existence of brackets."
                    },
                    {
                        "id": 90,
                        "string": "Conclusion The overview of Toshiba machine translation systems, which applied the statistical post editing and the system combination with RNNLM, is described in this paper."
                    },
                    {
                        "id": 91,
                        "string": "SPE and reranking with RNNLM achieved higher BLEU than phrase-based SMT in most language pairs."
                    },
                    {
                        "id": 92,
                        "string": "Furthermore, the system combination between SPE and SMT improved BLEU score in Japanese-English pair and Japanese-Chinese pair."
                    },
                    {
                        "id": 93,
                        "string": "In the other hand, a straightforward correlation between automatic evaluation  and human evaluation is not confirmed in our system."
                    },
                    {
                        "id": 94,
                        "string": "We need to establish the combination of multi-systems for practical use purpose, taking advantage of their characteristics and qualities."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 16
                    },
                    {
                        "section": "RBMT System",
                        "n": "2.1",
                        "start": 17,
                        "end": 20
                    },
                    {
                        "section": "Statistical Post Editing",
                        "n": "2.2",
                        "start": 21,
                        "end": 28
                    },
                    {
                        "section": "System Combination using RNNLM",
                        "n": "2.3",
                        "start": 29,
                        "end": 36
                    },
                    {
                        "section": "Tuning",
                        "n": "3",
                        "start": 37,
                        "end": 37
                    },
                    {
                        "section": "Technical Term Dictionaries",
                        "n": "3.1",
                        "start": 38,
                        "end": 40
                    },
                    {
                        "section": "English Word Correction",
                        "n": "3.2",
                        "start": 41,
                        "end": 43
                    },
                    {
                        "section": "Japanese KATAKANA Normalization",
                        "n": "3.3",
                        "start": 44,
                        "end": 48
                    },
                    {
                        "section": "Other Post Processing",
                        "n": "3.4",
                        "start": 49,
                        "end": 51
                    },
                    {
                        "section": "Experimental Results",
                        "n": "4",
                        "start": 52,
                        "end": 66
                    },
                    {
                        "section": "Official Results",
                        "n": "5",
                        "start": 67,
                        "end": 80
                    },
                    {
                        "section": "Discussion",
                        "n": "6",
                        "start": 81,
                        "end": 89
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 90,
                        "end": 94
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1085-Table1-1.png",
                        "caption": "Table 1: Overall BLEU and RIBES scores for “scientific papers subtask”.",
                        "page": 2,
                        "bbox": {
                            "x1": 94.56,
                            "x2": 500.15999999999997,
                            "y1": 95.52,
                            "y2": 215.04
                        }
                    },
                    {
                        "filename": "../figure/image/1085-Table2-1.png",
                        "caption": "Table 2: Overall BLEU and RIBES scores for “patents subtask”.",
                        "page": 2,
                        "bbox": {
                            "x1": 174.72,
                            "x2": 420.47999999999996,
                            "y1": 238.56,
                            "y2": 358.08
                        }
                    },
                    {
                        "filename": "../figure/image/1085-Table6-1.png",
                        "caption": "Table 6: Overall official results for “patents subtask”.",
                        "page": 4,
                        "bbox": {
                            "x1": 174.72,
                            "x2": 421.44,
                            "y1": 191.04,
                            "y2": 268.32
                        }
                    },
                    {
                        "filename": "../figure/image/1085-Table7-1.png",
                        "caption": "Table 7: The relationship between automatic evaluations and human evaluations.",
                        "page": 4,
                        "bbox": {
                            "x1": 303.84,
                            "x2": 530.4,
                            "y1": 509.76,
                            "y2": 618.72
                        }
                    },
                    {
                        "filename": "../figure/image/1085-Table5-1.png",
                        "caption": "Table 5: Overall official results for “scientific papers subtask”. B, R and H mean BLEU, RIBES, HUMAN, respectively. HUMAN was evaluated by 5 evaluators using crowdsorcing.",
                        "page": 4,
                        "bbox": {
                            "x1": 67.67999999999999,
                            "x2": 527.04,
                            "y1": 108.0,
                            "y2": 169.44
                        }
                    },
                    {
                        "filename": "../figure/image/1085-Table4-1.png",
                        "caption": "Table 4: A Comparison of Phrase-based Model.",
                        "page": 3,
                        "bbox": {
                            "x1": 64.8,
                            "x2": 293.28,
                            "y1": 575.04,
                            "y2": 694.0799999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1085-Table3-1.png",
                        "caption": "Table 3: Translation examples indicating that COMB achieves better results than SPE in sentece-level BLEU.",
                        "page": 3,
                        "bbox": {
                            "x1": 67.67999999999999,
                            "x2": 527.04,
                            "y1": 106.56,
                            "y2": 548.16
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-30"
        },
        {
            "slides": {
                "0": {
                    "title": "SMT Experiments",
                    "text": [
                        "Experimental results of SMT",
                        "st Moses Aligner BLEU RIBES Training time",
                        "Table: Evaluation results by using different aligner (GIZA++ and MGIZA) based on the",
                        "Anymalign + Cutnalign BLEU Training time Timeout (s) i",
                        "zh-ja zh-ja zh-ja zh-ja",
                        "Table: Evaluation results by using the alignment method of combining sampling-based",
                        "alignment and bilingual hierarchical sub-sentential alignment methods."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": [
                        "figure/image/1086-Table5-1.png"
                    ]
                }
            },
            "paper_title": "Sampling-based Alignment and Hierarchical Sub-sentential Alignment in Chinese-Japanese Translation of Patents",
            "paper_id": "1086",
            "paper": {
                "title": "Sampling-based Alignment and Hierarchical Sub-sentential Alignment in Chinese-Japanese Translation of Patents",
                "abstract": "This paper describes Chinese-Japanese translation systems based on different alignment methods using the JPO corpus and our submission (ID: WASUIPS) to the subtask of the 2015 Workshop on Asian Translation. One of the alignment methods used is bilingual hierarchical sub-sentential alignment combined with sampling-based multilingual alignment. We also accelerated this method and in this paper, we evaluate the translation results and time spent on several machine translation tasks. The training time is much faster than the standard baseline pipeline (GIZA++/Moses) and MGIZA/Moses. 1",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Phrase-based Statistical Machine Translation (PB-SMT) as a data-oriented approach to machine translation has been widely used for over 10 years."
                    },
                    {
                        "id": 1,
                        "string": "The Moses (Koehn et al., 2007) open source statistical machine translation toolkit was developed by the Statistical Machine Translation Group at the University of Edinburgh."
                    },
                    {
                        "id": 2,
                        "string": "During the three processes (training, tuning and decoding) for building a phrase-based translation system using Moses, training is the most important step as it creates the core knowledge used in machine translation."
                    },
                    {
                        "id": 3,
                        "string": "Word or phrase alignment in the training step allows to obtain translation relationships among the words or phrases in a sentence-aligned bi-corpus."
                    },
                    {
                        "id": 4,
                        "string": "Word or phrase alignment affects the quality of translation."
                    },
                    {
                        "id": 5,
                        "string": "It is also one of the most time-consuming processing step."
                    },
                    {
                        "id": 6,
                        "string": "The probabilistic approach attempts at determining the best set of alignment links between source and target words or phrases in parallel sentences."
                    },
                    {
                        "id": 7,
                        "string": "IBM models (Brown et al., 1993) and HMM alignment models (Vogel et al., 1996) , which are typical implementation of the EM algorithm (Dempster et al., 1977) , are the most widely used representatives in this category."
                    },
                    {
                        "id": 8,
                        "string": "GIZA++ (Och and Ney, 2003) implemented IBM Models, it aligns words based on statistical models."
                    },
                    {
                        "id": 9,
                        "string": "It is a global optimization process simultaneously considers all possible associations in the entire corpus and estimates the parameters of the parallel corpus."
                    },
                    {
                        "id": 10,
                        "string": "Several improvements were made: MGIZA (Gao and Vogel, 2008) is a parallel implementation of IBM models."
                    },
                    {
                        "id": 11,
                        "string": "However, the parallelization may lead to slightly different final alignment results, thus preventing reproduction of results to a certain extent."
                    },
                    {
                        "id": 12,
                        "string": "The associative approaches, introduced in (Gale and Church, 1991), do not rely on an alignment model, but on independence statistical measures."
                    },
                    {
                        "id": 13,
                        "string": "The Dice coefficient, mutual information (Gale and Church, 1991) , and likelihood ratio (Dunning, 1993) are representative cases of this approach."
                    },
                    {
                        "id": 14,
                        "string": "The associative approaches use a local maximization process in which each sentence is processed independently."
                    },
                    {
                        "id": 15,
                        "string": "Sampling-based multilingual alignment (Anymalign) (Lardilleux et al., 2013) and hierarchical sub-sentential alignment (Cutnalign) (Lardilleux et al., 2012) are two associative approaches."
                    },
                    {
                        "id": 16,
                        "string": "Anymalign 1 is an open source multilingual associative aligner (Lardilleux and Lepage, 2009; Lardilleux et al., 2013) ."
                    },
                    {
                        "id": 17,
                        "string": "This method samples large numbers of sub-corpora randomly to obtain source and target word or phrase occurrence distributions."
                    },
                    {
                        "id": 18,
                        "string": "The more often two words or phrases have the same occurrence distribution over particular sub-corpora, the higher the association between them."
                    },
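                    {
                        "id": "18a",
                        "string": "A minimal illustrative sketch of this sampling idea (not Anymalign's actual implementation; sample_associations and bicorpus are hypothetical names): over many random sub-corpora, source and target words that occur in exactly the same sentences of a sample receive an association vote.\n\nimport random\nfrom collections import defaultdict\n\ndef sample_associations(bicorpus, n_samples=1000):\n    # bicorpus: list of (source_tokens, target_tokens) sentence pairs\n    votes = defaultdict(int)\n    for _ in range(n_samples):\n        sub = random.sample(bicorpus, random.randint(1, len(bicorpus)))\n        src_occ, tgt_occ = defaultdict(set), defaultdict(set)\n        for i, (src, tgt) in enumerate(sub):\n            for w in src:\n                src_occ[w].add(i)\n            for w in tgt:\n                tgt_occ[w].add(i)\n        by_sig = defaultdict(lambda: ([], []))  # occurrence signature -> (source words, target words)\n        for w, occ in src_occ.items():\n            by_sig[frozenset(occ)][0].append(w)\n        for w, occ in tgt_occ.items():\n            by_sig[frozenset(occ)][1].append(w)\n        for srcs, tgts in by_sig.values():\n            for s in srcs:\n                for t in tgts:\n                    votes[(s, t)] += 1  # identical occurrence distribution in this sample\n    return votes"
                    },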
                    {
                        "id": 19,
                        "string": "We can run Anymalign by setting with -t (running time) option and stop it at any time, and the option -i allows to to extract longer phrases by enforcing n-grams to be considered as tokens."
                    },
                    {
                        "id": 20,
                        "string": "For pre-segmented texts, option -i allows to group words into phrases more easily."
                    },
                    {
                        "id": 21,
                        "string": "Cutnalign is a bilingual hierarchical subsentential alignment method (Lardilleux et al., 2012) ."
                    },
                    {
                        "id": 22,
                        "string": "It is based on a recursive binary segmentation process of the alignment matrix between a source sentence and its corresponding target sentence."
                    },
                    {
                        "id": 23,
                        "string": "We make use of this method in combination with Anymalign."
                    },
                    {
                        "id": 24,
                        "string": "In the experiments, reported in this paper, we extend the work to decrease time costs in the training step."
                    },
                    {
                        "id": 25,
                        "string": "We obtained comparable results in only one fifth of the training time required by the GIZA++/Moses baseline pipeline."
                    },
                    {
                        "id": 26,
                        "string": "Chinese and Japanese data used The data used in our systems are the Chinese-Japanese JPO Patent Corpus (JPC) 2 provided by WAT 2015 for the patents subtask (Nakazawa et al., 2015) ."
                    },
                    {
                        "id": 27,
                        "string": "It contains 1 million Chinese-Japanese parallel sentences in four domains in the training data."
                    },
                    {
                        "id": 28,
                        "string": "These are Chemistry, Electricity, Mechanical engineering, and Physics."
                    },
                    {
                        "id": 29,
                        "string": "We used sentences of 40 words or less than 40 words as our training data for the translation models, but use all of the Japanese sentences in the parallel corpus for training the language models."
                    },
                    {
                        "id": 30,
                        "string": "We used all of the development data for tuning."
                    },
                    {
                        "id": 31,
                        "string": "For Chinese and Japanese segmentation we used the Stanford Segmenter (version: 2014-01-04 with Chinese Penn Treebank (CTB) model) 3 and Juman (version 7.0) 4 ."
                    },
                    {
                        "id": 32,
                        "string": "Table 1 shows some statistics on the data we used in our systems (after tokenization, lowercase and clean)."
                    },
                    {
                        "id": 33,
                        "string": "Bilingual hierarchical sub-sentential alignment method Cutnalign as a bilingual hierarchical subsentential alignment method based on a recursive binary segmentation process of the alignment matrix between a source sentence and its translation."
                    },
                    {
                        "id": 34,
                        "string": "It is a three-step approach: • measure the strength of the translation link between any source and target pair of words; • compute the optimal joint clustering of a bipartite graph to search the best alignment; • segment and align a pair of sentences."
                    },
                    {
                        "id": 35,
                        "string": "When building alignment matrices, the strength between two words is evaluated using the following formula (Lardilleux et al., 2012) ."
                    },
                    {
                        "id": 36,
                        "string": "w(s, t) = p(s|t) × p(t|s) (1) (p(s|t) and p(t|s)) are translation probabilities estimated by Anymalign."
                    },
                    {
                        "id": 37,
                        "string": "An example of alignment matrix is shown in Table 2 ."
                    },
                    {
                        "id": 38,
                        "string": "The optimal joint clustering of a bipartite graph is computed recursively using the following formula for searching the best alignment between words in the source and target languages (Zha et al., 2001; Lardilleux et al., 2012) ."
                    },
                    {
                        "id": 39,
                        "string": "cut(X, Y ) = W (X, Y ) + W (X, Y ) (2) X, X, Y , Y denote the segmentation of the sentences."
                    },
                    {
                        "id": 40,
                        "string": "Here the block we start with is the entire matrix."
                    },
                    {
                        "id": 41,
                        "string": "Splitting horizontally and vertically into two parts gives four sub-blocks."
                    },
                    {
                        "id": 42,
                        "string": "W (X, Y ) = s∈X,t∈Y w(s, t) (3) W (X, Y ) is the sum of all translation strengths between all source and target words inside a subblock (X, Y )."
                    },
                    {
                        "id": 43,
                        "string": "The point where to is found on the x and y which minimize N cut (Lardilleux et al., 2012) : shows several segmentations out of all the possible segmentation in two blocks by computing the sub-sentential alignment between a Chinese and a Japanese sentences."
                    },
                    {
                        "id": 44,
                        "string": "For each word pair (x, y), we compute N cut (x, y) ."
                    },
                    {
                        "id": 45,
                        "string": "In this case, we start at word pair (根据, それら), the search space is the rectangle area [(根据, それら), (。, 。)]."
                    },
                    {
                        "id": 46,
                        "string": "In Table 3 , only 7 out of all the possible segmentations in two blocks are shown."
                    },
                    {
                        "id": 47,
                        "string": "The number of possible segmentation is: the length of the Japanese sentence minus one, multiplied by the length of the Chinese sentence minus one, multiplied by two, as there are two possible direction for segmenting."
                    },
                    {
                        "id": 48,
                        "string": "After computing all N cut(x, y), we compare and find the minimal N cut(x, y)."
                    },
                    {
                        "id": 49,
                        "string": "Table 4 shows the flow of recursive segmentation and alignment."
                    },
                    {
                        "id": 50,
                        "string": "N cut(X, Y ) = cut(X, Y ) cut(X, Y ) + 2 × W (X, Y ) + cut(X, Y ) cut(X, Y ) + 2 × W (X, Y ) (4) In the our experiments, we introduced two types of improvements (Yang and Lepage, 2015) compared to the original implementation."
                    },
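                    {
                        "id": "50a",
                        "string": "A minimal illustrative sketch of one step of the recursive binary segmentation (best_split and block_sum are hypothetical names; only the direct pairing of X with Y and of their complements is shown, and a small epsilon guards against empty cuts):\n\ndef best_split(w):\n    # w[s][t] = p(s|t) * p(t|s), the translation strength of Eq. (1)\n    S, T = len(w), len(w[0])\n    def block_sum(rows, cols):  # W(X, Y) of Eq. (3)\n        return sum(w[s][t] for s in rows for t in cols)\n    best = None\n    for x in range(1, S):\n        for y in range(1, T):\n            X, Xb = range(0, x), range(x, S)\n            Y, Yb = range(0, y), range(y, T)\n            cut = block_sum(X, Yb) + block_sum(Xb, Y)  # Eq. (2)\n            ncut = (cut / (cut + 2 * block_sum(X, Y) + 1e-12)\n                    + cut / (cut + 2 * block_sum(Xb, Yb) + 1e-12))  # Eq. (4)\n            if best is None or ncut < best[0]:\n                best = (ncut, x, y)\n    return best  # (minimal Ncut, source split index, target split index)"
                    },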
                    {
                        "id": 51,
                        "string": "The first one, introduces multi-processing in both the sampling-based alignment method and hierarchical sub-sentential alignment method so as to trivially accelerate the overall alignment process."
                    },
                    {
                        "id": 52,
                        "string": "We also re-implement the core of Cutnalign in C. The second one, approximations in the computation of N cut accelerate some decisions."
                    },
                    {
                        "id": 53,
                        "string": "Also a method to reduce the search space in hierarchical subsentential alignment has been introduced, so that important speed-ups are obtained."
                    },
                    {
                        "id": 54,
                        "string": "We refer the reader to (Yang and Lepage, 2015) for a detailed description of these improvements."
                    },
                    {
                        "id": 55,
                        "string": "4 Experiments based on different alignment methods Experiment settings Here, we basically perform experiments with GIZA++ or MGIZA."
                    },
                    {
                        "id": 56,
                        "string": "The phrase tables are extracted from the alignments obtained using the grow-diag-final-and heuristic (Ayan and Dorr, 2006) integrated in the Moses toolkit."
                    },
                    {
                        "id": 57,
                        "string": "Our sampling-based alignment method and hierarchical sub-sentential alignment method are also evaluated within a PB-SMT system built by using the Moses toolkit, the Ken Language Modeling toolkit (Heafield, 2011) and a lexicalized reordering model (Koehn et al., 2005) ."
                    },
                    {
                        "id": 58,
                        "string": "We built systems from Chinese to Japanese."
                    },
                    {
                        "id": 59,
                        "string": "Each experiment was run using the same data sets (see Section 2)."
                    },
                    {
                        "id": 60,
                        "string": "Translations were evaluated using BLEU (Papineni et al., 2002) and RIBES (Isozaki et al., 2010) ."
                    },
                    {
                        "id": 61,
                        "string": "We used Anymalign (i=2, two words can be considered as one token) and Cutnalign to build phrase tables."
                    },
                    {
                        "id": 62,
                        "string": "As a timeout (-t) should be given, we set two different timeouts (5400 sec."
                    },
                    {
                        "id": 63,
                        "string": "and 1200 sec.)."
                    },
                    {
                        "id": 64,
                        "string": "We also use different Cutnalign versions where core components are implemented in C or Python."
                    },
                    {
                        "id": 65,
                        "string": "We passed word-to-word associations output by Anymalign (i=2) to Cutnalign which produces sub-sentential alignments, which are in turn passed to the grow-dial-final-and heuristic of the Moses toolkit to build phrase tables."
                    },
                    {
                        "id": 66,
                        "string": "Results Evaluation results using different alignment methods based on the same data sets are given in Tables 5 and 7."
                    },
                    {
                        "id": 67,
                        "string": "The system built based on GIZA++/Moses pipeline as a baseline system is given in Table 5 ."
                    },
                    {
                        "id": 68,
                        "string": "We also show the evaluation results obtained by the WAT 2015 automatic evaluation 5 in Table 6 and 8."
                    },
                    {
                        "id": 69,
                        "string": "The results in Table 7 and 8 show that there are no significant differences among the evaluation results based on different versions of Moses, different Anymalign timeouts or different versions of Cutnalign."
                    },
                    {
                        "id": 70,
                        "string": "However, the training times changed considerably depending on the timeouts for Anymalign."
                    },
                    {
                        "id": 71,
                        "string": "The fastest training time is obtained with Moses version 2.1.1, a timeout of 1200 sec."
                    },
                    {
                        "id": 72,
                        "string": "for Anymalign and the C version of Cutnalign: 57 minutes, i.e., about one fifth of the time used by GIZA++ or MGIZA (Table 5 and 6) ."
                    },
                    {
                        "id": 73,
                        "string": "We also checked the confidence intervals between using GIZA++ and our method (the fastest one): 37.24 ± 0.86 and 35.72 ± 0.90."
                    },
                    {
                        "id": 74,
                        "string": "The probability of actually getting them (p-value) is 0."
                    },
                    {
                        "id": 75,
                        "string": "Conclusion In this paper, we have shown that it is possible to accelerate development of SMT systems following the work by Lardilleux et al."
                    },
                    {
                        "id": 76,
                        "string": "(2012) and Yang and Lepage (2015) on bilingual hierarchical sub-sentential alignment."
                    },
                    {
                        "id": 77,
                        "string": "We performed several machine translation experiments using different alignment methods and obtained a significant reduction of processing training time."
                    },
                    {
                        "id": 78,
                        "string": "Setting different timeouts for Anymalign did not change the translation quality."
                    },
                    {
                        "id": 79,
                        "string": "In other word, we get a relative steady translation quality even when less time is allotted to word-to-word association computation."
                    },
                    {
                        "id": 80,
                        "string": "Here, the fastest training time was only 57 minutes, one fifth compared with the use of GIZA++ or MGIZA."
                    },
                    {
                        "id": 81,
                        "string": "References"
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 25
                    },
                    {
                        "section": "Chinese and Japanese data used",
                        "n": "2",
                        "start": 26,
                        "end": 32
                    },
                    {
                        "section": "Bilingual hierarchical sub-sentential alignment method",
                        "n": "3",
                        "start": 33,
                        "end": 54
                    },
                    {
                        "section": "Experiment settings",
                        "n": "4.1",
                        "start": 55,
                        "end": 65
                    },
                    {
                        "section": "Results",
                        "n": "4.2",
                        "start": 66,
                        "end": 74
                    },
                    {
                        "section": "Conclusion",
                        "n": "5",
                        "start": 75,
                        "end": 81
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1086-Table3-1.png",
                        "caption": "Table 3: 7 out of all the possible segmentation in two blocks are shown.",
                        "page": 5,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 530.4,
                            "y1": 146.88,
                            "y2": 657.12
                        }
                    },
                    {
                        "filename": "../figure/image/1086-Table1-1.png",
                        "caption": "Table 1: Statistics of our baseline training data of JPC.",
                        "page": 1,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 525.12,
                            "y1": 62.879999999999995,
                            "y2": 180.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1086-Table4-1.png",
                        "caption": "Table 4: Steps in recursive segmentation and alignment result using sampling-based alignment and hierarchical sub-sentential alignment method.",
                        "page": 6,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 530.4,
                            "y1": 76.8,
                            "y2": 587.04
                        }
                    },
                    {
                        "filename": "../figure/image/1086-Table5-1.png",
                        "caption": "Table 5: Evaluation results by using different aligner (GIZA++ and MGIZA) based on the data of JPC given in Table 1.",
                        "page": 6,
                        "bbox": {
                            "x1": 149.76,
                            "x2": 448.32,
                            "y1": 657.6,
                            "y2": 708.9599999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1086-Table6-1.png",
                        "caption": "Table 6: Evaluation results (Web server automatic evaluation) by using different aligner (GIZA++ and MGIZA ) based on the data of JPC given in Table 1.",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 546.24,
                            "y1": 110.88,
                            "y2": 176.16
                        }
                    },
                    {
                        "filename": "../figure/image/1086-Table7-1.png",
                        "caption": "Table 7: Evaluation results by using the alignment method of combining sampling-based alignment and bilingual hierarchical sub-sentential alignment methods based on the data of JPC given in Table 1. In decreasing order of BLEU cores. Here, 2 (c) shows option -i of Anymalign is 2, and Cutnlaign version where core component is implemented in C.",
                        "page": 7,
                        "bbox": {
                            "x1": 133.92,
                            "x2": 464.15999999999997,
                            "y1": 315.84,
                            "y2": 413.28
                        }
                    },
                    {
                        "filename": "../figure/image/1086-Table8-1.png",
                        "caption": "Table 8: Evaluation results (Web server automatic evaluation) by using the alignment method of combining sampling-based alignment and bilingual hierarchical sub-sentential alignment methods based on the data of JPC given in Table 1. In decreasing order of BLEU cores.",
                        "page": 7,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 549.12,
                            "y1": 580.8,
                            "y2": 664.3199999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1086-Table2-1.png",
                        "caption": "Table 2: An example of an alignment matrix which contains the translation strength for each word pair (Chinese–Japanese). The scores are obtained using Anymalign’s output. Computing by w.",
                        "page": 4,
                        "bbox": {
                            "x1": 85.92,
                            "x2": 511.2,
                            "y1": 313.44,
                            "y2": 472.32
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-31"
        },
        {
            "slides": {
                "0": {
                    "title": "Adequacy in Neural Machine Translation",
                    "text": [
                        "Source: und wir benutzen dieses wort mit solcher verachtung",
                        "Repetitions Reference: and we say that word with such contempt",
                        "Translation: and we use this word with such contempt contempt",
                        "Ein 28-jahriger Koch, der kurzlich nach Pittsburgh gezogen war, wurde diese Woche im Treppenhaus eines ortlichen Einkaufszentrums tot aufgefunden .",
                        "Translation: A 28-year-old chef who recently moved to Pittsburgh was found dead in the staircase this week .",
                        "Pittsburgh was found dead in the staircase of a local shopping mall this week ."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Previous Work",
                    "text": [
                        "Conditioning on coverage vectors to track",
                        "Gating architectures and adaptive attention to control",
                        "Coverage penalty during decoding (Wu, 2016)."
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Main Contributions",
                    "text": [
                        "J'ai mange le sandwich",
                        "1. Fertility-based Neural Machine Translation Model",
                        "(Bounds on source attention weights)",
                        "2. Novel attention transform function: Constrained Sparsemax",
                        "3. Evaluation Metrics: REP-Score and DROP-Score"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Attention Transform Functions",
                    "text": [
                        "Sparsemax: Euclidean projection of z provides sparse probability distributions.",
                        "Constrained Softmax: Returns the distribution closest to softmax whose attention probabilities are bounded by upper bounds u."
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Constrained Sparsemax",
                    "text": [
                        "Provides sparse and bounded probability distributions.",
                        "This transformation has two levels of sparsity: over time steps & over attended words at each step.",
                        "Efficient linear and sublinear time algorithms for forward and backward propagation."
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "6": {
                    "title": "Visualization Attention transform functions",
                    "text": [
                        "csparsemax provides sparse and constrained probabilities."
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": [
                        "figure/image/1092-Figure1-1.png"
                    ]
                },
                "7": {
                    "title": "Fertility based NMT",
                    "text": [
                        "Allocate fertilities for each source word as attention budgets that exhaust over decoding.",
                        "Fertility Predictor : Train biLSTM model supervised by fertilities from fast_align (IBM Model 2).",
                        "Exhaustion strategy to encourage more attention for words with larger credit remaining:"
                    ],
                    "page_nums": [
                        11,
                        12
                    ],
                    "images": []
                },
                "9": {
                    "title": "Evaluation Metrics REP Score and DROP Score",
                    "text": [
                        "Penalizes n-gram repetitions in predicted translations.",
                        "Normalize by number of words in reference corpus.",
                        "Find word alignments from source to reference & source to predicted.",
                        "% of source words aligned with some word in reference, but not with any word in predicted translation."
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "10": {
                    "title": "Results",
                    "text": [
                        "softmax softmax+CovPenalty softmax+CovVector csparsemax"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                }
            },
            "paper_title": "Sparse and Constrained Attention for Neural Machine Translation",
            "paper_id": "1092",
            "paper": {
                "title": "Sparse and Constrained Attention for Neural Machine Translation",
                "abstract": "In NMT, words are sometimes dropped from the source or generated repeatedly in the translation. We explore novel strategies to address the coverage problem that change only the attention transformation. Our approach allocates fertilities to source words, used to bound the attention each word can receive. We experiment with various sparse and constrained attention transformations and propose a new one, constrained sparsemax, shown to be differentiable and sparse. Empirical evaluation is provided in three languages pairs.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Neural machine translation (NMT) emerged in the last few years as a very successful paradigm (Sutskever et al., 2014; Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017) ."
                    },
                    {
                        "id": 1,
                        "string": "While NMT is generally more fluent than previous statistical systems, adequacy is still a major concern (Koehn and Knowles, 2017) : common mistakes include dropping source words and repeating words in the generated translation."
                    },
                    {
                        "id": 2,
                        "string": "Previous work has attempted to mitigate this problem in various ways."
                    },
                    {
                        "id": 3,
                        "string": "Wu et al."
                    },
                    {
                        "id": 4,
                        "string": "(2016) incorporate coverage and length penalties during beam search-a simple yet limited solution, since it only affects the scores of translation hypotheses that are already in the beam."
                    },
                    {
                        "id": 5,
                        "string": "Other approaches involve architectural changes: providing coverage vectors to track the attention history (Mi et al., 2016; Tu et al., 2016) , using gating architectures and adaptive attention to control the amount of source context provided (Tu et al., 2017a; Li and Zhu, 2017) , or adding a reconstruction loss (Tu et al., 2017b) ."
                    },
                    {
                        "id": 6,
                        "string": "Feng et al."
                    },
                    {
                        "id": 7,
                        "string": "(2016) also use the notion of fertility * Work done during an internship at Unbabel."
                    },
                    {
                        "id": 8,
                        "string": "implicitly in their proposed model."
                    },
                    {
                        "id": 9,
                        "string": "Their fertility conditioned decoder uses a coverage vector and an extract gate which are incorporated in the decoding recurrent unit, increasing the number of parameters."
                    },
                    {
                        "id": 10,
                        "string": "In this paper, we propose a different solution that does not change the overall architecture, but only the attention transformation."
                    },
                    {
                        "id": 11,
                        "string": "Namely, we replace the traditional softmax by other recently proposed transformations that either promote attention sparsity (Martins and Astudillo, 2016) or upper bound the amount of attention a word can receive (Martins and Kreutzer, 2017) ."
                    },
                    {
                        "id": 12,
                        "string": "The bounds are determined by the fertility values of the source words."
                    },
                    {
                        "id": 13,
                        "string": "While these transformations have given encouraging results in various NLP problems, they have never been applied to NMT, to the best of our knowledge."
                    },
                    {
                        "id": 14,
                        "string": "Furthermore, we combine these two ideas and propose a novel attention transformation, constrained sparsemax, which produces both sparse and bounded attention weights, yielding a compact and interpretable set of alignments."
                    },
                    {
                        "id": 15,
                        "string": "While being in-between soft and hard alignments ( Figure 2 ), the constrained sparsemax transformation is end-to-end differentiable, hence amenable for training with gradient backpropagation."
                    },
                    {
                        "id": 16,
                        "string": "To sum up, our contributions are as follows: 1 • We formulate constrained sparsemax and derive efficient linear and sublinear-time algorithms for running forward and backward propagation."
                    },
                    {
                        "id": 17,
                        "string": "This transformation has two levels of sparsity: over time steps, and over the attended words at each step."
                    },
                    {
                        "id": 18,
                        "string": "• We provide a detailed empirical comparison of various attention transformations, including softmax (Bahdanau et al., 2014) , sparse-max (Martins and Astudillo, 2016) , constrained softmax (Martins and Kreutzer, 2017) , and our newly proposed constrained sparsemax."
                    },
                    {
                        "id": 19,
                        "string": "We provide error analysis including two new metrics targeted at detecting coverage problems."
                    },
                    {
                        "id": 20,
                        "string": "Preliminaries Our underlying model architecture is a standard attentional encoder-decoder (Bahdanau et al., 2014) ."
                    },
                    {
                        "id": 21,
                        "string": "Let x := x 1:J and y := y 1:T denote the source and target sentences, respectively."
                    },
                    {
                        "id": 22,
                        "string": "We use a Bi-LSTM encoder to represent the source words as a matrix H := [h 1 , ."
                    },
                    {
                        "id": 23,
                        "string": "."
                    },
                    {
                        "id": 24,
                        "string": "."
                    },
                    {
                        "id": 25,
                        "string": ", h J ] ∈ R 2D×J ."
                    },
                    {
                        "id": 26,
                        "string": "The conditional probability of the target sentence is given as p(y | x) := T t=1 p(y t | y 1:(t−1) , x), (1) where p(y t | y 1:(t−1) , x) is computed by a softmax output layer that receives a decoder state s t as input."
                    },
                    {
                        "id": 27,
                        "string": "This state is updated by an auto-regressive LSTM, s t = RNN(embed(y t−1 ), s t−1 , c t ), where c t is an input context vector."
                    },
                    {
                        "id": 28,
                        "string": "This vector is computed as c t := Hα t , where α t is a probability distribution that represents the attention over the source words, commonly obtained as α t = softmax(z t ), (2) where z t ∈ R J is a vector of scores."
                    },
                    {
                        "id": 29,
                        "string": "We follow Luong et al."
                    },
                    {
                        "id": 30,
                        "string": "(2015) and define z t,j := s t−1 W h j as a bilinear transformation of encoder and decoder states, where W is a model parameter."
                    },
                    {
                        "id": 31,
                        "string": "2 Sparse and Constrained Attention In this work, we consider alternatives to Eq."
                    },
                    {
                        "id": 32,
                        "string": "2."
                    },
                    {
                        "id": 33,
                        "string": "Since the softmax is strictly positive, it forces all words in the source to receive some probability mass in the resulting attention distribution, which can be wasteful."
                    },
                    {
                        "id": 34,
                        "string": "Moreover, it may happen that the decoder attends repeatedly to the same source words across time steps, causing repetitions in the generated translation, as Tu et al."
                    },
                    {
                        "id": 35,
                        "string": "(2016) observed."
                    },
                    {
                        "id": 36,
                        "string": "With this in mind, we replace Eq."
                    },
                    {
                        "id": 37,
                        "string": "2 by α t = ρ(z t , u t ), where ρ is a transformation that may depend both on the scores z t ∈ R J and on upper bounds u t ∈ R J that limit the amount of attention that each word can receive."
                    },
                    {
                        "id": 38,
                        "string": "We consider three alternatives to softmax, described next."
                    },
                    {
                        "id": 39,
                        "string": "Sparsemax."
                    },
                    {
                        "id": 40,
                        "string": "The sparsemax transformation (Martins and Astudillo, 2016 ) is defined as: sparsemax(z) := arg min α∈∆ J α − z 2 , (3) where ∆ J := {α ∈ R J | α ≥ 0, j α j = 1}."
                    },
                    {
                        "id": 41,
                        "string": "In words, it is the Euclidean projection of the scores z onto the probability simplex."
                    },
                    {
                        "id": 42,
                        "string": "These projections tend to hit the boundary of the simplex, yielding a sparse probability distribution."
                    },
                    {
                        "id": 43,
                        "string": "This allows the decoder to attend only to a few words in the source, assigning zero probability mass to all other words."
                    },
                    {
                        "id": 44,
                        "string": "Martins and Astudillo (2016) have shown that the sparsemax can be evaluated in O(J) time (same asymptotic cost as softmax) and gradient backpropagation takes sublinear time (faster than softmax), by exploiting the sparsity of the solution."
                    },
                    {
                        "id": 45,
                        "string": "Constrained softmax."
                    },
                    {
                        "id": 46,
                        "string": "The constrained softmax transformation was recently proposed by Martins and Kreutzer (2017) in the context of easy-first sequence tagging, being defined as follows: csoftmax(z; u) := arg min α∈∆ J KL(α softmax(z)) s.t."
                    },
                    {
                        "id": 47,
                        "string": "α ≤ u, (4) where u is a vector of upper bounds, and KL(."
                    },
                    {
                        "id": 48,
                        "string": ".)"
                    },
                    {
                        "id": 49,
                        "string": "is the Kullback-Leibler divergence."
                    },
                    {
                        "id": 50,
                        "string": "In other words, it returns the distribution closest to softmax(z) whose attention probabilities are bounded by u. Martins and Kreutzer (2017) have shown that this transformation can be evaluated in O(J log J) time and its gradients backpropagated in O(J) time."
                    },
                    {
                        "id": 51,
                        "string": "To use this transformation in the attention mechanism, we make use of the idea of fertility (Brown et al., 1993) ."
                    },
                    {
                        "id": 52,
                        "string": "Namely, let β t−1 := t−1 τ =1 α τ denote the cumulative attention that each source word has received up to time step t, and let f := (f j ) J j=1 be a vector containing fertility upper bounds for each source word."
                    },
                    {
                        "id": 53,
                        "string": "The attention at step t is computed as α t = csoftmax(z t , f − β t−1 )."
                    },
                    {
                        "id": 54,
                        "string": "(5) Intuitively, each source word j gets a credit of f j units of attention, which are consumed along the decoding process."
                    },
                    {
                        "id": 55,
                        "string": "If all the credit is exhausted, it receives zero attention from then on."
                    },
                    {
                        "id": 56,
                        "string": "Unlike the sparsemax transformation, which places sparse attention over the source words, the constrained softmax leads to sparsity over time steps."
                    },
                    {
                        "id": 57,
                        "string": "Figure 1 : Illustration of the different attention transformations for a toy example with three source words."
                    },
                    {
                        "id": 58,
                        "string": "We show the attention values on the probability simplex."
                    },
                    {
                        "id": 59,
                        "string": "In the first row we assume scores z = (1.2, 0.8, −0.2), and in the second and third rows z = (0.7, 0.9, 0.1) and z = (−0.2, 0.2, 0.9), respectively."
                    },
                    {
                        "id": 60,
                        "string": "For constrained softmax/sparsemax, we set unit fertilities to every word; for each row the upper bounds (represented as green dashed lines) are set as the difference between these fertilities and the cumulative attention each word has received."
                    },
                    {
                        "id": 61,
                        "string": "The last row illustrates the cumulative attention for the three words after all rounds."
                    },
                    {
                        "id": 62,
                        "string": "Constrained sparsemax."
                    },
                    {
                        "id": 63,
                        "string": "In this work, we propose a novel transformation which shares the two properties above: it provides both sparse and bounded probabilities."
                    },
                    {
                        "id": 64,
                        "string": "It is defined as: csparsemax(z; u) := arg min α∈∆ J α − z 2 s.t."
                    },
                    {
                        "id": 65,
                        "string": "α ≤ u."
                    },
                    {
                        "id": 66,
                        "string": "(6) The following result, whose detailed proof we include as supplementary material (Appendix A), is key for enabling the use of the constrained sparsemax transformation in neural networks."
                    },
                    {
                        "id": 67,
                        "string": "Proposition 1 Let α = csparsemax(z; u) be the solution of Eq."
                    },
                    {
                        "id": 68,
                        "string": "6, and define the sets A = {j ∈ [J] | 0 < α j < u j }, A L = {j ∈ [J] | α j = 0}, and A R = {j ∈ [J] | α j = u j }."
                    },
                    {
                        "id": 69,
                        "string": "Then: • Forward propagation."
                    },
                    {
                        "id": 70,
                        "string": "α can be computed in O(J) time with the algorithm of Pardalos and Kovoor (1990) (Alg."
                    },
                    {
                        "id": 71,
                        "string": "1 in Appendix A)."
                    },
                    {
                        "id": 72,
                        "string": "The solution takes the form α j = max{0, min{u j , z j − τ }}, where τ is a normalization constant."
                    },
                    {
                        "id": 73,
                        "string": "• Gradient backpropagation."
                    },
                    {
                        "id": 74,
                        "string": "Backpropagation takes sublinear time O(|A| + |A R |)."
                    },
                    {
                        "id": 75,
                        "string": "Let L(θ) be a loss function, dα = ∇ α L(θ) be the output gradient, and dz = ∇ z L(θ) and du = ∇ u L(θ) be the input gradients."
                    },
                    {
                        "id": 76,
                        "string": "Then, we have: dz j = 1(j ∈ A)(dα j − m) (7) du j = 1(j ∈ A R )(dα j − m), (8) where m = 1 |A| j∈A dα j ."
                    },
                    {
                        "id": 77,
                        "string": "Fertility Bounds We experiment with three ways of setting the fertility of the source words: CONSTANT, GUIDED, and PREDICTED."
                    },
                    {
                        "id": 78,
                        "string": "With CONSTANT, we set the fertilities of all source words to a fixed integer value f ."
                    },
                    {
                        "id": 79,
                        "string": "With GUIDED, we train a word aligner based on IBM Model 2 (we used fast align in our experiments, Dyer et al."
                    },
                    {
                        "id": 80,
                        "string": "(2013) ) and, for each word in the vocabulary, we set the fertilities to the maximal observed value in the training data (or 1 if no alignment was observed  with fertility upper bounds and the word aligner may miss some word pairs, we found it beneficial to add a constant to this number (1 in our experiments)."
                    },
                    {
                        "id": 81,
                        "string": "At test time, we use the expected fertilities according to our model."
                    },
                    {
                        "id": 82,
                        "string": "Sink token."
                    },
                    {
                        "id": 83,
                        "string": "We append an additional <SINK> token to the end of the source sentence, to which we assign unbounded fertility (f J+1 = ∞)."
                    },
                    {
                        "id": 84,
                        "string": "The token is akin to the null alignment in IBM models."
                    },
                    {
                        "id": 85,
                        "string": "The reason we add this token is the following: without the sink token, the length of the generated target sentence can never exceed j f j words if we use constrained softmax/sparsemax."
                    },
                    {
                        "id": 86,
                        "string": "At training time this may be problematic, since the target length is fixed and the problems in Eqs."
                    },
                    {
                        "id": 87,
                        "string": "4-6 can become infeasible."
                    },
                    {
                        "id": 88,
                        "string": "By adding the sink token we guarantee j f j = ∞, eliminating the problem."
                    },
                    {
                        "id": 89,
                        "string": "Exhaustion strategies."
                    },
                    {
                        "id": 90,
                        "string": "To avoid missing source words, we implemented a simple strategy to encourage more attention to words with larger credit: we redefine the pre-attention word scores as z t = z t + cu t , where c is a constant (c = 0.2 in our experiments)."
                    },
                    {
                        "id": 91,
                        "string": "This increases the score of words which have not yet exhausted their fertility (we may regard it as a \"soft\" lower bound in Eqs."
                    },
                    {
                        "id": 92,
                        "string": "4-6)."
                    },
                    {
                        "id": 93,
                        "string": "Experiments We evaluated our attention transformations on three language pairs."
                    },
                    {
                        "id": 94,
                        "string": "We focused on small datasets, as they are the most affected by coverage mistakes."
                    },
                    {
                        "id": 95,
                        "string": "We use the IWSLT 2014 corpus for DE-EN, the KFTT corpus for JA-EN (Neubig, 2011), and the WMT 2016 dataset for RO-EN."
                    },
                    {
                        "id": 96,
                        "string": "The training sets have 153,326, 329,882, and 560,767 parallel sentences, respectively."
                    },
                    {
                        "id": 97,
                        "string": "Our reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for our methods."
                    },
                    {
                        "id": 98,
                        "string": "We tokenized the data using the Moses scripts and preprocessed it with subword units (Sennrich et al., 2016) with a joint vocabulary and 32k merge operations."
                    },
                    {
                        "id": 99,
                        "string": "Our implementation was done on a fork of the OpenNMT-py toolkit (Klein et al., 2017) with the default parameters 4 ."
                    },
                    {
                        "id": 100,
                        "string": "We used a validation set to tune hyperparameters introduced by our model."
                    },
                    {
                        "id": 101,
                        "string": "Even though our attention implementations are CPU-based using NumPy (unlike the rest of the computation which is done on the GPU), we did not observe any noticeable slowdown using multiple devices."
                    },
                    {
                        "id": 102,
                        "string": "As baselines, we use softmax attention, as well as two recently proposed coverage models: • COVPENALTY (Wu et al., 2016, §7) ."
                    },
                    {
                        "id": 103,
                        "string": "At test time, the hypotheses in the beam are rescored with a global score that includes a length and a coverage penalty."
                    },
                    {
                        "id": 104,
                        "string": "5 We tuned α and β with grid search on {0.2k} 5 k=0 , as in Wu et al."
                    },
                    {
                        "id": 105,
                        "string": "(2016) ."
                    },
                    {
                        "id": 106,
                        "string": "• COVVECTOR (Tu et al., 2016) ."
                    },
                    {
                        "id": 107,
                        "string": "At training and test time, coverage vectors β and additional parameters v are used to condition the next attention step."
                    },
                    {
                        "id": 108,
                        "string": "We adapted this to our bilinear attention by defining z t,j = s t−1 (W h j + vβ t−1,j )."
                    },
                    {
                        "id": 109,
                        "string": "We also experimented combining the strategies above with the sparsemax transformation."
                    },
                    {
                        "id": 110,
                        "string": "As evaluation metrics, we report tokenized BLEU, METEOR (Denkowski and Lavie (2014) , as well as two new metrics that we describe next to account for over and under-translation."
                    },
                    {
                        "id": 111,
                        "string": "6 4 We used a 2-layer LSTM, embedding and hidden size of 500, dropout 0.3, and the SGD optimizer for 13 epochs."
                    },
                    {
                        "id": 112,
                        "string": "5 Since our sparse attention can become 0 for some words, we extended the original coverage penalty by adding another parameter , set to 0.1: cp(x; y) := β J j=1 log max{ , min{1, |y| t=1 αjt}}."
                    },
                    {
                        "id": 113,
                        "string": "6 Both evaluation metrics are included in our software package at www.github.com/Unbabel/ sparse constrained attention."
                    },
                    {
                        "id": 114,
                        "string": "REP-score: a new metric to count repetitions."
                    },
                    {
                        "id": 115,
                        "string": "Formally, given an n-gram s ∈ V n , let t(s) and r(s) be the its frequency in the model translation and reference."
                    },
                    {
                        "id": 116,
                        "string": "We first compute a sentence-level score σ(t, r) = λ 1 s∈V n , t(s)≥2 max{0, t(s) − r(s)} + λ 2 w∈V max{0, t(ww) − r(ww)}."
                    },
                    {
                        "id": 117,
                        "string": "The REP-score is then given by summing σ(t, r) over sentences, normalizing by the number of words on the reference corpus, and multiplying by 100."
                    },
                    {
                        "id": 118,
                        "string": "We used n = 2, λ 1 = 1 and λ 2 = 2."
                    },
                    {
                        "id": 119,
                        "string": "DROP-score: a new metric that accounts for possibly dropped words."
                    },
                    {
                        "id": 120,
                        "string": "To compute it, we first compute two sets of word alignments: from source to reference translation, and from source to the predicted translation."
                    },
                    {
                        "id": 121,
                        "string": "In our experiments, the alignments were obtained with fast align (Dyer et al., 2013) , trained on the training partition of the data."
                    },
                    {
                        "id": 122,
                        "string": "Then, the DROP-score computes the percentage of source words that aligned with some word from the reference translation, but not with any word from the predicted translation."
                    },
                    {
                        "id": 123,
                        "string": "Table 1 shows the results."
                    },
                    {
                        "id": 124,
                        "string": "We can see that on average, the sparse models (csparsemax as well as sparsemax combined with coverage models) have higher scores on both BLEU and METEOR."
                    },
                    {
                        "id": 125,
                        "string": "Generally, they also obtain better REP and DROP scores than csoftmax and softmax, which suggests that sparse attention alleviates the problem of coverage to some extent."
                    },
                    {
                        "id": 126,
                        "string": "To compare different fertility strategies, we ran experiments on the DE-EN for the csparsemax transformation ( Table 2) ."
                    },
                    {
                        "id": 127,
                        "string": "We see that the PRE-DICTED strategy outperforms the others both in terms of BLEU and METEOR, albeit slightly."
                    },
                    {
                        "id": 128,
                        "string": "Figure 2 shows examples of sentences for which the csparsemax fixed repetitions, along with the corresponding attention maps."
                    },
                    {
                        "id": 129,
                        "string": "We see that in the case of softmax repetitions, the decoder attends repeatedly to the same portion of the source sentence (the expression \"letzten hundert\" in the first sentence and \"regierung\" in the second sentence)."
                    },
                    {
                        "id": 130,
                        "string": "Not only did csparsemax avoid repetitions, but it also yielded a sparse set of alignments, as expected."
                    },
                    {
                        "id": 131,
                        "string": "Appendix B provides more examples of translations from all models in discussion."
                    },
                    {
                        "id": 132,
                        "string": "Conclusions We proposed a new approach to address the coverage problem in NMT, by replacing the softmax attentional transformation by sparse and constrained alternatives: sparsemax, constrained softmax, and the newly proposed constrained sparsemax."
                    },
                    {
                        "id": 133,
                        "string": "For the latter, we derived efficient forward and backward propagation algorithms."
                    },
                    {
                        "id": 134,
                        "string": "By incorporating a model for fertility prediction, our attention transformations led to sparse alignments, avoiding repeated words in the translation."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 19
                    },
                    {
                        "section": "Preliminaries",
                        "n": "2",
                        "start": 20,
                        "end": 30
                    },
                    {
                        "section": "Sparse and Constrained Attention",
                        "n": "3",
                        "start": 31,
                        "end": 76
                    },
                    {
                        "section": "Fertility Bounds",
                        "n": "4",
                        "start": 77,
                        "end": 92
                    },
                    {
                        "section": "Experiments",
                        "n": "5",
                        "start": 93,
                        "end": 130
                    },
                    {
                        "section": "Conclusions",
                        "n": "6",
                        "start": 131,
                        "end": 134
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1092-Figure1-1.png",
                        "caption": "Figure 1: Illustration of the different attention transformations for a toy example with three source words. We show the attention values on the probability simplex. In the first row we assume scores z = (1.2, 0.8,−0.2), and in the second and third rows z = (0.7, 0.9, 0.1) and z = (−0.2, 0.2, 0.9), respectively. For constrained softmax/sparsemax, we set unit fertilities to every word; for each row the upper bounds (represented as green dashed lines) are set as the difference between these fertilities and the cumulative attention each word has received. The last row illustrates the cumulative attention for the three words after all rounds.",
                        "page": 2,
                        "bbox": {
                            "x1": 108.47999999999999,
                            "x2": 497.76,
                            "y1": 77.28,
                            "y2": 264.48
                        }
                    },
                    {
                        "filename": "../figure/image/1092-Table2-1.png",
                        "caption": "Table 2: Impact of various fertility strategies for the csparsemax attention model (DE-EN).",
                        "page": 4,
                        "bbox": {
                            "x1": 100.8,
                            "x2": 261.12,
                            "y1": 204.95999999999998,
                            "y2": 264.0
                        }
                    },
                    {
                        "filename": "../figure/image/1092-Table1-1.png",
                        "caption": "Table 1: BLEU, METEOR, REP and DROP scores on the test sets for different attention transformations.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 525.12,
                            "y1": 61.44,
                            "y2": 164.16
                        }
                    },
                    {
                        "filename": "../figure/image/1092-Figure2-1.png",
                        "caption": "Figure 2: Attention maps for softmax and csparsemax for two DE-EN sentence pairs (white means zero attention). Repeated words are highlighted. The reference translations are “This is Moore’s law over the last hundred years” and “I am going to go ahead and select government.”",
                        "page": 3,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 281.28,
                            "y1": 63.839999999999996,
                            "y2": 300.47999999999996
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-32"
        },
        {
            "slides": {
                "0": {
                    "title": "Direct Transfer for NER",
                    "text": [
                        "Input: Unlabelled sentences in the target language encoded with cross-lingual embeddings",
                        "O B-PER O O B-LOC O O O B-PER O O O O O O O B-LOC O",
                        "kailangan namin ng mas maraming dugo sa Pagasanjan. k ailangan namin ng mas maraming d ugo sa Pagasanjan. kailangan namin ng mas maraming dugo sa Pagasanjan."
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "2": {
                    "title": "Voting and English are often poor",
                    "text": [
                        "ra a oe sd om e +79 = ae ye IP S me | op om ne 4 so",
                        "a ae | ye",
                        "= Tope En4MV Target Language"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "3": {
                    "title": "General findings",
                    "text": [
                        "Transfer strongest within language family",
                        "Asymmetry between use as source vs target language (Slavic-Cyr,",
                        "But lots of odd results & overall highly noisy"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": [
                        "figure/image/1095-Figure5-1.png"
                    ]
                },
                "4": {
                    "title": "Problem Statement",
                    "text": [
                        "N black-box source models",
                        "Unlabelled data in target language",
                        "Little or no labelled data (few shot and zero shot)",
                        "Good predictions in the target language"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "5": {
                    "title": "Model 1 Few Shot Ranking and Retraining RaRe",
                    "text": [
                        "Source Model AR F1AR",
                        "Source Model EN F1EN",
                        "Source Model VI F1VI",
                        "Source Model AR Dataset AR",
                        "20k unlabelled sents in Tagalog",
                        "Source Model EN Dataset EN",
                        "Source Model VI Dataset VI",
                        "N training sets in Tagalo g",
                        "Final training set, a mixture of distilled knowledge",
                        "1. Train an NER model on the mixture datasets.",
                        "2. Fine-tune on 100 gold samples.",
                        "Zero-shot variant: uniform sampling without fine-tuning"
                    ],
                    "page_nums": [
                        12,
                        13,
                        14,
                        15
                    ],
                    "images": []
                },
                "6": {
                    "title": "Hierarchical BiLSTM CRF as model",
                    "text": [
                        "Our method is independent of model choice."
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "7": {
                    "title": "Model 2 Zero Shot Transfer BEA",
                    "text": [
                        "What if no gold labels are available?",
                        "Treat gold labels Z as hidden variables",
                        "Estimate Z that best explains all the observed predictions",
                        "Re-estimate the quality of source models",
                        "Inspired by Kim and Ghahramani (2012)",
                        "True label of instance i",
                        "variational mean- field approx."
                    ],
                    "page_nums": [
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23
                    ],
                    "images": [
                        "figure/image/1095-Figure1-1.png"
                    ]
                },
                "8": {
                    "title": "Extensions to BEA",
                    "text": [
                        "After running BEA, estimate source model qualities and remove bottom k, run BEA again (BEAunsx2)",
                        "2. Few shot scenario:",
                        "Given 100 gold sentences, estimate source model confusion matrices, then run BEA (BEAsup)",
                        "3. Token vs Entity application"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "9": {
                    "title": "Benchmark BWET Xie et al 2018",
                    "text": [
                        "Single source annotation projection with bilingual dictionaries from cross-lingual word embeddings",
                        "Transfer english training data to German, Dutch, and",
                        "Train a transformer NER on the projected training data.",
                        "State-of-the-art onzero-shotNER transfer (orthogonal to this)"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "12": {
                    "title": "Word representation FastText MUSE",
                    "text": [
                        "Use fasttext monolingual wiki embeddings mapped to",
                        "English space using Identical Character Strings."
                    ],
                    "page_nums": [
                        35
                    ],
                    "images": []
                },
                "14": {
                    "title": "Effect of increasing source languages",
                    "text": [
                        "Methods robust to many varying quality source languages.",
                        "Even better with few-shot supervision."
                    ],
                    "page_nums": [
                        44
                    ],
                    "images": [
                        "figure/image/1095-Figure4-1.png"
                    ]
                },
                "15": {
                    "title": "Takeawauys I",
                    "text": [
                        "Transfer from multiple source languages helps because for many languages we dont know the best source language.",
                        "takeaway / noun [uk/aus/nz]: a meal cooked and bought at a shop or restaurant but taken somewhere else... Cambridge English Dictionary"
                    ],
                    "page_nums": [
                        45
                    ],
                    "images": []
                },
                "16": {
                    "title": "Takeawauys II",
                    "text": [
                        "With multiple source languages, you need to estimate their qualities because uniform voting doesnt perform well.",
                        "takeaway / noun [uk/aus/nz]: a meal cooked and bought at a shop or restaurant but taken somewhere else... Cambridge English Dictionary"
                    ],
                    "page_nums": [
                        46
                    ],
                    "images": []
                },
                "17": {
                    "title": "Takeaways III",
                    "text": [
                        "A small training set in target language helps, and can be done cheaply and quickly",
                        "takeaway / noun [uk/aus/nz]: a meal cooked and bought at a shop or restaurant but taken somewhere else... Cambridge English Dictionary"
                    ],
                    "page_nums": [
                        47
                    ],
                    "images": []
                },
                "18": {
                    "title": "Future Work",
                    "text": [
                        "Map all scripts to IPA or Roman alphabet",
                        "(good for shared embeddings and character-level transfer)",
                        "Can we estimate the quality of source models/languages for a specific target language based on language",
                        "Technique should apply beyond NER to other tasks."
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": []
                }
            },
            "paper_title": "Massively Multilingual Transfer for NER",
            "paper_id": "1095",
            "paper": {
                "title": "Massively Multilingual Transfer for NER",
                "abstract": "In cross-lingual transfer, NLP models over one or more source languages are applied to a lowresource target language. While most prior work has used a single source model or a few carefully selected models, here we consider a \"massive\" setting with many such models. This setting raises the problem of poor transfer, particularly from distant languages. We propose two techniques for modulating the transfer, suitable for zero-shot or few-shot learning, respectively. Evaluating on named entity recognition, we show that our techniques are much more effective than strong baselines, including standard ensembling, and our unsupervised method rivals oracle selection of the single best individual model. 1 * Both authors contributed equally to this work. 1  The code and the datasets will be made available at https://github.com/afshinrahimi/mmner.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Supervised learning remains king in natural language processing, with most tasks requiring large quantities of annotated corpora."
                    },
                    {
                        "id": 1,
                        "string": "The majority of the world's 6,000+ languages however have limited or no annotated text, and therefore much of the progress in NLP has yet to be realised widely."
                    },
                    {
                        "id": 2,
                        "string": "Cross-lingual transfer learning is a technique which can compensate for the dearth of data, by transferring knowledge from high-to lowresource languages, which has typically taken the form of annotation projection over parallel corpora or other multilingual resources (Yarowsky et al., 2001; Hwa et al., 2005) , or making use of transferable representations, such as phonetic transcriptions (Bharadwaj et al., 2016) , closely related languages (Cotterell and Duh, 2017) or bilingual dictionaries (Mayhew et al., 2017; Xie et al., 2018) ."
                    },
                    {
                        "id": 3,
                        "string": "Most methods proposed for cross-lingual transfer rely on a single source language, which limits the transferable knowledge to only one source."
                    },
                    {
                        "id": 4,
                        "string": "The target language might be similar to many source languages, on the grounds of the script, word order, loan words etc, and transfer would benefit from these diverse sources of information."
                    },
                    {
                        "id": 5,
                        "string": "There are a few exceptions, which use transfer from several languages, ranging from multitask learning (Duong et al., 2015; Ammar et al., 2016; Fang and Cohn, 2017) , and annotation projection from several languages (Täckström, 2012; Fang and Cohn, 2016; Plank and Agić, 2018) ."
                    },
                    {
                        "id": 6,
                        "string": "However, to the best of our knowledge, none of these approaches adequately account for the quality of transfer, but rather \"weight\" the contribution of each language uniformly."
                    },
                    {
                        "id": 7,
                        "string": "In this paper, we propose a novel method for zero-shot multilingual transfer, inspired by research in truth inference in crowd-sourcing, a related problem, in which the 'ground truth' must be inferred from the outputs of several unreliable annotators (Dawid and Skene, 1979) ."
                    },
                    {
                        "id": 8,
                        "string": "In this problem, the best approaches estimate each model's reliability, and their patterns of mistakes (Kim and Ghahramani, 2012) ."
                    },
                    {
                        "id": 9,
                        "string": "Our proposed model adapts these ideas to a multilingual transfer setting, whereby we learn the quality of transfer, and language-specific transfer errors, in order to infer the best labelling in the target language, as part of a Bayesian graphical model."
                    },
                    {
                        "id": 10,
                        "string": "The key insight is that while the majority of poor models make lots of mistakes, these mistakes are diverse, while the few good models consistently provide reliable input."
                    },
                    {
                        "id": 11,
                        "string": "This allows the model to infer which are the reliable models in an unsupervised manner, i.e., without explicit supervision in the target language, and thereby make accurate inferences despite the substantial noise."
                    },
                    {
                        "id": 12,
                        "string": "In the paper, we also consider a supervised setting, where a tiny annotated corpus is available in the target language."
                    },
                    {
                        "id": 13,
                        "string": "We present two methods to use this data: 1) estimate reliability parameters of the Bayesian model, and 2) explicit model selection and fine-tuning of a low-resource supervised model, thus allowing for more accurate modelling of language specific parameters, such as character embeddings, shown to be important in previous work (Xie et al., 2018) ."
                    },
                    {
                        "id": 14,
                        "string": "Experimenting on two NER corpora, one with as many as 41 languages, we show that single model transfer has highly variable performance, and uniform ensembling often substantially underperforms the single best model."
                    },
                    {
                        "id": 15,
                        "string": "In contrast, our zero-shot approach does much better, exceeding the performance of the single best model, and our few-shot supervised models result in further gains."
                    },
                    {
                        "id": 16,
                        "string": "Approach We frame the problem of multilingual transfer as follows."
                    },
                    {
                        "id": 17,
                        "string": "We assume a collection of H models, all trained in a high resource setting, denoted M h = {M h i , i ∈ (1, H)}."
                    },
                    {
                        "id": 18,
                        "string": "Each of these models are not well matched to our target data setting, for instance these may be trained on data from different domains, or on different languages, as we evaluate in our experiments, where we use crosslingual embeddings for model transfer."
                    },
                    {
                        "id": 19,
                        "string": "This is a problem of transfer learning, namely, how best we can use the H models for best results in the target language."
                    },
                    {
                        "id": 20,
                        "string": "2 Simple approaches in this setting include a) choosing a single model M ∈ M h , on the grounds of practicality, or the similarity between the model's native data condition and the target, and this model is used to label the target data; or b) allowing all models to 'vote' in an classifier ensemble, such that the most frequent outcome is selected as the ensemble output."
                    },
                    {
                        "id": 21,
                        "string": "Unfortunately neither of these approaches are very accurate in a cross-lingual transfer setting, as we show in §4, where we show a fixed source language model (en) dramatically underperforms compared to oracle selection of source language, and the same is true for uniform voting."
                    },
                    {
                        "id": 22,
                        "string": "Motivated by these findings, we propose novel methods for learning."
                    },
                    {
                        "id": 23,
                        "string": "For the \"zero-shot\" setting where no labelled data is available in the target, we propose the BEA uns method inspired by work 2 We limit our attention to transfer in a 'black-box' setting, that is, given predictive models, but not assuming access to their data, nor their implementation."
                    },
                    {
                        "id": 24,
                        "string": "This is the most flexible scenario, as it allows for application to settings with closed APIs, and private datasets."
                    },
                    {
                        "id": 25,
                        "string": "It does, however, preclude multitask learning, as the source models are assumed to be static."
                    },
                    {
                        "id": 26,
                        "string": "in truth inference from crowd-sourced datasets or diverse classifiers ( §2.1)."
                    },
                    {
                        "id": 27,
                        "string": "To handle the \"few-shot\" case §2.2 presents a rival supervised technique, RaRe, based on using very limited annotations in the target language for model selection and classifier fine-tuning."
                    },
                    {
                        "id": 28,
                        "string": "V (j) π z i y ij β α i = 1 ."
                    },
                    {
                        "id": 29,
                        "string": "."
                    },
                    {
                        "id": 30,
                        "string": "."
                    },
                    {
                        "id": 31,
                        "string": "N j = 1 ."
                    },
                    {
                        "id": 32,
                        "string": "."
                    },
                    {
                        "id": 33,
                        "string": "."
                    },
                    {
                        "id": 34,
                        "string": "H Zero-Shot Transfer One way to improve the performance of the ensemble system is to select a subset of component models carefully, or more generally, learn a non-uniform weighting function."
                    },
                    {
                        "id": 35,
                        "string": "Some models do much better than others, on their own, so it stands to reason that identifying these handful of models will give rise to better ensemble performance."
                    },
                    {
                        "id": 36,
                        "string": "How might we proceed to learn the relative quality of models in the setting where no annotations are available in the target language?"
                    },
                    {
                        "id": 37,
                        "string": "This is a classic unsupervised inference problem, for which we propose a probabilistic graphical model, inspired by Kim and Ghahramani (2012) ."
                    },
                    {
                        "id": 38,
                        "string": "We develop a generative model, illustrated in Figure 1 , of the transfer models' predictions, y ij , where i ∈ [1, N ] is an instance (a token or an entity span), and j ∈ [1, H] indexes a transfer model."
                    },
                    {
                        "id": 39,
                        "string": "The generative process assumes a 'true' label, z i ∈ [1, K], which is corrupted by each transfer model, in producing the prediction, y ij ."
                    },
                    {
                        "id": 40,
                        "string": "The corruption process is described by P (y ij = l|z i = k, V (j) ) = V (j) kl , where V (j) ∈ R K×K is the confusion matrix specific to a transfer model."
                    },
                    {
                        "id": 41,
                        "string": "To complete the story, the confusion matrices are drawn from vague row-wise independent Dirichlet priors, with a parameter α = 1, and the true labels are governed by a Dirichlet prior, π, which is drawn from an uninformative Dirichlet distribution with a parameter β = 1."
                    },
                    {
                        "id": 42,
                        "string": "This generative model is referred to as BEA."
                    },
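                    {
                        "id": "ex-bea-generative",
                        "string": "[Editorial sketch, not part of the paper] A minimal NumPy simulation of the BEA generative story above, to make the corruption process concrete; the function name and defaults are illustrative assumptions:\nimport numpy as np\n\ndef simulate_bea(N=1000, H=40, K=4, alpha=1.0, beta=1.0, seed=0):\n    # pi ~ Dir(beta): prior over the K true labels.\n    # V[j, k] ~ Dir(alpha): row k of transfer model j's confusion matrix.\n    rng = np.random.default_rng(seed)\n    pi = rng.dirichlet(beta * np.ones(K))\n    V = rng.dirichlet(alpha * np.ones(K), size=(H, K))\n    z = rng.choice(K, size=N, p=pi)          # latent 'true' labels z_i\n    y = np.empty((N, H), dtype=int)          # observed predictions y_ij\n    for i in range(N):\n        for j in range(H):\n            # model j corrupts z_i through its confusion row V[j, z_i]\n            y[i, j] = rng.choice(K, p=V[j, z[i]])\n    return z, y, pi, V"
                    },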
                    {
                        "id": 43,
                        "string": "Inference under the BEA model involves ex-plaining the observed predictions Y in the most efficient way."
                    },
                    {
                        "id": 44,
                        "string": "Where several transfer models have identical predictions, k, on an instance, this can be explained by letting z i = k, 3 and the confusion matrices of those transfer models assigning high probability to V (j) kk ."
                    },
                    {
                        "id": 45,
                        "string": "Other, less reliable, transfer models will have divergent predictions, which are less likely to be in agreement, or else are heavily biased towards a particular class."
                    },
                    {
                        "id": 46,
                        "string": "Accordingly, the BEA model can better explain these predictions through label confusion, using the off-diagonal elements of the confusion matrix."
                    },
                    {
                        "id": 47,
                        "string": "Aggregated over a corpus of instances, the BEA model can learn to differentiate between those reliable transfer models, with high V (j) kk and those less reliable ones, with high V (j) kl , l = k. This procedure applies perlabel, and thus the 'reliability' of a transfer model is with respect to a specific label, and may differ between classes."
                    },
                    {
                        "id": 48,
                        "string": "This helps in the NER setting where many poor transfer models have excellent accuracy for the outside label, but considerably worse performance for entity labels."
                    },
                    {
                        "id": 49,
                        "string": "For inference, we use mean-field variational Bayes (Jordan, 1998) , which learns a variational distribution, q(Z, V, π) to optimise the evidence lower bound (ELBO), log P (Y |α, β) ≥ E q(Z,V,π) log P (Y, Z, V, π|α, β) q(Z, V, π) assuming a fully factorised variational distribution, q(Z, V, π) = q(Z)q(V )q(π)."
                    },
                    {
                        "id": 50,
                        "string": "This gives rise to an iterative learning algorithm with update rules: E q log π k (1a) =ψ β + i q(z i = k) − ψ (Kβ + N ) E q log V (j) kl (1b) =ψ α + i q(z i = k)1[y ij = l] − ψ Kα + i q(z i = k) q(z i = k) ∝ exp    E q log π k + j E q log V (j) ky ij    (2) 3 Although there is no explicit breaking of the symmetry of the model, we initialise inference using the majority vote, which results in a bias towards this solution."
                    },
                    {
                        "id": 51,
                        "string": "where ψ is the digamma function, defined as the logarithmic derivative of the gamma function."
                    },
                    {
                        "id": 52,
                        "string": "The sets of rules (1) and (2) are applied alternately, to update the values of E q log π k , E q log V (j) kl , and q(z ij = k) respectively."
                    },
                    {
                        "id": 53,
                        "string": "This repeats until convergence, when the difference in the ELBO between two iterations is smaller than a threshold."
                    },
                    {
                        "id": 54,
                        "string": "The final prediction of the model is based on q(Z), using the maximum a posteriori label z i = arg max z q(z i = z)."
                    },
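                    {
                        "id": "ex-bea-vb",
                        "string": "[Editorial sketch, not part of the paper] A runnable mean-field VB implementation of updates (1a), (1b) and (2), initialised from the majority vote as in footnote 3; names are assumptions, and convergence is checked on q rather than the ELBO for brevity:\nimport numpy as np\nfrom scipy.special import digamma\n\ndef bea_vb(y, K, alpha=1.0, beta=1.0, iters=50, tol=1e-6):\n    # y: (N, H) array of label indices predicted by the H transfer models.\n    N, H = y.shape\n    onehot = np.eye(K)[y]                    # (N, H, K): indicator 1[y_ij = l]\n    q = onehot.sum(axis=1)                   # majority-vote initialisation of q(z)\n    q = q / q.sum(axis=1, keepdims=True)\n    for _ in range(iters):\n        # (1a): E_q[log pi_k] = psi(beta + sum_i q(z_i=k)) - psi(K*beta + N)\n        Elog_pi = digamma(beta + q.sum(axis=0)) - digamma(K * beta + N)\n        # (1b): expected confusion counts c[j,k,l] = sum_i q(z_i=k) 1[y_ij=l]\n        counts = np.einsum('ik,ijl->jkl', q, onehot)\n        Elog_V = digamma(alpha + counts) - digamma(K * alpha + q.sum(axis=0))[None, :, None]\n        # (2): q(z_i=k) proportional to exp(E[log pi_k] + sum_j E[log V^(j)_{k, y_ij}])\n        logq = Elog_pi[None, :] + np.einsum('ijl,jkl->ik', onehot, Elog_V)\n        new_q = np.exp(logq - logq.max(axis=1, keepdims=True))\n        new_q = new_q / new_q.sum(axis=1, keepdims=True)\n        if np.abs(new_q - q).max() < tol:\n            q = new_q\n            break\n        q = new_q\n    return q                                 # predict with q.argmax(axis=1)"
                    },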
                    {
                        "id": 55,
                        "string": "This method is referred to as BEA uns ."
                    },
                    {
                        "id": 56,
                        "string": "In our NER transfer task, classifiers are diverse in their F1 scores ranging from almost 0 to around 80, motivating spammer removal (Raykar and Yu, 2012) to filter out the worst of the transfer models."
                    },
                    {
                        "id": 57,
                        "string": "We adopt a simple strategy that first estimates the confusion matrices for all transfer models on all labels, then ranks them based on their mean recall on different entity categories (elements on the diagonals of their confusion matrices), and then runs the BEA model again using only labels from the top k transfer models only."
                    },
                    {
                        "id": 58,
                        "string": "We call this method BEA uns×2 and its results are reported in §4."
                    },
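                    {
                        "id": "ex-bea-unsx2",
                        "string": "[Editorial sketch, not part of the paper] One plausible reading of the BEA_uns×2 two-pass procedure, reusing bea_vb from the sketch above; the outside-label index and top_k default are assumptions:\nimport numpy as np\n\ndef bea_uns_x2(y, K, top_k=10, outside=0, **kw):\n    # Pass 1: fit BEA on all models and recover expected confusion counts.\n    q = bea_vb(y, K, **kw)\n    onehot = np.eye(K)[y]\n    counts = np.einsum('ik,ijl->jkl', q, onehot)         # (H, K, K)\n    recall = counts / counts.sum(axis=2, keepdims=True)  # row-normalised\n    # Rank models by mean diagonal recall over entity classes (skip 'O').\n    diag = np.stack([np.diag(r) for r in recall])        # (H, K)\n    score = np.delete(diag, outside, axis=1).mean(axis=1)\n    keep = np.argsort(score)[::-1][:top_k]\n    # Pass 2: rerun BEA using labels from the top-k transfer models only.\n    return bea_vb(y[:, keep], K, **kw)"
                    },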
                    {
                        "id": 59,
                        "string": "Token versus Entity Granularity Our proposed aggregation method in §2.1 is based on an assumption that the true annotations are independent from each other, which simplifies the model but may generate undesired results."
                    },
                    {
                        "id": 60,
                        "string": "That is, entities predicted by different transfer models could be mixed, resulting in labels inconsistent with the BIO scheme."
                    },
                    {
                        "id": 61,
                        "string": "Table 1 shows an example, where a sentence with 4 words is annotated by 5 transfer models with 4 different predictions, among which at most one is correct as they overlap."
                    },
                    {
                        "id": 62,
                        "string": "However, the aggregated result in the token view is a mixture of two predictions, which is supported by no transfer models."
                    },
                    {
                        "id": 63,
                        "string": "To deal with this problem, we consider aggre-gating the predictions in the entity view."
                    },
                    {
                        "id": 64,
                        "string": "As shown in Table 1 , we convert the predictions for tokens to predictions for ranges, aggregate labels for every range, and then resolve remaining conflicts."
                    },
                    {
                        "id": 65,
                        "string": "A prediction is ignored if it conflicts with another one with higher probability."
                    },
                    {
                        "id": 66,
                        "string": "By using this greedy strategy, we can solve the conflicts raised in entitylevel aggregation."
                    },
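                    {
                        "id": "ex-entity-agg",
                        "string": "[Editorial sketch, not part of the paper] A possible implementation of the entity-view aggregation and greedy conflict resolution of §2.1.1: token BIO tags are converted to spans, spans are voted on, and any span overlapping a higher-scoring accepted span is discarded; function names are illustrative:\nfrom collections import Counter\n\ndef bio_to_spans(tags):\n    # Convert BIO tags, e.g. ['B-PER', 'I-PER', 'O'], into (start, end, type)\n    # spans with exclusive end; stray I- tags without a B- are ignored.\n    spans, start, typ = [], None, None\n    for i, t in enumerate(list(tags) + ['O']):   # sentinel closes a final span\n        if start is not None and not t.startswith('I-'):\n            spans.append((start, i, typ))\n            start, typ = None, None\n        if t.startswith('B-'):\n            start, typ = i, t[2:]\n    return spans\n\ndef aggregate_entities(all_tags, weights=None):\n    # Vote over spans proposed by the H transfer models, then greedily accept\n    # spans by descending score, skipping any that overlap an accepted span.\n    votes = Counter()\n    for h, tags in enumerate(all_tags):\n        w = 1.0 if weights is None else weights[h]\n        for span in bio_to_spans(tags):\n            votes[span] += w\n    accepted = []\n    for (s, e, typ), _score in votes.most_common():\n        if all(e <= s2 or s >= e2 for s2, e2, _ in accepted):\n            accepted.append((s, e, typ))\n    return sorted(accepted)"
                    },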
                    {
                        "id": 67,
                        "string": "We use superscripts tok and ent to denote token-level and entity-level aggregations, i.e."
                    },
                    {
                        "id": 68,
                        "string": "BEA tok uns and BEA ent uns ."
                    },
                    {
                        "id": 69,
                        "string": "Few-Shot Transfer Until now, we have assumed no access to annotations in the target language."
                    },
                    {
                        "id": 70,
                        "string": "However, when some labelled text is available, how might this best be used?"
                    },
                    {
                        "id": 71,
                        "string": "In our experimental setting, we assume a modest set of 100 labelled sentences, in keeping with a low-resource setting (Garrette and Baldridge, 2013) ."
                    },
                    {
                        "id": 72,
                        "string": "4 We propose two models BEA sup and RaRe in this setting."
                    },
                    {
                        "id": 73,
                        "string": "Supervising BEA (BEA sup ) One possibility is to use the labelled data to find the posterior for the parameters V (j) and π of the Bayesian model described in §2.1."
                    },
                    {
                        "id": 74,
                        "string": "Let n k be the number of instances in the labelled data whose true label is k, and n jkl the number of instances whose true label is k and classifier j labels them as l. Then the quantities in Equation (1) can be calculated as E log π k =ψ(n k ) − ψ(N ) E log v jkl =ψ(n jkl ) − ψ l n jkl ."
                    },
                    {
                        "id": 75,
                        "string": "These are used in Equation (2) for inference on the test set."
                    },
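                    {
                        "id": "ex-bea-sup",
                        "string": "[Editorial sketch, not part of the paper] The closed-form BEA_sup quantities: counts n_k and n_jkl from the small labelled set plugged into the digamma expressions above; adding the prior alpha to the counts is an assumption made here so that zero counts do not yield digamma(0):\nimport numpy as np\nfrom scipy.special import digamma\n\ndef bea_sup_params(z_gold, y_gold, K, alpha=1.0):\n    # z_gold: (N,) gold labels; y_gold: (N, H) transfer model predictions.\n    N, H = y_gold.shape\n    n_k = np.bincount(z_gold, minlength=K).astype(float)\n    n_jkl = np.zeros((H, K, K))\n    for i in range(N):\n        for j in range(H):\n            n_jkl[j, z_gold[i], y_gold[i, j]] += 1.0\n    Elog_pi = digamma(n_k + alpha) - digamma(N + K * alpha)\n    Elog_V = digamma(n_jkl + alpha) - digamma(n_jkl.sum(axis=2, keepdims=True) + K * alpha)\n    return Elog_pi, Elog_V   # plug into Equation (2) to label the test set"
                    },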
                    {
                        "id": 76,
                        "string": "We refer to this setting as BEA sup ."
                    },
                    {
                        "id": 77,
                        "string": "Ranking and Retraining (RaRe) We also propose an alternative way of exploiting the limited annotations, RaRe, which first ranks the systems, and then uses the top ranked models' outputs alongside the gold data to retrain a model on the target language."
                    },
                    {
                        "id": 78,
                        "string": "The motivation is that the above technique is agnostic to the input text, and therefore is unable to exploit situations where regularities occur, such as common words or character patterns that are indicative of specific class labels, including names, titles, etc."
                    },
                    {
                        "id": 79,
                        "string": "These signals are unlikely to be consistently captured by crosslingual transfer."
                    },
                    {
                        "id": 80,
                        "string": "Training a model on the target language with a character encoder component, can distil the signal that are captured by the transfer models, while relating this towards generalisable lexical and structural evidence in the target language."
                    },
                    {
                        "id": 81,
                        "string": "This on its own will not be enough, as many tokens will be consistently misclassified by most or all of the transfer models, and for this reason we also perform model fine-tuning using the supervised data."
                    },
                    {
                        "id": 82,
                        "string": "The ranking step in RaRe proceeds by evaluating each of the H transfer models on the target gold set, to produce scores s h (using the F 1 score)."
                    },
                    {
                        "id": 83,
                        "string": "The scores are then truncated to the top k ≤ H values, such that s h = 0 for those systems h not ranked in the top k, and normalised ω h = s h k j=1 s j ."
                    },
                    {
                        "id": 84,
                        "string": "The range of scores are quite wide, covering 0.00 − 0.81 (see Figure 2 ), and accordingly this simple normalisation conveys a strong bias towards the top scoring transfer systems."
                    },
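                    {
                        "id": "ex-rare-weights",
                        "string": "[Editorial sketch, not part of the paper] The RaRe weight computation just described: scores s_h truncated to the top k and normalised into mixing weights ω; tie-breaking at the cutoff is arbitrary here, which the paper does not specify:\nimport numpy as np\n\ndef rare_weights(scores, k=10):\n    # scores: per-model F1 on the small gold set; returns omega with\n    # omega_h = s_h / (sum of the top-k scores), and 0 outside the top-k.\n    s = np.asarray(scores, dtype=float)\n    omega = np.zeros_like(s)\n    top = np.argsort(s)[::-1][:k]\n    omega[top] = s[top]\n    return omega / omega.sum()"
                    },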
                    {
                        "id": 85,
                        "string": "The next step is a distillation step, where a model is trained on a large unannotated dataset in the target language, such that the model predictions match those of a weighted mixture of transfer models, using ω = (ω 1 , ."
                    },
                    {
                        "id": 86,
                        "string": "."
                    },
                    {
                        "id": 87,
                        "string": "."
                    },
                    {
                        "id": 88,
                        "string": ", ω H ) as the mixing weights."
                    },
                    {
                        "id": 89,
                        "string": "This process is implemented as minibatch scheduling, where the labels for each minibatch are randomly sampled from transfer model h with probability ω h ."
                    },
                    {
                        "id": 90,
                        "string": "5 This is repeated over the course of several epochs of training."
                    },
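                    {
                        "id": "ex-scheduling",
                        "string": "[Editorial sketch, not part of the paper] Minibatch scheduling for the distillation step: each minibatch's labels are drawn from one transfer model sampled with probability ω_h; the actual training step on the BiLSTM-CRF is elided, and all names are illustrative:\nimport numpy as np\n\ndef scheduled_batches(batches, silver_labels, omega, epochs=5, seed=0):\n    # batches: unannotated target-language minibatches; silver_labels[h][b]\n    # holds transfer model h's predictions for minibatch b.\n    rng = np.random.default_rng(seed)\n    for _ in range(epochs):\n        for b, batch in enumerate(batches):\n            h = rng.choice(len(omega), p=omega)   # sample a teacher per batch\n            yield batch, silver_labels[h][b]      # train the student on (x, silver y)"
                    },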
                    {
                        "id": 91,
                        "string": "Finally, the model is fine-tuned using the small supervised dataset, in order to correct for phenomena that are not captured from model transfer, particularly character level information which is not likely to transfer well for all but the most closely related languages."
                    },
                    {
                        "id": 92,
                        "string": "Fine-tuning proceeds for a fixed number of epochs on the supervised dataset, to limit overtraining of richly parameterise models on a tiny dataset."
                    },
                    {
                        "id": 93,
                        "string": "Note that in all stages, the same supervised dataset is used, both in ranking and fine-tuning, and moreover, we do not use a development set."
                    },
                    {
                        "id": 94,
                        "string": "This is not ideal, and generalisation performance would likely improve were we to use additional annotated data, however our meagre use of data is designed for a low resource setting where labelled data is at a premium."
                    },
                    {
                        "id": 95,
                        "string": "Experiments Data Our primarily evaluation is over a subset of the Wikiann NER corpus (Pan et al., 2017) , using 41 out of 282 languages, where the langauges were chosen based on their overlap with multilingual word embedding resources from Lample et al."
                    },
                    {
                        "id": 96,
                        "string": "(2018) ."
                    },
                    {
                        "id": 97,
                        "string": "6 The NER taggs are in IOB2 format comprising of LOC, PER, and ORG."
                    },
                    {
                        "id": 98,
                        "string": "The distribution of labels is highly skewed, so we created balanced datasets, and partitioned into training, development, and test sets, details of which are in the Appendix."
                    },
                    {
                        "id": 99,
                        "string": "For comparison with prior work, we also evaluate on the CoNLL 2002 and 2003 datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) , which we discuss further in §4."
                    },
                    {
                        "id": 100,
                        "string": "For language-independent word embedding features we use fastText 300 dimensional Wikipedia embeddings (Bojanowski et al., 2017) , and map them to the English embedding space using character-identical words as the seed for the Procrustes rotation method for learning bingual embedding spaces from MUSE (Lample et al., 2018)."
                    },
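                    {
                        "id": "ex-procrustes",
                        "string": "[Editorial sketch, not part of the paper] The Procrustes mapping seeded with character-identical words; the vocab arguments are assumed to be word-to-row dictionaries, and this mirrors the standard closed-form solution used by MUSE rather than the authors' exact code:\nimport numpy as np\n\ndef procrustes_map(src_emb, tgt_emb, src_vocab, tgt_vocab):\n    # Seed pairs: words spelled identically in both vocabularies.\n    seeds = [w for w in src_vocab if w in tgt_vocab]\n    X = np.stack([src_emb[src_vocab[w]] for w in seeds])   # source vectors\n    Y = np.stack([tgt_emb[tgt_vocab[w]] for w in seeds])   # target (en) vectors\n    # Orthogonal Procrustes: W = U V^T with U, S, V^T = SVD(Y^T X),\n    # minimising ||W x - y|| over orthogonal W.\n    U, _, Vt = np.linalg.svd(Y.T @ X)\n    return U @ Vt   # map a source vector v into the target space as W @ v"
                    },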
                    {
                        "id": 101,
                        "string": "7 Similar to Xie et al."
                    },
                    {
                        "id": 102,
                        "string": "(2018) we don't rely on a bilingual dictionary, so the method can be easily applied to other languages."
                    },
                    {
                        "id": 103,
                        "string": "Model Variations As the sequential tagger, we use a BiLSTM-CRF (Lample et al., 2016) , which has been shown to result in state-of-the-art results in high resource settings (Ma and Hovy, 2016; Lample et al., 2016) ."
                    },
                    {
                        "id": 104,
                        "string": "This model includes both word embeddings (for which we used fixed cross-lingual embeddings) and character embeddings, to form a parameterised potential function in a linear chain conditional random field."
                    },
                    {
                        "id": 105,
                        "string": "With the exception of batch size and learning rate which were tuned (details in Appendix), we kept the architecture and the hyperparameters the same as the published code."
                    },
                    {
                        "id": 106,
                        "string": "8 6 With ISO 639-1 codes: af, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, he, hi, hr, hu, id, it, lt, lv, mk, ms, nl, no, pl, pt, ro, ru, sk, sl, sq, sv, ta, tl, tr, uk and vi."
                    },
                    {
                        "id": 107,
                        "string": "7 We also experimented with other bilingual embedding methods, including: supervised learning over bilingual dictionaries, which barely affected system performance; and pure-unsupervised methods (Lample et al., 2018; Artetxe et al., 2018) , which performed substantially worse."
                    },
                    {
                        "id": 108,
                        "string": "For this reason we use identical word type seeding, which is preferred as it imposes no additional supervision requirement."
                    },
                    {
                        "id": 109,
                        "string": "8 https://github.com/guillaumegenthial/ sequence_tagging We trained models on all 41 languages in both high-resource (HSup) and naive supervised lowresource (LSup) settings, where HSup pre-trained models were used for transfer in a leave-one-out setting, i.e., taking the predictions of 40 models into a single target language."
                    },
                    {
                        "id": 110,
                        "string": "The same BiLSTM-CRF is also used for RaRe."
                    },
                    {
                        "id": 111,
                        "string": "To avoid overfitting, we use early stopping based on a validation set for the HSup, and LSup baselines."
                    },
                    {
                        "id": 112,
                        "string": "For RaRe, given that the model is already trained on noisy data, we stop fine-tuning after only 5 iterations, chosen based on the performance for the first four languages."
                    },
                    {
                        "id": 113,
                        "string": "We compare the supervised HSup and LSup monolingual baselines with our proposed transfer models: MV uniform ensemble, a.k.a."
                    },
                    {
                        "id": 114,
                        "string": "\"majority vote\"; BEA uns×2 , BEA uns unsupervised aggregation models, applied to entities or tokens (see §2.1); BEA sup supervised estimation of BEA Results We report the results for single source direct transfer, and then show that our proposed multilingual methods outperform majority voting."
                    },
                    {
                        "id": 115,
                        "string": "Then we analyse the choice of source languages, and how it affects transfer."
                    },
                    {
                        "id": 116,
                        "string": "10 Finally we report results on CoNLL NER datasets."
                    },
                    {
                        "id": 117,
                        "string": "9 Because BWET uses identical characters for bilingual dictionary induction, we observed many English loan words in the target language mapped to the same word in the induced bilingual dictionaries."
                    },
                    {
                        "id": 118,
                        "string": "Filtering such dictionary items might improve BWET."
                    },
                    {
                        "id": 119,
                        "string": "10 For detailed results see  Direct Transfer The first research question we consider is the utility of direct transfer, and the simple majority vote ensembling method."
                    },
                    {
                        "id": 120,
                        "string": "As shown in Figure 2 , using a single model for direct transfer (English: en) is often a terrible choice."
                    },
                    {
                        "id": 121,
                        "string": "The oracle choice of source language model does much better, however it is not always a closely related language (e.g., Italian: it does best for Indonesian: id, despite the target being closer to Malay: ms)."
                    },
                    {
                        "id": 122,
                        "string": "Note the collection of Cyrillic languages (bg, mk, uk) where the oracle is substantially better than the majority vote, which is likely due to script differences."
                    },
                    {
                        "id": 123,
                        "string": "The role of script appears to be more important than language family, as seen for Slavic languages where direct transfer works well between between pairs languages using the same alphabet (Cyrillic versus Latin), but much more poorly when there is an alphabet mismatch."
                    },
                    {
                        "id": 124,
                        "string": "11 The transfer relationship is not symmetric e.g., Persian: fa does best for Arabic: ar, but German: de does best for Persian."
                    },
                    {
                        "id": 125,
                        "string": "Figure 2 also shows that ensemble voting is well below the oracle best language, which is likely to be a result of overall high error rates coupled with error correlation between models, and little can be gained from ensembling."
                    },
                    {
                        "id": 126,
                        "string": "Multilingual Transfer We report the results for the proposed low-resource supervised models (RaRe and BEA sup ), and unsupervised models (BEA uns and BEA uns×2 ), summarised as an average over the 41 languages in Figure 3 (see Appendix A for the full table of results)."
                    },
                    {
                        "id": 127,
                        "string": "The figure compares against high-and low-resource supervised baselines (HSup and LSup, respectively), and BWET."
                    },
                    {
                        "id": 128,
                        "string": "The best performance is achieved with a high supervision (HSup, F 1 = 89.2), while very limited supervision (LSup) results in a considerably lower F 1 of 62.1."
                    },
                    {
                        "id": 129,
                        "string": "The results for MV tok show that uniform ensembling of multiple source models is even worse, by about 5 points."
                    },
                    {
                        "id": 130,
                        "string": "Unsupervised zero-shot learning dramatically improves upon MV tok , and BEA ent uns outperforms BEA tok uns , showing the effectiveness of inference over entities rather than tokens."
                    },
                    {
                        "id": 131,
                        "string": "It is clear that having access to limited annotation in the target language makes a substantial difference in BEA ent sup and RaRe with F 1 of 74.8 and 77.4, respectively."
                    },
                    {
                        "id": 132,
                        "string": "Further analysis show that majority voting works reasonably well for Romance and Germanic languages, which are well represented in the dataset, but fails miserably compared to single best for Slavic languages (e.g."
                    },
                    {
                        "id": 133,
                        "string": "ru, uk, bg) where there are only a few related languages."
                    },
                    {
                        "id": 134,
                        "string": "For most of the isolated languages (ar, fa, he, vi, ta), explicitly training a model in RaRe outperforms BEA ent sup , showing that relying only on aggregation of annotated data has limitations, in that it cannot exploit character and structural features."
                    },
                    {
                        "id": 135,
                        "string": "Choice of Source Languages An important question is how the other models, particularly the unsupervised variants, are affected by the number and choice of sources languages."
                    },
                    {
                        "id": 136,
                        "string": "Figure 4 charts the performance of MV, BEA, and RaRe against the number of source models, comparing the use of ideal or realistic selection methods to attempt to find the best source models."
                    },
                    {
                        "id": 137,
                        "string": "MV ent , BEA ent sup , and RaRe use a small labeled dataset to rank the source models."
                    },
                    {
                        "id": 138,
                        "string": "BEA ent uns, oracle has the access to the perfect ranking of source models based on their real F 1 on the test set."
                    },
                    {
                        "id": 139,
                        "string": "BEA uns×2 is completely unsupervised in that it uses its own estimates to rank all source models."
                    },
                    {
                        "id": 140,
                        "string": "MV doesn't show any benefit with more than 3 source models."
                    },
                    {
                        "id": 141,
                        "string": "12 In contrast, BEA and RaRe con- 12 The sawtooth pattern arises from the increased numbers of ties (broken randomly) with even numbers of inputs."
                    },
                    {
                        "id": 142,
                        "string": "tinue to improve with up to 10 languages."
                    },
                    {
                        "id": 143,
                        "string": "We show that BEA in two realistic scenarios (unsupervised: BEA ent uns×2 , and supervised: BEA ent sup ) is highly effective at discriminating between good and bad source models, and thus filtering out the bad models gives the best results."
                    },
                    {
                        "id": 144,
                        "string": "The BEA ent uns×2 curve shows the effect of filtering using purely unsupervised signal, which has a positive, albeit mild effect on performance."
                    },
                    {
                        "id": 145,
                        "string": "In BEA ent uns, oracle although the source model ranking is perfect, it narrowly outperforms BEA."
                    },
                    {
                        "id": 146,
                        "string": "Note also that neither of the BEA curves show evidence of the sawtooth pattern, i.e., they largely benefit from more inputs, irrespective of their parity."
                    },
                    {
                        "id": 147,
                        "string": "Finally, adding supervision in the target language in RaRe further improves upon the unsupervised models."
                    },
                    {
                        "id": 148,
                        "string": "CoNLL Dataset Finally, we apply our model to the CoNLL-02/03 datasets, to benchmark our technique against related work."
                    },
                    {
                        "id": 149,
                        "string": "This corpus is much less rich than Wikiann used above, as it includes only four languages (en, de, nl, es), and furthermore, the languages are closely related and share the same script."
                    },
                    {
                        "id": 150,
                        "string": "Results in Table 2 show that our methods are competitive with benchmark methods, and, moreover, the use of 100 annotated sentences in the target language (RaRe l ) gives good improvements over unsupervised models."
                    },
                    {
                        "id": 151,
                        "string": "13 Results also show that MV does very well, especially MV ent , and its performance is comparable to BEA's."
                    },
                    {
                        "id": 152,
                        "string": "Note that there are only 3 source models and none of them is clearly bad, so BEA estimates that they are similarly reliable which results in little difference in terms of performance between BEA and MV."
                    },
                    {
                        "id": 153,
                        "string": "Related Work Two main approaches for cross-lingual transfer are representation and annotation projection."
                    },
                    {
                        "id": 154,
                        "string": "Representation projection learns a model in a highresource source language using representations that are cross-linguistically transferable, and then directly applies the model to data in the target language."
                    },
                    {
                        "id": 155,
                        "string": "This can include the use of crosslingual word clusters  and word embeddings (Ammar et al., 2016; Ni et al., 2017) , multitask learning with a closely related high-resource language (e.g."
                    },
                    {
                        "id": 156,
                        "string": "Spanish for Galician) (Cotterell and Duh, 2017), or bridging  the source and target languages through phonemic transcription (Bharadwaj et al., 2016) or Wikification (Tsai et al., 2016) ."
                    },
                    {
                        "id": 157,
                        "string": "In annotation projection, the annotations of tokens in a source sentence are projected to their aligned tokens in the target language through a parallel corpus."
                    },
                    {
                        "id": 158,
                        "string": "Annotation projection has been applied to POS tagging ( (Täckström, 2012; Plank and Agić, 2018) use only one language (often English) as the source language."
                    },
                    {
                        "id": 159,
                        "string": "In multi-source language setting, majority voting is often used to aggregate noisy annotations (e.g."
                    },
                    {
                        "id": 160,
                        "string": "Plank and Agić (2018) )."
                    },
                    {
                        "id": 161,
                        "string": "Fang and Cohn (2016) show the importance of modelling the annotation biases that the source language(s) might project to the target language."
                    },
                    {
                        "id": 162,
                        "string": "Transfer from multiple source languages: Previous work has shown the improvements of multi-source transfer in NER (Täckström, 2012; Fang et al., 2017; Enghoff et al., 2018) , POS tagging (Snyder et al., 2009; Plank and Agić, 2018) , and parsing (Ammar et al., 2016) compared to single source transfer, however, multi-source transfer might be noisy as a result of divergence in script, phonology, morphology, syntax, and semantics between the source languages, and the target language."
                    },
                    {
                        "id": 163,
                        "string": "To capture such differences, various methods have been proposed: latent variable models (Snyder et al., 2009 ), majority voting (Plank and Agić, 2018) , utilising typological features (Ammar et al., 2016) , or explicitly learning annotation bias (Fang and Cohn, 2017)."
                    },
                    {
                        "id": 164,
                        "string": "Our work is also related to knowledge distillation from multiple source models applied in parsing (Kuncoro et al., 2016) and machine translation (Kim and Rush, 2016; Johnson et al., 2017) ."
                    },
                    {
                        "id": 165,
                        "string": "In this work, we use truth inference to model the transfer annotation bias from diverse source models."
                    },
                    {
                        "id": 166,
                        "string": "Finally, our work is related to truth inference from crowd-sourced annotations (Whitehill et al., 2009; Welinder et al., 2010) , and most importantly from diverse classifiers (Kim and Ghahramani, 2012; Ratner et al., 2017) ."
                    },
                    {
                        "id": 167,
                        "string": "Nguyen et al."
                    },
                    {
                        "id": 168,
                        "string": "(2017) propose a hidden Markov model for aggregating crowdsourced sequence labels, but only learn per-class accuracies for workers instead of full confusion matrices in order to address the data sparsity problem in crowdsourcing."
                    },
                    {
                        "id": 169,
                        "string": "Conclusion Cross-lingual transfer does not work out of the box, especially when using large numbers of source languages, and distantly related target languages."
                    },
                    {
                        "id": 170,
                        "string": "In an NER setting using a collection of 41 languages, we showed that simple methods such as uniform ensembling do not work well."
                    },
                    {
                        "id": 171,
                        "string": "We proposed two new multilingual transfer models (RaRe and BEA), based on unsupervised transfer, or a supervised transfer setting with a small 100 sentence labelled dataset in the target language."
                    },
                    {
                        "id": 172,
                        "string": "We also compare our results with BWET (Xie et al., 2018) , a state-of-the-art unsupervised single source (English) transfer model, and showed that multilingual transfer outperforms it, however, our work is orthogonal to their work in that if training data from multiple source models is created, RaRe and BEA can still combine them, and outperform majority voting."
                    },
                    {
                        "id": 173,
                        "string": "Our unsupervised method, BEA uns , provides a fast and simple way of annotating data in the target language, which is capable of reasoning under noisy annotations, and outperforms several competitive baselines, including the majority voting ensemble, a low-resource supervised baseline, and the oracle single best transfer model."
                    },
                    {
                        "id": 174,
                        "string": "We show that light supervision improves performance further, and that our second approach, RaRe, based on ranking transfer models and then retraining on the target language, results in further and more consistent performance improvements."
                    },
                    {
                        "id": 175,
                        "string": "A Appendices A.1 Hyperparameters We tuned the batch size and the learning rate using development sets in four languages, 14 and then fixed these hyperparameters for all other languages in each model."
                    },
                    {
                        "id": 176,
                        "string": "The batch size was 1 sentence in low-resource scenarios (in baseline LSup and fine-tuning of RaRe), and to 100 sentences, in high-resource settings (HSup and the pretraining phase of RaRe)."
                    },
                    {
                        "id": 177,
                        "string": "The learning rate was set to 0.001 and 0.01 for the high-resource and low-resource baseline models, respectively, and to 0.005, 0.0005 for the pretraining and fine-tuning phase of RaRe based on development results for the four languages."
                    },
                    {
                        "id": 178,
                        "string": "For CoNLL datasets, we had to decrease the batch size of the pre-training phase from 100 to 20 (because of GPU memory issues)."
                    },
                    {
                        "id": 179,
                        "string": "A.2 Cross-lingual Word Embeddings We experimented with Wiki and CommonCrawl monolingual embeddings from fastText (Bojanowski et al., 2017) ."
                    },
                    {
                        "id": 180,
                        "string": "Each of the 41 languages is mapped to English embedding space using three methods from MUSE: 1) supervised with bilingual dictionaries; 2) seeding using identical character sequences; and 3) unsupervised training using adversarial learning (Lample et al., 2018) ."
                    },
                    {
                        "id": 181,
                        "string": "The crosslingual mappings are evaluated by precision at k = 1."
                    },
                    {
                        "id": 182,
                        "string": "The resulting cross-lingual embeddings are then used in NER direct transfer in a leave-one-out setting for the 41 languages (41×40 transfers), and we report the mean F 1 in Table 3 ."
                    },
                    {
                        "id": 183,
                        "string": "CommonCrawl doesn't perform well in bilingual induction despite having larger text corpora, and underperforms in direct transfer NER."
                    },
                    {
                        "id": 184,
                        "string": "It is also evident that using identical character strings instead of a bilingual dictionary as the seed for learning a supervised bilingual mapping barely affects the performance."
                    },
                    {
                        "id": 185,
                        "string": "This finding also applies to few-shot learning over larger ensembles: running RaRe over 40 source languages achieves an average F 1 of 77.9 when using embeddings trained with a dictionary, versus 76.9 using string identity instead."
                    },
                    {
                        "id": 186,
                        "string": "For this reason we have used the string identity method in the paper (e.g., Table 4 ), providing greater portability to language pairs without a bilingual dictionary."
                    },
                    {
                        "id": 187,
                        "string": "Experiments with unsupervised mappings performed substantially worse than supervised methods, and so we didn't explore these further."
                    },
                    {
                        "id": 188,
                        "string": "14 Afrikaans, Arabic, Bulgarian and Bengali."
                    },
                    {
                        "id": 189,
                        "string": "A.3 Direct Transfer Results In Figure 5 the performance of an NER model trained in a high-resource setting on a source language applied on the other 40 target languages (leave-one-out) is shown."
                    },
                    {
                        "id": 190,
                        "string": "An interesting finding is that symmetry does not always hold (e.g."
                    },
                    {
                        "id": 191,
                        "string": "id vs. ms or fa vs. ar)."
                    },
                    {
                        "id": 192,
                        "string": "A.4 Detailed Low-resource Results The result of applying baselines, proposed models and their variations, and unsupervised transfer model of Xie et al."
                    },
                    {
                        "id": 193,
                        "string": "(2018) are shown in Table 4 ."
                    },
                    {
                        "id": 194,
                        "string": "ar he id ms tl vi af nl en de da no sv el tr fa bn hi ta ca es fr it pt ro bg mk ru uk bs cs hr pl sk sl sq lt lv et fi hu  17 7 9 3 7 12 6 3 2 36 10 5 28 3 6 35 17 14 21 8 13 4 2 15 10 72 63 72 15 2 2 7 2 3 2 12 2 4 3 7 12 13 16 9 9 8 8 9 7 29 14 10 19 10 13 25 17 13 16 14 20 10 9 18 14 52 39 61 10 10 7 11 9 9 9 7 8 10 9 12 17 11 14 9 7 4 5 4 3 27 11 6 21 6 8 33 19 17 25 14 14 7 5 19 16 56 58 60 6 12 4 10 5 5 5 5 5 7 12 9 9 23 62 50 37 47 57 70 58 63 Figure 5 : The direct transfer performance of a source NER model trained in a high-resource setting applied on the other 40 target languages, and evaluated in terms of phrase-level F 1 ."
                    },
                    {
                        "id": 195,
                        "string": "The languages are roughly sorted by language family."
                    },
                    {
                        "id": 196,
                        "string": "Slavic languages in Cyrillic script are from bg to uk, and those in Latin script are from bs to sl."
                    },
                    {
                        "id": 197,
                        "string": "µ ---89.2 62.1 74.3 77.4 76.9 74.8 60.2 50.5 72.8 69.7 64.5 56.7 71.6 σ ---2.8 5.2 7.3 6.4 6.4 9.6 24.1 14.7 11.5 12.6 13.7 25 11.5 Table 4 : The size of training and test sets (development set size equals test set size) in thousand sentences, and the precision at 1 for Bilingual dictionaries induced from mapping languages to the English embedding space (using identical characters) is shown (BiDic.P@1)."
                    },
                    {
                        "id": 198,
                        "string": "F 1 scores on the test set, comparing baseline supervised models (HSup, LSup), multilingual transfer from top k source languages (RaRe, 5 runs, k = 1, 10, 40), an unsupervised RaRe with uniform expertise and no fine-tuning (RaRe uns ), and aggregation methods: majority voting (MV tok ), BEA tok uns and BEA ent uns (Bayesian aggregation in token-and entity-level), and the oracle single best annotation (Oracle)."
                    },
                    {
                        "id": 199,
                        "string": "We also compare with BWET (Xie et al., 2018) , an unsupervised transfer model with stateof-the-art on CoNLL NER datasets."
                    },
                    {
                        "id": 200,
                        "string": "The mean and standard deviation over all 41 languages, µ, σ, are also reported."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 15
                    },
                    {
                        "section": "Approach",
                        "n": "2",
                        "start": 16,
                        "end": 33
                    },
                    {
                        "section": "Zero-Shot Transfer",
                        "n": "2.1",
                        "start": 34,
                        "end": 58
                    },
                    {
                        "section": "Token versus Entity Granularity",
                        "n": "2.1.1",
                        "start": 59,
                        "end": 68
                    },
                    {
                        "section": "Few-Shot Transfer",
                        "n": "2.2",
                        "start": 69,
                        "end": 94
                    },
                    {
                        "section": "Data",
                        "n": "3.1",
                        "start": 95,
                        "end": 102
                    },
                    {
                        "section": "Model Variations",
                        "n": "3.2",
                        "start": 103,
                        "end": 113
                    },
                    {
                        "section": "Results",
                        "n": "4",
                        "start": 114,
                        "end": 152
                    },
                    {
                        "section": "Related Work",
                        "n": "5",
                        "start": 153,
                        "end": 168
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 169,
                        "end": 200
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1095-Figure3-1.png",
                        "caption": "Figure 3: The mean and standard deviation for the F1 score of the proposed unsupervised models (BEAtokuns and BEA ent",
                        "page": 5,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.56,
                            "y1": 269.76,
                            "y2": 436.32
                        }
                    },
                    {
                        "filename": "../figure/image/1095-Figure2-1.png",
                        "caption": "Figure 2: Best source language ( ) compared with en ( ), and majority voting ( ) over all source languages in terms of F1 performance in direct transfer shown for a subset of the 41 target languages (x axis). Worst transfer score, not shown here, is about 0. See §3 for details of models and datasets.",
                        "page": 5,
                        "bbox": {
                            "x1": 85.44,
                            "x2": 521.28,
                            "y1": 66.24,
                            "y2": 203.04
                        }
                    },
                    {
                        "filename": "../figure/image/1095-Figure1-1.png",
                        "caption": "Figure 1: Plate diagram for the BEA model.",
                        "page": 1,
                        "bbox": {
                            "x1": 338.88,
                            "x2": 495.35999999999996,
                            "y1": 62.879999999999995,
                            "y2": 172.32
                        }
                    },
                    {
                        "filename": "../figure/image/1095-Figure4-1.png",
                        "caption": "Figure 4: The mean F1 performance of MVent, BEAentsup, BEA ent",
                        "page": 6,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 273.59999999999997,
                            "y1": 62.879999999999995,
                            "y2": 238.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1095-Table4-1.png",
                        "caption": "Table 4: The size of training and test sets (development set size equals test set size) in thousand sentences, and the precision at 1 for Bilingual dictionaries induced from mapping languages to the English embedding space (using identical characters) is shown (BiDic.P@1). F1 scores on the test set, comparing baseline supervised models (HSup, LSup), multilingual transfer from top k source languages (RaRe, 5 runs, k = 1, 10, 40), an unsupervised RaRe with uniform expertise and no fine-tuning (RaRe uns), and aggregation methods: majority voting (MVtok), BEAtokuns and BEA ent",
                        "page": 13,
                        "bbox": {
                            "x1": 120.96,
                            "x2": 477.12,
                            "y1": 102.24,
                            "y2": 618.24
                        }
                    },
                    {
                        "filename": "../figure/image/1095-Table1-1.png",
                        "caption": "Table 1: An example sentence with its aggregated labels in both token view and entity view. Aggregation in token view may generate results inconsistent with the BIO scheme.",
                        "page": 2,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 62.4,
                            "y2": 155.04
                        }
                    },
                    {
                        "filename": "../figure/image/1095-Figure5-1.png",
                        "caption": "Figure 5: The direct transfer performance of a source NER model trained in a high-resource setting applied on the other 40 target languages, and evaluated in terms of phrase-level F1. The languages are roughly sorted by language family. Slavic languages in Cyrillic script are from bg to uk, and those in Latin script are from bs to sl.",
                        "page": 12,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 517.4399999999999,
                            "y1": 170.88,
                            "y2": 610.56
                        }
                    },
                    {
                        "filename": "../figure/image/1095-Table2-1.png",
                        "caption": "Table 2: The performance of RaRe and BEA in terms of phrase-based F1 on CoNLL NER datasets compared with state-of-the-art benchmark methods. Resource requirements are indicated with superscripts, p: parallel corpus, w: Wikipedia, d: dictionary, l: 100 NER annotation, 0: no extra resources.",
                        "page": 7,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 287.03999999999996,
                            "y1": 62.4,
                            "y2": 291.36
                        }
                    },
                    {
                        "filename": "../figure/image/1095-Table3-1.png",
                        "caption": "Table 3: The effect of the choice of monolingual word embeddings (Common Crawl and Wikipedia), and their cross-lingual mapping on NER direct transfer. Word translation accuracy, and direct transfer NER F1 are averaged over 40 languages.",
                        "page": 11,
                        "bbox": {
                            "x1": 313.92,
                            "x2": 518.4,
                            "y1": 62.4,
                            "y2": 169.92
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-33"
        },
        {
            "slides": {
                "0": {
                    "title": "Empirically evaluate various models in EJ task",
                    "text": [
                        "multi-layer encoder-decoder model soft-attention model",
                        "Three recurrent units Two kinds of training data",
                        "LSTM, GRU, IRNN naturally-ordered, pre-reordered"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": [
                        "figure/image/1097-Figure5-1.png",
                        "figure/image/1097-Figure6-1.png"
                    ]
                },
                "2": {
                    "title": "Results evaluation scores",
                    "text": [
                        "Baseline hierarchical phrase-based SMT",
                        "Submitted system 2 (NMT + System combination)",
                        "Best competitor 1: NAIST (Travatar System with NeuralMT Reranking)",
                        "Best competitor 2: naver (SMT t2s + Spell correction + NMT reranking)"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Finding and Insights",
                    "text": [
                        "Soft-attention models outperforms multi-layer encoder-decoder models",
                        "Training models on pre-reordered data hurts the performance",
                        "NMT models tend to make grammatically valid but incomplete translations"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                }
            },
            "paper_title": "Evaluating Neural Machine Translation in English-Japanese Task",
            "paper_id": "1097",
            "paper": {
                "title": "Evaluating Neural Machine Translation in English-Japanese Task",
                "abstract": "Recently, by scaling up neural network models and incorporating some techniques during the training, the performance of NMT models have already achieved the state-of-the-art in English-French translation task (Luong et al.,",
                "text": [
                    {
                        "id": 0,
                        "string": "In this paper, we evaluate Neural Machine Translation (NMT) models in English-Japanese translation task."
                    },
                    {
                        "id": 1,
                        "string": "Various network architectures with different recurrent units are tested."
                    },
                    {
                        "id": 2,
                        "string": "Additionally, we examine the effect of using pre-reordered data for the training."
                    },
                    {
                        "id": 3,
                        "string": "Our experiments show that even simple NMT models can produce better translations compared with all SMT baselines."
                    },
                    {
                        "id": 4,
                        "string": "For NMT models, recovering unknown words is another key to obtaining good translations."
                    },
                    {
                        "id": 5,
                        "string": "We describe a simple workaround to find missing translations with a back-off system."
                    },
                    {
                        "id": 6,
                        "string": "To our surprise, performing prereordering on the training data hurts the model performance."
                    },
                    {
                        "id": 7,
                        "string": "Finally, we provide a qualitative analysis demonstrates a specific error pattern in NMT translations which omits some information and thus fail to preserve the complete meaning."
                    },
                    {
                        "id": 8,
                        "string": "Introduction In the last two decades, Statistical Machine Translation (SMT) with log-linear models in the core has shown promising results in the field."
                    },
                    {
                        "id": 9,
                        "string": "However, as stated in (Duh and Kirchhoff, 2008) , log-linear models may suffer from the underfitting problem and thus give poor performance."
                    },
                    {
                        "id": 10,
                        "string": "While for recurrent neural networks (RNNs), as demonstrated in (Mikolov et al., 2010) , they brought significant improvement in Natural Language Processing tasks."
                    },
                    {
                        "id": 11,
                        "string": "In their research, RNNs are shown to be capable of giving more prediction power compared with conventional language models when large training data is given."
                    },
                    {
                        "id": 12,
                        "string": "Using these neural lan-guage models to rescore SMT outputs generally gives better translation results (Auli and Gao, 2014) ."
                    },
                    {
                        "id": 13,
                        "string": "Other approaches rescore with RNNs that predict the next word by taking the word in current step and S as inputs (Kalchbrenner and Blunsom, 2013; Cho, Merrienboer, et al., 2014) ."
                    },
                    {
                        "id": 14,
                        "string": "Here, S is a vector representation summarizes the whole input sentence."
                    },
                    {
                        "id": 15,
                        "string": "Neural machine translation is a brand-new approach that samples translation results directly from RNNs."
                    },
                    {
                        "id": 16,
                        "string": "Most published models involve an encoder and a decoder in the network architecture (Sutskever, Vinyals, and Le, 2014) , called Encoder-Decoder approach."
                    },
                    {
                        "id": 17,
                        "string": "Figure 1 gives a general overview of this approach."
                    },
                    {
                        "id": 18,
                        "string": "In Figure 1 , the vector output S of the encoder RNN represents the whole input sentence."
                    },
                    {
                        "id": 19,
                        "string": "Hence, S contains all information required to produce the translation."
                    },
                    {
                        "id": 20,
                        "string": "In order to boost up the performance, (Sutskever, Vinyals, and Le, 2014) used stacked Long Short-Term Memory (LSTM) units for both encoder and decoder, their ensembled models outperformed phrasebased SMT baseline in English-French translation task."
                    },
                    {
                        "id": 21,
                        "string": "and English-German translation task (Jean et al., 2015) ."
                    },
                    {
                        "id": 22,
                        "string": "In this paper, we describe our works on applying NMT to English-Japanese translation task."
                    },
                    {
                        "id": 23,
                        "string": "The main contributions of this work are detailed as follows: • We examined the effect of using different network architecture and recurrent units for English-Japanese translation • We empirically evaluated NMT models trained on pre-reordered data • We demonstrate a simple solution to recover unknown words in the translation results with a back-off system • We provide a qualitative analysis on the translation results of NMT models Recurrent neural networks Recurrent neural network is the solution for modeling temporal data with neural networks."
                    },
                    {
                        "id": 24,
                        "string": "The framework of widely used modern RNN is introduced by Elman (Elman, 1990) , it is also known as Elman Network or Simple Recurrent Network."
                    },
                    {
                        "id": 25,
                        "string": "At each time step, RNN updates its internal state h t based on a new input x t and the previous state h t−1 , produces an output y t ."
                    },
                    {
                        "id": 26,
                        "string": "Generally, they are computed recursively by applying following operations: h t = f (W i x t + W h h t−1 + b h ) (1) y t = f (W o h t + b o ) (2) Where f is an element-wise non-linearity, such as sigmoid or tanh."
                    },
                    {
                        "id": 27,
                        "string": "Figures 2 illustrates the computational graph of a RNN."
                    },
                    {
                        "id": 28,
                        "string": "Solid lines in the figure mark out the Affine transformations followed with a non-linear activation."
                    },
                    {
                        "id": 29,
                        "string": "Dashed lines indicate that the result of previous computation is just a parameter of next operation."
                    },
                    {
                        "id": 30,
                        "string": "The bias term b h is omitted in the illustration."
                    },
                    {
                        "id": 31,
                        "string": "RNN can be trained with Backpropagation Through Time (BPTT), which is a gradientbased technique that unfolds the network through time so as to compute the actual gradients of parameters in each time step."
                    },
                    {
                        "id": 32,
                        "string": "Long short-term memory For RNN, as the internal state h t is completely changed in each time step, BPTT algorithm dilutes error information after each step of computation."
                    },
                    {
                        "id": 33,
                        "string": "Hence, RNN suffers from the problem that it is difficult to capture longterm dependencies."
                    },
                    {
                        "id": 34,
                        "string": "Long short-term memory units (Hochreiter and Schmidhuber, 1997) incorporate some gates to control the information flow."
                    },
                    {
                        "id": 35,
                        "string": "In addition to the hidden units in RNN, memory cells are used to store long-term information, which is updated linearly."
                    },
                    {
                        "id": 36,
                        "string": "Empirically, LSTM can preserve information for arbitrarily long periods of time."
                    },
                    {
                        "id": 37,
                        "string": "Figure 3 gives an illustration of the computational graph of a basic LSTM unit."
                    },
                    {
                        "id": 38,
                        "string": "In which, input gate i t , forget gate f t and output gate o t are marked with rhombuses."
                    },
                    {
                        "id": 39,
                        "string": "\"×\" and \"+\" are element-wise multiplication and elementwise addition respectively."
                    },
                    {
                        "id": 40,
                        "string": "The computational steps follows (Graves, 2013) , 11 weight parameters are involved in this model, compared with only 2 weight parameters in RNN."
                    },
                    {
                        "id": 41,
                        "string": "We can see from Figure 3 that the memory cells c t can keep unchanged when f t outputs 1 and i t outputs 0."
                    },
                    {
                        "id": 42,
                        "string": "Gated recurrent unit Gated recurrent unit (GRU) is originally proposed in (Cho, Merrienboer, et al., 2014) ."
                    },
                    {
                        "id": 43,
                        "string": "Similarly to LSTM unit, GRU also has gating units to control the information flow."
                    },
                    {
                        "id": 44,
                        "string": "While LSTM unit has a separate memory cell, GRU unit only maintains one kind of internal states, thus reduces computational complexity."
                    },
                    {
                        "id": 45,
                        "string": "The computational graph of a GRU unit is demonstrated in Figure 4 ."
                    },
                    {
                        "id": 46,
                        "string": "As shown in the figure, 6 weight parameters are involved."
                    },
                    {
                        "id": 47,
                        "string": "Network architectures of Neural Machine Translation A basic architecture of NMT is called Encoder-Decoder approach (Sutskever, Vinyals, and Le, 2014) , which encodes the input sequence into a vector representation, then unrolls it to generate the output sequence."
                    },
                    {
                        "id": 48,
                        "string": "Then softmax function is applied to the output layer in order to compute cross-entropy."
                    },
                    {
                        "id": 49,
                        "string": "Instead of using one-hot embeddings for the tokens in the vocabulary, trainable word embeddings are used."
                    },
                    {
                        "id": 50,
                        "string": "As a pre-processing step, \"<eos>\" token is appended to the end of each sequence."
                    },
                    {
                        "id": 51,
                        "string": "When translating, the token with the highest probability in the output layer is sampled and input back to the neural network to get next output."
                    },
                    {
                        "id": 52,
                        "string": "This is done recursively until \"<eos>\" is observed."
                    },
                    {
                        "id": 53,
                        "string": "Figure 5 gives a detailed illustration of this architecture when using stacked multilayer recurrent units."
                    },
                    {
                        "id": 54,
                        "string": "Soft-attention models in NMT As stated in (Cho, Merriënboer, et al., 2014) , two critical drawbacks exist in the basic Encoder-Decoder approach: (1) the performance degrades when the input sentence gets longer, (2) the vocabulary size in the target Attentional models are first proposed in the field of computer vision, which allows the recurrent network to focus on a small portion in the image at each step."
                    },
                    {
                        "id": 55,
                        "string": "The internal state is updated only depends on this glimpse."
                    },
                    {
                        "id": 56,
                        "string": "Softattention first evaluates the weights for all possible positions to attend, then make a weighted summarization of all hidden states in the encoder."
                    },
                    {
                        "id": 57,
                        "string": "The summarized vector is finally used to update the internal state of the decoder."
                    },
                    {
                        "id": 58,
                        "string": "Contrary to hard-attention mechanism which selects only one location at each step and thus has to be trained with reinforce learning techniques, soft-attention mechanism makes the computational graph differentiable and thus able to be trained with standard backpropagation."
                    },
                    {
                        "id": 59,
                        "string": "Figure 6 : The recurrent network using softattention mechanism to predict next output."
                    },
                    {
                        "id": 60,
                        "string": "The application of soft-attention mechanism in machine translation is firstly described in (Bahdanau, Cho, and Bengio, 2014) , which is referred as \"RNNsearch\" in this paper."
                    },
                    {
                        "id": 61,
                        "string": "The computational graph of a soft-attention NMT model is illustrated in Figure 6 ."
                    },
                    {
                        "id": 62,
                        "string": "In which, the encoder is replaced by a bi-directional RNN, the hidden states of two RNNs is finally concatenated in each input position."
                    },
                    {
                        "id": 63,
                        "string": "At each time step of decoding, an alignment weight a i is computed based on the previous state of the decoder and the concatenated hidden state of position i in the encoder."
                    },
                    {
                        "id": 64,
                        "string": "The alignment weights are finally normalized by softmax function."
                    },
                    {
                        "id": 65,
                        "string": "The weighted summarization of the hidden states in the encoder is then fed into the decoder."
                    },
                    {
                        "id": 66,
                        "string": "Hence, the internal state of the decoder is updated based on 3 inputs: the previous state, weighted summarization of the encoder and the target-side input token."
                    },
                    {
                        "id": 67,
                        "string": "The empirical results in (Bahdanau, Cho, and Bengio, 2014) show that the performance of RNNsearch does not degrade severely like normal Encoder-Decoder approach."
                    },
                    {
                        "id": 68,
                        "string": "Solutions of unknown words A critical practical problem of NMT is the fixed vocabulary size in the output layer."
                    },
                    {
                        "id": 69,
                        "string": "As the output layer uses dense connections, enlarging it will significantly increase the computational complexity and thus slow down the training."
                    },
                    {
                        "id": 70,
                        "string": "According to existing publications, two kinds of approaches are used to tackle this problem: model-specific and translationspecific approach."
                    },
                    {
                        "id": 71,
                        "string": "Well known modelspecific approaches are noise-contrastive training (Mnih and Kavukcuoglu, 2013) and classbased models (Mikolov et al., 2010) ."
                    },
                    {
                        "id": 72,
                        "string": "In (Jean et al., 2015) , another model-specific solution is proposed by using only a small set of target vocabulary at each update."
                    },
                    {
                        "id": 73,
                        "string": "By using a very large target vocabulary, they were able to outperform the state-of-the-art system in English-German translation task."
                    },
                    {
                        "id": 74,
                        "string": "Solutions of Translation-specific approach usually take advantage of the alignment of tokens in both sides."
                    },
                    {
                        "id": 75,
                        "string": "For examples, the proposed method in (Luong et al., 2015) annotates the unknown target words with \"unkpos i \" instead of \"unk\"."
                    },
                    {
                        "id": 76,
                        "string": "Where the subscript i is the position of the aligned source word for the unknown target word."
                    },
                    {
                        "id": 77,
                        "string": "The alignments can be obtained by conventional aligners."
                    },
                    {
                        "id": 78,
                        "string": "The purpose of this processing step put some cues for recovering missing words into the output."
                    },
                    {
                        "id": 79,
                        "string": "By applying this approach, they were able to surpass the state-of-the-art SMT system in English-French translation task."
                    },
                    {
                        "id": 80,
                        "string": "Experiments Experiment setup In our experiments, we are curious to see how NMT models work in English-Japanese translation and how well the existing approaches for unknown words fit into this setting."
                    },
                    {
                        "id": 81,
                        "string": "As Japanese language drastically differs from English in terms of word order and grammar structure."
                    },
                    {
                        "id": 82,
                        "string": "NMT models must capture the semantics of long-range dependencies in a sentence in order to translate it well."
                    },
                    {
                        "id": 83,
                        "string": "We use Japanese-English Scientific Paper Abstract Corpus (ASPEC-JE) as training data and focus on evaluating the models for English-Japanese translation task."
                    },
                    {
                        "id": 84,
                        "string": "In order to make the training time-efficient, we pick 1.5M sentences according to similarity score then filter out long sentences with more than 40 words in either English or Japanese side."
                    },
                    {
                        "id": 85,
                        "string": "This processing step gives 1.1M sentences for training."
                    },
                    {
                        "id": 86,
                        "string": "We randomly separate out 1,280 sentences as valid data."
                    },
                    {
                        "id": 87,
                        "string": "As almost zero pre-knowledge of NMT experiments in English-Japanese translation can be found in publications, our purpose is to conduct a thorough experiment so that we can evaluate and compare different model architectures and recurrent units."
                    },
                    {
                        "id": 88,
                        "string": "However, the limitation of computational resource and time disallows us to massively test various models, training schemes, and hyper-parameters."
                    },
                    {
                        "id": 89,
                        "string": "In our experiments, we evaluated four kinds of models as follow: Most of the details of these models are common."
                    },
                    {
                        "id": 90,
                        "string": "The recurrent layers of all the models contain 1024 neurons each."
                    },
                    {
                        "id": 91,
                        "string": "The size of word embedding is 1000."
                    },
                    {
                        "id": 92,
                        "string": "We truncate the sourceside and target-side vocabulary sizes to 80k and 40k respectively."
                    },
                    {
                        "id": 93,
                        "string": "For all models, we insert a dense layer contains 600 neurons immediately before the output layer."
                    },
                    {
                        "id": 94,
                        "string": "We basically use SGD with learning rate decay as optimization method, the batch size is 60 and initial learning rate is 1."
                    },
                    {
                        "id": 95,
                        "string": "The gradients are clipped to ensure L2 norm lower than 3."
                    },
                    {
                        "id": 96,
                        "string": "Although we sort the training data according to the input length, the order of batches is shuffled before training."
                    },
                    {
                        "id": 97,
                        "string": "For LSTM units, we set the bias of forget gate to 1 before training (Jozefowicz, Zaremba, and Sutskever, 2015) ."
                    },
                    {
                        "id": 98,
                        "string": "During the translation, we set beam size to 20, if no valid translation is obtained, then another trail with beam size of 1000 will be performed."
                    },
                    {
                        "id": 99,
                        "string": "Evaluating models by perplexity For our in-house experiments, the evaluation of our models mainly relies on the perplexity measured on valid data, as a strong correlation between perplexity and translation performance is observed in many existing publications (Luong et al., 2015) ."
                    },
                    {
                        "id": 100,
                        "string": "The changing perplexities of the models described in Section 5.1 are visualized in Figure 7 ."
                    },
                    {
                        "id": 101,
                        "string": "In Figure 7 , we can see that soft-attention models with LSTM unit constantly outperforms the muti-layer Encoder-Decoder model."
                    },
                    {
                        "id": 102,
                        "string": "This matches our expectation as the alignment between English and Japanese is too complicated thus it is difficult for simple Encoder-Decoder models to capture it correctly."
                    },
                    {
                        "id": 103,
                        "string": "Another observation is that the performance of the soft-attention model with GRU unit is significantly lower than that with LSTM unit."
                    },
                    {
                        "id": 104,
                        "string": "As this is conflict with the results reported in other publications (Jozefowicz, Zaremba, and Sutskever, 2015) , one possible explanation is that some implementation issues exist and further investigation is required."
                    },
                    {
                        "id": 105,
                        "string": "One surprising observation is that using prereordered data to train soft-attention models does not benefit the perplexity, but degrades the performance by a small margin."
                    },
                    {
                        "id": 106,
                        "string": "We show that the same conclusion can be drawn by measuring translation performance directly in latter sections."
                    },
                    {
                        "id": 107,
                        "string": "Replacing unknown words Initially, we adapt the solution described in (Luong et al., 2015) , which annotate the unknown words with \"unkpos i \", where i is the position of aligned source word."
                    },
                    {
                        "id": 108,
                        "string": "We find this require source-side and target-side sentences roughly aligned."
                    },
                    {
                        "id": 109,
                        "string": "When testing on the softattention model with pre-reordered training data, we found this method can correctly point out the rough aligned position of a missing word."
                    },
                    {
                        "id": 110,
                        "string": "This allows us to recover the missing output words with a dictionary or SMT systems."
                    },
                    {
                        "id": 111,
                        "string": "However, for the training data in natural order, the position of aligned words in two languages differs drastically."
                    },
                    {
                        "id": 112,
                        "string": "The solution described above can hardly be applied as it annotates the unknown words with relative positions."
                    },
                    {
                        "id": 113,
                        "string": "Here, we propose a simple workaround for recovering the unknown words with a back-off system."
                    },
                    {
                        "id": 114,
                        "string": "We translate the input sentence using both a NMT system and a baseline SMT system."
                    },
                    {
                        "id": 115,
                        "string": "Assume the translation results are similar, then if we observe an unknown word in the result of the NMT system, then it is reasonable to infer that the rarest word in the baseline result which is missing in the NMT result should be this unknown translation."
                    },
                    {
                        "id": 116,
                        "string": "This is demonstrated in Figure 8 , the rarest word in the baseline result is picked out to replace the unknown word in the NMT result."
                    },
                    {
                        "id": 117,
                        "string": "Practically, the assumption will not be true, the results of NMT systems and conventional SMT systems differ tremendously."
                    },
                    {
                        "id": 118,
                        "string": "Hence, some incorrect word replacements are introduced."
                    },
                    {
                        "id": 119,
                        "string": "This method can be generalized to recover multiple unknown words by selecting the rarest word in a near position."
                    },
                    {
                        "id": 120,
                        "string": "Evaluating translation performance In this section, we describe our submitted systems and report the evaluation results in the English-Japanese translation task of The 2nd Workshop on Asian Translation 1 (Nakazawa et al., 2015) ."
                    },
                    {
                        "id": 121,
                        "string": "We train these models with AdaDelta for 5 epochs."
                    },
                    {
                        "id": 122,
                        "string": "Then, we fine-tune the model with AdaGrad using an enlarged training data, that each sentence contains no more than 50 words."
                    },
                    {
                        "id": 123,
                        "string": "With this fine-tuning step, we are able to achieve perplexity of 1.76 in valid data."
                    },
                    {
                        "id": 124,
                        "string": "The automatic evaluation results are shown in Table 1 ."
                    },
                    {
                        "id": 125,
                        "string": "Three SMT baselines are picked for comparison."
                    },
                    {
                        "id": 126,
                        "string": "In the middle of the table, we list two single soft-attention NMT models with LSTM unit."
                    },
                    {
                        "id": 127,
                        "string": "The results show that training models on pre-reordered corpus leads to degrading of translation performance, where the pre-reordering step is done using the model described in (Zhu, 2014) ."
                    },
                    {
                        "id": 128,
                        "string": "Our submitted systems are basically an ensemble of two LSTM Search models trained on natural-order data, as shown in the bottom of Table 1 ."
                    },
                    {
                        "id": 129,
                        "string": "After we replaced unknown words with the technique described in 5.3, we gained 0.8 BLEU on test data."
                    },
                    {
                        "id": 130,
                        "string": "This is our first submitted system, marked with \"S1\"."
                    },
                    {
                        "id": 131,
                        "string": "We also found it is useful to perform a sys-1 Our team ID is \"WEBLIO MT\" tem combination based on perplexity scores."
                    },
                    {
                        "id": 132,
                        "string": "We evaluate the perplexity for all outputs produced by a baseline system and the NMT model."
                    },
                    {
                        "id": 133,
                        "string": "These two sets of perplexity score are normalized by mean and standard deviation respectively."
                    },
                    {
                        "id": 134,
                        "string": "Then for each NMT result, we rescore it with the difference of perplexity against the baseline system."
                    },
                    {
                        "id": 135,
                        "string": "Intuitively, if the NMT result is better than the baseline result, the new score shall be a positive number."
                    },
                    {
                        "id": 136,
                        "string": "In our experiment, we pick the system described in (Zhu, 2014) as baseline system."
                    },
                    {
                        "id": 137,
                        "string": "We pick top-1000 results from NMT and the rest from the baseline system, this gives us a gain of 1.8 in BLEU."
                    },
                    {
                        "id": 138,
                        "string": "30.000 Submitted system 1 (S1) 43.500 Submitted system 2 (S2) 53.750 Finally, we added 3 pre-reordered LSTM Search models to the ensemble, results in a 5model ensemble."
                    },
                    {
                        "id": 139,
                        "string": "During the translation, these three models receive pre-reordered input, another two LSTM Search models receive input in natural order."
                    },
                    {
                        "id": 140,
                        "string": "We gain 0.24 BLEU with this setting, and this is the second submitted system, marked with \"S2\"."
                    },
                    {
                        "id": 141,
                        "string": "Human evaluation results of our submitted systems are shown in Table 2 ."
                    },
                    {
                        "id": 142,
                        "string": "As we already know that pre-reordering does not help improving translation performance, a natural choice is to train more normal LSTM Search models and put into the ensemble."
                    },
                    {
                        "id": 143,
                        "string": "We failed to do it because of insufficient time."
                    },
                    {
                        "id": 144,
                        "string": "Qualitative analysis To find some insights in the translation results of NMT systems, we performed qualitative analysis on a proportion of held-out development data."
                    },
                    {
                        "id": 145,
                        "string": "During the inspection, we found many errors share the same pattern."
                    },
                    {
                        "id": 146,
                        "string": "It turns out that NMT model tends to make a perfect translation by omitting some information during the translation."
                    },
                    {
                        "id": 147,
                        "string": "In this case, the output tends to be a valid sentence, but the meaning is partially lost."
                    },
                    {
                        "id": 148,
                        "string": "One example of this phenomenon is shown in the following snippet: Input: this paper discusses some systematic uncertainties including casimir force , false force due to electric force , and various factors for irregular uncertainties due to patch field and detector noise ."
                    },
                    {
                        "id": 149,
                        "string": "Conclusion In this paper, we performed a systematic evaluation of various kinds for NMT models in the setting of English-Japanese translation."
                    },
                    {
                        "id": 150,
                        "string": "Based on the empirical evaluation results, we found soft-attention NMT models can already make good translation results in English-Japanese translation task."
                    },
                    {
                        "id": 151,
                        "string": "Their performance surpasses all SMT baselines by a substantial margin according to RIBES scores."
                    },
                    {
                        "id": 152,
                        "string": "We also found that NMT models can work well without extra data processing steps such as pre-reordering."
                    },
                    {
                        "id": 153,
                        "string": "Finally, we described a simple workaround to recover unknown words with a back-off system."
                    },
                    {
                        "id": 154,
                        "string": "However, a sophisticated solution for dealing with unknown words is still an open question in the English-Japanese setting."
                    },
                    {
                        "id": 155,
                        "string": "As some patterns of mistakes can be observed from the translation results, there exists some space for further improvements."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 8,
                        "end": 22
                    },
                    {
                        "section": "Recurrent neural networks",
                        "n": "2",
                        "start": 23,
                        "end": 31
                    },
                    {
                        "section": "Long short-term memory",
                        "n": "2.1",
                        "start": 32,
                        "end": 41
                    },
                    {
                        "section": "Gated recurrent unit",
                        "n": "2.2",
                        "start": 42,
                        "end": 46
                    },
                    {
                        "section": "Network architectures of Neural Machine Translation",
                        "n": "3",
                        "start": 47,
                        "end": 53
                    },
                    {
                        "section": "Soft-attention models in NMT",
                        "n": "3.1",
                        "start": 54,
                        "end": 67
                    },
                    {
                        "section": "Solutions of unknown words",
                        "n": "4",
                        "start": 68,
                        "end": 79
                    },
                    {
                        "section": "Experiment setup",
                        "n": "5.1",
                        "start": 80,
                        "end": 98
                    },
                    {
                        "section": "Evaluating models by perplexity",
                        "n": "5.2",
                        "start": 99,
                        "end": 106
                    },
                    {
                        "section": "Replacing unknown words",
                        "n": "5.3",
                        "start": 107,
                        "end": 119
                    },
                    {
                        "section": "Evaluating translation performance",
                        "n": "5.4",
                        "start": 120,
                        "end": 143
                    },
                    {
                        "section": "Qualitative analysis",
                        "n": "5.5",
                        "start": 144,
                        "end": 148
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 149,
                        "end": 155
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1097-Figure1-1.png",
                        "caption": "Figure 1: Basic neural network architecture in Encoder-Decoder approach",
                        "page": 0,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 535.1999999999999,
                            "y1": 570.72,
                            "y2": 640.3199999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Table1-1.png",
                        "caption": "Table 1: Automatic evaluation results in WAT2015",
                        "page": 5,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 529.4399999999999,
                            "y1": 107.52,
                            "y2": 247.2
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Table2-1.png",
                        "caption": "Table 2: Human evaluation results for submitted system in WAT2015",
                        "page": 5,
                        "bbox": {
                            "x1": 333.59999999999997,
                            "x2": 495.35999999999996,
                            "y1": 524.64,
                            "y2": 581.28
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Figure3-1.png",
                        "caption": "Figure 3: An illustration of a basic LSTM unit",
                        "page": 1,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 535.1999999999999,
                            "y1": 577.4399999999999,
                            "y2": 720.0
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Figure2-1.png",
                        "caption": "Figure 2: An illustration of the computational graph of a RNN",
                        "page": 1,
                        "bbox": {
                            "x1": 325.92,
                            "x2": 508.32,
                            "y1": 59.519999999999996,
                            "y2": 109.92
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Figure4-1.png",
                        "caption": "Figure 4: An illustration of a GRU unit",
                        "page": 2,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 299.03999999999996,
                            "y1": 229.44,
                            "y2": 336.96
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Figure5-1.png",
                        "caption": "Figure 5: Illustration of a basic neural network architecture for NMT with stacked multi-layer recurrent units.",
                        "page": 2,
                        "bbox": {
                            "x1": 329.76,
                            "x2": 503.03999999999996,
                            "y1": 64.8,
                            "y2": 188.16
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Figure6-1.png",
                        "caption": "Figure 6: The recurrent network using softattention mechanism to predict next output.",
                        "page": 2,
                        "bbox": {
                            "x1": 313.92,
                            "x2": 524.16,
                            "y1": 538.56,
                            "y2": 715.1999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Figure8-1.png",
                        "caption": "Figure 8: Illustration of replacing unknown words with a back-off system.",
                        "page": 4,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 534.24,
                            "y1": 630.72,
                            "y2": 695.04
                        }
                    },
                    {
                        "filename": "../figure/image/1097-Figure7-1.png",
                        "caption": "Figure 7: Visualization of the training for different models.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 295.2,
                            "y1": 540.48,
                            "y2": 698.4
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-34"
        },
        {
            "slides": {
                "1": {
                    "title": "Motivation",
                    "text": [
                        "... I knew it was time to leave.",
                        "A single sentence may cause ambiguous",
                        "Is not that a great argument for term limits?",
                        "The contextual information of a individual sentence offers",
                        "more confident for classifying",
                        "Liao and Grishman, ACL, 2010",
                        "Huang and Riloff, AAAI, 2012"
                    ],
                    "page_nums": [
                        3,
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Model ED Oriented Document Embedding Learning",
                    "text": [
                        "Indicatedis a event trigger and is setted as 1, other words are setted as 0.",
                        "The square error as the general loss of the attention at sentence level to supervise the learning process.",
                        "S1, S3 and SL are sentences with event triggers and is setted as 1, other sentences are setted as 0."
                    ],
                    "page_nums": [
                        6,
                        7,
                        8,
                        9
                    ],
                    "images": [
                        "figure/image/1118-Figure1-1.png"
                    ]
                },
                "4": {
                    "title": "Model Document level Enhanced Event Detector",
                    "text": [
                        "softmax output layer to get the predicted probability for each word"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "7": {
                    "title": "Experiments Configuration",
                    "text": [
                        "GRU w ,GRU s ,GRUe",
                        "entity type embeddings 50 (randomly initialized)"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "8": {
                    "title": "Experiments Model analysis",
                    "text": [
                        "both gold attention signals"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": [
                        "figure/image/1118-Table1-1.png"
                    ]
                },
                "9": {
                    "title": "Experiments Baselines",
                    "text": [
                        "* Feature-based methods without document-level information :",
                        "* Representation-based methods without document-level information :",
                        "* Feature-based methods using document level information :"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "11": {
                    "title": "Summary",
                    "text": [
                        "hierarchical and supervised attention",
                        "gold word- and sentence-level attentions"
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                }
            },
            "paper_title": "Document Embedding Enhanced Event Detection with Hierarchical and Supervised Attention",
            "paper_id": "1118",
            "paper": {
                "title": "Document Embedding Enhanced Event Detection with Hierarchical and Supervised Attention",
                "abstract": "Document-level information is very important for event detection even at sentence level. In this paper, we propose a novel Document Embedding Enhanced Bi-RNN model, called DEEB-RNN, to detect events in sentences. This model first learns event detection oriented embeddings of documents through a hierarchical and supervised attention based RNN, which pays word-level attention to event triggers and sentence-level attention to those sentences containing events. It then uses the learned document embedding to enhance another bidirectional RNN model to identify event triggers and their types in sentences. Through experiments on the ACE-2005 dataset, we demonstrate the effectiveness and merits of the proposed DEEB-RNN model via comparison with state-of-the-art methods.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Event Detection (ED) is an important subtask of event extraction."
                    },
                    {
                        "id": 1,
                        "string": "It extracts event triggers from individual sentences and further identifies the type of the corresponding events."
                    },
                    {
                        "id": 2,
                        "string": "For instance, according to the ACE-2005 annotation guideline, in the sentence \"Jane and John are married\", an ED system should be able to identify the word \"married\" as a trigger of the event \"Marry\"."
                    },
                    {
                        "id": 3,
                        "string": "However, it may be difficult to identify events from isolated sentences, because the same event trigger might represent different event types in different contexts."
                    },
                    {
                        "id": 4,
                        "string": "Existing ED methods can mainly be categorized into two classes, namely, feature-based methods (e.g., (McClosky et al., 2011; Hong et al., 2011; Li et al., 2014) ) and representation-based methods (e.g., (Nguyen and Grishman, 2015; Chen et al., 2015; Liu et al., 2016a; )."
                    },
                    {
                        "id": 5,
                        "string": "The former mainly rely on a set of hand-designed features, while the latter employ distributed representation to capture meaningful semantic information."
                    },
                    {
                        "id": 6,
                        "string": "In general, most of these existing methods mainly exploit sentence-level contextual information."
                    },
                    {
                        "id": 7,
                        "string": "However, document-level information is also important for ED, because the sentences in the same document, although they may contain different types of events, are often correlated with respect to the theme of the document."
                    },
                    {
                        "id": 8,
                        "string": "For example, there are the following sentences in ACE-2005: ..."
                    },
                    {
                        "id": 9,
                        "string": "I knew it was time to leave."
                    },
                    {
                        "id": 10,
                        "string": "Isn't that a great argument for term limits?"
                    },
                    {
                        "id": 11,
                        "string": "..."
                    },
                    {
                        "id": 12,
                        "string": "If we only examine the first sentence, it is hard to determine whether the trigger \"leave\" indicates a \"Transport\" event meaning that he wants to leave the current place, or an \"End-Position\" event indicating that he will stop working for his current organization."
                    },
                    {
                        "id": 13,
                        "string": "However, if we can capture the contextual information of this sentence, it is more confident for us to label \"leave\" as the trigger of an \"End-Position\" event."
                    },
                    {
                        "id": 14,
                        "string": "Upon such observation, there have been some feature-based studies (Ji and Grishman, 2008; Liao and Grishman, 2010; Huang and Riloff, 2012) that construct rules to capture document-level information for improving sentence-level ED."
                    },
                    {
                        "id": 15,
                        "string": "However, they suffer from two major limitations."
                    },
                    {
                        "id": 16,
                        "string": "First, the features used therein often need to be manually designed and may involve error propagation due to natural language processing; Second, they discover inter-event information at document level by constructing inference rules, which is time-consuming and is hard to make the rule set as complete as possible."
                    },
                    {
                        "id": 17,
                        "string": "Besides, a representation-based study has been presented in (Duan et al., 2017) , which employs the PV-DM model to train document embeddings and further uses it in a RNN-based event classifier."
                    },
                    {
                        "id": 18,
                        "string": "However, as being limited by the unsupervised training process, the document-level representation cannot specifically capture event-related information."
                    },
                    {
                        "id": 19,
                        "string": "In this paper, we propose a novel Document Embedding Enhanced Bi-RNN model, called DEEB-RNN, for ED at sentence level."
                    },
                    {
                        "id": 20,
                        "string": "This model first learns ED oriented embeddings of documents through a hierarchical and supervised attention based bidirectional RNN, which pays word-level attention to event triggers and sentence-level attention to those sentences containing events."
                    },
                    {
                        "id": 21,
                        "string": "It then uses the learned document embeddings to facilitate another bidirectional RNN model to identify event triggers and their types in individual sentences."
                    },
                    {
                        "id": 22,
                        "string": "This learning process is guided by a general loss function where the loss corresponding to attention at both word and sentence levels and that of event type identification are integrated."
                    },
                    {
                        "id": 23,
                        "string": "It should be mentioned that although the attention mechanism has recently been applied effectively in various tasks, including machine translation , question answering (Hao et al., 2017) , document summarization (Tan et al., 2017) , etc., this is the first study, to the best of our knowledge, which adopts a hierarchical and supervised attention mechanism to learn ED oriented embeddings of documents."
                    },
                    {
                        "id": 24,
                        "string": "We evaluate the developed DEEB-RNN model on the benchmark dataset, ACE-2005, and systematically investigate the impacts of different supervised attention strategies on its performance."
                    },
                    {
                        "id": 25,
                        "string": "Experimental results show that the DEEB-RNN model outperforms both feature-based and representation-based state-of-the-art methods in terms of recall and F1-measure."
                    },
                    {
                        "id": 26,
                        "string": "The Proposed Model We formalize ED as a multi-class classification problem."
                    },
                    {
                        "id": 27,
                        "string": "Given a sentence, we treat every word in it as a trigger candidate, and classify each candidate to a certain event type."
                    },
                    {
                        "id": 28,
                        "string": "In the ACE-2005 dataset, there are 8 event types, further being divided into 33 subtypes, and a \"Not Applicable (NA)\" type."
                    },
                    {
                        "id": 29,
                        "string": "Without loss of generality, in this paper we regard the 33 subtypes as 33 event types."
                    },
                    {
                        "id": 30,
                        "string": "Figure 1 presents the schematic diagram of the proposed DEEB-RNN model, which contains two main modules: The ED Oriented Document Embedding Learning (EDODEL) module, which learns the distributed representations of documents from both word and sentence levels via the well-designed hierarchical and supervised attention mechanism."
                    },
                    {
                        "id": 31,
                        "string": "2."
                    },
                    {
                        "id": 32,
                        "string": "The Document-level Enhanced Event Detector (DEED) module, which tags each trigger candidate with an event type based on the learned embedding of documents."
                    },
                    {
                        "id": 33,
                        "string": "The EDODEL Module To learn the ED oriented embedding of a document, we apply the hierarchical and supervised attention network presented in Figure 1 , which consists of a word-level Bi-GRU (Schuster and Paliwal, 2002 ) encoder with attention on event triggers and a sentence-level Bi-GRU encoder with attention on sentences with events."
                    },
                    {
                        "id": 34,
                        "string": "Given a document with L sentences, DEEB-RNN learns its embedding for detecting events in all sentences."
                    },
                    {
                        "id": 35,
                        "string": "Word-level embeddings Given a sentence s i (i = 1, 2, ..., L) consisting of words {w it |t = 1, 2, ..., T }."
                    },
                    {
                        "id": 36,
                        "string": "For each word w it , we first concatenate its embedding w it and its entity type embedding 1 e it (Nguyen and Grishman, 2015) as the input g it of a Bi-GRU and thus obtain the bidirectional hidden state h it : h it = [ − −−− → GRU w (g it ), ← −−− − GRU w (g it )]."
                    },
                    {
                        "id": 37,
                        "string": "(1) We then feed h it to a perceptron with no bias to get u it = tanh(W w h it ) as a hidden representation of h it and also obtain an attention weight α it = u T it c w , which should be normalized through a softmax function."
                    },
                    {
                        "id": 38,
                        "string": "Here, similar to that in (Yang et al., 2016) , c w is a vector representing the wordlevel context of w it , which is initialized at random."
                    },
                    {
                        "id": 39,
                        "string": "Finally, the embedding of the sentence s i can be obtained by summing up h it with their weights: s i = T ∑ t=1 α it h it ."
                    },
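A minimal PyTorch sketch of the word-level encoder in Eqs. (1)-(2): a Bi-GRU over the concatenated word and entity-type embeddings, followed by attention pooling into a sentence embedding. Module names and the 350-dimensional input (300 word dims + 50 entity dims, matching the reported settings) are illustrative assumptions, not the authors' code.

```python
# Sketch only: word-level Bi-GRU with attention pooling (Eqs. 1-2).
import torch
import torch.nn as nn

class WordAttentionEncoder(nn.Module):
    def __init__(self, input_dim=350, hidden_dim=300, attn_dim=600):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.W_w = nn.Linear(2 * hidden_dim, attn_dim, bias=False)  # perceptron with no bias
        self.c_w = nn.Parameter(torch.randn(attn_dim))              # word-level context vector

    def forward(self, g):                           # g: (batch, T, input_dim)
        h, _ = self.gru(g)                          # Eq. (1): bidirectional states h_it
        u = torch.tanh(self.W_w(h))                 # hidden representation u_it
        alpha = torch.softmax(u @ self.c_w, dim=1)  # normalized attention weights alpha_it
        s = (alpha.unsqueeze(-1) * h).sum(dim=1)    # Eq. (2): s_i = sum_t alpha_it * h_it
        return s, alpha
```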
                    {
                        "id": 40,
                        "string": "(2) To pay more attention to trigger words than other words, we construct the gold word-level attention signals α * i for the sentence s i , as illustrated in Figure 2a ."
                    },
                    {
                        "id": 41,
                        "string": "We can then take the square error as the general loss of the attention at word level to supervise the learning process: E w (α * , α) = L ∑ i=1 T ∑ t=1 (α * it − α it ) 2 ."
                    },
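The square-error supervision of Eq. (3) (and the analogous sentence-level loss in Eq. (6)) reduces to a few lines; this sketch assumes the predicted weights and gold signals are given as same-shaped tensors:

```python
import torch

def attention_supervision_loss(alpha_pred, alpha_gold):
    # alpha_pred, alpha_gold: (L, T) word-level weights, or (L,) sentence-level.
    # Eq. (3)/(6): sum of squared differences between gold and predicted attention.
    return ((alpha_gold - alpha_pred) ** 2).sum()
```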
                    {
                        "id": 42,
                        "string": "(3) 1 The words in the ACE-2005 dataset are annotated with their entity types (annotated as \"NA\" if they are not an entity)."
                    },
                    {
                        "id": 43,
                        "string": "Sentence-level embeddings Given the sentence embeddings {s i |i = 1, 2, ..., L}, we first get the hidden state q i via a Bi-GRU: q i = [ −−−→ GRU s (s i ), ←−−− GRU s (s i )]."
                    },
                    {
                        "id": 44,
                        "string": "(4) Then we feed q i to a perceptron with no bias to get the hidden representation t i = tanh(W s q i ) and also obtain an attention weight β i = t T i c s to be normalized via softmax."
                    },
                    {
                        "id": 45,
                        "string": "Similarly, c s represents the sentence-level context of s i to be randomly initialized."
                    },
                    {
                        "id": 46,
                        "string": "We eventually obtain the document embedding d as: d = L ∑ i=1 β i s i ."
                    },
                    {
                        "id": 47,
                        "string": "(5) We also think that the sentences containing event should obtain more attention than other ones."
                    },
                    {
                        "id": 48,
                        "string": "Therefore, similar to the case at word level, we construct the gold sentence-level attention signals β * for the document d, as illustrated in Figure 2b , and further take the square error as the general loss of the attention at sentence level to supervise the learning process: E s (β * , β) = L ∑ i=1 (β * i − β i ) 2 ."
                    },
                    {
                        "id": 49,
                        "string": "(6) The DEED Module We employ another Bi-GRU encoder and a softmax output layer to model the ED task, which can handle event triggers with multiple words."
                    },
                    {
                        "id": 50,
                        "string": "Specifically, given a sentence s j (j = 1, 2, ..., L) in document d, for each of its word w jt (t = 1, 2, ..., T ), we concatenate its word embedding w jt and entity type embedding e jt with the corresponding document embedding d as the input r jt of the Bi-GRU and thus obtain the hidden state f jt : f jt = [ −−−→ GRU e (r jt ), ←−−− GRU e (r jt )]."
                    },
                    {
                        "id": 51,
                        "string": "(7) Finally, we get the probability vector o jt with K dimensions through a softmax layer for w jt , where the k-th element, o jt , of o jt indicates the probability of classifying w jt to the k-th event type."
                    },
                    {
                        "id": 52,
                        "string": "The loss function , J(y, o) , can thus be defined in terms of the cross-entropy error of the real event type y jt and the predicted probability o (k) jt as follows: J(y, o) = − L ∑ j=1 T ∑ t=1 K ∑ k=1 I(y jt = k)log o (k) jt , (8) where I(·) is the indicator function."
                    },
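Eq. (8) is a standard per-word cross-entropy over the K event types. A sketch using PyTorch's built-in cross-entropy, which applies log-softmax internally and therefore takes pre-softmax logits rather than the probabilities o_jt:

```python
import torch
import torch.nn.functional as F

def detection_loss(logits, labels):
    # logits: (num_words, K) pre-softmax scores; labels: (num_words,) gold type ids.
    # Equivalent to Eq. (8): -sum_{j,t,k} I(y_jt = k) * log o_jt^(k).
    return F.cross_entropy(logits, labels, reduction="sum")
```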
                    {
                        "id": 53,
                        "string": "Joint Training of the DEEB-RNN model In the DEEB-RNN model, the above two modules are jointly trained."
                    },
                    {
                        "id": 54,
                        "string": "For this purpose, we define the joint loss function in the training process upon the losses specified for different modules as follows: J(θ) = ∑ ∀d∈ϕ (J(y, o)+λE w (α * , α)+µE s (β * , β)), (9) where θ denotes, as a whole, the parameters used in DEEB-RNN, ϕ is the training document set, and λ and µ are hyper-parameters for striking a balance among J(y, o), E w (α * , α) and E s (β * , β)."
                    },
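The joint objective of Eq. (9) is then a weighted sum per training document; a trivial sketch (λ = µ = 1 recovers the DEEB-RNN3 setting described later):

```python
def joint_loss(J_det, E_word, E_sent, lam=1.0, mu=1.0):
    # Eq. (9): detection loss plus weighted word- and sentence-level attention losses.
    return J_det + lam * E_word + mu * E_sent
```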
                    {
                        "id": 55,
                        "string": "Experiments Datasets and Settings We validate the proposed model through comparison with state-of-the-art methods on the ACE-2005 dataset."
                    },
                    {
                        "id": 56,
                        "string": "In the experiments, the validation set has 30 documents from different genres, the test set has 40 documents and the training set contains the remaining 529 documents."
                    },
                    {
                        "id": 57,
                        "string": "All the data preprocessing and evaluation criteria follow those in (Ghaeini et al., 2016) ."
                    },
                    {
                        "id": 58,
                        "string": "Hyper-parameters are tuned on the validation set."
                    },
                    {
                        "id": 59,
                        "string": "We set the dimension of the hidden layers corresponding to GRU w , GRU s , and GRU e to 300, 200, and 300, respectively, the output size of W w and W s to 600 and 400, respectively, the dimension of entity type embeddings to 50, the batch size to 25, the dropout rate to 0.5."
                    },
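For reference, the reported hyper-parameters collected in one place (the key names are ours; the values are from the paper):

```python
DEEB_RNN_CONFIG = {
    "gru_w_hidden": 300,   # word-level Bi-GRU
    "gru_s_hidden": 200,   # sentence-level Bi-GRU
    "gru_e_hidden": 300,   # event-detector Bi-GRU
    "W_w_out": 600, "W_s_out": 400,
    "word_emb_dim": 300,   # pre-trained (Mikolov et al., 2013)
    "entity_emb_dim": 50,  # randomly initialized
    "batch_size": 25,
    "dropout": 0.5,
}
```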
                    {
                        "id": 60,
                        "string": "In addition, we utilize the pre-trained word embeddings with 300 dimensions from (Mikolov et al., 2013) for initialization."
                    },
                    {
                        "id": 61,
                        "string": "For entity types, their embeddings are randomly initialized."
                    },
                    {
                        "id": 62,
                        "string": "We train the model using Stochastic Gradient Descent (SGD) over shuffled mini-batches and using dropout (Krizhevsky et al., 2012) for regularization."
                    },
                    {
                        "id": 63,
                        "string": "Baseline Models In order to validate the proposed DEEB-RNN model through experimental comparison, we choose the following typical models as the baselines."
                    },
                    {
                        "id": 64,
                        "string": "Sentence-level is a feature-based model proposed in (Hong et al., 2011) , which regards entitytype consistency as a key feature to predict event mentions."
                    },
                    {
                        "id": 65,
                        "string": "Joint Local is a feature-based model developed in (Li et al., 2013) , which incorporates such features that explicitly capture the dependency among multiple triggers and arguments."
                    },
                    {
                        "id": 66,
                        "string": "JRNN is a representation-based model proposed in , which exploits the inter-dependency between event triggers and argument roles via discrete structures."
                    },
                    {
                        "id": 67,
                        "string": "Skip-CNN is a representation-based model presented in , which proposes a novel convolution to exploit nonconsecutive k-grams for event detection."
                    },
                    {
                        "id": 68,
                        "string": "ANN-S2 is a representation-based model developed in , which explicitly exploits argument information for event detection via supervised attention mechanisms."
                    },
                    {
                        "id": 69,
                        "string": "Cross-event is a feature-based model proposed in (Liao and Grishman, 2010) , which learns relations among event types from training corpus and futher helps predict the occurrence of events."
                    },
                    {
                        "id": 70,
                        "string": "PSL is a feature-based model developed in (Liu et al., 2016b) , which encods global information such as event-event association in the form of logic using the probabilistic soft logic model."
                    },
                    {
                        "id": 71,
                        "string": "DLRNN is a representation-based model proposed in (Duan et al., 2017) , which automatically extracts cross-sentence clues to improve sentencelevel event detection."
                    },
                    {
                        "id": 72,
                        "string": "Impacts of Different Attention Strategies In this section, we conduct experiments on the ACE-2005 dataset to demonstrate the effectiveness of different attention strategies."
                    },
                    {
                        "id": 73,
                        "string": "Bi-GRU is the basic ED model, which does not employ document-level embeddings."
                    },
                    {
                        "id": 74,
                        "string": "DEEB-RNN uses the document embeddings and computes attentions without supervision, in which hyper-parameters λ and µ are set to 0."
                    },
                    {
                        "id": 75,
                        "string": "DEEB-RNN1/2/3 means they uses the gold attention signals as supervision information."
                    },
                    {
                        "id": 76,
                        "string": "Specifically, DEEB-RNN1 uses only the gold word-level attention signal (λ = 1 and µ = 0), DEEB-RNN2 uses only the gold sentence-level attention signal (λ = 0 and µ = 1), whilst DEEB-RNN3 employs the gold attention signals at both word and sen-  tence levels (λ = 1 and µ = 1)."
                    },
                    {
                        "id": 77,
                        "string": "Table 1 compares these methods, where we can observe that the methods with document embeddings (i.e., the last four) significantly outperform the pure Bi-GRU method, which suggests that document-level information is very beneficial for ED."
                    },
                    {
                        "id": 78,
                        "string": "An interesting phenomenon is that, as compared to DEEB-RNN, DEEB-RNN2 changes the precision-recall balance."
                    },
                    {
                        "id": 79,
                        "string": "This is because of the following reasons."
                    },
                    {
                        "id": 80,
                        "string": "On one hand, as compared to DEEB-RNN, DEEB-RNN2 uses the gold sentence-level attention signal, indicating that it pays special attention to the sentences containing events with event triggers."
                    },
                    {
                        "id": 81,
                        "string": "In this way, the Bi-RNN model for learning document embeddings will filter out the sentences containing events but without explicit event triggers."
                    },
                    {
                        "id": 82,
                        "string": "That means the events detected by DEEB-RNN2 are basically the ones with explicit event triggers."
                    },
                    {
                        "id": 83,
                        "string": "Therefore, as compared to DEEB-RNN, the precision of DEEB-RNN2 is improved; On the other hand, the above strategy may result in less learning of words, which are event triggers but do not appear in the training dataset."
                    },
                    {
                        "id": 84,
                        "string": "Therefore, those sentences with such event triggers cannot be detected."
                    },
                    {
                        "id": 85,
                        "string": "The recall of DEEB-RNN2 is thus lowered, as compared to DEEB-RNN."
                    },
                    {
                        "id": 86,
                        "string": "Moreover, DEEB-RNN3 shows the best performance, indicating that the gold attention signals at both word and sentence levels are useful for ED."
                    },
                    {
                        "id": 87,
                        "string": "Table 2 presents the overall performance of all methods on ACE-2005."
                    },
                    {
                        "id": 88,
                        "string": "We can see that different versions of DEEB-RNN consistently out-perform the existing state-of-the-art methods in terms of both recall and F1-measure, while their precision is comparable to that of others."
                    },
                    {
                        "id": 89,
                        "string": "The better performance of DEEB-RNN can be explained by the following reasons: (1) Compared with feature-based methods, including Sentencelevel, Joint Local, and representation-based methods, including JRNN, Skip-CNN and ANN-S2, our method exploits document-level information (i.e., the ED oriented document embeddings) from both word and sentence levels in a document by the supervised attention mechanism, which enhance the ability of identifying trigger words; Performance Comparison (2) Compared with feature-based methods using document-level information, such as Cross-event, PSL, our method can automatically capture event types in documents via a end-to-end Bi-RNN based model without manually designed rules; (3) Compared with representation-based methods using document-level information, such as DLRNN, our method can learn event detection oriented embeddings of documents through the hierarchical and supervised attention based Bi-RNN network."
                    },
                    {
                        "id": 90,
                        "string": "Conclusions and Future Work In this study, we proposed a hierarchical and supervised attention based and document embedding enhanced Bi-RNN method, called DEEB-RNN, for event detection."
                    },
                    {
                        "id": 91,
                        "string": "We explored different strategies to construct gold word-and sentence-level attentions to focus on event information."
                    },
                    {
                        "id": 92,
                        "string": "Experiments on the ACE-2005 dataset demonstrate that DEEB-RNN achieves better performance as compared to the state-of-the-art methods in terms of both recall and F1-measure."
                    },
                    {
                        "id": 93,
                        "string": "In this paper, we can strike a balance between sentence and document embeddings by adjusting their dimensions."
                    },
                    {
                        "id": 94,
                        "string": "In the future, we may improve the DEEB-RNN model to automatically determine the weights of sentence and document embeddings."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 25
                    },
                    {
                        "section": "The Proposed Model",
                        "n": "2",
                        "start": 26,
                        "end": 29
                    },
                    {
                        "section": "The ED Oriented Document Embedding",
                        "n": "1.",
                        "start": 30,
                        "end": 32
                    },
                    {
                        "section": "The EDODEL Module",
                        "n": "2.1",
                        "start": 33,
                        "end": 48
                    },
                    {
                        "section": "The DEED Module",
                        "n": "2.2",
                        "start": 49,
                        "end": 52
                    },
                    {
                        "section": "Joint Training of the DEEB-RNN model",
                        "n": "2.3",
                        "start": 53,
                        "end": 54
                    },
                    {
                        "section": "Datasets and Settings",
                        "n": "3.1",
                        "start": 55,
                        "end": 62
                    },
                    {
                        "section": "Baseline Models",
                        "n": "3.2",
                        "start": 63,
                        "end": 71
                    },
                    {
                        "section": "Impacts of Different Attention Strategies",
                        "n": "3.3",
                        "start": 72,
                        "end": 88
                    },
                    {
                        "section": "Performance Comparison",
                        "n": "3.4",
                        "start": 89,
                        "end": 89
                    },
                    {
                        "section": "Conclusions and Future Work",
                        "n": "4",
                        "start": 90,
                        "end": 94
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1118-Figure2-1.png",
                        "caption": "Figure 2: Examples of the gold word- and sentence-level attention without normalization. (a) Word-level attention. “Indicated” is a candidate trigger; (b) Sentence-level attention. The sentences in purple contain trigger words.",
                        "page": 2,
                        "bbox": {
                            "x1": 79.67999999999999,
                            "x2": 282.24,
                            "y1": 61.44,
                            "y2": 158.4
                        }
                    },
                    {
                        "filename": "../figure/image/1118-Table2-1.png",
                        "caption": "Table 2: Comparison between different methods. † indicates that the corresponding ED method uses information at both sentence and document levels.",
                        "page": 4,
                        "bbox": {
                            "x1": 93.6,
                            "x2": 268.32,
                            "y1": 62.879999999999995,
                            "y2": 211.2
                        }
                    },
                    {
                        "filename": "../figure/image/1118-Figure1-1.png",
                        "caption": "Figure 1: The schematic diagram of the DEEB-RNN model for ED at sentence level.",
                        "page": 1,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 523.1999999999999,
                            "y1": 61.44,
                            "y2": 276.0
                        }
                    },
                    {
                        "filename": "../figure/image/1118-Table1-1.png",
                        "caption": "Table 1: Experimental results with different attention strategies.",
                        "page": 3,
                        "bbox": {
                            "x1": 325.92,
                            "x2": 507.35999999999996,
                            "y1": 62.879999999999995,
                            "y2": 137.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-35"
        },
        {
            "slides": {
                "0": {
                    "title": "Semantic Role Labeling SRL",
                    "text": [
                        "Find out who did what to whom in text",
                        "I ate pizza with friends"
                    ],
                    "page_nums": [
                        1,
                        2
                    ],
                    "images": []
                },
                "1": {
                    "title": "SRL as BIO Tagging",
                    "text": [
                        "ARG0 V ARG1 AM-PRP",
                        "Input1 Many tourists visit Disney to meet their favorite cartoon characters",
                        "Needs target predicate as input!",
                        "(Prior works typically used gold predicates)",
                        "Ne eds to re-run the tag ger f or ea ch pre dicate"
                    ],
                    "page_nums": [
                        3,
                        4
                    ],
                    "images": []
                },
                "2": {
                    "title": "SRL as Predicting Word Span Relations",
                    "text": [
                        "Many tourists visit Disney to meet their favorite cartoon characters",
                        "(similar to Punyakanok08, FitzGerald15, inter alia)",
                        "* Too many possible edges (n2 argument spans x n predicates)"
                    ],
                    "page_nums": [
                        5,
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "End to End SRL Results",
                    "text": [
                        "He17 Ours He17 (Ensemble) Ours+ELMo",
                        "BIO-based, pipelined predicate ID",
                        "CoNL05 WSJ Test CoNL05 Brown Test CoNLL2012 (OntoNotes)",
                        "With ELMo, over 3 points improvement over SotA ensemble!",
                        "*ELMo: Deep Contextualized Word Representations, Peters et al., 2018"
                    ],
                    "page_nums": [
                        25,
                        26
                    ],
                    "images": []
                },
                "7": {
                    "title": "Span based vs BIO",
                    "text": [
                        "Predicate Identification Pipelined Joint",
                        "Predicate Identification Due to the strong independence Pipelined Joint",
                        "Global Consistency By allowing direct interaction",
                        "between predicates and arguments"
                    ],
                    "page_nums": [
                        27,
                        28,
                        29
                    ],
                    "images": []
                },
                "8": {
                    "title": "Conclusion",
                    "text": [
                        "Joint prediction of predicates and arguments",
                        "1. Contextualized span representations",
                        "2. Local label classifiers",
                        "3. Greedy span pruning",
                        "Future work: Improve global consistency, use span representations for downstream tasks, etc."
                    ],
                    "page_nums": [
                        30,
                        31,
                        32
                    ],
                    "images": [
                        "figure/image/1119-Figure2-1.png"
                    ]
                }
            },
            "paper_title": "Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling",
            "paper_id": "1119",
            "paper": {
                "title": "Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling",
                "abstract": "Recent BIO-tagging-based neural semantic role labeling models are very high performing, but assume gold predicates as part of the input and cannot incorporate span-level features. We propose an endto-end approach for jointly predicting all predicates, arguments spans, and the relations between them. The model makes independent decisions about what relationship, if any, holds between every possible word-span pair, and learns contextualized span representations that provide rich, shared input features for each decision. Experiments demonstrate that this approach sets a new state of the art on PropBank SRL without gold predicates. 1",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Semantic role labeling (SRL) captures predicateargument relations, such as \"who did what to whom.\""
                    },
                    {
                        "id": 1,
                        "string": "Recent high-performing SRL models Marcheggiani et al., 2017; Tan et al., 2018) are BIO-taggers, labeling argument spans for a single predicate at a time (as shown in Figure 1) ."
                    },
                    {
                        "id": 2,
                        "string": "They are typically only evaluated with gold predicates, and must be pipelined with error-prone predicate identification models for deployment."
                    },
                    {
                        "id": 3,
                        "string": "We propose an end-to-end approach for predicting all the predicates and their argument spans in one forward pass."
                    },
                    {
                        "id": 4,
                        "string": "Our model builds on a recent coreference resolution model , by making central use of learned, contextualized span representations."
                    },
                    {
                        "id": 5,
                        "string": "We use these representations to predict SRL graphs directly over text spans."
                    },
                    {
                        "id": 6,
                        "string": "Each edge is identified by independently predicting which role, if any, holds between every possible pair of text spans, while using aggressive beam 1 Code and models: https://github.com/luheng/lsgn pruning for efficiency."
                    },
                    {
                        "id": 7,
                        "string": "The final graph is simply the union of predicted SRL roles (edges) and their associated text spans (nodes)."
                    },
                    {
                        "id": 8,
                        "string": "Our span-graph formulation overcomes a key limitation of semi-markov and BIO-based models (Kong et al., 2016; Zhou and Xu, 2015; Yang and Mitchell, 2017; Tan et al., 2018) : it can model overlapping spans across different predicates in the same output structure (see Figure 1 )."
                    },
                    {
                        "id": 9,
                        "string": "The span representations also generalize the token-level representations in BIObased models, letting the model dynamically decide which spans and roles to include, without using previously standard syntactic features (Punyakanok et al., 2008; FitzGerald et al., 2015) ."
                    },
                    {
                        "id": 10,
                        "string": "To the best of our knowledge, this is the first span-based SRL model that does not assume that predicates are given."
                    },
                    {
                        "id": 11,
                        "string": "In this more realistic setting, where the predicate must be predicted, our model achieves state-of-the-art performance on PropBank."
                    },
                    {
                        "id": 12,
                        "string": "It also reinforces the strong performance of similar span embedding methods for coreference , suggesting that this style of models could be used for other span-span relation tasks, such as syntactic parsing (Stern et al., 2017) , relation extraction (Miwa and Bansal, 2016) , and QA-SRL (FitzGerald et al., 2018) ."
                    },
                    {
                        "id": 13,
                        "string": "Model We consider the space of possible predicates to be all the tokens in the input sentence, and the space of arguments to be all continuous spans."
                    },
                    {
                        "id": 14,
                        "string": "Our model decides what relation exists between each predicate-argument pair (including no relation)."
                    },
                    {
                        "id": 15,
                        "string": "Formally, given a sequence X = w 1 , ."
                    },
                    {
                        "id": 16,
                        "string": "."
                    },
                    {
                        "id": 17,
                        "string": "."
                    },
                    {
                        "id": 18,
                        "string": ", w n , we wish to predict a set of labeled predicateargument relations Y ⊆ P × A × L, where P = {w 1 , ."
                    },
                    {
                        "id": 19,
                        "string": "."
                    },
                    {
                        "id": 20,
                        "string": "."
                    },
                    {
                        "id": 21,
                        "string": ", w n } is the set of all tokens (predicates), A = {(w i , ."
                    },
                    {
                        "id": 22,
                        "string": "."
                    },
                    {
                        "id": 23,
                        "string": "."
                    },
                    {
                        "id": 24,
                        "string": ", w j ) | 1 ≤ i ≤ j ≤ n} contains all the spans (arguments), and L is the space of semantic role labels, including a null label indicating no relation."
                    },
                    {
                        "id": 25,
                        "string": "The final SRL output would be all the non-empty relations {(p, a, l) ∈ Y | l = }."
                    },
                    {
                        "id": 26,
                        "string": "We then define a set of random variables, where each random variable y p,a corresponds to a predicate p ∈ P and an argument a ∈ A, taking value from the discrete label space L. The random variables y p,a are conditionally independent of each other given the input X: P (Y | X) = p∈P,a∈A P (y p,a | X) (1) P (y p,a = l | X) = exp(φ(p, a, l)) l ∈L exp(φ(p, a, l )) (2) Where φ(p, a, l) is a scoring function for a possible (predicate, argument, label) combination."
                    },
                    {
                        "id": 27,
                        "string": "φ is decomposed into two unary scores on the predicate and the argument (defined in Section 3), as well as a label-specific score for the relation: φ(p, a, l) = Φ a (a) + Φ p (p) + Φ (l) rel (a, p) (3) The score for the null label is set to a constant: φ(p, a, ) = 0, similar to logistic regression."
                    },
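A sketch of Eqs. (2)-(3) for a single (predicate, argument) pair: the decomposed score φ for each real label, with the null label's score pinned to 0 before the softmax. Tensor shapes and names are illustrative assumptions:

```python
import torch

def label_distribution(phi_a, phi_p, phi_rel):
    # phi_a, phi_p: scalar unary scores; phi_rel: (|L| - 1,) label-specific scores.
    scores = phi_a + phi_p + phi_rel               # Eq. (3) for every real label l
    null = torch.zeros(1)                          # phi(p, a, null) = 0, a fixed constant
    return torch.softmax(torch.cat([null, scores]), dim=0)  # Eq. (2); index 0 = null
```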
                    {
                        "id": 28,
                        "string": "Learning For each input X, we minimize the negative log likelihood of the gold structure Y * : J (X) = − log P (Y * | X) (4) Beam pruning As our model deals with O(n 2 ) possible argument spans and O(n) possible predicates, it needs to consider O(n 3 |L|) possible relations, which is computationally impractical."
                    },
                    {
                        "id": 29,
                        "string": "To overcome this issue, we define two beams B a and B p for storing the candidate arguments and predicates, respectively."
                    },
                    {
                        "id": 30,
                        "string": "The candidates in each beam are ranked by their unary score (Φ a or Φ p )."
                    },
                    {
                        "id": 31,
                        "string": "The sizes of the beams are limited by λ a n and λ p n. Elements that fall out of the beam do not participate in computing the edge factors Φ (l) rel , reducing the overall number of relational factors evaluated by the model to O(n 2 |L|)."
                    },
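A sketch of the beam step under these assumptions: candidates are ranked by their unary scores, and only the top λ·n survive to the relational scoring:

```python
import torch

def beam_prune(unary_scores, n, ratio):
    # unary_scores: (num_candidates,) Phi_a or Phi_p values; n: sentence length.
    k = min(unary_scores.numel(), max(1, int(ratio * n)))  # beam size lambda * n
    return torch.topk(unary_scores, k).indices             # candidates kept in the beam
```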
                    {
                        "id": 32,
                        "string": "We also limit the maximum width of spans to a fixed number W (e.g."
                    },
                    {
                        "id": 33,
                        "string": "W = 30), further reducing the number of computed unary factors to O(n)."
                    },
                    {
                        "id": 34,
                        "string": "Neural Architecture Our model builds contextualized representations for argument spans a and predicate words p based on BiLSTM outputs ( Figure 2 ) and uses feedforward networks to compute the factor scores in φ(p, a, l) described in Section 2 ( Figure 3 )."
                    },
                    {
                        "id": 35,
                        "string": "Word-level contexts The bottom layer consists of pre-trained word embeddings concatenated with character-based representations, i.e."
                    },
                    {
                        "id": 36,
                        "string": "for each token w i , we have x i = [WORDEMB(w i ); CHARCNN(w i )]."
                    },
                    {
                        "id": 37,
                        "string": "We then contextualize each x i using an m-layered bidirectional LSTM with highway connections (Zhang et al., 2016) , which we denote asx i ."
                    },
                    {
                        "id": 38,
                        "string": "Argument and predicate representation We build contextualized representations for all candidate arguments a ∈ A and predicates p ∈ P. The argument representation contains the following: end points from the BiLSTM outputs (x START(a) ,x END(a) ), a soft head word x h (a), and embedded span width features f (a), similar to ."
                    },
                    {
                        "id": 39,
                        "string": "The predicate representation is simply the BiLSTM output at the position INDEX(p)."
                    },
                    {
                        "id": 40,
                        "string": "g(a) =[x START(a) ;x END(a) ; x h (a); f (a)] (5) g(p) =x INDEX(p) (6) The soft head representation x h (a) is an attention mechanism over word inputs x in the argument span, where the weights e(a) are computed via a linear layer over the BiLSTM outputsx."
                    },
                    {
                        "id": 41,
                        "string": "x h (a) = x START(a):END(a) e(s) (7) e(a) = SOFTMAX(w exSTART(a):END(a) ) (8) x START(a):END(a) is a shorthand for stacking a list of vectors x t , where START(a) ≤ t ≤ END(a)."
                    },
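A sketch of Eqs. (5), (7)-(8): endpoint BiLSTM states, a soft-head attention over the word inputs, and a width-feature embedding concatenated into g(a). Dimension names and the bucketed width embedding are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SpanRepresenter(nn.Module):
    def __init__(self, lstm_dim, max_width=30, width_dim=20):
        super().__init__()
        self.w_e = nn.Linear(lstm_dim, 1, bias=False)      # linear scorer for Eq. (8)
        self.width_emb = nn.Embedding(max_width, width_dim)

    def forward(self, x, x_bar, start, end):
        # x: (T, emb_dim) word inputs; x_bar: (T, lstm_dim) BiLSTM outputs;
        # assumes end - start < max_width.
        span_x, span_xbar = x[start:end + 1], x_bar[start:end + 1]
        e = torch.softmax(self.w_e(span_xbar).squeeze(-1), dim=0)  # Eq. (8): head weights
        x_head = e @ span_x                                        # Eq. (7): soft head word
        f = self.width_emb(torch.tensor(end - start))              # span width feature f(a)
        return torch.cat([x_bar[start], x_bar[end], x_head, f])    # Eq. (5): g(a)
```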
                    {
                        "id": 42,
                        "string": "Scoring The scoring functions Φ are implemented with feed-forward networks based on the predicate and argument representations g: Φ a (a) =w a MLP a (g(a)) (9) Φ p (p) =w p MLP p (g(p)) (10) Φ (l) rel (a, p) =w (l) r MLP r ([g(a); g(p)]) (11) Experiments We experiment on the CoNLL 2005 (Carreras and Màrquez, 2005) and CoNLL 2012 (OntoNotes 5.0, (Pradhan et al., 2013)) benchmarks, using two SRL setups: end-to-end and gold predicates."
                    },
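And a sketch of the scorers in Eqs. (9)-(11): feed-forward networks over g(a), g(p), and their concatenation, with a learned output vector per score (the hidden size and depth are assumptions, not the paper's exact choices):

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden=150):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU())

class Scorers(nn.Module):
    def __init__(self, arg_dim, pred_dim, num_labels, hidden=150):
        super().__init__()
        self.mlp_a, self.w_a = mlp(arg_dim, hidden), nn.Linear(hidden, 1, bias=False)
        self.mlp_p, self.w_p = mlp(pred_dim, hidden), nn.Linear(hidden, 1, bias=False)
        self.mlp_r = mlp(arg_dim + pred_dim, hidden)
        self.w_r = nn.Linear(hidden, num_labels - 1, bias=False)    # real labels only

    def forward(self, g_a, g_p):
        phi_a = self.w_a(self.mlp_a(g_a)).squeeze(-1)               # Eq. (9)
        phi_p = self.w_p(self.mlp_p(g_p)).squeeze(-1)               # Eq. (10)
        phi_rel = self.w_r(self.mlp_r(torch.cat([g_a, g_p], -1)))   # Eq. (11)
        return phi_a, phi_p, phi_rel
```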
                    {
                        "id": 43,
                        "string": "In the end-to-end setup, a system takes a tokenized sentence as input, and predicts all the predicates and their arguments."
                    },
                    {
                        "id": 44,
                        "string": "Systems are evaluated on the micro-averaged F1 for correctly predicting (predicate, argument span, label) tuples."
                    },
                    {
                        "id": 45,
                        "string": "For comparison with previous systems, we also report results with gold predicates, in which the complete set of predicates in the input sentence is given as well."
                    },
                    {
                        "id": 46,
                        "string": "Other experimental setups and hyperparameteres are listed in Appendix A.1."
                    },
                    {
                        "id": 47,
                        "string": "ELMo embeddings To further improve performance, we also add ELMo word representations (Peters et al., 2018) to the BiLSTM input (in the +ELMo rows)."
                    },
                    {
                        "id": 48,
                        "string": "Since the contextualized representations ELMo provides can be applied to most previous neural systems, the improvement is orthogonal to our contribution."
                    },
                    {
                        "id": 49,
                        "string": "In Table 1 and 2, we organize all the results into two categories: the comparable single model systems, and the mod-els augmented with ELMo or ensembling (in the PoE rows)."
                    },
                    {
                        "id": 50,
                        "string": "End-to-end results As shown in Table 1 , 2 our joint model outperforms the previous best pipeline system  by an F1 difference of anywhere between 1.3 and 6.0 in every setting."
                    },
                    {
                        "id": 51,
                        "string": "The improvement is larger on the Brown test set, which is out-of-domain, and the CoNLL 2012 test set, which contains nominal predicates."
                    },
                    {
                        "id": 52,
                        "string": "On all datasets, our model is able to predict over 40% of the sentences completely correctly."
                    },
                    {
                        "id": 53,
                        "string": "Results with gold predicates To compare with additional previous systems, we also conduct experiments with gold predicates by constraining our predicate beam to be gold predicates only."
                    },
                    {
                        "id": 54,
                        "string": "As shown in Analysis Our model's architecture differs significantly from previous BIO systems in terms of both input and decision space."
                    },
                    {
                        "id": 55,
                        "string": "To better understand our model's strengths and weaknesses, we perform three analyses following  and , studying (1) the effectiveness of beam Figure 4 shows the predicate and argument spans kept in the beam, sorted with their unary scores."
                    },
                    {
                        "id": 56,
                        "string": "Our model efficiently prunes unlikely argument spans and predicates, significantly reduces the number of edges it needs to consider."
                    },
                    {
                        "id": 57,
                        "string": "Figure 5 shows the recall of predicate words on the CoNLL 2012 development set."
                    },
                    {
                        "id": 58,
                        "string": "By retaining λ p = 0.4 predicates per word, we are able to keep over 99.7% argument-bearing predicates."
                    },
                    {
                        "id": 59,
                        "string": "Compared to having a part-of-speech tagger (POS:X in Figure 5 ), our joint beam pruning allowing the model to have a soft trade-off between efficiency and recall."
                    },
                    {
                        "id": 60,
                        "string": "4 Effectiveness of beam pruning Long-distance dependencies Figure 6 shows the performance breakdown by binned distance between arguments to the given predicates."
                    },
                    {
                        "id": 61,
                        "string": "Our model is better at accurately predicting arguments that are farther away from the predicates, even  compared to an ensemble model  that has a higher overall F1."
                    },
                    {
                        "id": 62,
                        "string": "This is very likely due to architectural differences; in a BIO tagger, predicate information passes through many LSTM timesteps before reaching a long-distance argument, whereas our architecture enables direct connections between all predicates-arguments pairs."
                    },
                    {
                        "id": 63,
                        "string": "Agreement with syntax As mentioned in , their BIO-based SRL system has good agreement with gold syntactic span boundaries (94.3%) but falls short of previous syntaxbased systems (Punyakanok et al., 2004) ."
                    },
                    {
                        "id": 64,
                        "string": "By directly modeling span information, our model achieves comparable syntactic agreement (95.0%) to Punyakanok et al."
                    },
                    {
                        "id": 65,
                        "string": "(2004) Figure 5 : Recall of gold argument-bearing predicates on the CoNLL 2012 development data as we increase the number of predicates kept per word."
                    },
                    {
                        "id": 66,
                        "string": "POS:X shows the gold predicate recall from using certain pos-tags identified by the NLTK part-ofspeech tagger (Bird, 2006) ."
                    },
                    {
                        "id": 67,
                        "string": "tions of global structural constraints 5 compared to previous systems."
                    },
                    {
                        "id": 68,
                        "string": "Our model made more constraint violations compared to previous systems."
                    },
                    {
                        "id": 69,
                        "string": "For example, our model predicts duplicate core arguments 6 (shown in the U column in Table 3 ) more often than previous work."
                    },
                    {
                        "id": 70,
                        "string": "This is due to the fact that our model uses independent classifiers to label each predicate-argument pair, making it difficult for them to implicitly track the decisions made for several arguments with the same predicate."
                    },
                    {
                        "id": 71,
                        "string": "The Ours+decode row in Table 3 shows SRL performance after enforcing the U-constraint using dynamic programming  at decoding time."
                    },
                    {
                        "id": 72,
                        "string": "Constrained decoding at test time is effective at eliminating all the core-role inconsistencies (shown in the U-column), but did not bring significant gain on the end result (shown 5 Punyakanok et al."
                    },
                    {
                        "id": 73,
                        "string": "(2008) described a list of global constraints for SRL systems, e.g., there can be at most one core argument of each type for each predicate."
                    },
                    {
                        "id": 74,
                        "string": "6 Arguments with labels ARG0,ARG1,."
                    },
                    {
                        "id": 75,
                        "string": "."
                    },
                    {
                        "id": 76,
                        "string": "."
                    },
                    {
                        "id": 77,
                        "string": ",ARG5 and AA."
                    },
                    {
                        "id": 78,
                        "string": "in SRL F1), which only evaluates the piece-wise predicate-argument structures."
                    },
                    {
                        "id": 79,
                        "string": "Conclusion and Future Work We proposed a new SRL model that is able to jointly predict all predicates and argument spans, generalized from a recent coreference system ."
                    },
                    {
                        "id": 80,
                        "string": "Compared to previous BIO systems, our new model supports joint predicate identification and is able to incorporate span-level features."
                    },
                    {
                        "id": 81,
                        "string": "Empirically, the model does better at longrange dependencies and agreement with syntactic boundaries, but is weaker at global consistency, due to our strong independence assumption."
                    },
                    {
                        "id": 82,
                        "string": "In the future, we could incorporate higher-order inference methods  to relax this assumption."
                    },
                    {
                        "id": 83,
                        "string": "It would also be interesting to combine our span-based architecture with the selfattention layers (Tan et al., 2018; Strubell et al., 2018) for more effective contextualization."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 12
                    },
                    {
                        "section": "Model",
                        "n": "2",
                        "start": 13,
                        "end": 31
                    },
                    {
                        "section": "Neural Architecture",
                        "n": "3",
                        "start": 32,
                        "end": 41
                    },
                    {
                        "section": "Experiments",
                        "n": "4",
                        "start": 42,
                        "end": 53
                    },
                    {
                        "section": "Analysis",
                        "n": "5",
                        "start": 54,
                        "end": 78
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "6",
                        "start": 79,
                        "end": 83
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1119-Figure3-1.png",
                        "caption": "Figure 3: The span-pair classifier takes in predicate and argument representations as inputs, and computes a softmax over the label space L.",
                        "page": 2,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 290.4,
                            "y1": 246.72,
                            "y2": 360.47999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1119-Figure2-1.png",
                        "caption": "Figure 2: Building the argument span representations g(a) from BiLSTM outputs. For clarity, we only show one BiLSTM layer and a small subset of the arguments.",
                        "page": 2,
                        "bbox": {
                            "x1": 102.72,
                            "x2": 494.88,
                            "y1": 65.28,
                            "y2": 184.79999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1119-Figure5-1.png",
                        "caption": "Figure 5: Recall of gold argument-bearing predicates on the CoNLL 2012 development data as we increase the number of predicates kept per word. POS:X shows the gold predicate recall from using certain pos-tags identified by the NLTK part-ofspeech tagger (Bird, 2006).",
                        "page": 4,
                        "bbox": {
                            "x1": 89.28,
                            "x2": 277.44,
                            "y1": 73.44,
                            "y2": 186.72
                        }
                    },
                    {
                        "filename": "../figure/image/1119-Table3-1.png",
                        "caption": "Table 3: Comparison on the CoNLL 05 development set against previous systems in terms of unlabeled agreement with gold constituency (Syn%) and each type of SRL-constraints violations (Unique core roles, Continuation roles and Reference roles).",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 62.4,
                            "y2": 165.12
                        }
                    },
                    {
                        "filename": "../figure/image/1119-Figure6-1.png",
                        "caption": "Figure 6: F1 by surface distance between predicates and arguments, showing degrading performance on long-range arguments.",
                        "page": 4,
                        "bbox": {
                            "x1": 90.72,
                            "x2": 276.0,
                            "y1": 306.71999999999997,
                            "y2": 408.0
                        }
                    },
                    {
                        "filename": "../figure/image/1119-Table1-1.png",
                        "caption": "Table 1: End-to-end SRL results for CoNLL 2005 and CoNLL 2012, compared to previous systems. CoNLL 05 contains two test sets: WSJ (in-domain) and Brown (out-of-domain).",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 67.2,
                            "y2": 144.96
                        }
                    },
                    {
                        "filename": "../figure/image/1119-Table2-1.png",
                        "caption": "Table 2: Experiment results with gold predicates.",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 296.15999999999997,
                            "y1": 198.72,
                            "y2": 327.36
                        }
                    },
                    {
                        "filename": "../figure/image/1119-Figure4-1.png",
                        "caption": "Figure 4: Top: The candidate arguments and predicates in the argument beam Ba and predicate beam Bp after pruning, along with their unary scores. Bottom: Predicted SRL relations with two identified predicates and their arguments.",
                        "page": 3,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 198.72,
                            "y2": 357.12
                        }
                    },
                    {
                        "filename": "../figure/image/1119-Figure1-1.png",
                        "caption": "Figure 1: A comparison of our span-graph structure (top) versus BIO-based SRL (bottom).",
                        "page": 0,
                        "bbox": {
                            "x1": 319.68,
                            "x2": 510.24,
                            "y1": 226.07999999999998,
                            "y2": 357.12
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-36"
        },
        {
            "slides": {
                "0": {
                    "title": "Again",
                    "text": [
                        "Heard on the campaign trail:",
                        "Make the middle class mean something again, with rising incomes and broader horizons.",
                        "Make America great again."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "What is presupposition",
                    "text": [
                        "Presuppositions: assumptions shared by discourse participants in an",
                        "Presupposition triggers: expressions that indicate the presence of presuppositions.",
                        "Oops! I did it again Trigger",
                        "Presupposes Britney did it before"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "3": {
                    "title": "Motivation and Applications",
                    "text": [
                        "Interesting testbed for pragmatic reasoning: investigating presupposition triggers requires understanding preceding context.",
                        "Presupposition triggers influencing political discourse:",
                        "The abundant use of presupposition triggers helps to better communicate political messages and consequently persuade the audience (Liang and Liu,",
                        "To improve the readability and coherence in language generation applications (e.g., summarization, dialogue systems)."
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Adverbial Presupposition Triggers",
                    "text": [
                        "Indicate the recurrence, continuation, or termination of an event in the discourse context, or the presence of a similar event.",
                        "The most commonly occurring presupposition triggers (after existential triggers) (Khaleel, 2010).",
                        "Little work has been done on these triggers in the computational literature from a statistical, corpus-driven perspective.",
                        "All others (lexical and structural)"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "5": {
                    "title": "This Work",
                    "text": [
                        "Computational approach to detecting presupposition triggers.",
                        "Create new datasets for the task of detecting adverbial presupposition triggers.",
                        "Control for potential confounding factors such as class balance and syntactic governor of the triggering adverb.",
                        "Present a new weighted pooling attention mechanism for the task."
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "Task",
                    "text": [
                        "Detect contexts in which adverbial presupposition triggers can be used.",
                        "Requires detecting recurring or similar events in the discourse context.",
                        "Five triggers of interest: too, again, also, still, yet.",
                        "Frame the learning problem as a binary classification for predicting the presence of an adverbial presupposition (as opposed to the identity of the adverb)."
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "7": {
                    "title": "Sample Configuration",
                    "text": [
                        "3-tuple: label, list of tokens, list of POS tags.",
                        "Back to our example:",
                        "Make America great again. Trigger",
                        "(aka governor of again)",
                        "Special token: to identify the candidate context in the passage to the model.",
                        "Make, America, great], Tokens",
                        "VB, NNP, JJ ] POS tags"
                    ],
                    "page_nums": [
                        9,
                        10,
                        11,
                        12,
                        13,
                        14
                    ],
                    "images": []
                },
                "8": {
                    "title": "Positive vs Negative Samples",
                    "text": [
                        "Same governors as in the positive cases but without triggering presupposition.",
                        "Example of positive sample:",
                        "Juan is coming to the event too.",
                        "Example of negative sample:",
                        "Whitney is coming tomorrow."
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "9": {
                    "title": "Extracting Positive Samples",
                    "text": [
                        "Scan through all the documents to search for target adverbs.",
                        "For each occurrence of a target adverb:",
                        "Store the location and the governor of the adverb.",
                        "Extract 50 unlemmatized tokens preceding the governor, together with the tokens right after it up to the end of the sentence (where the adverb is)."
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "10": {
                    "title": "Extracting Negative Samples",
                    "text": [
                        "Extract sentences containing the same governors (as in the positive cases) but not any of the target adverbs.",
                        "Number of samples in the positive and negative classes roughly balanced.",
                        "Negative samples are extracted/constructed in the same manner as the positive examples."
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "11": {
                    "title": "Position Related Confounding Factors",
                    "text": [
                        "We try to control position-related confounding factors by two randomization approaches:",
                        "Randomize the order of documents to be scanned.",
                        "Within each document, start scanning from a random location in the document."
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": []
                },
                "21": {
                    "title": "Datasets",
                    "text": [
                        "New datasets extracted from:",
                        "The English Gigaword corpus:",
                        "Individual sub-datasets (i.e., presence of each adverb vs. absence).",
                        "ALL (i.e., presence of one of the 5 adverbs vs. absence).",
                        "The Penn Tree Bank (PTB) corpus:"
                    ],
                    "page_nums": [
                        30
                    ],
                    "images": []
                },
                "22": {
                    "title": "Results Overview",
                    "text": [
                        "Our model outperforms all other models in 10 out of 14 scenarios",
                        "(combinations of datasets and whether or not POS tags are used).",
                        "WP outperforms regular LSTM without introducing additional parameters.",
                        "For all models, we find that including POS tags benefits the detection of adverbial presupposition triggers in Gigaword and PTB datasets."
                    ],
                    "page_nums": [
                        31
                    ],
                    "images": []
                },
                "23": {
                    "title": "Results WSJ",
                    "text": [
                        "WP best on WSJ.",
                        "WSJ - Accuracy MFC: Most Frequent Class",
                        "baselines by large margin.",
                        "Models Variants All adverbs",
                        "Network based on (Kim",
                        "+ POS LSTM - POS",
                        "+ POS WP - POS"
                    ],
                    "page_nums": [
                        32
                    ],
                    "images": []
                },
                "24": {
                    "title": "Results Gigaword",
                    "text": [
                        "Models Variants All adverbs Again Still Too Yet Also",
                        "+ POS CNN - POS",
                        "+ POS LSTM - POS",
                        "+ POS WP - POS",
                        "in 10 out of cases.",
                        "Better performance with POS."
                    ],
                    "page_nums": [
                        33,
                        34,
                        35
                    ],
                    "images": []
                },
                "27": {
                    "title": "Future Directions",
                    "text": [
                        "Incorporate such a system in an NLG pipeline (e.g., dialogue or summarization with text rewriting).",
                        "Discourse analysis with presupposition (e.g., political speech).",
                        "Investigate other types of presupposition."
                    ],
                    "page_nums": [
                        38
                    ],
                    "images": []
                }
            },
            "paper_title": "Let's do it \"again\": A First Computational Approach to Detecting Adverbial Presupposition Triggers",
            "paper_id": "1121",
            "paper": {
                "title": "Let's do it \"again\": A First Computational Approach to Detecting Adverbial Presupposition Triggers",
                "abstract": "We introduce the task of predicting adverbial presupposition triggers such as also and again. Solving such a task requires detecting recurring or similar events in the discourse context, and has applications in natural language generation tasks such as summarization and dialogue systems. We create two new datasets for the task, derived from the Penn Treebank and the Annotated English Gigaword corpora, as well as a novel attention mechanism tailored to this task. Our attention mechanism augments a baseline recurrent neural network without the need for additional trainable parameters, minimizing the added computational cost of our mechanism. We demonstrate that our model statistically outperforms a number of baselines, including an LSTM-based language model.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction In pragmatics, presuppositions are assumptions or beliefs in the common ground between discourse participants when an utterance is made (Frege, 1892; Strawson, 1950; Stalnaker, 1973 Stalnaker, , 1998 , and are ubiquitous in naturally occurring discourses (Beaver and Geurts, 2014) ."
                    },
                    {
                        "id": 1,
                        "string": "Presuppositions underly spoken statements and written sentences and understanding them facilitates smooth communication."
                    },
                    {
                        "id": 2,
                        "string": "We refer to expressions that indicate the presence of presuppositions as presupposition triggers."
                    },
                    {
                        "id": 3,
                        "string": "These include definite descriptions, factive verbs and certain adverbs, among others."
                    },
                    {
                        "id": 4,
                        "string": "For example, consider the following statements: (1) John is going to the restaurant again."
                    },
                    {
                        "id": 5,
                        "string": "* Authors (listed in alphabetical order) contributed equally."
                    },
                    {
                        "id": 6,
                        "string": "(2) John has been to the restaurant."
                    },
                    {
                        "id": 7,
                        "string": "(1) is only appropriate in the context where (2) is held to be true because of the presence of the presupposition trigger again."
                    },
                    {
                        "id": 8,
                        "string": "One distinguishing characteristic of presupposition is that it is unaffected by negation of the presupposing context, unlike other semantic phenomena such as entailment and implicature."
                    },
                    {
                        "id": 9,
                        "string": "The negation of (1), John is not going to the restaurant again., also presupposes (2)."
                    },
                    {
                        "id": 10,
                        "string": "Our focus in this paper is on adverbial presupposition triggers such as again, also and still."
                    },
                    {
                        "id": 11,
                        "string": "Adverbial presupposition triggers indicate the recurrence, continuation, or termination of an event in the discourse context, or the presence of a similar event."
                    },
                    {
                        "id": 12,
                        "string": "In one study of presuppositional triggers in English journalistic texts (Khaleel, 2010) , adverbial triggers were found to be the most commonly occurring presupposition triggers after existential triggers."
                    },
                    {
                        "id": 13,
                        "string": "1 Despite their frequency, there has been little work on these triggers in the computational literature from a statistical, corpus-driven perspective."
                    },
                    {
                        "id": 14,
                        "string": "As a first step towards language technology systems capable of understanding and using presuppositions, we propose to investigate the detection of contexts in which these triggers can be used."
                    },
                    {
                        "id": 15,
                        "string": "This task constitutes an interesting testing ground for pragmatic reasoning, because the cues that are indicative of contexts containing recurring or similar events are complex and often span more than one sentence, as illustrated in Sentences (1) and (2) ."
                    },
                    {
                        "id": 16,
                        "string": "Moreover, such a task has immediate practical consequences."
                    },
                    {
                        "id": 17,
                        "string": "For example, in language generation applications such as summarization and dialogue systems, adding presuppositional triggers in contextually appropriate loca-tions can improve the readability and coherence of the generated output."
                    },
                    {
                        "id": 18,
                        "string": "We create two datasets based on the Penn Treebank corpus (Marcus et al., 1993) and the English Gigaword corpus (Graff et al., 2007) , extracting contexts that include presupposition triggers as well as other similar contexts that do not, in order to form a binary classification task."
                    },
                    {
                        "id": 19,
                        "string": "In creating our datasets, we consider a set of five target adverbs: too, again, also, still, and yet."
                    },
                    {
                        "id": 20,
                        "string": "We focus on these adverbs in our investigation because these triggers are well known in the existing linguistic literature and commonly triggering presuppositions."
                    },
                    {
                        "id": 21,
                        "string": "We control for a number of potential confounding factors, such as class balance, and the syntactic governor of the triggering adverb, so that models cannot exploit these correlating factors without any actual understanding of the presuppositional properties of the context."
                    },
                    {
                        "id": 22,
                        "string": "We test a number of standard baseline classifiers on these datasets, including a logistic regression model and deep learning methods based on recurrent neural networks (RNN) and convolutional neural networks (CNN)."
                    },
                    {
                        "id": 23,
                        "string": "In addition, we investigate the potential of attention-based deep learning models for detecting adverbial triggers."
                    },
                    {
                        "id": 24,
                        "string": "Attention is a promising approach to this task because it allows a model to weigh information from multiple points in the previous context and infer long-range dependencies in the data (Bahdanau et al., 2015) ."
                    },
                    {
                        "id": 25,
                        "string": "For example, the model could learn to detect multiple instances involving John and restaurants, which would be a good indication that again is appropriate in that context."
                    },
                    {
                        "id": 26,
                        "string": "Also, an attention-based RNN has achieved success in predicting article definiteness, which involves another class of presupposition triggers (Kabbara et al., 2016) ."
                    },
                    {
                        "id": 27,
                        "string": "As another contribution, we introduce a new weighted pooling attention mechanism designed for predicting adverbial presupposition triggers."
                    },
                    {
                        "id": 28,
                        "string": "Our attention mechanism allows for a weighted averaging of our RNN hidden states where the weights are informed by the inputs, as opposed to a simple unweighted averaging."
                    },
                    {
                        "id": 29,
                        "string": "Our model uses a form of self-attention (Paulus et al., 2018; Vaswani et al., 2017) , where the input sequence acts as both the attention mechanism's query and key/value."
                    },
                    {
                        "id": 30,
                        "string": "Unlike other attention models, instead of simply averaging the scores to be weighted, our approach aggregates (learned) attention scores by learning a reweighting scheme of those scores through another level (dimension) of attention."
                    },
                    {
                        "id": 31,
                        "string": "Additionally, our mechanism does not introduce any new parameters when compared to our LSTM baseline, reducing its computational impact."
                    },
                    {
                        "id": 32,
                        "string": "We compare our model using the novel attention mechanism against the baseline classifiers in terms of prediction accuracy."
                    },
                    {
                        "id": 33,
                        "string": "Our model outperforms these baselines for most of the triggers on the two datasets, achieving 82.42% accuracy on predicting the adverb \"also\" on the Gigaword dataset."
                    },
                    {
                        "id": 34,
                        "string": "The contributions of this work are as follows:"
                    },
                    {
                        "id": 35,
                        "string": "1. We introduce the task of predicting adverbial presupposition triggers."
                    },
                    {
                        "id": 37,
                        "string": "2. We present new datasets for the task of detecting adverbial presupposition triggers, with a data extraction method that can be applied to other similar pre-processing tasks."
                    },
                    {
                        "id": 39,
                        "string": "3. We develop a new attention mechanism in an RNN architecture that is appropriate for the prediction of adverbial presupposition triggers, and show that its use results in better prediction performance over a number of baselines without introducing additional parameters."
                    },
                    {
                        "id": 40,
                        "string": "2 Related Work Presupposition and pragmatic reasoning The discussion of presupposition can be traced back to Frege's work on the philosophy of language (Frege, 1892), which later leads to the most commonly accepted view of presupposition called the Frege-Strawson theory (Kaplan, 1970; Strawson, 1950) ."
                    },
                    {
                        "id": 41,
                        "string": "In this view, presuppositions are preconditions for sentences/statements to be true or false."
                    },
                    {
                        "id": 42,
                        "string": "To the best of our knowledge, there is no previous computational work that directly investigates adverbial presupposition."
                    },
                    {
                        "id": 43,
                        "string": "However in the fields of semantics and pragmatics, there exist linguistic studies on presupposition that involve adverbs such as \"too\" and \"again\" (e.g., (Blutner et al., 2003) , (Kang, 2012) ) as a pragmatic presupposition trigger."
                    },
                    {
                        "id": 44,
                        "string": "Also relevant to our work is (Kabbara et al., 2016) , which proposes using an attention-based LSTM network to predict noun phrase definiteness in English."
                    },
                    {
                        "id": 45,
                        "string": "Their work demonstrates the ability of these attention-based models to pick up on contextual cues for pragmatic reasoning."
                    },
                    {
                        "id": 46,
                        "string": "Many different classes of construction can trigger presupposition in an utterance, this includes but is not limited to stressed constituents, factive verbs, and implicative verbs (Zare et al., 2012) ."
                    },
                    {
                        "id": 47,
                        "string": "In this work, we focus on the class of adverbial presupposition triggers."
                    },
                    {
                        "id": 48,
                        "string": "Our task setup resembles the Cloze test used in psychology (Taylor, 1953; E. B. Coleman, 1968; Earl F. Rankin, 1969) and machine comprehension (Riloff and Thelen, 2000) , which tests text comprehension via a fill-in-the-blanks task."
                    },
                    {
                        "id": 49,
                        "string": "We similarly pre-process our samples such that they are roughly the same length, and have equal numbers of negative samples as positive ones."
                    },
                    {
                        "id": 50,
                        "string": "However, we avoid replacing the deleted words with a blank, so that our model has no clue regarding the exact position of the possibly missing trigger."
                    },
                    {
                        "id": 51,
                        "string": "Another related work on the Children's Book Test (Hill et al., 2015) notes that memories that encode sub-sentential chunks (windows) of informative text seem to be most useful to neural networks when interpreting and modelling language."
                    },
                    {
                        "id": 52,
                        "string": "Their finding inspires us to run initial experiments with different context windows and tune the size of chunks according to the Logistic Regression results on the development set."
                    },
                    {
                        "id": 53,
                        "string": "Attention In the context of encoder-decoder models, attention weights are usually based on an energy measure of the previous decoder hidden state and encoder hidden states."
                    },
                    {
                        "id": 54,
                        "string": "Many variations on attention computation exist."
                    },
                    {
                        "id": 55,
                        "string": "Sukhbaatar et al. (2015) propose an attention mechanism conditioned on a query and applied to a document."
                    },
                    {
                        "id": 57,
                        "string": "To generate summaries, Paulus et al. (2018) add an attention mechanism in the prediction layer, as opposed to the hidden states."
                    },
                    {
                        "id": 59,
                        "string": "Vaswani et al. (2017) suggest a model which learns an input representation by self-attending over inputs."
                    },
                    {
                        "id": 61,
                        "string": "While these methods are all tailored to their specific tasks, they all inspire our choice of a self-attending mechanism."
                    },
                    {
                        "id": 62,
                        "string": "3 Datasets Corpora We extract datasets from two corpora, namely the Penn Treebank (PTB) corpus (Marcus et al., 1993) and a subset (sections 000-760) of the third edition of the English Gigaword corpus (Graff et al., 2007) ."
                    },
                    {
                        "id": 63,
                        "string": "For the PTB dataset, we use sections 22 and 23 for testing."
                    },
                    {
                        "id": 64,
                        "string": "For the Gigaword corpus, we use sections 700-760 for testing."
                    },
                    {
                        "id": 65,
                        "string": "For the remaining data, we randomly chose 10% of them for development, and the other 90% for training."
                    },
                    {
                        "id": 66,
                        "string": "For each dataset, we consider a set of five target adverbs: too, again, also, still, and yet."
                    },
                    {
                        "id": 67,
                        "string": "We choose these five because they are commonly used adverbs that trigger presupposition."
                    },
                    {
                        "id": 68,
                        "string": "Since we are concerned with investigating the capacity of attentional deep neural networks in predicting the presuppositional effects in general, we frame the learning problem as a binary classification for predicting the presence of an adverbial presupposition (as opposed to the identity of the adverb)."
                    },
                    {
                        "id": 69,
                        "string": "On the Gigaword corpus, we consider each adverb separately, resulting in five binary classification tasks."
                    },
                    {
                        "id": 70,
                        "string": "This was not feasible for PTB because of its small size."
                    },
                    {
                        "id": 71,
                        "string": "Finally, because of the commonalities between the adverbs in presupposing similar events, we create a dataset that unifies all instances of the five adverbs found in the Gigaword corpus, with a label \"1\" indicating the presence of any of these adverbs."
                    },
                    {
                        "id": 72,
                        "string": "Data extraction process We define a sample in our dataset as a 3-tuple, consisting of a label (representing the target adverb, or 'none' for a negative sample), a list of tokens we extract (before/after the adverb), and a list of corresponding POS tags (Klein and Manning, 2002) ."
                    },
                    {
                        "id": 73,
                        "string": "In each sample, we also add a special token \"@@@@\" right before the head word and the corresponding POS tag of the head word, both in positive and negative cases."
                    },
                    {
                        "id": 74,
                        "string": "We add such special tokens to identify the candidate context in the passage to the model."
                    },
                    {
                        "id": 75,
                        "string": "Figure 1 shows a single positive sample in our dataset."
                    },
                    {
                        "id": 76,
                        "string": "We first extract positive contexts that contain a triggering adverb, then extract negative contexts that do not, controlling for a number of potential confounds."
                    },
                    {
                        "id": 77,
                        "string": "Our positive data consist of cases where the target adverb triggers presupposition by modifying a certain head word which, in most cases, is a verb."
                    },
                    {
                        "id": 78,
                        "string": "We define such head word as a governor of the target adverb."
                    },
                    {
                        "id": 79,
                        "string": "When extracting positive data, we scan through all the documents, searching for target adverbs."
                    },
                    {
                        "id": 80,
                        "string": "For each occurrence of a target adverb, we store the location and the governor of the adverb."
                    },
                    {
                        "id": 81,
                        "string": "Taking each occurrence of a governor as a pivot, we extract the 50 unlemmatized tokens preceding it, together with the tokens right after it up to the end of the sentence (where the adverb is)-with the adverb itself being removed."
                    },
                    {
                        "id": 82,
                        "string": "If there are less than 50 tokens before the adverb, we simply extract all of these tokens."
                    },
                    {
                        "id": 83,
                        "string": "In preliminary testing using a logistic regression classifier, we found that limiting the size to 50 tokens had higher accuracy than 25 or 100 tokens."
                    },
                    {
                        "id": 84,
                        "string": "As some head words themselves are stopwords, in the list of tokens, we do not remove any stopwords from the sample; otherwise, we would lose many important samples."
                    },
                    {
                        "id": 85,
                        "string": "We filter out the governors of \"too\" that have POS tags \"JJ\" and \"RB\" (adjectives and adverbs), because such cases corresponds to a different sense of \"too\" which indicates excess quantity and does not trigger presupposition (e.g., \"rely too heavily on\", \"it's too far from\")."
                    },
                    {
                        "id": 86,
                        "string": "After extracting the positive cases, we then use the governor information of positive cases to extract negative data."
                    },
                    {
                        "id": 87,
                        "string": "In particular, we extract sentences containing the same governors but not any of the target adverbs as negatives."
                    },
                    {
                        "id": 88,
                        "string": "In this way, models cannot rely on the identity of the governor alone to predict the class."
                    },
                    {
                        "id": 89,
                        "string": "This procedure also roughly balances the number of samples in the positive and negative classes."
                    },
                    {
                        "id": 90,
                        "string": "For each governor in a positive sample, we locate a corresponding context in the corpus where the governor occurs without being modified by any of the target adverbs."
                    },
                    {
                        "id": 91,
                        "string": "We then extract the surrounding tokens in the same fashion as above."
                    },
                    {
                        "id": 92,
                        "string": "Moreover, we try to control positionrelated confounding factors by two randomization approaches: 1) randomize the order of documents to be scanned, and 2) within each document, start scanning from a random location in the document."
                    },
                    {
                        "id": 93,
                        "string": "Note that the number of negative cases might not be exactly equal to the number of negative cases in all datasets because some governors appearing in positive cases are rare words, and we're unable to find any (or only few) occurrences that match them for the negative cases."
                    },
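                    {
                        "id": "editor-sketch-extraction",
                        "string": "Editor's sketch (not the authors' code): one hypothetical way to implement the positive-sample extraction described above, using spaCy for dependency heads and POS tags; the library choice, the generator interface, and placing '@@@@' in the tag list as well are assumptions.\n\nimport random\nimport spacy\n\nnlp = spacy.load(\"en_core_web_sm\")\nTRIGGERS = {\"too\", \"again\", \"also\", \"still\", \"yet\"}\n\ndef positive_samples(documents, window=50):\n    random.shuffle(documents)  # randomize the order of scanned documents\n    for text in documents:\n        doc = nlp(text)\n        for tok in doc:\n            if tok.text.lower() not in TRIGGERS:\n                continue\n            gov = tok.head  # governor of the adverb, usually a verb\n            start = max(0, gov.i - window)  # up to 50 tokens before the governor\n            span = [t for t in doc[start:tok.sent.end] if t.i != tok.i]  # drop the adverb\n            tokens, tags = [], []\n            for t in span:\n                if t.i == gov.i:  # special marker right before the head word\n                    tokens.append(\"@@@@\")\n                    tags.append(\"@@@@\")\n                tokens.append(t.text)\n                tags.append(t.tag_)\n            yield (tok.text.lower(), tokens, tags)\n\nNegative samples would then be gathered by matching the stored governors in contexts that contain none of the five adverbs."
                    },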
                    {
                        "id": 94,
                        "string": "Learning Model In this section, we introduce our attention-based model."
                    },
                    {
                        "id": 95,
                        "string": "At a high level, our model extends a bidirectional LSTM model by computing correlations between the hidden states at each timestep, then applying an attention mechanism over these correlations."
                    },
                    {
                        "id": 96,
                        "string": "Our proposed weighted-pooling (WP) neural network architecture is shown in Figure 2 ."
                    },
                    {
                        "id": 97,
                        "string": "The input sequence u = {u_1, u_2, ..., u_T} consists of a sequence, of time length T, of one-hot encoded word tokens, where the original tokens are those such as in Listing 1."
                    },
                    {
                        "id": 101,
                        "string": "Each token u t is embedded with pretrained embedding matrix W e ∈ R |V |×d , where |V | corresponds to the number of tokens in vocabulary V , and d defines the size of the word embeddings."
                    },
                    {
                        "id": 102,
                        "string": "The embedded token vector x t ∈ R d is retrieved simply with x t = u t W e ."
                    },
                    {
                        "id": 103,
                        "string": "Optionally, x t may also include the token's POS tag."
                    },
                    {
                        "id": 104,
                        "string": "In such instances, the embedded token at time step t is concatenated with the POS tag's one-hot encoding p t : x t = u t W e ||p t , where || denotes the vector concatenation operator."
                    },
                    {
                        "id": 105,
                        "string": "At each input time step t, a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) encodes x t into hidden state h t ∈ R s : h t = − → h t || ← − h t (1) where − → h t = f (x t , h t−1 ) is computed by the forward LSTM, and ← − h t = f (x t , h t+1 ) is computed by the backward LSTM."
                    },
                    {
                        "id": 106,
                        "string": "Concatenated vector h t is of size 2s, where s is a hyperparameter determining the size of the LSTM hidden states."
                    },
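                    {
                        "id": "editor-sketch-encoder",
                        "string": "A minimal PyTorch sketch of the input encoding in Eq. (1) (an editor's illustration; the paper does not name a framework, and the class and argument names are invented):\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass Encoder(nn.Module):\n    \"\"\"Embeds tokens with fixed pretrained embeddings, optionally concatenates\n    one-hot POS tags (x_t = u_t W_e || p_t), and encodes with a bi-LSTM.\"\"\"\n    def __init__(self, pretrained_emb, n_pos, hidden_size):\n        super().__init__()\n        self.embed = nn.Embedding.from_pretrained(pretrained_emb, freeze=True)\n        self.n_pos = n_pos\n        d = pretrained_emb.size(1) + n_pos\n        self.lstm = nn.LSTM(d, hidden_size, bidirectional=True, batch_first=True)\n\n    def forward(self, token_ids, pos_ids):\n        x = self.embed(token_ids)  # x_t = u_t W_e\n        pos = F.one_hot(pos_ids, self.n_pos).float()  # p_t\n        x = torch.cat([x, pos], dim=-1)  # x_t = u_t W_e || p_t\n        H, _ = self.lstm(x)  # (batch, T, 2s); h_t = forward state || backward state\n        return H"
                    },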
                    {
                        "id": 107,
                        "string": "Let matrix H ∈ R^{2s×T} correspond to the concatenation of all hidden state vectors: H = [h_1 || h_2 || ... || h_T]. (2)"
                    },
                    {
                        "id": 111,
                        "string": "(2) Our model uses a form of self-attention (Paulus et al., 2018; Vaswani et al., 2017) , where the input sequence acts as both the attention mechanism's query and key/value."
                    },
                    {
                        "id": 112,
                        "string": "Since the location of a presupposition trigger can greatly vary from one sample to another, and because dependencies can be long range or short range, we model all possible word-pair interactions within a sequence."
                    },
                    {
                        "id": 113,
                        "string": "We calculate the energy between all input tokens with a pair-wise matching matrix: M = H^T H (3), where M ∈ R^{T×T} is a square matrix."
                    },
                    {
                        "id": 114,
                        "string": "Figure 2 caption: Our weighted-pooling neural network architecture (WP). The tokenized input is embedded with pretrained word embeddings and possibly concatenated with one-hot encoded POS tags. The input is then encoded with a bi-directional LSTM, followed by our attention mechanism. The computed attention scores are then used as weights to average the encoded states, in turn connected to a fully connected layer to predict presupposition triggering."
                    },
                    {
                        "id": 118,
                        "string": "To get a single attention weight per time step, we adopt the attention-over-attention method (Cui et al., 2017) ."
                    },
                    {
                        "id": 119,
                        "string": "With matrix M , we first compute row-wise attention score M r ij over M : M r ij = exp(e ij ) T t=1 exp(e it ) (4) where e ij = M ij ."
                    },
                    {
                        "id": 120,
                        "string": "M r can be interpreted as a word-level attention distribution over all other words."
                    },
                    {
                        "id": 121,
                        "string": "Since we would like a single weight per word, we need an additional step to aggregate these attention scores."
                    },
                    {
                        "id": 122,
                        "string": "Instead of simply averaging the scores, we follow (Cui et al., 2017) 's approach which learns the aggregation by an additional attention mechanism."
                    },
                    {
                        "id": 123,
                        "string": "We compute columnwise softmax M c ij over M : M c ij = exp(e ij ) T t=1 exp(e tj ) (5) The columns of M r are then averaged, forming vector β ∈ R T ."
                    },
                    {
                        "id": 124,
                        "string": "Finally, β is multiplied with the column-wise softmax matrix M^c to get attention vector α: α = M^c β. (6)"
                    },
                    {
                        "id": 125,
                        "string": "Note that Equations (2) to (6) have described how we derived an attention score over our input without the introduction of any new parameters, potentially minimizing the computational effect of our attention mechanism."
                    },
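                    {
                        "id": "editor-sketch-attention",
                        "string": "A numpy sketch of the parameter-free attention computation in Eqs. (3)-(6), assuming H is the 2s-by-T matrix of Eq. (2) (editor's illustration, not the authors' code):\n\nimport numpy as np\n\ndef softmax(e, axis):\n    e = np.exp(e - e.max(axis=axis, keepdims=True))\n    return e / e.sum(axis=axis, keepdims=True)\n\ndef attention_weights(H):\n    \"\"\"H: (2s, T) concatenated hidden states; returns alpha of shape (T,).\"\"\"\n    M = H.T @ H               # Eq. (3): pairwise matching matrix, (T, T)\n    M_r = softmax(M, axis=1)  # Eq. (4): row-wise softmax\n    M_c = softmax(M, axis=0)  # Eq. (5): column-wise softmax\n    beta = M_r.mean(axis=0)   # columns of M_r averaged -> beta in R^T\n    return M_c @ beta         # Eq. (6): alpha = M^c beta\n\nNote that, matching the claim in the text, no trainable parameters appear anywhere in this computation."
                    },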
                    {
                        "id": 126,
                        "string": "As a last layer to their neural network, Cui et al. (2017) sum over α to extract the most relevant input."
                    },
                    {
                        "id": 128,
                        "string": "However, we use α as weights to combine all of our hidden states h t : c = T t=1 α t h t (7) where c ∈ R s ."
                    },
                    {
                        "id": 129,
                        "string": "We follow the pooling with a dense layer z = σ(W_z c + b_z), where σ is a non-linear function, and matrix W_z ∈ R^{64×s} and vector b_z ∈ R^{64} are learned parameters."
                    },
                    {
                        "id": 130,
                        "string": "The presupposition trigger probability is computed with an affine transform followed by a softmax: y = softmax(W_o z + b_o) (8), where matrix W_o ∈ R^{2×64} and vector b_o ∈ R^{2} are learned parameters."
                    },
                    {
                        "id": 131,
                        "string": "The training objective minimizes J(θ) = (1/m) Σ_{t=1}^{m} E(ŷ, y) (9), where E(·, ·) is the standard cross-entropy."
                    },
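                    {
                        "id": "editor-sketch-prediction",
                        "string": "Continuing the numpy sketch for Eqs. (7)-(8) (editor's illustration; the non-linearity σ is unspecified in the text, so tanh is an assumption, and the dense-layer input dimension is written as 2s to match the concatenated bi-LSTM states, where the text writes s):\n\nimport numpy as np\n\ndef predict(H, alpha, W_z, b_z, W_o, b_o):\n    \"\"\"Weighted pooling followed by a dense layer and a 2-way softmax.\n    Shapes: H (2s, T), alpha (T,), W_z (64, 2s), b_z (64,), W_o (2, 64), b_o (2,).\"\"\"\n    c = H @ alpha               # Eq. (7): c = sum_t alpha_t h_t\n    z = np.tanh(W_z @ c + b_z)  # dense layer z = sigma(W_z c + b_z)\n    logits = W_o @ z + b_o\n    y = np.exp(logits - logits.max())\n    return y / y.sum()          # Eq. (8): softmax over the two classes\n\nEq. (9) is then the average cross-entropy between such outputs and the gold labels over a mini-batch."
                    },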
                    {
                        "id": 132,
                        "string": "Experiments We compare the performance of our WP model against several models which we describe in this section."
                    },
                    {
                        "id": 133,
                        "string": "We carry out the experiments on both datasets described in Section 3."
                    },
                    {
                        "id": 134,
                        "string": "We also investigate the impact of POS tags and attention mechanism on the models' prediction accuracy."
                    },
                    {
                        "id": 135,
                        "string": "Baselines We compare our learning model against the following systems."
                    },
                    {
                        "id": 136,
                        "string": "The first is the most-frequentclass baseline (MFC) which simply labels all samples with the most frequent class of 1."
                    },
                    {
                        "id": 137,
                        "string": "The second is a logistic regression classifier (LogReg), in which the probabilities describing the possible outcomes of a single input x is modeled using a logistic function."
                    },
                    {
                        "id": 138,
                        "string": "We implement this baseline classifier with the scikit-learn package (Pedregosa et al., 2011) , with a CountVectorizer including bi-gram features."
                    },
                    {
                        "id": 139,
                        "string": "All of the other hyperparameters are set to default weights."
                    },
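                    {
                        "id": "editor-sketch-logreg",
                        "string": "A scikit-learn sketch of the LogReg baseline as described (editor's illustration; ngram_range=(1, 2) is an assumption about \"including bi-gram features\", and all other hyperparameters stay at their defaults):\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\nbaseline = make_pipeline(\n    CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram counts\n    LogisticRegression(),\n)\n# baseline.fit([\" \".join(tokens) for _, tokens, _ in samples], labels)"
                    },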
                    {
                        "id": 140,
                        "string": "The third is a variant LSTM recurrent neural network as introduced in (Graves, 2013) ."
                    },
                    {
                        "id": 141,
                        "string": "The input is encoded by a bidirectional LSTM like the WP model detailed in Section 4."
                    },
                    {
                        "id": 142,
                        "string": "Instead of a self-attention mechanism, we simply mean-pool matrix H, the concatenation of all LSTM hidden states, across all time steps."
                    },
                    {
                        "id": 143,
                        "string": "This is followed by a fully connected layer and softmax function for the binary classification."
                    },
                    {
                        "id": 144,
                        "string": "Our WP model uses the same bidirectional LSTM as this baseline LSTM, and has the same number of parameters, allowing for a fair comparison of the two models."
                    },
                    {
                        "id": 145,
                        "string": "Such a standard LSTM model represents a state-of-the-art language model, as it outperforms more recent models on language modeling tasks when the number of model parameters is controlled for (Melis et al., 2017) ."
                    },
                    {
                        "id": 146,
                        "string": "For the last model, we use a slight variant of the CNN sentence classification model of (Kim, 2014) based on the Britz tensorflow implementation 2 ."
                    },
                    {
                        "id": 147,
                        "string": "Hyperparameters & Additional Features After tuning, we found the following hyperparameters to work best: 64 units in fully connected layers and 40 units for POS embeddings."
                    },
                    {
                        "id": 148,
                        "string": "We used dropout with probability 0.5 and mini-batch size of 64."
                    },
                    {
                        "id": 149,
                        "string": "For all models, we initialize word embeddings with word2vec  pretrained embeddings of size 300."
                    },
                    {
                        "id": 150,
                        "string": "Unknown words are randomly initialized to the same size as the word2vec embeddings."
                    },
                    {
                        "id": 151,
                        "string": "In early tests on the development datasets, we found that our neural networks would consistently perform better when fixing the word embeddings."
                    },
                    {
                        "id": 152,
                        "string": "All neural network performance reported in this paper use fixed embeddings."
                    },
                    {
                        "id": 153,
                        "string": "Fully connected layers in the LSTM, CNN and WP model are regularized with dropout (Srivastava et al., 2014) ."
                    },
                    {
                        "id": 154,
                        "string": "The model parameters for these neural networks are fine-tuned with the Adam algorithm (Kingma and Ba, 2015) ."
                    },
                    {
                        "id": 155,
                        "string": "To stabilize the RNN training gradients (Pascanu et al., 2013) , we perform gradient clipping for gradients below threshold value -1, or above 1."
                    },
                    {
                        "id": 156,
                        "string": "To reduce overfitting, we stop training if the development set does not improve in accuracy for 10 epochs."
                    },
                    {
                        "id": 157,
                        "string": "All performance on the test set is reported using the best trained model as measured on the development set."
                    },
                    {
                        "id": 158,
                        "string": "In addition, we use the CoreNLP Part-of- Table 2 : Performance of various models, including our weighted-pooled LSTM (WP)."
                    },
                    {
                        "id": 159,
                        "string": "MFC refers to the most-frequent-class baseline, LogReg is the logistic regression baseline."
                    },
                    {
                        "id": 160,
                        "string": "LSTM and CNN correspond to strong neural network baselines."
                    },
                    {
                        "id": 161,
                        "string": "Note that we bold the performance numbers for the best performing model for each of the \"+ POS\" case and the \"-POS\" case."
                    },
                    {
                        "id": 162,
                        "string": "Speech (POS) tagger (Manning et al., 2014) to get corresponding POS features for extracted tokens."
                    },
                    {
                        "id": 163,
                        "string": "In all of our models, we limit the maximum length of samples and POS tags to 60 tokens."
                    },
                    {
                        "id": 164,
                        "string": "For the CNN, sequences shorter than 60 tokens are zeropadded."
                    },
                    {
                        "id": 165,
                        "string": "Table 3 shows the confusion matrix for the best performing model (WP,+POS)."
                    },
                    {
                        "id": 166,
                        "string": "The small differences in the off-diagonal entries inform us that the model misclassifications are not particularly skewed towards the presence or absence of pre-supposition triggers."
                    },
                    {
                        "id": 167,
                        "string": "Results Predicted Actual Absence Presence Absence 54,658 11,961 Presence 11,776 55,006 Analysis Consider the following pair of samples that we randomly choose from the PTB dataset (shortened for readability): 1."
                    },
                    {
                        "id": 168,
                        "string": "...Taped just as the market closed yesterday , it offers Ms. Farrell advising , \" We view the market here as going through a relatively normal cycle ... ."
                    },
                    {
                        "id": 169,
                        "string": "We continue to feel that the stock market is the @@@@ place to be for long-term appreciation 2."
                    },
                    {
                        "id": 170,
                        "string": "...More people are remaining independent longer presumably because they are better off physically and financially ."
                    },
                    {
                        "id": 171,
                        "string": "Careers count most for the well-to-do many affluent people @@@@ place personal success and money above family In both cases, the head word is place."
                    },
                    {
                        "id": 172,
                        "string": "In Example 1, the word continue (emphasized in the above text) suggests that adverb still could be used to modify head word place (i.e., ... the stock market is still the place ...)."
                    },
                    {
                        "id": 173,
                        "string": "Further, it is also easy to see that place refers to stock market, which has occurred in the previous context."
                    },
                    {
                        "id": 174,
                        "string": "Our model correctly predicts this sample as containing a presupposition, this despite the complexity of the coreference across the text."
                    },
                    {
                        "id": 175,
                        "string": "In the second case of the usage of the same main head word place in Example 2, our model falsely predicts the presence of a presupposition."
                    },
                    {
                        "id": 176,
                        "string": "However, even a human could read the sentence as \"many people still place personal success and money above family\"."
                    },
                    {
                        "id": 177,
                        "string": "This underlies the subtlety and difficulty of the task at hand."
                    },
                    {
                        "id": 178,
                        "string": "The longrange dependencies and interactions within sentences seen in these examples are what motivate the use of the various deep non-linear models presented in this work, which are useful in detecting these coreferences, particularly in the case of attention mechanisms."
                    },
                    {
                        "id": 179,
                        "string": "Conclusion In this work, we have investigated the task of predicting adverbial presupposition triggers and introduced several datasets for the task."
                    },
                    {
                        "id": 180,
                        "string": "Additionally, we have presented a novel weighted-pooling attention mechanism which is incorporated into a recurrent neural network model for predicting the presence of an adverbial presuppositional trigger."
                    },
                    {
                        "id": 181,
                        "string": "Our results show that the model outperforms the CNN and LSTM, and does not add any additional parameters over the standard LSTM model."
                    },
                    {
                        "id": 182,
                        "string": "This shows its promise in classification tasks involving capturing and combining relevant information from multiple points in the previous context."
                    },
                    {
                        "id": 183,
                        "string": "In future work, we would like to focus more on designing models that can deal with and be optimized for scenarios with severe data imbalance."
                    },
                    {
                        "id": 184,
                        "string": "We would like to also explore various applications of presupposition trigger prediction in language generation applications, as well as additional attention-based neural network architectures."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 39
                    },
                    {
                        "section": "Presupposition and pragmatic reasoning",
                        "n": "2.1",
                        "start": 40,
                        "end": 52
                    },
                    {
                        "section": "Attention",
                        "n": "2.2",
                        "start": 53,
                        "end": 61
                    },
                    {
                        "section": "Corpora",
                        "n": "3.1",
                        "start": 62,
                        "end": 71
                    },
                    {
                        "section": "Data extraction process",
                        "n": "3.2",
                        "start": 72,
                        "end": 93
                    },
                    {
                        "section": "Learning Model",
                        "n": "4",
                        "start": 94,
                        "end": 131
                    },
                    {
                        "section": "Experiments",
                        "n": "5",
                        "start": 132,
                        "end": 134
                    },
                    {
                        "section": "Baselines",
                        "n": "5.1",
                        "start": 135,
                        "end": 146
                    },
                    {
                        "section": "Hyperparameters & Additional Features",
                        "n": "5.2",
                        "start": 147,
                        "end": 166
                    },
                    {
                        "section": "Analysis",
                        "n": "7",
                        "start": 167,
                        "end": 178
                    },
                    {
                        "section": "Conclusion",
                        "n": "8",
                        "start": 179,
                        "end": 184
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1121-Figure1-1.png",
                        "caption": "Figure 1: An example of an instance containing a presuppositional trigger from our dataset.",
                        "page": 2,
                        "bbox": {
                            "x1": 302.88,
                            "x2": 529.4399999999999,
                            "y1": 62.879999999999995,
                            "y2": 169.92
                        }
                    },
                    {
                        "filename": "../figure/image/1121-Table1-1.png",
                        "caption": "Table 1: Number of training samples in each dataset.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 61.44,
                            "y2": 186.23999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1121-Figure2-1.png",
                        "caption": "Figure 2: Our weighted-pooling neural network architecture (WP). The tokenized input is embedded with pretrained word embeddings and possibly concatenated with one-hot encoded POS tags. The input is then encoded with a bi-directional LSTM, followed by our attention mechanism. The computed attention scores are then used as weights to average the encoded states, in turn connected to a fully connected layer to predict presupposition triggering.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 525.12,
                            "y1": 217.44,
                            "y2": 464.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1121-Table4-1.png",
                        "caption": "Table 4: Contingency table for correct (cor.) and incorrect (inc.) predictions between the LSTM baseline and the attention model (WP) on the Giga_also dataset.",
                        "page": 6,
                        "bbox": {
                            "x1": 331.68,
                            "x2": 501.12,
                            "y1": 500.64,
                            "y2": 543.36
                        }
                    },
                    {
                        "filename": "../figure/image/1121-Table2-1.png",
                        "caption": "Table 2: Performance of various models, including our weighted-pooled LSTM (WP). MFC refers to the most-frequent-class baseline, LogReg is the logistic regression baseline. LSTM and CNN correspond to strong neural network baselines. Note that we bold the performance numbers for the best performing model for each of the “+ POS” case and the “- POS” case.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 63.839999999999996,
                            "y2": 272.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1121-Table3-1.png",
                        "caption": "Table 3: Confusion matrix for the best performing model, predicting the presence of a presupposition trigger or the absence of such as trigger.",
                        "page": 6,
                        "bbox": {
                            "x1": 337.91999999999996,
                            "x2": 501.12,
                            "y1": 374.88,
                            "y2": 431.03999999999996
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-37"
        },
        {
            "slides": {
                "0": {
                    "title": "Extractive Summarization",
                    "text": [
                        "Select salient sentences from input document to create a summary",
                        "INPUT Document with sentences S1, S2,.., Sn"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Our Contribution",
                    "text": [
                        "A Deep Learning Architecture for training an extractive summarizer: SWAP-NET",
                        "Unlike previous methods, SWAP-NET uses keywords for sentence selection",
                        "Predicts both important words and sentences in document",
                        "Two-level Encoder-Decoder Attention model",
                        "Outperform state of the art extractive summarisers.",
                        "INPUT Document with sentences S1, S2,.., Sn"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Extractive Summarization Methods",
                    "text": [
                        "Pre-trained word embeddings Word Encodings wrt other words Sentence Encoding wrt words in it Sentence Encodings wrt other sentences Document Encoding wrt its sentences",
                        "Sentence encodings wrt other sentences",
                        "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of docments. In Association for the Advancement of Artificial Intelligence, pages 30753081. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. 54th Annual Meeting of the Association for Computational Linguistics.",
                        "SummaRuNNer (Nallapati et al., 2017)",
                        "Both assume saliency of sentence s depends on salient sentences appearing before s",
                        "Word Label Prediction (with decoder)",
                        "Sentence Label Prediction (with decoder)"
                    ],
                    "page_nums": [
                        3,
                        4,
                        5,
                        6,
                        44
                    ],
                    "images": []
                },
                "3": {
                    "title": "Intuition Behind Approach",
                    "text": [
                        "Question: Which sentence should be considered salient (part of summary)?",
                        "Our hypothesis: saliency of a sentence depends on both salient sentences and words appearing before that sentence in the document",
                        "Similar to graph based models by Wan et al. (2007)",
                        "Along with labelling sentences we also label words to determine their saliency",
                        "Moreover, saliency of a word depends on previous salient words and sentences",
                        "Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2007. Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction. In Proceedings of the 45th annual meeting of the association of computational linguistics, pages 552559.",
                        "Three types of Interactions:"
                    ],
                    "page_nums": [
                        7,
                        8
                    ],
                    "images": []
                },
                "4": {
                    "title": "Intuition Interaction Between Sentences",
                    "text": [
                        "A sentence should be salient if it is heavily linked with other salient sentences"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "5": {
                    "title": "Intuition Interaction Between Words",
                    "text": [
                        "A word should be salient if it is heavily linked with other salient words"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "6": {
                    "title": "Intuition Words and Sentences Interaction",
                    "text": [
                        "A sentence should be salient if it contains many salient words",
                        "A word should be salient if it appears in many salient sentences",
                        "Generate extractive summary using both important words and sentences",
                        "Word-Word Important Sentences: S3 Important Words: V2, V3"
                    ],
                    "page_nums": [
                        11,
                        12
                    ],
                    "images": []
                },
                "7": {
                    "title": "Keyword Extraction and Sentence Extraction",
                    "text": [
                        "Sentence to Sentence Interaction as Sentence Extraction",
                        "Word to Word Interaction as Word Extraction",
                        "For discrete sequences, pointer networks have been successfully used to learn how to select positions from an input sequence",
                        "We use two pointer networks one at word-level and another at sentence-level"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "8": {
                    "title": "Pointer Network",
                    "text": [
                        "Encoder-Decoder architecture with Attention",
                        "Attention mechanism is used to select one of the inputs at each decoding step",
                        "Thus, effectively pointing to an input"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "10": {
                    "title": "Three Interactions SWAP NET",
                    "text": [
                        "A Mechanism to Combine",
                        "Word Level Attentions and"
                    ],
                    "page_nums": [
                        16,
                        18,
                        21
                    ],
                    "images": []
                },
                "11": {
                    "title": "Questions",
                    "text": [
                        "Q1 : How can the two attentions be combined?",
                        "Q2 : How can the summaries be generated considering both the attentions?",
                        "A Mechanism to Combine",
                        "Word Level Attentions and"
                    ],
                    "page_nums": [
                        17,
                        41
                    ],
                    "images": []
                },
                "12": {
                    "title": "SWAP NET Architecture Word Level Pointer Network",
                    "text": [
                        "Similar to Pointer Network,",
                        "The word encoder is bi-directional LSTM",
                        "Word-level decoder learns to point to important words",
                        "E W E W E W E W E W D W D W D W",
                        "Purple line: attention vector given as input to each decoding step",
                        "Sum of word encodings weighted by attention probabilities generated in previous step",
                        "Probability of word i, at decoding step j"
                    ],
                    "page_nums": [
                        19,
                        20
                    ],
                    "images": []
                },
                "13": {
                    "title": "SWAP NET Architecture",
                    "text": [
                        "Sentence-Level Hierarchical Pointer Network",
                        "Sentence is represented by encoding of last word of that sentence",
                        "Word Encoder E W E W E W E W E W D W D W D W",
                        "Attention vectors are sum of sentence encodings weighted by attention probabilities by previous decoding step",
                        "Probability of sentence k, at decoding step j"
                    ],
                    "page_nums": [
                        22,
                        23
                    ],
                    "images": []
                },
                "14": {
                    "title": "Combining Sentence Attention and Word Attention",
                    "text": [
                        "Q1 : How can the two attentions be combined?",
                        "A document with three sentences and corresponding words is shown"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "15": {
                    "title": "Sentence and Word Interactions",
                    "text": [
                        "Step 1: Hold sentence processing. Then group all words and determine their saliency sequentially",
                        "Step 2: Using output of step 1, i.e., using keywords, process sentences to determine salient sentences",
                        "INCOMPLETE SOLUTION This methods processes sentence depending on words but does not use sentences for processing words.",
                        "Group each sentence and its words separately and process them sequentially",
                        "Hold sentence processing. Determine saliency of words in S1",
                        "Using information about saliency of words in S2 and saliency of previous sentence S1",
                        "Hold word processing and resume sentence processing.",
                        "Using information about saliency of both S1 and its words",
                        "Hold sentence processing and resume word processing.",
                        "Determine saliency of words in next sentence S2",
                        "Determine saliency of sentence S2",
                        "This methods ensures that saliency of word and sentence is determined from previously predicted both salient sentences and words",
                        "Using previously predicted salient word and sentences",
                        "Synchronising Decoding Steps: Decide when to turn off and on word processing and sentence processing to synchronise word and sentence prediction",
                        "Sharing Attention Vectors: Determine salient words and sentences"
                    ],
                    "page_nums": [
                        25,
                        26,
                        27,
                        28,
                        29,
                        30,
                        31,
                        32,
                        33
                    ],
                    "images": []
                },
                "17": {
                    "title": "SWAP NET Switch Mechanism",
                    "text": [
                        "Sharing both attention vectors (purple and orange lines) between the two decoder",
                        "Synchronising decoding steps of the two decoders by allowing only one decoder output at a step",
                        "Feedforward Netw ork Switch Probability",
                        "E W E W E W E W E W D W D W D W Word Decoder Hidden State",
                        "Output is selected with maximum of final word and sentence probabilities",
                        "Final Word Probabilities E W E W E W E W E W D W D W D W"
                    ],
                    "page_nums": [
                        35,
                        36
                    ],
                    "images": []
                },
                "18": {
                    "title": "Prediction with SWAP NET Encoding",
                    "text": [
                        "E W E W E W E W E W",
                        "Input Document w1 w2 w3 w4 w5"
                    ],
                    "page_nums": [
                        37
                    ],
                    "images": []
                },
                "19": {
                    "title": "Prediction with SWAP NET Decoding Step",
                    "text": [
                        "Switch has two states,",
                        "Q = 0 : word selection and",
                        "Q = 1 : sentence selection",
                        "Q=0 E W E W E W E W E W D W D W D W"
                    ],
                    "page_nums": [
                        38,
                        39,
                        40
                    ],
                    "images": []
                },
                "20": {
                    "title": "Summary Generation",
                    "text": [
                        "House prices across the UK will rise at a fraction of last years frenetic pace, forecasts show.",
                        "prices rise fraction frenetic pace forecasts show",
                        "Probability KeyWord P1 P2 P3 P4 P5 P6 P7",
                        "Score of Given Sentence = (Sentence Probability) + (Sum of its keyword Probabilities)",
                        "k = Ps + Pi where k is number of keywords in sentence S i=1",
                        "Top 3 sentences with maximum scores are chosen as summary"
                    ],
                    "page_nums": [
                        42,
                        43
                    ],
                    "images": []
                },
                "21": {
                    "title": "Dataset and Evaluation",
                    "text": [
                        "Large Benchmark Dataset CNN/DailyMail News Corpus",
                        "News articles from CNN/DailyMail along with human generated summary (gold summary) for each article",
                        "GroundTruth Binary Labels For Training",
                        "Sentences: Anonymised version of dataset given by (Cheng and Lapata, 2016)",
                        "Words: Extract keywords from each gold summary using RAKE",
                        "Dataset Training Validation Test",
                        "Standard Evaluation Metric: Three Variates of Rouge Score",
                        "Comparing generated summaries and gold summaries for matching:",
                        "ROUGE-L (RL): Longest Common Subsequences Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic key word extraction from individual documents. Text Mining: Applications and Theory."
                    ],
                    "page_nums": [
                        45
                    ],
                    "images": []
                },
                "23": {
                    "title": "Example",
                    "text": [
                        "Meet the four immigrant students each accepted to ALL EIGHT Ivy League schools who want to pay back their parents who moved to the U.S. to give them a better",
                        "Their parents came to the U.S. for opportunities and now these four teens have them in abundance .",
                        "The high-achieving high schoolers have each been accepted to all eight Ivy League schools : Brown University , Columbia University ,",
                        "Cornell University , Dartmouth College , Harvard University , University of Pennsylvania , Princeton University and Yale University .",
                        "And as well as the Ivy League colleges , each of them has also been accepted to other top schools . While they all grew up in different cities , the students are the offspring of immigrant parents who moved to America - from Bulgaria , Somalia or Nigeria . Summary Generated",
                        "Munira_Khalif from Minnesota , Stefan_Stoykov from Indiana , Victor_Agbafe from North_Carolina , and Harold_Ekeh from New_York got multiple offers All have immigrant parents - from Somalia , Bulgaria or Nigeria - and say they have their parents ' hard work to thank for their successes They hope to use the opportunities for good , from improving education across the world to becoming neurosurgeons"
                    ],
                    "page_nums": [
                        48
                    ],
                    "images": []
                },
                "24": {
                    "title": "SWAP NET Predicted Keywords",
                    "text": [
                        "Summary Generated by SWAP-NET",
                        "While they all grew up in different cities , the students are the offspring of immigrant parents who moved to America - from Bulgaria , Somalia or Nigeria",
                        "And all four - Munira_Khalif from Minnesota , Stefan_Stoykov from Indiana , Victor_Agbafe from North_Carolina , and Harold_Ekeh from New_York - say they have their parents ' hard work to thank .",
                        "Now they hope to use the opportunities for good - whether its effecting positive social change , improving education across the world or becoming a neurosurgeon",
                        "SWAP-NET predictions highlighted in green"
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": []
                },
                "25": {
                    "title": "Keywords Ground truth vs SWAP NET predictions",
                    "text": [
                        "SWAP-NET key words (green) and Ground truth (blue)",
                        "While they all grew up in different cities , the students are the offspring of immigrant parents who moved to America - from Bulgaria , Somalia or Nigeria",
                        "And all four - Munira_Khalif from Minnesota Stefan_Stoykov from Indiana , Victor_Agbafe from North_Carolina , and Harold_Ekeh from New_York - say they have their parents hard work to thank . Now they hope to use the opportunities for good - whether its effecting positive social change , improving education across the world or becoming a neurosurgeon",
                        "Now they hope to use the opportunities for good - whether its effecting positive social change , improving education across the world or becoming a neurosurgeon",
                        "Munira_Khalif from Minnesota Stefan_Stoykov from Indiana Victor_Agbafe from North_Carolina , and Harold_Ekeh from New_York got multiple offers All have immigrant parents - from Somalia Bulgaria or Nigeria - and say they have their parents hard work to thank for their successes They hope to use the opportunities for good , from improving education across the world to becoming neurosurgeons"
                    ],
                    "page_nums": [
                        50
                    ],
                    "images": []
                },
                "26": {
                    "title": "Observations",
                    "text": [
                        "Almost no keyword is repeated across different sentence in the summary",
                        "Summary Generated by SWAP-NET:",
                        "Presence of key words in all the overlapping segments of text with the gold summary",
                        "While they all grew up in different cities , the students are the offspring of immigrant parents who moved to America - from Bulgaria Somalia or Nigeria And all four - Munira_Khalif from Minnesota Stefan_Stoykov from Indiana , Victor_Agbafe from North_Carolina , and Harold_Ekeh from New_York - say they have their parents hard work to thank . Now they hope to use the opportunities for good - whether its effecting positive social change , improving education across the world or becoming a neurosurgeon",
                        "Bulgaria , Somalia or Nigeria",
                        "And all four - Munira_Khalif from Minnesota ,",
                        "Munira_Khalif from Minnesota Stefan_Stoykov from Indiana Victor_Agbafe from North_Carolina , and Harold_Ekeh from New_York got multiple offers All have immigrant parents - from Somalia Bulgaria or Nigeria - and say they have their parents hard work to thank for their successes They hope to use the opportunities for good , from improving education across the world to becoming neurosurgeons",
                        "North_Carolina , and Harold_Ekeh from New_York - say they have their parents ' hard work to thank .",
                        "Now they hope to use the opportunities for good - whether its effecting positive social change , improving education across the world or becoming a neurosurgeon",
                        "Most of the predicted keywords are actual keywords",
                        "Most of the extracted summary sentences contain keywords",
                        "Gold Summary: Large proportion of key words from the",
                        "gold summary present in the generated summary"
                    ],
                    "page_nums": [
                        51
                    ],
                    "images": []
                }
            },
            "paper_title": "Reinforced Extractive Summarization with Question-Focused Rewards",
            "paper_id": "1123",
            "paper": {
                "title": "Reinforced Extractive Summarization with Question-Focused Rewards",
                "abstract": "We investigate a new training paradigm for extractive summarization. Traditionally, human abstracts are used to derive goldstandard labels for extraction units. However, the labels are often inaccurate, because human abstracts and source documents cannot be easily aligned at the word level. In this paper we convert human abstracts to a set of Cloze-style comprehension questions. System summaries are encouraged to preserve salient source content useful for answering questions and share common words with the abstracts. We use reinforcement learning to explore the space of possible extractive summaries and introduce a question-focused reward function to promote concise, fluent, and informative summaries. Our experiments show that the proposed method is effective. It surpasses state-of-the-art systems on the standard summarization dataset. Source Document The first doses of the Ebola vaccine were on a commercial flight to West Africa and were expected to arrive on Friday, according to a spokesperson from GlaxoSmithKline (GSK) one of the companies that has created the vaccine with the National Institutes of Health. Another vaccine from Merck and NewLink will also be tested. \"Shipping the vaccine today is a major achievement and shows that we remain on track with the accelerated development of our candidate Ebola vaccine,\" Dr. Moncef Slaoui, chairman of global vaccines at GSK said in a company release. (Rest omitted.) Abstract The first vials of an Ebola vaccine should land in Liberia Friday Questions Q: The first vials of an vaccine should land in Liberia Friday Q: The first vials of an Ebola vaccine should in Liberia Friday Q: The first vials of an Ebola vaccine should land in Friday",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction We study extractive summarization in this work where salient word sequences are extracted from the source document and concatenated to form a summary (Nenkova and McKeown, 2011) ."
                    },
                    {
                        "id": 1,
                        "string": "Existing supervised approaches to extractive summarization frequently use human abstracts to create annotations for extraction units (Gillick and Favre, 2009; Li et al., 2013; Cheng and Lapata, 2016) ."
                    },
                    {
                        "id": 2,
                        "string": "E.g., a source word is labelled 1 if it appears in the abstract, 0 otherwise."
                    },
                    {
                        "id": 3,
                        "string": "Despite the usefulness, there are two issues with this scheme."
                    },
                    {
                        "id": 4,
                        "string": "First, a vast majority of the source words are tagged 0s, only a small portion are 1s."
                    },
                    {
                        "id": 5,
                        "string": "This is due to the fact that human abstracts are short and concise; they often contain words not present in the source."
                    },
                    {
                        "id": 6,
                        "string": "Second, Table 1 : Example source document, the top sentence of the abstract, and system-generated Cloze-style questions."
                    },
                    {
                        "id": 7,
                        "string": "Source content related to the abstract is italicized."
                    },
                    {
                        "id": 8,
                        "string": "not all labels are accurate."
                    },
                    {
                        "id": 9,
                        "string": "Source words that are labelled 0 may be paraphrases, generalizations, or otherwise related to words in the abstracts."
                    },
                    {
                        "id": 10,
                        "string": "These source words are often mislabelled."
                    },
                    {
                        "id": 11,
                        "string": "Consequently, leveraging human abstracts to provide supervision for extractive summarization remains a challenge."
                    },
                    {
                        "id": 12,
                        "string": "Neural abstractive summarization can alleviate this issue by allowing the system to either copy words from the source texts or generate new words from a vocabulary (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) ."
                    },
                    {
                        "id": 13,
                        "string": "While the techniques are promising, they face other challenges, such as ensuring the summaries remain faithful to the original."
                    },
                    {
                        "id": 14,
                        "string": "Failing to reproduce factual details has been revealed as one of the main obstacles for neural abstractive summarization (Cao et al., 2018; Song and Liu, 2018) ."
                    },
                    {
                        "id": 15,
                        "string": "This study thus chooses to focus on neural extractive summarization."
                    },
                    {
                        "id": 16,
                        "string": "We explore a new training paradigm for extractive summarization."
                    },
                    {
                        "id": 17,
                        "string": "We convert human abstracts to a set of Cloze-style comprehension questions, where the question body is a sentence of the abstract with a blank, and the answer is an entity or a keyword."
                    },
                    {
                        "id": 18,
                        "string": "Table 1 shows an example."
                    },
                    {
                        "id": 19,
                        "string": "Because the questions cannot be answered by applying general world knowledge, system summaries are encouraged to preserve salient source content that is relevant to the questions (≈ human abstract) such that the summaries can work as a document surrogate to predict correct answers."
                    },
                    {
                        "id": 20,
                        "string": "We use an attention mechanism to locate segments of a summary that are relevant to a given question so that the summary can be used to answer multiple questions."
                    },
                    {
                        "id": 21,
                        "string": "This study extends the work of (Lei et al., 2016) to use reinforcement learning to explore the space of extractive summaries."
                    },
                    {
                        "id": 22,
                        "string": "While the original work focuses on generating rationales to support supervised classification, the goal of our study is to produce fluent, generic document summaries."
                    },
                    {
                        "id": 23,
                        "string": "The question-answering (QA) task is designed to fulfill this goal and the QA performance is only secondary."
                    },
                    {
                        "id": 24,
                        "string": "Our research contributions can be summarized as follows: • we investigate an alternative training scheme for extractive summarization where the summaries are encouraged to be semantically close to human abstracts in addition to sharing common words; • we compare two methods to convert human abstracts to Cloze-style questions and investigate its impact on QA and summarization performance."
                    },
                    {
                        "id": 25,
                        "string": "Our results surpass those of previous systems on a standard summarization dataset."
                    },
                    {
                        "id": 26,
                        "string": "Related Work This study focuses on generic summarization."
                    },
                    {
                        "id": 27,
                        "string": "It is different from the query-based summarization (Daumé III and Marcu, 2006; Dang and Owczarzak, 2008) , where systems are trained to select text pieces related to predefined queries."
                    },
                    {
                        "id": 28,
                        "string": "In this work we have no predefined queries but the system carefully generates questions from human abstracts and learns to produce generic summaries that are capable of answering all questions."
                    },
                    {
                        "id": 29,
                        "string": "Cloze questions have been used in reading comprehension (Richardson et al., 2013; Weston et al., 2016; Mostafazadeh et al., 2016; Rajpurkar et al., 2016) to test the system's ability to perform reasoning and language understanding."
                    },
                    {
                        "id": 30,
                        "string": "Hermann et al."
                    },
                    {
                        "id": 31,
                        "string": "(2015) describe an approach to extract (context, question, answer) triples from news articles."
                    },
                    {
                        "id": 32,
                        "string": "Our work draws on this approach to automatically create questions from human abstracts."
                    },
                    {
                        "id": 33,
                        "string": "Reinforcement learning (RL) has been recently applied to a number of NLP applications, includ-ing dialog generation (Li et al., 2017) , machine translation (MT) (Ranzato et al., 2016; Gu et al., 2018) , question answering (Choi et al., 2017) , and summarization and sentence simplification (Zhang and Lapata, 2017; Paulus et al., 2017; Chen and Bansal, 2018; Narayan et al., 2018) ."
                    },
                    {
                        "id": 34,
                        "string": "This study leverages RL to explore the space of possible extractive summaries."
                    },
                    {
                        "id": 35,
                        "string": "The summaries are encouraged to preserve salient source content useful for answering questions as well as sharing common words with the abstracts."
                    },
                    {
                        "id": 36,
                        "string": "Our Approach Given a source document X, our system generates a summary Y = (y 1 , y 2 , · · · , y |Y | ) by identifying consecutive sequences of words: y t is 1 if the t-th source word is included in the summary, 0 otherwise."
                    },
                    {
                        "id": 37,
                        "string": "In this section we investigate a questionoriented reward R(Y ) that encourages summaries to contain sufficient content useful for answering key questions about the document ( §3.1); we then use reinforcement learning to explore the space of possible extractive summaries ( §3.2)."
                    },
                    {
                        "id": 38,
                        "string": "Question-Focused Reward We reward a summary if it can be used as a document surrogate to answer important questions."
                    },
                    {
                        "id": 39,
                        "string": "Let {(Q k , e * k )} K k=1 be a set of question-answer pairs for a source document, where e * k is the groundtruth answer corresponding to an entity or a keyword."
                    },
                    {
                        "id": 40,
                        "string": "We encode the question Q k into a vector: q k = Bi-LSTM(Q k ) ∈ R d using a bidirectional LSTM, where the last outputs of the forward and backward passes are concatenated to form a question vector."
                    },
                    {
                        "id": 41,
                        "string": "We use the same Bi-LSTM to encode the summary Y to a sequence of vectors: (h S 1 , h S 2 , · · · , h S |S| ) = Bi-LSTM(Y ), where |S| is the number of words in the summary; h S t ∈ R d is the concatenation of forward and backward hidden states at time step t. Figure 1 provides an illustration of the system framework."
                    },
                    {
                        "id": 42,
                        "string": "An attention mechanism is used to locate parts of the summary that are relevant to Q k ."
                    },
                    {
                        "id": 43,
                        "string": "We define α k,i ∝ exp(q k W a h S i ) to represent the importance of the i-th summary word (h S i ) to answering the k-th question (q k ), characterized by a bilinear term (Chen et al., 2016a) ."
                    },
                    {
                        "id": 44,
                        "string": "A context vector c k is constructed as a weighted sum of all summary words relevant to the k-th question, and it is used to predict the answer."
                    },
                    {
                        "id": 45,
                        "string": "We define the QA reward R a (Y ) as the log-likelihood of correctly predict- ing all answers."
                    },
                    {
                        "id": 46,
                        "string": "{W a , W c } are learnable model parameters."
                    },
                    {
                        "id": 47,
                        "string": "α k,i = exp(q k W a h S i ) |S| i=1 exp(q k W a h S i ) (1) c k = |S| i=1 α k,i h S i (2) P (e k |Y, Q k ) = softmax(W c c k ) (3) R a (Y ) = 1 K K k=1 log P (e * k |Y, Q k ) (4) In the following we describe approaches to obtain a set of question-answer pairs {(Q k , e * k )} K k=1 from a human abstract."
                    },
                    {
                        "id": 48,
                        "string": "In fact, this formulation has the potential to make use of multiple human abstracts (subject to availability) in a unified framework; in that case, the QA pairs will be extracted from all abstracts."
                    },
                    {
                        "id": 49,
                        "string": "According to Eq."
                    },
                    {
                        "id": 50,
                        "string": "(4), the system is optimized to generate summaries that preserve salient source content sufficient to answer all questions (≈ human abstract)."
                    },
                    {
                        "id": 51,
                        "string": "We expect to harvest one question-answer pair from each sentence of the abstract."
                    },
                    {
                        "id": 52,
                        "string": "More are possible, but the QA pairs will contain duplicate content."
                    },
                    {
                        "id": 53,
                        "string": "There are a few other noteworthy issues."
                    },
                    {
                        "id": 54,
                        "string": "If we do not collect any QA pairs from a sentence of the abstract, its content will be left out of the system summary."
                    },
                    {
                        "id": 55,
                        "string": "It is thus crucial for the system to extract at least one QA pair from any sentence in an automatic manner."
                    },
                    {
                        "id": 56,
                        "string": "Further, the questions must not be answered by simply applying general world knowledge."
                    },
                    {
                        "id": 57,
                        "string": "We expect the adequacy of the summary to have a direct influence on whether or not the questions will be correctly answered."
                    },
                    {
                        "id": 58,
                        "string": "Motivated by these considerations, we perform the following steps."
                    },
                    {
                        "id": 59,
                        "string": "We split a human abstract to a set of sentences, identify an answer token from each sentence, then convert the sentence to a question by replacing the token with a placeholder, yielding a Cloze question."
                    },
                    {
                        "id": 60,
                        "string": "We explore two approaches to extract answer tokens: • Entities."
                    },
                    {
                        "id": 61,
                        "string": "We extract four types of named entities {PER, LOC, ORG, MISC} from sentences and treat them as possible answer tokens."
                    },
                    {
                        "id": 62,
                        "string": "• Keywords."
                    },
                    {
                        "id": 63,
                        "string": "This approach identifies the ROOT word of a sentence dependency parse tree and treats it as a keyword-based answer token."
                    },
                    {
                        "id": 64,
                        "string": "Not all sentences contain entities, but every sentence has a root word; it is often the main verb of the sentence."
                    },
                    {
                        "id": 65,
                        "string": "We obtain K question-answer pairs from each human abstract, one pair per sentence."
                    },
                    {
                        "id": 66,
                        "string": "If there are less than K sentences in the abstract, the QA pairs of the top sentences will be duplicated, with the assumption that the top sentences are more important than others."
                    },
                    {
                        "id": 67,
                        "string": "If multiple entities reside in a sentence, we randomly pick one as the answer token; otherwise if there are no entities, we use the root word instead."
                    },
                    {
                        "id": 68,
                        "string": "To ensure that the extractive summaries are concise, fluent, and close to the original wording, we add additional components to the reward function: (i) we define R s (Y ) = | 1 |Y | |Y | t=1 y t − δ| to restrict the summary size."
                    },
                    {
                        "id": 69,
                        "string": "We require the percentage of selected source words to be close to a predefined threshold δ."
                    },
                    {
                        "id": 70,
                        "string": "This constraint works well at restricting length, with the average summary size adhering to this percentage; (ii) we further introduce R f (Y ) = |Y | t=2 |y t − y t−1 | to encourage the summaries to be fluent."
                    },
                    {
                        "id": 71,
                        "string": "This component is adopted from (Lei et al., 2016) , where few 0/1 switches between y t−1 and y t indicates the system is selecting consecutive word sequences; (iii) we encourage system and reference summaries to share common bigrams."
                    },
                    {
                        "id": 72,
                        "string": "This practice has shown suc-cess in earlier studies (Gillick and Favre, 2009 )."
                    },
                    {
                        "id": 73,
                        "string": "R b (Y ) is defined as the percentage of reference bigrams successfully covered by the system summary."
                    },
                    {
                        "id": 74,
                        "string": "These three components together ensure the well-formedness of extractive summaries."
                    },
                    {
                        "id": 75,
                        "string": "The final reward function R(Y ) is a linear interpolation of all the components; γ, α, β are coefficients and we describe their parameter tuning in §4."
                    },
                    {
                        "id": 76,
                        "string": "R(Y )=R a (Y )+γR b (Y )−αR f (Y )−βR s (Y ) (5) Reinforcement Learning In the following we seek to optimize a policy P (Y |X) for generating extractive summaries so that the expected reward E P (Y |X) [R(Y )] is maximized."
                    },
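A minimal sketch of the reward in Eq. (5), assuming a binary selection vector y and pre-extracted bigram sets; the QA component r_a is assumed to come from the question-answering module, and the default coefficient values below are placeholders (the paper tunes gamma, alpha, beta on the validation set, with beta = 2 * alpha).

```python
# Sketch of the reward R(Y) in Eq. (5); r_a is the QA reward, assumed given.
import numpy as np

def reward(y, ref_bigrams, sys_bigrams, r_a,
           gamma=6.0, alpha=20.0, beta=40.0, delta=0.4):
    y = np.asarray(y, dtype=float)
    r_s = abs(y.mean() - delta)                  # size: selected fraction near delta
    r_f = np.abs(np.diff(y)).sum()               # fluency: number of 0/1 switches
    r_b = len(ref_bigrams & sys_bigrams) / max(len(ref_bigrams), 1)  # bigram coverage
    return r_a + gamma * r_b - alpha * r_f - beta * r_s
```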
                    {
                        "id": 77,
                        "string": "Taking derivatives of this objective with respect to model parameters θ involves repeatedly sampling summariesŶ = (ŷ 1 ,ŷ 2 , · · · ,ŷ |Y | ) (illustrated in Eq."
                    },
                    {
                        "id": 78,
                        "string": "(6) )."
                    },
                    {
                        "id": 79,
                        "string": "In this way reinforcement learning exploits the space of extractive summaries of a source document."
                    },
                    {
                        "id": 80,
                        "string": "∇ θ E P (Y |X) [R(Y )] = E P (Y |X) [R(Y )∇ θ log P (Y |X)] ≈ 1 N N n=1 R(Ŷ (n) )∇ θ log P (Ŷ (n) |X) (6) To calculate P (Y |X) and then sampleŶ from it, we use a bidirectional LSTM to encode a source document to a sequence of vectors: (h D 1 , h D 2 , · · · , h D |X| ) = Bi-LSTM(X)."
                    },
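Eq. (6) is the standard REINFORCE estimator; a PyTorch-style sketch follows. For clarity it assumes the policy returns all per-word probabilities at once, whereas the full model of Eqs. (7)-(8) conditions each probability on the previously sampled decisions.

```python
# REINFORCE sketch for Eq. (6): sample N summaries, weight log-probs by reward.
import torch

def reinforce_loss(policy, X, reward_fn, N=5):
    loss = 0.0
    for _ in range(N):
        probs = policy(X)                       # P(y_t = 1 | X), shape (T,)
        dist = torch.distributions.Bernoulli(probs)
        y_hat = dist.sample()                   # one sampled extractive summary
        log_p = dist.log_prob(y_hat).sum()      # log P(Y_hat | X), cf. Eq. (9)
        loss = loss - reward_fn(y_hat) * log_p  # minimize negative expected reward
    return loss / N
```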
                    {
                        "id": 81,
                        "string": "Whether to include the t-th source word in the summary (ŷ t ) thus can be decided based on h D t ."
                    },
                    {
                        "id": 82,
                        "string": "However, we also want to accommodate the previous t-1 sampling decisions (ŷ 1:t−1 ) to improve the fluency of the extractive summary."
                    },
                    {
                        "id": 83,
                        "string": "Following (Lei et al., 2016) , we introduce a single-direction LSTM encoder whose hidden state s t tracks the sampling decisions up to time step t (Eq."
                    },
                    {
                        "id": 84,
                        "string": "8)."
                    },
                    {
                        "id": 85,
                        "string": "It represents the semantic meaning encoded in the current summary."
                    },
                    {
                        "id": 86,
                        "string": "To sample the t-th word, we concatenate the two vectors [h D t ||s t−1 ] and use it as input to a feedforward layer with sigmoid activation to estimatê y t ∼ P (y t |ŷ 1:t−1 , X) (Eq."
                    },
                    {
                        "id": 87,
                        "string": "7)."
                    },
                    {
                        "id": 88,
                        "string": "P (y t |ŷ 1:t−1 , X) = σ(W h [h D t ||s t−1 ] + b h ) (7) s t = LSTM([h D t ||ŷ t ], s t−1 ) (8) P (Ŷ |X) = |Y | t=1 P (ŷ t |ŷ 1:t−1 , X) (9) Note that Eq."
                    },
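Eqs. (7)-(9) can be read off as the module below, a PyTorch sketch with batch size 1 and the hidden sizes of Sect. 4.1; the structure is our reading of the equations, not the authors' released code. At test time the sampling line is replaced by the greedy choice y_t = (p > 0.5).

```python
# Sketch of the sampling network in Eqs. (7)-(9) (batch size 1).
import torch
import torch.nn as nn

class Extractor(nn.Module):
    def __init__(self, d_emb=100, d_enc=256, d_dec=30):
        super().__init__()
        self.d_dec = d_dec
        self.encoder = nn.LSTM(d_emb, d_enc, bidirectional=True, batch_first=True)
        self.tracker = nn.LSTMCell(2 * d_enc + 1, d_dec)  # Eq. (8): input [h_t || y_t]
        self.out = nn.Linear(2 * d_enc + d_dec, 1)        # Eq. (7): W_h [h_t || s_{t-1}] + b_h

    def forward(self, x):                                 # x: (1, T, d_emb)
        h, _ = self.encoder(x)                            # (1, T, 2 * d_enc)
        s = x.new_zeros(1, self.d_dec)
        c = x.new_zeros(1, self.d_dec)
        y_hats, log_p = [], 0.0
        for t in range(h.size(1)):
            p = torch.sigmoid(self.out(torch.cat([h[:, t], s], dim=-1)))    # Eq. (7)
            dist = torch.distributions.Bernoulli(p)
            y_t = dist.sample()             # at test time: y_t = (p > 0.5).float()
            log_p = log_p + dist.log_prob(y_t).sum()                        # Eq. (9)
            s, c = self.tracker(torch.cat([h[:, t], y_t], dim=-1), (s, c))  # Eq. (8)
            y_hats.append(y_t)
        return torch.cat(y_hats, dim=-1), log_p
```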
                    {
                        "id": 89,
                        "string": "(7) can be pretrained using goldstandard summary sequence Y * = (y * 1 , y * 2 , · · · , y * |Y | ) to minimize the word-level cross-entropy loss, System R-1 R-2 R-L LSA (Steinberger and Jezek, 2004) 21.2 6.2 14.0 LexRank (Erkan and Radev, 2004) 26.1 9.6 17.7 TextRank (Mihalcea and Tarau, 2004) 23.3 7.7 15.8 SumBasic (Vanderwende et al., 2007) 22.9 5.5 14.8 KL-Sum (Haghighi and Vanderwende, 2009) 20.7 5.9 13.7 Distraction-M3 (Chen et al., 2016b) 27.1 8.2 18.7 Seq2Seq w/ Attn (See et al., 2017) 25.0 7.7 18.8 Pointer-Gen w/ Cov (See et al., 2017) 29.9 10.9 21.1 Graph-based Attn (Tan et al., 2017) 30.3 9.8 20.0 Extr+EntityQ (this paper) 31.4 11.5 21.7 Extr+KeywordQ (this paper) 31.7 11.6 21.5 where we set y * t as 1 if (x t , x t+1 ) is a bigram in the human abstract."
                    },
                    {
                        "id": 90,
                        "string": "For reinforcement learning, our goal is to optimize the policy P (Y |X) using the reward function R(Y ) ( §3.1) during the training process."
                    },
                    {
                        "id": 91,
                        "string": "Once the policy P (Y |X) is learned, we do not need the reward function (or any QA pairs) at test time to generate generic summaries."
                    },
                    {
                        "id": 92,
                        "string": "Instead we chooseŷ t that yields the highest probabilityŷ t = arg max P (y t |ŷ 1:t−1 , X)."
                    },
                    {
                        "id": 93,
                        "string": "Experiments All training, validation, and testing was performed using the CNN dataset (Hermann et al., 2015; Nallapati et al., 2016) containing news articles paired with human-written highlights (i.e., abstracts)."
                    },
                    {
                        "id": 94,
                        "string": "We observe that a source article contains 29.8 sentences and an abstract contains 3.54 sentences on average."
                    },
                    {
                        "id": 95,
                        "string": "The train/valid/test splits contain 90,266, 1,220, 1,093 articles respectively."
                    },
                    {
                        "id": 96,
                        "string": "Hyperparameters The hyperparameters, tuned on the validation set, include the following: the hidden state size of the Bi-LSTM is 256; the hidden state size of the single-direction LSTM encoder is 30."
                    },
                    {
                        "id": 97,
                        "string": "Dropout rate (Srivastava, 2013) , used twice in the sampling component, is set to 0.2."
                    },
                    {
                        "id": 98,
                        "string": "The minibatch size is set to 256."
                    },
                    {
                        "id": 99,
                        "string": "We apply early stopping on the validation set, where the maximum number of epochs is set to 50."
                    },
                    {
                        "id": 100,
                        "string": "Our source vocabulary contains 150K words; words not in the vocabulary are replaced by the unk token."
                    },
                    {
                        "id": 101,
                        "string": "We use 100-dimensional word embeddings, initialized by GloVe (Pennington et al., 2014) and remain trainable."
                    },
                    {
                        "id": 102,
                        "string": "We set β = 2α and select the best α ∈ {10, 20, 50} and γ ∈ {5, 6, 7, 8} using the valid set (best value underlined)."
                    },
                    {
                        "id": 103,
                        "string": "The maximum length of input is set to 100 words; δ is set to be 0.4 (≈40 words)."
                    },
                    {
                        "id": 104,
                        "string": "We use the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 1e-4 and halve the learning rate if the objective worsens beyond a threshold (> 10%)."
                    },
                    {
                        "id": 105,
                        "string": "As mentioned we utilized a bigram based pretraining method."
                    },
                    {
                        "id": 106,
                        "string": "We found that this stabilized the training of the full model."
                    },
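For reference, the hyperparameters above collected into a single configuration sketch; the dict layout is ours, the values are the paper's.

```python
# Hyperparameters of Sect. 4.1 gathered into one configuration sketch.
CONFIG = {
    "bilstm_hidden": 256,        # Bi-LSTM encoder
    "tracker_hidden": 30,        # single-direction decision LSTM
    "dropout": 0.2,              # used twice in the sampling component
    "batch_size": 256,
    "max_epochs": 50,            # with early stopping on the validation set
    "vocab_size": 150_000,       # OOV words become unk
    "emb_dim": 100,              # GloVe-initialized, kept trainable
    "alpha_grid": [10, 20, 50],  # beta = 2 * alpha
    "gamma_grid": [5, 6, 7, 8],
    "max_input_len": 100,        # words
    "delta": 0.4,                # ~40 words
    "optimizer": "Adam",
    "lr": 1e-4,                  # halved if the objective worsens by > 10%
}
```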
                    {
                        "id": 107,
                        "string": "Results We compare our methods with state-of-the-art published systems, including both extractive and abstractive approaches (their details are summarized below)."
                    },
                    {
                        "id": 108,
                        "string": "We experiment with two variants of our approach."
                    },
                    {
                        "id": 109,
                        "string": "\"EntityQ\" uses QA pairs whose answers are named entities."
                    },
                    {
                        "id": 110,
                        "string": "\"KeywordQ\" uses pairs whose answers are sentence root words."
                    },
                    {
                        "id": 111,
                        "string": "According to the R-1, R-2, and R-L scores (Lin, 2004) presented in Table 2 , both methods are superior to the baseline systems on the benchmark dataset, yielding 11.5 and 11.6 R-2 F-scores, respectively."
                    },
                    {
                        "id": 112,
                        "string": "• LSA (Steinberger and Jezek, 2004) uses the latent semantic analysis technique to identify semantically important sentences."
                    },
                    {
                        "id": 113,
                        "string": "• LexRank (Erkan and Radev, 2004 ) is a graphbased approach that computes sentence importance based on the concept of eigenvector centrality in a graph representation of source sentences."
                    },
                    {
                        "id": 114,
                        "string": "• TextRank (Mihalcea and Tarau, 2004) is an unsupervised graph-based ranking algorithm inspired by algorithms PageRank and HITS."
                    },
                    {
                        "id": 115,
                        "string": "• SumBasic (Vanderwende et al., 2007) is an extractive approach that assumes words occurring frequently in a document cluster have a higher chance of being included in the summary."
                    },
                    {
                        "id": 116,
                        "string": "• KL-Sum (Haghighi and Vanderwende, 2009 ) describes a method that greedily adds sentences to the summary so long as it decreases the KL divergence."
                    },
                    {
                        "id": 117,
                        "string": "• Distraction-M3 (Chen et al., 2016b ) trains the summarization model to not only attend to to specific regions of input documents, but also distract the attention to traverse different content of the source document."
                    },
                    {
                        "id": 118,
                        "string": "• Pointer-Generator (See et al., 2017) allows the system to not only copy words from the source text via pointing but also generate novel words through the generator."
                    },
                    {
                        "id": 119,
                        "string": "• Graph-based Attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoder-decoder framework."
                    },
                    {
                        "id": 120,
                        "string": "Table 3 : Train/valid accuracy and R-2 F-scores when using varying numbers of QA pairs (K=1 to 5) in the reward func."
                    },
                    {
                        "id": 121,
                        "string": "In Table 3 , we vary the number of QA pairs used per article in the reward function (K=1 to 5)."
                    },
                    {
                        "id": 122,
                        "string": "The summaries are encouraged to contain comprehensive content useful for answering all questions."
                    },
                    {
                        "id": 123,
                        "string": "When more QA pairs are used (K1→K5), we observe that the number of answer tokens has increased and almost doubled: 23.7K (K1)→50.3K (K5) for entities as answers, and 7.3K→13.7K for keywords."
                    },
                    {
                        "id": 124,
                        "string": "The enlarged answer space has an impact on QA accuracies."
                    },
                    {
                        "id": 125,
                        "string": "When using entities as answers, the training accuracy is 34.8% (Q5) and validation is 15.4% (Q5), and there appears to be a considerable gap between the two."
                    },
                    {
                        "id": 126,
                        "string": "In contrast, the gap is quite small when using keywords as answers (27.5% and 21.9% for Q5), suggesting that using sentence root words as answers is a more viable strategy to create QA pairs."
                    },
                    {
                        "id": 127,
                        "string": "Comparing to QA studies (Chen et al., 2016a) , we remove the constraint that requires answer entities (or keywords) to reside in the source documents."
                    },
                    {
                        "id": 128,
                        "string": "Adding this constraint improves the QA accuracy for a standard QA system."
                    },
                    {
                        "id": 129,
                        "string": "However, because our system does not perform QA during testing (the question-answer pairs are not available for the test set) but only generate generic summaries, we do not enforce this requirement and report no testing accuracies."
                    },
                    {
                        "id": 130,
                        "string": "We observe that the R-2 scores only present minor changes from K1 to K5."
                    },
                    {
                        "id": 131,
                        "string": "We conjecture that more question-answer pairs do not make the summaries contain more comprehensive content because the input and the summary are relatively short; K=1 yields the best results."
                    },
                    {
                        "id": 132,
                        "string": "In Table 4 , we present example system and reference summaries."
                    },
                    {
                        "id": 133,
                        "string": "Our extractive summaries can be overlaid with the source documents to assist people with browsing through the documents."
                    },
                    {
                        "id": 134,
                        "string": "In this way the summaries stay true to the original and do not contain information that was not in the source documents."
                    },
                    {
                        "id": 135,
                        "string": "Future work."
                    },
                    {
                        "id": 136,
                        "string": "We are interested in investigating approaches that automatically group selected summary segments into clusters."
                    },
                    {
                        "id": 137,
                        "string": "[Table 4, example source document (summary words are bolded in the original):] It was all set for a fairytale ending for record-breaking jockey AP McCoy; in the end it was a different but familiar name who won the Grand National on Saturday."
                    },
                    {
                        "id": 138,
                        "string": "25-1 outsider Many Clouds, who had shown little form going into the race, won by a length and a half, ridden by jockey Leighton Aspell."
                    },
                    {
                        "id": 139,
                        "string": "Aspell won last year's Grand National too, making him the first jockey since the 1950s to ride back-to-back winners on different horses."
                    },
                    {
                        "id": 140,
                        "string": "\"It feels wonderful, I asked big questions,\" Aspell said... [Abstract:] 25-1 shot Many Clouds wins Grand National; second win in a row for jockey Leighton Aspell; first jockey to win two in a row on different horses since the 1950s."
                    },
                    {
                        "id": 141,
                        "string": "Each cluster can capture a unique aspect of the document, and clusters of text segments can be color-highlighted."
                    },
                    {
                        "id": 142,
                        "string": "Inspired by the recent work of Narayan et al."
                    },
                    {
                        "id": 143,
                        "string": "(2018) , we are also interested in conducting the usability study to test how well the summary highlights can help users quickly answer key questions about the documents."
                    },
                    {
                        "id": 144,
                        "string": "This will provide an alternative strategy for evaluating our proposed method against both extractive and abstractive baselines."
                    },
                    {
                        "id": 145,
                        "string": "Conclusion In this paper we explore a new training paradigm for extractive summarization."
                    },
                    {
                        "id": 146,
                        "string": "Our system converts human abstracts to a set of question-answer pairs."
                    },
                    {
                        "id": 147,
                        "string": "We use reinforcement learning to exploit the space of extractive summaries and promote summaries that are concise, fluent, and adequate for answering questions."
                    },
                    {
                        "id": 148,
                        "string": "Results show that our approach is effective, surpassing state-of-the-art systems."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 25
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 26,
                        "end": 35
                    },
                    {
                        "section": "Our Approach",
                        "n": "3",
                        "start": 36,
                        "end": 37
                    },
                    {
                        "section": "Question-Focused Reward",
                        "n": "3.1",
                        "start": 38,
                        "end": 75
                    },
                    {
                        "section": "Reinforcement Learning",
                        "n": "3.2",
                        "start": 76,
                        "end": 92
                    },
                    {
                        "section": "Experiments",
                        "n": "4",
                        "start": 93,
                        "end": 95
                    },
                    {
                        "section": "Hyperparameters",
                        "n": "4.1",
                        "start": 96,
                        "end": 106
                    },
                    {
                        "section": "Results",
                        "n": "4.2",
                        "start": 107,
                        "end": 144
                    },
                    {
                        "section": "Conclusion",
                        "n": "5",
                        "start": 145,
                        "end": 148
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1123-Table4-1.png",
                        "caption": "Table 4: Example system summary and human abstract. The summary words are shown in bold in the source document.",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 293.28,
                            "y1": 62.879999999999995,
                            "y2": 195.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1123-Figure1-1.png",
                        "caption": "Figure 1: System framework. The model uses an extractive summary as a document surrogate to answer important questions about the document. The questions are automatically derived from the human abstract.",
                        "page": 2,
                        "bbox": {
                            "x1": 93.6,
                            "x2": 500.15999999999997,
                            "y1": 68.64,
                            "y2": 190.56
                        }
                    },
                    {
                        "filename": "../figure/image/1123-Table2-1.png",
                        "caption": "Table 2: Results on the CNN test set (full-length F1 scores).",
                        "page": 3,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 529.4399999999999,
                            "y1": 62.879999999999995,
                            "y2": 201.12
                        }
                    },
                    {
                        "filename": "../figure/image/1123-Table3-1.png",
                        "caption": "Table 3: Train/valid accuracy and R-2 F-scores when using varying numbers of QA pairs (K=1 to 5) in the reward func.",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 531.36,
                            "y1": 62.879999999999995,
                            "y2": 168.0
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-38"
        },
        {
            "slides": {
                "0": {
                    "title": "Story",
                    "text": [
                        "3. Conclusions and Future Work",
                        "Representations as Multimodal Embeddings. AAAI",
                        "Learn mapping f text vision.",
                        "Finding 1: Imagined vectors, f (text), outperform original visual vectors in 7/7 word similarity tasks.",
                        "So, why are mapped vectors multimodal? We conjecture:",
                        "Continuity. Output vector is nothing but the input vector transformed by a continuous map: f (x x",
                        "Finding 2 (not in AAAI paper): Vectors imagined with an untrained network do even better."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Motivation",
                    "text": [
                        "3. Conclusions and Future Work",
                        "Applications (e.g., zero-shot image tagging, zero-shot translation or cross-modal retrieval):",
                        "Use linear or NN maps to bridge modalities / spaces.",
                        "Then, they tag / translate based on neighborhood structure of mapped vectors f (X",
                        "Research question: Is the neighborhood structure of f (X similar to that of Y? Or rather to X?",
                        "How to measure similarity of 2 sets of vectors from different spaces? Idea: mean nearest neighbor overlap"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "General Setting",
                    "text": [
                        "3. Conclusions and Future Work",
                        "Mappings f X Y to bridge modalities X and Y:"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": [
                        "figure/image/1150-Figure1-1.png"
                    ]
                }
            },
            "paper_title": "Do Neural Network Cross-Modal Mappings Really Bridge Modalities?",
            "paper_id": "1150",
            "paper": {
                "title": "Do Neural Network Cross-Modal Mappings Really Bridge Modalities?",
                "abstract": "Feed-forward networks are widely used in cross-modal applications to bridge modalities by mapping distributed vectors of one modality to the other, or to a shared space. The predicted vectors are then used to perform e.g., retrieval or labeling. Thus, the success of the whole system relies on the ability of the mapping to make the neighborhood structure (i.e., the pairwise similarities) of the predicted vectors akin to that of the target vectors. However, whether this is achieved has not been investigated yet. Here, we propose a new similarity measure and two ad hoc experiments to shed light on this issue. In three cross-modal benchmarks we learn a large number of language-to-vision and visionto-language neural network mappings (up to five layers) using a rich diversity of image and text features and loss functions. Our results reveal that, surprisingly, the neighborhood structure of the predicted vectors consistently resembles more that of the input vectors than that of the target vectors. In a second experiment, we further show that untrained nets do not significantly disrupt the neighborhood (i.e., semantic) structure of the input vectors.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Neural network mappings are widely used to bridge modalities or spaces in cross-modal retrieval (Qiao et al., 2017; Wang et al., 2016; Zhang et al., 2016) , zero-shot learning (Lazaridou et al., 2015b (Lazaridou et al., , 2014 Socher et al., 2013) in building multimodal representations (Collell et al., 2017) or in word translation (Lazaridou et al., 2015a) , to name a few."
                    },
                    {
                        "id": 1,
                        "string": "Typically, a neural network is firstly trained to predict the distributed vectors of one modality (or space) from the other."
                    },
                    {
                        "id": 2,
                        "string": "At test time, some operation such as retrieval or labeling is performed based on the nearest neighbors of the predicted (mapped) vectors."
                    },
                    {
                        "id": 3,
                        "string": "For instance, in zero-shot image classification, image features are mapped to the text space and the label of the nearest neighbor word is assigned."
                    },
                    {
                        "id": 4,
                        "string": "Thus, the success of such systems relies entirely on the ability of the map to make the predicted vectors similar to the target vectors in terms of semantic or neighborhood structure."
                    },
                    {
                        "id": 5,
                        "string": "1 However, whether neural nets achieve this goal in general has not been investigated yet."
                    },
                    {
                        "id": 6,
                        "string": "In fact, recent work evidences that considerable information about the input modality propagates into the predicted modality (Collell et al., 2017; Lazaridou et al., 2015b; Frome et al., 2013) ."
                    },
                    {
                        "id": 7,
                        "string": "To shed light on these questions, we first introduce the (to the best of our knowledge) first existing measure to quantify similarity between the neighborhood structures of two sets of vectors."
                    },
                    {
                        "id": 8,
                        "string": "Second, we perform extensive experiments in three benchmarks where we learn image-to-text and text-to-image neural net mappings using a rich variety of state-of-the-art text and image features and loss functions."
                    },
                    {
                        "id": 9,
                        "string": "Our results reveal that, contrary to expectation, the semantic structure of the mapped vectors consistently resembles more that of the input vectors than that of the target vectors of interest."
                    },
                    {
                        "id": 10,
                        "string": "In a second experiment, by using six concept similarity tasks we show that the semantic structure of the input vectors is preserved after mapping them with an untrained network, further evidencing that feed-forward nets naturally preserve semantic information about the input."
                    },
                    {
                        "id": 11,
                        "string": "Overall, we uncover and rise awareness of a largely ignored phenomenon relevant to a wide range of cross-modal / cross-space applications such as retrieval, zero-shot learning or image annotation."
                    },
                    {
                        "id": 12,
                        "string": "Ultimately, this paper aims at: (1) Encouraging the development of better architectures to bridge modalities / spaces; (2) Advocating for the use of semantic-based criteria to evaluate the quality of predicted vectors such as the neighborhood-based measure proposed here, instead of purely geometric measures such as mean squared error (MSE)."
                    },
                    {
                        "id": 13,
                        "string": "Related Work and Motivation Neural network and linear mappings are popular tools to bridge modalities in cross-modal retrieval systems."
                    },
                    {
                        "id": 14,
                        "string": "Lazaridou et al."
                    },
                    {
                        "id": 15,
                        "string": "(2015b) leverage a text-to-image linear mapping to retrieve images given text queries."
                    },
                    {
                        "id": 16,
                        "string": "Weston et al."
                    },
                    {
                        "id": 17,
                        "string": "(2011) map label and image features into a shared space with a linear mapping to perform image annotation."
                    },
                    {
                        "id": 18,
                        "string": "Alternatively, Frome et al."
                    },
                    {
                        "id": 19,
                        "string": "(2013) , Lazaridou et al."
                    },
                    {
                        "id": 20,
                        "string": "(2014) and Socher et al."
                    },
                    {
                        "id": 21,
                        "string": "(2013) perform zero-shot image classification with an image-to-text neural network mapping."
                    },
                    {
                        "id": 22,
                        "string": "Instead of mapping to latent features, Collell et al."
                    },
                    {
                        "id": 23,
                        "string": "(2018) use a 2-layer feedforward network to map word embeddings directly to image pixels in order to visualize spatial arrangements of objects."
                    },
                    {
                        "id": 24,
                        "string": "Neural networks are also popular in other cross-space applications such as cross-lingual tasks."
                    },
                    {
                        "id": 25,
                        "string": "Lazaridou et al."
                    },
                    {
                        "id": 26,
                        "string": "(2015a) learn a linear map from language A to language B and then translate new words by returning the nearest neighbor of the mapped vector in the B space."
                    },
                    {
                        "id": 27,
                        "string": "In the context of zero-shot learning, shortcomings of cross-space neural mappings have also been identified."
                    },
                    {
                        "id": 28,
                        "string": "For instance, \"hubness\" (Radovanović et al., 2010) and \"pollu-tion\" (Lazaridou et al., 2015a) relate to the highdimensionality of the feature spaces and to overfitting respectively."
                    },
                    {
                        "id": 29,
                        "string": "Crucially, we do not assume that our cross-modal problem has any class labels, and we study the similarity between input and mapped vectors and between output and mapped vectors."
                    },
                    {
                        "id": 30,
                        "string": "Recent work evidences that the predicted vectors of cross-modal neural net mappings are still largely informative about the input vectors."
                    },
                    {
                        "id": 31,
                        "string": "Lazaridou et al."
                    },
                    {
                        "id": 32,
                        "string": "(2015b) qualitatively observe that abstract textual concepts are grounded with the visual input modality."
                    },
                    {
                        "id": 33,
                        "string": "Counterintuitively, Collell et al."
                    },
                    {
                        "id": 34,
                        "string": "(2017) find that the vectors \"imagined\" from a language-to-vision neural map, outperform the original visual vectors in concept similarity tasks."
                    },
                    {
                        "id": 35,
                        "string": "The paper argued that the reconstructed visual vectors become grounded with language because the map preserves topological properties of the input."
                    },
                    {
                        "id": 36,
                        "string": "Here, we go one step further and show that the mapped vectors often resemble the input vectors more than the target vectors in semantic terms, which goes against the goal of a cross-modal map."
                    },
                    {
                        "id": 37,
                        "string": "Well-known theoretical work shows that networks with as few as one hidden layer are able to approximate any function (Hornik et al., 1989) ."
                    },
                    {
                        "id": 38,
                        "string": "However, this result does not reveal much neither about test performance nor about the semantic structure of the mapped vectors."
                    },
                    {
                        "id": 39,
                        "string": "Instead, the phenomenon described is more closely tied to other properties of neural networks."
                    },
                    {
                        "id": 40,
                        "string": "In particular, continuity guarantees that topological properties of the input, such as connectedness, are preserved (Armstrong, 2013)."
                    },
                    {
                        "id": 41,
                        "string": "Furthermore, continuity in a topology induced by a metric also ensures that points that are close together are mapped close together."
                    },
                    {
                        "id": 42,
                        "string": "As a toy example, Fig."
                    },
                    {
                        "id": 43,
                        "string": "1 illustrates the distortion of a manifold after being mapped by a neural net."
                    },
                    {
                        "id": 44,
                        "string": "2 In a noiseless world with fully statistically dependent modalities, the vectors of one modality could be perfectly predicted from those of the other."
                    },
                    {
                        "id": 45,
                        "string": "However, in real-world problems this is unrealistic given the noise of the features and the fact that modalities encode complementary information (Collell and Moens, 2016) ."
                    },
                    {
                        "id": 46,
                        "string": "Such unpredictability combined with continuity and topology-preserving properties of neural nets propel the phenomenon identified, namely mapped vectors resembling more the input than the target vectors, in nearest neighbors terms."
                    },
                    {
                        "id": 47,
                        "string": "Proposed Approach To bridge modalities X and Y, we consider two popular cross-modal mappings f : X → Y."
                    },
                    {
                        "id": 48,
                        "string": "(i) Linear mapping (lin): f (x) = W 0 x + b 0 with W 0 ∈ R dy×dx , b 0 ∈ R dy , where d x and d y are the input and output dimensions respectively."
                    },
                    {
                        "id": 49,
                        "string": "(ii) Feed-forward neural network (nn): f (x) = W 1 σ(W 0 x + b 0 ) + b 1 with W 1 ∈ R dy×d h , W 0 ∈ R d h ×dx , b 0 ∈ R d h , b 1 ∈ R dy where d h is the number of hidden units and σ() the non-linearity (e.g., tanh or sigmoid)."
                    },
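Both maps are one-liners in most frameworks; a PyTorch sketch follows, with the hidden size d_h left as a placeholder (the paper defers such details to the Supplement).

```python
# The two cross-modal maps of Sect. 3 as a PyTorch sketch.
import torch.nn as nn

def lin_map(d_x, d_y):
    return nn.Linear(d_x, d_y)                 # f(x) = W0 x + b0

def nn_map(d_x, d_y, d_h=512):                 # d_h is a placeholder
    # f(x) = W1 sigma(W0 x + b0) + b1, with a tanh non-linearity
    return nn.Sequential(nn.Linear(d_x, d_h), nn.Tanh(), nn.Linear(d_h, d_y))
```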
                    {
                        "id": 50,
                        "string": "Although single hidden layer networks are already universal approximators (Hornik et al., 1989) , we explored whether deeper nets with 3 and 5 hidden layers could improve the fit (see Supplement)."
                    },
                    {
                        "id": 51,
                        "string": "Loss: Our primary choice is the MSE: 1 2 f (x) − y 2 , where y is the target vector."
                    },
                    {
                        "id": 52,
                        "string": "We also tested other losses such as the cosine: 1 − cos(f (x), y) and the max-margin: max{0, γ + f (x) − y − f (x) − y }, wherẽ x belongs to a different class than (x, y), and γ is the margin."
                    },
                    {
                        "id": 53,
                        "string": "As in Lazaridou et al."
                    },
                    {
                        "id": 54,
                        "string": "(2015a) and Weston et al."
                    },
                    {
                        "id": 55,
                        "string": "(2011) , we choose the firstx that violates the constraint."
                    },
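The three losses can be sketched as follows, assuming f_x = f(x) and that f_x_neg = f(x̃) for the first constraint-violating x̃ of a different class; the margin value is a placeholder.

```python
# Sketch of the MSE, cosine, and max-margin losses of Sect. 3.
import torch
import torch.nn.functional as F

def mse_loss(f_x, y):
    return 0.5 * ((f_x - y) ** 2).sum(dim=-1).mean()

def cosine_loss(f_x, y):
    return (1 - F.cosine_similarity(f_x, y, dim=-1)).mean()

def max_margin_loss(f_x, y, f_x_neg, gamma=0.5):   # gamma is a placeholder margin
    pos = (f_x - y).norm(dim=-1)
    neg = (f_x_neg - y).norm(dim=-1)
    return torch.clamp(gamma + pos - neg, min=0).mean()
```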
                    {
                        "id": 56,
                        "string": "Notice that losses that do not require class labels such as MSE are suitable for a wider, more general set of tasks than discriminative losses (e.g., cross-entropy)."
                    },
                    {
                        "id": 57,
                        "string": "In fact, cross-modal retrieval tasks often do not exhibit any class labels."
                    },
                    {
                        "id": 58,
                        "string": "Additionally, our research question concerns the cross-space mapping problem in isolation (independently of class labels)."
                    },
                    {
                        "id": 59,
                        "string": "Let us denote a set of N input and output vectors by X ∈ R N ×dx and Y ∈ R N ×dy respectively."
                    },
                    {
                        "id": 60,
                        "string": "Each input vector x i is paired to the output vector y i of the same index (i = 1, · · · , N )."
                    },
                    {
                        "id": 61,
                        "string": "Let us henceforth denote the mapped input vectors by f (X) ∈ R N ×dy ."
                    },
                    {
                        "id": 62,
                        "string": "In order to explore the similarity between f (X) and X, and between f (X) and Y , we propose two ad hoc settings below."
                    },
                    {
                        "id": 63,
                        "string": "Neighborhood Structure of Mapped Vectors (Experiment 1) To measure the similarity between the neighborhood structure of two sets of paired vectors V and Z, we propose the mean nearest neighbor overlap measure (mNNO K (V, Z))."
                    },
                    {
                        "id": 64,
                        "string": "We define the nearest neighbor overlap NNO K (v i , z i ) as the number of K nearest neighbors that two paired vectors v i , z i share in their respective spaces."
                    },
                    {
                        "id": 65,
                        "string": "E.g., if the 3 (= K) nearest neighbors of v cat in V are {v dog , v tiger , v lion } and those of z cat in Z are {z mouse , z tiger , z lion }, the NNO 3 (v cat , z cat ) is 2."
                    },
                    {
                        "id": 66,
                        "string": "Definition 1 Let V = {v i } N i=1 and Z = {z i } N i=1 be two sets of N paired vectors."
                    },
                    {
                        "id": 67,
                        "string": "We define: mNNO K (V, Z) = 1 KN N i=1 NNO K (v i , z i ) (1) with NNO K (v i , z i ) = |NN K (v i ) ∩ NN K (z i )|, where NN K (v i ) and NN K (z i ) are the indexes of the K nearest neighbors of v i and z i , respectively."
                    },
                    {
                        "id": 68,
                        "string": "The normalizing constant K simply scales mNNO K (V, Z) between 0 and 1, making it independent of the choice of K. Thus, a mNNO K (V, Z) = 0.7 means that the vectors in V and Z share, on average, 70% of their nearest neighbors."
                    },
                    {
                        "id": 69,
                        "string": "Notice that mNNO implicitly performs retrieval for some similarity measure (e.g., Euclidean or cosine), and quantifies how semantically similar two sets of paired vectors are."
                    },
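Definition 1 translates directly into code; a numpy sketch with cosine nearest neighbors (the paper's default; K = 10 in the experiments below).

```python
# Sketch of mNNO_K (Eq. (1)) with cosine nearest neighbors.
import numpy as np

def knn_indices(M, K):
    M = M / np.linalg.norm(M, axis=1, keepdims=True)  # cosine via unit norm
    sims = M @ M.T
    np.fill_diagonal(sims, -np.inf)                   # exclude self-neighbors
    return np.argsort(-sims, axis=1)[:, :K]

def mnno(V, Z, K=10):
    nn_v, nn_z = knn_indices(V, K), knn_indices(Z, K)
    overlap = sum(len(set(a) & set(b)) for a, b in zip(nn_v, nn_z))
    return overlap / (K * len(V))                     # scaled to [0, 1]
```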
                    {
                        "id": 70,
                        "string": "Mapping with Untrained Networks (Experiment 2) To complement the setting above (Sect."
                    },
                    {
                        "id": 71,
                        "string": "3.1), it is instructive to consider the limit case of an untrained network."
                    },
                    {
                        "id": 72,
                        "string": "Concept similarity tasks provide a suitable setting to study the semantic structure of distributed representations (Pennington et al., 2014) ."
                    },
                    {
                        "id": 73,
                        "string": "That is, semantically similar concepts should ideally be close together."
                    },
                    {
                        "id": 74,
                        "string": "In particular, our interest is in comparing X with its projection f (X) through a mapping with random parameters, to understand the extent to which the mapping may disrupt or preserve the semantic structure of X."
                    },
                    {
                        "id": 75,
                        "string": "4 Experimental Setup 4.1 Experiment 1 Datasets To test the generality of our claims, we select a rich diversity of cross-modal tasks involving texts at three levels: word level (ImageNet), sentence level (IAPR TC-12), and document level (Wiki)."
                    },
                    {
                        "id": 76,
                        "string": "ImageNet (Russakovsky et al., 2015) ."
                    },
                    {
                        "id": 77,
                        "string": "Consists of ∼14M images, covering ∼22K WordNet synsets (or meanings)."
                    },
                    {
                        "id": 78,
                        "string": "Following Collell et al."
                    },
                    {
                        "id": 79,
                        "string": "(2017) , we take the most relevant word for each synset and keep only synsets with more than 50 images."
                    },
                    {
                        "id": 80,
                        "string": "This yields 9,251 different words (or instances)."
                    },
                    {
                        "id": 81,
                        "string": "IAPR TC-12 (Grubinger et al., 2006 Hyperparameters and Implementation See the Supplement (Sect."
                    },
                    {
                        "id": 82,
                        "string": "1) for details."
                    },
                    {
                        "id": 83,
                        "string": "Image and Text Features To ensure that results are independent of the choice of image and text features, we use 5 (2 image + 3 text) features of varied dimensionality (64d, 128-d, 300-d, 2,048-d) and two directions, textto-image (T → I) and image-to-text (I → T )."
                    },
                    {
                        "id": 84,
                        "string": "We make our extracted features publicly available."
                    },
                    {
                        "id": 85,
                        "string": "3 Text."
                    },
                    {
                        "id": 86,
                        "string": "In ImageNet we use 300-dimensional GloVe 4 (Pennington et al., 2014) and 300-d word2vec  word embeddings."
                    },
                    {
                        "id": 87,
                        "string": "In IAPR TC-12 and Wiki, we employ stateof-the-art bidirectional gated recurrent unit (bi-GRU) features (Cho et al., 2014) that we learn with a classification task (see Sect."
                    },
                    {
                        "id": 88,
                        "string": "2 of Supplement)."
                    },
                    {
                        "id": 89,
                        "string": "Image."
                    },
                    {
                        "id": 90,
                        "string": "For ImageNet, we use the publicly available 5 VGG-128 (Chatfield et al., 2014) and ResNet (He et al., 2015) visual features from Collell et al."
                    },
                    {
                        "id": 91,
                        "string": "(2017) , where we obtained 128dimensional VGG-128 and 2,048-d ResNet features from the last layer (before the softmax) of the forward pass of each image."
                    },
                    {
                        "id": 92,
                        "string": "The final representation for a word is the average feature vector (centroid) of all available images for this word."
                    },
                    {
                        "id": 93,
                        "string": "In IAPR TC-12 and Wiki, features for individual images are obtained similarly from the last layer of a ResNet and a VGG-128 model."
                    },
                    {
                        "id": 94,
                        "string": "Experiment 2 Datasets We include six benchmarks, comprising three types of concept similarity: (i) Semantic similarity: SemSim (Silberer and Lapata, 2014) , Sim-lex999 (Hill et al., 2015) and SimVerb-3500 (Gerz et al., 2016) ; (ii) Relatedness: MEN  and WordSim-353 (Finkelstein et al., 2001) ; (iii) Visual similarity: VisSim (Silberer and Lapata, 2014) which includes the same word pairs as SemSim, rated for visual similarity instead of semantic."
                    },
                    {
                        "id": 95,
                        "string": "All six test sets contain human ratings of similarity for word pairs, e.g., ('cat','dog')."
                    },
                    {
                        "id": 96,
                        "string": "Hyperparameters and Implementation The parameters in W 0 , W 1 are drawn from a random uniform distribution [−1, 1] and b 0 , b 1 are set to zero."
                    },
                    {
                        "id": 97,
                        "string": "We use a tanh activation σ()."
                    },
                    {
                        "id": 98,
                        "string": "6 The output dimension d y is set to 2,048 for all embeddings."
                    },
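The Experiment 2 mapping is then just an initialization choice; a sketch follows, with the hidden size d_h as our placeholder.

```python
# Untrained net for Experiment 2: W ~ U[-1, 1], zero biases, tanh, d_y = 2048.
import torch.nn as nn

def untrained_nn(d_x, d_h=512, d_y=2048):             # d_h is a placeholder
    net = nn.Sequential(nn.Linear(d_x, d_h), nn.Tanh(), nn.Linear(d_h, d_y))
    for layer in net:
        if isinstance(layer, nn.Linear):
            nn.init.uniform_(layer.weight, -1.0, 1.0)
            nn.init.zeros_(layer.bias)
    return net.eval()                                 # never trained
```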
                    {
                        "id": 99,
                        "string": "Image and Text Features Textual and visual features are the same as described in Sect."
                    },
                    {
                        "id": 100,
                        "string": "4.1.3 for the ImageNet dataset."
                    },
                    {
                        "id": 101,
                        "string": "Similarity Predictions We compute the prediction of similarity between two vectors z 1 , z 2 with both the cosine z 1 z 2 z 1 z 2 and the Euclidean similarity 1 1+ z 1 −z 2 ."
                    },
                    {
                        "id": 102,
                        "string": "7 4.2."
                    },
                    {
                        "id": 103,
                        "string": "Performance Metrics As is common practice, we evaluate the predictions of similarity of the embeddings (Sect."
                    },
                    {
                        "id": 104,
                        "string": "4.2.4) against the human similarity ratings with the Spearman correlation ρ."
                    },
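A short sketch of the evaluation just described, assuming scipy and a list of paired vectors (z1, z2) for each human-rated word pair.

```python
# Similarity predictions (Sect. 4.2.4) evaluated with Spearman's rho.
import numpy as np
from scipy.stats import spearmanr

def cosine_sim(z1, z2):
    return z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2))

def euclidean_sim(z1, z2):
    return 1.0 / (1.0 + np.linalg.norm(z1 - z2))

def evaluate(pairs, human_ratings, sim=cosine_sim):
    preds = [sim(z1, z2) for z1, z2 in pairs]
    return spearmanr(preds, human_ratings).correlation
```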
                    {
                        "id": 105,
                        "string": "We report the average of 10 sets of randomly generated parameters."
                    },
                    {
                        "id": 106,
                        "string": "Results and Discussion We test statistical significance with a two-sided Wilcoxon rank sum test adjusted with Bonferroni."
                    },
                    {
                        "id": 107,
                        "string": "The null hypothesis is that a compared pair is equal."
                    },
                    {
                        "id": 108,
                        "string": "In Tab."
                    },
                    {
                        "id": 109,
                        "string": "1, * indicates that mNNO(X, f (X)) differs from mNNO(Y, f (X)) (p < 0.001) on the same mapping, embedding and direction."
                    },
                    {
                        "id": 110,
                        "string": "In Tab."
                    },
                    {
                        "id": 111,
                        "string": "2, * indicates that performance of mapped and input vectors differs (p < 0.05) in the 10 runs."
                    },
                    {
                        "id": 112,
                        "string": "Experiment 1 Results below are with cosine neighbors and K = 10."
                    },
                    {
                        "id": 113,
                        "string": "Euclidean neighbors yield similar results and are thus left to the Supplement."
                    },
                    {
                        "id": 114,
                        "string": "Similarly, results in ImageNet with GloVe embeddings are shown below and word2vec results in the Supplement."
                    },
                    {
                        "id": 115,
                        "string": "The choice of K = {5, 10, 30} had no visible effect on results."
                    },
                    {
                        "id": 116,
                        "string": "Results with 3-and 5-layer nets did not show big differences with the results below (see Supplement)."
                    },
                    {
                        "id": 117,
                        "string": "The cosine and max-margin losses performed slightly worse than MSE (see Supplement) ."
                    },
                    {
                        "id": 118,
                        "string": "Although Lazaridou et al."
                    },
                    {
                        "id": 119,
                        "string": "(2015a) and Weston et al."
                    },
                    {
                        "id": 120,
                        "string": "(2011) find that max-margin performs the best in their tasks, we do not find our result entirely surprising given that max-margin focuses on inter-class differences while we look also at intraclass neighbors (in fact, we do not require classes)."
                    },
                    {
                        "id": 121,
                        "string": "Tab."
                    },
                    {
                        "id": 122,
                        "string": "1 shows our core finding, namely that the semantic structure of f (X) resembles more that of X than that of Y , for both lin and nn maps."
                    },
                    {
                        "id": 123,
                        "string": "Table 1 : Test mean nearest neighbor overlap."
                    },
                    {
                        "id": 124,
                        "string": "Boldface indicates the largest score at each mNNO 10 (X, f (X)) and mNNO 10 (Y, f (X)) pair, which are abbreviated by X, f (X) and Y, f (X)."
                    },
                    {
                        "id": 125,
                        "string": "Fig."
                    },
                    {
                        "id": 126,
                        "string": "2 is particularly revealing."
                    },
                    {
                        "id": 127,
                        "string": "If we would only look at train performance (and allow train MSE to reach 0) then f (X) = Y and clearly train mNNO(f (X), Y ) = 1 while mNNO(f (X), X) can only be smaller than 1."
                    },
                    {
                        "id": 128,
                        "string": "However, the interest is always on test samples, and (near-)perfect test prediction is unrealistic."
                    },
                    {
                        "id": 129,
                        "string": "Notice in fact in Fig."
                    },
                    {
                        "id": 130,
                        "string": "2 that even if we look at train fit, MSE needs to be close to 0 for mNNO(f (X), Y ) to be reasonably large."
                    },
                    {
                        "id": 131,
                        "string": "In all the combinations from Tab."
                    },
                    {
                        "id": 132,
                        "string": "1, the test mNNO(f (X), Y ) never surpasses test mNNO(f (X), X) for any number of epochs, even with an oracle (not shown)."
                    },
                    {
                        "id": 133,
                        "string": "ResNet VGG-128 X, f (X) Y, f (X) X, f (X) Y, f (X) ImageNet I → T lin Experiment 2 Tab."
                    },
                    {
                        "id": 134,
                        "string": "2 shows that untrained linear (f lin ) and neural net (f nn ) mappings preserve the semantic structure of the input X, complementing thus the findings of Experiment 1."
                    },
                    {
                        "id": 135,
                        "string": "Experiment 1 concerns learning, while, by \"ablating\" the learning part and randomizing weights, Experiment 2 is revealing about the natural tendency of neural nets to preserve semantic information about the input, regardless of the choice of the target vectors and loss function."
                    },
                    {
                        "id": 136,
                        "string": "WS-353 Men SemSim Conclusions Overall, we uncovered a phenomenon neglected so far, namely that neural net cross-modal mappings can produce mapped vectors more akin to the input vectors than the target vectors, in terms of semantic structure."
                    },
                    {
                        "id": 137,
                        "string": "Such finding has been possible thanks to the proposed measure that explicitly quantifies similarity between the neighborhood structure of two sets of vectors."
                    },
                    {
                        "id": 138,
                        "string": "While other measures such as mean squared error can be misleading, our measure provides a more realistic estimate of the semantic similarity between predicted and target vectors."
                    },
                    {
                        "id": 139,
                        "string": "In fact, it is the semantic structure (or pairwise similarities) what ultimately matters in cross-modal applications."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 12
                    },
                    {
                        "section": "Related Work and Motivation",
                        "n": "2",
                        "start": 13,
                        "end": 46
                    },
                    {
                        "section": "Proposed Approach",
                        "n": "3",
                        "start": 47,
                        "end": 62
                    },
                    {
                        "section": "Neighborhood Structure of Mapped",
                        "n": "3.1",
                        "start": 63,
                        "end": 69
                    },
                    {
                        "section": "Mapping with Untrained Networks (Experiment 2)",
                        "n": "3.2",
                        "start": 70,
                        "end": 74
                    },
                    {
                        "section": "Datasets",
                        "n": "4.1.1",
                        "start": 75,
                        "end": 79
                    },
                    {
                        "section": "Hyperparameters and Implementation",
                        "n": "4.1.2",
                        "start": 80,
                        "end": 82
                    },
                    {
                        "section": "Image and Text Features",
                        "n": "4.1.3",
                        "start": 83,
                        "end": 92
                    },
                    {
                        "section": "Datasets",
                        "n": "4.2.1",
                        "start": 93,
                        "end": 95
                    },
                    {
                        "section": "Hyperparameters and Implementation",
                        "n": "4.2.2",
                        "start": 96,
                        "end": 97
                    },
                    {
                        "section": "Image and Text Features",
                        "n": "4.2.3",
                        "start": 98,
                        "end": 99
                    },
                    {
                        "section": "Similarity Predictions",
                        "n": "4.2.4",
                        "start": 100,
                        "end": 102
                    },
                    {
                        "section": "Performance Metrics",
                        "n": "5",
                        "start": 103,
                        "end": 105
                    },
                    {
                        "section": "Results and Discussion",
                        "n": "5",
                        "start": 106,
                        "end": 111
                    },
                    {
                        "section": "Experiment 1",
                        "n": "5.1",
                        "start": 112,
                        "end": 132
                    },
                    {
                        "section": "Experiment 2",
                        "n": "5.2",
                        "start": 133,
                        "end": 135
                    },
                    {
                        "section": "Conclusions",
                        "n": "6",
                        "start": 136,
                        "end": 139
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1150-Table2-1.png",
                        "caption": "Table 2: Spearman correlations between human ratings and the similarities (cosine or Euclidean) predicted from the embeddings. Boldface denotes best performance per input embedding type.",
                        "page": 4,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 524.16,
                            "y1": 277.44,
                            "y2": 471.35999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1150-Table1-1.png",
                        "caption": "Table 1: Test mean nearest neighbor overlap. Boldface indicates the largest score at each mNNO10(X, f(X)) and mNNO10(Y, f(X)) pair, which are abbreviated by X, f(X) and Y, f(X).",
                        "page": 4,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 289.44,
                            "y1": 384.47999999999996,
                            "y2": 568.3199999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1150-Figure2-1.png",
                        "caption": "Figure 2: Learning a nn model in Wiki (left), IAPR TC-12 (middle) and ImageNet (right).",
                        "page": 4,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 292.32,
                            "y1": 66.24,
                            "y2": 165.12
                        }
                    },
                    {
                        "filename": "../figure/image/1150-Figure1-1.png",
                        "caption": "Figure 1: Effect of applying a mapping f to a (disconnected) manifold M with three hypothetical classes ( , N and •).",
                        "page": 1,
                        "bbox": {
                            "x1": 78.72,
                            "x2": 283.2,
                            "y1": 61.44,
                            "y2": 187.68
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-39"
        },
        {
            "slides": {
                "0": {
                    "title": "Complaints",
                    "text": [
                        "| wish | had more time to tell you about complaining, but the organizers only allocated 15 minutes for this talk.",
                        "2019 Bloomberg Finance L.P. All rights reserved. . : Engineering"
                    ],
                    "page_nums": [
                        1,
                        2,
                        3,
                        4,
                        5,
                        6,
                        7
                    ],
                    "images": []
                },
                "1": {
                    "title": "Complaints Applications",
                    "text": [
                        "2019 Bloomberg Finance L.P. All rights reserved."
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "6": {
                    "title": "Analysis Complaints",
                    "text": [
                        "2019 Bloomberg Finance L.P. All rights reserved."
                    ],
                    "page_nums": [
                        18,
                        19,
                        20,
                        21,
                        22,
                        23,
                        24
                    ],
                    "images": []
                },
                "7": {
                    "title": "Analysis Not Complaints",
                    "text": [
                        "2019 Bloomberg Finance L.P. All rights reserved."
                    ],
                    "page_nums": [
                        25,
                        26,
                        27
                    ],
                    "images": []
                },
                "8": {
                    "title": "Prediction",
                    "text": [
                        "2019 Bloomberg Finance L.P. All rights reserved.",
                        "Most Freq Class Complaint Specific Sentiment Emotions POS Tags LIWC Word2Vec Clusters Unigrams Combined MLP BiLSTM"
                    ],
                    "page_nums": [
                        28,
                        29,
                        30,
                        31,
                        32,
                        33,
                        34,
                        35
                    ],
                    "images": []
                },
                "9": {
                    "title": "Prediction Other Experiments",
                    "text": [
                        "2019 Bloomberg Finance L.P. All rights reserved."
                    ],
                    "page_nums": [
                        36
                    ],
                    "images": []
                },
                "10": {
                    "title": "Takeaways",
                    "text": [
                        "2019 Bloomberg Finance L.P. All rights reserved."
                    ],
                    "page_nums": [
                        37
                    ],
                    "images": []
                }
            },
            "paper_title": "Automatically Identifying Complaints in Social Media",
            "paper_id": "1152",
            "paper": {
                "title": "Automatically Identifying Complaints in Social Media",
                "abstract": "Complaining is a basic speech act regularly used in human and computer mediated communication to express a negative mismatch between reality and expectations in a particular situation. Automatically identifying complaints in social media is of utmost importance for organizations or brands to improve the customer experience or in developing dialogue systems for handling and responding to complaints. In this paper, we introduce the first systematic analysis of complaints in computational linguistics. We collect a new annotated data set of written complaints expressed in English on Twitter. 1  We present an extensive linguistic analysis of complaining as a speech act in social media and train strong feature-based and neural models of complaints across nine domains achieving a predictive performance of up to 79 F1 using distant supervision.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Complaining is a basic speech act used to express a negative mismatch between reality and expectations towards a state of affairs, product, organization or event (Olshtain and Weinbach, 1987) ."
                    },
                    {
                        "id": 1,
                        "string": "Understanding the expression of complaints in natural language and automatically identifying them is of utmost importance for: (a) linguists to obtain a better understanding of the context, intent and types of complaints on a large scale; (b) psychologists to identify human traits underpinning complaint behavior and expression; (c) organizations and advisers to improve the customer service by identifying and addressing client concerns and issues effectively in real time, especially on social media; (d) developing downstream natural language processing (NLP) applications, such as 1 Data and code is available here: https: //github.com/danielpreotiuc/ complaints-social-media Tweet C S @FC Help hi, I ordered a necklace over a week ago and it still hasn't arrived (...) @BootsUK I love Boots!"
                    },
                    {
                        "id": 2,
                        "string": "Shame you're introducing a man tax of 7% in 2018 :( dialogue systems that aim to automatically identify complaints."
                    },
                    {
                        "id": 3,
                        "string": "You suck However, complaining has yet to be studied using computational approaches."
                    },
                    {
                        "id": 4,
                        "string": "The speech act of complaining, as previously defined in linguistics research (Olshtain and Weinbach, 1987 ) and adopted in this study, has as its core the concept of violated or breached expectations i.e., the person posting the complaint had their favorable expectations breached by a party, usually the one to which the complaint is addressed."
                    },
                    {
                        "id": 5,
                        "string": "Complaints have been previously analyzed by linguists (Vásquez, 2011) as distinctly different from expressing negative sentiment towards an entity."
                    },
                    {
                        "id": 6,
                        "string": "Key to the definition of complaints is the expression of the breach of expectations."
                    },
                    {
                        "id": 7,
                        "string": "Table 1 shows examples of tweets highlighting the differences between complaints and sentiment."
                    },
                    {
                        "id": 8,
                        "string": "The first example expresses the writer's breach of expectations about an item that was expected to arrive, but does not express negative sentiment toward the entity, while the second shows mixed sentiment and expresses a complaint about a tax that was introduced."
                    },
                    {
                        "id": 9,
                        "string": "The third statement is an insult that implies negative sentiment, but there are not enough cues to indicate any breach of expectations; hence, this cannot be categorized as a complaint."
                    },
                    {
                        "id": 10,
                        "string": "This paper presents the first extensive analysis of complaints in computational linguistics."
                    },
                    {
                        "id": 11,
                        "string": "Our contributions include: 1."
                    },
                    {
                        "id": 12,
                        "string": "The first publicly available data set of complaints extracted from Twitter with expert annotations spanning nine domains (e.g., software, transport); 2."
                    },
                    {
                        "id": 13,
                        "string": "An extensive quantitative analysis of the syntactic, stylistic and semantic linguistic features distinctive of complaints; 3."
                    },
                    {
                        "id": 14,
                        "string": "Predictive models using a broad range of features and machine learning models, which achieve high predictive performance for identifying complaints in tweets of up to 79 F1; 4."
                    },
                    {
                        "id": 15,
                        "string": "A distant supervision approach to collect data combined with domain adaptation to boost predictive performance."
                    },
                    {
                        "id": 16,
                        "string": "Related Work Complaints have to date received significant attention in linguistics and marketing research."
                    },
                    {
                        "id": 17,
                        "string": "Olshtain and Weinbach (1987) provide one of the early definitions of a complaint as when a speaker expects a favorable event to occur or an unfavorable event to be prevented and these expectations are breached."
                    },
                    {
                        "id": 18,
                        "string": "Thus, the discrepancy between the expectations of the complainer and the reality is the key component of identifying complaints."
                    },
                    {
                        "id": 19,
                        "string": "Complaining is considered to be a distinct speech act, as defined by speech act theory (Austin, 1975; Searle, 1969) which is central to the field of pragmatics."
                    },
                    {
                        "id": 20,
                        "string": "Complaints are either addressed to the party responsible for enabling the breach of expectations (direct complaints) or indirectly mention the party (indirect complaints) (Boxer, 1993b) ."
                    },
                    {
                        "id": 21,
                        "string": "Complaints are widely considered to be among the face-threatening acts (Brown and Levinson, 1987) -acts that aim to damage the face or self-esteem of the person or entity the act is directed at."
                    },
                    {
                        "id": 22,
                        "string": "The concept of face (Goffman, 1967) represents the public image specific of each person or entity and has two aspects: positive (i.e., the desire to be liked) and negative face (i.e., the desire to not be imposed upon)."
                    },
                    {
                        "id": 23,
                        "string": "Complaints can intrinsically threaten both positive and negative face."
                    },
                    {
                        "id": 24,
                        "string": "Positive face of the responsible party is affected by having enabled the breach of expectations."
                    },
                    {
                        "id": 25,
                        "string": "Usually, when a direct complaint is made, the illocutionary function of the complaint is to request for a correction or reparation for these events."
                    },
                    {
                        "id": 26,
                        "string": "Thus, this aims to affect negative face by aiming to impose an action to be undertaken by the responsible party."
                    },
                    {
                        "id": 27,
                        "string": "Complaints usually co-occur with other speech acts such as warnings, threats, suggestions or advice (Olshtain and Weinbach, 1987; Cohen and Olshtain, 1993) ."
                    },
                    {
                        "id": 28,
                        "string": "Previous linguistics research has qualitatively examined the types of complaints elicited via discourse completion tests (DCT) (Trosborg, 1995) and in naturally occurring speech (Laforest, 2002) ."
                    },
                    {
                        "id": 29,
                        "string": "Differences in complaint strategies and expression were studied across cultures (Cohen and Olsh tain, 1993 ) and socio-demographic traits (Boxer, 1993a) ."
                    },
                    {
                        "id": 30,
                        "string": "In naturally occurring text, the discourse structure of complaints has been studied in letters to editors (Hartford and Mahboob, 2004; Ranosa-Madrunio, 2004) ."
                    },
                    {
                        "id": 31,
                        "string": "In the area of linguistic studies on computer mediated communication, Vásquez (2011) performed an analysis of 100 negative reviews on TripAdvisor, which showed that complaints in this medium often co-occur with other speech acts including positive and negative remarks, frequently make explicit references to expectations not being met and directly demand a reparation or compensation."
                    },
                    {
                        "id": 32,
                        "string": "Meinl (2013) studied complaints in eBay reviews by annotating 200 reviews in English and German with the speech act sequence that makes up each complaint e.g., warning, annoyance (the annotations are not available publicly or after contacting the authors)."
                    },
                    {
                        "id": 33,
                        "string": "Mikolov et al."
                    },
                    {
                        "id": 34,
                        "string": "(2018) analyze which financial complaints submitted to the Consumer Financial Protection Bureau will receive a timely response."
                    },
                    {
                        "id": 35,
                        "string": "Most recently, Yang et al."
                    },
                    {
                        "id": 36,
                        "string": "(2019) studied customer support dialogues and predicted if these complaints will be escalated with a government agency or made public on social media."
                    },
                    {
                        "id": 37,
                        "string": "To the best of our knowledge, the only previous work that tackles a concept defined as a complaint with computational methods is by Zhou and Ganesan (2016) which studies Yelp reviews."
                    },
                    {
                        "id": 38,
                        "string": "However, they define a complaint as a 'sentence with negative connotation with supplemental information'."
                    },
                    {
                        "id": 39,
                        "string": "This definition is not aligned with previous research in linguistics (as presented above) and represents only a minor variation on sentiment analysis."
                    },
                    {
                        "id": 40,
                        "string": "They introduce a data set of complaints, unavailable at the time of this submission, and only perform a qualitative analysis, without building predictive models for identifying complaints."
                    },
                    {
                        "id": 41,
                        "string": "Data To date, there is no available data set with annotated complaints as previously defined in linguistics (Olshtain and Weinbach, 1987) ."
                    },
                    {
                        "id": 42,
                        "string": "Thus, we create a new data set of written utterances annotated with whether they express a complaint."
                    },
                    {
                        "id": 43,
                        "string": "We use Twitter as the data source because (1) it represents a platform with high levels of self-expression; and (2) users directly interact with other users or corporate brand accounts."
                    },
                    {
                        "id": 44,
                        "string": "Tweets are openly available and represent a popular option for data selection in other related tasks such as predicting sentiment (Rosenthal et al., 2017) , affect (Mohammad et al., 2018) , emotion analysis (Mohammad and Kiritchenko, 2015) , sarcasm (González-Ibánez et al., 2011; Bamman and Smith, 2015) , stance , text-image relationship (Vempala and Preoţiuc-Pietro, 2019) or irony (Van Hee et al., 2016; Cervone et al., 2017; Van Hee et al., 2018) ."
                    },
                    {
                        "id": 45,
                        "string": "Collection We choose to manually annotate tweets in order to provide a solid benchmark to foster future research on this task."
                    },
                    {
                        "id": 46,
                        "string": "Complaints represent a minority of the total written posts on Twitter."
                    },
                    {
                        "id": 47,
                        "string": "We use a data sampling method that increases the hit rate of complaints, following previous work on labeling infrequent linguistic phenomena such as irony (Mohammad et al., 2018) ."
                    },
                    {
                        "id": 48,
                        "string": "Numerous companies use Twitter to provide customer service and address user complaints."
                    },
                    {
                        "id": 49,
                        "string": "We select tweets directed to these accounts as candidates for complaint annotation."
                    },
                    {
                        "id": 50,
                        "string": "We manually assembled a list of 93 customer service handles."
                    },
                    {
                        "id": 51,
                        "string": "Using the Twitter API, 2 we collected all the tweets that are available to download (the most recent 3,200)."
                    },
                    {
                        "id": 52,
                        "string": "We then identified all the original tweets to which the customer support handle responded."
                    },
                    {
                        "id": 53,
                        "string": "We randomly sample an equal number of tweets addressed to each customer support handle for annotation."
                    },
                    {
                        "id": 54,
                        "string": "Using this method, we collected 1,971 tweets to which the customer support handles responded."
                    },
                    {
                        "id": 55,
                        "string": "Further, we have also manually grouped the customer support handles in several high-level domains based on their industry type and area of activity."
                    },
                    {
                        "id": 56,
                        "string": "We have done this to enable analyzing complaints by domain and assess transferability of classifiers across domains."
                    },
                    {
                        "id": 57,
                        "string": "In related work on sentiment analysis, reviews for products from four different domains were collected across domains in a similar fashion (Blitzer et al., 2007) ."
                    },
                    {
                        "id": 58,
                        "string": "All customer support handles grouped by category are presented in Table 2 ."
                    },
                    {
                        "id": 59,
                        "string": "We add to our data set randomly sampled tweets to ensure that there is a more representative and diverse set of tweets for feature analysis and to ensure that the evaluation does not disproportionally contain complaints."
                    },
                    {
                        "id": 60,
                        "string": "We thus additionally sampled 1,478 tweets consisting of two groups of 739 tweets: the first group contains random tweets addressed to any other Twitter handle (at-replies) to match the initial sample, while the second group contains tweets not addressed to a Twitter handle."
                    },
                    {
                        "id": 61,
                        "string": "As preprocessing, we anonymize all usernames present in the tweet and URLs and replace them with placeholder tokens."
                    },
                    {
                        "id": 62,
                        "string": "To extract the unigrams used as features, we use DLATK, which handles social media content and markup such as emoticons or hashtags (Schwartz et al., 2017) ."
                    },
                    {
                        "id": 63,
                        "string": "Tweets were filtered for English using langid.py (Lui and Baldwin, 2012) and retweets were excluded."
                    },
                    {
                        "id": 64,
                        "string": "Annotation We create a binary annotation task for identifying if a tweet contains a complaint or not."
                    },
                    {
                        "id": 65,
                        "string": "Tweets are short and usually express a single thought."
                    },
                    {
                        "id": 66,
                        "string": "Therefore, we consider the entire tweet as a complaint if it contains at least one complaint speech act."
                    },
                    {
                        "id": 67,
                        "string": "For annotation, we adopt as the guideline a complaint definition similar to that from previous linguistic research (Olshtain and Weinbach, 1987; Cohen and Olshtain, 1993) : \"A complaint presents a state of affairs which breaches the writer's favorable expectation\"."
                    },
                    {
                        "id": 68,
                        "string": "Each tweet was labeled by two independent annotators, authors of the paper, with significant experience in linguistic annotation."
                    },
                    {
                        "id": 69,
                        "string": "After an initial calibration run of 100 tweets (later discarded from the final data set), each annotator labeled all 1,971 tweets independently."
                    },
                    {
                        "id": 70,
                        "string": "The two annotators achieved a Cohen's Kappa κ = 0.731, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) ."
                    },
                    {
                        "id": 71,
                        "string": "Disagreements were discussed and resolved between the annotators."
                    },
                    {
                        "id": 72,
                        "string": "In total, 1,232 tweets (62.4%) are complaints and 739 are not complaints (37.6%)."
                    },
                    {
                        "id": 73,
                        "string": "The statistics for each category is in Table 3 ."
                    },
                    {
                        "id": 74,
                        "string": "Features In our analysis and predictive experiments, we use the following groups of features: generic linguistic features proven to perform well in text classification tasks Preoţiuc-Pietro et al., 2017; Volkova and Bell, 2017; Preoţiuc-Pietro and Ungar, 2018 ) (unigrams, LIWC, word clusters), methods for predict-   Unigrams."
                    },
                    {
                        "id": 75,
                        "string": "We use the bag-of-words approach to represent each tweet as a TF-IDF weighted distribution over the vocabulary consisting of all words present in at least two tweets (2,641 words)."
                    },
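                    {
                        "id": "note-tfidf",
                        "string": "A minimal sketch of the TF-IDF bag-of-words representation just described, assuming scikit-learn; min_df=2 keeps only words present in at least two tweets, and the tweets variable is an assumed list of preprocessed tweet strings.\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nvectorizer = TfidfVectorizer(min_df=2)\nX_unigrams = vectorizer.fit_transform(tweets)  # sparse (n_tweets, vocabulary_size) TF-IDF matrix"
                    },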
                    {
                        "id": 76,
                        "string": "LIWC."
                    },
                    {
                        "id": 77,
                        "string": "Traditional psychology studies use dictionary-based approaches to representing text."
                    },
                    {
                        "id": 78,
                        "string": "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) consisting of 73 manually constructed lists of words (Pennebaker et al., 2015) including parts-of-speech, topical or stylistic categories."
                    },
                    {
                        "id": 79,
                        "string": "Each tweet is thus represented as a distribution over these categories."
                    },
                    {
                        "id": 80,
                        "string": "Word2Vec Clusters."
                    },
                    {
                        "id": 81,
                        "string": "An alternative to LIWC for identifying semantic themes in a tweet is to use automatically generated word clusters."
                    },
                    {
                        "id": 82,
                        "string": "These clusters can be thought of as topics i.e., groups of words that are semantically and/or syntactically similar."
                    },
                    {
                        "id": 83,
                        "string": "The clusters help reduce the feature space and provide good interpretability (Lampos et al., 2014; Lampos et al., 2016; Aletras and Chamberlain, 2018) ."
                    },
                    {
                        "id": 84,
                        "string": "We follow  to compute clusters using spectral clustering (Shi and Malik, 2000) applied to a word-word similarity matrix weighted with the cosine similarity of the corresponding word embedding vectors (Mikolov et al., 2013) ."
                    },
                    {
                        "id": 86,
                        "string": "3 For brevity and clarity, we present experiments using 200 clusters as in ."
                    },
                    {
                        "id": 87,
                        "string": "We aggregated all the words in a tweet and represent each tweet as a distribution of the fraction of tokens belonging to each cluster."
                    },
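                    {
                        "id": "note-clusters",
                        "string": "A minimal sketch of the Word2Vec cluster features described above, assuming scikit-learn and a precomputed word embedding matrix; spectral clustering is run on a cosine word-similarity matrix and each tweet is then represented as a distribution over the 200 clusters. Variable names (emb, vocab) are illustrative.\nimport numpy as np\nfrom sklearn.cluster import SpectralClustering\nfrom sklearn.metrics.pairwise import cosine_similarity\n\n# emb: (vocab_size, dim) embedding matrix; vocab: list of words, row-aligned with emb\nsim = np.clip(cosine_similarity(emb), 0, None)  # non-negative affinity matrix\nlabels = SpectralClustering(n_clusters=200, affinity='precomputed').fit_predict(sim)\nword2cluster = dict(zip(vocab, labels))\n\ndef cluster_features(tokens):\n    # fraction of a tweet's tokens that fall in each cluster\n    counts = np.zeros(200)\n    hits = [word2cluster[t] for t in tokens if t in word2cluster]\n    for c in hits:\n        counts[c] += 1\n    return counts / max(len(hits), 1)"
                    },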
                    {
                        "id": 88,
                        "string": "Part-of-Speech Tags."
                    },
                    {
                        "id": 89,
                        "string": "We analyze part-of-speech tag usage to quantify the syntactic patterns associated with complaints and to enhance the representation of unigrams."
                    },
                    {
                        "id": 90,
                        "string": "We part-of-speech tag all tweets using the Twitter model of the Stanford Tagger (Derczynski et al., 2013) ."
                    },
                    {
                        "id": 91,
                        "string": "In prediction experiments we supplement each unigram feature with their POS tag (e.g., I PRP, bought VBN)."
                    },
                    {
                        "id": 92,
                        "string": "For feature analysis, we represent each tweet as a bag-of-words distribution over part-of-speech unigrams and bigrams in order to uncover regular syntactic patterns specific of complaints."
                    },
                    {
                        "id": 93,
                        "string": "Sentiment & Emotion Models."
                    },
                    {
                        "id": 94,
                        "string": "We use existing sentiment and emotion analysis models to study their relationship to complaint annotations and to measure their predictive power on our complaint data set."
                    },
                    {
                        "id": 95,
                        "string": "If the concepts of negative sentiment and complaint were to coincide, standard sentiment prediction models that have access to larger sets of training data should be very competitive on predicting complaints."
                    },
                    {
                        "id": 96,
                        "string": "We test the following models: • MPQA: We use the MPQA sentiment lexicon (Wiebe et al., 2005) to assign a positive and negative score to each tweet based on the ratio of tokens in a tweet which appear in the positive and negative MPQA lists respectively."
                    },
                    {
                        "id": 97,
                        "string": "These scores are used as features."
                    },
                    {
                        "id": 98,
                        "string": "• NRC: We use the word lexicon derived using crowd-sourcing from Turney, 2010, 2013) for assigning to each tweet the proportion of tokens that have positive, negative and neutral sentiment, as well as one of eight emotions that include the six basic emotions of Ekman (Ekman, 1992) (anger, disgust, fear, joy, sadness and surprise) plus trust and anticipation."
                    },
                    {
                        "id": 99,
                        "string": "All scores are used as features in prediction in order to maximize their predictive power."
                    },
                    {
                        "id": 100,
                        "string": "• Volkova & Bachrach (V&B): We quantify positive, negative and neutral sentiment as well as the six Ekman emotions for each message using the model made available in (Volkova and Bachrach, 2016) and use them as features in predicting complaints."
                    },
                    {
                        "id": 101,
                        "string": "The sentiment model is trained on a data set of 19,555 tweets that combine all previously annotated tweets across seven public data sets."
                    },
                    {
                        "id": 102,
                        "string": "• VADER: We use the outcome of the rule-based sentiment analysis model which has shown very good predictive performance on predicting sentiment in tweets (Gilbert and Hutto, 2014 )."
                    },
                    {
                        "id": 103,
                        "string": "• Stanford: We quantify sentiment using the Stanford sentiment prediction model as described in (Socher et al., 2013) ."
                    },
                    {
                        "id": 104,
                        "string": "Complaint Specific Features."
                    },
                    {
                        "id": 105,
                        "string": "The features in this category are inspired by linguistic aspects specific to complaints (Meinl, 2013): • Request."
                    },
                    {
                        "id": 106,
                        "string": "The illocutionary function of complaints is often that of requesting for a correction or reparation for the event that caused the breach of expectations (Olshtain and Weinbach, 1987) ."
                    },
                    {
                        "id": 107,
                        "string": "We explicitly predict if an utterance is a request using the model introduced in (Danescu- Niculescu-Mizil et al., 2013) ."
                    },
                    {
                        "id": 108,
                        "string": "• Intensifiers."
                    },
                    {
                        "id": 109,
                        "string": "In order to increase the facethreatening effect a complaint has on the complainee, intensifiers are usually used by the person expressing the complaint (Meinl, 2013)."
                    },
                    {
                        "id": 110,
                        "string": "We use features derived from: (1) capitalization patterns often used online as an equivalent to shouting (e.g., number/percentage of capitalized words, number/percentage of words starting with capitals, number/percentage of capitalized letters); and (2) repetitions of exclamation marks, question marks or letters within the same token."
                    },
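                    {
                        "id": "note-intensifiers",
                        "string": "A minimal sketch of the capitalization and repetition features described above; the exact regexes and feature names are illustrative assumptions.\nimport re\n\ndef intensifier_features(text):\n    tokens = text.split()\n    n = max(len(tokens), 1)\n    return {\n        'pct_all_caps_words': sum(1 for t in tokens if t.isupper() and len(t) > 1) / n,\n        'pct_initial_caps_words': sum(1 for t in tokens if t[:1].isupper()) / n,\n        'n_capital_letters': sum(1 for ch in text if ch.isupper()),\n        'n_exclamation_runs': len(re.findall(r'!{2,}', text)),\n        'n_question_runs': len(re.findall(r'\\?{2,}', text)),\n        'n_letter_repetitions': len(re.findall(r'(\\w)\\1{2,}', text)),\n    }"
                    },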
                    {
                        "id": 111,
                        "string": "• Downgraders and Politeness Markers."
                    },
                    {
                        "id": 112,
                        "string": "In contrast to intensifiers, downgrading modifiers are used to reduce the face-threat involved when voicing a complaint, usually as part of a strategy to obtain a reparation for the breach of ex-pectation (Meinl, 2013) ."
                    },
                    {
                        "id": 113,
                        "string": "Downgraders are coded by several dictionaries: play down (e.g., i wondered if ), understaters (e.g., one little), disarmers (e.g., but), downtoners (e.g., just) and hedges (e.g., somewhat)."
                    },
                    {
                        "id": 114,
                        "string": "Politeness markers have a similar effect to downgraders and include apologies (e.g., sorry), greetings at the start, direct questions, direct start (e.g., so), indicative modals (e.g., can you), subjunctive modals (e.g., could you), politeness markers (e.g., please) (Svarova, 2008) and politeness maxims (e.g., i must say)."
                    },
                    {
                        "id": 115,
                        "string": "Finally, we directly predict the politeness score of the tweet using the model presented in (Danescu-Niculescu-Mizil et al., 2013) ."
                    },
                    {
                        "id": 116,
                        "string": "• Temporal References."
                    },
                    {
                        "id": 117,
                        "string": "Temporal references are often used in complaints to stress how long a complainer has been waiting for a correction or reparation from the addressee or to provide context for their complaint (e.g., mentioning the date in which they have bought an item) (Meinl, 2013) ."
                    },
                    {
                        "id": 118,
                        "string": "We identify time expressions in tweets using SynTime, which achieved state-of-the-art results across on several benchmark data sets (Zhong et al., 2017) ."
                    },
                    {
                        "id": 119,
                        "string": "We represent temporal expressions both as days elapsed relative to the day of the post and in buckets of different granularities (one day, week, month, year)."
                    },
                    {
                        "id": 120,
                        "string": "• Pronoun Types."
                    },
                    {
                        "id": 121,
                        "string": "Pronouns are used in complaints to reveal the personal involvement or opinion of the complainer and intensify or reduce the face-threat of the complaint based on the person or type of the pronoun (Claridge, 2007; Meinl, 2013) ."
                    },
                    {
                        "id": 122,
                        "string": "We split pronouns using dictionaries into: first person, second person, third person, demonstrative (e.g., this) and indefinite (e.g., everybody)."
                    },
                    {
                        "id": 123,
                        "string": "Linguistic Feature Analysis This section presents a quantitative analysis of the linguistic features distinctive of tweets containing complains in order to gain linguistic insight into this task and data."
                    },
                    {
                        "id": 124,
                        "string": "We perform analysis of all previously described feature sets using univariate Pearson correlation (Schwartz et al., 2013) ."
                    },
                    {
                        "id": 125,
                        "string": "We compute correlations independently for each feature between its distribution across messages (features are first normalized to sum up to unit for each message) and a variable encoding if the tweet was annotated as a complaint or not."
                    },
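                    {
                        "id": "note-correlation",
                        "string": "A minimal sketch of this univariate analysis, assuming scipy: each feature column is normalized to unit sum per message, then correlated with the binary complaint label (the Simes correction applied in the paper is omitted here).\nimport numpy as np\nfrom scipy.stats import pearsonr\n\ndef feature_correlations(F, y):\n    # F: (n_messages, n_features) raw feature counts; y: binary complaint labels\n    F = F / np.maximum(F.sum(axis=1, keepdims=True), 1e-12)  # unit sum per message\n    return [pearsonr(F[:, j], y) for j in range(F.shape[1])]  # (r, p-value) per feature"
                    },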
                    {
                        "id": 126,
                        "string": "Top unigrams and part-of-speech features specific of complaints and non-complaints are presented in  categories and Word2Vec topics are presented in Table 5 ."
                    },
                    {
                        "id": 127,
                        "string": "All correlations shown in these tables are statistically significant at p < .01, with Simes correction for multiple comparisons."
                    },
                    {
                        "id": 128,
                        "string": "Negations."
                    },
                    {
                        "id": 129,
                        "string": "Negations are uncovered through unigrams (not, no, won't) and the top LIWC category (NEGATE)."
                    },
                    {
                        "id": 130,
                        "string": "Central to complaining is the concept of breached expectations."
                    },
                    {
                        "id": 131,
                        "string": "Hence the complainers use negations to express this discrepancy and to describe their experience with the product or service that caused this."
                    },
                    {
                        "id": 132,
                        "string": "Issues."
                    },
                    {
                        "id": 133,
                        "string": "Several unigrams (error, issue, working, fix) and a cluster (Issues) contain words referring to issues or errors."
                    },
                    {
                        "id": 134,
                        "string": "However, words regularly describing negative sentiment or emotions are not one of the most distinctive features for complaints."
                    },
                    {
                        "id": 135,
                        "string": "On the other hand, the presence of terms that show positive sentiment or emotions (good, great, win, POSEMO, AFFECT, ASSENT) are among the top most distinctive features for a tweet not being la-beled as a complaint."
                    },
                    {
                        "id": 136,
                        "string": "In addition, other words and clusters expressing positive states such as gratitude (thank, great, love) or laughter (lol) are also distinctive for tweets that are not complaints."
                    },
                    {
                        "id": 137,
                        "string": "Linguistics research on complaints in longer documents identified that complaints are likely to co-occur with other speech acts, including with expressions of positive or negative emotions (Vásquez, 2011) ."
                    },
                    {
                        "id": 138,
                        "string": "In our data set, perhaps due to the particular nature of Twitter communication and the character limit, complainers are much more likely to not express positive sentiment in a complaint and do not regularly post negative sentiment."
                    },
                    {
                        "id": 139,
                        "string": "Instead, they choose to focus more on describing the issue regarding the service or product in an attempt to have it resolved."
                    },
                    {
                        "id": 140,
                        "string": "Pronouns."
                    },
                    {
                        "id": 141,
                        "string": "Across unigrams, part-of-speech patterns and word clusters, we see a distinctive pattern emerging around pronoun usage."
                    },
                    {
                        "id": 142,
                        "string": "Complaints use more possessive pronouns, indicating that the user is describing personal experiences."
                    },
                    {
                        "id": 143,
                        "string": "A distinctive part-of-speech pattern common in complaints is possessive pronouns followed by nouns (PRP$ NN) which refer to items of services possessed by the complainer (e.g., my account, my order)."
                    },
                    {
                        "id": 144,
                        "string": "Complaints tend to not contain personal pronouns (he, she, it, him, you, SHEHE, MALE, FE-MALE), as the focus on expressing the complaint is on the self and the party the complaint is addressed to and not other third parties."
                    },
                    {
                        "id": 145,
                        "string": "Punctuation."
                    },
                    {
                        "id": 146,
                        "string": "Question marks are distinctive of complaints, as many complaints are formulated as questions to the responsible party (e.g., why is this not working?, when will I get my response?)."
                    },
                    {
                        "id": 147,
                        "string": "Complaints are not usually accompanied by exclamation marks."
                    },
                    {
                        "id": 148,
                        "string": "Although exclamation marks are regularly used for emphasis in the context of complaints, most complainers in our data set prefer not to use them perhaps in an attempt to address them in a less confrontational manner."
                    },
                    {
                        "id": 149,
                        "string": "Temporal References."
                    },
                    {
                        "id": 150,
                        "string": "Mentions of time are specific of complaints (been, still, on, days, Temporal References cluster)."
                    },
                    {
                        "id": 151,
                        "string": "Their presence is usually needed to provide context for the event that caused the breach of expectations."
                    },
                    {
                        "id": 152,
                        "string": "Another role of temporal references is to express dissatisfaction towards non-responsiveness of the responsible party in addressing their previous requests."
                    },
                    {
                        "id": 153,
                        "string": "In addition, the presence of verbs in past participle (VBN) is the most distinctive part-of-speech pattern of complaints."
                    },
                    {
                        "id": 154,
                        "string": "These are used to describe actions com- NEGATE not, no, can't, don't, never, nothing, doesn't, won't .271 POSEMO thanks, love, thank, good, great, support, lol, win .185 RELATIV in, on, when, at, out, still, now, up, back, new .225 AFFECT thanks, love, thank, good, great, support, lol .111 FUNCTION the, i, to, a, my, and, you, for, is, in .204 SHEHE he, his, she, her, him, he's, himself .105 TIME when, still, now, back, new, never, after, then, waiting .186 MALE he, his, man, him, sir, he's, son .086 DIFFER not, but, if, or, can't, really, than, other, haven't .169 FEMALE she, her, girl, mom, ma, lady, mother, Table 5 : Group text features associated with tweets that are complaints and not complaints."
                    },
                    {
                        "id": 155,
                        "string": "Features are sorted by Pearson correlation (r) between their each feature's normalized frequency and the outcome."
                    },
                    {
                        "id": 156,
                        "string": "We restrict to only the top six categories for each feature type."
                    },
                    {
                        "id": 157,
                        "string": "All correlations are significant at p < .01, two-tailed t-test, Simes corrected."
                    },
                    {
                        "id": 158,
                        "string": "Within each cluster, words are sorted by frequency in our data set."
                    },
                    {
                        "id": 159,
                        "string": "Labels for Word2Vec clusters are assigned by the authors."
                    },
                    {
                        "id": 160,
                        "string": "pleted in the past (e.g., i've bought, have come) in order to provide context for the complaint."
                    },
                    {
                        "id": 161,
                        "string": "Verbs."
                    },
                    {
                        "id": 162,
                        "string": "Several part-of-speech patterns distinctive of complaints involve present verbs in third person singular (VBZ)."
                    },
                    {
                        "id": 163,
                        "string": "In general, these verbs are used in complaints to reference an action that the author expects to happen, but his expectations are breached (e.g., nobody is answering)."
                    },
                    {
                        "id": 164,
                        "string": "Verbs in gerund or present participle are used as a complaint strategy to describe things that just happened to a user (e.g., got an email saying my service will be terminated)."
                    },
                    {
                        "id": 165,
                        "string": "Topics."
                    },
                    {
                        "id": 166,
                        "string": "General topics typical of complaint tweets include requiring assistance or customer support."
                    },
                    {
                        "id": 167,
                        "string": "Several groups of words are much more likely to appear in a complaint, although not used to express complaints per se: about orders or deliveries (in the retail domain), about access (in complaints to service providers) and about parts of tech products (in tech)."
                    },
                    {
                        "id": 168,
                        "string": "This is natural, as people are more likely to deliberately tweet about an order or tech parts if they want to complain about them."
                    },
                    {
                        "id": 169,
                        "string": "This is similar to sentiment analysis, where not only emotionally valenced words are predictive of sentiment."
                    },
                    {
                        "id": 170,
                        "string": "Predicting Complaints In this section, we experiment with different approaches to build predictive models of complaints from text content alone."
                    },
                    {
                        "id": 171,
                        "string": "We first experiment with feature based approaches including Logistic Regression classification with Elastic Net regularization (LR) (Zou and Hastie, 2005) ."
                    },
                    {
                        "id": 172,
                        "string": "4 We train the classifiers with all individual feature types."
                    },
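                    {
                        "id": "172a",
                        "string": "A minimal sklearn sketch of such a classifier (our illustration, not the released code; the regularization strength C and l1_ratio shown are placeholders, not the paper's tuned values):\nfrom sklearn.linear_model import LogisticRegression\n\n# Elastic net mixes L1 and L2 penalties; 'saga' is the sklearn solver that supports it.\nclf = LogisticRegression(penalty='elasticnet', solver='saga',\n                         l1_ratio=0.5, C=1.0, max_iter=5000)\n# clf.fit(X_train, y_train); clf.predict(X_test)"
                    },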
                    {
                        "id": 173,
                        "string": "Neural Methods."
                    },
                    {
                        "id": 174,
                        "string": "For reference, we experiment with two neural architectures."
                    },
                    {
                        "id": 175,
                        "string": "In both architectures, tweets are represented as sequences of onehot word vectors which are first mapped into embeddings."
                    },
                    {
                        "id": 176,
                        "string": "A multi-layer perceptron (MLP) network (Hornik et al., 1989) feeds the embedded representation (E = 200) of the tweet (mean embedding of its constituent words) into a dense hidden layer (D = 100) followed by a ReLU activation function and dropout (0.2)."
                    },
                    {
                        "id": 177,
                        "string": "The output layer is one dimensional dense layer with a sigmoid activation function."
                    },
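                    {
                        "id": "177a",
                        "string": "A Keras sketch of the MLP just described (our illustration; the paper does not specify a framework, and the Adam learning rate of 0.01 and binary cross-entropy loss are taken from the training details given below):\nfrom tensorflow.keras import layers, models, optimizers\n\nmlp = models.Sequential([\n    layers.Input(shape=(200,)),             # mean embedding of the tweet's words (E = 200)\n    layers.Dense(100, activation='relu'),   # dense hidden layer (D = 100) with ReLU\n    layers.Dropout(0.2),                    # dropout (0.2)\n    layers.Dense(1, activation='sigmoid'),  # one-dimensional sigmoid output\n])\nmlp.compile(optimizer=optimizers.Adam(learning_rate=0.01),\n            loss='binary_crossentropy')"
                    },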
                    {
                        "id": 178,
                        "string": "The second architecture, a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network, processes sequentially the tweet by modeling one word (embedding) at each time step followed by the same output layer as in MLP."
                    },
                    {
                        "id": 179,
                        "string": "The size of the hidden state of the LSTM is L = 50."
                    },
                    {
                        "id": 180,
                        "string": "We train the networks using the Adam optimizer (Kingma and Ba, 2014) (learning rate is set to 0.01) by minimizing the binary cross-entropy."
                    },
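                    {
                        "id": "180a",
                        "string": "A matching sketch of the LSTM variant (again our illustration; vocab_size and max_len are placeholders the paper does not specify):\nfrom tensorflow.keras import layers, models, optimizers\n\nvocab_size, max_len = 20000, 50  # placeholder values\nlstm = models.Sequential([\n    layers.Input(shape=(max_len,)),         # tweet as a sequence of word indices\n    layers.Embedding(vocab_size, 200),      # word embeddings (E = 200)\n    layers.LSTM(50),                        # hidden state size L = 50\n    layers.Dense(1, activation='sigmoid'),  # same output layer as the MLP\n])\nlstm.compile(optimizer=optimizers.Adam(learning_rate=0.01),\n             loss='binary_crossentropy')"
                    },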
                    {
                        "id": 181,
                        "string": "Experimental Setup."
                    },
                    {
                        "id": 182,
                        "string": "We conduct experiments using a nested stratified 10-fold cross-validation, where nine folds are used for training and one for testing (i.e., outer loop)."
                    },
                    {
                        "id": 183,
                        "string": "In the inner loop, we choose the model parameters 5 using a 3fold cross-validation on the tweets from the nine folds of training data (from the outer loop)."
                    },
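                    {
                        "id": "183a",
                        "string": "A sklearn sketch of this nested setup (our illustration; the parameter grid is a placeholder, not the paper's actual search space, and X, y stand for the annotated feature matrix and labels):\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import StratifiedKFold, GridSearchCV, cross_validate\n\nclf = LogisticRegression(penalty='elasticnet', solver='saga',\n                         l1_ratio=0.5, max_iter=5000)\nouter = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)\ninner = GridSearchCV(clf,  # 3-fold inner loop tunes hyperparameters on training folds\n                     param_grid={'C': [0.01, 0.1, 1, 10],\n                                 'l1_ratio': [0.25, 0.5, 0.75]},\n                     cv=3, scoring='f1_macro')\nscores = cross_validate(inner, X, y, cv=outer,  # 10-fold outer loop for evaluation\n                        scoring=['accuracy', 'f1_macro', 'roc_auc'])"
                    },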
                    {
                        "id": 184,
                        "string": "Train/dev/test splits for each experiment are released together with the data for replicability."
                    },
                    {
                        "id": 185,
                        "string": "We report predictive performance of the models as the mean accuracy, F1 (macro-averaged) and ROC AUC over the 10 folds (Dietterich, 1998) ."
                    },
                    {
                        "id": 186,
                        "string": "Results."
                    },
                    {
                        "id": 187,
                        "string": "Results are presented in Table 6 ."
                    },
                    {
                        "id": 188,
                        "string": "Most sentiment analysis models show accuracy above chance in predicting complaints."
                    },
                    {
                        "id": 189,
                        "string": "The best results are obtained by the Volkova & Bachrach model (Sentiment -V&B) which achieves 60 F1."
                    },
                    {
                        "id": 190,
                        "string": "However, models trained using linguistic features on the training data obtain significantly higher predictive accuracy."
                    },
                    {
                        "id": 191,
                        "string": "Complaint specific features are predictive of complaints, but to a smaller extent than sentiment, reaching an overall 55.2 F1."
                    },
                    {
                        "id": 192,
                        "string": "From this group of features, the most predictive groups are intensifiers and downgraders."
                    },
                    {
                        "id": 193,
                        "string": "Syntactic part-ofspeech features alone obtain higher performance than any sentiment or complaint feature group, showing the syntactic patterns discussed in the previous section hold high predictive accuracy for the task."
                    },
                    {
                        "id": 194,
                        "string": "The topical features such as the LIWC dictionaries (which combine syntactic and semantic information) and Word2Vec topics perform in the same range as the part of speech tags."
                    },
                    {
                        "id": 195,
                        "string": "However, best predictive performance is obtained using bag-of-word features, reaching an F1 of up to 77.5 and AUC of 0.866."
                    },
                    {
                        "id": 196,
                        "string": "Further, combining all features boosts predictive accuracy to 78 F1 and 0.864 AUC."
                    },
                    {
                        "id": 197,
                        "string": "We notice that neural network approaches are comparable, but do not outperform the best performing feature-based model, likely in part due to the training data size."
                    },
                    {
                        "id": 198,
                        "string": "Distant Supervision."
                    },
                    {
                        "id": 199,
                        "string": "We explore the idea of identifying extra complaint data using distant supervision to further boost predictive performance."
                    },
                    {
                        "id": 200,
                        "string": "Previous work has demonstrated improvements on related tasks relying on weak supervision e.g., in the form of tweets with related hashtags (Bamman and Smith, 2015; Volkova and Bachrach, 2016; Cliche, 2017) ."
                    },
                    {
                        "id": 201,
                        "string": "Following the same procedure, seven hashtags were identified with the help of the training data to likely correspond to complaints: #appallingcustomercare, #badbusiness, #badcustomerserivice, #badservice, #lostbusiness, #unhappycustomer, #worstbrand."
                    },
                    {
                        "id": 202,
                        "string": "Tweets were collected to contain these hashtags from a combination of the 1% Twitter archive between 2012-2018 and by filtering tweets with these hashtags in real-time from Twitter REST API for three months."
                    },
                    {
                        "id": 203,
                        "string": "We collected in total 18,218 tweets (excluding retweets and duplicates) equated to complaints."
                    },
                    {
                        "id": 204,
                        "string": "As negative complaint examples, the same amount of tweets were sampled randomly from the same time interval."
                    },
                    {
                        "id": 205,
                        "string": "All hashtags were removed and the data was preprocessed identically as the annotated data set."
                    },
                    {
                        "id": 206,
                        "string": "We experiment with two techniques for combining distantly supervised data with our annotated data."
                    },
                    {
                        "id": 207,
                        "string": "First, the tweets obtained through distant supervision are simply added to the annotated training data in each fold (Pooling)."
                    },
                    {
                        "id": 208,
                        "string": "Secondly, as important signal may be washed out if the features are joined across both domains, we experiment with domain adaptation using the popular EasyAdapt algorithm (Daumé III, 2007) (EasyAdapt)."
                    },
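                    {
                        "id": "208a",
                        "string": "EasyAdapt is a feature-augmentation scheme: each example gets a general copy of its features plus a domain-specific copy, letting the model learn shared and per-domain weights. A dense numpy sketch (our illustration; sparse matrices would be used in practice):\nimport numpy as np\n\ndef easyadapt(X, is_source):\n    # Daume III (2007): source rows become [x, x, 0], target rows [x, 0, x].\n    src = np.where(is_source[:, None], X, 0.0)  # source-specific copy\n    tgt = np.where(is_source[:, None], 0.0, X)  # target-specific copy\n    return np.hstack([X, src, tgt])             # triples the feature space"
                    },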
                    {
                        "id": 209,
                        "string": "Experiments use logistic regression with bag-of-word features enhanced with part-ofspeech tags, because these performed best in the previous experiment."
                    },
                    {
                        "id": 210,
                        "string": "Table 7 show that the domain adaptation approach further boosts F1 by 1 point to 79 (t-test, p<0.5) and ROC AUC by 0.012."
                    },
                    {
                        "id": 211,
                        "string": "However, simply pooling the data actually hurts predictive performance leading to a drop of more than 2 points in F1."
                    },
                    {
                        "id": 212,
                        "string": "Results presented in Domain Experiments We assess the performance of models trained using the best method and features by using in training: (1) using only indomain data (In-Domain); (2) adding out-ofdomain data into the training set (Pooling); and (3) combining in-and out-of-domain data with EasyAdapt domain adaptation (EasyAdapt)."
                    },
                    {
                        "id": 213,
                        "string": "The experimental setup is identical to the one described in the previous experiments."
                    },
                    {
                        "id": 214,
                        "string": "Table 8 shows the model performance in macro-averaged F1 using the best performing feature set."
                    },
                    {
                        "id": 215,
                        "string": "Results show that, in all but one case, adding out-of-domain data helps predictive performance."
                    },
                    {
                        "id": 216,
                        "string": "The apparel domain is qualitatively very different from the others as a large number of complaints are about returns or the company not stocking items, hence leading to different features being important for prediction."
                    },
                    {
                        "id": 217,
                        "string": "Domain adaptation is beneficial the majority of domains, lowering performance on a single domain compared to data pooling."
                    },
                    {
                        "id": 218,
                        "string": "This highlights the differences in expressing complaints across domains."
                    },
                    {
                        "id": 219,
                        "string": "Overall, predictive performance is high across all domains, with the exception of transport."
                    },
                    {
                        "id": 220,
                        "string": "Cross Domain Experiments Finally, Table 9 presents the results of models trained on tweets from one domain and tested on all tweets from other domains, with additional models trained on tweets from all domains except the one that the model is tested on."
                    },
                    {
                        "id": 221,
                        "string": "We observe that predictive performance is relatively consistent across all domains with two exceptions ('Food & Beverage' consistently shows lower performance, while 'Other' achieves higher performance) when using all the data available from the other domains."
                    },
                    {
                        "id": 222,
                        "string": "Conclusions & Future Work We presented the first computational approach using methods from computational linguistics and machine learning to modeling complaints as de-  fined in prior studies in linguistics and pragmatics (Olshtain and Weinbach, 1987) ."
                    },
                    {
                        "id": 223,
                        "string": "To this end, we introduced the first data set consisting of English Twitter posts annotated with complaints across nine domains."
                    },
                    {
                        "id": 224,
                        "string": "We analyzed the syntactic patterns and linguistic markers specific of complaints."
                    },
                    {
                        "id": 225,
                        "string": "Then, we built predictive models of complaints in tweets using a wide range of features reaching up to 79% Macro F1 (0.885 AUC) and conducted experiments using distant supervision and domain adaptation to boost predictive performance."
                    },
                    {
                        "id": 226,
                        "string": "We studied performance of complaint prediction models on each individual domain and presented results with a domain adaptation approach which overall improves predictive accuracy."
                    },
                    {
                        "id": 227,
                        "string": "All data and code is available to the research community to foster further research on complaints."
                    },
                    {
                        "id": 228,
                        "string": "A predictive model for identification of complaints is useful to companies that wish to automatically gather and analyze complaints about a particular event or product."
                    },
                    {
                        "id": 229,
                        "string": "This would allow them to improve efficiency in customer service or to more cheaply gauge popular opinion in a timely manner in order to identify common issues around a product launch or policy proposal."
                    },
                    {
                        "id": 230,
                        "string": "In the future, we plan to identify the target of the complaint in a similar way to aspect-based sentiment analysis (Pontiki et al., 2016) ."
                    },
                    {
                        "id": 231,
                        "string": "We plan to use additional context and conversational structure to improve performance and identify the sociodemographic covariates of expressing and phrasing complaints."
                    },
                    {
                        "id": 232,
                        "string": "Another research direction is to study the role of complaints in personal conversation or in the political domain, e.g., predicting political stance in elections (Tsakalidis et al., 2018) ."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 15
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 16,
                        "end": 40
                    },
                    {
                        "section": "Data",
                        "n": "3",
                        "start": 41,
                        "end": 44
                    },
                    {
                        "section": "Collection",
                        "n": "3.1",
                        "start": 45,
                        "end": 63
                    },
                    {
                        "section": "Annotation",
                        "n": "3.2",
                        "start": 64,
                        "end": 73
                    },
                    {
                        "section": "Features",
                        "n": "4",
                        "start": 74,
                        "end": 122
                    },
                    {
                        "section": "Linguistic Feature Analysis",
                        "n": "5",
                        "start": 123,
                        "end": 169
                    },
                    {
                        "section": "Predicting Complaints",
                        "n": "6",
                        "start": 170,
                        "end": 221
                    },
                    {
                        "section": "Conclusions & Future Work",
                        "n": "7",
                        "start": 222,
                        "end": 232
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1152-Table1-1.png",
                        "caption": "Table 1: Examples of tweets annotated for complaint (C) and sentiment (S).",
                        "page": 0,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 524.16,
                            "y1": 221.76,
                            "y2": 279.36
                        }
                    },
                    {
                        "filename": "../figure/image/1152-Table4-1.png",
                        "caption": "Table 4: Features associated with complaint and noncomplaint tweets, sorted by Pearson correlation (r) computed between the normalized frequency of each feature and the complaint label across all tweets. All correlations are significant at p < .01, two-tailed t-test, Simes corrected.",
                        "page": 5,
                        "bbox": {
                            "x1": 78.72,
                            "x2": 283.2,
                            "y1": 62.879999999999995,
                            "y2": 412.32
                        }
                    },
                    {
                        "filename": "../figure/image/1152-Table5-1.png",
                        "caption": "Table 5: Group text features associated with tweets that are complaints and not complaints. Features are sorted by Pearson correlation (r) between their each feature’s normalized frequency and the outcome. We restrict to only the top six categories for each feature type. All correlations are significant at p < .01, two-tailed t-test, Simes corrected. Within each cluster, words are sorted by frequency in our data set. Labels for Word2Vec clusters are assigned by the authors.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 525.12,
                            "y1": 63.839999999999996,
                            "y2": 179.04
                        }
                    },
                    {
                        "filename": "../figure/image/1152-Table6-1.png",
                        "caption": "Table 6: Complaint prediction results using logistic regression (with different types of linguistic features), neural network approaches and the most frequent class baseline. Best results are in bold.",
                        "page": 7,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 63.839999999999996,
                            "y2": 308.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1152-Table7-1.png",
                        "caption": "Table 7: Complaint prediction results using the original data set and distantly supervised data. All models are based on logistic regression with bag-of-word and Partof-Speech tag features.",
                        "page": 7,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 524.16,
                            "y1": 63.839999999999996,
                            "y2": 123.36
                        }
                    },
                    {
                        "filename": "../figure/image/1152-Table2-1.png",
                        "caption": "Table 2: List of customer support handles by domain. The domain is chosen based on the most frequent product or service the account usually receives complaints about (e.g., NikeSupport receives most complaints about the Nike Fitness Bands).",
                        "page": 3,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 525.12,
                            "y1": 62.879999999999995,
                            "y2": 156.96
                        }
                    },
                    {
                        "filename": "../figure/image/1152-Table3-1.png",
                        "caption": "Table 3: Number of tweets annotated as complaints across the nine domains.",
                        "page": 3,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 289.44,
                            "y1": 214.56,
                            "y2": 313.92
                        }
                    },
                    {
                        "filename": "../figure/image/1152-Table8-1.png",
                        "caption": "Table 8: Performance of models in Macro F1 on tweets from each domain.",
                        "page": 8,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 289.44,
                            "y1": 62.879999999999995,
                            "y2": 159.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1152-Table9-1.png",
                        "caption": "Table 9: Performance of models trained with tweets from one domain and tested on other domains. All results are reported in ROC AUC. The All line displays results on training on all categories except the category in testing.",
                        "page": 8,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 62.879999999999995,
                            "y2": 163.2
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-40"
        },
        {
            "slides": {
                "0": {
                    "title": "Objective",
                    "text": [
                        "What geometric properties of an embedding space are important for performance on a given task?",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019",
                        "Understand utility of embeddings as input features.",
                        "Provide direction for future work in training and tuning embeddings."
                    ],
                    "page_nums": [
                        1,
                        2
                    ],
                    "images": []
                },
                "1": {
                    "title": "Embedding space",
                    "text": [
                        "In NLP, the term embedding is often used to denote both a map and (an element of) its image.",
                        "We define an embedding space as a set of word vectors in Rd.",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Geometric properties",
                    "text": [
                        "We consider the following attributes of word embedding geometry: position relative to the origin; distribution of feature values in Rd; global pairwise distances; local pairwise distances.",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Our approach",
                    "text": [
                        "We transform the embedding space such that we expose only a subset of the stated properties to downstream models.",
                        "position relative to the origin; distribution of feature values in Rd; global pairwise distances; local pairwise distances.",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Affine",
                    "text": [
                        "pos. relative to the origin distribution of features global distances local distances",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Cosine distance embedding CDE",
                    "text": [
                        "d embedding dimension (300);",
                        "|V distance vector dimension (104 most",
                        "pos. relative to the origin distribution of features global distances local distances",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "6": {
                    "title": "Nearest neighbor embedding NNE",
                    "text": [
                        "pos. relative to the origin distribution of features global distances local distances",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "7": {
                    "title": "Hierarchy of transformations",
                    "text": [
                        "Ordering is with respect to number of properties ablated.",
                        "We include a random baseline of meaningless vectors.",
                        "Arrow length does not mean anything.",
                        "Transformations are applied independently to the original embeddings.",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "8": {
                    "title": "Embeddings and Tasks",
                    "text": [
                        "Word2Vec on Google news;",
                        "GloVe on common crawl;",
                        "10 standard intrinsic tasks.",
                        "5 extrinsic tasks (embeddings plugged into a downstream machine learning model).",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "9": {
                    "title": "Tasks",
                    "text": [
                        "Word Similarity and Relatedness via cosine distance",
                        "Sentence-level sentiment polarity classif. on MR movie reviews",
                        "Sentiment classif. on IMDB reviews",
                        "Subj./Obj. classif. on Rotten",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "10": {
                    "title": "Results intrinsic tasks",
                    "text": [
                        "We see the lowest performance on thresholded-NNE.",
                        "Largest drop in performance at",
                        "CDE (written as distAE on the",
                        "Rotations, dilations, and reflections are innocuous.",
                        "Displacing the origin has a nontrivial effect.",
                        "NNE causes a significant drop in performance as well.",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "11": {
                    "title": "Results extrinsic tasks",
                    "text": [
                        "CDE is still the largest drop.",
                        "NNE recover most of the losses, and are on par with affines.",
                        "Extrinsic tasks are more robust to translations, but not homotheties.",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "12": {
                    "title": "Discussion",
                    "text": [
                        "Drop due to CDE likely associated with the importance of locality in embedding learning.",
                        "With thresholded-NNE, high out-degree words are rare words, introducing noise during node2vecs random walk.",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "13": {
                    "title": "Takeaways",
                    "text": [
                        "We find that in general, both intrinsic and extrinsic models rely heavily on local similarity, as opposed to global distance information.",
                        "We also find that intrinsic models are more sensitive to absolute position than extrinsic ones.",
                        "Methods for tuning and training should focus on local geometric structure in Rd.",
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "14": {
                    "title": "Questions",
                    "text": [
                        "Whitaker, Newman-Griffis, Haldar, et al. Characterizing Embedding Geometry June 4, 2019"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                }
            },
            "paper_title": "Characterizing the Impact of Geometric Properties of Word Embeddings on Task Performance",
            "paper_id": "1155",
            "paper": {
                "title": "Characterizing the Impact of Geometric Properties of Word Embeddings on Task Performance",
                "abstract": "Analysis of word embedding properties to inform their use in downstream NLP tasks has largely been studied by assessing nearest neighbors. However, geometric properties of the continuous feature space contribute directly to the use of embedding features in downstream models, and are largely unexplored. We consider four properties of word embedding geometry, namely: position relative to the origin, distribution of features in the vector space, global pairwise distances, and local pairwise distances. We define a sequence of transformations to generate new embeddings that expose subsets of these properties to downstream models and evaluate change in task performance to understand the contribution of each property to NLP models. We transform publicly available pretrained embeddings from three popular toolkits (word2vec, GloVe, and FastText) and evaluate on a variety of intrinsic tasks, which model linguistic information in the vector space, and extrinsic tasks, which use vectors as input to machine learning models. We find that intrinsic evaluations are highly sensitive to absolute position, while extrinsic tasks rely primarily on local similarity. Our findings suggest that future embedding models and post-processing techniques should focus primarily on similarity to nearby points in vector space.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Learned vector representations of words, known as word embeddings, have become ubiquitous throughout natural language processing (NLP) applications."
                    },
                    {
                        "id": 1,
                        "string": "As a result, analysis of embedding spaces to understand their utility as input features has emerged as an important avenue of inquiry, in order to facilitate proper use of embeddings in downstream NLP tasks."
                    },
                    {
                        "id": 2,
                        "string": "Many analyses have focused on nearest neighborhoods, as a viable proxy for semantic information (Rogers et al."
                    },
                    {
                        "id": 3,
                        "string": ", * These authors contributed equally to this work."
                    },
                    {
                        "id": 5,
                        "string": "However, neighborhood-based analysis is limited by the unreliability of nearest neighborhoods (Wendlandt et al., 2018) ."
                    },
                    {
                        "id": 6,
                        "string": "Further, it is intended to evaluate the semantic content of embedding spaces, as opposed to characteristics of the feature space itself."
                    },
                    {
                        "id": 7,
                        "string": "Geometric analysis offers another recent angle from which to understand the properties of word embeddings, both in terms of their distribution (Mimno and Thompson, 2017) and correlation with downstream performance (Chandrahas et al., 2018) ."
                    },
                    {
                        "id": 8,
                        "string": "Through such geometric investigations, neighborhood-based semantic characterizations are augmented with information about the continuous feature space of an embedding."
                    },
                    {
                        "id": 9,
                        "string": "Geometric features offer a more direct connection to the assumptions made by neural models about continuity in input spaces (Szegedy et al., 2014) , as well as the use of recent contextualized representation methods using continuous language models (Peters et al., 2018; Devlin et al., 2018) ."
                    },
                    {
                        "id": 10,
                        "string": "In this work, we aim to bridge the gap between neighborhood-based semantic analysis and geometric performance analysis."
                    },
                    {
                        "id": 11,
                        "string": "We consider four components of the geometry of word embeddings, and transform pretrained embeddings to expose only subsets of these components to downstream models."
                    },
                    {
                        "id": 12,
                        "string": "We transform three popular sets of embeddings, trained using word2vec (Mikolov et al., 2013) , 1 GloVe (Pennington et al., 2014) , 2 and FastText (Bojanowski et al., 2017) , 3 and use the resulting embeddings in a battery of standard evaluations to measure changes in task performance."
                    },
                    {
                        "id": 13,
                        "string": "We find that intrinsic evaluations, which model linguistic information directly in the vector space, are highly sensitive to absolute position in pretrained embeddings; while extrinsic tasks, in which word embeddings are passed as input features to a trained model, are more robust and rely primarily on information about local similarity between word vectors."
                    },
                    {
                        "id": 14,
                        "string": "Our findings, including evidence that global organization of word vectors is often a major source of noise, suggest that further development of embedding learning and tuning methods should focus explicitly on local similarity, and help to explain the success of several recent methods."
                    },
                    {
                        "id": 15,
                        "string": "Related Work Word embedding models and outputs have been analyzed from several angles."
                    },
                    {
                        "id": 16,
                        "string": "In terms of performance, evaluating the \"quality\" of word embedding models has long been a thorny problem."
                    },
                    {
                        "id": 17,
                        "string": "While intrinsic evaluations such as word similarity and analogy completion are intuitive and easy to compute, they are limited by both confounding geometric factors (Linzen, 2016) and task-specific factors (Faruqui et al., 2016; Rogers et al., 2017) ."
                    },
                    {
                        "id": 18,
                        "string": "Chiu et al."
                    },
                    {
                        "id": 19,
                        "string": "(2016) show that these tasks, while correlated with some semantic content, do not always predict downstream performance."
                    },
                    {
                        "id": 20,
                        "string": "Thus, it is necessary to use a more comprehensive set of intrinsic and extrinsic evaluations for embeddings."
                    },
                    {
                        "id": 21,
                        "string": "Nearest neighbors in sets of embeddings are commonly used as a proxy for qualitative semantic information."
                    },
                    {
                        "id": 22,
                        "string": "However, their instability across embedding samples (Wendlandt et al., 2018 ) is a limiting factor, and they do not necessarily correlate with linguistic analyses (Hellrich and Hahn, 2016) ."
                    },
                    {
                        "id": 23,
                        "string": "Modeling neighborhoods as a graph structure offers an alternative analysis method (Cuba Gyllensten and Sahlgren, 2015) , as does 2-D or 3-D visualization (Heimerl and Gleicher, 2018) ."
                    },
                    {
                        "id": 24,
                        "string": "However, both of these methods provide qualitative insights only."
                    },
                    {
                        "id": 25,
                        "string": "By systematically analyzing geometric information with a wide variety of eval-uations, we provide a quantitative counterpart to these understandings of embedding spaces."
                    },
                    {
                        "id": 26,
                        "string": "Methods In order to investigate how different geometric properties of word embeddings contribute to model performance on intrinsic and extrinsic evaluations, we consider the following attributes of word embedding geometry: • position relative to the origin; • distribution of feature values in R d ; • global pairwise distances, i.e."
                    },
                    {
                        "id": 27,
                        "string": "distances between any pair of vectors; • local pairwise distances, i.e."
                    },
                    {
                        "id": 28,
                        "string": "distances between nearby pairs of vectors."
                    },
                    {
                        "id": 29,
                        "string": "Using each of our sets of pretrained word embeddings, we apply a variety of transformations to induce new embeddings that only expose subsets of these attributes to downstream models."
                    },
                    {
                        "id": 30,
                        "string": "These are: affine transformation, which obfuscates the original position of the origin; cosine distance encoding, which obfuscates the original distribution of feature values in R d ; nearest neighbor encoding, which obfuscates global pairwise distances; and random encoding."
                    },
                    {
                        "id": 31,
                        "string": "This sequence is illustrated in Figure 1 , and the individual transformations are discussed in the following subsections."
                    },
                    {
                        "id": 32,
                        "string": "General notation for defining our transformations is as follows."
                    },
                    {
                        "id": 33,
                        "string": "Let W be our vocabulary of words taken from some source corpus."
                    },
                    {
                        "id": 34,
                        "string": "We associate with each word w ∈ W a vector v ∈ R d resulting from training via one of our embedding generation algorithms, where d is an arbitrary dimensionality for the embedding space."
                    },
                    {
                        "id": 35,
                        "string": "We define V to be the set of all pretrained word vectors v for a given corpus, embedding algorithm, and parameters."
                    },
                    {
                        "id": 36,
                        "string": "The matrix of embeddings M V associated with this set then has shape |V | × d. For simplicity, we restrict our analysis to transformed embeddings of the same dimensionality d as the original vectors."
                    },
                    {
                        "id": 37,
                        "string": "Affine transformations Affine transformations have been previously utilized for post-processing of word embeddings."
                    },
                    {
                        "id": 38,
                        "string": "For example, Artetxe et al."
                    },
                    {
                        "id": 39,
                        "string": "(2016) learn a matrix transform to align multilingual embedding spaces, and Faruqui et al."
                    },
                    {
                        "id": 40,
                        "string": "(2015) use a linear sparsification to better capture lexical semantics."
                    },
                    {
                        "id": 41,
                        "string": "In addition, the simplicity of affine functions in machine learning contexts (Hofmann et al., 2008) makes them a good starting point for our analysis."
                    },
                    {
                        "id": 42,
                        "string": "Given a set of embeddings in R d , referred to as an embedding space, affine transformations f affine : R d → R d change positions of points relative to the origin."
                    },
                    {
                        "id": 43,
                        "string": "While prior work has typically focused on linear transformations, which fix the origin, we consider the broader class of affine transformations, which do not."
                    },
                    {
                        "id": 44,
                        "string": "Thus, affine transformations such as translation cannot in general be represented as a square matrix for finite-dimensional spaces."
                    },
                    {
                        "id": 45,
                        "string": "We use the following affine transformations: • translations; • reflections over a hyperplane; • rotations about a subspace; • homotheties."
                    },
                    {
                        "id": 46,
                        "string": "We give brief definitions of each transformation."
                    },
                    {
                        "id": 47,
                        "string": "Definition 1."
                    },
                    {
                        "id": 48,
                        "string": "A translation is a function T x : R d → R d given by T x (v) = v + x (3.1) where x ∈ R d ."
                    },
                    {
                        "id": 49,
                        "string": "Definition 2."
                    },
                    {
                        "id": 50,
                        "string": "For every a ∈ R d , we call the map Refl a : R d → R d given by Refl a (v) = v − 2 v · a a · a a (3.2) the reflection over the hyperplane through the origin orthogonal to a."
                    },
                    {
                        "id": 51,
                        "string": "Definition 3."
                    },
                    {
                        "id": 52,
                        "string": "A rotation through the span of vectors u, x by angle θ is a map Rot u,x : R d → R d given by Rot u,x (v) = Av (3.3) where A = I + sin θ(xu T − ux T ) + (cos θ − 1)(uu T + xx T ) (3.4) and I ∈ Mat d,d (R) is the identity matrix."
                    },
                    {
                        "id": 53,
                        "string": "Definition 4."
                    },
                    {
                        "id": 54,
                        "string": "For every a ∈ R d and λ ∈ R \\ { 0 }, we call the map H a,λ : R d → R d given by H a,λ (v) = a + λ(v − a) (3.5) a homothety of center a and ratio λ."
                    },
                    {
                        "id": 55,
                        "string": "A homothety centered at the origin is called a dilation."
                    },
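                    {
                        "id": "55a",
                        "string": "A numpy sketch of Definitions 1-4, applied row-wise to an embedding matrix M of shape |V| x d (our illustration, not the authors' code; the closed form in Eq. 3.4 assumes u and x are orthonormal):\nimport numpy as np\n\ndef translate(M, x):      # T_x(v) = v + x\n    return M + x\n\ndef reflect(M, a):        # Refl_a(v) = v - 2 ((v . a) / (a . a)) a\n    return M - 2.0 * ((M @ a) / (a @ a))[:, None] * a\n\ndef rotate(M, u, x, theta):  # Rot_{u,x}(v) = A v, with A from Eq. 3.4\n    A = (np.eye(M.shape[1])\n         + np.sin(theta) * (np.outer(x, u) - np.outer(u, x))\n         + (np.cos(theta) - 1.0) * (np.outer(u, u) + np.outer(x, x)))\n    return M @ A.T\n\ndef homothety(M, a, lam):  # H_{a,lam}(v) = a + lam (v - a)\n    return a + lam * (M - a)"
                    },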
                    {
                        "id": 56,
                        "string": "Parameters used in our analysis for each of these transformations are provided in Appendix A. Cosine distance encoding (CDE) Our cosine distance encoding transformation f CDE : R d → R |V | obfuscates the distribution of features in R d by representing a set of word vectors as a pairwise distance matrix."
                    },
                    {
                        "id": 57,
                        "string": "Such a transformation might be used to avoid the non-interpretability of embedding features (Fyshe et al., 2015) and compare embeddings based on relative organization alone."
                    },
                    {
                        "id": 58,
                        "string": "Definition 5."
                    },
                    {
                        "id": 59,
                        "string": "Let a, b ∈ R d ."
                    },
                    {
                        "id": 60,
                        "string": "Then their cosine distance d cos : R d × R d → [0, 2] is given by d cos (a, b) = 1 − a · b ||a||||b|| (3.6) where the second term is the cosine similarity."
                    },
                    {
                        "id": 61,
                        "string": "As all three sets of embeddings evaluated in this study have vocabulary size on the order of 10 6 , use of the full distance matrix is impractical."
                    },
                    {
                        "id": 62,
                        "string": "We use a subset consisting of the distance from each point to the embeddings of the 10K most frequent words from each embedding set, yielding f CDE : R d → R 10 4 This is not dissimilar to the global frequencybased negative sampling approach of word2vec (Mikolov et al., 2013) ."
                    },
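                    {
                        "id": "62a",
                        "string": "A numpy sketch of this truncated distance encoding (our illustration; M is the |V| x d embedding matrix and top_idx indexes the 10K most frequent words):\nimport numpy as np\n\ndef cde(M, top_idx):\n    # d_cos(a, b) = 1 - (a . b) / (||a|| ||b||), computed against the\n    # embeddings of the 10K most frequent words only\n    N = M / np.linalg.norm(M, axis=1, keepdims=True)  # unit-normalize rows\n    return 1.0 - N @ N[top_idx].T                     # shape |V| x 10^4"
                    },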
                    {
                        "id": 63,
                        "string": "We then use an autoencoder to map this back to R d for comparability."
                    },
                    {
                        "id": 64,
                        "string": "Definition 6."
                    },
                    {
                        "id": 65,
                        "string": "Let v ∈ R |V | , W 1 , W 2 ∈ R |V |×d ."
                    },
                    {
                        "id": 66,
                        "string": "Then an autoencoder over R |V | is defined as h = ϕ(vW 1 ) (3.7) v = ϕ(W 2 T h) (3.8) Vector h ∈ R d is then used as the compressed rep- resentation of v. In our experiments, we use ReLU as our activation function ϕ, and train the autoencoder for 50 epochs to minimize L 2 distance between v andv."
                    },
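                    {
                        "id": "66a",
                        "string": "A Keras sketch of this autoencoder (our illustration; Keras Dense layers use untied weights, whereas Eq. 3.8 writes the decoder in terms of W_2^T):\nfrom tensorflow.keras import layers, models\n\nv_dim, d = 10_000, 300  # distance-vector and embedding dimensionality\nae = models.Sequential([\n    layers.Input(shape=(v_dim,)),\n    layers.Dense(d, activation='relu'),      # h = relu(v W_1), Eq. 3.7\n    layers.Dense(v_dim, activation='relu'),  # v_hat, Eq. 3.8\n])\nae.compile(optimizer='adam', loss='mse')  # L2 reconstruction loss\n# ae.fit(D, D, epochs=50)  # D: the CDE distance matrix from above"
                    },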
                    {
                        "id": 67,
                        "string": "We recognize that low-rank compression using an autoencoder is likely to be noisy, thus potentially inducing additional loss in evaluations."
                    },
                    {
                        "id": 68,
                        "string": "However, precedent for capturing geometric structure with autoencoders (Li et al., 2017b) suggests that this is a viable model for our analysis."
                    },
                    {
                        "id": 69,
                        "string": "Nearest neighbor encoding (NNE) Our nearest neighbor encoding transformation f NNE : R d → R |V | discards the majority of the global pairwise distance information modeled in CDE, and retains only information about nearest neighborhoods."
                    },
                    {
                        "id": 70,
                        "string": "The output of f NNE (v) is a sparse vector."
                    },
                    {
                        "id": 71,
                        "string": "This transformation relates to the common use of nearest neighborhoods as a proxy for semantic information (Wendlandt et al., 2018; Pierrejean and Tanguy, 2018) ."
                    },
                    {
                        "id": 72,
                        "string": "We take the previously proposed approach of combining the output of f NNE (v) for each v ∈ V to form a sparse adjacency matrix, which describes a directed nearest neighbor graph (Cuba Gyllensten and Sahlgren, 2015; Newman-Griffis and Fosler-Lussier, 2017), using three versions of f NNE defined below."
                    },
                    {
                        "id": 73,
                        "string": "Thresholded The set of non-zero indices in f NNE (v) correspond to word vectorsṽ such that the cosine similarity of v andṽ is greater than or equal to an arbitrary threshold t. In order to ensure that every word has non-zero out degree in the graph, we also include the k nearest neighbors by cosine similarity for every word vector."
                    },
                    {
                        "id": 74,
                        "string": "Non-zero values in f NNE (v) are set to the cosine similarity of v and the relevant neighbor vector."
                    },
                    {
                        "id": 75,
                        "string": "Weighted The set of non-zero indices in f NNE (v) corresponds to only the set of k nearest neighbors to v by cosine similarity."
                    },
                    {
                        "id": 76,
                        "string": "Cosine similarity values are used for edge weights."
                    },
                    {
                        "id": 77,
                        "string": "Unweighted As in the previous case, only k nearest neighbors are included in the adjacency matrix."
                    },
                    {
                        "id": 78,
                        "string": "All edges are weighted equally, regardless of cosine similarity."
                    },
                    {
                        "id": 79,
                        "string": "We report results using k = 5 and t = 0.05; other settings are discussed in Appendix B."
                    },
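                    {
                        "id": "79a",
                        "string": "A sketch of the thresholded variant (our illustration; the dense similarity matrix is for clarity only and would need to be computed in blocks for a 10^6-word vocabulary):\nimport numpy as np\nfrom scipy.sparse import lil_matrix\n\ndef nne_thresholded(M, k=5, t=0.05):\n    N = M / np.linalg.norm(M, axis=1, keepdims=True)\n    S = N @ N.T                   # pairwise cosine similarities\n    np.fill_diagonal(S, -np.inf)  # no self-loops\n    A = lil_matrix(S.shape)\n    for i, row in enumerate(S):\n        # all neighbors above threshold t, plus the k nearest neighbors\n        nbrs = set(np.where(row >= t)[0]) | set(np.argsort(-row)[:k])\n        for j in nbrs:\n            A[i, j] = row[j]      # edge weight = cosine similarity\n    return A.tocsr()"
                    },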
                    {
                        "id": 80,
                        "string": "Finally, much like the CDE method, we use a second mapping function ψ : R |V | → R d to transform the nearest neighbor graph back to d-dimensional vectors for evaluation."
                    },
                    {
                        "id": 81,
                        "string": "Following Newman-Griffis and Fosler-Lussier (2017), we use node2vec (Grover and Leskovec, 2016) with default parameters to learn this mapping."
                    },
                    {
                        "id": 82,
                        "string": "Like the autoencoder, this is a noisy map, but the intent of node2vec to capture patterns in local graph structure makes it a good fit for our analysis."
                    },
                    {
                        "id": 83,
                        "string": "Random encoding Finally, as a baseline, we use a random encoding f Rand : R d → R d that discards original vectors entirely."
                    },
                    {
                        "id": 84,
                        "string": "While intrinsic evaluations rely only on input embeddings, and thus lose all source information in this case, extrinsic tasks learn a model to transform input features, making even randomlyinitialized vectors a common baseline (Lample et al., 2016; Kim, 2014) ."
                    },
                    {
                        "id": 85,
                        "string": "For fair comparison, we generate one set of random baselines for each embedding set and re-use these across all tasks."
                    },
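A minimal sketch of f_Rand; the sampling distribution (standard normal) and the fixed seed used to re-use one baseline across tasks are our assumptions:

```python
import numpy as np

def random_encoding(vocab, d=300, seed=0):
    """f_Rand: one fixed random vector per word, discarding the originals entirely.
    A fixed seed lets the same random baseline be re-used across all tasks."""
    rng = np.random.default_rng(seed)
    return {w: rng.standard_normal(d) for w in vocab}
```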
                    {
                        "id": 86,
                        "string": "Other transformations Many other transformations of a word embedding space could be included in our analysis, such as arbitrary vector-valued polynomial functions, rational vector-valued functions, or common decomposition methods such as principal components analysis (PCA) or singular value decomposition (SVD)."
                    },
                    {
                        "id": 87,
                        "string": "Additionally, though they cannot be effectively applied to the unordered set of word vectors in a raw embedding space, transformations for sequential data such as discrete Fourier transforms or discrete wavelet transforms could be used for word sequences in specific text corpora."
                    },
                    {
                        "id": 88,
                        "string": "For this study, we limit our scope to the transformations listed above."
                    },
                    {
                        "id": 89,
                        "string": "These transformations align with prior work on analyzing and post-processing embeddings for specific tasks, and are highly interpretable with respect to the original embedding space."
                    },
                    {
                        "id": 90,
                        "string": "However, other complex transformations represent an intriguing area of future work."
                    },
                    {
                        "id": 91,
                        "string": "Evaluation In order to measure the contributions of each geometric aspect described in Section 3 to the utility of word embeddings as input features, we evaluate embeddings transformed using our sequence of operations on a battery of standard intrinsic evaluations, which model linguistic information directly in the vector space; and extrinsic evaluations, which use the embeddings as input to learned models for downstream applications Our intrinsic evaluations include: • Word similarity and relatedness, using cosine similarity: WordSim-353 (Finkelstein et al., 2001) , SimLex-999 (Hill et al., 2015) , RareWords (Luong et al., 2013) (Agirre et al., 2009) , which showed the same trends as the full dataset."
                    },
                    {
                        "id": 92,
                        "string": "and concrete nouns)."
                    },
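A sketch of the cosine-similarity protocol for the word similarity and relatedness benchmarks; `triples` is a hypothetical list of (word1, word2, human rating) and dataset loading is omitted:

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def similarity_eval(emb, triples):
    """Spearman correlation between cosine similarities and human ratings,
    the usual protocol for WordSim-353 / SimLex-999 / RareWords."""
    pred, gold = [], []
    for w1, w2, rating in triples:
        if w1 in emb and w2 in emb:          # out-of-vocabulary pairs are skipped
            pred.append(cosine(emb[w1], emb[w2]))
            gold.append(rating)
    return spearmanr(pred, gold).correlation
```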
                    {
                        "id": 93,
                        "string": "5 Given the well-documented issues with using vector arithmetic-based analogy completion as an intrinsic evaluation (Linzen, 2016; Rogers et al., 2017; , we do not include it in our analysis."
                    },
                    {
                        "id": 94,
                        "string": "We follow Rogers et al."
                    },
                    {
                        "id": 95,
                        "string": "(2018) in evaluating on a set of five extrinsic tasks: 5 • Relation classification: SemEval-2010 Task 8 (Hendrickx et al., 2010) , using a CNN with word and distance embeddings (Zeng et al., 2014) ."
                    },
                    {
                        "id": 96,
                        "string": "• Sentence-level sentiment polarity classification: MR movie reviews (Pang and Lee, 2005) , with a simplified CNN model from (Kim, 2014) ."
                    },
                    {
                        "id": 97,
                        "string": "• Sentiment classification: IMDB movie reviews (Maas et al., 2011) , with a single 100-d LSTM."
                    },
                    {
                        "id": 98,
                        "string": "• Subjectivity/objectivity classification: Rotten Tomato snippets (Pang and Lee, 2004) , using a logistic regression over summed word embeddings (Li et al., 2017a) ."
                    },
                    {
                        "id": 99,
                        "string": "• Natural language inference: SNLI (Bowman et al., 2015) , using separate LSTMs for premise and hypothesis, combined with a feed-forward classifier."
                    },
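As a concrete instance of the simplest extrinsic setup above (logistic regression over summed word embeddings), here is a hedged sketch; `emb` is a word-to-vector dict for the (possibly transformed) embeddings, and the train/test variables are hypothetical stand-ins for the subjectivity data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bag_of_vectors(sentences, emb, d=300):
    """Represent each sentence as the sum of its word vectors (OOV words contribute zero)."""
    X = np.zeros((len(sentences), d))
    for i, sent in enumerate(sentences):
        for w in sent.split():
            X[i] += emb.get(w, np.zeros(d))
    return X

# Hypothetical usage (train_sents, train_labels, test_sents, test_labels are stand-ins):
# clf = LogisticRegression(max_iter=1000).fit(bag_of_vectors(train_sents, emb), train_labels)
# acc = clf.score(bag_of_vectors(test_sents, emb), test_labels)
```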
                    {
                        "id": 100,
                        "string": "Figure 2 presents the results of each intrinsic and extrinsic evaluation on the transformed versions of our three sets of word embeddings."
                    },
                    {
                        "id": 101,
                        "string": "6 The largest drops in performance across all three sets for intrinsic tasks occur when explicit embedding features are removed with the CDE transformation."
                    },
                    {
                        "id": 102,
                        "string": "While some cases of NNE-transformed embeddings recover a measure of this performance, they remain far under affine-transformed embeddings."
                    },
                    {
                        "id": 103,
                        "string": "Extrinsic tasks are similarly affected by the CDE transformation; however, NNE-transformed embeddings recover the majority of performance."
                    },
                    {
                        "id": 104,
                        "string": "Analysis and Discussion Comparing within the set of affine transformations, the innocuous effect of rotations, dilations, and reflections on both intrinsic and extrinsic tasks suggests that the models used are robust to simple linear transformations."
                    },
                    {
                        "id": 105,
                        "string": "Extrinsic evaluations are also relatively insensitive to translations, which can be modeled with bias terms, though the lack of learned models and reliance on cosine similarity for the intrinsic tasks makes them more sensitive to shifts relative to the origin."
                    },
                    {
                        "id": 106,
                        "string": "Interestingly, homothety, which effectively combines a translation and a dilation, leads to a noticeable drop in performance across all tasks."
                    },
                    {
                        "id": 107,
                        "string": "Intuitively, this result makes sense: by both shifting points relative to the origin and changing their distribution in the space, angular similarity values used for intrinsic tasks can be changed significantly, and the zero mean feature distribution preferred by neural models (Clevert et al., 2016) becomes harder to achieve."
                    },
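A small worked example (ours, not the paper's): a homothety x ↦ c(x − p) + p with center p ≠ 0 shifts points relative to the origin and changes cosine similarities, while a pure dilation about the origin does not.

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

u, v = np.array([1.0, 0.0]), np.array([1.0, 1.0])
p, c = np.array([5.0, 5.0]), 2.0           # center and ratio of the homothety

hu, hv = c * (u - p) + p, c * (v - p) + p  # homothety: dilation about p, not the origin
print(cos(u, v), cos(hu, hv))              # ~0.71 before, ~0.97 after: similarity changes
print(cos(u, v), cos(c * u, c * v))        # pure dilation about the origin preserves it
```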
                    {
                        "id": 108,
                        "string": "This suggests that methods for tuning embeddings should attempt to preserve the origin whenever possible."
                    },
                    {
                        "id": 109,
                        "string": "The large drops in performance observed when using the CDE transformation is likely to relate 6 Due to their large vocabulary size, we were unable to run Thresholded-NNE experiments with word2vec embeddings."
                    },
                    {
                        "id": 110,
                        "string": "to the instability of nearest neighborhoods and the importance of locality in embedding learning (Wendlandt et al., 2018) , although the effects of the autoencoder component also bear further investigation."
                    },
                    {
                        "id": 111,
                        "string": "By effectively increasing the size of the neighborhood considered, CDE adds additional sources of semantic noise."
                    },
                    {
                        "id": 112,
                        "string": "The similar drops from thresholded-NNE transformations, by the same token, is likely related to observations of the relationship between the frequency ranks of a word and its nearest neighbors (Faruqui et al., 2016) ."
                    },
                    {
                        "id": 113,
                        "string": "With thresholded-NNE, we find that the words with highest out degree in the nearest neighbor graph are rare words (e.g., \"Chanterelle\" and \"Courtier\" in FastText, \"Tiegel\" and \"demangler\" in GloVe), which link to other rare words."
                    },
                    {
                        "id": 114,
                        "string": "Thus, node2vec's random walk method is more likely to traverse these dense subgraphs of rare words, adding noise to the output embeddings."
                    },
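A sketch (names ours) of how such rare-word hubs can be surfaced directly from the thresholded adjacency matrix built earlier:

```python
import numpy as np

def out_degree_hubs(A, id_to_word, top=10):
    """Words with the highest out-degree in the thresholded nearest-neighbor graph;
    per the observation above, these tend to be rare words linking to other rare words."""
    deg = np.asarray((A > 0).sum(axis=1)).ravel()   # out-degree per node
    return [(id_to_word[i], int(deg[i])) for i in np.argsort(-deg)[:top]]
```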
                    {
                        "id": 115,
                        "string": "Finally, we note that Melamud et al."
                    },
                    {
                        "id": 116,
                        "string": "(2016) showed significant variability in downstream task performance when using different embedding dimensionalities."
                    },
                    {
                        "id": 117,
                        "string": "While we fixed vector dimensionality for the purposes of this study, varying d in future work represents a valuable follow-up."
                    },
                    {
                        "id": 118,
                        "string": "Our findings suggest that methods for training and tuning embeddings, especially for downstream tasks, should explicitly focus on local geometric structure in the vector space."
                    },
                    {
                        "id": 119,
                        "string": "One concrete example of this comes from Chen et al."
                    },
                    {
                        "id": 120,
                        "string": "(2018) , who demonstrate empirical gains when changing the negative sampling approach of word2vec to choose negative samples that are currently near to the target word in vector space, instead of the original frequency-based sampling (which ignores geometric structure)."
                    },
                    {
                        "id": 121,
                        "string": "Similarly, successful methods for tuning word embeddings for specific tasks have often focused on enforcing a specific neighborhood structure (Faruqui et al., 2015) ."
                    },
                    {
                        "id": 122,
                        "string": "We demonstrate that by doing so, they align qualitative semantic judgments with the primary geometric information that downstream models learn from."
                    },
                    {
                        "id": 123,
                        "string": "Conclusion Analysis of word embeddings has largely focused on qualitative characteristics such as nearest neighborhoods or relative distribution."
                    },
                    {
                        "id": 124,
                        "string": "In this work, we take a quantitative approach analyzing geometric attributes of embeddings in R d , in order to understand the impact of geometric properties on downstream task performance."
                    },
                    {
                        "id": 125,
                        "string": "We character-ized word embedding geometry in terms of absolute position, vector features, global pairwise distances, and local pairwise distances, and generated new embedding matrices by removing these attributes from pretrained embeddings."
                    },
                    {
                        "id": 126,
                        "string": "By evaluating the performance of these transformed embeddings on a variety of intrinsic and extrinsic tasks, we find that while intrinsic evaluations are sensitive to absolute position, downstream models rely primarily on information about local similarity."
                    },
                    {
                        "id": 127,
                        "string": "As embeddings are used for increasingly specialized applications, and as recent contextualized embedding methods such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) allow for dynamic generation of embeddings from specific contexts, our findings suggest that work on tuning and improving these embeddings should focus explicitly on local geometric structure in sampling and evaluation methods."
                    },
                    {
                        "id": 128,
                        "string": "The source code for our transformations and complete tables of our results are available online at https://github.com/OSU-slatelab/ geometric-embedding-properties."
                    },
                    {
                        "id": 129,
                        "string": "Appendix A Parameters We give the following library of vectors in R d used as parameter values: v diag =    1 √ d ."
                    },
                    {
                        "id": 130,
                        "string": "."
                    },
                    {
                        "id": 131,
                        "string": "."
                    },
                    {
                        "id": 132,
                        "string": "1 √ d    ; v diagNeg =       − 1 √ d 1 √ d ."
                    },
                    {
                        "id": 133,
                        "string": "."
                    },
                    {
                        "id": 134,
                        "string": "."
                    },
                    {
                        "id": 135,
                        "string": "1 √ d       ."
                    },
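For concreteness, a small sketch constructing these two parameter vectors (function names are ours):

```python
import numpy as np

def v_diag(d):
    return np.full(d, 1.0 / np.sqrt(d))   # unit vector along the main diagonal

def v_diag_neg(d):
    v = np.full(d, 1.0 / np.sqrt(d))
    v[0] *= -1.0                           # first coordinate negated; still unit length
    return v
```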
                    {
                        "id": 136,
                        "string": "( Appendix B NNE settings We experimented with k ∈ {5, 10, 15} for our weighted and unweighted NNE transformations."
                    },
                    {
                        "id": 137,
                        "string": "For thresholded NNE, in order to best evaluate the impact of thresholding over uniform k, we used the minimum k = 5 and experimented with t ∈ {0.01, 0.05, 0.075}; higher values of t increased graph size sufficiently to be impractical."
                    },
                    {
                        "id": 138,
                        "string": "We report using k = 5 for weighted and unweighted settings in our main results for fairer comparison with the thresholded setting."
                    },
                    {
                        "id": 139,
                        "string": "The effect of thresholding on nearest neighbor graphs was a strongly right-tailed increase in out degree for a small portion of nodes."
                    },
                    {
                        "id": 140,
                        "string": "Our reported value of t = 0.05 increased the out degree of 20,229 nodes for FastText (out of 1M total nodes), with the maximum increase being 819 (\"Chanterelle\"), and 1,354 nodes increasing out degree by only 1."
                    },
                    {
                        "id": 141,
                        "string": "For GloVe, 7,533 nodes increased in out degree (out of 2M total), with maximum increase 240 (\"Tiegel\"), and 372 nodes increasing out degree by only 1."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 14
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 15,
                        "end": 25
                    },
                    {
                        "section": "Methods",
                        "n": "3",
                        "start": 26,
                        "end": 36
                    },
                    {
                        "section": "Affine transformations",
                        "n": "3.1",
                        "start": 37,
                        "end": 55
                    },
                    {
                        "section": "Cosine distance encoding (CDE)",
                        "n": "3.2",
                        "start": 56,
                        "end": 68
                    },
                    {
                        "section": "Nearest neighbor encoding (NNE)",
                        "n": "3.3",
                        "start": 69,
                        "end": 82
                    },
                    {
                        "section": "Random encoding",
                        "n": "3.4",
                        "start": 83,
                        "end": 85
                    },
                    {
                        "section": "Other transformations",
                        "n": "3.5",
                        "start": 86,
                        "end": 90
                    },
                    {
                        "section": "Evaluation",
                        "n": "4",
                        "start": 91,
                        "end": 103
                    },
                    {
                        "section": "Analysis and Discussion",
                        "n": "5",
                        "start": 104,
                        "end": 122
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 123,
                        "end": 141
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1155-Figure2-1.png",
                        "caption": "Figure 2: Performance metrics on intrinsic and extrinsic tasks, comparing across different transformations applied to each set of word embeddings. Dotted lines are for visual aid in tracking performance on individual tasks, and do not indicate continuous transformations. Transformations are presented in order of decreasing geometric information about the original vectors, and are applied independent of one another to the original source embedding.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 61.44,
                            "y2": 431.52
                        }
                    },
                    {
                        "filename": "../figure/image/1155-Figure1-1.png",
                        "caption": "Figure 1: Sequence of transformations applied to word embeddings, including transformation variants. Note that each transformation is applied independently to source word embeddings. Transformations are presented in order of decreasing geometric information retained about the original vectors.",
                        "page": 1,
                        "bbox": {
                            "x1": 108.47999999999999,
                            "x2": 488.15999999999997,
                            "y1": 67.2,
                            "y2": 125.75999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1155-Table2-1.png",
                        "caption": "Table 2: Mean performance on intrinsic tasks under different NNE settings.",
                        "page": 9,
                        "bbox": {
                            "x1": 314.88,
                            "x2": 515.04,
                            "y1": 125.75999999999999,
                            "y2": 259.2
                        }
                    },
                    {
                        "filename": "../figure/image/1155-Table3-1.png",
                        "caption": "Table 3: Mean performance on extrinsic tasks under different NNE settings.",
                        "page": 9,
                        "bbox": {
                            "x1": 314.88,
                            "x2": 515.04,
                            "y1": 315.84,
                            "y2": 449.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-41"
        },
        {
            "slides": {
                "0": {
                    "title": "Twitter for Public health",
                    "text": [
                        "Many users tweet when they caught a disease",
                        "# of tweets is in proportion to # of flu patients",
                        "# of flu related tweets"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Noise included in tweets",
                    "text": [
                        "For more information about bird flu link",
                        "I got a flu I couldnt do anymore Only counts this type of tweets",
                        "Ive never caught a flu",
                        "I got a flu shot yesterday"
                    ],
                    "page_nums": [
                        2,
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Our lab runs flu surveillance system",
                    "text": [
                        "ae 47S A ~ NLP Flu Warning ~",
                        "BIvILY FOr a-ARAMHSRORVRERN TS.",
                        "() (CELA(5) RE MROKIAAORS MR, TLIL-mROMBAIC https://t.co/CsRvdgSL4v #RB HAR #Xvb A/09-7 IDR HAY DID",
                        "e>) AVINLYEOANW ADT, MMA CEAN?HHBAY FIDL BN EY +",
                        "Av FAI tiweett MRE AREY YAN",
                        "Aramaki, Eiji, Sachiko Maskawa, and Mizuki Morita. \"Twitter catches the flu: detecting influenza epidemics using Twitter.\" In Proc of EMNLP 2011. http://mednlp.jp/influ_map/"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Similarity between Tweets and Patients",
                    "text": [
                        "ME X2ieys(fh):Ad Tweets about flu",
                        "is slightly earlier than reports of flu in patients"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Each word has a specific time lag",
                    "text": [
                        "Counts of flu related tweets of flu patients",
                        "The word Fever The word Injection days time lag days time lag",
                        "# of the word fever # of the word Injection",
                        "Time shifted Time shifted"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "Training data Twitter Corpus",
                    "text": [
                        "Query: The word flu in Japanese"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "8": {
                    "title": "Time lag measure Cross Correlation",
                    "text": [
                        "Cross Correlation is used to search for the most suitable time shift width for each word frequency as between # of tweets days before and # of actual patients",
                        "The cross correlation is exactly the same as the Pearsons correlation when"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "9": {
                    "title": "Motivating examples",
                    "text": [
                        "When = 16, r is 0.95 B/T tweet and IDSC reports",
                        "# of the word fever",
                        "# of flu patients",
                        "When increases, word counts moves to right side:"
                    ],
                    "page_nums": [
                        14,
                        15,
                        16
                    ],
                    "images": [
                        "figure/image/1161-Figure1-1.png"
                    ]
                },
                "10": {
                    "title": "Estimate optimal time lag",
                    "text": [
                        "We define optimal time-lag by maximizing the cross correlation"
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "11": {
                    "title": "Heatmap representation of Matrix",
                    "text": [
                        "R a w w o rd c o un t s # of patients",
                        "Apply t i m e s hift",
                        "X y X y"
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": [
                        "figure/image/1161-Figure2-1.png"
                    ]
                },
                "12": {
                    "title": "Effectiveness of time shift",
                    "text": [
                        "Regression for nowcasting with applying time-shift or not:",
                        "The searching range of time shift is in [0, , 60]",
                        "Train Season 2 Season 3 Season 1 Season 3 Season 1 Season 2 Avg. Test Season 1 Season 2 Season 3"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "13": {
                    "title": "Limitation",
                    "text": [
                        "To estimate specific day of the epidemic through",
                        "Twitter, we need to gather same days tweet",
                        "How to predict future disease outbreaking?",
                        "# of flu related tweets",
                        "# of flu patients"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                },
                "15": {
                    "title": "Motivating example",
                    "text": [
                        "Nowcasting case: [0, max]",
                        "# of the word fever (10 days shifted) # of the word # of flu fe patients ver (10 d ays shifted)",
                        "# of the word fever (16 days shifted) # of flu patients",
                        "# of the word # of th fever word fever",
                        "of the # of word the word fever fever of the # of wo fl u rd patients fever 30 days shifted) of flu patients",
                        "# of the word Injection (30 days shifted) # of the word # of flu In patients jection (3 0 days shifted)",
                        "# of the word # of the Injection word Injection",
                        "# of the word Injection (55 days shifted) # of flu patients"
                    ],
                    "page_nums": [
                        23,
                        24,
                        25,
                        26
                    ],
                    "images": []
                },
                "17": {
                    "title": "Our model beyonds baseline",
                    "text": [
                        "Correlation b/w model and IDSC Correlation b/w model and IDSC Correlation b/w model and IDSC",
                        "Minimum Time-Lag Tnin Minimum Time-Lag Tinin Minimum Time-Lag Tinin",
                        "g Lasso 3 B06 Lasso c Elastic-Net Elastic-Net 5 BaseLine 5 5 0.44 BaseLine c c Lasso c % Elastic-Net 0.2 o o BaseLi 3 5 aseLine 5 oO oO oO 0.0",
                        "Base Line: Yrest (t) = Ytrain(t) * Higher is better"
                    ],
                    "page_nums": [
                        28
                    ],
                    "images": []
                },
                "18": {
                    "title": "Summary",
                    "text": [
                        "We discovered the time difference between twitter and actual phenomena.",
                        "We proposed but handling such difference to improve the nowcasting performance and extend for forecasting model.",
                        "Our method is widely applicable for other time series data which has time-lag between response and predictors.",
                        "Code and Data available at http://sociocom.jp/~iso/forecastword"
                    ],
                    "page_nums": [
                        29
                    ],
                    "images": []
                }
            },
            "paper_title": "Forecasting Word Model: Twitter-based Influenza Surveillance and Prediction",
            "paper_id": "1161",
            "paper": {
                "title": "Forecasting Word Model: Twitter-based Influenza Surveillance and Prediction",
                "abstract": "Because of the increasing popularity of social media, much information has been shared on the internet, enabling social media users to understand various real world events. Particularly, social media-based infectious disease surveillance has attracted increasing attention. In this work, we specifically examine influenza: a common topic of communication on social media. The fundamental theory of this work is that several words, such as symptom words (fever, headache, etc.), appear in advance of flu epidemic occurrence. Consequently, past word occurrence can contribute to estimation of the number of current patients. To employ such forecasting words, one can first estimate the optimal time lag for each word based on their cross correlation. Then one can build a linear model consisting of word frequencies at different time points for nowcasting and for forecasting influenza epidemics. Experimentally obtained results (using 7.7 million tweets of August 2012 -January 2016), the proposed model achieved the best nowcasting performance to date (correlation ratio 0.93) and practically sufficient forecasting performance (correlation ratio 0.91 in 1-week future prediction, and correlation ratio 0.77 in 3-weeks future prediction). This report reveals the effectiveness of the word time shift to predict of future epidemics using Twitter.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The increased use of social media platforms has led to wide sharing of personal information."
                    },
                    {
                        "id": 1,
                        "string": "Especially Twitter, a micro-blogging platform that enables users to communicate by updating their status using 140 or fewer characters, has attracted great attention of researchers and service developers because Twitter can be a valuable personal information resource."
                    },
                    {
                        "id": 2,
                        "string": "The feasibility of such approaches, known as social sensors, has been demonstrated in various event detection systems such as earthquakes (Sakaki et al., 2010) , outbreaks of disease (Chew and Eysenbach, 2010) , and stock market fluctuations (Bollen et al., 2011) ."
                    },
                    {
                        "id": 3,
                        "string": "Among the applications mentioned above, this study particularly examines detection of seasonal influenza epidemics because the influenza detection is a popular application of Twitter."
                    },
                    {
                        "id": 4,
                        "string": "To date, more than 30 Twitter-based influenza detection and prediction systems have been developed worldwide (Charles-Smith et al., 2015) ."
                    },
                    {
                        "id": 5,
                        "string": "Although the detailed functions of these systems differ, they share the underlying assumption that the flu spreading in the real world is immediately reflected to the tweets."
                    },
                    {
                        "id": 6,
                        "string": "Therefore, most systems have simply aggregated counts of daily flu-related tweets to obtain the current patient status (Aramaki et al., 2011; Collier et al., 2011; Chew and Eysenbach, 2010; Lampos and Cristianini, 2010; Culotta, 2013; Paul et al., 2014) ."
                    },
                    {
                        "id": 7,
                        "string": "Their typical materials are presented as shown below."
                    },
                    {
                        "id": 8,
                        "string": "• I got a flu I can not go to school for the rest of the week • I was diagnosed with a high fever."
                    },
                    {
                        "id": 9,
                        "string": "Maybe flu :( Although the former tweet is described by an actual influenza patient, the latter one merely expresses a suspicion of flu."
                    },
                    {
                        "id": 10,
                        "string": "From a practical (clinical) perspective, these differences have great importance because the latter is noise that impedes precise influenza surveillance."
                    },
                    {
                        "id": 11,
                        "string": "Therefore, earlier studies (Aramaki et al., 2011; Kanouchi et al., 2015; SUN et al., 2014) have devoted great efforts to removal of such noise (suspicion, negation, news wired, and so on)."
                    },
                    {
                        "id": 12,
                        "string": "This study employs such noisy tweets."
                    },
                    {
                        "id": 13,
                        "string": "We assume that a word, \"fever\" presents a clue to an upcoming influenza outbreak."
                    },
                    {
                        "id": 14,
                        "string": "Inferring that people are frequently afflicted by symptoms such as \"fever\" and \"headache\" immediately before the onset and diagnosis of influenza, we designate such words as forecasting words."
                    },
                    {
                        "id": 15,
                        "string": "More concrete examples of forecasting words are presented in Figure 1a ."
                    },
                    {
                        "id": 16,
                        "string": "The figure reveals that an approximately 16-day time lag exists between the frequency of \"fever\" (blue line) and the number of patients (red line)."
                    },
                    {
                        "id": 17,
                        "string": "If this time lag was known in advance, one could obtain a good approximation of the number of patients (red line) by a 16-day time shift operation (green line)."
                    },
                    {
                        "id": 18,
                        "string": "Similarly, flu prevention words such as \"shot\" and \"injection\" have previously been used to describe outbreaks."
                    },
                    {
                        "id": 19,
                        "string": "• I took a flu shot today • I don't wanna get a flu injection cuz it hurts me In the latter case as shown in Figure 1b , we can find much longer time lag (55 days) between tweets (frequency of \"injection\") and the reality (number of patients)."
                    },
                    {
                        "id": 20,
                        "string": "Presuming that each word has its own time lag, then the problems to be solved are two-fold: (1) estimating the optimal time lag for each forecasting word and (2) incorporating these time lags into the model."
                    },
                    {
                        "id": 21,
                        "string": "For the first problem, the suitable time lag for each word is measured by calculating the cross correlation between the word frequency and the patient number."
                    },
                    {
                        "id": 22,
                        "string": "For the second problem, we construct a word frequency matrix that consists of a shifted word frequency timeline (Sec."
                    },
                    {
                        "id": 23,
                        "string": "3)."
                    },
                    {
                        "id": 24,
                        "string": "Next, a linear model called nowcasting model is constructed from the modified word matrix, for which the parameters are estimated using several regularization models, Lasso and Elastic Net (Sec."
                    },
                    {
                        "id": 25,
                        "string": "4)."
                    },
                    {
                        "id": 26,
                        "string": "Moreover, the nowcasting model can be extended easily to a predictive model called a forecasting model."
                    },
                    {
                        "id": 27,
                        "string": "In the forecasting model (∆f days future), only forecasting words that have more than n day time lag are used (Sec."
                    },
                    {
                        "id": 28,
                        "string": "5)."
                    },
                    {
                        "id": 29,
                        "string": "Nowcasting models can dramatically boost the current patient number estimation capability (correlation ratio 0.93; +0.10 point)."
                    },
                    {
                        "id": 30,
                        "string": "Forecasting models have demonstrated successful prediction performance (the correlation ratio 0.91 in 1-week future prediction, and the correlation ratio 0.77 in 3-weeks future prediction)."
                    },
                    {
                        "id": 31,
                        "string": "This performance goes beyond the practical baseline (over 0.75 correlation)."
                    },
                    {
                        "id": 32,
                        "string": "Our contributions are summarized as presented below."
                    },
                    {
                        "id": 33,
                        "string": "• We discover that forecasting words have a time lag between the virtual world (number of tweets in Twitter) and the real world (number of patients)."
                    },
                    {
                        "id": 34,
                        "string": "• We propose a method to build time-shifted features using cross correlation measures."
                    },
                    {
                        "id": 35,
                        "string": "• We realize nowcasting model and its extended one, forecasting model, based on the time shift with parameter estimation."
                    },
                    {
                        "id": 36,
                        "string": "This report is the first of the relevant literature describing a successful model enabling the prediction of future epidemics over the practical baseline."
                    },
                    {
                        "id": 37,
                        "string": "We make code and data publicly available."
                    },
                    {
                        "id": 38,
                        "string": "1 2 Dataset Influenza Corpus We collected 7.7 million influenza related tweets, starting from August 2012 to January 2016, via Twitter API 2 ."
                    },
                    {
                        "id": 39,
                        "string": "Then, we filtered noises (removed retweets including the word, RT, and tweets linked to other web pages including the word, http from the collected tweet data)."
                    },
                    {
                        "id": 40,
                        "string": "In the case of just counting influenzarelated tweets, we should only consider unique users to avoid to count more than ones the tweets of the same patients."
                    },
                    {
                        "id": 41,
                        "string": "However, we didn't filter out the users which posted influenza-related tweets multiple times because we provide the different word for the different role even if these tweets were posted by the same patients."
                    },
                    {
                        "id": 42,
                        "string": "For example, the word, \"fever\" for nowcasting, and the word, \"injection\" for forecasting."
                    },
                    {
                        "id": 43,
                        "string": "To analyze a word, we applied a Japanese morphological parser (JUMAN 3 ) and obtained the stem forms."
                    },
                    {
                        "id": 44,
                        "string": "As a result, 27,588 words were extracted."
                    },
                    {
                        "id": 45,
                        "string": "Then, we investigated the word frequency per day to build a word matrix (days × words) as shown in Figure 2a ."
                    },
                    {
                        "id": 46,
                        "string": "IDSC report In Japan, the Infectious Disease Surveillance Center (IDSC) announces the number of influenza patients once a week during an influenza epidemic season (typically during November-May in Japan)."
                    },
                    {
                        "id": 47,
                        "string": "In fact, IDSC reports tend to delay around a week likewise the U.S. Centers for Disease Control and Prevention (CDC) (Paul et al., 2014) , but even if we consider such time delay, twitter stream attains the peak faster than the real world."
                    },
                    {
                        "id": 48,
                        "string": "To use the IDSC reports for evaluation, we divided the data into the following three periods: 2012/12/01-2013/05/31 (Season 1), 2013/12/01-2014/05/31 (Season 2), and 2014/12/01-2015/05/24 (Season 3)."
                    },
                    {
                        "id": 49,
                        "string": "We prepared a buffer time (60 day maximum time shift) immediately preceding the experimental periods to secure the time shift width."
                    },
                    {
                        "id": 50,
                        "string": "Method To estimate the current influenza epidemics (nowcast) and forecast the future ones, the number of influenza patients was derived from the following linear model."
                    },
                    {
                        "id": 51,
                        "string": "y (t) = x (t−τ 1 ) 1β 1 + x (t−τ 2 ) 2β 2 + · · · + x (t−τ |V | ) |V |β |V | Therein,ŷ (t) shows the estimated number of influenza patients at time t, x (t) v stands for the count of a word v at time t, andβ represents a weight estimated in the training phase,τ v denotes a suitable time shift parameter for word v decided in the training phase, and |V | denotes the size of vocabulary."
                    },
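A direct transcription of this linear model as a sketch (names ours); zero-padding at the start of the series is our simplification, whereas the paper instead keeps a 60-day buffer of earlier tweets:

```python
import numpy as np

def nowcast(counts, beta, tau):
    """y_hat(t) = sum over v of counts[t - tau[v], v] * beta[v].
    counts: T x |V| daily word counts; beta: per-word weights; tau: per-word lags (days)."""
    T, V = counts.shape
    y_hat = np.zeros(T)
    for t in range(T):
        for v in range(V):
            if t - tau[v] >= 0:            # lags reaching before the corpus start are skipped
                y_hat[t] += counts[t - tau[v], v] * beta[v]
    return y_hat
```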
                    {
                        "id": 52,
                        "string": "This section first provides methods to explore the most suitable time shift widthτ v for each word v (Sec."
                    },
                    {
                        "id": 53,
                        "string": "3.1)."
                    },
                    {
                        "id": 54,
                        "string": "Then, the parameter estimation method is described (Sec."
                    },
                    {
                        "id": 55,
                        "string": "3.2)."
                    },
                    {
                        "id": 56,
                        "string": "Finally, the model of future prediction based on the original model is explained (Sec."
                    },
                    {
                        "id": 57,
                        "string": "3.3)."
                    },
                    {
                        "id": 58,
                        "string": "IDSC report Time Shift Estimation The first problem to be solved is finding the optimal time shift width that achieves the best fit to the target influenza timeline."
                    },
                    {
                        "id": 59,
                        "string": "Given the IDSC reports and wider range of tweets, Cross Correlation is used to search for the most suitable time shift width for each word frequency as r xv,y (τ ) = T t=1 (x (t−τ ) v −x (t−τ ) v )(y (t) − y) T t=1 (x (t−τ ) v −x (t−τ ) v ) 2 T t=1 (y (t) −ȳ) 2 , where τ is a time shift parameter (time shift width) 4 ."
                    },
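A minimal sketch of this quantity for a single word series x and patient series y, with τ ≥ 0 (names ours):

```python
import numpy as np

def cross_correlation(x, y, tau):
    """r_{x_v,y}(tau): Pearson correlation between the word counts tau days
    earlier and the patient counts, over the days where both are observed."""
    n = len(x) - tau
    return np.corrcoef(x[:n], y[tau:])[0, 1]   # pairs (x^(t - tau), y^(t))
```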
                    {
                        "id": 60,
                        "string": "The cross correlation r xv,y (τ ) measures the similarity between (τ days) time shift variable x v and objective y."
                    },
                    {
                        "id": 61,
                        "string": "In this study, x (t−τ ) v is the count of word v with time shift width τ days earlier from t and y = [y (1) , ."
                    },
                    {
                        "id": 62,
                        "string": "."
                    },
                    {
                        "id": 63,
                        "string": "."
                    },
                    {
                        "id": 64,
                        "string": ", y (T ) ] is the number of patients from the IDSC reports."
                    },
                    {
                        "id": 65,
                        "string": "It is formulated asτ v = argmax τ r xv,y ."
                    },
                    {
                        "id": 66,
                        "string": "Next, we construct a matrix, X ∈ N T ×V , where T stands for the timeline and V represents the vocabulary, according to the Algorithm 1."
                    },
                    {
                        "id": 67,
                        "string": "Algorithm 1: Time-shifted word matrix for nowcasting."
                    },
                    {
                        "id": 68,
                        "string": "Set the maximum shift parameter τmax for v ← 1 to |V | do for τ ← 0 to τmax do Calculate Cross Correlation rx v ,y (τ ) end τv = argmax τ ∈{0,...,τmax} rx v ,y (τ ) Shift the word vector to maximize Cross Correlationxv ← [x (1−τv ) v , x (2−τv ) v , ."
                    },
                    {
                        "id": 69,
                        "string": "."
                    },
                    {
                        "id": 70,
                        "string": "."
                    },
                    {
                        "id": 71,
                        "string": ", x (T −τv ) v ] end return Shifted Word Matrix X = [x1, ."
                    },
                    {
                        "id": 72,
                        "string": "."
                    },
                    {
                        "id": 73,
                        "string": "."
                    },
                    {
                        "id": 74,
                        "string": ",x |V | ] The algorithm decides the optimal time shift width (τ xv,y ) based on the cross correlation for each word."
                    },
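A sketch of Algorithm 1, reusing the cross_correlation helper from the previous sketch; the tau_min argument anticipates the forecasting variant of Sec. 3.3, and zero-padding the first τ̂_v days is our simplification (the paper draws on a 60-day buffer of earlier tweets instead):

```python
import numpy as np

def shifted_word_matrix(counts, y, tau_max=60, tau_min=0):
    """Pick each word's lag by maximizing cross correlation, then shift its column.
    tau_min=0 gives the nowcasting setting; tau_min = Delta_f gives forecasting."""
    T, V = counts.shape
    X = np.zeros((T, V))
    for v in range(V):
        tau_hat = max(range(tau_min, tau_max + 1),
                      key=lambda tau: cross_correlation(counts[:, v], y, tau))
        X[tau_hat:, v] = counts[:T - tau_hat, v]   # x_tilde_v(t) = x_v(t - tau_hat_v)
    return X
```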
                    {
                        "id": 75,
                        "string": "After time shifts for all words, a shifted word matrix X is constructed."
                    },
                    {
                        "id": 76,
                        "string": "Figure 2a presents the initial (original) word matrix (τ = 0 for all words) of 50 words (randomly selected)."
                    },
                    {
                        "id": 77,
                        "string": "This matrix includes several low-correlated words, making several vertically irregular lines."
                    },
                    {
                        "id": 78,
                        "string": "In contrast, the time shift operation arranges the irregular words to match the IDSC reports, producing a beautiful horizontal line, as shown in Figure 2b ."
                    },
                    {
                        "id": 79,
                        "string": "Nowcasting To construct the linear model (called nowcasting model), the parameter β is estimated as minimizing the squared error."
                    },
                    {
                        "id": 80,
                        "string": "For this study, the vocabulary size |V | is of much larger order than sample size T so that the ordinary least squares estimator is not unique."
                    },
                    {
                        "id": 81,
                        "string": "It heavily overfits the data."
                    },
                    {
                        "id": 82,
                        "string": "According to the previous study's manner, parameters with a penalty are estimated as shown below."
                    },
                    {
                        "id": 83,
                        "string": "β = argmax β y − Xβ 2 2 + P(β, λ) In that equation, P(β, λ) is the penalty term."
                    },
                    {
                        "id": 84,
                        "string": "In the case of P lasso (β, λ) = λ β 1 , the regularization method called the Least Absolute Shrinkage and Selection Operator (Lasso) is a well-known method for selecting and estimating the parameters simultaneously (Tibshirani, 1994) ."
                    },
                    {
                        "id": 85,
                        "string": "In earlier studies, Lasso was employed to model influenza epidemics by Lampos and Cristianini (2010) ."
                    },
                    {
                        "id": 86,
                        "string": "However, in the case of vocabulary size |V |, which is much larger order than sample size T , it has been observed empirically that the prediction performance of l 1 -penalized regression, the Lasso is dominated by the l 2 -penalized one."
                    },
                    {
                        "id": 87,
                        "string": "Therefore, we employ the Elastic Net (Zou and Hastie, 2005) , which combines the l 1 -penalty and l 2 -penalty P enet = λ(α β 1 + (1 − α) β 2 2 ), where α is called l 1 ratio."
                    },
                    {
                        "id": 88,
                        "string": "The Elastic Net was already employed for nowcasting influenza-like illness rates using search query log, not Twitter (Lampos et al., 2015) ."
                    },
                    {
                        "id": 89,
                        "string": "In the case of α = 1, Elastic Net is exactly the same as Lasso and α = 0, Ridge (l 2 regularization)."
                    },
                    {
                        "id": 90,
                        "string": "Similarly to Lasso, the Elastic Net simultaneously does automatic variable selection and continuous shrinkage."
                    },
                    {
                        "id": 91,
                        "string": "It has a l-2 regularization advantage that selects groups of correlated variables."
                    },
                    {
                        "id": 92,
                        "string": "Elastic Net, as the generalized method of Lasso and Ridge, estimates with equal or better performance compared to both."
                    },
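Parameter estimation with both penalties can be sketched with scikit-learn's cross-validated estimators; the synthetic X and y below are stand-ins for the shifted word matrix and the IDSC counts, and the l1-ratio grid is illustrative:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV, LassoCV

# Synthetic stand-ins: T=180 days, |V|=500 words, with a few truly predictive words.
rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=(180, 500)).astype(float)
y = X[:, :5] @ np.array([2.0, 1.5, 1.0, 0.5, 0.25]) + rng.normal(size=180)

# ElasticNetCV tunes the penalty strength and the l1 ratio by cross validation,
# mirroring the paper's five-fold tuning; l1_ratio=1.0 recovers the Lasso.
enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 0.99, 1.0], cv=5).fit(X, y)
lasso = LassoCV(cv=5).fit(X, y)
print(enet.l1_ratio_, enet.alpha_, lasso.alpha_)
```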
                    {
                        "id": 93,
                        "string": "Forecasting Our nowcasting model can be extended naturally to forecasting model."
                    },
                    {
                        "id": 94,
                        "string": "To predict the number of future patients ∆f days after, we force to shift the word frequency at least ∆f days."
                    },
                    {
                        "id": 95,
                        "string": "To do so, a setting of the nowcasting model in Algorithm 1 is just changed to τ min = ∆f , as shown in Algorithm 2."
                    },
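Reusing the shifted_word_matrix sketch from Sec. 3.1 (with counts and y as before), the forecasting variant amounts to raising the minimum lag:

```python
# Forecasting Delta_f days ahead: restrict the lag search to tau >= Delta_f so
# that every feature is already observable Delta_f days before the target date.
delta_f = 7   # e.g., 1-week-ahead prediction
X_forecast = shifted_word_matrix(counts, y, tau_max=60, tau_min=delta_f)
```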
                    {
                        "id": 96,
                        "string": "It enables forecasting of future epidemics, demonstrating a widely applicable methodology of the proposed approach."
                    },
                    {
                        "id": 97,
                        "string": "Experiment 1: Nowcasting To assess the nowcasting performance, we used the actual influenza reports provided by the Japanese IDSC."
                    },
                    {
                        "id": 98,
                        "string": "Comparable Methods We compared four linear methods for nowcasting as shown below: • Lasso: l 1 -regularization method (Tibshirani, 1994; Lampos and Cristianini, 2010) , • Lasso+: Lasso and time shift combined method, • ENet: Elastic-Net, which combines l 1 -, l 2 -regularization (Zou and Hastie, 2005) , • ENet+: Elastic-Net and time shift combined method."
                    },
                    {
                        "id": 99,
                        "string": "All hyperparameters were tuned via five-fold cross validation in the training dataset."
                    },
                    {
                        "id": 100,
                        "string": "Dataset and Evaluation Metric The detailed dataset is described in Sec."
                    },
                    {
                        "id": 101,
                        "string": "2."
                    },
                    {
                        "id": 102,
                        "string": "To construct the time-shifted word matrix, we set τ max = 60."
                    },
                    {
                        "id": 103,
                        "string": "Our tweet corpus had a dropout period, so that we did not calculate the cross correlation with more than a 60-day shift."
                    },
                    {
                        "id": 104,
                        "string": "We employed each season's data as training data and others as test data."
                    },
                    {
                        "id": 105,
                        "string": "The evaluation metric is based on correlation (Pearson correlation) between the estimated value and the value of the IDSC reports."
                    },
                    {
                        "id": 106,
                        "string": "Result Results of modeling accuracy are presented in Table 1 ."
                    },
                    {
                        "id": 107,
                        "string": "Correlations of our baselines, Lasso and Enet, were lower than those of previous studies."
                    },
                    {
                        "id": 108,
                        "string": "Results suggest that our dataset is more difficult than those used in earlier studies."
                    },
                    {
                        "id": 109,
                        "string": "In contrast, time-shifted models (Lasso+, Enet+) demonstrated about 0.1 point improvement than their baseline models, indicating the contribution of time shift features."
                    },
                    {
                        "id": 110,
                        "string": "It is noteworthy that Lasso type model and Enet type one did not differ so much."
                    },
                    {
                        "id": 111,
                        "string": "The whole trained model chose l 1 ratio parameter that is nearly equal to 1, so that the Enet type model became almost identical as Lasso type model."
                    },
                    {
                        "id": 112,
                        "string": "Overestimation Results showed that values in Figure 3a were overestimated in mid-May."
                    },
                    {
                        "id": 113,
                        "string": "One reason is that tweets related to news such as \"Scientists create hybrid flu that can go airborne\" 5 were popular in social media."
                    },
                    {
                        "id": 114,
                        "string": "Although tweets linked to web pages were removed during preprocessing, many tweets without links to web pages were posted by many people worried about the news."
                    },
                    {
                        "id": 115,
                        "string": "An example of such tweets is the following: • What?"
                    },
                    {
                        "id": 116,
                        "string": "In an attempt to make a vaccine for bird flu and swine flu had created a new strain of influenza virus?"
                    },
                    {
                        "id": 117,
                        "string": "What are you doing?"
                    },
                    {
                        "id": 118,
                        "string": "In addition, the model trained in Season 2 included the word \"bird\" as one feature."
                    },
                    {
                        "id": 119,
                        "string": "This word's time shift was 15 days."
                    },
                    {
                        "id": 120,
                        "string": "Consequently, this peak occurred."
                    },
                    {
                        "id": 121,
                        "string": "In most cases, these kinds of outlier words are not selected through model selection, but preprocessing will play an crucial role to prevent these kinds of outlier."
                    },
                    {
                        "id": 122,
                        "string": "Experiment 2: Forecasting We evaluate the forecasting performance described in Sec 3.3."
                    },
                    {
                        "id": 123,
                        "string": "Comparable methods Lasso and Enet have no features for predicting future values."
                    },
                    {
                        "id": 124,
                        "string": "Therefore, we use Lasso+ and Enet+ for forecasting."
                    },
                    {
                        "id": 125,
                        "string": "Additionally, we employ the following baseline model of BaseLine:ŷ (t) test = y (t) train for comparison with our proposed models."
                    },
                    {
                        "id": 126,
                        "string": "Dataset and Evaluation Metric To evaluate the forecasting performance, we used the same dataset and evaluation metric as Experiment 1, except that we set the minimum time shift τ min from 1 day to 30 days."
                    },
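                    {
                        "string": "Editor's note: a sketch of this sweep over tau_min, reusing best_shifts and shifted_matrix from the nowcasting sketch; dataset names are hypothetical:\nfrom sklearn.linear_model import LassoCV\nfrom scipy.stats import pearsonr\n\ndef forecast_scores(X_tr, y_tr, X_te, y_te, tau_max=60):\n    scores = {}\n    for tau_min in range(1, 31):  # predict 1 to 30 days into the future\n        taus = best_shifts(X_tr, y_tr, tau_min, tau_max)\n        model = LassoCV(cv=5).fit(shifted_matrix(X_tr, taus), y_tr)\n        y_hat = model.predict(shifted_matrix(X_te, taus))\n        scores[tau_min] = pearsonr(y_hat, y_te[int(taus.max()):])[0]\n    return scores"
                    },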
                    {
                        "id": 127,
                        "string": "Result Results of forecasting accuracy are presented in Figure 4 ."
                    },
                    {
                        "id": 128,
                        "string": "In both models, the accuracy was superior to the baseline until around 3 weeks into the future."
                    },
                    {
                        "id": 129,
                        "string": "In addition, the accuracy for prediction one week into the future was almost identical to that in the case of τ min = 0."
                    },
                    {
                        "id": 130,
                        "string": "That result might occur because the accuracy about one week future was nearly the same as that for the current state."
                    },
                    {
                        "id": 131,
                        "string": "In addition, there were many highly correlated features by shifting around 10 days into the future."
                    },
                    {
                        "id": 132,
                        "string": "Consequently, our model demonstrated equivalent performance up to 10 days into the future."
                    },
                    {
                        "id": 133,
                        "string": "Furthermore, the forecasting performance decreased dramatically along with the increase of τ min , as shown in Figure 4e ."
                    },
                    {
                        "id": 134,
                        "string": "We discuss that point further in Sec."
                    },
                    {
                        "id": 135,
                        "string": "6."
                    },
                    {
                        "id": 136,
                        "string": "Figure 5 presents timeline plots of examples."
                    },
                    {
                        "id": 137,
                        "string": "From Figure 5a to Figure 5d are shown the values estimated by the forecasting models trained in Season 2 and tested in Season 1 for τ min ∈ {7, 14, 21, 28}."
                    },
                    {
                        "id": 138,
                        "string": "The estimated values showed a consistently similar shape to that of the IDSC report."
                    },
                    {
                        "id": 139,
                        "string": "In Figure 5c , the same word, \"bird\", occurred as described in Sec."
                    },
                    {
                        "id": 140,
                        "string": "4.3."
                    },
                    {
                        "id": 141,
                        "string": "In contrast, the weight for \"bird\" decreased in Figure 5d for that reason, the forecasting accuracy increased."
                    },
                    {
                        "id": 142,
                        "string": "Then, from Figure 5e to Figure 5h show the values estimated by the forecasting models trained in Season 3 and tested in Season 2 for the same τ min ."
                    },
                    {
                        "id": 143,
                        "string": "Our models overestimated before outbreaks and underestimated after the peak of influenza epidemics."
                    },
                    {
                        "id": 144,
                        "string": "For τ min = 28, this phenomenon was widely evident."
                    },
                    {
                        "id": 145,
                        "string": "We discuss that point further in Sec."
                    },
                    {
                        "id": 146,
                        "string": "6."
                    },
                    {
                        "id": 147,
                        "string": "Discussion In general, the proposed approach (time shift operation) fitted the IDSC reports, demonstrating the basic feasibility."
                    },
                    {
                        "id": 148,
                        "string": "However, exceptions were apparent, as for the model trained in Season 3."
                    },
                    {
                        "id": 149,
                        "string": "One reason is that a gap exists in the suitable time shift widths between the train (Season 3) and the other (Seasons 1 and 2)."
                    },
                    {
                        "id": 150,
                        "string": "Lasso+ model trained in Season 3 selected the words, \"fever\" withτ fever = 16, \"vaccination\" witĥ τ vaccination = 55, \"absent\" withτ absent = 10, and others as features."
                    },
                    {
                        "id": 151,
                        "string": "These words have high correlations only in Season 3, with poor correlation in other seasons."
                    },
                    {
                        "id": 152,
                        "string": "The most drastic example is \"vaccination\" witĥ τ vaccination , (over 0.849 correlation in Season 3)."
                    },
                    {
                        "id": 153,
                        "string": "This word is adversely affected by other seasons (0.313 correlation in Season 1 and 0.04 correlation in Season 2)."
                    },
                    {
                        "id": 154,
                        "string": "The reason for the lost correlation was that τ vaccination in Season 3 differed from that of other seasons."
                    },
                    {
                        "id": 155,
                        "string": "This phenomenon suggests that \"vaccination\" is just an annually cycling word."
                    },
                    {
                        "id": 156,
                        "string": "Neither the cycle of \"vaccination\" nor that of influenza is fixed, bringing us different time lags."
                    },
                    {
                        "id": 157,
                        "string": "Figure 4 : Correlation between estimated values using the two methods for forecasting."
                    },
                    {
                        "id": 158,
                        "string": "Figure 5 : Timelines of values estimated using the two methods for forecasting and the IDSC reports in each τ min ."
                    },
                    {
                        "id": 159,
                        "string": "This inconsistency of time shifts also affected the forecasting performance directly."
                    },
                    {
                        "id": 160,
                        "string": "As shown in Figure 4e , the forecasting performance was decreased dramatically against the increase of τ min ."
                    },
                    {
                        "id": 161,
                        "string": "In spite of the word \"shot\" is the largest weighted feature in the case of τ min = 21 and Train in Season 3, these word correlations were 0.310 in Season 1 and 0.03 in Season 2."
                    },
                    {
                        "id": 162,
                        "string": "Consequently, it caused a considerable decrease of the forecasting accuracy."
                    },
                    {
                        "id": 163,
                        "string": "In contrast, some words, such as \"fever\" and \"symptom\", showed consistently similar time shifts."
                    },
                    {
                        "id": 164,
                        "string": "A technique to distinguish actual forecasting words such as \"fever\", and noises (simple year cycle words), \"vaccination\" is highly anticipated for use in the near future."
                    },
                    {
                        "id": 165,
                        "string": "If multiple-year training sets were available, one could filter out such noisy words."
                    },
                    {
                        "id": 166,
                        "string": "Although some room for improvement remains, the basic feasibility of the proposed approach has been demonstrated."
                    },
                    {
                        "id": 167,
                        "string": "The time shift was effective for social media based surveillance."
                    },
                    {
                        "id": 168,
                        "string": "In addition, the model enables prediction."
                    },
                    {
                        "id": 169,
                        "string": "Related Work To date, numerous web based surveillance systems have been proposed, targeting the common cold (Kitagawa et al., 2015) , drug side effects (Bian et al., 2012) , cholera (Chunara et al., 2012) , E. Coli (Diaz-Aviles et al., 2012) , problem drinking (MA et al., 2012) , smoking (Prier et al., 2011) , campylobacteriosis (Chester et al., 2011) , dengue fever (Gomide et al., 2011) , and HIV/AIDS (Ku et al., 2010) ."
                    },
                    {
                        "id": 170,
                        "string": "Influenza has especially drawn much attention from earlier studies (Ginsberg et al., 2009; Polgreen et al., 2009 Hulth et al., 2009; Corley et al., 2010) to current Twitter-based studies (Aramaki et al., 2011; Collier et al., 2011; Chew and Eysenbach, 2010; Lampos and Cristianini, 2010; Culotta, 2013) ."
                    },
                    {
                        "id": 171,
                        "string": "Because of great variance in data resources and evaluation manner (region, year, only winter or all seasons), a precise comparison would be difficult and meaningless, Culotta (Culotta, 2013) and Ginsberg (Ginsberg et al., 2009) are apparently better than the others in US (correlation ratios = 0.96 and 0.94, respectively)."
                    },
                    {
                        "id": 172,
                        "string": "Aramaki et al."
                    },
                    {
                        "id": 173,
                        "string": "(2011) achieved the best score for Japan (correlation ratio = 0.89)."
                    },
                    {
                        "id": 174,
                        "string": "This study also examined Twitter data in Japan, and achieved competitive results for nowcasting."
                    },
                    {
                        "id": 175,
                        "string": "Another aspect of reviews of related studies is the manner of tweet counting."
                    },
                    {
                        "id": 176,
                        "string": "In earlier studies, a simple word counting, the direct number of tweets, is considered an index of the degree of disease epidemics."
                    },
                    {
                        "id": 177,
                        "string": "However, such a simple method is adversely affected by the huge numbers of noisy tweets."
                    },
                    {
                        "id": 178,
                        "string": "Currently, counting approaches of two types have been developed: (1) a classification approach (Kanouchi et al., 2015; SUN et al., 2014; Aramaki et al., 2011) aimed at extracting only tweets including patient information, and (2) a regression approach (Lamb et al., 2013; Culotta, 2010; Lampos and Cristianini, 2010; Paul and Dredze, 2011 ) that handles multiple words to build a precise regression model."
                    },
                    {
                        "id": 179,
                        "string": "The proposed study fundamentally belongs among regression approaches, which explore optimal weight perimeters for each word."
                    },
                    {
                        "id": 180,
                        "string": "An important difference is that this study handles one more parameter for each word: time shift (days)."
                    },
                    {
                        "id": 181,
                        "string": "To handle many parameters, we first ascertain the best time shift widths."
                    },
                    {
                        "id": 182,
                        "string": "Then we explore weight parameters using L1 or elastic net."
                    },
                    {
                        "id": 183,
                        "string": "It is noteworthy that this study does not employ any classification method, engaging a room to improve by incorporation with classification techniques."
                    },
                    {
                        "id": 184,
                        "string": "Conclusions This study proposed a novel social media based influenza surveillance system using forecasting words that appear in Twitter usage before main epidemics occur."
                    },
                    {
                        "id": 185,
                        "string": "First, for each word, the optimal time lag was explored, which maximized the cross correlation to influenza epidemics."
                    },
                    {
                        "id": 186,
                        "string": "Then, we shifted a matrix consisting of word frequencies at different time points by each optimal time lag."
                    },
                    {
                        "id": 187,
                        "string": "Using the time-shifted word matrix, this study produced and evaluated a nowcasting model and forecasting model designed to predict the number of influenza patients."
                    },
                    {
                        "id": 188,
                        "string": "In the experimentally obtained results, the proposed model achieved the best nowcasting performance to date (correlation ratio 0.93) and practically sufficient forecasting performance (correlation ratio 0.91 in the 1-week future prediction, and correlation ratio 0.77 in 3-week future prediction)."
                    },
                    {
                        "id": 189,
                        "string": "This report is the first of the relevant literature describing a model that enables prediction of future epidemics."
                    },
                    {
                        "id": 190,
                        "string": "Furthermore, the model has much room for potential application to prediction of other events."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 37
                    },
                    {
                        "section": "Influenza Corpus",
                        "n": "2.1",
                        "start": 38,
                        "end": 45
                    },
                    {
                        "section": "IDSC report",
                        "n": "2.2",
                        "start": 46,
                        "end": 49
                    },
                    {
                        "section": "Method",
                        "n": "3",
                        "start": 50,
                        "end": 57
                    },
                    {
                        "section": "Time Shift Estimation",
                        "n": "3.1",
                        "start": 58,
                        "end": 78
                    },
                    {
                        "section": "Nowcasting",
                        "n": "3.2",
                        "start": 79,
                        "end": 92
                    },
                    {
                        "section": "Forecasting",
                        "n": "3.3",
                        "start": 93,
                        "end": 94
                    },
                    {
                        "section": "Experiment 1: Nowcasting",
                        "n": "4",
                        "start": 95,
                        "end": 97
                    },
                    {
                        "section": "Comparable Methods",
                        "n": "4.1",
                        "start": 98,
                        "end": 99
                    },
                    {
                        "section": "Dataset and Evaluation Metric",
                        "n": "4.2",
                        "start": 100,
                        "end": 105
                    },
                    {
                        "section": "Result",
                        "n": "4.3",
                        "start": 106,
                        "end": 121
                    },
                    {
                        "section": "Experiment 2: Forecasting",
                        "n": "5",
                        "start": 122,
                        "end": 122
                    },
                    {
                        "section": "Comparable methods",
                        "n": "5.1",
                        "start": 123,
                        "end": 125
                    },
                    {
                        "section": "Dataset and Evaluation Metric",
                        "n": "5.2",
                        "start": 126,
                        "end": 126
                    },
                    {
                        "section": "Result",
                        "n": "5.3",
                        "start": 127,
                        "end": 146
                    },
                    {
                        "section": "Discussion",
                        "n": "6",
                        "start": 147,
                        "end": 168
                    },
                    {
                        "section": "Related Work",
                        "n": "7",
                        "start": 169,
                        "end": 183
                    },
                    {
                        "section": "Conclusions",
                        "n": "8",
                        "start": 184,
                        "end": 190
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1161-Table1-1.png",
                        "caption": "Table 1: Correlation between estimated values and the IDSC reports.",
                        "page": 5,
                        "bbox": {
                            "x1": 123.83999999999999,
                            "x2": 473.28,
                            "y1": 62.4,
                            "y2": 153.12
                        }
                    },
                    {
                        "filename": "../figure/image/1161-Figure3-1.png",
                        "caption": "Figure 3: Timelines of estimated values obtained using the four methods for nowcasting.",
                        "page": 5,
                        "bbox": {
                            "x1": 68.16,
                            "x2": 522.24,
                            "y1": 184.32,
                            "y2": 401.76
                        }
                    },
                    {
                        "filename": "../figure/image/1161-Figure1-1.png",
                        "caption": "Figure 1: Motivating examples: The time lag of the frequency of a word enables one to obtain a good approximation to the number of patients. The blue line shows the word frequency. The green line shows the word frequency shifted time lag days. The red line shows the number of patients.",
                        "page": 1,
                        "bbox": {
                            "x1": 89.28,
                            "x2": 499.2,
                            "y1": 187.2,
                            "y2": 222.72
                        }
                    },
                    {
                        "filename": "../figure/image/1161-Figure4-1.png",
                        "caption": "Figure 4: Correlation between estimated values using the two methods for forecasting.",
                        "page": 7,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 522.24,
                            "y1": 70.08,
                            "y2": 287.52
                        }
                    },
                    {
                        "filename": "../figure/image/1161-Figure5-1.png",
                        "caption": "Figure 5: Timelines of values estimated using the two methods for forecasting and the IDSC reports in each τmin.",
                        "page": 7,
                        "bbox": {
                            "x1": 78.24,
                            "x2": 509.28,
                            "y1": 311.03999999999996,
                            "y2": 473.28
                        }
                    },
                    {
                        "filename": "../figure/image/1161-Figure2-1.png",
                        "caption": "Figure 2: Word matrix transformation. The Y -axis shows a timeline. The X-axis shows words with the IDSC reports (right side).",
                        "page": 3,
                        "bbox": {
                            "x1": 75.36,
                            "x2": 522.24,
                            "y1": 65.28,
                            "y2": 208.32
                        }
                    },
                    {
                        "filename": "../figure/image/1161-Figure6-1.png",
                        "caption": "Figure 6: Frequencies of “Shot” in respective seasons.",
                        "page": 8,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 511.2,
                            "y1": 71.52,
                            "y2": 176.64
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-42"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation",
                    "text": [
                        "Modeling coherence in linguistics theory into computational task (Barzilay & Lapata,",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Coherence",
                    "text": [
                        "Coherent text is integrated as a whole, rather than a series",
                        "Every sentence in a coherent text has relation(s) to each other (Halliday and Hasan, 1976; Mann and Thompson,",
                        "Evaluate coherence through cohesion",
                        "Lexical and semantic (meaning) continuity are indispensable",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "3": {
                    "title": "Proposed Method",
                    "text": [
                        "Formally, text is a graph , where",
                        "is a set of vertices, represents i-th sentence. is a set of edges, represents relation (cohesion) from i-th to j-th sentence (weighted & directed).",
                        "Evaluate the coherence through cohesion",
                        "Sentences are encoded into their meaning form",
                        "Average of summation of word vectors (distributed representation of words)",
                        "An edge represents cohesion among sentences",
                        "Establishment of edge is decided as the operation of vectors representation of sentences",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017",
                        "An edge is established from the sentence vertex in question to the other vertex with the weight calculated by",
                        "Text coherence measure (higher is better) is calculated by averaging the averaged weight of outgoing edges from every vertex in the graph as",
                        "# vertices # outgoing edges of vertex vi"
                    ],
                    "page_nums": [
                        5,
                        7
                    ],
                    "images": []
                },
                "4": {
                    "title": "Propose Method",
                    "text": [
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Evaluation",
                    "text": [
                        "Task 1: Discrimination (Barzilay and Lapata, 2008)",
                        "Task 2: Insertion (Eisner and Charniak, 2011)",
                        "Both tasks evaluate how well the methods in comparing coherence between texts",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "6": {
                    "title": "Evaluation Discrimination Task",
                    "text": [
                        "The goal is to compare original vs. permutated text S4",
                        "Program is considered successful when giving greater score to the more coherent (original) text",
                        "Dataset: 683 WSJ (LDC) texts, permutations (avg. 24 sentences, 521 tokens)",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "7": {
                    "title": "Result Discrimination Task",
                    "text": [
                        "Difference of performance is statistically significant at",
                        "PAV > MSV > Entity Graph",
                        "Cohesion is not only about repeating mention of entities",
                        "PAV MSV pair shares 88.3% same judgement",
                        "Local (adjacent) cohesion is possibly more important than long-distance cohesion",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "8": {
                    "title": "Evaluation Insertion Task",
                    "text": [
                        "Insertion task is more important than discrimination task",
                        "It was proposed by Eisner and Charniak (2011):",
                        "Given a text, take out a sentence (randomly), then place it into other positions",
                        "Program is considered successful if it prefers to insert take-out-sentence at its original position rather than arbitrary (distorted) positions",
                        "Our Proposal: useTOEFL iBT insertion-type questions",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "9": {
                    "title": "TOEFL iBT Insertion type Question",
                    "text": [
                        "A text is coherent even without the insertion sentence",
                        "Preservation of coherence is achieved when the question-sentence is inserted",
                        "(A) The raising of livestock is a major economic activity in semiarid lands, where grasses are generally the dominant type of natural vegetation.",
                        "(B) The consequences of an excessive number of livestock grazing in an area are the reduction of the vegetation cover and trampling and pulverization of the soil. (C) This is usually followed by the drying of the soil and accelerated erosion. (D)",
                        "in the correct place coherence otherwise but disrupt",
                        "Question: Insert the following sentence into one of",
                        "question sentence = \"This economic reliance on livestock in certain regions makes large tracts of land susceptible to overgrazing.",
                        "correct answer = B",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "10": {
                    "title": "Result Insertion Task",
                    "text": [
                        "Difference in every pair of methods is not statistically significant at p < 0.05",
                        "14 questions are answered incorrectly by PAV, but correctly by SSV.",
                        "In these questions, SSV tends to establish the relationship between distance sentences (dist = 2.8). For example, exemplification text",
                        "Semantic Similarity Graph | wiragotama.github.io TextGraph-11, ACL 2017"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                }
            },
            "paper_title": "Evaluating text coherence based on semantic similarity graph",
            "paper_id": "1167",
            "paper": {
                "title": "Evaluating text coherence based on semantic similarity graph",
                "abstract": "Coherence is a crucial feature of text because it is indispensable for conveying its communication purpose and meaning to its readers. In this paper, we propose an unsupervised text coherence scoring based on graph construction in which edges are established between semantically similar sentences represented by vertices. The sentence similarity is calculated based on the cosine similarity of semantic vectors representing sentences. We provide three graph construction methods establishing an edge from a given vertex to a preceding adjacent vertex, to a single similar vertex, or to multiple similar vertices. We evaluated our methods in the document discrimination task and the insertion task by comparing our proposed methods to the supervised (Entity Grid) and unsupervised (Entity Graph) baselines. In the document discrimination task, our method outperformed the unsupervised baseline but could not do the supervised baseline, while in the insertion task, our method outperformed both baselines.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Coherence plays an important role in a text because it enables a text to convey its communication purpose and meaning to its readers (Bamberg, 1983; Grosz and Sidner, 1986) ."
                    },
                    {
                        "id": 1,
                        "string": "Coherence also decreases reading time as a more coherent text is easier to read with less reader's cognitive load (Todirascu et al., 2016) ."
                    },
                    {
                        "id": 2,
                        "string": "While there is no single agreed definition of coherence, we can compile several definitions of coherence and note its important aspects."
                    },
                    {
                        "id": 3,
                        "string": "First, a text is coherent if it can convey its communication purpose and meaning to its readers (Wolf and Gibson, 2005; Somasundaran et al., 2014; Feng et al., 2014) ."
                    },
                    {
                        "id": 4,
                        "string": "Second, a text needs to be integrated as a whole, rather than a series of independent sentences (Bamberg, 1983; Garing, 2014) ."
                    },
                    {
                        "id": 5,
                        "string": "It means that sentences in the text are centralised around a certain theme or topic, and are arranged in a particular order in terms of logical, spatial, and temporal relations."
                    },
                    {
                        "id": 6,
                        "string": "Third, every sentence in a coherent text has relation(s) to each other (Halliday and Hasan, 1976; Grosz and Sidner, 1986; Mann and Thompson, 1988; Wolf and Gibson, 2005) ."
                    },
                    {
                        "id": 7,
                        "string": "It suggests that a text exhibits discourse/rhetorical relation and cohesion."
                    },
                    {
                        "id": 8,
                        "string": "Fourth, text coherence is greatly influenced by the presence of a certain organisation in the text (Persing et al., 2010; Somasundaran et al., 2014) ."
                    },
                    {
                        "id": 9,
                        "string": "The organisation helps readers to anticipate the upcoming textual information."
                    },
                    {
                        "id": 10,
                        "string": "Although a wellorganised text is highly probable to be coherent, only the organisation does not constitute coherence."
                    },
                    {
                        "id": 11,
                        "string": "Textual organisation concerns the structural formation and logical development of a text, while lexical and semantic continuity is also indispensable for coherent text (Feng et al., 2014) ."
                    },
                    {
                        "id": 12,
                        "string": "Fifth, it is easier to read a coherent text than its less coherent counterpart (Garing, 2014) ."
                    },
                    {
                        "id": 13,
                        "string": "Thus when writing a text, it is not enough to only revise the text with careful editing and proofreading from the lexical, or grammatical aspect."
                    },
                    {
                        "id": 14,
                        "string": "Coherence aspect also should be taken into account in revising the text (Bamberg, 1983; Garing, 2014) ."
                    },
                    {
                        "id": 15,
                        "string": "There are studies on computational modelling of text coherence based on the supervised learning approach, such as the Entity Grid model (Barzilay and Lapata, 2008) ."
                    },
                    {
                        "id": 16,
                        "string": "The Entity Grid model has been further extended into the Role Matrix model (Lin et al., 2011; Feng et al., 2014) ."
                    },
                    {
                        "id": 17,
                        "string": "However, these models have a few drawbacks."
                    },
                    {
                        "id": 18,
                        "string": "First, department trial Microsoft evidence competitors markets products brands case Netscape software S1 Entity Grid using co-reference resolution has a bias towards the original ordering of text when comparing a text with its permutated counterparts."
                    },
                    {
                        "id": 19,
                        "string": "The co-reference resolution module is trained on well-formed texts; thus it does not perform very well for ill-organised texts."
                    },
                    {
                        "id": 20,
                        "string": "The methods utilising a discourse parser for modelling text coherence (Lin et al., 2011; Feng et al., 2014) have the same problem."
                    },
                    {
                        "id": 21,
                        "string": "Second, the supervised model often suffers from data sparsity, domain dependence, and computational cost for training."
                    },
                    {
                        "id": 22,
                        "string": "To alleviate these problems in the supervised model, Guinaudeau and Strube (2013) proposed an unsupervised coherence model known as the Entity Graph model."
                    },
                    {
                        "id": 23,
                        "string": "The Entity Grid, Role Matrix, and Entity Graph model assumed coherence was achieved by local cohesion, i.e."
                    },
                    {
                        "id": 24,
                        "string": "repeated mentions of the same entities constitute cohesion."
                    },
                    {
                        "id": 25,
                        "string": "However, they did not capture the contribution of related-yet-notidentical entities (Petersen et al., 2015) ."
                    },
                    {
                        "id": 26,
                        "string": "To our best knowledge, the closest study addressing this problem was done by Li and Hovy (2014) ."
                    },
                    {
                        "id": 27,
                        "string": "The key idea of Li and Hovy (2014) is to learn a distributed sentence representation which captures the underlying semantic relations between consecutive sentences."
                    },
                    {
                        "id": 28,
                        "string": "To tackle these limitations of the past research, we present an unsupervised text coherence model that captures the contribution of related-yet-not-identical entities."
                    },
                    {
                        "id": 29,
                        "string": "S O S X O − − − − − − S2 − − O − − X S O − − − S3 − − S O − − − − S O O The rest of this paper is organised as follows."
                    },
                    {
                        "id": 30,
                        "string": "Section 2 describes related work; Section 3 introduces our proposed unsupervised method to measure text coherence from a semantic similarity perspective; Section 4 describes experimental results; then followed by the conclusion in Section 5."
                    },
                    {
                        "id": 31,
                        "string": "Related work This section provides an overview of existing coherence scoring models, both supervised and unsupervised."
                    },
                    {
                        "id": 32,
                        "string": "Entity Grid is considered as a supervised baseline in this paper."
                    },
                    {
                        "id": 33,
                        "string": "On the other hand, Entity Graph is selected as an unsupervised baseline."
                    },
                    {
                        "id": 34,
                        "string": "Figure 1 : Part of an example text from (Barzilay and Lapata, 2008) Entity Grid The Entity Grid model focused on the evaluation of local cohesion developed on top of the Centering theory (Barzilay and Lapata, 2008) ."
                    },
                    {
                        "id": 35,
                        "string": "The key idea of the Centering theory is that the distribution of entities in coherent texts exhibits certain regularities (Grosz et al., 1995) ."
                    },
                    {
                        "id": 36,
                        "string": "The text is said to be less coherent if it exhibits many attention shifts, i.e."
                    },
                    {
                        "id": 37,
                        "string": "frequent changes in attention (centre) (Grosz et al., 1995) ."
                    },
                    {
                        "id": 38,
                        "string": "However, if the centre of attention has smooth transitions, it will be more coherent, e.g."
                    },
                    {
                        "id": 39,
                        "string": "when sentences in a text mentioning the same entity."
                    },
                    {
                        "id": 40,
                        "string": "Barzilay and Lapata (2008) proposed a computational model by representing text as a matrix called Entity Grid in which the column corresponds to entities, the row corresponds to sentences in the text, and the cell denotes the role of the entity in the sentence."
                    },
                    {
                        "id": 41,
                        "string": "The role of an entity is defined as one of S(subject), O(object), or X(neither)."
                    },
                    {
                        "id": 42,
                        "string": "The cell is filled with \"−\" if the entity is not mentioned in the sentence."
                    },
                    {
                        "id": 43,
                        "string": "If the entity serves multiple roles in the sentence, the priority order would be S, O, and then X."
                    },
                    {
                        "id": 44,
                        "string": "They consider co-referent noun phrases as an entity."
                    },
                    {
                        "id": 45,
                        "string": "As an example, the text in Figure 1 is transformed into the Entity Grid as in Table 1 ."
                    },
                    {
                        "id": 46,
                        "string": "The bracketed words in Figure 1 are recognised as the entities in Table 1 ."
                    },
                    {
                        "id": 47,
                        "string": "Also, they differentiate salient entities."
                    },
                    {
                        "id": 48,
                        "string": "An entity is considered salient if it occurs at least t times in the text."
                    },
                    {
                        "id": 49,
                        "string": "The text is further encoded into a feature vector, denoting the probability of local entity transitions (Barzilay and Lapata, 2008) Lin et al."
                    },
                    {
                        "id": 50,
                        "string": "(2011) and Feng et al."
                    },
                    {
                        "id": 51,
                        "string": "(2014) tried to tackle this limitation by filling the cell in the grid with the discourse role of the sentence in which the entity appears."
                    },
                    {
                        "id": 52,
                        "string": "Entity Graph To tackle the disadvantages of the supervised coherence model, Guinaudeau and Strube (2013) proposed a graph model to measure text coherence."
                    },
                    {
                        "id": 53,
                        "string": "Graph data structure allows us to relate nonadjacent sentences, spanning globally in the text to reflect global coherence as opposed to the local coherence of the Entity Grid model."
                    },
                    {
                        "id": 54,
                        "string": "A text is represented as a directed bipartite graph."
                    },
                    {
                        "id": 55,
                        "string": "The first partition is a sentence partition in which each vertex represents a sentence."
                    },
                    {
                        "id": 56,
                        "string": "The second partition is a discourse partition in which each vertex represents an entity."
                    },
                    {
                        "id": 57,
                        "string": "The weighted edge between a sentence vertex and an entity vertex is established if the entity is mentioned in the sentence."
                    },
                    {
                        "id": 58,
                        "string": "A weight is assigned to each edge based on entity's role in the sentence: 3 for a subject entity, 2 for an object entity, and 1 for others."
                    },
                    {
                        "id": 59,
                        "string": "Figure 2 shows an example of the bipartite graph transformation from the text in Figure 1 ."
                    },
                    {
                        "id": 60,
                        "string": "This directed bipartite graph is further transformed into a directed projection graph in which a vertex represents a sentence, and a directed weighted edge is established between vertices if they share same entities."
                    },
                    {
                        "id": 61,
                        "string": "The direction of the edge corresponds to the surface sequential order of the sentences within the text."
                    },
                    {
                        "id": 62,
                        "string": "For example, a vertex which represents the second sentence can only have outgoing edges to third, fourth, but not to the first sentence."
                    },
                    {
                        "id": 63,
                        "string": "There are three projection methods, P U , P W , and P Acc depending on the weighting scheme of edges."
                    },
                    {
                        "id": 64,
                        "string": "P U assigns a binary weight to S1 S2 S3 1 1 0.5 S1 S2 S3 1 1 1 S1 S2 S3 6 6 5.5 P U P W P Acc Figure 3 : Example of projection graphs each edge: one for the edge connecting two sentences sharing at least one entity in common and zero for others."
                    },
                    {
                        "id": 65,
                        "string": "P W assigns the number of shared entities between connected sentences to each edge as its weight."
                    },
                    {
                        "id": 66,
                        "string": "P Acc calculates an edge weight by accumulating the products of the weights of edges sharing an entity in the bipartite graph over the shared entities by the connected two sentences."
                    },
                    {
                        "id": 67,
                        "string": "The weight of the edge established between sentence s i and s j is calculated by W ij = e∈E ij bw(e, s i ) · bw(e, s j ), (1) where E ij is the set of entities shared by s i and s j and bw(e, s) is a weight of the edge between entity e and sentence s in the bipartite graph."
                    },
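                    {
                        "string": "Editor's note: a minimal sketch of Equation (1), assuming the bipartite graph is stored as a dict mapping each sentence index to {entity: role weight (3/2/1)}:\ndef projection_weight(bip, i, j):\n    shared = bip[i].keys() & bip[j].keys()  # E_ij: entities shared by s_i and s_j\n    return sum(bip[i][e] * bip[j][e] for e in shared)"
                    },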
                    {
                        "id": 68,
                        "string": "Furthermore, the edge weight in the projection graph can be normalised with dividing by the distance between the sentences, i.e."
                    },
                    {
                        "id": 69,
                        "string": "|j − i|."
                    },
                    {
                        "id": 70,
                        "string": "Figure 3 shows the projection graph transformed from Figure 2 after the normalisation."
                    },
                    {
                        "id": 71,
                        "string": "To measure text coherence by the projection graph, Guinaudeau and Strube (2013) used the average OutDegree of every vertex in the projection graph."
                    },
                    {
                        "id": 72,
                        "string": "The OutDegree of a vertex is defined as the summation of the weight of outgoing edges leaving the vertex."
                    },
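                    {
                        "string": "Editor's note: a sketch of the average-OutDegree coherence score, with the projection graph stored as {(i, j): weight}:\ndef avg_out_degree(edges, n_sentences):\n    out = [0.0] * n_sentences\n    for (i, j), w in edges.items():\n        out[i] += w  # OutDegree of vertex i: sum of its outgoing edge weights\n    return sum(out) / n_sentences"
                    },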
                    {
                        "id": 73,
                        "string": "Constructing semantic similarity graphs As mentioned in Section 1, a text is coherent if it can convey its communication purpose to readers, integrated as a whole, cohesive, well organised, and easy to read."
                    },
                    {
                        "id": 74,
                        "string": "We would like to approach coherence from the cohesion perspective."
                    },
                    {
                        "id": 75,
                        "string": "We argue that coherence of a text is built by cohesion among its sentences."
                    },
                    {
                        "id": 76,
                        "string": "We call our method as Semantic Similarity Graph."
                    },
                    {
                        "id": 77,
                        "string": "Our proposed method employs an unsupervised learning approach."
                    },
                    {
                        "id": 78,
                        "string": "The unsupervised approach suffers less from data sparsity, domain dependence, and computational cost for training which often arise in the supervised approach."
                    },
                    {
                        "id": 79,
                        "string": "We encode a text into a graph G(V, E), where V is a set of vertices and E is a set of edges in the graph."
                    },
                    {
                        "id": 80,
                        "string": "The vertex v i ∈ V represents the i-th sentence s i in the text, and the weighted directed edge e i,j ∈ E represents a semantic relation from the i-th to the j-th sentences."
                    },
                    {
                        "id": 81,
                        "string": "In what follows, the term \"edge\" refers to the weighted directed edge."
                    },
                    {
                        "id": 82,
                        "string": "As stated by Halliday and Hasan (1976) , cohesion is a matter of lexicosemantics."
                    },
                    {
                        "id": 83,
                        "string": "Our method projects a sentence into a vector representation using pre-trained GloVe word vectors 1 by Pennington et al."
                    },
                    {
                        "id": 84,
                        "string": "(2014) ."
                    },
                    {
                        "id": 85,
                        "string": "A sentence consists of multiple words {w 1 , w 2 , · · · , w M } where each of them is mapped into a vector space, i.e."
                    },
                    {
                        "id": 86,
                        "string": "{ w 1 , w 2 , · · · , w M }."
                    },
                    {
                        "id": 87,
                        "string": "A sentence s can be encoded as a vector s by taking the average of consisting word vectors."
                    },
                    {
                        "id": 88,
                        "string": "Formally, a sentence vector s is described as s = 1 M M k=1 w k , where M denotes the number of words in the sentence."
                    },
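                    {
                        "string": "Editor's note: a sketch of the sentence encoding, assuming GloVe vectors loaded as a dict of word -> numpy array; skipping out-of-vocabulary words and the 300-d size are assumptions, as the paper does not specify them:\nimport numpy as np\n\ndef sentence_vector(words, glove, dim=300):\n    vecs = [glove[w] for w in words if w in glove]\n    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)"
                    },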
                    {
                        "id": 89,
                        "string": "We propose three methods for constructing a graph from a text based on semantic similarity between sentence pairs in the text."
                    },
                    {
                        "id": 90,
                        "string": "Given a certain sentence vertex in the graph, how to decide its counterpart vertices for establishing edges is the crucial point."
                    },
                    {
                        "id": 91,
                        "string": "The following subsections describe each method to decide a counterpart vertex."
                    },
                    {
                        "id": 92,
                        "string": "Preceding adjacent vertex (PAV) People read a text from the beginning to the end and understand a particular part of the text based for i ← 2 to N do if sim(si, si−1) > 0 then creates edge ei,i−1 with sim(si, si−1) as the weight else for j ← i − 2 to 1 do if sim(si, sj) > 0 then creates edge ei,j with sim(si, sj) as the weight break Figure 4 : Graph construction algorithm with similarity of PAV on information provided in the preceding part."
                    },
                    {
                        "id": 93,
                        "string": "When they do not understand a particular part, people look backwards for what they have missed."
                    },
                    {
                        "id": 94,
                        "string": "We mimic this reading process into graph construction that is reflected in the algorithm in Figure 4, where N is the number of sentences in the text to be processed."
                    },
                    {
                        "id": 95,
                        "string": "First we define a similarity measure sim(s i , s j ) of a pair of sentences s i and s j as sim(s i , s j ) = α uot(s i , s j ) + (1 − α) cos( s i , s j ), where uot is the number of unique overlapping terms between the sentences s i and s j divided by the number of unique terms in the two sentences; cos( s i , s j ) is a cosine similarity of the sentence vectors; α is a balancing factor ranging over [0, 1] ."
                    },
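This combined measure translates directly into code. The sketch below is an illustration under assumptions (function names and tokenisation are not from the paper); it implements sim = α·uot + (1 − α)·cos with α in [0, 1].

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity of two sentence vectors; zero if either is all-zero."""
    d = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / d if d > 0 else 0.0

def uot(tokens_i, tokens_j):
    """Unique overlapping terms divided by the unique terms of both sentences."""
    a, b = set(tokens_i), set(tokens_j)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def sim(tokens_i, tokens_j, vec_i, vec_j, alpha=0.4):
    """sim(s_i, s_j) = alpha * uot(s_i, s_j) + (1 - alpha) * cos(s_i, s_j)."""
    return alpha * uot(tokens_i, tokens_j) + (1 - alpha) * cosine(vec_i, vec_j)
```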
                    {
                        "id": 96,
                        "string": "The algorithm constructs a graph by establishing a weighted directed edge from each sentence vertex to the preceding adjacent sentence vertex (PAV) if the sim value between the current and the preceding adjacent vertices exceeds zero; otherwise, the algorithm tries to establish an edge to the next closest preceding vertex with non-zero sim value."
                    },
                    {
                        "id": 97,
                        "string": "The established edge is assigned the sim value as its weight."
                    },
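The backward scan of Figure 4 can be written compactly: trying the adjacent predecessor first and then earlier sentences is equivalent to scanning j from i − 1 down to 1 and stopping at the first positive similarity. A hedged Python sketch with 0-based indices, where sim(i, j) is any pairwise scorer such as the one above:

```python
def build_pav_graph(n, sim):
    """One backward edge per sentence: from s_i to the nearest preceding
    sentence with positive similarity, weighted by that similarity."""
    edges = {}                           # (i, j) -> weight
    for i in range(1, n):
        for j in range(i - 1, -1, -1):   # adjacent predecessor first
            s = sim(i, j)
            if s > 0:
                edges[(i, j)] = s
                break                    # at most one outgoing edge
    return edges
```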
                    {
                        "id": 98,
                        "string": "Single similar vertex (SSV) Cohesion between two sentences s i and s j means that we need to know s i in order to understand s j or vice versa (Halliday and Hasan, 1976) ."
                    },
                    {
                        "id": 99,
                        "string": "In this sense, we interpret cohesion as a semantic dependency among sentences."
                    },
                    {
                        "id": 100,
                        "string": "We simulate the semantic dependency with the semantic similarity between sentences."
                    },
                    {
                        "id": 101,
                        "string": "Since the dependency could happen in both direction, we allow edges to the following vertices as well as preceding vertices."
                    },
                    {
                        "id": 102,
                        "string": "In the previous method, \"precedence\" and \"adjacency\" are the important constraints for establishing the edges in graph construction."
                    },
                    {
                        "id": 103,
                        "string": "This S1 I S2 S3 PAV SSV MSV Figure 5 : Example of semantic similarity graphs method discards these constraints and establishes edges based on only the semantic similarity between sentences."
                    },
                    {
                        "id": 104,
                        "string": "However, the edges are still directed and weighted."
                    },
                    {
                        "id": 105,
                        "string": "Also, only a single outgoing edge is allowed from every vertex in the graph."
                    },
                    {
                        "id": 106,
                        "string": "We cast semantic dependency task into an information retrieval task."
                    },
                    {
                        "id": 107,
                        "string": "When establishing an edge from a certain sentence vertex, we search for the most similar sentence in the text."
                    },
                    {
                        "id": 108,
                        "string": "The similarity measure between two sentences s i and s j is calculated based on the cosine similarity of their semantic vectors."
                    },
                    {
                        "id": 109,
                        "string": "An edge is established from the sentence vertex in question to the most similar sentence vertex with the weight calculated by weight(e i,j ) = cos( s i , s j ) |i − j| ."
                    },
                    {
                        "id": 110,
                        "string": "(2) This weight calculation takes into account the distance between two sentences, i.e."
                    },
                    {
                        "id": 111,
                        "string": "we prefer a closer counterpart."
                    },
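A sketch of the SSV construction under these definitions follows: selection by raw cosine, weight from Equation (2). How ties are broken is not stated in the paper, so the tie-breaking implied by max below is an assumption.

```python
import numpy as np

def cosine(u, v):
    d = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / d if d > 0 else 0.0

def build_ssv_graph(sentence_vectors):
    """One outgoing edge per vertex, to the most cosine-similar sentence;
    the edge weight is cos(s_i, s_j) / |i - j| (Equation 2)."""
    n = len(sentence_vectors)
    edges = {}
    for i in range(n):
        sims = [(cosine(sentence_vectors[i], sentence_vectors[j]), j)
                for j in range(n) if j != i]
        if not sims:
            continue
        best_sim, best_j = max(sims)     # most similar counterpart
        edges[(i, best_j)] = best_sim / abs(i - best_j)
    return edges
```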
                    {
                        "id": 112,
                        "string": "Multiple similar vertex (MSV) In the previous method, we allowed only a single outgoing edge for every sentence vertex in the graph."
                    },
                    {
                        "id": 113,
                        "string": "Here we discard the singular condition and allow multiple outgoing edges for every vertex."
                    },
                    {
                        "id": 114,
                        "string": "Instead of choosing the most similar sentence in the text, we choose multiple sentences that exceed a certain threshold (θ) in terms of cosine similarity with the sentence in question."
                    },
                    {
                        "id": 115,
                        "string": "Edges are established for all vertex pairs with the edge weight given in Equation (2) ."
                    },
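The MSV variant then differs only in keeping every counterpart above the threshold rather than the single best one. A minimal sketch, reusing the cosine helper from the previous block:

```python
def build_msv_graph(sentence_vectors, theta=0.8):
    """Multiple outgoing edges per vertex: one to every other sentence whose
    cosine similarity exceeds theta, weighted as in Equation (2)."""
    n = len(sentence_vectors)
    edges = {}
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            c = cosine(sentence_vectors[i], sentence_vectors[j])
            if c > theta:
                edges[(i, j)] = c / abs(i - j)
    return edges
```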
                    {
                        "id": 116,
                        "string": "Figure 5 shows an example of semantic similarity graphs constructed by three proposed methods for the text shown in Figure 6 ."
                    },
                    {
                        "id": 117,
                        "string": "The parameters for the PAV and MSV-based methods are the optimal value in the evaluation experiment that is de-scribed in the next section, and the insertion sentence (I) was placed in the correct position (B)."
                    },
                    {
                        "id": 118,
                        "string": "Text coherence measure From a constructed graph by one of the three methods explained in the preceding subsections, text coherence measure tc is calculated by averaging averaged weight of outgoing edges from every vertex in the graph as tc = 1 N N i=1 1 L i L i k=1 weight(e ik ), where N is the number of sentences in the text and L i is the number of outgoing edges from the vertex v i ."
                    },
                    {
                        "id": 119,
                        "string": "L i is always one for the PAV and SSV based graph construction, since we allow only a single outgoing edge from every vertex in the graph in these methods."
                    },
                    {
                        "id": 120,
                        "string": "A larger tc value denotes a more coherent text."
                    },
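The tc measure itself is a nested average over the edge dictionaries produced above. One detail the paper leaves open is how a vertex with no outgoing edges contributes; the sketch below assumes it contributes zero to the outer average.

```python
from collections import defaultdict

def text_coherence(edges, n):
    """tc = (1/N) * sum_i (1/L_i) * sum_k weight(e_ik) over the N sentence
    vertices; `edges` maps (i, j) vertex pairs to edge weights."""
    outgoing = defaultdict(list)
    for (i, _j), w in edges.items():
        outgoing[i].append(w)
    if n == 0:
        return 0.0
    # Vertices absent from `outgoing` have L_i = 0 and add nothing (assumption).
    return sum(sum(ws) / len(ws) for ws in outgoing.values()) / n
```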
                    {
                        "id": 121,
                        "string": "The proposed models have two significant differences from the Entity Graph model, our direct competitor."
                    },
                    {
                        "id": 122,
                        "string": "First, the Entity Graph model only allows establishing outgoing edges in the following direction, i.e."
                    },
                    {
                        "id": 123,
                        "string": "from the vertex v i to the vertex v j , where i < j."
                    },
                    {
                        "id": 124,
                        "string": "On the other hand, the proposed models except for the PAV based graph construction allow edges in both directions."
                    },
                    {
                        "id": 125,
                        "string": "Second, the Entity Graph model only measures coherence based on shared entities between sentences with respect to their syntactic role."
                    },
                    {
                        "id": 126,
                        "string": "This is also the case for the Entity Grid model."
                    },
                    {
                        "id": 127,
                        "string": "The proposed models measure text coherence based on the similarity between semantic vectors of sentences; hence we can take into account related-yet-not-identical entities."
                    },
                    {
                        "id": 128,
                        "string": "Evaluation and results We evaluate the proposed methods on two experimental tasks: the document discrimination task and insertion task."
                    },
                    {
                        "id": 129,
                        "string": "All stop words are removed from the texts in this experiment, while lemmatisation is not employed."
                    },
                    {
                        "id": 130,
                        "string": "The performance of the proposed methods is also compared with our reimplementation of Entity Grid (Barzilay and Lapata, 2008) and Entity Graph (Guinaudeau and Strube, 2013) ."
                    },
                    {
                        "id": 131,
                        "string": "The experimental settings for each method are described below."
                    },
                    {
                        "id": 132,
                        "string": "PAV The balancing factor α ranges over [0.0, 0.1, 0.2, · · · , 1.0]."
                    },
                    {
                        "id": 133,
                        "string": "SSV There is no particular parameter to set."
                    },
                    {
                        "id": 134,
                        "string": "MSV The cosine similarity threshold θ ranges over [−1.0, 0.1, 0.2, · · · , 0.9]."
                    },
                    {
                        "id": 135,
                        "string": "Entity Grid The optimal value for transition length three (bigram and trigram) is used."
                    },
                    {
                        "id": 136,
                        "string": "In document discrimination task, we implement the Entity Grid model with and without saliency."
                    },
                    {
                        "id": 137,
                        "string": "An entity is judged as salient if it is mentioned in the text at least twice."
                    },
                    {
                        "id": 138,
                        "string": "Saliency is not employed in the insertion task because the texts in the insertion task are relatively short and an entity is not mentioned many times."
                    },
                    {
                        "id": 139,
                        "string": "Entity Graph We implemented three projection methods with normalisation: P U , P W , and P Acc ."
                    },
                    {
                        "id": 140,
                        "string": "Co-reference resolution is not employed to avoid bias as mentioned by Nahnsen (2009) ."
                    },
                    {
                        "id": 141,
                        "string": "However, we follow the suggestion by Eisner and Charniak (2011) to consider all nouns (including nonhead nouns) as entities in our experiment."
                    },
                    {
                        "id": 142,
                        "string": "The role of each entity is extracted using the dependency parser in Stanford CoreNLP toolkit ."
                    },
                    {
                        "id": 143,
                        "string": "4.1 Document discrimination task 4.1.1 Data In the document discrimination task, sentences in a text are randomly permutated to generate another text; the task is to identify the original text given a pair of the original and the randomised one."
                    },
                    {
                        "id": 144,
                        "string": "The result is considered successful if the original is identified with the strictly higher coherence value."
                    },
                    {
                        "id": 145,
                        "string": "The performance is measured by accuracy, i.e."
                    },
                    {
                        "id": 146,
                        "string": "the ratio of successfully identified pairs to all pairs in the test set."
                    },
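This protocol reduces to a strict pairwise comparison, roughly as in the sketch below; `pairs` and `score` are hypothetical placeholders for the test pairs and any coherence function, e.g. tc over a PAV graph.

```python
def discrimination_accuracy(pairs, score):
    """Accuracy over (original, permuted) text pairs; a pair counts as
    correct only if the original scores strictly higher."""
    correct = sum(1 for original, permuted in pairs
                  if score(original) > score(permuted))
    return correct / len(pairs)
```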
                    {
                        "id": 147,
                        "string": "Our data came from a part of the English WSJ text in OntoNotes Release 5.0 (LDC2013T19)."
                    },
                    {
                        "id": 148,
                        "string": "Half of the data is used for training while another half is used for testing."
                    },
                    {
                        "id": 149,
                        "string": "For each instance in both training and testing data, at most 20 random permutations were created."
                    },
                    {
                        "id": 150,
                        "string": "Detail of the data is shown in Table 2 ."
                    },
                    {
                        "id": 151,
                        "string": "Table 3 shows the result of the document discrimination task of each method with the various experimental settings."
                    },
                    {
                        "id": 152,
                        "string": "Result and discussion Entity Grid without saliency performed the best (0.845), followed by Entity Grid with saliency (0.837), PAV (0.774, α = 0.4), MSV (0.741, Table 3 : Result of the document discrimination task θ = 0.1), Entity Graph (0.725), then SSV (0.676)."
                    },
                    {
                        "id": 153,
                        "string": "The performances of PAV and MSV are increasing over changes of parameter until at certain point becomes steadily decreasing."
                    },
                    {
                        "id": 154,
                        "string": "We performed the McNemar test in R to find out that the difference in accuracy between every pair of methods is statistically significant at p < 0.05."
                    },
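The paper ran this test in R (mcnemar.test); an equivalent check can be sketched in Python with statsmodels, which is a substitution of tooling, not the authors' setup. The 2x2 counts below are placeholders, not values from the paper.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: method A correct / incorrect; columns: method B correct / incorrect,
# tallied over the same test questions (placeholder counts).
table = [[9500, 800],
         [450, 2840]]
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)   # significant if pvalue < 0.05
```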
                    {
                        "id": 155,
                        "string": "Contrary to Barzilay and Lapata (2008) , the saliency factor did not work effectively for Entity Grid in our data."
                    },
                    {
                        "id": 156,
                        "string": "The PAV and MSV based-method performed better than Entity Graph."
                    },
                    {
                        "id": 157,
                        "string": "This result suggests that coherence is not only the matter of surface overlapping of entities and their syntactic roles, but semantic similarity between sentences also should be taken into account."
                    },
                    {
                        "id": 158,
                        "string": "This also confirms that Table 4 : Number of the same judgements between two methods in the document discrimination task the semantic relation between adjacent sentences (local coherence) is more important for coherence than semantic relation between long-distance sentences in the document discrimination task."
                    },
                    {
                        "id": 159,
                        "string": "We also calculated the number of the same judgement between all pairs of methods (questions that are answered correctly and incorrectly by both methods in the pair)."
                    },
                    {
                        "id": 160,
                        "string": "Table 4 shows the number of the same judgement between every pair of the methods."
                    },
                    {
                        "id": 161,
                        "string": "We found out the PAV-MSV pair shares the largest number of the same judgement (11,998, 88.3%) ."
                    },
                    {
                        "id": 162,
                        "string": "The MSV-based method establishes an edge between sentences whenever their similarity exceeds the threshold."
                    },
                    {
                        "id": 163,
                        "string": "However, it has relatively many same judgements with PAV."
                    },
                    {
                        "id": 164,
                        "string": "This implies the local coherence is sufficient enough to solve the document discrimination task."
                    },
                    {
                        "id": 165,
                        "string": "Insertion task Data In the insertion task described in Barzilay and Lapata (2008) , the coherence measure is evaluated based on to what extent the measure can estimate the original sentence position in a text from which one sentence is taken out randomly."
                    },
                    {
                        "id": 166,
                        "string": "The coherence measure of the text with a taken-out sentence inserted at the original position, i.e."
                    },
                    {
                        "id": 167,
                        "string": "the original text, is expected to be the highest value among other values of text with the sentence inserted at a wrong position."
                    },
                    {
                        "id": 168,
                        "string": "We argue, however, adopting the TOEFL R iBT insertion type question is more suitable for this kind of task than using the artificially generated texts by sentence deletion."
                    },
                    {
                        "id": 169,
                        "string": "The TOEFL R insertion type question aims at measuring test takers' ability to understand the text coherence."
                    },
                    {
                        "id": 170,
                        "string": "Test takers are given a coherent text with an insert-sentence."
                    },
                    {
                        "id": 171,
                        "string": "The task is to find the best place to insert the insert-sentence."
                    },
                    {
                        "id": 172,
                        "string": "To the best of our observation, the texts in the TOEFL R iBT insertion type question are coherent even before the insert-sentence is inserted."
                    },
                    {
                        "id": 173,
                        "string": "An example of the TOEFL R iBT insertion (A) S1[The raising of livestock is a major economic activity in semiarid lands, where grasses are generally the dominant type of natural vegetation.]"
                    },
                    {
                        "id": 174,
                        "string": "(B) S2[The consequences of an excessive number of livestock grazing in an area are the reduction of the vegetation cover and trampling and pulverization of the soil.]"
                    },
                    {
                        "id": 175,
                        "string": "(C) S3[This is usually followed by the drying of the soil and accelerated erosion.]"
                    },
                    {
                        "id": 176,
                        "string": "(D) Question: Insert the following sentence into one of (A)-(D)."
                    },
                    {
                        "id": 177,
                        "string": "I[This economic reliance on livestock in certain regions makes large tracts of land susceptible to overgrazing.]"
                    },
                    {
                        "id": 178,
                        "string": "Figure 6 : Example of the TOEFL R iBT insertion type question (Education Testing Service, 2007) type question is shown in Figure 6 ."
                    },
                    {
                        "id": 179,
                        "string": "In the following evaluation, a method is judged as a success if it assigns the highest coherence value to the text formed by inserting the insertsentence at the correct insertion position."
                    },
                    {
                        "id": 180,
                        "string": "We do not allow tie values and judge it as fail even though the correct position has the highest tie value."
                    },
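That judging rule, including the strictness about ties, can be sketched as follows; `score` is any coherence function, and the names are illustrative rather than the authors' code.

```python
def insertion_correct(sentences, insert_sentence, gold_pos, score):
    """Try the insert-sentence at every position; succeed only if the gold
    position yields the strictly highest coherence value (ties fail)."""
    values = []
    for pos in range(len(sentences) + 1):
        candidate = sentences[:pos] + [insert_sentence] + sentences[pos:]
        values.append(score(candidate))
    best = max(values)
    return values[gold_pos] == best and values.count(best) == 1
```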
                    {
                        "id": 181,
                        "string": "We collected 104 insertion type questions from various TOEFL R iBT preparation books."
                    },
                    {
                        "id": 182,
                        "string": "The average number of sentences in a text is 7.05 (SD: standard deviation=1.85); the average number of tokens in a text is 139.8 (SD=43.7)."
                    },
                    {
                        "id": 183,
                        "string": "As the data size is relatively small, we adopted the one-heldout cross validation for the Entity Grid model."
                    },
                    {
                        "id": 184,
                        "string": "The same rank is assigned to incorrect insertion positions when training the Entity Grid model."
                    },
                    {
                        "id": 185,
                        "string": "We did not adopt the Entity Grid model considering saliency since each text is relatively short in this data thus term frequency (saliency) tends to be low for all terms."
                    },
                    {
                        "id": 186,
                        "string": "Table 5 shows the result of the insertion task of each method with the various experimental settings."
                    },
                    {
                        "id": 187,
                        "string": "Our proposed methods showed good performance, particularly the PAV-based graph construction method outperformed both baselines: Entity Grid and Entity Graph."
                    },
                    {
                        "id": 188,
                        "string": "The PAV method obtained the best performance at α = 0.0, while MSV method performed best at θ = 0.8."
                    },
                    {
                        "id": 189,
                        "string": "However, the McNemar test revealed that the difference in accuracy between every pair of methods was not statistically significant at p < 0.05."
                    },
                    {
                        "id": 190,
                        "string": "This is probably due to the limited size of the insertion data compared with the document discrimination task."
                    },
                    {
                        "id": 191,
                        "string": "Result and discussion There are two questions correctly answered and 31 questions incorrectly answered by all methods."
                    },
                    {
                        "id": 192,
                        "string": "These two correctly answered questions have Table 5 : Result of the insertion task similar characteristics, having word overlaps and synonyms across adjacent sentences."
                    },
                    {
                        "id": 193,
                        "string": "These questions also tend to contain more common words."
                    },
                    {
                        "id": 194,
                        "string": "On the other hand, the failed questions tend to contain more uncommon words, technical terms and named entities."
                    },
                    {
                        "id": 195,
                        "string": "Although the successful questions also contain named entities, they were mentioned more frequently in the texts as opposed to the failed questions."
                    },
                    {
                        "id": 196,
                        "string": "Therefore we suspected the limited coverage of our GloVe dictionary and investigated the proportion of the out of vocabulary (OOV) ratio of the texts."
                    },
                    {
                        "id": 197,
                        "string": "Among all of the questions, there are 32 out of 104 questions including the OOV words; each question contains one to three OOV words in type/in token."
                    },
                    {
                        "id": 198,
                        "string": "All methods failed in 15 out of these 32 questions but succeeded in the rest 17."
                    },
                    {
                        "id": 199,
                        "string": "This fact suggests that OOV words are not necessarily the main reason for failures in the insertion task."
                    },
                    {
                        "id": 200,
                        "string": "Comparing the parameters (α of PAV and θ of MSV) in Table 3 and Table 5 , they are different to achieve the best performance in two different datasets."
                    },
                    {
                        "id": 201,
                        "string": "In the PAV-based method, there is no significant difference in the average uot value of every pair of adjacent two sentences between the datasets."
                    },
                    {
                        "id": 202,
                        "string": "We also calculated the cosine similarity of every pair of adjacent two sentences to find more similar adjacent sentences in the insertion task data than in the document discrimination task data; 90% of the adjacent sentence similarities lies in 0.3 ∼ 0.6 in the document discrimination task, while it ranges 0.5 ∼ 0.9 in the insertion task data."
                    },
                    {
                        "id": 203,
                        "string": "This difference suggests that the uot factor helps relatively more in the document discrimination task for the PAV-based method, while it has less impact in the insertion task."
                    },
                    {
                        "id": 204,
                        "string": "This explains the difference α values of PAV across the two tasks."
                    },
                    {
                        "id": 205,
                        "string": "To investigate the difference of the parameter θ in the MSV-based model, we calculated the cosine similarity of every sentence pair in the text."
                    },
                    {
                        "id": 206,
                        "string": "In both datasets, more than 90% of the sentence similarities lies in 0.5 ∼ 1.0."
                    },
                    {
                        "id": 207,
                        "string": "When the similarity is transformed into the edge weight by dividing by the sentence distance, the difference becomes apparent; while 86.6% of the edge weights in the document discrimination task lies less than 0.2, the edge weights scatter over 0 ∼ 1.0 in the insertion task."
                    },
                    {
                        "id": 208,
                        "string": "This happens because the average length of the texts in the document discrimination task is longer than that of the insertion task."
                    },
                    {
                        "id": 209,
                        "string": "Unless setting a low threshold (θ), the MSV-based model hardly establishes edges between sentence vertices."
                    },
                    {
                        "id": 210,
                        "string": "In other words, establishing edges between distant sentences would contribute to the performance of these tasks."
                    },
                    {
                        "id": 211,
                        "string": "Table 6 : Number of the same answers between two methods in the insertion task Table 6 shows the number of the same answers between every pair of the methods."
                    },
                    {
                        "id": 212,
                        "string": "The SSV-MSV pair shares the most same answers in the insertion task among all pairs (84, 80.8%), followed by the PAV-MSV pair (79, 76.0%), then PAV-SSV pair (75, 72.1%)."
                    },
                    {
                        "id": 213,
                        "string": "The PAV-based method performs best without considering the overlapping terms between the adjacent sentences (uot) by setting α = 0."
                    },
                    {
                        "id": 214,
                        "string": "In this case, the PAV-based method is almost similar to the SSV-based method except for allowing only backwards edges."
                    },
                    {
                        "id": 215,
                        "string": "However, Table 6 shows the PAV-based method answered differently from the SSV-based method in almost 30% questions."
                    },
                    {
                        "id": 216,
                        "string": "To further investigate the difference, we focused on the questions that were answered incorrectly by the PAV-based method but answered correctly by the SSV-based method."
                    },
                    {
                        "id": 217,
                        "string": "There are 14 of such questions, in which the SSV-based method tends to establish edges between distant sentences; the average distance between sentence vertices is 2.8 (SD = 0.7)."
                    },
                    {
                        "id": 218,
                        "string": "This suggests that the SSVbased method could capture distant sentence relations contributing to text coherence more appropriately than the PAV-based method."
                    },
                    {
                        "id": 219,
                        "string": "We also investigated 11 questions that were answered incorrectly by the PAV-based method but answered correctly by the MSV-based method."
                    },
                    {
                        "id": 220,
                        "string": "In these questions, the MSV-based method tends to establish more edges than the PAV-based method."
                    },
                    {
                        "id": 221,
                        "string": "The average number of outgoing edges from a sentence vertex in the graph constructed by the MSVbased method is 2.5 (SD = 1.8)."
                    },
                    {
                        "id": 222,
                        "string": "In addition, the MSV-based method tends to establish edges between distant sentences as well as the SSV-based method; the average distance between sentence vertices is 2.6 (SD = 0.9)."
                    },
                    {
                        "id": 223,
                        "string": "This suggests that the MSV-based method also could capture many distant sentence relations contributing to text coherence more appropriately than the PAV-based method."
                    },
                    {
                        "id": 224,
                        "string": "Although the PAV-based method performs best with the present data, which considers only local cohesion between adjacent sentences, we need to introduce a more refined mechanism for incorporating distant sentence relations than the current SSV and MSV-based methods, as we showed that long-distance relations could contribute in determining text coherence."
                    },
                    {
                        "id": 225,
                        "string": "The representation of sentences and calculation of similarity between sentences would be direct targets of the refinement."
                    },
                    {
                        "id": 226,
                        "string": "Conclusion This paper presented three novel unsupervised text coherence scoring methods, in which text coherence is regarded to be realised by cohesion of sentences in the text and the cohesion is represented in a graph structure corresponding to the text."
                    },
                    {
                        "id": 227,
                        "string": "In the graph structure, a vertex corresponds to a sentence in the text, and an edge represents a semantic relationship between corresponding sentences."
                    },
                    {
                        "id": 228,
                        "string": "As cohesion is a matter of lexicosemantics, sentences are transformed into semantic vector representa-tions, and their similarity is calculated based on the cosine similarity between the vectors."
                    },
                    {
                        "id": 229,
                        "string": "Edges between sentence vertices are established based on the similarity and distance between the sentences."
                    },
                    {
                        "id": 230,
                        "string": "We presented three methods to construct a graph: the PAV, SSV, and MSV-based methods."
                    },
                    {
                        "id": 231,
                        "string": "We evaluated the proposed methods in the document discrimination task and the insertion task."
                    },
                    {
                        "id": 232,
                        "string": "Our best performing method (PAV) outperformed the unsupervised baseline (Entity Graph) but not the supervised baseline (Entity Grid) in the document discrimination task."
                    },
                    {
                        "id": 233,
                        "string": "The difference was statistically significant at p < 0.05."
                    },
                    {
                        "id": 234,
                        "string": "In the insertion task, our best performing method (PAV) outperformed both supervised and unsupervised baselines, but the difference is not statistically significant at p < 0.05."
                    },
                    {
                        "id": 235,
                        "string": "We argue that further experiment is necessary with a larger size of data in the insertion task."
                    },
                    {
                        "id": 236,
                        "string": "Our experimental result showed that our best proposed method (PAV) performed 0.774 in accuracy in the document discrimination task, but only performed 0.356 in the insertion task."
                    },
                    {
                        "id": 237,
                        "string": "There is a big gap in their performance between two tasks."
                    },
                    {
                        "id": 238,
                        "string": "The error analysis revealed a possibility to improve the performance by introducing a more refined representation of sentence vectors and calculation in semantic the similarity between sentences for capturing distant relations between sentences."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 30
                    },
                    {
                        "section": "Related work",
                        "n": "2",
                        "start": 31,
                        "end": 33
                    },
                    {
                        "section": "Entity Grid",
                        "n": "2.1",
                        "start": 34,
                        "end": 51
                    },
                    {
                        "section": "Entity Graph",
                        "n": "2.2",
                        "start": 52,
                        "end": 72
                    },
                    {
                        "section": "Constructing semantic similarity graphs",
                        "n": "3",
                        "start": 73,
                        "end": 91
                    },
                    {
                        "section": "Preceding adjacent vertex (PAV)",
                        "n": "3.1",
                        "start": 92,
                        "end": 97
                    },
                    {
                        "section": "Single similar vertex (SSV)",
                        "n": "3.2",
                        "start": 98,
                        "end": 111
                    },
                    {
                        "section": "Multiple similar vertex (MSV)",
                        "n": "3.3",
                        "start": 112,
                        "end": 117
                    },
                    {
                        "section": "Text coherence measure",
                        "n": "3.4",
                        "start": 118,
                        "end": 127
                    },
                    {
                        "section": "Evaluation and results",
                        "n": "4",
                        "start": 128,
                        "end": 151
                    },
                    {
                        "section": "Result and discussion",
                        "n": "4.1.2",
                        "start": 152,
                        "end": 164
                    },
                    {
                        "section": "Data",
                        "n": "4.2.1",
                        "start": 165,
                        "end": 190
                    },
                    {
                        "section": "Result and discussion",
                        "n": "4.2.2",
                        "start": 191,
                        "end": 225
                    },
                    {
                        "section": "Conclusion",
                        "n": "5",
                        "start": 226,
                        "end": 238
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1167-Table2-1.png",
                        "caption": "Table 2: Data for the document discrimination task (The columns “# sent.” and “# token” denote the average number of sentences and tokens in a text respectively.)",
                        "page": 5,
                        "bbox": {
                            "x1": 322.56,
                            "x2": 509.28,
                            "y1": 62.4,
                            "y2": 105.11999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Table3-1.png",
                        "caption": "Table 3: Result of the document discrimination task",
                        "page": 5,
                        "bbox": {
                            "x1": 317.76,
                            "x2": 515.04,
                            "y1": 168.48,
                            "y2": 504.96
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Table1-1.png",
                        "caption": "Table 1: Entity Grid example",
                        "page": 1,
                        "bbox": {
                            "x1": 98.88,
                            "x2": 499.2,
                            "y1": 62.4,
                            "y2": 115.19999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Figure1-1.png",
                        "caption": "Figure 1: Part of an example text from (Barzilay and Lapata, 2008)",
                        "page": 1,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 154.56,
                            "y2": 250.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Table4-1.png",
                        "caption": "Table 4: Number of the same judgements between two methods in the document discrimination task",
                        "page": 6,
                        "bbox": {
                            "x1": 82.56,
                            "x2": 280.32,
                            "y1": 62.4,
                            "y2": 135.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Figure6-1.png",
                        "caption": "Figure 6: Example of the TOEFL R© iBT insertion type question (Education Testing Service, 2007)",
                        "page": 6,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 62.879999999999995,
                            "y2": 196.32
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Figure3-1.png",
                        "caption": "Figure 3: Example of projection graphs",
                        "page": 2,
                        "bbox": {
                            "x1": 321.59999999999997,
                            "x2": 513.6,
                            "y1": 186.72,
                            "y2": 298.08
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Figure2-1.png",
                        "caption": "Figure 2: Example of bipartite graph",
                        "page": 2,
                        "bbox": {
                            "x1": 116.64,
                            "x2": 481.44,
                            "y1": 61.44,
                            "y2": 137.28
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Table6-1.png",
                        "caption": "Table 6: Number of the same answers between two methods in the insertion task",
                        "page": 7,
                        "bbox": {
                            "x1": 324.96,
                            "x2": 508.32,
                            "y1": 497.76,
                            "y2": 570.24
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Table5-1.png",
                        "caption": "Table 5: Result of the insertion task",
                        "page": 7,
                        "bbox": {
                            "x1": 82.56,
                            "x2": 280.32,
                            "y1": 62.4,
                            "y2": 389.28
                        }
                    },
                    {
                        "filename": "../figure/image/1167-Figure4-1.png",
                        "caption": "Figure 4: Graph construction algorithm with similarity of PAV",
                        "page": 3,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 529.4399999999999,
                            "y1": 68.64,
                            "y2": 160.32
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-43"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation",
                    "text": [
                        "Neural question answering (QA) systems are end-to-end trainable machine learning models which achieve top performance in domains with large training datasets",
                        "We apply an extractive neural QA system (FastQA [1]) to BioASQ 5B",
                        "Phase B (list & factoid questions)",
                        "Extractive QA: Answer is given as start and end pointers in the context (snippets)"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "2": {
                    "title": "Training Procedure",
                    "text": [
                        "Problem: Neural QA typically requires ~105 questions to train",
                        "Datasets of such scale exist in the open domain, e.g. SQuAD [2] with factoid questions on Wikipedia articles",
                        "We train in two steps:",
                        "Pre-training on a large (~105 questions) open-domain dataset (SQuAD)",
                        "Fine-tuning on BioASQ (~103 questions)"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "3": {
                    "title": "Systems",
                    "text": [
                        "We trained five models using 5-fold cross validation on all available training data",
                        "We submitted two systems:",
                        "Single: Best single model according to its respective development set",
                        "Ensemble: Ensemble of all five models (averaging scores before sigmoid/softmax activation)"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "4": {
                    "title": "Results",
                    "text": [
                        "Our system won 3/5 batches",
                        "Averaged over the five batches, our system",
                        "percentage points above the best competitor",
                        "On average, the best competitor performed 3.4 percentage points better than our ensemble model"
                    ],
                    "page_nums": [
                        8,
                        9
                    ],
                    "images": []
                },
                "5": {
                    "title": "Discussion",
                    "text": [
                        "Strengths: Competitive performance, despite:",
                        "Less feature engineering than traditional QA systems",
                        "A less domain-dependent architecture, because we dont rely on domain-specific structured resources",
                        "Extractive QA cannot generate answer which are not explicitly mentioned in the snippets",
                        "No yes/no & summary questions"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                }
            },
            "paper_title": "Neural Question Answering at BioASQ 5B",
            "paper_id": "1174",
            "paper": {
                "title": "Neural Question Answering at BioASQ 5B",
                "abstract": "This paper describes our submission to the 2017 BioASQ challenge. We participated in Task B, Phase B which is concerned with biomedical question answering (QA). We focus on factoid and list question, using an extractive QA model, that is, we restrict our system to output substrings of the provided text snippets. At the core of our system, we use FastQA, a state-ofthe-art neural QA system. We extended it with biomedical word embeddings and changed its answer layer to be able to answer list questions in addition to factoid questions. We pre-trained the model on a large-scale open-domain QA dataset, SQuAD, and then fine-tuned the parameters on the BioASQ training set. With our approach, we achieve state-of-the-art results on factoid questions and competitive results on list questions.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction BioASQ is a semantic indexing, question answering (QA) and information extraction challenge (Tsatsaronis et al., 2015) ."
                    },
                    {
                        "id": 1,
                        "string": "We participated in Task B of the challenge which is concerned with biomedical QA."
                    },
                    {
                        "id": 2,
                        "string": "More specifically, our system participated in Task B, Phase B: Given a question and gold-standard snippets (i.e., pieces of text that contain the answer(s) to the question), the system is asked to return a list of answer candidates."
                    },
                    {
                        "id": 3,
                        "string": "The fifth BioASQ challenge is taking place at the time of writing."
                    },
                    {
                        "id": 4,
                        "string": "Five batches of 100 questions each were released every two weeks."
                    },
                    {
                        "id": 5,
                        "string": "Participating systems have 24 hours to submit their results."
                    },
                    {
                        "id": 6,
                        "string": "At the time of writing, all batches had been released."
                    },
                    {
                        "id": 7,
                        "string": "The questions are categorized into different question types: factoid, list, summary and yes/no."
                    },
                    {
                        "id": 8,
                        "string": "Our work concentrates on answering factoid and list questions."
                    },
                    {
                        "id": 9,
                        "string": "For factoid questions, the system's responses are interpreted as a ranked list of answer candidates."
                    },
                    {
                        "id": 10,
                        "string": "They are evaluated using meanreciprocal rank (MRR)."
                    },
                    {
                        "id": 11,
                        "string": "For list questions, the system's responses are interpreted as a set of answers to the list question."
                    },
                    {
                        "id": 12,
                        "string": "Precision and recall are computed by comparing the given answers to the goldstandard answers."
                    },
                    {
                        "id": 13,
                        "string": "F1 score, i.e., the harmonic mean of precision and recall, is used as the official evaluation measure 1 ."
                    },
                    {
                        "id": 14,
                        "string": "Most existing biomedical QA systems employ a traditional QA pipeline, similar in structure to the baseline system by Weissenborn et al."
                    },
                    {
                        "id": 15,
                        "string": "(2013) ."
                    },
                    {
                        "id": 16,
                        "string": "They consist of several discrete steps, e.g., namedentity recognition, question classification, and candidate answer scoring."
                    },
                    {
                        "id": 17,
                        "string": "These systems require a large amount of resources and feature engineering that is specific to the biomedical domain."
                    },
                    {
                        "id": 18,
                        "string": "For example, OAQA (Zi et al., 2016) , which has been very successful in last year's challenge, uses a biomedical parser, entity tagger and a thesaurus to retrieve synonyms."
                    },
                    {
                        "id": 19,
                        "string": "Our system, on the other hand, is based on a neural network QA architecture that is trained endto-end on the target task."
                    },
                    {
                        "id": 20,
                        "string": "We build upon FastQA (Weissenborn et al., 2017) , an extractive factoid QA system which achieves state-of-the-art results on QA benchmarks that provide large amounts of training data."
                    },
                    {
                        "id": 21,
                        "string": "For example, SQuAD (Rajpurkar et al., 2016) provides a dataset of ≈ 100, 000 questions on Wikipedia articles."
                    },
                    {
                        "id": 22,
                        "string": "Our approach is to train FastQA (with some extensions) on the SQuAD dataset and then fine-tune the model parameters on the BioASQ training set."
                    },
                    {
                        "id": 23,
                        "string": "Note that by using an extractive QA network as our central component, we restrict our system's Figure 1 : Neural architecture of our system."
                    },
                    {
                        "id": 24,
                        "string": "Question and context (i.e., the snippets) are mapped directly to start and end probabilities for each context token."
                    },
                    {
                        "id": 25,
                        "string": "We use FastQA (Weissenborn et al., 2017) with modified input vectors and an output layer that supports list answers in addition to factoid answers."
                    },
                    {
                        "id": 26,
                        "string": "responses to substrings in the provided snippets."
                    },
                    {
                        "id": 27,
                        "string": "This also implies that the network will not be able to answer yes/no questions."
                    },
                    {
                        "id": 28,
                        "string": "We do, however, generalize the FastQA output layer in order to be able to answer list questions in addition to factoid questions."
                    },
                    {
                        "id": 29,
                        "string": "Model Our system is a neural network which takes as input a question and a context (i.e., the snippets) and outputs start and end pointers to tokens in the context."
                    },
                    {
                        "id": 30,
                        "string": "At its core, we use FastQA (Weissenborn et al., 2017) , a state-of-the-art neural QA system."
                    },
                    {
                        "id": 31,
                        "string": "In the following, we describe our changes to the architecture and how the network is trained."
                    },
                    {
                        "id": 32,
                        "string": "Network architecture In the input layer, the context and question tokens are mapped to high-dimensional word vectors."
                    },
                    {
                        "id": 33,
                        "string": "Our word vectors consists of three components, which are concatenated to form a single vector: • GloVe embedding: We use 300-dimensional GloVe embeddings 2 (Pennington et al., 2014) which have been trained on a large collection of web documents."
                    },
                    {
                        "id": 34,
                        "string": "• Character embedding: This embedding is computed by a 1-dimensional convolutional neural network from the characters of the words, as introduced by Seo et al."
                    },
                    {
                        "id": 35,
                        "string": "(2016) ."
                    },
                    {
                        "id": 36,
                        "string": "• Biomedical Word2Vec embeddings: We use the biomedical word embeddings provided by Pavlopoulos et al."
                    },
                    {
                        "id": 37,
                        "string": "(2014) ."
                    },
                    {
                        "id": 38,
                        "string": "These are 200-dimensional Word2Vec embeddings (Mikolov et al., 2013) which were trained on ≈ 10 million PubMed abstracts."
                    },
                    {
                        "id": 39,
                        "string": "To the embedding vectors, we concatenate a one-hot encoding of the question type (list or factoid)."
                    },
                    {
                        "id": 40,
                        "string": "Note that these features are identical for all tokens."
                    },
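                    {
                        "id": 40,
                        "string": "As a minimal sketch (ours, not the authors' code; the 50-d character-CNN output size and the helper name are assumptions), the per-token input vector could be assembled as follows:\n\nimport numpy as np\n\ndef build_token_vector(glove_vec, char_vec, biomed_vec, question_type):\n    # Concatenate the three embeddings described above with a 2-d one-hot\n    # question-type encoding (identical for every token of a question).\n    one_hot = np.array([1.0, 0.0]) if question_type == 'factoid' else np.array([0.0, 1.0])\n    return np.concatenate([glove_vec, char_vec, biomed_vec, one_hot])\n\nvec = build_token_vector(np.zeros(300), np.zeros(50), np.zeros(200), 'list')\nassert vec.shape == (552,)  # 300 GloVe + 50 char-CNN (assumed) + 200 biomedical + 2"
                    },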
                    {
                        "id": 41,
                        "string": "Following our embedding layer, we invoke FastQA in order to compute start and end scores for all context tokens."
                    },
                    {
                        "id": 42,
                        "string": "Because end scores are conditioned on the chosen start, there are O(n 2 ) end scores where n is the number of context tokens."
                    },
                    {
                        "id": 43,
                        "string": "We denote the start index by i ∈ [1, n], the end index by j ∈ [i, n], the start scores by y i start , and end scores by y i,j end ."
                    },
                    {
                        "id": 44,
                        "string": "In our output layer, the start, end, and span probabilities are computed as: p i start = σ(y i start ) (1) p i,· end = sof tmax(y i,· end ) (2) p i,j span = p i start · p i,j end (3) where σ denotes the sigmoid function."
                    },
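                    {
                        "id": 44,
                        "string": "To make Equations 1-3 concrete, here is a small NumPy sketch (ours, not the authors' code); masking of invalid spans with j < i is omitted for brevity:\n\nimport numpy as np\n\ndef span_probabilities(y_start, y_end):\n    # y_start: (n,) start scores; y_end: (n, n) end scores conditioned on the start.\n    p_start = 1.0 / (1.0 + np.exp(-y_start))  # Eq. (1): sigmoid, not softmax\n    e = np.exp(y_end - y_end.max(axis=1, keepdims=True))\n    p_end = e / e.sum(axis=1, keepdims=True)  # Eq. (2): softmax per start index\n    return p_start[:, None] * p_end           # Eq. (3): span probabilities\n\np_span = span_probabilities(np.random.randn(8), np.random.randn(8, 8))"
                    },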
                    {
                        "id": 45,
                        "string": "By computing the start probability via the sigmoid rather than softmax function (as used in FastQA), we enable the model to output multiple spans as likely answer spans."
                    },
                    {
                        "id": 46,
                        "string": "This generalizes the factoid QA network to list questions."
                    },
                    {
                        "id": 47,
                        "string": "Training & decoding Loss We define our loss as the cross-entropy of the correct start and end indices."
                    },
                    {
                        "id": 48,
                        "string": "In the case of multiple occurrences of the same answer, we only minimize the span of the lowest loss."
                    },
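                    {
                        "id": 48,
                        "string": "A sketch of this loss (hypothetical helper; the paper provides no code): when the gold answer occurs several times, only its lowest-loss occurrence contributes:\n\nimport numpy as np\n\ndef multi_occurrence_loss(p_span, occurrences):\n    # p_span: (n, n) span probabilities from Equations 1-3;\n    # occurrences: (start, end) positions at which the same gold answer appears.\n    losses = [-np.log(p_span[i, j] + 1e-12) for i, j in occurrences]\n    return min(losses)  # only the easiest occurrence is minimized"
                    },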
                    {
                        "id": 49,
                        "string": "Table 1 : Preliminary results for factoid and list questions for all five batches and for our single and ensemble systems."
                    },
                    {
                        "id": 50,
                        "string": "We report MRR and F1 scores for factoid and list questions, respectively."
                    },
                    {
                        "id": 51,
                        "string": "In parentheses, we report the rank of the respective systems relative to all other systems in the challenge."
                    },
                    {
                        "id": 52,
                        "string": "The last row averages the performance numbers of the respective system and question type across the five batches."
                    },
                    {
                        "id": 53,
                        "string": "Optimization We train the network in two steps: First, the network is trained on SQuAD, following the procedure by Weissenborn et al."
                    },
                    {
                        "id": 54,
                        "string": "(2017) (pretraining phase) ."
                    },
                    {
                        "id": 55,
                        "string": "Second, we fine-tune the network parameters on BioASQ (fine-tuning phase)."
                    },
                    {
                        "id": 56,
                        "string": "For both phases, we use the Adam optimizer (Kingma and Ba, 2014) with an exponentially decaying learning rate."
                    },
                    {
                        "id": 57,
                        "string": "We start with learning rates of 10 −3 and 10 −4 for the pre-training and fine-tuning phases, respectively."
                    },
                    {
                        "id": 58,
                        "string": "BioASQ dataset preparation During finetuning, we extract answer spans from the BioASQ training data by looking for occurrences of the gold standard answer in the provided snippets."
                    },
                    {
                        "id": 59,
                        "string": "Note that this approach is not perfect as it can produce false positives (e.g., the answer is mentioned in a sentence which does not answer the question) and false negatives (e.g., a sentence answers the question, but the exact string used is not in the synonym list)."
                    },
                    {
                        "id": 60,
                        "string": "Because BioASQ usually contains multiple snippets for a given question, we process all snippets independently and then aggregate the answer spans, sorting globally according to their probability p i,j span ."
                    },
                    {
                        "id": 61,
                        "string": "Decoding During the inference phase, we retrieve the top 20 answers span via beam search with beam size 20."
                    },
                    {
                        "id": 62,
                        "string": "From this sorted list of answer strings, we remove all duplicate strings."
                    },
                    {
                        "id": 63,
                        "string": "For factoid questions, we output the top five answer strings as our ranked list of answer candidates."
                    },
                    {
                        "id": 64,
                        "string": "For list questions, we use a probability cutoff threshold t, such that {(i, j)|p i,j span ≥ t} is the set of answers."
                    },
                    {
                        "id": 65,
                        "string": "We set t to be the threshold for which the list F1 score on the development set is optimized."
                    },
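                    {
                        "id": 65,
                        "string": "The decoding logic could be sketched as follows (the candidate format and the default threshold value are assumptions; t is tuned on the development set as described above):\n\ndef decode(candidates, question_type, t=0.1):\n    # candidates: (answer_string, probability) pairs, already aggregated over\n    # all snippets and sorted by span probability in descending order.\n    seen, unique = set(), []\n    for answer, prob in candidates:\n        if answer not in seen:  # remove duplicate strings\n            seen.add(answer)\n            unique.append((answer, prob))\n    if question_type == 'factoid':\n        return unique[:5]  # ranked list of five candidates\n    return [(a, p) for a, p in unique if p >= t]  # list questions: cutoff t"
                    },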
                    {
                        "id": 66,
                        "string": "Ensemble In order to further tweak the performance of our systems, we built a model ensemble."
                    },
                    {
                        "id": 67,
                        "string": "For this, we trained five single models using 5-fold cross-validation on the entire training set."
                    },
                    {
                        "id": 68,
                        "string": "These models are combined by averaging their start and end scores before computing the span probabilities (Equations 1-3)."
                    },
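                    {
                        "id": 68,
                        "string": "A sketch of the ensembling step (the score interface is hypothetical): raw start and end scores are averaged before the probabilities of Equations 1-3 are computed:\n\nimport numpy as np\n\ndef ensemble_scores(models, question, context):\n    # models: the five cross-validation models; each is assumed to return\n    # raw (y_start, y_end) scores for the given question and context.\n    starts, ends = zip(*(m.score(question, context) for m in models))\n    return np.mean(starts, axis=0), np.mean(ends, axis=0)"
                    },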
                    {
                        "id": 69,
                        "string": "As a result, we submit two systems to the challenge: The best single model (according to its development set) and the model ensemble."
                    },
                    {
                        "id": 70,
                        "string": "Implementation We implemented our system using TensorFlow (Abadi et al., 2016) ."
                    },
                    {
                        "id": 71,
                        "string": "It was trained on an NVidia GForce Titan X GPU."
                    },
                    {
                        "id": 72,
                        "string": "Results & discussion We report the results for all five test batches of BioASQ 5 (Task 5b, Phase B) in Table 1 ."
                    },
                    {
                        "id": 73,
                        "string": "Note that the performance numbers are not final, as the provided synonyms in the gold-standard answers will be updated as a manual step, in order to reflect valid responses by the participating systems."
                    },
                    {
                        "id": 74,
                        "string": "This has not been done by the time of writing 3 ."
                    },
                    {
                        "id": 75,
                        "string": "Note also that -in contrast to previous BioASQ challenges -systems are no longer allowed to provide an own list of synonyms in this year's challenge."
                    },
                    {
                        "id": 76,
                        "string": "In general, the single and ensemble system are performing very similar relative to the rest of field: Their ranks are almost always right next to each other."
                    },
                    {
                        "id": 77,
                        "string": "Between the two, the ensemble model performed slightly better on average."
                    },
                    {
                        "id": 78,
                        "string": "On factoid questions, our system has been very successful, winning three out of five batches."
                    },
                    {
                        "id": 79,
                        "string": "On list questions, however, the relative performance varies significantly."
                    },
                    {
                        "id": 80,
                        "string": "We expect our system to perform better on factoid questions than list questions, because our pre-training dataset (SQuAD) does not contain any list questions."
                    },
                    {
                        "id": 81,
                        "string": "Starting with batch 3, we also submitted responses to yes/no questions by always answering yes."
                    },
                    {
                        "id": 82,
                        "string": "Because of a very skewed class distribution in the BioASQ dataset, this is a strong baseline."
                    },
                    {
                        "id": 83,
                        "string": "Because this is done merely to have baseline performance for this question type and because of the naivety of the method, we do not list or discuss the results here."
                    },
                    {
                        "id": 84,
                        "string": "Conclusion In this paper, we summarized the system design of our BioASQ 5B submission for factoid and list questions."
                    },
                    {
                        "id": 85,
                        "string": "We use a neural architecture which is trained end-to-end on the QA task."
                    },
                    {
                        "id": 86,
                        "string": "This approach has not been applied to BioASQ questions in previous challenges."
                    },
                    {
                        "id": 87,
                        "string": "Our results show that our approach achieves state-of-the art results on factoid questions and competitive results on list questions."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 28
                    },
                    {
                        "section": "Model",
                        "n": "2",
                        "start": 29,
                        "end": 31
                    },
                    {
                        "section": "Network architecture",
                        "n": "2.1",
                        "start": 32,
                        "end": 46
                    },
                    {
                        "section": "Training & decoding",
                        "n": "2.2",
                        "start": 47,
                        "end": 71
                    },
                    {
                        "section": "Results & discussion",
                        "n": "3",
                        "start": 72,
                        "end": 83
                    },
                    {
                        "section": "Conclusion",
                        "n": "4",
                        "start": 84,
                        "end": 87
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1174-Table1-1.png",
                        "caption": "Table 1: Preliminary results for factoid and list questions for all five batches and for our single and ensemble systems. We report MRR and F1 scores for factoid and list questions, respectively. In parentheses, we report the rank of the respective systems relative to all other systems in the challenge. The last row averages the performance numbers of the respective system and question type across the five batches.",
                        "page": 2,
                        "bbox": {
                            "x1": 133.92,
                            "x2": 463.2,
                            "y1": 62.4,
                            "y2": 184.32
                        }
                    },
                    {
                        "filename": "../figure/image/1174-Figure1-1.png",
                        "caption": "Figure 1: Neural architecture of our system. Question and context (i.e., the snippets) are mapped directly to start and end probabilities for each context token. We use FastQA (Weissenborn et al., 2017) with modified input vectors and an output layer that supports list answers in addition to factoid answers.",
                        "page": 1,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 299.03999999999996,
                            "y1": 61.44,
                            "y2": 276.0
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-44"
        },
        {
            "slides": {
                "0": {
                    "title": "Stance Classification in Tweets",
                    "text": [
                        "Automatically identify users positions on a pre-chosen target of interest (e.g., public issues) from text",
                        "Target (given): Climate Change is Real Concern",
                        "Tweet (given): We need to protect our islands and stop the destruction of coral reef.",
                        "(Output) Stance label (to be predicted): Favour"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Cross Target Stance Classification",
                    "text": [
                        "Generalise user stance on unseen targets",
                        "Target: A mining project in Australia (Destination)",
                        "Tweet: Environmentalists warn the $16 billion coal facility will damage the Great Barrier Reef.",
                        "Apply classifiers trained on a source target to the destination target",
                        "Target: Climate Change is Real Concern (Source)",
                        "Tweet: We need to protect our islands and stop the destruction of coral reef."
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Our Approach Basic Idea",
                    "text": [
                        "For targets both related to a common domain, stance generalisation is possible via domain-specific information that reflects users major concerns",
                        "Tweet: Environmentalists warn the $16 billion coal facility will damage the Great Barrier Reef.",
                        "Tweet: We need to protect our islands and stop the destruction of coral reef.",
                        "Target: A mining project in Australia",
                        "Target: Climate Change is Real Concern",
                        "Destination target Source target",
                        "Domain aspects: e.g., reef, destruction/damage"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Extraction of Domain Aspects",
                    "text": [
                        "Key properties of domain aspects",
                        "They tend to be mentioned by multiple users in a corpus",
                        "They tend to carry the core meaning of a stance-bearing tweet",
                        "In our project dataset, 3776 our of 41805 tweets mentioned the aspect reef",
                        "why fund Adani #Coal Mine and destroy our Reef when theres so much sun in Queensland?",
                        "And your massive polluting Carmichael mine will do its bit to kill Australia's great barrier reef?",
                        "And thousands of jobs will be lost in reef tourism when Adani goes ahead.",
                        "The coral reef crisis is actually a crisis of governance."
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "9": {
                    "title": "Visualisation of attention",
                    "text": [
                        "The heatmap of the attention weights assigned to some tweet examples",
                        "FM: Feminist Movement A: Against",
                        "LA: Legalization of Abortion F: Favour",
                        "HC: Hillary Clinton Words central to expressing stances",
                        "DT: Donald Trump CC: Climate Change is Concern AMP: Australian mining project",
                        "are highlighted by our model!"
                    ],
                    "page_nums": [
                        11,
                        12,
                        13,
                        14
                    ],
                    "images": [
                        "figure/image/1175-Table4-1.png"
                    ]
                },
                "10": {
                    "title": "Conclusion",
                    "text": [
                        "A self-attention model which can attend high-level information about the domain for stance generalisation",
                        "Domain aspect words are useful to determine the user stance",
                        "Incorporation of target divergence into our modelling.",
                        "Learning aspects from multiple sources (e.g., environment, community, and economics aspects for mining projects)"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                }
            },
            "paper_title": "Cross-Target Stance Classification with Self-Attention Networks",
            "paper_id": "1175",
            "paper": {
                "title": "Cross-Target Stance Classification with Self-Attention Networks",
                "abstract": "In stance classification, the target on which the stance is made defines the boundary of the task, and a classifier is usually trained for prediction on the same target. In this work, we explore the potential for generalizing classifiers between different targets, and propose a neural model that can apply what has been learned from a source target to a destination target. We show that our model can find useful information shared between relevant targets which improves generalization in certain scenarios.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Stance classification is the task of automatically identifying users' positions about a specific target from text (Mohammad et al., 2017) ."
                    },
                    {
                        "id": 1,
                        "string": "Table 1 shows an example of this task, where the stance of the sentence is recognized as favorable on the target climate change is concern."
                    },
                    {
                        "id": 2,
                        "string": "Traditionally, this task is approached by learning a target-specific classifier that is trained for prediction on the same target of interest (Hasan and Ng, 2013; Mohammad et al., 2016; Ebrahimi et al., 2016) ."
                    },
                    {
                        "id": 3,
                        "string": "This implies that a new classifier has to be built from scratch on a well-prepared set of ground-truth data whenever predictions are needed for an unseen target."
                    },
                    {
                        "id": 4,
                        "string": "An alternative to this approach is to conduct a cross-target classification, where the classifier is adapted from different but related targets (Augenstein et al., 2016) , which allows benefiting from the knowledge of existing targets."
                    },
                    {
                        "id": 5,
                        "string": "For example, in our project we are interested in online users' stances on the approvals of particular mining projects in the country."
                    },
                    {
                        "id": 6,
                        "string": "It might be useful to start with a classifier that is adapted from a related target such as climate change is concern (presumably available and annotated)."
                    },
                    {
                        "id": 7,
                        "string": "(Table 1: An example of the stance classification task. Sentence: \"We need to protect our islands and stop the destruction of coral reef.\" Target: Climate Change is Concern. Stance: Favor.)"
                    },
                    {
                        "id": 8,
                        "string": "In both cases, users could discuss the impacts of the targets on some common issues, such as the environment or communities."
                    },
                    {
                        "id": 9,
                        "string": "Cross-target stance classification is a more challenging task simply because the language models may not be compatible between different targets."
                    },
                    {
                        "id": 10,
                        "string": "However, for some targets that can be recognized as being related to the same and more general domains, it could be possible to generalize through certain aspects of the domains that reflect users' major concerns."
                    },
                    {
                        "id": 11,
                        "string": "For example, from the following sentence, whose stance is against the approval of a mining project, \"Environmentalists warn the $16 billion coal facility will damage the Great Barrier Reef\", it can be seen that both this sentence and the one in Table 1 mention the same aspect \"reef destruction/damage\", which is closely related to the \"environment\" domain."
                    },
                    {
                        "id": 12,
                        "string": "In this paper, we focus on cross-target stance classification and explore the limits of generalizing models between different but domain-related targets 1 ."
                    },
                    {
                        "id": 13,
                        "string": "The basic idea is to learn a set of domainspecific aspects from a source target, and then apply them to prediction on a destination target."
                    },
                    {
                        "id": 14,
                        "string": "To this end, we propose CrossNet, a novel neural model that implements the above idea based on the self-attention mechanism."
                    },
                    {
                        "id": 15,
                        "string": "Our preliminary analysis shows that the proposed model can find useful domain-specific information from a stancebearing sentence and that the classification performance is improved in certain domains."
                    },
                    {
                        "id": 16,
                        "string": "Model In this section, we introduce the proposed model, CrossNet, for cross-target stance classification."
                    },
                    {
                        "id": 17,
                        "string": "Figure 1 shows the architecture of CrossNet."
                    },
                    {
                        "id": 18,
                        "string": "It consists of four layers from the Embedding Layer (bottom) to the Prediction Layer (top)."
                    },
                    {
                        "id": 19,
                        "string": "It works by taking a stance-bearing sentence and a target as input and yielding the predicted stance label as output."
                    },
                    {
                        "id": 20,
                        "string": "In the following, we present the implementation of each layer in CrossNet."
                    },
                    {
                        "id": 21,
                        "string": "Embedding Layer There are two inputs in CrossNet: a stance-bearing sentence P and a descriptive target T (e.g, climate change is concern in Table 1)."
                    },
                    {
                        "id": 22,
                        "string": "We use word embeddings (Mikolov et al., 2013) to represent each word in the input as a dense vector."
                    },
                    {
                        "id": 23,
                        "string": "The output of this layer are two sequences of vectors P = {p 1 , ..., p |P | } and T = {t 1 , ..., t |T | }, where p, t are word vectors."
                    },
                    {
                        "id": 24,
                        "string": "Context Encoding Layer In this layer, we encode the contextual information in the input sentence and target."
                    },
                    {
                        "id": 25,
                        "string": "We use a bi-directional Long Short-Term Memory Network (BiLSTM) (Hochreiter and Schmidhuber, 1997) to capture the left and right contexts of each word in the input."
                    },
                    {
                        "id": 26,
                        "string": "Moreover, to account for the impact of the target on stance inference, we borrow the idea of conditional encoding (Augenstein et al., 2016) to model the dependency of the sentence on the target."
                    },
                    {
                        "id": 27,
                        "string": "Formally, we first use a BiLSTM T to encode the target: [ − → h T i − → c T i ] = − −−− → LSTM T (t i , − → h T i−1 , − → c T i−1 ) [ ← − h T i ← − c T i ] = ← −−− − LSTM T (t i , ← − h T i+1 , ← − c T i+1 ) (1) where h ∈ R h and c ∈ R h are the hidden state and cell state of LSTM."
                    },
                    {
                        "id": 28,
                        "string": "The symbol − →(← −) indicates the forward (backward) pass."
                    },
                    {
                        "id": 29,
                        "string": "t i is the input word vector at time step i."
                    },
                    {
                        "id": 30,
                        "string": "Then, we learn a conditional encoding of the sentence P , by initializing BiLSTM P (a different BiLSTM) with the final states of BiLSTM T : [ − → h P 1 − → c P 1 ] = − −−− → LSTM P (p 1 , − → h T |T | , − → c T |T | ) [ ← − h P |P | ← − c P |P | ] = ← −−− − LSTM P (p |P | , ← − h T 1 , ← − c T 1 ) (2) It can be seen that the initialization is done by aligning the forward (backward) pass of the two BiLSTMs."
                    },
                    {
                        "id": 31,
                        "string": "The output is a contextually-encoded sequence, H P = {h P 1 , ..., h P |P | }, where h = [ − → h ; ← − h ] ∈ R 2h with [; ] as the vector concatenation operation."
                    },
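                    {
                        "id": 31,
                        "string": "A minimal PyTorch sketch (ours, not the authors' code) of the conditional encoding in Equations 1-2; PyTorch stores the two directions index-wise in the state tensors, which matches the forward-to-forward and backward-to-backward alignment described above:\n\nimport torch\nimport torch.nn as nn\n\nemb, hid = 200, 60  # embedding and hidden sizes from Section 3.3\nlstm_t = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)\nlstm_p = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)\n\ntarget = torch.randn(1, 5, emb)     # (batch, |T|, embedding)\nsentence = torch.randn(1, 20, emb)  # (batch, |P|, embedding)\n\n_, (h_t, c_t) = lstm_t(target)         # final states of BiLSTM_T\nH_P, _ = lstm_p(sentence, (h_t, c_t))  # H^P: (1, 20, 2*hid), conditioned on the target"
                    },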
                    {
                        "id": 32,
                        "string": "Aspect Attention Layer In this layer, we implement the idea of discovering domain-specific aspects for cross-target stance inference."
                    },
                    {
                        "id": 33,
                        "string": "In particular, the key observation we make is that the domain aspects that reflect users' major concerns are usually the core of understanding their stances, and could be mentioned by multiple users in a discussion."
                    },
                    {
                        "id": 34,
                        "string": "For example, we find that many users in our corpus mention the aspect \"reef\" to express their concerns about the impact of a mining project on the Great Barrier Reef."
                    },
                    {
                        "id": 35,
                        "string": "Based on this observation, the perception of the domain aspects can be boiled down to finding the sentence parts that not only carry the core idea of a stance-bearing sentence but also tend to be recurring in the corpus."
                    },
                    {
                        "id": 36,
                        "string": "First, to capture the recurrences of the domain aspects, a simple way is to make every input sentence be consumed by this layer (see Figure 1 ), so that the layer parameters are shared across the corpus for being stimulated by all appearances of the domain aspects."
                    },
                    {
                        "id": 37,
                        "string": "Then, we utilize self-attention to signal the core parts of a stance-bearing sentence."
                    },
                    {
                        "id": 38,
                        "string": "Self-attention is an attention mechanism for selecting specific parts of a sequence by relating its elements at different positions (Vaswani et al., 2017; Cheng et al., 2016) ."
                    },
                    {
                        "id": 39,
                        "string": "In our case, the self-attention process is based on the assumption that the core parts of a sentence are those that are compatible with the semantics of the entire sentence."
                    },
                    {
                        "id": 40,
                        "string": "To this end, we introduce a compatibility function to score the semantic compatibility between the encoded se-quence H P and each of its hidden states h P : c i = w 2 σ(W 1 h P i + b 1 ) + b 2 (3) where W 1 ∈ R d×2h , w 2 ∈ R d , b 1 ∈ R d , and b 2 ∈ R are trainable parameters, and σ is the activation function."
                    },
                    {
                        "id": 41,
                        "string": "Note that all the above parameters are shared by every hidden state in H P ."
                    },
                    {
                        "id": 42,
                        "string": "Next, we compute the attention weight a i for each h P i based on its compatibility score via softmax operation: a i = exp(c i ) |P | j=1 exp(c j ) (4) Finally, we can obtain the domain aspect encoded representation based on the attention weights: A P = |P | i=1 a i h P i (5) where A P ∈ R 2h is the domain aspect encoding for sentence P and also the output of this layer."
                    },
                    {
                        "id": 43,
                        "string": "Prediction Layer We predict the stance label of the sentence based on its domain aspect encoding: y = softmax(MLP(A P )) (6) where we use a multilayer perceptron (MLP) to consume the domain aspect encoding A P and apply the softmax to get the predicted probability for each of the C classes,ŷ = {y 1 , ..., y C }."
                    },
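                    {
                        "id": 43,
                        "string": "Equations 3-6 in a small PyTorch sketch (ours; the tanh activation and the single linear layer standing in for the MLP are assumptions):\n\nimport torch\nimport torch.nn as nn\n\nclass AspectAttention(nn.Module):\n    def __init__(self, two_h, d):\n        super().__init__()\n        self.proj = nn.Linear(two_h, d)  # W_1, b_1 in Eq. (3)\n        self.w2 = nn.Linear(d, 1)        # w_2, b_2 in Eq. (3)\n    def forward(self, H):                # H: (batch, |P|, 2h)\n        c = self.w2(torch.tanh(self.proj(H)))  # compatibility scores, Eq. (3)\n        a = torch.softmax(c, dim=1)            # attention weights, Eq. (4)\n        return (a * H).sum(dim=1)              # domain aspect encoding A^P, Eq. (5)\n\natt = AspectAttention(two_h=120, d=60)\nA_P = att(torch.randn(1, 20, 120))\nprobs = torch.softmax(nn.Linear(120, 3)(A_P), dim=-1)  # Eq. (6), 3 stance classes"
                    },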
                    {
                        "id": 44,
                        "string": "Model Training For model training, we use multi-class crossentropy loss, J (θ) = − N i C j y (i) j logŷ (i) j + λ Θ (7) where N is the size of training set."
                    },
                    {
                        "id": 45,
                        "string": "y is the groundtruth label indicator for each class, andŷ is the predicted probability."
                    },
                    {
                        "id": 46,
                        "string": "λ is the coefficient for L 2regularization."
                    },
                    {
                        "id": 47,
                        "string": "Θ denotes the set of all trainable parameters in our model."
                    },
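                    {
                        "id": 47,
                        "string": "A sketch of the training objective in Equation 7 (the exact form of the regularizer is garbled in the source; a squared L2 penalty is assumed, with the coefficient 0.01 from Section 3.3):\n\nimport torch\n\ndef crossnet_loss(y_hat, y, params, lam=0.01):\n    # y_hat: (N, C) predicted probabilities; y: (N, C) one-hot ground truth.\n    ce = -(y * torch.log(y_hat + 1e-12)).sum()  # multi-class cross-entropy\n    l2 = sum((p ** 2).sum() for p in params)    # penalty on all trainable parameters\n    return ce + lam * l2"
                    },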
                    {
                        "id": 48,
                        "string": "Experiments This section reports the results of quantitative and qualitative evaluations of the proposed model."
                    },
                    {
                        "id": 49,
                        "string": "Table 2 ."
                    },
                    {
                        "id": 50,
                        "string": "Tweets on an Australian mining project (AM): the second is our collection of tweets on a mining project in Australia obtained using Twitter API."
                    },
                    {
                        "id": 51,
                        "string": "It includes 220,067 tweets posted from January 2016 to June 2017 that contain the project name in the text."
                    },
                    {
                        "id": 52,
                        "string": "We remove all URL-only tweets and duplicate tweets, and obtain a set of 40,852 (unlabeled) tweets."
                    },
                    {
                        "id": 53,
                        "string": "Due to the lack of annotation, this dataset is only used for our qualitative evaluation."
                    },
                    {
                        "id": 54,
                        "string": "To align with our scenario, the above targets can be categorized into three different domains: Women's Rights (FM, LA), American Politics (HC, DT), and Environments (CC, AM)."
                    },
                    {
                        "id": 55,
                        "string": "Metric We use F1-score to measure the classification performance."
                    },
                    {
                        "id": 56,
                        "string": "Due to the imbalanced class distributions of the SemEval dataset, we compute both micro-averaged (large classes dominate) and macro-averaged (small classes dominate) F1scores (Manning et al., 2008) , and use their average as the metric, i.e., F = 1 2 (F micro + F macro )."
                    },
                    {
                        "id": 57,
                        "string": "To evaluate the effectiveness of target adaptation, we use the metric transfer ratio (Glorot et al., 2011) to compare the cross-target and in-target performance of a model: Q = F(S, D) / F_b(D, D), where F(S, D) is the cross-target F1-score of a model trained on the source target S and tested on the destination target D, and F_b(D, D) is the in-target F1-score of a baseline model trained and tested on the same target D, which serves as the performance calibration for target adaptation."
                    },
                    {
                        "id": 58,
                        "string": "Training setup The word embeddings are initialized with the pre-trained 200d GloVe word vectors trained on the 27B-token Twitter corpus (Pennington et al., 2014), and fixed during training."
                    },
                    {
                        "id": 59,
                        "string": "The model is trained (90%) and validated (10%) on a source target, and tested on a destination target."
                    },
                    {
                        "id": 60,
                        "string": "The following model settings are selected based on a small grid search on the validation set: the LSTM hidden size of 60, the MLP layer size of 60, and dropout 0.1."
                    },
                    {
                        "id": 61,
                        "string": "The L2-regularization coefficient λ in the loss is 0.01."
                    },
                    {
                        "id": 62,
                        "string": "ADAM (Kingma and Ba, 2014) is used as the optimizer, with a learning rate of 10 −3 ."
                    },
                    {
                        "id": 63,
                        "string": "Stratified 10-fold cross-validation is conducted to produce averaged results."
                    },
                    {
                        "id": 64,
                        "string": "Classification Performance This section reports the results of our model and two baseline approaches on cross-target stance classification."
                    },
                    {
                        "id": 65,
                        "string": "BiLSTM: this is a base model for our task."
                    },
                    {
                        "id": 66,
                        "string": "It has two BiLSTMs for encoding the sentence and target separately."
                    },
                    {
                        "id": 67,
                        "string": "Then, the concatenation of the resulting encodings is fed into the final Prediction Layer to generate predicted stance labels."
                    },
                    {
                        "id": 68,
                        "string": "In our evaluation, this model is treated as the baseline model for deriving the in-target performance calibration F b (D, D)."
                    },
                    {
                        "id": 69,
                        "string": "MITRE (Augenstein et al., 2016) best system in SemEval-2016 Task 6."
                    },
                    {
                        "id": 70,
                        "string": "It utilizes the conditional encoding to learn a targetdependent representation for the input sentence."
                    },
                    {
                        "id": 71,
                        "string": "The conditional encoding is realized in the same way as the Context Encoding Layer does in our model, namely by using the hidden states of the target-encoding BiLSTM to initialize the sentence-encoding BiLSTM."
                    },
                    {
                        "id": 72,
                        "string": "Table 3 shows the results (in-target and crosstarget) on the two domains: Women's Rights and American Politics."
                    },
                    {
                        "id": 73,
                        "string": "First, it is observed that MITRE outperforms BiLSTM over all target configurations, suggesting that, compared to simple concatenation, the conditional encoding of the target information could be more helpful to capture the dependency of the sentence on the target."
                    },
                    {
                        "id": 74,
                        "string": "Second, our model is shown to achieve better results than the two baselines in almost all cases (only slightly worse than MITRE on LA under the in-target setting, and the difference is not statistically significant), which implies that the aspect attention mechanism adopted in our model could benefit target-level generalization while it does not hurt the in-target performance."
                    },
                    {
                        "id": 75,
                        "string": "Moreover, by comparing the performance of our model under different target configurations, we see that the improvements brought by our model are more significant on the cross-target task than they are on the intarget task, with an average improvement of 6.6% (cross-target) vs. 3.0% (in-target) over MITRE in F1-score, which demonstrates a greater advantage of our model in the cross-target task."
                    },
                    {
                        "id": 76,
                        "string": "Finally, according to the transfer ratio results, the general drop from the in-target to cross-target performance (26% averaged over all cases) could imply that while the target-independent information (i.e., the domain-specific aspects) is shown to benefit generalization, it could be important to also consider the information that is specific to the destination target for model building (which has not yet been explored in this work)."
                    },
                    {
                        "id": 77,
                        "string": "Visualization of Attention To show that our model can select sentence parts that are related to domain aspects, we visualize the self-attention results on some tweet examples that are correctly classified by our model in Table 4 ."
                    },
                    {
                        "id": 78,
                        "string": "We can see that the most highlighted parts in each example are relevant to the respective domain."
                    },
                    {
                        "id": 79,
                        "string": "For example, \"feminist\", \"rights\", and \"equality\" are commonly used when talking about women's rights, and \"president\" and \"dreams\" of-   ten appear in text about politics."
                    },
                    {
                        "id": 80,
                        "string": "It is also interesting to note that words that are specific to the destination target may not be captured by the model learned from the source target, such as \"abortion\" in sentence 1 and \"trumps\" in sentence 3."
                    },
                    {
                        "id": 81,
                        "string": "This makes sense because those words are rare in the source target corpus and thus not well noticed by the model."
                    },
                    {
                        "id": 82,
                        "string": "Finally, for our project, we can see from the last two sentences that the model learned from climate change is concern is able to concentrate on words that are central to understanding the authors' stances on the approval of the mining project, such as \"reef\", \"destroy\", \"environmental\", and \"disaster\"."
                    },
                    {
                        "id": 83,
                        "string": "Overall, the above visualization demonstrates that our model could benefit stance inference across related targets through capturing domain-specific information."
                    },
                    {
                        "id": 84,
                        "string": "Learned Domain-Specific Aspects Finally, it is also possible to show the learned domain aspects by extracting all sentence parts in a corpus that are highly attended by our model."
                    },
                    {
                        "id": 85,
                        "string": "Table 5 presents a number of samples from the intersections between the sets of highly-attended words on the respective targets in the three domains."
                    },
                    {
                        "id": 86,
                        "string": "Again, we see that these highly-attended words are specific to the respective domains."
                    },
                    {
                        "id": 87,
                        "string": "We also notice that besides the domain-aspect words, our model can find words that carry sentiments as well, such as \"great\", \"crazy\", and \"beautiful\", which contribute to stance prediction."
                    },
                    {
                        "id": 88,
                        "string": "Conclusion and Future Work In this work, we study cross-target stance classification and propose a novel self-attention neural model that can extract target-independent information for model generalization."
                    },
                    {
                        "id": 89,
                        "string": "Experimental results show that the proposed model can perceive high-level domain-specific information in a sentence and achieves superior results over a number of baselines in certain domains."
                    },
                    {
                        "id": 90,
                        "string": "In the future, there are several ways of extending our model."
                    },
                    {
                        "id": 91,
                        "string": "First, selecting the effective source targets to generalize from is crucial for achieving satisfying results on the destination targets."
                    },
                    {
                        "id": 92,
                        "string": "One possibility could be to learn certain correlations between target closeness and generalization performance, which could further be used for guiding the target selection process."
                    },
                    {
                        "id": 93,
                        "string": "Second, our current model for identifying users' stances on mining projects only generalizes from one source target (i.e., Climate Change is Concern)."
                    },
                    {
                        "id": 94,
                        "string": "However, a mining project in general could affect other aspects of our society such as community and economics."
                    },
                    {
                        "id": 95,
                        "string": "It could be useful to also consider other related sources for knowledge transfer."
                    },
                    {
                        "id": 96,
                        "string": "Finally, it would be interesting to evaluate our model in a multilingual scenario (Taulé et al., 2017) , in order to examine its generalization ability (whether it can attend to useful domain-specific information in a new language) and multilingual scope."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 15
                    },
                    {
                        "section": "Model",
                        "n": "2",
                        "start": 16,
                        "end": 20
                    },
                    {
                        "section": "Embedding Layer",
                        "n": "2.1",
                        "start": 21,
                        "end": 23
                    },
                    {
                        "section": "Context Encoding Layer",
                        "n": "2.2",
                        "start": 24,
                        "end": 31
                    },
                    {
                        "section": "Aspect Attention Layer",
                        "n": "2.3",
                        "start": 32,
                        "end": 42
                    },
                    {
                        "section": "Prediction Layer",
                        "n": "2.4",
                        "start": 43,
                        "end": 43
                    },
                    {
                        "section": "Model Training",
                        "n": "2.5",
                        "start": 44,
                        "end": 47
                    },
                    {
                        "section": "Experiments",
                        "n": "3",
                        "start": 48,
                        "end": 54
                    },
                    {
                        "section": "Metric",
                        "n": "3.2",
                        "start": 55,
                        "end": 57
                    },
                    {
                        "section": "Training setup",
                        "n": "3.3",
                        "start": 58,
                        "end": 63
                    },
                    {
                        "section": "Classification Performance",
                        "n": "3.4",
                        "start": 64,
                        "end": 76
                    },
                    {
                        "section": "Visualization of Attention",
                        "n": "3.5",
                        "start": 77,
                        "end": 83
                    },
                    {
                        "section": "Learned Domain-Specific Aspects",
                        "n": "3.6",
                        "start": 84,
                        "end": 87
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "4",
                        "start": 88,
                        "end": 96
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1175-Table1-1.png",
                        "caption": "Table 1: An example of stance classification task.",
                        "page": 0,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 221.76,
                            "y2": 259.2
                        }
                    },
                    {
                        "filename": "../figure/image/1175-Figure1-1.png",
                        "caption": "Figure 1: The Architecture of CrossNet.",
                        "page": 1,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.08,
                            "y1": 61.44,
                            "y2": 235.2
                        }
                    },
                    {
                        "filename": "../figure/image/1175-Table2-1.png",
                        "caption": "Table 2: SemEval-2016 Task 6 Tweet Stance Detection dataset used in our evaluation.",
                        "page": 2,
                        "bbox": {
                            "x1": 314.88,
                            "x2": 518.4,
                            "y1": 62.4,
                            "y2": 132.0
                        }
                    },
                    {
                        "filename": "../figure/image/1175-Table3-1.png",
                        "caption": "Table 3: Classification performance of our model and other baselines on 4 targets: Feminist Movement (FM), Hillary Clinton (HC), Legalization of Abortion (LA), and Donald Trump (DT).",
                        "page": 3,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 480.47999999999996,
                            "y2": 700.3199999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1175-Table4-1.png",
                        "caption": "Table 4: The heatmap of the attention weights assigned by the Aspect Attention Layer to the tweets with stance labels favor (F) and against (A). “[N]” denotes the mining project’s name of interest.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 70.08,
                            "y2": 171.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1175-Table5-1.png",
                        "caption": "Table 5: Samples of the learned domain aspects.",
                        "page": 4,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 281.28,
                            "y1": 213.6,
                            "y2": 354.24
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-45"
        },
        {
            "slides": {
                "1": {
                    "title": "Motivation",
                    "text": [
                        "I We previously explored using unsupervised probabilistic methods to predict",
                        "sentence acceptability, and found some success.",
                        "I It provides evidence that linguistic knowledge can be represented as a probabilistic",
                        "system, addressing foundational questions concerning the categorical nature of grammatical knowledge."
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Acceptability in Context",
                    "text": [
                        "I In previous experiments sentence acceptability was judged (by humans) or",
                        "predicted (by models) independently of context.",
                        "I Here we extend the research to investigate the impact of context on acceptability.",
                        "I Context is defined as the full document environment surrounding a sentence.",
                        "I Specifically, we want to understand the influence of context on:",
                        "I Human acceptability ratings",
                        "I Model prediction of acceptability"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Human Acceptability Ratings in Context",
                    "text": [
                        "I We perform round-trip translation of sentences (e.g. ENFREN) from English",
                        "Wikipedia to generate a set of sentences with varying degrees of acceptability.",
                        "I We use MTurk to collect acceptability judgements (rated on a 4-point scale).",
                        "I Annotation task was run twice: first without context, and second within the",
                        "I We collect multiple ratings for a sentence and take the mean.",
                        "I Human acceptability ratings:",
                        "I without context = h;",
                        "I with context = h+"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "With context h Against Without context h Ratings",
                    "text": [
                        "mean h per sentence"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": [
                        "figure/image/1178-Figure1-1.png"
                    ]
                },
                "5": {
                    "title": "Observations",
                    "text": [
                        "I Pearsons r = 0.80 between h+ and h.",
                        "I Context boosts acceptability ratings most for ill-formed sentences.",
                        "I Surprisingly, context reduces acceptability for the most acceptable sentences.",
                        "I Context compresses distribution of ratings.",
                        "I One-vs-rest correlation, performance of a single annotator against the rest: 0.628",
                        "I Low correlation is explained by the compression effect of context - good and bad",
                        "sentences are now less separable."
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "6": {
                    "title": "Modelling Acceptability with Unsupervised Models",
                    "text": [
                        "I lstm: standard LSTM language model",
                        "I tdlm: a topically driven language model; language model is driven by a topic",
                        "vector automatically learnt on the document context.",
                        "I 4 variants at test time:",
                        "I Use only the sentence as input: lstm and tdlm;",
                        "I Use both sentence and context as input: lstm+ and tdlm+.",
                        "I lstm+ incorporates context by feeding it to the LSTM network and taking the",
                        "f inal state as the initial state for the current sentence.",
                        "I Models trained on 100K English Wikipedia articles (40M tokens)."
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "7": {
                    "title": "Acceptability Measures",
                    "text": [
                        "I P = probability of the sentence given by a model;",
                        "I U = unigram probability of the sentence;",
                        "I L = sentence length"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "8": {
                    "title": "Results",
                    "text": [
                        "lstm lstm+ tdlm tdlm+",
                        "I Across all models (lstm or tdlm) and human ratings (h or h+), using context at",
                        "test time improves performance.",
                        "I tdlm consistently outperforms lstm (even tdlm lstm+).",
                        "I Lower correlation when predicting sentence acceptability judged with context.",
                        "I It suggests h+ ratings are more difficult to predict than h, which corresponds to",
                        "the low one-vs-rest human performance."
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "9": {
                    "title": "Summary",
                    "text": [
                        "I Context positively influences acceptability, particularly for ill-formed sentences.",
                        "I But it also has the reverse effect for well-formed sentences.",
                        "I Incorporating context (during training or testing) helps modelling acceptability.",
                        "I Prediction performance declines when tested on acceptability ratings judged with",
                        "context, due to the compression effect of ratings.",
                        "I Future work: investigate why context reduces acceptability for highly acceptable"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                }
            },
            "paper_title": "The Influence of Context on Sentence Acceptability Judgements",
            "paper_id": "1178",
            "paper": {
                "title": "The Influence of Context on Sentence Acceptability Judgements",
                "abstract": "We investigate the influence that document context exerts on human acceptability judgements for English sentences, via two sets of experiments. The first compares ratings for sentences presented on their own with ratings for the same set of sentences given in their document contexts. The second assesses the accuracy with which two types of neural models -one that incorporates context during training and one that does not -predict these judgements. Our results indicate that: (1) context improves acceptability ratings for ill-formed sentences, but also reduces them for well-formed sentences; and (2) context helps unsupervised systems to model acceptability. 1",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Sentence acceptability is defined as the extent to which a sentence is well formed or natural to native speakers of a language."
                    },
                    {
                        "id": 1,
                        "string": "It encompasses semantic, syntactic and pragmatic plausibility and other non-linguistic factors such as memory limitation."
                    },
                    {
                        "id": 2,
                        "string": "Grammaticality, by contrast, is the syntactic well-formedness of a sentence."
                    },
                    {
                        "id": 3,
                        "string": "Grammaticality as characterised by formal linguists is a theoretical concept that is difficult to elicit from non-expert assessors."
                    },
                    {
                        "id": 4,
                        "string": "In the research presented here we are interested in predicting acceptability judgements."
                    },
                    {
                        "id": 5,
                        "string": "2 Lau et al."
                    },
                    {
                        "id": 6,
                        "string": "(2015, 2016) present unsupervised probabilistic methods to predict sentence acceptability, where sentences were judged independently of context."
                    },
                    {
                        "id": 7,
                        "string": "In this paper we extend this 1 Annotated data (with acceptability ratings) is available at: https://github.com/GU-CLASP/BLL2018."
                    },
                    {
                        "id": 8,
                        "string": "2 See Lau et al."
                    },
                    {
                        "id": 9,
                        "string": "(2016) for a detailed discussion of the relationship between acceptability and grammaticality."
                    },
                    {
                        "id": 10,
                        "string": "They provide motivation for measuring acceptability rather than grammaticality in their crowd source surveys and modelling experiments."
                    },
                    {
                        "id": 11,
                        "string": "research to investigate the impact of context on human acceptability judgements, where context is defined as the full document environment surrounding a sentence."
                    },
                    {
                        "id": 12,
                        "string": "We also test the accuracy of more sophisticated language models -one which incorporates document context during trainingto predict human acceptability judgements."
                    },
                    {
                        "id": 13,
                        "string": "We believe that understanding how context influences acceptability is crucial to success in modelling human acceptability judgements."
                    },
                    {
                        "id": 14,
                        "string": "It has implications for tasks such as style/coherence assessment and language generation."
                    },
                    {
                        "id": 15,
                        "string": "Showing a strong correlation between unsupervised language model sentence probability and acceptability supports the view that linguistic knowledge can be represented as a probabilistic system."
                    },
                    {
                        "id": 16,
                        "string": "This result addresses foundational questions concerning the nature of grammatical knowledge (Lau et al., 2016) ."
                    },
                    {
                        "id": 17,
                        "string": "Our work is guided by 3 hypotheses: H 1 : Document context boosts sentence acceptability judgements."
                    },
                    {
                        "id": 18,
                        "string": "H 2 : Document context helps language models to model acceptability."
                    },
                    {
                        "id": 19,
                        "string": "H 3 : A language model predicts acceptability more accurately when it is tested on sentences within document context than when it is tested on the sentences alone."
                    },
                    {
                        "id": 20,
                        "string": "We sample sentences and their document contexts from English Wikipedia articles."
                    },
                    {
                        "id": 21,
                        "string": "We perform round-trip machine translation to generate sentences of varying degrees of well-formedness and ask crowdsourced workers to judge the acceptability of these sentences, presenting the sentences with and without their document environments."
                    },
                    {
                        "id": 22,
                        "string": "We describe this experiment and address H 1 in Section 2."
                    },
                    {
                        "id": 23,
                        "string": "In Section 3, we experiment with two types of language models to predict acceptability: a standard language model and a topically-driven model."
                    },
                    {
                        "id": 24,
                        "string": "The latter extends the language model by incorporating document context as a conditioning variable."
                    },
                    {
                        "id": 25,
                        "string": "The model comparison allows us to understand the impact of incorporating context during training for acceptability prediction."
                    },
                    {
                        "id": 26,
                        "string": "We also experiment with adding context as input at test time for both models."
                    },
                    {
                        "id": 27,
                        "string": "These experiments collectively address H 2 , by investigating the impact of using context during training and testing for modelling acceptability."
                    },
                    {
                        "id": 28,
                        "string": "We evaluate the models against crowd-sourced annotated sentences judged both in context and out of context."
                    },
                    {
                        "id": 29,
                        "string": "This tests H 3 ."
                    },
                    {
                        "id": 30,
                        "string": "In Section 4 we briefly consider related work."
                    },
                    {
                        "id": 31,
                        "string": "We indicate the issues to be addressed in future research and summarise our conclusions in Section 5."
                    },
                    {
                        "id": 32,
                        "string": "The Influence of Document Context on Acceptability Ratings Our goal is to construct a dataset of sentences annotated with acceptability ratings, judged with and without document context."
                    },
                    {
                        "id": 33,
                        "string": "To obtain sentences and their document context, we extracted 100 random articles from the English Wikipedia and sampled a sentence from each article."
                    },
                    {
                        "id": 34,
                        "string": "To generate a set of sentences with varying degrees of acceptability we used the Moses MT system (Koehn et al., 2007) to translate each sentence from English to 4 target languages -Czech, Spanish, German and French -and then back to English."
                    },
                    {
                        "id": 35,
                        "string": "3 We chose these 4 languages because preliminary experiments found that they produce sentences with different sorts of grammatical, semantic, and lexical infelicities."
                    },
                    {
                        "id": 36,
                        "string": "Note that we only translate the sentences; the document context is not modified."
                    },
                    {
                        "id": 37,
                        "string": "To gather acceptability judgements we used Amazon Mechanical Turk and asked workers to judge acceptability using a 4-point scale."
                    },
                    {
                        "id": 38,
                        "string": "4 We ran the annotation task twice: first where we presented sentences without context, and second within their document context."
                    },
                    {
                        "id": 39,
                        "string": "For the in-context experiment, the target sentence was highlighted in boldface, with one preceding and one succeeding sentence included as additional context."
                    },
                    {
                        "id": 40,
                        "string": "Workers had the option of revealing the full document context by clicking on the preceding and succeeding sentences."
                    },
                    {
                        "id": 41,
                        "string": "We did not check whether subjects viewed the full context when recording their ratings."
                    },
                    {
                        "id": 42,
                        "string": "Henceforth human judgements made without context are denoted as h − and judgements with context as h + ."
                    },
                    {
                        "id": 43,
                        "string": "We collected 20 judgements per sentence, giving us a total of a 20,000 annotations (100 sentences × 5 languages × 2 presentations × 20 judgements)."
                    },
                    {
                        "id": 44,
                        "string": "To ensure annotation reliability, sentences were presented in groups of five, one from the original English set, and four from the round-trip translations, one per target language, with no sentence type (English original or its translated variant) appearing more than once in a HIT."
                    },
                    {
                        "id": 45,
                        "string": "5 We assume that the original English sentences are generally acceptable, and we filtered out workers who fail to consistently rate these sentences as such."
                    },
                    {
                        "id": 46,
                        "string": "6 Postfiltering, we aggregate the multiple ratings and compute the mean."
                    },
                    {
                        "id": 47,
                        "string": "We first look at the correlation between withoutcontext (h − ) and with-context (h + ) mean ratings."
                    },
                    {
                        "id": 48,
                        "string": "Figure 1 is a scatter plot of this relation."
                    },
                    {
                        "id": 49,
                        "string": "We found a strong correlation of Pearson's r = 0.80 between the two sets of ratings."
                    },
                    {
                        "id": 50,
                        "string": "We see that adding context generally improves acceptability (evidenced by points above the diagonal), but the pattern reverses as acceptability increases, suggesting that context boosts sentence ratings most for ill-formed sentences."
                    },
                    {
                        "id": 51,
                        "string": "The trend persists throughout the whole range of acceptability, so that for the most acceptable sentences, adding context actually diminishes their rated acceptability."
                    },
                    {
                        "id": 52,
                        "string": "We can see this trend clearly in Figure 1 , where the average difference between h − and h + is represented by the distance between the linear regression and the diagonal."
                    },
                    {
                        "id": 53,
                        "string": "These lines cross at h + = h − = 3.28, the point where context no longer boosts acceptability."
                    },
                    {
                        "id": 54,
                        "string": "To understand the spread of individual judgements on a sentence, we compute the standard deviation of ratings for each sentence and then take the mean over all sentences."
                    },
                    {
                        "id": 55,
                        "string": "We found a small difference: 0.71 for h − and 0.76 for h + ."
                    },
                    {
                        "id": 56,
                        "string": "We also calculate one-vs-rest correlation, where for each  sentence we randomly single out an annotator rating and compute the Pearson correlation between these judgements against the mean ratings for the rest of the annotators."
                    },
                    {
                        "id": 57,
                        "string": "7 This number can be interpreted as a performance upper bound on a single annotator for predicting the mean acceptability of a group of annotators."
                    },
                    {
                        "id": 58,
                        "string": "We found a big gap in the one-vs-rest correlations: 0.628 for h − and 0.293 for h + ."
                    },
                    {
                        "id": 59,
                        "string": "We were initially surprised as to why the correlation is so different, even though the standard deviation is similar."
                    },
                    {
                        "id": 60,
                        "string": "Further investigation reveals that this dif-7 Trials are repeated 1000 times and the average correlation is computed, to insure that we obtain robust results and avoid outlier ratings skewing our Pearson coefficient value."
                    },
                    {
                        "id": 61,
                        "string": "See Lau et al."
                    },
                    {
                        "id": 62,
                        "string": "(2016) for the details of this and an alternative method for simulating an individual annotator."
                    },
                    {
                        "id": 63,
                        "string": "ference is explained by the pattern shown in Figure 1."
                    },
                    {
                        "id": 64,
                        "string": "Adding context \"compressess\" the distribution of (mean) ratings, pushing the extremes to the middle (i.e."
                    },
                    {
                        "id": 65,
                        "string": "very ill/well-formed sentences are now less ill/well-formed)."
                    },
                    {
                        "id": 66,
                        "string": "The net effect is that it lowers correlation, as the good and bad sentences are now less separable."
                    },
                    {
                        "id": 67,
                        "string": "One possible explanation for this compression is that workers focus more on global semantic and pragmatic coherence when context is supplied."
                    },
                    {
                        "id": 68,
                        "string": "If this is the case, then the syntactic mistakes introduced by MT have less effect on ratings than for the out-of-context sentences, where global coherence is not a factor."
                    },
                    {
                        "id": 69,
                        "string": "To give a sense how context influences ratings, we present a sample of sentences with their without-context (h − ) and with-context (h + ) ratings in Table 1 ."
                    },
                    {
                        "id": 70,
                        "string": "Modelling Sentence Acceptability with Enriched LMs Lau et al."
                    },
                    {
                        "id": 71,
                        "string": "(2015 Lau et al."
                    },
                    {
                        "id": 72,
                        "string": "( , 2016 ) explored a number of unsupervised models for predicting acceptability, including n-gram language models, Bayesian HMMs, LDA-based models, and a simple recurrent network language model."
                    },
                    {
                        "id": 73,
                        "string": "They found that the neural model outperforms the others consistently over multiple domains, in several languages."
                    },
                    {
                        "id": 74,
                        "string": "In light of this, we experiment with neural models in this paper."
                    },
                    {
                        "id": 75,
                        "string": "We use: (1) (2017) )."
                    },
                    {
                        "id": 76,
                        "string": "8 lstm is a standard LSTM language model, trained over a corpus to predict word sequences."
                    },
                    {
                        "id": 77,
                        "string": "8 We use the following tdlm implementation: https://github.com/jhlau/ topically-driven-language-model."
                    },
                    {
                        "id": 78,
                        "string": "Table 2 : Acceptability measures for predicting the acceptability of a sentence."
                    },
                    {
                        "id": 79,
                        "string": "s is the sentence (|s| is the sentence length); c is the document context (only used by lstm + and tdlm + ); P m (s, c) is the probability of the sentence given by a model; P u (s) is the unigram probability of the sentence."
                    },
                    {
                        "id": 80,
                        "string": "tdlm is a joint model of topic and language."
                    },
                    {
                        "id": 81,
                        "string": "The topic model component produces topics by processing documents through a convolutional layer and aligning it with trainable topic embeddings."
                    },
                    {
                        "id": 82,
                        "string": "The language model component incorporates context by combining its topic vector (produced by the topic model component) with the LSTM's hidden state, to generate the probability distribution for the next word."
                    },
                    {
                        "id": 83,
                        "string": "After training, given a sentence both lstm and tdlm produce a sentence probability (aggregated using the sequence of conditional word probabilities)."
                    },
                    {
                        "id": 84,
                        "string": "In our case, we also have the document context, information which both models can leverage."
                    },
                    {
                        "id": 85,
                        "string": "Therefore we have 4 variants at test time: models that use only the sentence as input, lstm − and tdlm − , and models that use both sentence and context, lstm + and tdlm + ."
                    },
                    {
                        "id": 86,
                        "string": "9 lstm + incorporates context by feeding it to the LSTM network and taking its final state 10 as the initial state for the current sentence."
                    },
                    {
                        "id": 87,
                        "string": "tdlm − ignores the context by converting the topic vector into a vector of zeros."
                    },
                    {
                        "id": 88,
                        "string": "To map sentence probability to acceptability, we compute several acceptability measures (Lau et al., 2016) , which are designed to normalise sentence length and word frequency."
                    },
                    {
                        "id": 89,
                        "string": "These are given in Table 2 ."
                    },
                    {
                        "id": 90,
                        "string": "We train tdlm and lstm on a sample of 100K English Wikipedia articles, which has no over-9 There are only two trained models: lstm and tdlm."
                    },
                    {
                        "id": 91,
                        "string": "The four variants are generated by varying the type of input provided at test time when computing the sentence probability."
                    },
                    {
                        "id": 92,
                        "string": "10 The final state is the hidden state produced by the last word of the context."
                    },
                    {
                        "id": 93,
                        "string": "Table 3 : Pearson's r of acceptability measures and human ratings."
                    },
                    {
                        "id": 94,
                        "string": "\"Rtg\" = \"Rating\", \"LP\" = Log-Prob, \"Mean\" = Mean LP, \"NrmD\" = Norm LP (Div) and \"NrmS\" = Norm LP (Sub)."
                    },
                    {
                        "id": 95,
                        "string": "Boldface indicates optimal performance in each row."
                    },
                    {
                        "id": 96,
                        "string": "lap with the 100 documents used for the annotation described in Section 2."
                    },
                    {
                        "id": 97,
                        "string": "The training data has approximately 40M tokens and a vocabulary size of 66K."
                    },
                    {
                        "id": 98,
                        "string": "11 Training details and all model hyperparameter settings are detailed in the supplementary material."
                    },
                    {
                        "id": 99,
                        "string": "To assess the performance of the acceptability measures, we compute Pearson's r against mean human ratings (Table 3) ."
                    },
                    {
                        "id": 100,
                        "string": "We also experimented with Spearman's rank correlation, but found similar trends and so present only the Pearson results."
                    },
                    {
                        "id": 101,
                        "string": "The first observation is that we replicate the performance of the original experiment setting (Lau et al., 2015) ."
                    },
                    {
                        "id": 102,
                        "string": "We achieved a correlation of 0.584 when we compared lstm − against h − , which is similar to the previously reported performance (0.570)."
                    },
                    {
                        "id": 103,
                        "string": "12 SLOR outperforms all other measures, which is consistent with the findings in Lau et al."
                    },
                    {
                        "id": 104,
                        "string": "(2015) ."
                    },
                    {
                        "id": 105,
                        "string": "We will focus on SLOR for the remainder of the discussion."
                    },
                    {
                        "id": 106,
                        "string": "Across all models (lstm and tdlm) and human ratings (h − and h + ), using context at test time improves model performance."
                    },
                    {
                        "id": 107,
                        "string": "This suggests that taking context into account helps in modelling acceptability, regardless of whether it is tested against judgements made with (h + ) or without context (h − )."
                    },
                    {
                        "id": 108,
                        "string": "13 We also see that tdlm consis-tently outperforms lstm over both types of human ratings and test input variants, showing that tdlm is a better model at predicting acceptability."
                    },
                    {
                        "id": 109,
                        "string": "In fact, if we look at tdlm − vs. lstm + (h − : 0.640 vs. 0.633; h + : 0.557 vs. 0.546), tdlm still performs better without context than lstm with context."
                    },
                    {
                        "id": 110,
                        "string": "These observations confirm that context helps in the modelling of acceptability, whether it is incorporated during training (lstm vs. tdlm) or at test time (lstm − /tdlm − vs. lstm + /tdlm + )."
                    },
                    {
                        "id": 111,
                        "string": "Interestingly, we see a lower correlation when we are predicting sentence acceptability that is judged with context."
                    },
                    {
                        "id": 112,
                        "string": "The SLOR correlation of lstm + /tdlm + vs. h + (0.546/568) is lower than that of lstm − /tdlm − vs. h − (0.584/0.640)."
                    },
                    {
                        "id": 113,
                        "string": "This result corresponds to the low one-vs-rest human performance of h + compared to h − (0.299 vs. 0.636, see Section 2)."
                    },
                    {
                        "id": 114,
                        "string": "It suggests that h + ratings are more difficult to predict than h − ."
                    },
                    {
                        "id": 115,
                        "string": "With human performance taken into account, both models substantially outperform the average single-annotator correlation, which is encouraging for the prospect of accurate model prediction on this task."
                    },
                    {
                        "id": 116,
                        "string": "Nagata (1988) reports a small scale experiment with 12 Japanese speakers on the effect of repetition of sentences, and embedding them in context."
                    },
                    {
                        "id": 117,
                        "string": "He notes that both repetition and context cause acceptability judgements for ill formed sentences to be more lenient."
                    },
                    {
                        "id": 118,
                        "string": "Gradience in acceptability judgements are studied in the works of Sorace and Keller (2005) and Sprouse (2007) ."
                    },
                    {
                        "id": 119,
                        "string": "Related Work There is an extensive literature on automatic detection of grammatical errors (Atwell, 1987; Chodorow and Leacock, 2000; Bigert and Knutsson, 2002; Sjöbergh, 2005; Wagner et al., 2007) , but limited work on acceptability prediction."
                    },
                    {
                        "id": 120,
                        "string": "Heilman et al."
                    },
                    {
                        "id": 121,
                        "string": "(2014) trained a linear regression model that uses features such as spelling errors, sentence scores from n-gram models and parsers."
                    },
                    {
                        "id": 122,
                        "string": "Lau et al."
                    },
                    {
                        "id": 123,
                        "string": "(2015 Lau et al."
                    },
                    {
                        "id": 124,
                        "string": "( , 2016 experimented with unsupervised learners and found that a simple RNN was the best performing model."
                    },
                    {
                        "id": 125,
                        "string": "Both works predict acceptability independently of any contextual factors outside the target sentence."
                    },
                    {
                        "id": 126,
                        "string": "model has no information as to what words will be relevant."
                    },
                    {
                        "id": 127,
                        "string": "Future Work and Conclusions We found that (i) context positively influences acceptability, particularly for ill-formed sentences, but it also has the reverse effect for well-formed sentences (H 1 ); (ii) incorporating context (during training or testing) when modelling acceptability improves model performance (H 2 ); and (iii) prediction performance declines when tested on judgements collected with context, overturning our original hypothesis (H 3 )."
                    },
                    {
                        "id": 128,
                        "string": "We discovered that human agreement decreases when context is introduced, suggesting that ratings are less predictable in this case."
                    },
                    {
                        "id": 129,
                        "string": "While it is intuitive that context should improve acceptability for ill-formed sentences, it is less obvious why it reduces acceptability for well-formed sentences."
                    },
                    {
                        "id": 130,
                        "string": "We will investigate this question in future work."
                    },
                    {
                        "id": 131,
                        "string": "We will also experiment with a wider range of models, including sentence embedding methodologies such as Skip-Thought (Kiros et al., 2015) ."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 31
                    },
                    {
                        "section": "The Influence of Document Context on Acceptability Ratings",
                        "n": "2",
                        "start": 32,
                        "end": 69
                    },
                    {
                        "section": "Modelling Sentence Acceptability with",
                        "n": "3",
                        "start": 70,
                        "end": 118
                    },
                    {
                        "section": "Related Work",
                        "n": "4",
                        "start": 119,
                        "end": 126
                    },
                    {
                        "section": "Future Work and Conclusions",
                        "n": "5",
                        "start": 127,
                        "end": 131
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1178-Table1-1.png",
                        "caption": "Table 1: A sample of sentences with their without-context (h−) and with-context (h+) ratings. The “Language” column denotes the intermediate translation language. The original English sentence is marked with “—”.",
                        "page": 2,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 67.2,
                            "y2": 154.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1178-Figure1-1.png",
                        "caption": "Figure 1: With-context (h+) against withoutcontext (h−) ratings. Points above the full diagonal represent sentences which are judged more acceptable when presented with context. The total least-square linear regression is shown as the second line.",
                        "page": 2,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 283.2,
                            "y1": 226.07999999999998,
                            "y2": 422.4
                        }
                    },
                    {
                        "filename": "../figure/image/1178-Table2-1.png",
                        "caption": "Table 2: Acceptability measures for predicting the acceptability of a sentence. s is the sentence (|s| is the sentence length); c is the document context (only used by lstm+ and tdlm+); Pm(s, c) is the probability of the sentence given by a model; Pu(s) is the unigram probability of the sentence.",
                        "page": 3,
                        "bbox": {
                            "x1": 93.6,
                            "x2": 269.28,
                            "y1": 62.4,
                            "y2": 168.0
                        }
                    },
                    {
                        "filename": "../figure/image/1178-Table3-1.png",
                        "caption": "Table 3: Pearson’s r of acceptability measures and human ratings. “Rtg” = ”Rating”, “LP” = LogProb, “Mean” = Mean LP, “NrmD” = Norm LP (Div) and “NrmS” = Norm LP (Sub). Boldface indicates optimal performance in each row.",
                        "page": 3,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 63.36,
                            "y2": 178.07999999999998
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-46"
        },
        {
            "slides": {
                "0": {
                    "title": "STARTING FROM THE END spoiler",
                    "text": [
                        "the Indo-European phylogenetic tree",
                        "(the ground truth) phylogenetic tree reconstructed from monolingual English texts translated from",
                        "French Italian Spanish Portuguese Latvian Lithuanian Polish Slovak Czech Slovenian Bulgarian",
                        "Swedish Danish Romanian Lithuanian Portuguese Czech Slovak Bulgarian Latvian Polish Slovenian"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "3": {
                    "title": "Reconstruction of language trees",
                    "text": [
                        "POS-trigrams, reflecting shallow syntactic structures",
                        "(strongly associated with interference)",
                        "Function words, reflecting grammar (associated with interference)",
                        "Cohesive markers (associated with a translation universals)",
                        "AGGLOMERATIVE (HIERARCHICAL) CLUSTERING OF FEATURE VECTORS",
                        "Using the variance minimization algorithm (Ward, 1963)",
                        "Phylogenetic language trees generated with translated text",
                        "Romanian Lithuanian Portuguese Czech Slovak Bulgarian Latvian Polish Slovenian",
                        "Slovak Lithuanian Latvian Bulgarian Romanian Slovenian Portuguese Polish Czech",
                        "ENGLISH translations FRENCH translations"
                    ],
                    "page_nums": [
                        4,
                        6
                    ],
                    "images": []
                },
                "4": {
                    "title": "Identification of translationese and its source language",
                    "text": [
                        "MATRIX source-language classification (POS-trigrams)"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": [
                        "figure/image/1193-Figure2-1.png"
                    ]
                },
                "8": {
                    "title": "Summary",
                    "text": [
                        "Translation does not distorts the original text randomly",
                        "A phylogenetic language tree can be reconstructed from monolingual texts translated from various languages",
                        "Features associated with interference (POS-ngrams, FWs) yield more accurate phylogenetic language trees",
                        "Translations impact the evolution of languages",
                        "It is estimated that for certain languages up to 30% of published texts are mediated through translations (Pym and Chrupaa, 2005)",
                        "Are translations likely to play a role in language change?"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "9": {
                    "title": "Starting from the end",
                    "text": [
                        "phylogenetic tree reconstructed from monolingual English texts translated from",
                        "17 IE languages phylogenetic tree reconstructed from monolingual French texts translated indirectly from 17 IE languages via English pivot",
                        "Danish Romanian Lithuanian Portuguese Czech Slovak Bulgarian Latvian Polish Slovenian",
                        "English Slovak Lithuanian Latvian Bulgarian Romanian Slovenian Portuguese Polish Czech"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                }
            },
            "paper_title": "Found in Translation: Reconstructing Phylogenetic Language Trees from Translations",
            "paper_id": "1193",
            "paper": {
                "title": "Found in Translation: Reconstructing Phylogenetic Language Trees from Translations",
                "abstract": "Translation has played an important role in trade, law, commerce, politics, and literature for thousands of years. Translators have always tried to be invisible; ideal translations should look as if they were written originally in the target language. We show that traces of the source language remain in the translation product to the extent that it is possible to uncover the history of the source language by looking only at the translation. Specifically, we automatically reconstruct phylogenetic language trees from monolingual texts (translated from several source languages). The signal of the source language is so powerful that it is retained even after two phases of translation. This strongly indicates that source language interference is the most dominant characteristic of translated texts, overshadowing the more subtle signals of universal properties of translation.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Translation has played a major role in human civilization since the rise of law, religion, and trade in multilingual societies."
                    },
                    {
                        "id": 1,
                        "string": "Evidence of scribe translations goes as far back as four millennia ago, to the time of Hammurabi; this practice is also mentioned in the Bible (Esther 1:22; 8:9)."
                    },
                    {
                        "id": 2,
                        "string": "For thousands of years, translators have tried to remain invisible, setting a standard according to which the act of translation should be seamless, and its product should look as if it were written originally in the target language."
                    },
                    {
                        "id": 3,
                        "string": "Cicero (106-43 BC) commented on his translation ethics, \"I did not hold it necessary to render word for word, but I preserved the general style and force of the language.\""
                    },
                    {
                        "id": 4,
                        "string": "These words were echoed 500 years later by St. Jerome , also known as the patron saint of translators, who wrote, \"I render, not word for word, but sense for sense.\""
                    },
                    {
                        "id": 5,
                        "string": "Translator tendency for invisibility has peaked in the past 150 years in the English speaking world (Venuti, 2008) , in spite of some calls for \"foreignization\" in translations, e.g., the German Romanticists, especially the translations from Greek by Friedrich Hölderlin (Steiner, 1975) and Nabokov's translation of Eugene Onegin."
                    },
                    {
                        "id": 6,
                        "string": "These, however, as both Steiner (1975) and Venuti (2008) argue, are the exception to the rule."
                    },
                    {
                        "id": 7,
                        "string": "In fact, in recent years, the quality of translations has been standardized (ISO 17100)."
                    },
                    {
                        "id": 8,
                        "string": "Importantly, the translations we studied in our work conform to this standard."
                    },
                    {
                        "id": 9,
                        "string": "Despite the continuous efforts of translators, translations are known to feature unique characteristics that set them apart from non-translated texts, referred to as originals here (Toury, 1980 (Toury, , 1995 Frawley, 1984; Baker, 1993) ."
                    },
                    {
                        "id": 10,
                        "string": "This is not the result of poor translation, but rather a statistical phenomenon: various features distribute differently in originals than in translations (Gellerstam, 1986) ."
                    },
                    {
                        "id": 11,
                        "string": "Several factors may account for the differences between originals and translations; many are classified as universal features of translation."
                    },
                    {
                        "id": 12,
                        "string": "Cognitively speaking, all translations, regardless of the source and target language, are susceptible to the same constraints."
                    },
                    {
                        "id": 13,
                        "string": "Therefore, translation products are expected to share similar artifacts."
                    },
                    {
                        "id": 14,
                        "string": "Such universals include simplification: the tendency to make complex source structures simpler in the target (Blum-Kulka and Levenston, 1983; Vanderauwerea, 1985) ; standardization: the tendency to over-conform to target language standards (Toury, 1995) ; and explicitation: the tendency to render implicit source structures more explicit in the target language (Blum-Kulka, 1986; Øverås, 1998) ."
                    },
                    {
                        "id": 15,
                        "string": "In contrast to translation universals, interference reflects the \"fingerprints\" of the source lan-guage on the translation product."
                    },
                    {
                        "id": 16,
                        "string": "Toury (1995) defines interference as \"phenomena pertaining to the make-up of the source text tend to be transferred to the target text\"."
                    },
                    {
                        "id": 17,
                        "string": "Interference, by definition, is a language-pair specific phenomenon; isomorphic structures shared by the source and target languages can easily replace one another, thereby manifesting the underlying process of cross-linguistic influence of the source language on the translation outcome."
                    },
                    {
                        "id": 18,
                        "string": "Pym (2008) points out that interference is a set of both segmentational and macrostructural features."
                    },
                    {
                        "id": 19,
                        "string": "Our main hypothesis is that, due to interference, languages with shared isomorphic structures are likely to share more features in the target language of a translation."
                    },
                    {
                        "id": 20,
                        "string": "Consequently, the distance between two languages, when assessed using such features, can be retained to some extent in translations from these two languages to a third one."
                    },
                    {
                        "id": 21,
                        "string": "Furthermore, we hypothesize that by extracting structures from translated texts, we can generate a phylogenetic tree that reflects the \"true\" distances among the source languages."
                    },
                    {
                        "id": 22,
                        "string": "Finally, we conjecture that the quality of such trees will improve when constructed using features that better correspond to interference phenomena, and will deteriorate using more universal features of translation."
                    },
                    {
                        "id": 23,
                        "string": "The main contribution of this paper is thus the demonstration that interference phenomena in translation are powerful to an extent that facilitates clustering source languages into families and (partially) reconstructing intra-families ties; so much so, that these results hold even after two rounds of translation."
                    },
                    {
                        "id": 24,
                        "string": "Moreover, we perform analysis of various linguistic phenomena in the source languages, laying out quantitative grounds for the language typology reconstruction results."
                    },
                    {
                        "id": 25,
                        "string": "Related work A number of works in historical linguistics have applied methods from the field of bioinformatics, in particular algorithms for generating phylogenetic trees (Ringe et al., 2002; Nakhleh et al., 2005a,b; Ellison and Kirby, 2006; Boc et al., 2010) ."
                    },
                    {
                        "id": 26,
                        "string": "Most of them rely on lists of cognates, words in multiple languages with a common origin that share a similar meaning and a similar pronunciation (Dyen et al., 1992; ."
                    },
                    {
                        "id": 27,
                        "string": "These works all rely on multilingual data, whereas we construct phylogenetic trees from texts in a single language."
                    },
                    {
                        "id": 28,
                        "string": "The claim that translations exhibit unique properties is well established in translation studies literature (Toury, 1980; Frawley, 1984; Baker, 1993; Toury, 1995) ."
                    },
                    {
                        "id": 29,
                        "string": "Based on this assumption, several works use text classification techniques employing supervised, and recently also unsupervised, machine learning approaches, to distinguish between originals and translations (Baroni and Bernardini, 2006; Ilisei et al., 2010; Koppel and Ordan, 2011; Volansky et al., 2015; Avner et al., 2016) ."
                    },
                    {
                        "id": 30,
                        "string": "The features used in these studies reflect both universal and interference-related traits."
                    },
                    {
                        "id": 31,
                        "string": "Along the way, interference was proven to be a robust phenomenon, operating in every single sentence, even on the morpheme level (Avner et al., 2016) ."
                    },
                    {
                        "id": 32,
                        "string": "Interference can also be studied on pairs of source-and target languages and focus, for example, on word order (Eetemadi and Toutanova, 2014) ."
                    },
                    {
                        "id": 33,
                        "string": "The powerful signal of interference is evident, e.g., by the finding that a classifier trained to distinguish between originals and translations from one language, exhibits lower accuracy when tested on translations from another language, and this accuracy deteriorates proportionally to the distance between the source and target languages (Koppel and Ordan, 2011) ."
                    },
                    {
                        "id": 34,
                        "string": "Consequently, it is possible to accurately distinguish among translations from various source languages (van Halteren, 2008) ."
                    },
                    {
                        "id": 35,
                        "string": "A related task, identifying the native tongue of English language students based only on their writing in English, has been the subject of recent interest (Tetreault et al., 2013) ."
                    },
                    {
                        "id": 36,
                        "string": "The relations between this task and identification of the source language of translation has been emphazied, e.g., by Tsvetkov et al."
                    },
                    {
                        "id": 37,
                        "string": "(2013) ."
                    },
                    {
                        "id": 38,
                        "string": "English texts produced by native speakers of a variety of languages have been used to reconstruct phylogenetic trees, with varying degrees of success (Nagata and Whittaker, 2013; Berzak et al., 2014) ."
                    },
                    {
                        "id": 39,
                        "string": "In contrast to language learners, however, translators translate into their mother tongue, so the texts we studied were written by highly competent native speakers."
                    },
                    {
                        "id": 40,
                        "string": "Our work is the first to construct phylogenetic trees from translations."
                    },
                    {
                        "id": 41,
                        "string": "Methodology Dataset This corpus-based study uses Europarl (Koehn, 2005) , the proceedings of the European Parliament and their translations into all the official Eu-ropean Union (EU) languages."
                    },
                    {
                        "id": 42,
                        "string": "Europarl is one of the most popular parallel resources in natural language processing, and has been used extensively in machine translation."
                    },
                    {
                        "id": 43,
                        "string": "We use a version of Europarl spanning the years 1999 through 2011, in which the direction of translation has been established through a comprehensive cross-lingual validation of the speakers' original language ."
                    },
                    {
                        "id": 44,
                        "string": "All parliament speeches were translated 1 from the original language into all other EU languages (21 at the time) using English as an intermediate, pivot language."
                    },
                    {
                        "id": 45,
                        "string": "We thus refer to translations into English as direct, while translations into all other languages, via English as a third language, are indirect."
                    },
                    {
                        "id": 46,
                        "string": "We hypothesize that indirect translation will obscure the markers of the original language in the final translation."
                    },
                    {
                        "id": 47,
                        "string": "Nevertheless, we expect (weakened) fingerprints of the source language to be identifiable in the target despite the pivot, presumably resulting in somewhat poorer phylogenetic trees."
                    },
                    {
                        "id": 48,
                        "string": "We focus on 17 source languages, grouped into 3 language families: Germanic, Romance, and Balto-Slavic."
                    },
                    {
                        "id": 49,
                        "string": "2 These include translations to English and to French from Bulgarian (BG), Czech (CS), Danish (DA), Dutch (NL), English (EN), French (FR), German (DE), Italian (IT), Latvian (LV), Lithuanian (LT), Polish (PL), Portuguese (PT), Romanian (RO), Slovak (SK), Slovenian (SL), Spanish (ES), and Swedish (SV)."
                    },
                    {
                        "id": 50,
                        "string": "We also included texts written originally in English and French."
                    },
                    {
                        "id": 51,
                        "string": "All datasets were split on sentence boundary, cleaned (empty lines removed), tokenized, and annotated for part-of-speech (POS) using the Stanford tools (Manning et al., 2014) ."
                    },
                    {
                        "id": 52,
                        "string": "In all the tree reconstruction experiments, we sampled equal-sized chunks from each source language, using as much data as available for all languages."
                    },
                    {
                        "id": 53,
                        "string": "This yielded 27, 000 tokens from translations to English, and 30, 000 tokens from translations into French."
                    },
                    {
                        "id": 54,
                        "string": "1 The common practice is that one translates into one's native language; in particular, this practice is strictly imposed in the EU parliament where a translator must have perfect proficiency in the target language, meeting very high standards of accuracy."
                    },
                    {
                        "id": 55,
                        "string": "2 We excluded source languages with insufficient amounts of data, along with Greek, which is the only representative of the Hellenic family."
                    },
                    {
                        "id": 56,
                        "string": "Features Following standard practice (Volansky et al., 2015; , we represented both original and translated texts as feature vectors, where the choice of features determines the extent to which we expect sourcelanguage interference to be present in the translation product."
                    },
                    {
                        "id": 57,
                        "string": "Crucially, the features abstract away from the contents of the texts and focus on their structure, reflecting, among other things, morphological and syntactic patterns."
                    },
                    {
                        "id": 58,
                        "string": "We use the following feature sets: 1."
                    },
                    {
                        "id": 59,
                        "string": "The top-1,000 most frequent POS trigrams, reflecting shallow syntactic structure."
                    },
                    {
                        "id": 60,
                        "string": "2."
                    },
                    {
                        "id": 61,
                        "string": "Function words (FW), words known to reflect grammar of texts in numerous classification tasks, as they include non-content words such as articles, prepositions, etc."
                    },
                    {
                        "id": 62,
                        "string": "(Koppel and Ordan, 2011) ."
                    },
                    {
                        "id": 63,
                        "string": "3 3."
                    },
                    {
                        "id": 64,
                        "string": "Cohesive markers (Hinkel, 2001) ; these words and phrases are assumed to be overrepresented in translated texts, where, for example, an implicit contrast in the original is made explicit in the target text with words such as 'but' or 'however'."
                    },
                    {
                        "id": 65,
                        "string": "4 Note that the first two feature sets are strongly associated with interference, whereas the third is assumed to be universal and an instance of explicitation."
                    },
                    {
                        "id": 66,
                        "string": "We therefore expect trees based on the first two feature sets to be much better than those based on the third."
                    },
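To make the first feature set concrete, here is a minimal sketch of extracting top-k POS-trigram frequencies, as described in the list above. This is my illustration rather than the authors' code; the function names and the relative-frequency vector representation are assumptions.

```python
from collections import Counter

def top_pos_trigrams(tagged_corpus, top_k=1000):
    """tagged_corpus: iterable of sentences, each a list of POS tags."""
    counts = Counter()
    for tags in tagged_corpus:
        counts.update(zip(tags, tags[1:], tags[2:]))
    # The top_k most frequent trigrams become the feature vocabulary.
    return [trigram for trigram, _ in counts.most_common(top_k)]

def trigram_vector(tags, vocabulary):
    """Relative frequency of each vocabulary trigram in one text chunk."""
    counts = Counter(zip(tags, tags[1:], tags[2:]))
    total = max(sum(counts.values()), 1)
    return [counts[trigram] / total for trigram in vocabulary]
```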
                    {
                        "id": 67,
                        "string": "The Indo-European phylogenetic tree The last few decades produced a large body of research on the evolution of individual languages and language families."
                    },
                    {
                        "id": 68,
                        "string": "While the existence of the Indo-European (IE) family of languages is an established fact, its history and origins are still a matter of much controversy (Pereltsvaig and Lewis, 2015) ."
                    },
                    {
                        "id": 69,
                        "string": "Furthermore, the actual subgroupings of languages within this family are not clear-cut (Ringe et al., 2002) ."
                    },
                    {
                        "id": 70,
                        "string": "Consequently, algorithms that attempt to reconstruct the IE languages tree face a serious evaluation challenge (Ringe et al., 2002; Nakhleh et al., 2005a) ."
                    },
                    {
                        "id": 71,
                        "string": "To evaluate the quality of the reconstructed trees, we define a metric to accurately assess their distance from the \"true\" tree."
                    },
                    {
                        "id": 72,
                        "string": "The tree that we use as ground truth (Serva and Petroni, 2008) has several advantages."
                    },
                    {
                        "id": 73,
                        "string": "First, it is similar to a wellaccepted tree (Gray and Atkinson, 2003 ) (which is not insusceptible to criticism (Pereltsvaig and Lewis, 2015) )."
                    },
                    {
                        "id": 74,
                        "string": "The differences between the two are mostly irrelevant for the group of languages that we address in this research."
                    },
                    {
                        "id": 75,
                        "string": "Second, it is a binary tree, facilitating comparison with the trees we produce, which are also binary branching."
                    },
                    {
                        "id": 76,
                        "string": "Third, its branches are decorated with the approximate year in which splitting occurred."
                    },
                    {
                        "id": 77,
                        "string": "This provides a way to induce the distance between two languages, modeled as lengths of paths in the tree, based on chronological information."
                    },
                    {
                        "id": 78,
                        "string": "We projected the gold tree (Serva and Petroni, 2008) onto the set of 17 languages we considered in this work, preserving branch lengths."
                    },
                    {
                        "id": 79,
                        "string": "Figure 1 depicts the resulting gold-standard subtree."
                    },
                    {
                        "id": 80,
                        "string": "We reconstructed phylogenetic language trees by performing agglomerative (hierarchical) clustering of feature vectors extracted separately from English and French translations."
                    },
                    {
                        "id": 81,
                        "string": "We performed clustering using the variance minimization algorithm (Ward Jr, 1963) with Euclidean distance (the implementation available in the Python SciPy library)."
                    },
                    {
                        "id": 82,
                        "string": "All feature values were normalized to a zero-one scale prior to clustering."
                    },
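The clustering step above can be sketched directly with SciPy; the snippet below is a minimal illustration under assumed placeholder data (one feature vector per source language), not the authors' script.

```python
import numpy as np
from scipy.cluster.hierarchy import ward, dendrogram

# Placeholder data: one row of normalized features per source language.
languages = ["BG", "CS", "DA", "NL", "EN", "FR", "DE", "IT", "LV",
             "LT", "PL", "PT", "RO", "SK", "SL", "ES", "SV"]
feature_vectors = np.random.rand(len(languages), 1000)

# Normalize every feature to a zero-one scale, as described above.
mins, maxs = feature_vectors.min(axis=0), feature_vectors.max(axis=0)
normalized = (feature_vectors - mins) / np.where(maxs > mins, maxs - mins, 1.0)

# Ward's variance-minimization linkage over Euclidean distances.
linkage_matrix = ward(normalized)
tree = dendrogram(linkage_matrix, labels=languages, no_plot=True)
print(tree["ivl"])  # leaf order of the resulting binary tree
```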
                    {
                        "id": 83,
                        "string": "Evaluation methodology To evaluate the quality of the trees we generate, we compute their similarity to the gold standard via two metrics: unweighted, assessing only structural (topological) similarity, and weighted, estimating similarity based on both structure and branching length."
                    },
                    {
                        "id": 84,
                        "string": "Several methods have been proposed for evaluating the quality of phylogenetic language trees (Pompei et al., 2011; Wichmann and Grant, 2012; Nouri and Yangarber, 2016) ."
                    },
                    {
                        "id": 85,
                        "string": "A popular metric is the Robinson-Foulds (RF) methodology (Robinson and Foulds, 1981) , which is based on the symmetric difference in the number of bi-partitions, the ways in which an edge can split the leaves of a tree into two sets."
                    },
                    {
                        "id": 86,
                        "string": "The distance between two trees is then defined as the number of splits induced by one of the trees, but not the other."
                    },
                    {
                        "id": 87,
                        "string": "Despite its popularity, the RF metric has well-known shortcomings; for example, relocating a single leaf can result in a tree maximally distant from the original one (Böcker et al., 2013) ."
                    },
                    {
                        "id": 88,
                        "string": "Additional methodologies for evaluating phylogenetic trees include branch score distance (Kuhner and Felsenstein, 1994) , enhancing RF with branch lengths, purity score (Heller and Ghahramani, 2005) , and subtree score (Teh et al., 2009 )."
                    },
                    {
                        "id": 89,
                        "string": "The latter two ignore branch lengths and only consider structural similarities for evaluation."
                    },
                    {
                        "id": 90,
                        "string": "We opted for a simple yet powerful adaptation of the L2-norm to leaf-pair distance, inherently suitable for both unweighted and weighted evaluation."
                    },
                    {
                        "id": 91,
                        "string": "Given a tree of N leaves, l i , i ∈ [1..N ], the weighted distance between two leaves l i , l j in a tree τ , denoted D τ (l i , l j ), is the sum of the weights of all edges on the shortest path between l i and l j ."
                    },
                    {
                        "id": 92,
                        "string": "The unweighted distance sums up the number of the edges in this path (i.e., all weights are equal to 1)."
                    },
                    {
                        "id": 93,
                        "string": "The distance Dist(τ, g) between a generated tree τ and the gold tree g is then calculated by summing the square differences between all leafpair distances (whether weighted or unweighted) in the two trees: (Baroni and Bernardini, 2006; van Halteren, 2008; Volansky et al., 2015) to the 16 original languages considered in this work."
                    },
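A minimal sketch of the tree-distance metric Dist(τ, g) defined above; it assumes leaf-pair path lengths have already been extracted from each tree into dicts keyed by unordered leaf pairs (the dict representation is my assumption). Iterating over unordered pairs halves the double sum in the definition, a constant factor that does not affect tree comparison.

```python
from itertools import combinations

def tree_distance(d_tau, d_gold, leaves):
    """Sum of squared differences over all leaf-pair distances.

    d_tau / d_gold map frozenset({leaf_a, leaf_b}) to the length of the
    shortest path between the two leaves: edge weights summed for the
    weighted variant, edge counts for the unweighted one.
    """
    return sum(
        (d_tau[frozenset((a, b))] - d_gold[frozenset((a, b))]) ** 2
        for a, b in combinations(leaves, 2)
    )
```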
                    {
                        "id": 94,
                        "string": "We also conducted similar experiments with French originals and translations."
                    },
                    {
                        "id": 95,
                        "string": "We used 200 chunks of approximately 2K tokens (respecting sentence boundaries) from both O and T, and normalized the values of lexical features by the number of tokens in each chunk."
                    },
                    {
                        "id": 96,
                        "string": "For classification, we used Platt's sequential minimal optimization algorithm (Keerthi et al., 2001; Hall et al., 2009) to train support vector machine classifiers with the default linear kernel."
                    },
                    {
                        "id": 97,
                        "string": "We evaluated the results with 10-fold cross-validation."
                    },
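The classifiers above are trained with Weka's SMO implementation; a roughly analogous setup in scikit-learn (my substitution, with placeholder data, not the authors' tooling) would be:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholders: X holds per-chunk feature vectors (lexical counts
# normalized by chunk length); y marks originals (0) vs. translations (1).
X = np.random.rand(400, 1000)
y = np.array([0] * 200 + [1] * 200)

clf = SVC(kernel="linear")                  # default linear kernel
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```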
                    {
                        "id": 98,
                        "string": "Table 1 presents the classification accuracy of (English and French) O vs. T using each feature set."
                    },
                    {
                        "id": 99,
                        "string": "In line with previous works (Ilisei et al., 2010; Volansky et al., 2015; , the binary classification results are highly accurate, achieving over 95% accuracy using POS-trigrams and function words for both English and French, and above 85% using cohesive markers."
                    },
                    {
                        "id": 100,
                        "string": "Dist(τ, g) = i,j∈[1..N ];i =j (D τ (l i , l j ) − D g (l i , l j )) 2 Feature English Identification of source language Identifying the source language of translated texts is a task in which machines clearly outperform humans (Baroni and Bernardini, 2006) ."
                    },
                    {
                        "id": 101,
                        "string": "Koppel and Ordan (2011) performed 5-way classification of texts translated from Italian, French, Spanish, German, and Finnish, achieving an accuracy of 92.7%."
                    },
                    {
                        "id": 102,
                        "string": "Furthermore, misclassified instances were more frequently assigned to genetically related languages."
                    },
                    {
                        "id": 103,
                        "string": "We extended this experiment to 14 languages representing 3 language families (the number of languages was limited by the amount of data available)."
                    },
                    {
                        "id": 104,
                        "string": "We extracted 100 chunks of 1,000 tokens each from each source language and classified the translated English (and, separately, French) texts into 14 classes using the best performing POStrigrams feature set."
                    },
                    {
                        "id": 105,
                        "string": "Cross-validation evaluation yielded an accuracy of 75.61% on English translations (note that the baseline is 100/14 = 7.14%)."
                    },
                    {
                        "id": 106,
                        "string": "The corresponding confusion matrix, presented in Figure 2 (left), reveals interesting phenomena: much of the confusion resides within language families, framed by the bold line in the figure."
                    },
                    {
                        "id": 107,
                        "string": "For example, instances of Germanic languages are almost perfectly classified as Germanic, with only a few chunks assigned to other language families."
                    },
                    {
                        "id": 108,
                        "string": "The evident intra-family linguistic ties exposed by this experiment support the intuition that cross-linguistic transfer in translation is governed by typological properties of the source language."
                    },
                    {
                        "id": 109,
                        "string": "That is, translations from related sources tend to resemble each other to a greater extent than translations from more distant languages."
                    },
                    {
                        "id": 110,
                        "string": "This observation is further supported by the evaluation of a three-way classification task, where the goal is to only identify the language family (Germanic, Romance, or Balto-Slavic): the accuracy of this task is 90.62%."
                    },
                    {
                        "id": 111,
                        "string": "Note also that the mis-classified instances of both Romance and Germanic languages are nearly never attributed to Balto-Slavic languages, since Germanic and Romance are much closer to each other than to Balto-Slavic."
                    },
                    {
                        "id": 112,
                        "string": "Figure 2 (right) displays a similar confusion matrix, the only difference being that French translations are classified."
                    },
                    {
                        "id": 113,
                        "string": "We attribute the lower cross-validation accuracy (48.92%, reflected also by the lower number of correctly assigned instances on the matrix diagonal, compared to English) to the intervention of the pivot language in the translation process."
                    },
                    {
                        "id": 114,
                        "string": "Nevertheless, the confusion is still mainly constrained to intra-family boundaries."
                    },
                    {
                        "id": 115,
                        "string": "Reconstruction of Phylogenetic Language Trees Reconstructing language typology Inspired by the results reported in Section 4.2, we generated phylogenetic language trees from both English and French texts translated from the other European languages."
                    },
                    {
                        "id": 116,
                        "string": "We hypothesized that interference from the source language was present in the translation product to an extent that would facilitate the construction of a tree sufficiently similar to the gold IE tree (Figure 1) ."
                    },
                    {
                        "id": 117,
                        "string": "The best trees, those closest to the gold standard, were generated using POS-trigrams: these are the features that are most closely associated with source-language interference (see Section 3.2)."
                    },
                    {
                        "id": 118,
                        "string": "Figure 3 depicts the trees produced from English and French translations using POStrigrams."
                    },
                    {
                        "id": 119,
                        "string": "Both trees reasonably group individual languages into three language-family branches."
                    },
                    {
                        "id": 120,
                        "string": "In particular, they cluster the Germanic and Romance languages closer than the Balto-Slavic."
                    },
                    {
                        "id": 121,
                        "string": "Capturing the more subtle intra-family ties turned out to be Figure 3 : Phylogenetic language trees generated with English (left) and French (right) translations more challenging, although English outperformed its French counterpart on this task by almost perfectly reconstructing the Germanic sub-tree."
                    },
                    {
                        "id": 122,
                        "string": "We repeated the clustering experiments with various feature sets."
                    },
                    {
                        "id": 123,
                        "string": "For each feature set, we randomly sampled equally-sized subsets of the dataset (translated from each of the source languages), represented the data as feature vectors, generated a tree by clustering the feature vectors, and then computed the weighted and unweighted distances between the generated tree and the gold standard."
                    },
                    {
                        "id": 124,
                        "string": "We repeated this procedure 50 times for each feature set, and then averaged the resulting distances."
                    },
                    {
                        "id": 125,
                        "string": "We report this average and the standard deviation."
                    },
                    {
                        "id": 126,
                        "string": "5 Evaluation results The unweighted evaluation results are listed in Table 2 ."
                    },
                    {
                        "id": 127,
                        "string": "For comparison, we also present the distance obtained for a random tree, generated by sampling a random distance matrix from the uniform (0, 1) distribution."
                    },
                    {
                        "id": 128,
                        "string": "The reported random tree evaluation score is averaged over 1000 experiments."
                    },
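A minimal sketch of this random baseline; the text does not state which linkage is applied to the random matrix, so average linkage below is an assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng()
n = 17                                  # one leaf per source language
M = rng.uniform(0.0, 1.0, size=(n, n))  # random distance matrix
M = (M + M.T) / 2.0                     # symmetrize
np.fill_diagonal(M, 0.0)                # zero self-distances
Z = linkage(squareform(M), method="average")  # random baseline tree
```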
                    {
                        "id": 129,
                        "string": "Similarly, we present weighted evaluation results in Table 3 ."
                    },
                    {
                        "id": 130,
                        "string": "All distances are normalized to a zero-one scale, where the bounds -zero and one -represent the identical and the most distant tree w.r.t."
                    },
                    {
                        "id": 131,
                        "string": "the gold standard, respectively."
                    },
                    {
                        "id": 132,
                        "string": "The results reveal several interesting observations."
                    },
                    {
                        "id": 133,
                        "string": "First, as expected, POS-trigrams induce the distance between two nodes), can be found at http:// cl.haifa.ac.il/projects/translationese/ acl2017_found-in-translation_trees.pdf Table 3 : Weighted evaluation of generated trees."
                    },
                    {
                        "id": 134,
                        "string": "AVG represents the average distance of a tree from the gold standard."
                    },
                    {
                        "id": 135,
                        "string": "The lowest distance in a column is boldfaced."
                    },
                    {
                        "id": 136,
                        "string": "trees closest to the gold standard among distinct feature sets."
                    },
                    {
                        "id": 137,
                        "string": "This corroborates our hypothesis that this feature set carries over interference of the source language to a considerable extent (see Section 1)."
                    },
                    {
                        "id": 138,
                        "string": "Furthermore, function words achieve more moderate results, but still much better than random."
                    },
                    {
                        "id": 139,
                        "string": "This reflects the fact that these features carry over some grammatical constructs of the source language into the translation product."
                    },
                    {
                        "id": 140,
                        "string": "Finally, in all cases, the least accurate tree, nearly random, is produced by cohesive markers; this is an evidence that this feature is sourcelanguage agnostic and reflects the universal effect of explicitation (see Section 3.2)."
                    },
                    {
                        "id": 141,
                        "string": "While cohesive markers are a good indicator of translations, they reflect properties that are not indicative of the source language."
                    },
                    {
                        "id": 142,
                        "string": "The combination of POS-trigrams and FW yields the best tree in three out of four cases, implying that these feature sets capture different, complementary aspects of the source-language interference."
                    },
                    {
                        "id": 143,
                        "string": "Surprisingly, reasonably good trees were also generated from French translations; yet, these trees are systematically worse than their English counterparts."
                    },
                    {
                        "id": 144,
                        "string": "The original signal of the source language is distorted twice: first via a Germanic language (English) and then via a Romance language (French)."
                    },
                    {
                        "id": 145,
                        "string": "However, the signal is strong enough to yield a clear phylogenetic tree of the source languages."
                    },
                    {
                        "id": 146,
                        "string": "Interference is thus revealed to be an extremely powerful force, partially resistant to intermediate distortions."
                    },
                    {
                        "id": 147,
                        "string": "Analysis We demonstrated that source-language traces are dominant in translation products to an extent that facilitates reconstruction of the history of the source languages."
                    },
                    {
                        "id": 148,
                        "string": "We now inspect some of these phenomena in more detail to better understand the prominent characteristics of interference."
                    },
                    {
                        "id": 149,
                        "string": "For each phenomenon, we computed the frequencies of patterns that reflect it in texts translated to English from each individual language, and averaged the measures over each language family (Germanic, Romance, and Balto-Slavic)."
                    },
                    {
                        "id": 150,
                        "string": "Figure 4 depicts the results."
                    },
                    {
                        "id": 151,
                        "string": "Definite articles Languages vary greatly in their use of articles."
                    },
                    {
                        "id": 152,
                        "string": "Like other Germanic languages, English has both definite ('a' ) and indefinite ('the' ) articles."
                    },
                    {
                        "id": 153,
                        "string": "However, many languages only have definite articles and some only have indefinite articles."
                    },
                    {
                        "id": 154,
                        "string": "Romance languages, and in particular the five Romance languages of our dataset, have definite articles that can sometimes be omitted, but not as commonly as in English."
                    },
                    {
                        "id": 155,
                        "string": "Balto-Slavic languages typically do not have any articles."
                    },
                    {
                        "id": 156,
                        "string": "Mastering the use of articles in English is notoriously hard, leading to errors in non-native speakers (Han et al., 2006) ."
                    },
                    {
                        "id": 157,
                        "string": "For example, native speakers of Slavic languages tend to overuse definite articles in German (Hirschmann et al., 2013) ."
                    },
                    {
                        "id": 158,
                        "string": "Similarly, we expect translations from Balto-Slavic languages to overuse 'the'."
                    },
                    {
                        "id": 159,
                        "string": "We computed the frequencies of 'the' in translations to English from each of the three language families."
                    },
                    {
                        "id": 160,
                        "string": "The results show a significant overuse of 'the' in translations from Balto-Slavic languages, and some overuse in translations from Romance languages."
                    },
                    {
                        "id": 161,
                        "string": "Possessive constructions Languages also vary in the way they mark possession."
                    },
                    {
                        "id": 162,
                        "string": "English marks it in three ways: with the clitic ''s' ('the guest's room' ), with a prepositional phrase containing 'of' ('the room of the guest' ), and, like in other Germanic languages, with noun compounds ('guest room' )."
                    },
                    {
                        "id": 163,
                        "string": "Compounds are considerably less frequent in Romance languages Romance Balto-Slavic Figure 4 : Frequencies reflecting various linguistic phenomena (Sections 6.1-6.4) in English translations (Swan and Smith, 2001) ; Balto-Slavic indicates possession using case-marking."
                    },
                    {
                        "id": 164,
                        "string": "Languages also vary with respect to whether or not possession is head-marked."
                    },
                    {
                        "id": 165,
                        "string": "In Balto-Slavic languages, the genitive case is head-marked, which reverses the order of the two nouns with respect to the common English ''s' construction."
                    },
                    {
                        "id": 166,
                        "string": "Since copying word order, if possible across languages, is one of the major features of interference (Eetemadi and Toutanova, 2014) , we anticipated that Balto-Slavic languages will exhibit the highest rate of noun-'of' -NP constructions."
                    },
                    {
                        "id": 167,
                        "string": "This would be followed by Romance languages, in which this construction is highly common, and then by Germanic languages, where noun compounds can often be copied as such."
                    },
                    {
                        "id": 168,
                        "string": "The results are consistent with our expectations."
                    },
                    {
                        "id": 169,
                        "string": "Verb-particle constructions Verb-particle constructions (e.g., 'turn down' ) consist of verbs that combine with a particle to create a new meaning (Dehé et al., 2002) ."
                    },
                    {
                        "id": 170,
                        "string": "Such constructions are much more common in Germanic languages (Iacobini and Masini, 2005) , hence we expect to encounter their equivalents in English translations more frequently."
                    },
                    {
                        "id": 171,
                        "string": "We computed the frequencies of these constructions in the data; the results show a clear overuse of verb-particle constructions in translations from Germanic, and an underuse of such constructions in translations from Balto-Slavic."
                    },
                    {
                        "id": 172,
                        "string": "Tense and aspect Tense and aspect are expressed in different ways across languages."
                    },
                    {
                        "id": 173,
                        "string": "English, like other Germanic languages, uses a full system of aspectual distinctions, expressed via perfect and progressive forms (with the auxiliary verbs 'have' or 'be' )."
                    },
                    {
                        "id": 174,
                        "string": "Balto-Slavic, in contrast, has no such system, and the distinction is marked lexically, by having two types of verbs."
                    },
                    {
                        "id": 175,
                        "string": "Romance languages are in between, with both lexical and grammatical distinctions."
                    },
                    {
                        "id": 176,
                        "string": "We computed the frequencies of perfect forms (defined as the auxiliary 'have' followed by the past participle form), and the progressive forms (defined as the auxiliary 'be' plus a present participle form)."
                    },
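A minimal sketch of these counts, assuming Penn-Treebank-style (token, tag) pairs from the POS annotation; reading 'past participle' as VBN and 'present participle' as VBG is my assumption about the tagset.

```python
HAVE = {"have", "has", "had", "having"}
BE = {"am", "is", "are", "was", "were", "be", "been", "being"}

def aspect_frequencies(tagged_tokens):
    """tagged_tokens: list of (word, POS) pairs for one text."""
    perfect = progressive = 0
    for (w1, _), (_, t2) in zip(tagged_tokens, tagged_tokens[1:]):
        if w1.lower() in HAVE and t2 == "VBN":   # 'have' + past participle
            perfect += 1
        elif w1.lower() in BE and t2 == "VBG":   # 'be' + present participle
            progressive += 1
    total = max(len(tagged_tokens), 1)
    return perfect / total, progressive / total
```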
                    {
                        "id": 177,
                        "string": "Indeed, Germanic overuses the perfect aspect significantly; the use of the progressive aspect also varies across language families, exhibiting the lowest frequency in translations from Balto-Slavic."
                    },
                    {
                        "id": 178,
                        "string": "Conclusion Translations may be considered distortions of the original text, but this distortion is far from random."
                    },
                    {
                        "id": 179,
                        "string": "It depicts a very clear picture, reflecting language typology to the extent that disregarding the sources altogether, a phylogenetic tree can be reconstructed from a monolingual corpus consisting of multiple translations."
                    },
                    {
                        "id": 180,
                        "string": "This holds for the product of highly professional translators, who conform to a common standard, and whose products are edited by native speakers, like themselves."
                    },
                    {
                        "id": 181,
                        "string": "It even holds after two phases of translations."
                    },
                    {
                        "id": 182,
                        "string": "We are presently trying to extend these results to translations in a different domain (literary texts) into a very different language (Hebrew)."
                    },
                    {
                        "id": 183,
                        "string": "Postulated universals in linguistics (Greenberg, 1963) were confronted with much contradicting evidence in recent years (Evans and Levinson, 2009) , and the long quest for translation universals (Mauranen and Kujamäki, 2004) should now be viewed in light of our finding: more than anything else, translations are typified by interference."
                    },
                    {
                        "id": 184,
                        "string": "This does not undermine the force of translation universals: we demonstrated how explicitation, in the form of cohesive markers, can help identify translations."
                    },
                    {
                        "id": 185,
                        "string": "It may be possible to define classi-fiers implementing other universal facets of translation, e.g., simplification, which will yield good separation between O and T. However, explicitation fails in the reproduction of language typology, whereas interference-based features produce trees of considerable quality."
                    },
                    {
                        "id": 186,
                        "string": "Remarkably, translations to contemporary English and French capture part of the millenniumold history of the source languages from which the translations were made."
                    },
                    {
                        "id": 187,
                        "string": "Our trees reflect some of the historical connections among the languages, but of course they are related in other ways, too (whether incidental, areal, etc.)"
                    },
                    {
                        "id": 188,
                        "string": "."
                    },
                    {
                        "id": 189,
                        "string": "This may explain the case of Romanian in our reconstructed trees: it has been isolated for many years from other Romance languages and was under heavy influence from Balto-Slavic languages."
                    },
                    {
                        "id": 190,
                        "string": "Very little research has been done in historical linguistics on how translations impact the evolvement of languages."
                    },
                    {
                        "id": 191,
                        "string": "The major trends relate to loan translations (Jahr, 1999) , or the impact of canonical texts, such as Luther's translation of the Bible to German (Russ, 1994) or the case of the King James translation to English (Crystal, 2010) ."
                    },
                    {
                        "id": 192,
                        "string": "It has been attested that for certain languages, up to 30% of published materials are mediated through translation (Pym and Chrupała, 2005) ."
                    },
                    {
                        "id": 193,
                        "string": "Given the fingerprints left on target language texts, translations very likely play a role in language change."
                    },
                    {
                        "id": 194,
                        "string": "We leave this as a direction for future research."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 24
                    },
                    {
                        "section": "Related work",
                        "n": "2",
                        "start": 25,
                        "end": 40
                    },
                    {
                        "section": "Dataset",
                        "n": "3.1",
                        "start": 41,
                        "end": 55
                    },
                    {
                        "section": "Features",
                        "n": "3.2",
                        "start": 56,
                        "end": 66
                    },
                    {
                        "section": "The Indo-European phylogenetic tree",
                        "n": "3.3",
                        "start": 67,
                        "end": 82
                    },
                    {
                        "section": "Evaluation methodology",
                        "n": "3.4",
                        "start": 83,
                        "end": 99
                    },
                    {
                        "section": "Identification of source language",
                        "n": "4.2",
                        "start": 100,
                        "end": 114
                    },
                    {
                        "section": "Reconstructing language typology",
                        "n": "5.1",
                        "start": 115,
                        "end": 125
                    },
                    {
                        "section": "Evaluation results",
                        "n": "5.2",
                        "start": 126,
                        "end": 146
                    },
                    {
                        "section": "Analysis",
                        "n": "6",
                        "start": 147,
                        "end": 150
                    },
                    {
                        "section": "Definite articles",
                        "n": "6.1",
                        "start": 151,
                        "end": 160
                    },
                    {
                        "section": "Possessive constructions",
                        "n": "6.2",
                        "start": 161,
                        "end": 168
                    },
                    {
                        "section": "Verb-particle constructions",
                        "n": "6.3",
                        "start": 169,
                        "end": 171
                    },
                    {
                        "section": "Tense and aspect",
                        "n": "6.4",
                        "start": 172,
                        "end": 177
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 178,
                        "end": 194
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1193-Figure3-1.png",
                        "caption": "Figure 3: Phylogenetic language trees generated with English (left) and French (right) translations",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 310.56,
                            "y2": 484.32
                        }
                    },
                    {
                        "filename": "../figure/image/1193-Figure2-1.png",
                        "caption": "Figure 2: Confusion matrix of 14-way classification of English (left) and French (right) translations. The actual class is represented by rows and the predicted one by columns.",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 61.44,
                            "y2": 261.12
                        }
                    },
                    {
                        "filename": "../figure/image/1193-Table2-1.png",
                        "caption": "Table 2: Unweighted evaluation of generated trees. AVG represents the average distance of a tree from the gold standard. The lowest distance in a column is boldfaced.",
                        "page": 6,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 64.32,
                            "y2": 144.96
                        }
                    },
                    {
                        "filename": "../figure/image/1193-Table3-1.png",
                        "caption": "Table 3: Weighted evaluation of generated trees. AVG represents the average distance of a tree from the gold standard. The lowest distance in a column is boldfaced.",
                        "page": 6,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 221.76,
                            "y2": 301.92
                        }
                    },
                    {
                        "filename": "../figure/image/1193-Figure4-1.png",
                        "caption": "Figure 4: Frequencies reflecting various linguistic phenomena (Sections 6.1– 6.4) in English translations",
                        "page": 7,
                        "bbox": {
                            "x1": 75.36,
                            "x2": 524.64,
                            "y1": 63.839999999999996,
                            "y2": 183.84
                        }
                    },
                    {
                        "filename": "../figure/image/1193-Figure1-1.png",
                        "caption": "Figure 1: Gold standard tree, pruned",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 299.03999999999996,
                            "y1": 306.71999999999997,
                            "y2": 491.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1193-Table1-1.png",
                        "caption": "Table 1: Classification accuracy (%) of English and French O vs. T",
                        "page": 4,
                        "bbox": {
                            "x1": 89.75999999999999,
                            "x2": 273.12,
                            "y1": 302.88,
                            "y2": 352.32
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-47"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation Model senses instead of only words",
                    "text": [
                        "He withdrew money from the bank."
                    ],
                    "page_nums": [
                        1,
                        2,
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Idea",
                    "text": [
                        "A word is the surface form of a sense: we can exploit this intrinsic relationship for jointly training word and sense embeddings.",
                        "Updating the representation of associated senses interchangeably. the word and its"
                    ],
                    "page_nums": [
                        9,
                        10
                    ],
                    "images": []
                },
                "3": {
                    "title": "Methodology",
                    "text": [
                        "Given as input a corpus and a semantic network:",
                        "1. Use a semantic network to link to each word its associated senses in context.",
                        "He withdrew money from the bank.",
                        "2. Use a neural network where the update of word and sense embeddings is linked, exploiting virtual connections.",
                        "He withdrew from the bank",
                        "In this way it is possible to learn word and sense/synset embeddings jointly on a single training."
                    ],
                    "page_nums": [
                        11,
                        12,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22
                    ],
                    "images": []
                },
                "4": {
                    "title": "Methodology Linking words and senses in context",
                    "text": [
                        "He withdrew money from the bank",
                        "take out financial institution",
                        "Graph-based representation of the sentence using semantic networks"
                    ],
                    "page_nums": [
                        13,
                        14,
                        15
                    ],
                    "images": []
                },
                "5": {
                    "title": "Methodology Joint training of words and sense embeddings",
                    "text": [
                        "Once each word is connected to its set of senses in context, it is possible to modify standard word embedding architectures to take into account this information.",
                        "In this work we explore the CBOW architecture of Word2Vec",
                        "Other neural network architectures could be explored as well",
                        "(Skip-gram also included in the code)."
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": []
                },
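To make the linked update concrete, here is a minimal, self-contained sketch of a CBOW-style step in which each context word contributes both its word vector and its linked sense vectors, so one backpropagation step updates words and senses together. This is my illustration, not the authors' released code: negative sampling and output-side sense vectors are omitted, and all sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_words, n_senses = 50, 1000, 2000
W = rng.normal(scale=0.1, size=(n_words, dim))   # input word embeddings
S = rng.normal(scale=0.1, size=(n_senses, dim))  # input sense embeddings
O = rng.normal(scale=0.1, size=(n_words, dim))   # output embeddings

def joint_cbow_step(context_words, context_senses, target, lr=0.025):
    """context_senses[i] lists the sense ids linked to context_words[i]."""
    inputs = [W[w] for w in context_words]
    inputs += [S[s] for senses in context_senses for s in senses]
    h = np.mean(inputs, axis=0)                # hidden layer: average input
    p = 1.0 / (1.0 + np.exp(-O[target] @ h))   # sigmoid score, positive pair
    grad_h = (p - 1.0) * O[target]             # gradient of -log p w.r.t. h
    O[target] -= lr * (p - 1.0) * h
    for w in context_words:                    # linked updates: words...
        W[w] -= lr * grad_h / len(inputs)
    for senses in context_senses:              # ...and their senses
        for s in senses:
            S[s] -= lr * grad_h / len(inputs)

# e.g., context words 3 and 7, with senses {12} and {40, 41}, target word 5:
joint_cbow_step([3, 7], [[12], [40, 41]], target=5)
```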
                "6": {
                    "title": "Full architecture of W2V Mikolov et al 2013",
                    "text": [
                        "Words and associated senses used both as input and output."
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": [
                        "figure/image/1195-Figure1-1.png"
                    ]
                },
                "7": {
                    "title": "Full architecture of SW2V this work",
                    "text": [
                        "Words and associated senses used both as input and output."
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": [
                        "figure/image/1195-Figure1-1.png"
                    ]
                },
                "20": {
                    "title": "Conclusion",
                    "text": [
                        "W e presented SW2V: a neural architecture for jointly learning word and sense embeddings in the same vector space using text corpora and knowled ge obtained from semantic networks.",
                        "Fu ture wor k:",
                        "Exploitin g our model for other linked representations such as multilingual or Image-to-Text embeddings.",
                        "Word Sense Disambiguation and Entity Linking.",
                        "- Integrating our embeddings into downstream NLP applications, following the lines of Pilehvar et al. (ACL 2017)."
                    ],
                    "page_nums": [
                        47,
                        48,
                        49
                    ],
                    "images": []
                }
            },
            "paper_title": "Embedding Words and Senses Together via Joint Knowledge-Enhanced Training",
            "paper_id": "1195",
            "paper": {
                "title": "Embedding Words and Senses Together via Joint Knowledge-Enhanced Training",
                "abstract": "Word embeddings are widely used in Natural Language Processing, mainly due to their success in capturing semantic information from massive corpora. However, their creation process does not allow the different meanings of a word to be automatically separated, as it conflates them into a single vector. We address this issue by proposing a new model which learns word and sense embeddings jointly. Our model exploits large corpora and knowledge from semantic networks in order to produce a unified vector space of word and sense embeddings. We evaluate the main features of our approach both qualitatively and quantitatively in a variety of tasks, highlighting the advantages of the proposed method in comparison to stateof-the-art word-and sense-based models.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Recently, approaches based on neural networks which embed words into low-dimensional vector spaces from text corpora (i.e."
                    },
                    {
                        "id": 1,
                        "string": "word embeddings) have become increasingly popular (Mikolov et al., 2013; Pennington et al., 2014) ."
                    },
                    {
                        "id": 2,
                        "string": "Word embeddings have proved to be beneficial in many Natural Language Processing tasks, such as Machine Translation (Zou et al., 2013) , syntactic parsing (Weiss et al., 2015) , and Question Answering (Bordes et al., 2014) , to name a few."
                    },
                    {
                        "id": 3,
                        "string": "Despite their success in capturing semantic properties of words, these representations are generally hampered by an important limitation: the inability to discriminate among different meanings of the same word."
                    },
                    {
                        "id": 4,
                        "string": "Authors marked with an asterisk (*) contributed equally."
                    },
                    {
                        "id": 5,
                        "string": "Previous works have addressed this limitation by automatically inducing word senses from monolingual corpora (Schütze, 1998; Reisinger and Mooney, 2010; Huang et al., 2012; Di Marco and Navigli, 2013; Neelakantan et al., 2014; Tian et al., 2014; Li and Jurafsky, 2015; Vu and Parker, 2016; Qiu et al., 2016) , or bilingual parallel data (Guo et al., 2014; Ettinger et al., 2016; Suster et al., 2016) ."
                    },
                    {
                        "id": 6,
                        "string": "However, these approaches learn solely on the basis of statistics extracted from text corpora and do not exploit knowledge from semantic networks."
                    },
                    {
                        "id": 7,
                        "string": "Additionally, their induced senses are neither readily interpretable (Panchenko et al., 2017) nor easily mappable to lexical resources, which limits their application."
                    },
                    {
                        "id": 8,
                        "string": "Recent approaches have utilized semantic networks to inject knowledge into existing word representations (Yu and Dredze, 2014; Faruqui et al., 2015; Goikoetxea et al., 2015; Speer and Lowry-Duda, 2017; Mrksic et al., 2017) , but without solving the meaning conflation issue."
                    },
                    {
                        "id": 9,
                        "string": "In order to obtain a representation for each sense of a word, a number of approaches have leveraged lexical resources to learn sense embeddings as a result of post-processing conventional word embeddings Johansson and Pina, 2015; Rothe and Schütze, 2015; Pilehvar and Collier, 2016; Camacho-Collados et al., 2016) ."
                    },
                    {
                        "id": 10,
                        "string": "Instead, we propose SW2V (Senses and Words to Vectors), a neural model that exploits knowledge from both text corpora and semantic networks in order to simultaneously learn embeddings for both words and senses."
                    },
                    {
                        "id": 11,
                        "string": "Moreover, our model provides three additional key features: (1) both word and sense embeddings are represented in the same vector space, (2) it is flexible, as it can be applied to different predictive models, and (3) it is scalable for very large semantic networks and text corpora."
                    },
                    {
                        "id": 12,
                        "string": "Related work Embedding words from large corpora into a lowdimensional vector space has been a popular task since the appearance of the probabilistic feedforward neural network language model (Bengio et al., 2003) and later developments such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) ."
                    },
                    {
                        "id": 13,
                        "string": "However, little research has focused on exploiting lexical resources to overcome the inherent ambiguity of word embeddings."
                    },
                    {
                        "id": 14,
                        "string": "Iacobacci et al."
                    },
                    {
                        "id": 15,
                        "string": "(2015) overcame this limitation by applying an off-the-shelf disambiguation system (i.e."
                    },
                    {
                        "id": 16,
                        "string": "Babelfy (Moro et al., 2014) ) to a corpus and then using word2vec to learn sense embeddings over the pre-disambiguated text."
                    },
                    {
                        "id": 17,
                        "string": "However, in their approach words are replaced by their intended senses, consequently producing as output sense representations only."
                    },
                    {
                        "id": 18,
                        "string": "The representation of words and senses in the same vector space proves essential for applying these knowledgebased sense embeddings in downstream applications, particularly for their integration into neural architectures (Pilehvar et al., 2017) ."
                    },
                    {
                        "id": 19,
                        "string": "In the literature, various different methods have attempted to overcome this limitation."
                    },
                    {
                        "id": 20,
                        "string": "proposed a model for obtaining both word and sense representations based on a first training step of conventional word embeddings, a second disambiguation step based on sense definitions, and a final training phase which uses the disambiguated text as input."
                    },
                    {
                        "id": 21,
                        "string": "Likewise, Rothe and Schütze (2015) aimed at building a shared space of word and sense embeddings based on two steps: a first training step of only word embeddings and a second training step to produce sense and synset embeddings."
                    },
                    {
                        "id": 22,
                        "string": "These two approaches require multiple steps of training and make use of a relatively small resource like WordNet, which limits their coverage and applicability."
                    },
                    {
                        "id": 23,
                        "string": "Camacho-Collados et al."
                    },
                    {
                        "id": 24,
                        "string": "(2016) increased the coverage of these WordNetbased approaches by exploiting the complementary knowledge of WordNet and Wikipedia along with pre-trained word embeddings."
                    },
                    {
                        "id": 25,
                        "string": "Finally,  and Fang et al."
                    },
                    {
                        "id": 26,
                        "string": "(2016) proposed a model to align vector spaces of words and entities from knowledge bases."
                    },
                    {
                        "id": 27,
                        "string": "However, these approaches are restricted to nominal instances only (i.e."
                    },
                    {
                        "id": 28,
                        "string": "Wikipedia pages or entities)."
                    },
                    {
                        "id": 29,
                        "string": "In contrast, we propose a model which learns both words and sense embeddings from a single joint training phase, producing a common vector space of words and senses as an emerging feature."
                    },
                    {
                        "id": 30,
                        "string": "Connecting words and senses in context In order to jointly produce embeddings for words and senses, SW2V needs as input a corpus where words are connected to senses 1 in each given context."
                    },
                    {
                        "id": 31,
                        "string": "One option for obtaining such connections could be to take a sense-annotated corpus as input."
                    },
                    {
                        "id": 32,
                        "string": "However, manually annotating large amounts of data is extremely expensive and therefore impractical in normal settings."
                    },
                    {
                        "id": 33,
                        "string": "Obtaining sense-annotated data from current off-the-shelf disambiguation and entity linking systems is possible, but generally suffers from two major problems."
                    },
                    {
                        "id": 34,
                        "string": "First, supervised systems are hampered by the very same problem of needing large amounts of sense-annotated data."
                    },
                    {
                        "id": 35,
                        "string": "Second, the relatively slow speed of current disambiguation systems, such as graph-based approaches (Hoffart et al., 2012; Agirre et al., 2014; Moro et al., 2014) , or word-expert supervised systems (Zhong and Ng, 2010; Iacobacci et al., 2016; Melamud et al., 2016) , could become an obstacle when applied to large corpora."
                    },
                    {
                        "id": 36,
                        "string": "This is the reason why we propose a simple yet effective unsupervised shallow word-sense connectivity algorithm, which can be applied to virtually any given semantic network and is linear on the corpus size."
                    },
                    {
                        "id": 37,
                        "string": "The main idea of the algorithm is to exploit the connections of a semantic network by associating words with the senses that are most connected within the sentence, according to the underlying network."
                    },
                    {
                        "id": 38,
                        "string": "Shallow word-sense connectivity algorithm."
                    },
                    {
                        "id": 39,
                        "string": "Formally, a corpus and a semantic network are taken as input and a set of connected words and senses is produced as output."
                    },
                    {
                        "id": 40,
                        "string": "We define a semantic network as a graph (S, E) where the set S contains synsets (nodes) and E represents a set of semantically connected synset pairs (edges)."
                    },
                    {
                        "id": 41,
                        "string": "Algorithm 1 describes how to connect words and senses in a given text (sentence or paragraph) T ."
                    },
                    {
                        "id": 42,
                        "string": "First, we gather in a set S T all candidate synsets of the words (including multiwords up to trigrams) in T (lines 1 to 3)."
                    },
                    {
                        "id": 43,
                        "string": "Second, for each candidate synset s we calculate the number of synsets which are connected with s in the semantic network and are included in S T , excluding connections of synsets which only appear as candidates of the Relative maximum connections max = 0 8: Set of senses associated with w, Cw ← ∅ 9: for each candidate synset s ∈ Sw 10: Number of edges n = |s ∈ ST : (s, s ) ∈ E & ∃w ∈ T : w = w & s ∈ S w | 11: if n ≥ max & n ≥ θ then 12: if n > max then 13: Cw ← {(w, s)} 14: max ← n 15: else 16: Cw ← Cw ∪ {(w, s)} 17: T * ← T * ∪ Cw 18: return Output set of connected words and senses T * same word (lines 5 to 10)."
                    },
                    {
                        "id": 44,
                        "string": "Finally, each word is associated with its top candidate synset(s) according to its/their number of connections in context, provided that its/their number of connections exceeds a threshold θ = |S T |+|T | 2 δ (lines 11 to 17)."
                    },
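To make Algorithm 1 concrete, the following is a minimal Python sketch of the shallow word-sense connectivity procedure described above. The helper `candidate_synsets`, the edge set `E`, and the parameter name `delta` are illustrative assumptions standing in for a real semantic network such as BabelNet; this is not the authors' released implementation.

```python
def connect_words_and_senses(T, candidate_synsets, E, delta=100.0):
    """Shallow word-sense connectivity (a sketch of Algorithm 1).

    T: list of content words in a sentence or paragraph.
    candidate_synsets: maps a word to its candidate synsets.
    E: set of frozenset({s, s2}) pairs of connected synsets.
    """
    # Lines 1-3: gather all candidate synsets of the words in T.
    S = {w: set(candidate_synsets(w)) for w in T}
    S_T = set().union(*S.values()) if S else set()
    # Threshold: theta = (|S_T| + |T|) / (2 * delta).
    theta = (len(S_T) + len(T)) / (2.0 * delta)
    T_star = set()
    for w in T:
        max_n, C_w = 0, set()
        for s in S[w]:
            # Lines 5-10: count synsets in S_T connected to s, excluding
            # synsets that only appear as candidates of w itself.
            n = sum(1 for s2 in S_T
                    if frozenset((s, s2)) in E
                    and any(w2 != w and s2 in S[w2] for w2 in T))
            # Lines 11-16: keep the top candidate synset(s) above theta.
            if n >= max_n and n >= theta:
                if n > max_n:
                    C_w, max_n = {(w, s)}, n
                else:
                    C_w.add((w, s))
        T_star |= C_w  # line 17
    return T_star      # line 18: set of connected (word, sense) pairs
```

Since each word only loops over its own candidate synsets, the cost stays close to the N + (N × α) bound discussed below.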
                    {
                        "id": 45,
                        "string": "2 This parameter aims to retain relevant connectivity across senses, as only senses above the threshold will be connected to words in the output corpus."
                    },
                    {
                        "id": 46,
                        "string": "θ is proportional to the reciprocal of a parameter δ, 3 and directly proportional to the average text length and number of candidate synsets within the text."
                    },
                    {
                        "id": 47,
                        "string": "The complexity of the proposed algorithm is N + (N × α), where N is the number of words of the training corpus and α is the average polysemy degree of a word in the corpus according to the input semantic network."
                    },
                    {
                        "id": 48,
                        "string": "Considering that noncontent words are not taken into account (i.e."
                    },
                    {
                        "id": 49,
                        "string": "polysemy degree 0) and that the average polysemy degree of words in current lexical resources (e.g."
                    },
                    {
                        "id": 50,
                        "string": "WordNet or BabelNet) does not exceed a small constant (3) in any language, we can safely assume that the algorithm is linear in the size of the training corpus."
                    },
                    {
                        "id": 51,
                        "string": "Hence, the training time is not significantly increased in comparison to training words only, irrespective of the corpus size."
                    },
                    {
                        "id": 52,
                        "string": "This enables a fast training on large amounts of text corpora, in contrast to current unsupervised disambiguation algorithms."
                    },
                    {
                        "id": 53,
                        "string": "Additionally, as we will show in Section 5.2, this algorithm does not only speed up significantly the training phase, but also leads to more accurate results."
                    },
                    {
                        "id": 54,
                        "string": "Note that with our algorithm a word is allowed to have more than one sense associated."
                    },
                    {
                        "id": 55,
                        "string": "In fact, current lexical resources like WordNet (Miller, 1995) or BabelNet (Navigli and Ponzetto, 2012) are hampered by the high granularity of their sense inventories (Hovy et al., 2013) ."
                    },
                    {
                        "id": 56,
                        "string": "In Section 6.2 we show how our sense embeddings are particularly suited to deal with this issue."
                    },
                    {
                        "id": 57,
                        "string": "Joint training of words and senses The goal of our approach is to obtain a shared vector space of words and senses."
                    },
                    {
                        "id": 58,
                        "string": "To this end, our model extends conventional word embedding models by integrating explicit knowledge into its architecture."
                    },
                    {
                        "id": 59,
                        "string": "While we will focus on the Continuous Bag Of Words (CBOW) architecture of word2vec (Mikolov et al., 2013) , our extension can easily be applied similarly to Skip-Gram, or to other predictive approaches based on neural networks."
                    },
                    {
                        "id": 60,
                        "string": "The CBOW architecture is based on the feedforward neural network language model (Bengio et al., 2003) and aims at predicting the current word using its surrounding context."
                    },
                    {
                        "id": 61,
                        "string": "The architecture consists of input, hidden and output layers."
                    },
                    {
                        "id": 62,
                        "string": "The input layer has the size of the word vocabulary and encodes the context as a combination of onehot vector representations of surrounding words of a given target word."
                    },
                    {
                        "id": 63,
                        "string": "The output layer has the same size as the input layer and contains a one-hot vector of the target word during the training phase."
                    },
                    {
                        "id": 64,
                        "string": "Our model extends the input and output layers of the neural network with word senses 4 by exploiting the intrinsic relationship between words and senses."
                    },
                    {
                        "id": 65,
                        "string": "The leading principle is that, since a word is the surface form of an underlying sense, updating the embedding of the word should produce a consequent update to the embedding representing that particular sense, and vice-versa."
                    },
                    {
                        "id": 66,
                        "string": "As a consequence of the algorithm described in the previous section, each word in the corpus may be connected with zero, one or more senses."
                    },
                    {
                        "id": 67,
                        "string": "We re- Figure 1 : The SW2V architecture on a sample training instance using four context words."
                    },
                    {
                        "id": 68,
                        "string": "Dotted lines represent the virtual link between words and associated senses in context."
                    },
                    {
                        "id": 69,
                        "string": "In this example, the input layer consists of a context of two previous words (w t−2 , w t−1 ) and two subsequent words (w t+1 , w t+2 ) with respect to the target word w t ."
                    },
                    {
                        "id": 70,
                        "string": "Two words (w t−1 , w t+2 ) do not have senses associated in context, while w t−2 , w t+1 have three senses (s 1 t−1 , s 2 t−1 , s 3 t−1 ) and one sense associated (s 1 t+1 ) in context, respectively."
                    },
                    {
                        "id": 71,
                        "string": "The output layer consists of the target word w t , which has two senses associated (s 1 t , s 2 t ) in context."
                    },
                    {
                        "id": 72,
                        "string": "fer to the set of senses connected to a given word within the specific context as its associated senses."
                    },
                    {
                        "id": 73,
                        "string": "Formally, we define a training instance as a sequence of words W = w t−n , ..., w t , ..., w t+n (being w t the target word) and S = S t−n , ..., S t , ...., S t+n , where S i = s 1 i , ..., s k i i is the sequence of all associated senses in context of w i ∈ W ."
                    },
                    {
                        "id": 74,
                        "string": "Note that S i might be empty if the word w i does not have any associated sense."
                    },
                    {
                        "id": 75,
                        "string": "In our model each target word takes as context both its surrounding words and all the senses associated with them."
                    },
                    {
                        "id": 76,
                        "string": "In contrast to the original CBOW architecture, where the training criterion is to correctly classify w t , our approach aims to predict the word w t and its set S t of associated senses."
                    },
                    {
                        "id": 77,
                        "string": "This is equivalent to minimizing the following loss function: Figure 1 shows the organization of the input and the output layers on a sample training instance."
                    },
                    {
                        "id": 78,
                        "string": "In what follows we present a set of variants of the model on the output and the input layers."
                    },
                    {
                        "id": 79,
                        "string": "E = − log(p(w t |W t , S t ))− s∈St log(p(s|W t , S t )) where W t = w t−n , ..., w t−1 , w t+1 , ..., w t+n and S t = S t−n , ..., S t−1 , S t+1 , ..., S t+n ."
                    },
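For illustration, here is a toy PyTorch sketch of this objective, using the only-senses input and words-plus-senses output configuration that the analysis in Section 5.1 later identifies as best. It uses a full softmax via cross-entropy, whereas the paper trains with hierarchical softmax, and all sizes and names are assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

V_WORDS, V_SENSES, DIM = 10000, 20000, 300     # illustrative sizes
emb_in = torch.nn.Embedding(V_SENSES, DIM)     # input sense embeddings
out_word = torch.nn.Linear(DIM, V_WORDS)       # word output layer
out_sense = torch.nn.Linear(DIM, V_SENSES)     # sense output layer

def sw2v_loss(context_sense_ids, target_word_id, target_sense_ids):
    """E = -log p(w_t | ctx) - sum over s in S_t of log p(s | ctx)."""
    ctx = torch.tensor(context_sense_ids)
    h = emb_in(ctx).mean(dim=0, keepdim=True)  # CBOW-style hidden state
    loss = F.cross_entropy(out_word(h), torch.tensor([target_word_id]))
    for s in target_sense_ids:                 # associated senses of w_t
        loss = loss + F.cross_entropy(out_sense(h), torch.tensor([s]))
    return loss
```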
                    {
                        "id": 80,
                        "string": "Output layer alternatives Both words and senses."
                    },
                    {
                        "id": 81,
                        "string": "This is the default case explained above."
                    },
                    {
                        "id": 82,
                        "string": "If a word has one or more associated senses, these senses are also used as target on a separate output layer."
                    },
                    {
                        "id": 83,
                        "string": "Only words."
                    },
                    {
                        "id": 84,
                        "string": "In this case we exclude senses as target."
                    },
                    {
                        "id": 85,
                        "string": "There is a single output layer with the size of the word vocabulary as in the original CBOW model."
                    },
                    {
                        "id": 86,
                        "string": "Only senses."
                    },
                    {
                        "id": 87,
                        "string": "In contrast, this alternative excludes words, using only senses as target."
                    },
                    {
                        "id": 88,
                        "string": "In this case, if a word does not have any associated sense, it is not used as target instance."
                    },
                    {
                        "id": 89,
                        "string": "Input layer alternatives Both words and senses."
                    },
                    {
                        "id": 90,
                        "string": "Words and their associated senses are included in the input layer and contribute to the hidden state."
                    },
                    {
                        "id": 91,
                        "string": "Both words and senses are updated as a consequence of the backpropagation algorithm."
                    },
                    {
                        "id": 92,
                        "string": "Only words."
                    },
                    {
                        "id": 93,
                        "string": "In this alternative only the surrounding words contribute to the hidden state, i.e."
                    },
                    {
                        "id": 94,
                        "string": "the target word/sense (depending on the alternative of the output layer) is predicted only from word features."
                    },
                    {
                        "id": 95,
                        "string": "The update of an input word is propagated to the embeddings of its associated senses, if any."
                    },
                    {
                        "id": 96,
                        "string": "In other words, despite not being included in the input layer, senses still receive the same gradient of the associated input word, through a virtual connection."
                    },
                    {
                        "id": 97,
                        "string": "This configuration, coupled with the only-words output layer configuration, corresponds exactly to the default CBOW architecture of word2vec with the only addition of the update step for senses."
                    },
                    {
                        "id": 98,
                        "string": "Only senses."
                    },
                    {
                        "id": 99,
                        "string": "Words are excluded from the input layer and the target is predicted only from the senses associated with the surrounding words."
                    },
                    {
                        "id": 100,
                        "string": "The weights of the words are updated through the updates of the associated senses, in contrast to the only-words alternative."
                    },
                    {
                        "id": 101,
                        "string": "Analysis of Model Components In this section we analyze the different components of SW2V, including the nine model configurations (Section 5.1) and the algorithm which generates the connections between words and senses in context (Section 5.2)."
                    },
                    {
                        "id": 102,
                        "string": "In what follows we describe the common analysis setting: • Training model and hyperparameters."
                    },
                    {
                        "id": 103,
                        "string": "For evaluation purposes, we use the CBOW model of word2vec with standard hyperparameters: the dimensionality of the vectors is set to 300 and the window size to 8, and hierarchical softmax is used for normalization."
                    },
                    {
                        "id": 104,
                        "string": "These hyperparameter values are set across all experiments."
                    },
                    {
                        "id": 105,
                        "string": "• Corpus and semantic network."
                    },
                    {
                        "id": 106,
                        "string": "We use a 300M-words corpus from the UMBC project (Han et al., 2013) , which contains English paragraphs extracted from the web."
                    },
                    {
                        "id": 107,
                        "string": "5 As semantic network we use BabelNet 3.0 6 , a large multilingual semantic network with over 350 million semantic connections, integrating resources such as Wikipedia and WordNet."
                    },
                    {
                        "id": 108,
                        "string": "We chose BabelNet owing to its wide coverage of named entities and lexicographic knowledge."
                    },
                    {
                        "id": 109,
                        "string": "• Benchmark."
                    },
                    {
                        "id": 110,
                        "string": "Word similarity has been one of the most popular benchmarks for in-vitro evaluation of vector space models (Pennington et al., 2014; Levy et al., 2015) ."
                    },
                    {
                        "id": 111,
                        "string": "For the analysis we use two word similarity datasets: the similarity portion (Agirre et al., 2009 , WS-Sim) of the WordSim-353 dataset (Finkelstein et al., 2002) and RG-65 (Rubenstein and Goodenough, 1965) ."
                    },
                    {
                        "id": 112,
                        "string": "In order to compute the similarity of two words using our sense embeddings, we apply the standard closest senses strategy (Resnik, 1995; Budanitsky and Hirst, 2006; Camacho-Collados et al., 2015) , using cosine similarity (cos) as comparison measure between senses: sim(w 1 , w 2 ) = max s∈Sw 1 ,s ∈Sw 2 cos( s 1 , s 2 ) (1) where S w i represents the set of all candidate senses of w i and s i refers to the sense vector representation of the sense s i ."
                    },
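A short sketch of the closest-senses strategy in Eq. (1); `candidate_senses` and `sense_vec` are hypothetical lookups into the sense inventory and the trained sense embeddings, not part of any released API.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_similarity(w1, w2, candidate_senses, sense_vec):
    """Eq. (1): maximum cosine over all candidate sense pairs of w1, w2."""
    return max(cosine(sense_vec[s1], sense_vec[s2])
               for s1 in candidate_senses(w1)
               for s2 in candidate_senses(w2))
```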
                    {
                        "id": 113,
                        "string": "Model configurations In this section we analyze the different configurations of our model in respect of the input and the output layer on a word similarity experiment."
                    },
                    {
                        "id": 114,
                        "string": "Recall from Section 4 that our model could have words, senses or both in either the input and output layers."
                    },
                    {
                        "id": 115,
                        "string": "Table 1 shows the results of all nine configurations on the WS-Sim and RG-65 datasets."
                    },
                    {
                        "id": 116,
                        "string": "As shown in Table 1 , the best configuration according to both Spearman and Pearson correlation measures is the configuration which has only senses in the input layer and both words and senses in the output layer."
                    },
                    {
                        "id": 117,
                        "string": "7 In fact, taking only senses as input seems to be consistently the best alternative for the input layer."
                    },
                    {
                        "id": 118,
                        "string": "Our hunch is that the knowledge learned from both the co-occurrence information and the semantic network is more balanced with this input setting."
                    },
                    {
                        "id": 119,
                        "string": "For instance, in the case of including both words and senses in the input layer, the co-occurrence information learned by the network would be duplicated for both words and senses."
                    },
                    {
                        "id": 120,
                        "string": "Disambiguation / Shallow word-sense connectivity algorithm In this section we evaluate the impact of our shallow word-sense connectivity algorithm (Section 3) by testing our model directly taking a predisambiguated text as input."
                    },
                    {
                        "id": 121,
                        "string": "In this case the network exploits the connections between each word and its disambiguated sense in context."
                    },
                    {
                        "id": 122,
                        "string": "For this comparison we used Babelfy 8 (Moro et al., 2014) , a state-of-the-art graph-based disambiguation and entity linking system based on BabelNet."
                    },
                    {
                        "id": 123,
                        "string": "We compare to both the default Babelfy system which 7 In this analysis we used the word similarity task for optimizing the sense embeddings, without caring about the performance of word embeddings or their interconnectivity."
                    },
                    {
                        "id": 124,
                        "string": "Therefore, this configuration may not be optimal for word embeddings and may be further tuned on specific applications."
                    },
                    {
                        "id": 125,
                        "string": "More information about different configurations in the documentation of the source code."
                    },
                    {
                        "id": 126,
                        "string": "uses the Most Common Sense (MCS) heuristic as a back-off strategy and, following (Iacobacci et al., 2015) , we also include a version in which only instances above the Babelfy default confidence threshold are disambiguated (i.e."
                    },
                    {
                        "id": 127,
                        "string": "the MCS backoff strategy is disabled)."
                    },
                    {
                        "id": 128,
                        "string": "We will refer to this latter version as Babelfy* and report the best configuration of each strategy according to our analysis."
                    },
                    {
                        "id": 129,
                        "string": "Table 2 shows the results of our model using the three different strategies on RG-65 and WS-Sim."
                    },
                    {
                        "id": 130,
                        "string": "Our shallow word-sense connectivity algorithm achieves the best overall results."
                    },
                    {
                        "id": 131,
                        "string": "We believe that these results are due to the semantic connectivity ensured by our algorithm and to the possibility of associating words with more than one sense, which seems beneficial for training, making it more robust to possible disambiguation errors and to the sense granularity issue (Erk et al., 2013) ."
                    },
                    {
                        "id": 132,
                        "string": "The results are especially significant considering that our algorithm took a tenth of the time needed by Babelfy to process the corpus."
                    },
                    {
                        "id": 133,
                        "string": "Evaluation We perform a qualitative and quantitative evaluation of important features of SW2V in three different tasks."
                    },
                    {
                        "id": 134,
                        "string": "First, in order to compare our model against standard word-based approaches, we evaluate our system in the word similarity task (Section 6.1)."
                    },
                    {
                        "id": 135,
                        "string": "Second, we measure the quality of our sense embeddings in a sense-specific application: sense clustering (Section 6.2)."
                    },
                    {
                        "id": 136,
                        "string": "Finally, we evaluate the coherence of our unified vector space by measuring the interconnectivity of word and sense embeddings (Section 6.3)."
                    },
                    {
                        "id": 137,
                        "string": "Experimental setting."
                    },
                    {
                        "id": 138,
                        "string": "Throughout all the experiments we use the same standard hyperparameters mentioned in Section 5 for both the original word2vec implementation and our proposed model SW2V."
                    },
                    {
                        "id": 139,
                        "string": "For SW2V we use the same optimal configuration according to the analysis of the previous section (only senses as input, and both words and senses as output) for all tasks."
                    },
                    {
                        "id": 140,
                        "string": "As training corpus we take the full 3B-words UMBC webbase corpus and the Wikipedia (Wikipedia dump of November 2014), used by three of the comparison systems."
                    },
                    {
                        "id": 141,
                        "string": "We use BabelNet 3.0 (SW2V BN ) and WordNet 3.0 (SW2V WN ) as semantic networks."
                    },
                    {
                        "id": 142,
                        "string": "Comparison systems."
                    },
                    {
                        "id": 143,
                        "string": "We compare with the publicly available pre-trained sense embeddings of four state-of-the-art models: 9 and AutoExtend 10 (Rothe and Schütze, 2015) based on WordNet, and SensEmbed 11 (Iacobacci et al., 2015) and NASARI 12 (Camacho-Collados et al., 2016) based on BabelNet."
                    },
                    {
                        "id": 144,
                        "string": "Word Similarity In this section we evaluate our sense representations on the standard SimLex-999 (Hill et al., 2015) and MEN (Bruni et al., 2014) word similarity datasets 13 ."
                    },
                    {
                        "id": 145,
                        "string": "SimLex and MEN contain 999 and 3000 word pairs, respectively, which constitute, to our knowledge, the two largest similar-  ity datasets comprising a balanced set of noun, verb and adjective instances."
                    },
                    {
                        "id": 146,
                        "string": "As explained in Section 5, we use the closest sense strategy for the word similarity measurement of our model and all sense-based comparison systems."
                    },
                    {
                        "id": 147,
                        "string": "As regards the word embedding models, words are directly compared by using cosine similarity."
                    },
                    {
                        "id": 148,
                        "string": "We also include a retrofitted version of the original word2vec word vectors (Faruqui et al., 2015, Retrofitting 14 ) using WordNet (Retrofitting WN ) and BabelNet (Retrofitting BN ) as lexical resources."
                    },
                    {
                        "id": 149,
                        "string": "Table 3 shows the results of SW2V and all comparison models in SimLex and MEN."
                    },
                    {
                        "id": 150,
                        "string": "SW2V consistently outperforms all sense-based comparison systems using the same corpus, and clearly performs better than the original word2vec trained on the same corpus."
                    },
                    {
                        "id": 151,
                        "string": "Retrofitting decreases the performance of the original word2vec on the Wikipedia corpus using BabelNet as lexical resource, but significantly improves the original word vectors on the UMBC corpus, obtaining comparable results to our approach."
                    },
                    {
                        "id": 152,
                        "string": "However, while our approach provides a shared space of words and senses, Retrofitting still conflates different meanings of a word into the same vector."
                    },
                    {
                        "id": 153,
                        "string": "Additionally, we noticed that most of the score divergences between our system and the gold standard scores in SimLex-999 were produced on 14 https://github.com/mfaruqui/ retrofitting antonym pairs, which are over-represented in this dataset: 38 word pairs hold a clear antonymy relation (e.g."
                    },
                    {
                        "id": 154,
                        "string": "encourage-discourage or long-short), while 41 additional pairs hold some degree of antonymy (e.g."
                    },
                    {
                        "id": 155,
                        "string": "new-ancient or man-woman)."
                    },
                    {
                        "id": 156,
                        "string": "15 In contrast to the consistently low gold similarity scores given to antonym pairs, our system varies its similarity scores depending on the specific nature of the pair 16 ."
                    },
                    {
                        "id": 157,
                        "string": "Recent works have managed to obtain significant improvements by tweaking usual word embedding approaches into providing low similarity scores for antonym pairs (Pham et al., 2015; Schwartz et al., 2015; Nguyen et al., 2016; Mrksic et al., 2017) , but this is outside the scope of this paper."
                    },
                    {
                        "id": 158,
                        "string": "Sense Clustering Current lexical resources tend to suffer from the high granularity of their sense inventories ."
                    },
                    {
                        "id": 159,
                        "string": "In fact, a meaningful clustering of their senses may lead to improvements on downstream tasks (Hovy et al., 2013; Flekova and Gurevych, 2016; Pilehvar et al., 2017) ."
                    },
                    {
                        "id": 160,
                        "string": "In this section we evaluate our synset representations on the Wikipedia sense clustering task."
                    },
                    {
                        "id": 161,
                        "string": "For a fair comparison with respect to the BabelNet-based com- Dandala et al."
                    },
                    {
                        "id": 162,
                        "string": "(2013) ."
                    },
                    {
                        "id": 163,
                        "string": "In these datasets sense clustering is viewed as a binary classification task in which, given a pair of Wikipedia pages, the system has to decide whether to cluster them into a single instance or not."
                    },
                    {
                        "id": 164,
                        "string": "To this end, we use our synset embeddings and cluster Wikipedia pages 17 together if their similarity exceeds a threshold γ."
                    },
                    {
                        "id": 165,
                        "string": "In order to set the optimal value of γ, we follow Dandala et al."
                    },
                    {
                        "id": 166,
                        "string": "(2013) and use the first 500-pairs sense clustering dataset for tuning."
                    },
                    {
                        "id": 167,
                        "string": "We set the threshold γ to 0.35, which is the value leading to the highest F-Measure among all values from 0 to 1 with a 0.05 step size on the 500-pair dataset."
                    },
                    {
                        "id": 168,
                        "string": "Likewise, we set a threshold for NASARI (0.7) and SensEmbed (0.3) comparison systems."
                    },
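The clustering decision then reduces to a thresholded similarity test, sketched below under the assumption that `synset_vec` is a hypothetical lookup from a Wikipedia page (synset) to its embedding; 0.35 is the value of γ tuned above for SW2V.

```python
import numpy as np

def same_cluster(page_a, page_b, synset_vec, gamma=0.35):
    """Cluster two Wikipedia pages if the cosine similarity of their
    synset embeddings exceeds the tuned threshold gamma."""
    u, v = synset_vec[page_a], synset_vec[page_b]
    sim = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return sim > gamma
```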
                    {
                        "id": 169,
                        "string": "Finally, we evaluate our approach on the Se-mEval sense clustering test set."
                    },
                    {
                        "id": 170,
                        "string": "This test set consists of 925 pairs which were obtained from a set of highly ambiguous words gathered from past SemEval tasks."
                    },
                    {
                        "id": 171,
                        "string": "For comparison, we also include the supervised approach of Dandala et al."
                    },
                    {
                        "id": 172,
                        "string": "(2013) based on a multi-feature Support Vector Machine classifier trained on an automaticallylabeled dataset of the English Wikipedia (Mono-SVM) and Wikipedia in four different languages (Multi-SVM)."
                    },
                    {
                        "id": 173,
                        "string": "As naive baseline we include the system which would cluster all given pairs."
                    },
                    {
                        "id": 174,
                        "string": "Table 4 shows the F-Measure and accuracy results on the SemEval sense clustering dataset."
                    },
                    {
                        "id": 175,
                        "string": "SW2V outperforms all comparison systems according to both measures, including the sense rep-resentations of NASARI and SensEmbed using the same setup and the same underlying lexical resource."
                    },
                    {
                        "id": 176,
                        "string": "This confirms the capability of our system to accurately capture the semantics of word senses on this sense-specific task."
                    },
                    {
                        "id": 177,
                        "string": "Word and sense interconnectivity In the previous experiments we evaluated the effectiveness of the sense embeddings."
                    },
                    {
                        "id": 178,
                        "string": "In contrast, this experiment aims at testing the interconnectivity between word and sense embeddings in the vector space."
                    },
                    {
                        "id": 179,
                        "string": "As explained in Section 2, there have been previous approaches building a shared space of word and sense embeddings, but to date little research has focused on testing the semantic coherence of the vector space."
                    },
                    {
                        "id": 180,
                        "string": "To this end, we evaluate our model on a Word Sense Disambiguation (WSD) task, using our shared vector space of words and senses to obtain a Most Common Sense (MCS) baseline."
                    },
                    {
                        "id": 181,
                        "string": "The insight behind this experiment is that a semantically coherent shared space of words and senses should be able to build a relatively strong baseline for the task, as the MCS of a given word should be closer to the word vector than any other sense."
                    },
                    {
                        "id": 182,
                        "string": "The MCS baseline is generally integrated into the pipeline of stateof-the-art WSD and Entity Linking systems as a back-off strategy (Navigli, 2009; Jin et al., 2009; Zhong and Ng, 2010; Moro et al., 2014; Raganato et al., 2017) and is used in various NLP applications (Bennett et al., 2016) ."
                    },
                    {
                        "id": 183,
                        "string": "Therefore, a system which automatically identifies the MCS of words from non-annotated text may be quite valuable, especially for resource-poor languages or large knowledge resources for which obtaining senseannotated corpora is extremely expensive."
                    },
                    {
                        "id": 184,
                        "string": "Moreover, even in a resource like WordNet for which sense-annotated data is available (Miller et al., 1993, SemCor) , 61% of its polysemous lemmas have no sense annotations (Bennett et al., 2016) ."
                    },
                    {
                        "id": 185,
                        "string": "Given an input word w, we compute the cosine similarity between w and all its candidate senses, picking the sense leading to the highest similarity: M CS(w) = argmax s∈Sw cos( w, s) (2) where cos( w, s) refers to the cosine similarity between the embeddings of w and s. In order to assess the reliability of SW2V against previous models using WordNet as sense inventory, we test our model on the all-words SemEval-2007 (task 17) (Pradhan et al., 2007) and SemEval-2013 (task    WSD datasets."
                    },
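A sketch of the MCS baseline in Eq. (2): pick the candidate sense whose embedding is closest to the word's embedding in the shared space. `candidate_senses`, `word_vec`, and `sense_vec` are hypothetical lookups, not the authors' released code.

```python
import numpy as np

def most_common_sense(w, candidate_senses, word_vec, sense_vec):
    """Eq. (2): argmax over s in S_w of cos(word_vec[w], sense_vec[s])."""
    u = word_vec[w]
    def cosine(v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(candidate_senses(w), key=lambda s: cosine(sense_vec[s]))
```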
                    {
                        "id": 186,
                        "string": "Note that our model using BabelNet as semantic network has a far larger coverage than just WordNet and may additionally be used for Wikification (Mihalcea and Csomai, 2007) and Entity Linking tasks."
                    },
                    {
                        "id": 187,
                        "string": "Since the versions of WordNet vary across datasets and comparison systems, we decided to evaluate the systems on the portion of the datasets covered by all comparison systems 18 (less than 10% of instances were removed from each dataset)."
                    },
                    {
                        "id": 188,
                        "string": "Table 5 shows the results of our system and AutoExtend on the SemEval-2007 and SemEval-2013 WSD datasets."
                    },
                    {
                        "id": 189,
                        "string": "SW2V provides the best MCS results in both datasets."
                    },
                    {
                        "id": 190,
                        "string": "In general, AutoExtend does not accurately capture the predominant sense of a word and performs worse than a baseline that selects the intended sense randomly from the set of all possible senses of the target word."
                    },
                    {
                        "id": 191,
                        "string": "In fact, AutoExtend tends to create clusters which include a word and all its possible senses."
                    },
                    {
                        "id": 192,
                        "string": "As an example, Table 6 shows the closest word and sense 19 embeddings of our SW2V model and Au-toExtend to the military and fish senses of, respectively, company and school."
                    },
                    {
                        "id": 193,
                        "string": "AutoExtend creates clusters with all the senses of company and school and their related instances, even if they belong to different domains (e.g., firm 2 n or business 1 n clearly concern the business sense of company)."
                    },
                    {
                        "id": 194,
                        "string": "Instead, SW2V creates a semantic cluster of word and sense embeddings which are semantically close to the corresponding company 2 n and school 7 n senses."
                    },
                    {
                        "id": 195,
                        "string": "Conclusion and Future Work In this paper we proposed SW2V (Senses and Words to Vectors), a neural model which learns vector representations for words and senses in a joint training phase by exploiting both text corpora and knowledge from semantic networks."
                    },
                    {
                        "id": 196,
                        "string": "Data (in- 18 We were unable to obtain the word embeddings of  for comparison even after contacting the authors."
                    },
                    {
                        "id": 197,
                        "string": "19 Following Navigli (2009), word p n is the n th sense of word with part of speech p (using WordNet 3.0)."
                    },
                    {
                        "id": 198,
                        "string": "Table 6 : Ten closest word and sense embeddings to the senses company 2 n (military unit) and school 7 n (group of fish)."
                    },
                    {
                        "id": 199,
                        "string": "cluding the preprocessed corpora and pre-trained embeddings used in the evaluation) and source code to apply our extension of the word2vec architecture to learn word and sense embeddings from any preprocessed corpus are freely available at http://lcl.uniroma1.it/sw2v."
                    },
                    {
                        "id": 200,
                        "string": "Unlike previous sense-based models which require post-processing steps and use WordNet as sense inventory, our model achieves a semantically coherent vector space of both words and senses as an emerging feature of a single training phase and is easily scalable to larger semantic networks like BabelNet."
                    },
                    {
                        "id": 201,
                        "string": "Finally, we showed, both quantitatively and qualitatively, some of the advantages of using our approach as against previous state-ofthe-art word-and sense-based models in various tasks, and highlighted interesting semantic properties of the resulting unified vector space of word and sense embeddings."
                    },
                    {
                        "id": 202,
                        "string": "As future work we plan to integrate a WSD and Entity Linking system for applying our model on downstream NLP applications, along the lines of Pilehvar et al."
                    },
                    {
                        "id": 203,
                        "string": "(2017) ."
                    },
                    {
                        "id": 204,
                        "string": "We are also planning to apply our model to languages other than English and to study its potential on multilingual and crosslingual applications."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 11
                    },
                    {
                        "section": "Related work",
                        "n": "2",
                        "start": 12,
                        "end": 29
                    },
                    {
                        "section": "Connecting words and senses in context",
                        "n": "3",
                        "start": 30,
                        "end": 56
                    },
                    {
                        "section": "Joint training of words and senses",
                        "n": "4",
                        "start": 57,
                        "end": 79
                    },
                    {
                        "section": "Output layer alternatives",
                        "n": "4.1",
                        "start": 80,
                        "end": 88
                    },
                    {
                        "section": "Input layer alternatives",
                        "n": "4.2",
                        "start": 89,
                        "end": 100
                    },
                    {
                        "section": "Analysis of Model Components",
                        "n": "5",
                        "start": 101,
                        "end": 112
                    },
                    {
                        "section": "Model configurations",
                        "n": "5.1",
                        "start": 113,
                        "end": 119
                    },
                    {
                        "section": "Disambiguation / Shallow word-sense connectivity algorithm",
                        "n": "5.2",
                        "start": 120,
                        "end": 132
                    },
                    {
                        "section": "Evaluation",
                        "n": "6",
                        "start": 133,
                        "end": 143
                    },
                    {
                        "section": "Word Similarity",
                        "n": "6.1",
                        "start": 144,
                        "end": 157
                    },
                    {
                        "section": "Sense Clustering",
                        "n": "6.2",
                        "start": 158,
                        "end": 176
                    },
                    {
                        "section": "Word and sense interconnectivity",
                        "n": "6.3",
                        "start": 177,
                        "end": 194
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "7",
                        "start": 195,
                        "end": 204
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1195-Table1-1.png",
                        "caption": "Table 1: Pearson (r) and Spearman (ρ) correlation performance of the nine configurations of SW2V",
                        "page": 5,
                        "bbox": {
                            "x1": 106.56,
                            "x2": 488.15999999999997,
                            "y1": 61.44,
                            "y2": 144.96
                        }
                    },
                    {
                        "filename": "../figure/image/1195-Table2-1.png",
                        "caption": "Table 2: Pearson (r) and Spearman (ρ) correlation performance of SW2V integrating our shallow word-sense connectivity algorithm (default), Babelfy, or Babelfy*.",
                        "page": 5,
                        "bbox": {
                            "x1": 94.56,
                            "x2": 268.32,
                            "y1": 191.51999999999998,
                            "y2": 258.24
                        }
                    },
                    {
                        "filename": "../figure/image/1195-Table3-1.png",
                        "caption": "Table 3: Pearson (r) and Spearman (ρ) correlation performance on the SimLex-999 and MEN word similarity datasets.",
                        "page": 6,
                        "bbox": {
                            "x1": 128.64,
                            "x2": 468.47999999999996,
                            "y1": 61.44,
                            "y2": 284.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1195-Table4-1.png",
                        "caption": "Table 4: Accuracy and F-Measure percentages of different systems on the SemEval Wikipedia sense clustering dataset.",
                        "page": 7,
                        "bbox": {
                            "x1": 87.84,
                            "x2": 274.08,
                            "y1": 61.44,
                            "y2": 161.28
                        }
                    },
                    {
                        "filename": "../figure/image/1195-Figure1-1.png",
                        "caption": "Figure 1: The SW2V architecture on a sample training instance using four context words. Dotted lines represent the virtual link between words and associated senses in context. In this example, the input layer consists of a context of two previous words (wt−2, wt−1) and two subsequent words (wt+1, wt+2) with respect to the target word wt. Two words (wt−1, wt+2) do not have senses associated in context, while wt−2, wt+1 have three senses (s1t−1, s2t−1, s3t−1) and one sense associated (s1t+1) in context, respectively. The output layer consists of the target word wt, which has two senses associated (s1t , s 2 t ) in context.",
                        "page": 3,
                        "bbox": {
                            "x1": 104.64,
                            "x2": 413.28,
                            "y1": 61.44,
                            "y2": 211.2
                        }
                    },
                    {
                        "filename": "../figure/image/1195-Table6-1.png",
                        "caption": "Table 6: Ten closest word and sense embeddings to the senses company2n (military unit) and school 7 n (group of fish).",
                        "page": 8,
                        "bbox": {
                            "x1": 314.4,
                            "x2": 519.36,
                            "y1": 63.839999999999996,
                            "y2": 202.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1195-Table5-1.png",
                        "caption": "Table 5: F-Measure percentage of different MCS strategies on the SemEval-2007 and SemEval2013 WSD datasets.",
                        "page": 8,
                        "bbox": {
                            "x1": 89.75999999999999,
                            "x2": 272.15999999999997,
                            "y1": 61.44,
                            "y2": 113.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-48"
        },
        {
            "slides": {
                "5": {
                    "title": "Segment Level HUMAN",
                    "text": [
                        "Baseline: #Hyp 7 #Base",
                        "Source/Reference English Gloss: Tmelt (DSC) = 8 9. 9 C; Teryst (DSC) = 7 C (measured using DSC at 5 C / min) re"
                    ],
                    "page_nums": [
                        12,
                        13,
                        14
                    ],
                    "images": []
                },
                "8": {
                    "title": "Segment level Meta Evaluation",
                    "text": [
                        "Difference in Segment level RIBES (Hypothesis - Baseline RIBES)",
                        "An interactive graph can be found here: https://plot.ly/171/~alvations/ (Hint: click on the bubbles here on the interactive graph",
                        "Generally, -BLEU or RIBES from baseline means worse translations",
                        "Note that the grey bubbles are the same as the previous graph",
                        "Its more prominent here since there are many more instances of +BLEU with 0 HUMAN score than negative HUMAN score",
                        "With regards to positive HUMAN scores, it fits the conventional wisdom that",
                        "lower BLEU/RIBES = worse translation",
                        "Higher BLEU/RIBES = better translation",
                        "When it comes to negative HUMAN scores, it is inconsistent with the conventional wisdom"
                    ],
                    "page_nums": [
                        18,
                        19,
                        20,
                        21,
                        22,
                        23,
                        24,
                        25,
                        26,
                        27,
                        28,
                        29,
                        30
                    ],
                    "images": [
                        "figure/image/1200-Figure2-1.png",
                        "figure/image/1200-Figure1-1.png"
                    ]
                }
            },
            "paper_title": "An Awkward Disparity between BLEU / RIBES Scores and Human Judgements in Machine Translation",
            "paper_id": "1200",
            "paper": {
                "title": "An Awkward Disparity between BLEU / RIBES Scores and Human Judgements in Machine Translation",
                "abstract": "Automatic evaluation of machine translation (MT) quality is essential in developing high quality MT systems. Despite previous criticisms, BLEU remains the most popular machine translation metric. Previous studies on the schism between BLEU and manual evaluation highlighted the poor correlation between MT systems with low BLEU scores and high manual evaluation scores. Alternatively, the RIBES metric-which is more sensitive to reordering-has shown to have better correlations with human judgements, but in our experiments it also fails to correlate with human judgements. In this paper we demonstrate, via our submission to the Workshop on Asian Translation 2015 (WAT 2015), a patent translation system with very high BLEU and RIBES scores and very poor human judgement scores.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Automatic Machine Translation (MT) evaluation metrics have been criticized for a variety of reasons (Babych and Hartley, 2004; Callison-Burch et al., 2006) ."
                    },
                    {
                        "id": 1,
                        "string": "However, the relatively consistent correlation of higher BLEU scores (Papineni et al., 2002) and better human judgements in major machine translation shared tasks has led to the conventional wisdom that translations with significantly higher BLEU scores generally suggests a better translation than its lower scoring counterparts (Bojar et al., 2014; Nakazawa et al., 2014; Cettolo et al., 2014) ."
                    },
                    {
                        "id": 2,
                        "string": "Callison-Burch et al."
                    },
                    {
                        "id": 3,
                        "string": "(2006) has anecdotally presented possible failures of BLEU by showing examples of translations with the same BLEU score but of different translation quality."
                    },
                    {
                        "id": 4,
                        "string": "Through meta-evaluation 1 of BLEU scores and human judgements scores of the 2005 NIST MT Evaluation exercise, they have also showed high correlations of R 2 = 0.87 (for adequacy) and R 2 = 0.74 (for fluency) when an outlier rule-based machine translation system with poor BLEU score and high human score is excluded; when included the correlations drops to 0.14 for adequacy and 0.74 for fluency."
                    },
                    {
                        "id": 5,
                        "string": "Despite showing the poor correlation between BLEU and human scores, Callison-Burch et al."
                    },
                    {
                        "id": 6,
                        "string": "(2006) had only empirically meta-evaluated a scenario where low BLEU score does not necessary result in a poor human judgement score."
                    },
                    {
                        "id": 7,
                        "string": "In this paper, we demonstrate a real-world example of machine translation that yielded high automatic evaluation scores but failed to obtain a good score on manual evaluation in an MT shared task submission."
                    },
                    {
                        "id": 8,
                        "string": "Papineni et al."
                    },
                    {
                        "id": 9,
                        "string": "(2002) originally define BLEU n-gram precision p n by summing the n-gram matches for every hypothesis sentence S in the test corpus C: BLEU p n = S∈C ngram∈S Count matched (ngram) S∈C ngram∈S Count(ngram) (1) BLEU is a precision based metric; to emulate recall, the brevity penalty (BP) is introduced to compensate for the possibility of high precision translation that are too short."
                    },
                    {
                        "id": 10,
                        "string": "The BP is calculated as: BP = 1 if c > r e 1−r/c if c ≤ r (2) where c and r respectively refers to the length of the hypothesis translations and the reference translations."
                    },
                    {
                        "id": 11,
                        "string": "The resulting system BLEU score is calculated as follows: BLEU = BP × exp( N n=1 w n log p n ) (3) where n refers to the orders of n-gram considered for p n and w n refers to the weights assigned for the n-gram precisions; in practice, the weights are uniformly distributed."
                    },
                    {
                        "id": 12,
                        "string": "A BLEU score can range from 0 to 1 and the closer to 1 indicates that a hypothesis translation is closer to the reference translation 2 ."
                    },
                    {
                        "id": 13,
                        "string": "Traditionally, BLEU scores has showed high correlation with human judgements and is still used as the de facto standard automatic evaluation metric for major machine translation shared tasks."
                    },
                    {
                        "id": 14,
                        "string": "And BLEU continues to show high correlations primarily for n-gram-based machine translation systems Nakazawa et al., 2014) ."
                    },
                    {
                        "id": 15,
                        "string": "However, the fallacy of BLEU-human correlations can be easily highlighted with the following example: Source: ᄋ ᅵᄅ ᅥᄒ ᅡ ᆫᄌ ᅡ ᆨᄋ ᅭ ᆼᄋ ᅳ ᆯᄇ ᅡ ᆯᄒ ᅱᄒ ᅡᄀ ᅵᄋ ᅱᄒ ᅢᄉ ᅥᄂ ᅳ ᆫ, ᄀ ᅡ ᆨᄀ ᅡ ᆨ 0.005% ᄋ ᅵᄉ ᅡ ᆼᄒ ᅡ ᆷᄋ ᅲᄒ ᅡᄂ ᅳ ᆫᄀ ᅥ ᆺᄋ ᅵᄇ ᅡᄅ ᅡ ᆷᄌ ᅵ ᆨᄒ ᅡᄃ ᅡ. Hypothesis: このような作用を発揮するためには、夫々 0.005%以上含有することが好ましい。 Baseline: こ の よ う な 作 用 を 発 揮 す る た め に は 、それぞれ 0 . 0 0 5 % 以 上 含 有 す ることが好ましい。 Reference: このような作用を発揮させるためには、夫々 0.005%以上含有させることが好まし い。 2 Alternatively, researchers would choose to inflate the BLEU score to a range between 0 to 100 to improve readability of the scores without the decimal prefix."
                    },
                    {
                        "id": 16,
                        "string": "Source/Reference English Gloss: \"So as to achieve the reaction, it is preferable that it contains more 0.005% of each [chemical]\" The unigram, bigram, trigrams and fourgrams (p 1 , p 2 , p 3 , p 4 ) precision of the hypothesis translation are 90.0, 78.9, 66.7 and 52.9 respectively."
                    },
                    {
                        "id": 17,
                        "string": "The p n score for the hypothesis sentence precision score for the reference is 70.75."
                    },
                    {
                        "id": 18,
                        "string": "When considering the brevity penalty of 0.905, the overall BLEU is 64.03."
                    },
                    {
                        "id": 19,
                        "string": "Comparatively, the n-gram precisions for the baseline translations are p 1 =84.2, p 2 =66.7, p 3 =47.1 and p 4 =25.0 and the overall BLEU is 43.29 with a BP of 0.854."
                    },
                    {
                        "id": 20,
                        "string": "In this respect, one would consider the baseline translation inferior to the hypothesis with a >10 BLEU difference."
                    },
                    {
                        "id": 21,
                        "string": "However, there is only a subtle difference between the hypothesis and the baseline translation (そ れ ぞ れvs 夫々)."
                    },
                    {
                        "id": 22,
                        "string": "This is an actual example from the 2 nd Workshop on Asian Translation (WAT 2015) MT shared task evaluation, and five crowd-sourced evaluators consider the baseline translation a better translation."
                    },
                    {
                        "id": 23,
                        "string": "For this particular example, the human evaluators preferred the natural translation from Korean ᄀ ᅡ ᆨ ᄀ ᅡ ᆨ gaggag to Japanese そ れ ぞ れ sorezore instead of the patent document usage of 夫々 sorezore, both それぞれ and 夫々 can be loosely translated as 'respectively' or '(for) each' in English."
                    },
                    {
                        "id": 24,
                        "string": "The big difference in BLEU for a single lexical difference in translation is due to the geometric averaged scores for the individual n-gram precisions."
                    },
                    {
                        "id": 25,
                        "string": "It assumes the independence of n-gram precisions and accentuates the precision disparity by involving the single lexical difference in all possible n-grams that capture the particular position in the sentence."
                    },
                    {
                        "id": 26,
                        "string": "This is clearly indicated by the growing precision difference in the higher order n-grams."
                    },
                    {
                        "id": 27,
                        "string": "RIBES Another failure of BLEU is the lack of explicit consideration for reordering."
                    },
                    {
                        "id": 28,
                        "string": "Callison-Burch et al."
                    },
                    {
                        "id": 29,
                        "string": "(2006) highlighted that since BLEU only takes reordering into account by rewarding the higher n-gram orders, freely permuted unigrams and bigrams matches are able to sustain a high BLEU score with little penalty caused by tri/fourgram mismatches."
                    },
                    {
                        "id": 30,
                        "string": "To overcome reordering, the RIBES score was introduced by adding a rank correlation coefficient 3 prior to unigram matches without the need for higher order n-gram matches (Isozaki et al., 2010) ."
                    },
                    {
                        "id": 31,
                        "string": "Let us consider another example: Source: Tᄋ ᅭ ᆼᄋ ᅲ ᆼ(DSC) = 89.9℃; Tᄀ ᅧ ᆯᄌ ᅥ ᆼᄒ ᅪ(DSC) = 72℃( 5℃/ ᄇ ᅮ ᆫᄋ ᅦᄉ ᅥDSC ᄅ ᅩᄎ ᅳ ᆨᄌ ᅥ ᆼ) ."
                    },
                    {
                        "id": 32,
                        "string": "Hypothesis: Tmelt(DSC)=72℃(5℃/分 でDSC測定(DSC)=89 ."
                    },
                    {
                        "id": 33,
                        "string": "9 結晶化度 (T)。 Baseline: T溶融(DSC)=89."
                    },
                    {
                        "id": 34,
                        "string": "9℃;T結晶化 (DSC)=72℃(5℃/分でDSCで測 定)。 Reference: Tmelt(DSC)=89.9℃;Tcr yst(DSC)=72℃(5℃/分でDS Cを用いて測定)。 Source/Reference English Gloss: Tmelt (DSC) = 8 9."
                    },
                    {
                        "id": 35,
                        "string": "9 • C; Tcryst (DSC) = 7 • C (measured using DSC at 5 • C / min) The example above shows the marginal effectiveness of RIBES when penalizing wrongly ordered phrases in the hypothesis."
                    },
                    {
                        "id": 36,
                        "string": "The baseline translation accurately translates the meaning of the sentence with a minor partial translation of the technical variables (i.e."
                    },
                    {
                        "id": 37,
                        "string": "Tmelt -> T溶融 and Tᄀ ᅧ ᆯ ᄌ ᅥ ᆼᄒ ᅪ -> T結晶化."
                    },
                    {
                        "id": 38,
                        "string": "However, the hypothesis translation made serious adequacy errors when inverting the values of the technical variables but the hypothesis translation was minimally penalized in RIBES and also BLEU."
                    },
                    {
                        "id": 39,
                        "string": "The RIBES score for the hypothesis and baseline translations are 94.04 and 86.33 respectively whereas their BLEU scores are 53.3 and 58.8."
                    },
                    {
                        "id": 40,
                        "string": "In the WAT 2015 evaluation, five evaluators unanimously voted in favor for the baseline translation."
                    },
                    {
                        "id": 41,
                        "string": "Although the RIBES score presents a wider difference between the hypothesis and baseline translation than BLEU, it is insufficient to account for the arrant error that the hypothesis translation made."
                    },
                    {
                        "id": 42,
                        "string": "Other Shades of BLEU / RIBES It is worth noting that there are other automatic MT evaluation metrics that depend on the same precision-based score with primary differences in how the Count match (ngram) is measured; Giménez and Màrquez (2007) described other linguistics features that one could match in place of surface n-grams, such as lexicalized syntactic parse features, semantic entities and roles annotations, etc."
                    },
                    {
                        "id": 43,
                        "string": "As such, the modified BLEU-like metrics can present other aspects of syntactic fluency and semantic adequacy complementary to the string-based BLEU."
                    },
                    {
                        "id": 44,
                        "string": "A different approach to improve upon the BLEU scores is to allow paraphrases or gappy variants and replace the proportion of Count match (ngram) / Count(ngram) by a lexical similarity measure."
                    },
                    {
                        "id": 45,
                        "string": "Banerjee and Lavie (2005) introduced the METEOR metric that allows hypotheses' n-grams to match paraphrases and stems instead of just the surface strings."
                    },
                    {
                        "id": 46,
                        "string": "Lin and Och (2004) presented the ROUGE-S metrics that uses skip-grams matches."
                    },
                    {
                        "id": 47,
                        "string": "More recently, pre-trained regression models based on semantic textual similarity and neural network-based similarity measures trained on skip-grams are applied to replace the n-gram matching (Vela and Tan, 2015; Gupta et al., 2015) ."
                    },
                    {
                        "id": 48,
                        "string": "While enriching the surface n-gram matching allows the automatic evaluation metric to handle variant translations, it does not resolves the \"prominent crudeness\" of BLEU (Callison-Burch, 2006) involving (i) the omission of contentbearing materials not being penalized, and (ii) the inability to calculate recall despite the brevity penalty."
                    },
                    {
                        "id": 49,
                        "string": "Experimental Setup We describe our system submission 4 to the WAT 2015 shared task (Nakazawa et al., 2015) for Korean to Japanese patent translation."
                    },
                    {
                        "id": 50,
                        "string": "5 ."
                    },
                    {
                        "id": 51,
                        "string": "The Japan Patent Office (JPO) Patent Corpus is the official resource provided for the shared task."
                    },
                    {
                        "id": 52,
                        "string": "The training dataset is made up of 1 million sentences (250k each from the chemistry, electricity, mechanical engineering and physics do- (Kudo et al., 2004) respectively."
                    },
                    {
                        "id": 53,
                        "string": "We used the phrase-based SMT implemented in the Moses toolkit (Koehn et al., 2003; with the following vanilla Moses experimental settings: • MGIZA++ implementation of IBM word alignment model 4 with grow-diagonalfinal-and heuristics for word alignment and phrase-extraction (Och and Ney, 2003; Koehn et al., 2003; Gao and Vogel, 2008 ) (Koehn et al., 2003; Och and Ney, 2003; Gao and Vogel, 2008) • Bi-directional lexicalized reordering model that considers monotone, swap and discontinuous orientations (Koehn, 2005; Galley and Manning, 2008) • To minimize the computing load on the translation model, we compressed the phrasetable and lexical reordering model (Junczys-Dowmunt, 2012) • Language modeling is trained using KenLM using 5-grams, with modified Kneser-Ney smoothing (Heafield, 2011; Kneser and Ney, 1995; Chen and Goodman, 1998) ."
                    },
                    {
                        "id": 54,
                        "string": "The language model is quantized to reduce filesize and improve querying speed (Heafield et al., 2013; Whittaker and Raj, 2001) ."
                    },
                    {
                        "id": 55,
                        "string": "• Minimum Error Rate Training (MERT) (Och, 2003) to tune the decoding parameters."
                    },
                    {
                        "id": 56,
                        "string": "6 dev.txt and devtest.txt Human Evaluation The human judgment scores for the WAT evaluations were acquired using the Lancers crowdsourcing platform (WAT, 2014) ."
                    },
                    {
                        "id": 57,
                        "string": "Human evaluators were randomly assigned documents from the test set."
                    },
                    {
                        "id": 58,
                        "string": "They were shown the source document, the hypothesis translation and a baseline translation generated by the baseline phrase-based MT system."
                    },
                    {
                        "id": 59,
                        "string": "Baseline System Human evaluations were conducted as pairwise comparisons between translations from our system and the WAT organizers' phrase-based statistical MT baseline system."
                    },
                    {
                        "id": 60,
                        "string": "Table 1 highlights the parameter differences between the organizers and our phrase-based SMT system."
                    },
                    {
                        "id": 61,
                        "string": "Pairwise Comparison The human judgment scores for the WAT evaluations were acquired using the Lancers crowdsourcing platform."
                    },
                    {
                        "id": 62,
                        "string": "Human evaluators were randomly assigned documents from the test set."
                    },
                    {
                        "id": 63,
                        "string": "They were shown the source document, the hypothesis translation and a baseline translation generated by the phrase-based MT system."
                    },
                    {
                        "id": 64,
                        "string": "Five evaluators were asked to judge each document."
                    },
                    {
                        "id": 65,
                        "string": "The crowdsourced evaluators were non-experts, thus their judgements were not necessary precise, especially for patent translations."
                    },
                    {
                        "id": 66,
                        "string": "The evaluators were asked to judge whether the hypothesis or the baseline translation was better, or they were tied."
                    },
                    {
                        "id": 67,
                        "string": "The translation that was judged better constituted a win and the other a loss."
                    },
                    {
                        "id": 68,
                        "string": "For each, the majority vote between the five evaluators for the hypothesis decided whether the hypothesis won, lost or tied the baseline."
                    },
                    {
                        "id": 69,
                        "string": "The final human judgment score, 77 HUMAN, is calculated as follows: HUMAN = 100 × W − L W + L + T (4) By definition, the HUMAN score ranges from −100 to +100, where higher is better."
                    },
                    {
                        "id": 70,
                        "string": "Results Moses' default parameter tuning method, MERT, is non-deterministic, and hence it is advisable to tune the phrase-based model more than once (Clark et al."
                    },
                    {
                        "id": 71,
                        "string": "2011) ."
                    },
                    {
                        "id": 72,
                        "string": "We repeated the tuning step and submitted the system translations that achieved the higher BLEU score for manual evaluation."
                    },
                    {
                        "id": 73,
                        "string": "As a sanity check we also replicated the organizers' baseline system and submitted it for manual evaluation."
                    },
                    {
                        "id": 74,
                        "string": "We expect this system to score close to zero."
                    },
                    {
                        "id": 75,
                        "string": "We submitted a total of three sets of output to the WAT 2015 shared task, two of which underwent manual evaluation."
                    },
                    {
                        "id": 76,
                        "string": "Table 2 presents the BLEU scores achieved by our phrase-based MT system in contrast to the organizers' baseline phrase-based system."
                    },
                    {
                        "id": 77,
                        "string": "The difference in BLEU between the organizers' system and ours may be due to our inclusion of the second development set in building our language model and the inclusion of more training data by allowing a maximum of 80 tokens per document as compared to 40 (see Table 1 )."
                    },
                    {
                        "id": 78,
                        "string": "Systems Another major difference is the high distortion limit we have set as compared to the organizers' monotonic system, it is possible that the high distortion limit compensates for the long distance word alignments that might have been penalized by the phrasal and reordering probabilities which results in the higher RIBES and BLEU score."
                    },
                    {
                        "id": 79,
                        "string": "7 7 In our submission Byte2String refers to the encoding problem we encountered when tokenizing the Korean text with MeCab causing our system to read Korean byte-However, the puzzling fact is that our system being 15 BLEU points better than the organizers' baseline begets a terribly low human judgement score."
                    },
                    {
                        "id": 80,
                        "string": "We discuss this next."
                    },
                    {
                        "id": 81,
                        "string": "Segment Level Meta-Evaluation We perform a segment level meta-evaluation by calculating the BLEU and RIBES score difference for each hypothesis-baseline translation."
                    },
                    {
                        "id": 82,
                        "string": "Figures 1 and 2 show the correlations of the BLEU and RIBES score difference against the positive and negative human judgements score for every sentence."
                    },
                    {
                        "id": 83,
                        "string": "Figure 1 presents the considerable incongruity between our system's high BLEU improvements (>+60 BLEU) being rated marginally better than the baseline translation, indicated by the orange and blue bubbles on the top right corner."
                    },
                    {
                        "id": 84,
                        "string": "There were even translations from our system with >+40 BLEU improvements that tied with the organizer's baseline translations, indicated by the grey bubbles at around the +40 BLEU and +5 RIBES region."
                    },
                    {
                        "id": 85,
                        "string": "Except for the a portion of segments that scored worse than the baseline system (lower right part of the graph where BLEU and RIBES falls below 0), the overall trend in Figure 1 presents the conventional wisdom that the BLEU improvements from our systems reflects positive human judgement scores."
                    },
                    {
                        "id": 86,
                        "string": "However, Figure 2 presents the awkward disparity where many segments with BLEU improvements were rated strongly as poorer translations when compared against the baseline."
                    },
                    {
                        "id": 87,
                        "string": "Also, many segments with high BLEU improvements were tied with the baseline translations, indicated by the grey bubbles across the positive BLEU scores."
                    },
                    {
                        "id": 88,
                        "string": "As shown in the examples in Section 2, a number of prominent factors contribute to these disparity in high BLEU / RIBES improvements and low HUMAN judgement scores: • Minor lexical differences causing a huge difference in n-gram precision • Crowd-sourced vs. expert preferences on terminology, especially for patents code instead of Unicode."
                    },
                    {
                        "id": 89,
                        "string": "But the decoder could still output Unicode since our Japanese data was successfully tokenized using MeCab, we submitted this output under the submission name Byte2String; the Byte2String submission is not reported in this paper."
                    },
                    {
                        "id": 90,
                        "string": "Later we rectified the encoding problem by using KoNLPy and re-ran the alignment, phrase extraction, MERT and decoding, hence the submission name, Unicode2String, i.e."
                    },
                    {
                        "id": 91,
                        "string": "the system reported in Table 2 ."
                    },
                    {
                        "id": 92,
                        "string": "Figure 1 : Correlation between BLEU, RIBES differences and Positive HUMAN Judgements (HUMAN Scores of 0, +1, +2, +3, +4 and +5 represented by the colored bubbles: grey, orange, blue, green, red and purple; larger area means more segments with the respective HUMAN Scores) Figure 2 : Correlation between BLEU, RIBES differences and Negative HUMAN Judgements (HUMAN Scores of 0, -1, -2, -3, -4 and -5 represented by the colored bubbles: grey, orange, blue, green, red and purple; larger area means more segments with the respective HUMAN Scores) • Minor MT evaluation metric differences not reflecting major translation inadequacy Each of these failures contributes to an increased amount of disparity between the automatic translation metric improvements and human judgement scores."
                    },
                    {
                        "id": 93,
                        "string": "Conclusion In this paper we have demonstrated a real-world case where high BLEU and RIBES scores do not correlate with better human judgement."
                    },
                    {
                        "id": 94,
                        "string": "Using our system's submission for the WAT 2015 patent shared task, we presented several factors that might contribute to the poor correlation, and also performed a segment level meta-evaluation to identify segments where our system's high BLEU / RIBES improvements were deemed substantially worse than the baseline translations."
                    },
                    {
                        "id": 95,
                        "string": "We hope our results and analysis will lead to improvements in automatic translation evaluation metrics."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 8
                    },
                    {
                        "section": "BLEU",
                        "n": "2",
                        "start": 9,
                        "end": 26
                    },
                    {
                        "section": "RIBES",
                        "n": "2.1",
                        "start": 27,
                        "end": 41
                    },
                    {
                        "section": "Other Shades of BLEU / RIBES",
                        "n": "2.2",
                        "start": 42,
                        "end": 48
                    },
                    {
                        "section": "Experimental Setup",
                        "n": "3",
                        "start": 49,
                        "end": 55
                    },
                    {
                        "section": "Human Evaluation",
                        "n": "3.1",
                        "start": 56,
                        "end": 58
                    },
                    {
                        "section": "Baseline System",
                        "n": "3.1.1",
                        "start": 59,
                        "end": 60
                    },
                    {
                        "section": "Pairwise Comparison",
                        "n": "3.1.2",
                        "start": 61,
                        "end": 69
                    },
                    {
                        "section": "Results",
                        "n": "4",
                        "start": 70,
                        "end": 80
                    },
                    {
                        "section": "Segment Level Meta-Evaluation",
                        "n": "5",
                        "start": 81,
                        "end": 91
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 92,
                        "end": 95
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1200-Figure2-1.png",
                        "caption": "Figure 2: Correlation between BLEU, RIBES differences and Negative HUMAN Judgements (HUMAN Scores of 0, -1, -2, -3, -4 and -5 represented by the colored bubbles: grey, orange, blue, green, red and purple; larger area means more segments with the respective HUMAN Scores)",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 353.76,
                            "y2": 593.28
                        }
                    },
                    {
                        "filename": "../figure/image/1200-Figure1-1.png",
                        "caption": "Figure 1: Correlation between BLEU, RIBES differences and Positive HUMAN Judgements (HUMAN Scores of 0, +1, +2, +3, +4 and +5 represented by the colored bubbles: grey, orange, blue, green, red and purple; larger area means more segments with the respective HUMAN Scores)",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 61.44,
                            "y2": 300.96
                        }
                    },
                    {
                        "filename": "../figure/image/1200-Table2-1.png",
                        "caption": "Table 2: BLEU and HUMAN scores for WAT 2015",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 293.28,
                            "y1": 368.15999999999997,
                            "y2": 459.84
                        }
                    },
                    {
                        "filename": "../figure/image/1200-Table1-1.png",
                        "caption": "Table 1: Differences between Organizer’s and our Phrase-based SMT system",
                        "page": 3,
                        "bbox": {
                            "x1": 175.68,
                            "x2": 422.4,
                            "y1": 64.8,
                            "y2": 195.35999999999999
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-49"
        },
        {
            "slides": {
                "1": {
                    "title": "Presentation of the challenge Tasks",
                    "text": [
                        "Task A: Hierarchical text classification",
                        "I Organizers distribute new unclassified MEDLINE articles.",
                        "I Participants have 21 hours to assign MeSH terms to the articles.",
                        "I Evaluation based on annotations of MEDLINE curators.",
                        "1st batch 2nd batch 3rd batch End of Task5a",
                        "ry ry ry rch rch rch rch rch ril ril br br br Ma Ma Ma Ma ua ua ua Ma y 0 Ma y 0 Ma Ap y 1 Ma Ap Ap ril y 2 Ma Fe Fe Fe",
                        "G. Paliouras. Results of the fifth edition of the BioASQ Challenge, 4th of August 2017",
                        "Task B: IR, QA, summarization",
                        "I Organizers distribute English biomedical questions.",
                        "I Participants have 24 hours to provide: relevant articles,",
                        "snippets, concepts, triples, exact answers, ideal answers.",
                        "I Evaluation: both automatic (GMAP, MRR, Rouge etc.) and",
                        "manual (by biomedical experts).",
                        "1st batch 2nd batch 3rd batch 4th batch 5th batch",
                        "rch rch Ma Ma rch Ma Ma Ap Ap ril ril Ap Ap ril ril rch y 3 Ma y 4 Ma",
                        "Phase A Phase B"
                    ],
                    "page_nums": [
                        2,
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Presentation of the challenge New task",
                    "text": [
                        "Task C: Funding Information Extraction",
                        "I Organizers distribute PMC full-text articles.",
                        "I Participants have 48 hours to extract: grant-IDs, funding",
                        "agencies, full grants (i.e. the combination of a grant-ID and the corresponding funding agency).",
                        "I Evaluation based on annotations of MEDLINE curators.",
                        "Dry Run Test Batch",
                        "ril Ap ril Ap",
                        "G. Paliouras. Results of the fifth edition of the BioASQ Challenge, 4th of August 2017"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Presentation of the challenge BioASQ ecosystem",
                    "text": [
                        "G. Paliouras. Results of the fifth edition of the BioASQ Challenge, 4th of August 2017"
                    ],
                    "page_nums": [
                        5,
                        6
                    ],
                    "images": []
                },
                "4": {
                    "title": "Presentation of the challenge Per task",
                    "text": [
                        "G. Paliouras. Results of the fifth edition of the BioASQ Challenge, 4th of August 2017"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "19": {
                    "title": "Challenge Participation Overall",
                    "text": [
                        "G. Paliouras. Results of the fifth edition of the BioASQ Challenge, 4th of August 2017"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                }
            },
            "paper_title": "Results of the fifth edition of the BioASQ Challenge",
            "paper_id": "1201",
            "paper": {
                "title": "Results of the fifth edition of the BioASQ Challenge",
                "abstract": "The goal of the BioASQ challenge is to engage researchers into creating cuttingedge biomedical information systems. Specifically, it aims at the promotion of systems and methodologies that are able to deal with a plethora of different tasks in the biomedical domain. This is achieved through the organization of challenges. The fifth challenge consisted of three tasks: semantic indexing, question answering and a new task on information extraction. In total, 29 teams with more than 95 systems participated in the challenge. Overall, as in previous years, the best systems were able to outperform the strong baselines. This suggests that stateof-the art systems are continuously improving, pushing the frontier of research.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The aim of this paper is twofold."
                    },
                    {
                        "id": 1,
                        "string": "First, we aim to give an overview of the data issued during the BioASQ challenge in 2017."
                    },
                    {
                        "id": 2,
                        "string": "In addition, we aim to present the systems that participated in the challenge and evaluate their performance."
                    },
                    {
                        "id": 3,
                        "string": "To achieve these goals, we begin by giving a brief overview of the tasks, which took place from February to May 2017, and the challenge's data."
                    },
                    {
                        "id": 4,
                        "string": "Thereafter, we provide an overview of the systems that participated in the challenge."
                    },
                    {
                        "id": 5,
                        "string": "Detailed descriptions of some of the systems are given in workshop proceedings."
                    },
                    {
                        "id": 6,
                        "string": "The evaluation of the systems, which was carried out using state-of-the-art measures or manual assessment, is the last focal point of this paper, with remarks regarding the results of each task."
                    },
                    {
                        "id": 7,
                        "string": "The conclusions sum up this year's challenge."
                    },
                    {
                        "id": 8,
                        "string": "Overview of the Tasks The challenge comprised three tasks: (1) a largescale semantic indexing task (Task 5a), (2) a ques-tion answering task (Task 5b) and (3) a funding information extraction task (Task 5c), described in more detail in the following sections."
                    },
                    {
                        "id": 9,
                        "string": "Large-scale semantic indexing -5a In Task 5a the goal is to classify documents from the PubMed digital library into concepts of the MeSH hierarchy."
                    },
                    {
                        "id": 10,
                        "string": "Here, new PubMed articles that are not yet annotated by MEDLINE indexers are collected and used as test sets for the evaluation of the participating systems."
                    },
                    {
                        "id": 11,
                        "string": "In contrast to previous years, articles from all journals were included in the test data sets of task 5a."
                    },
                    {
                        "id": 12,
                        "string": "As soon as the annotations are available from the MEDLINE indexers, the performance of each system is calculated using standard flat information retrieval measures, as well as, hierarchical ones."
                    },
                    {
                        "id": 13,
                        "string": "As in previous years, an on-line and large-scale scenario was provided, dividing the task into three independent batches of 5 weekly test sets each."
                    },
                    {
                        "id": 14,
                        "string": "Participants had 21 hours to provide their answers for each test set."
                    },
                    {
                        "id": 15,
                        "string": "Table  1 shows the number of articles in each test set of each batch of the challenge."
                    },
                    {
                        "id": 16,
                        "string": "12,834,585 articles with 27,773 labels were provided as training data to the participants."
                    },
                    {
                        "id": 17,
                        "string": "Biomedical semantic QA -5b The goal of Task 5b was to provide a largescale question answering challenge where the systems had to cope with all the stages of a question answering task for four types of biomedical questions: yes/no, factoid, list and summary questions (Balikas et al., 2013) ."
                    },
                    {
                        "id": 18,
                        "string": "As in previous years, the task comprised two phases: In phase A, BioASQ released 100 questions and participants were asked to respond with relevant elements from specific resources, including relevant MEDLINE articles, relevant snippets extracted from the articles, relevant concepts and relevant RDF triples."
                    },
                    {
                        "id": 19,
                        "string": "In phase B, the released questions were enhanced with relevant articles and snippets selected manu- ally and the participants had to respond with exact answers, as well as with summaries in natural language (dubbed ideal answers)."
                    },
                    {
                        "id": 20,
                        "string": "The task was split into five independent batches and the two phases for each batch were run with a time gap of 24 hours."
                    },
                    {
                        "id": 21,
                        "string": "In each phase, the participants received 100 questions and had 24 hours to submit their answers."
                    },
                    {
                        "id": 22,
                        "string": "Funding information extraction -5c Task 5c was introduced for the first time this year and the challenge at hand was to extract grant in-formation from Biomedical articles."
                    },
                    {
                        "id": 23,
                        "string": "Funding information can be very useful; in order to estimate, for example, the impact of an agency's funding in the biomedical scientific literature or to identify agencies actively supporting specific directions in research."
                    },
                    {
                        "id": 24,
                        "string": "MEDLINE citations are annotated with information about funding from specified agencies 1 ."
                    },
                    {
                        "id": 25,
                        "string": "This funding information is either provided by the author manuscript submission systems or extracted manually from the full text of articles during the indexing process."
                    },
                    {
                        "id": 26,
                        "string": "In particular, NLM human indexers identify the grant ID and the funding agencies can be extracted from the string of the grant ID 2 ."
                    },
                    {
                        "id": 27,
                        "string": "In some cases, only the funding agency is mentioned in the article, without the grant ID."
                    },
                    {
                        "id": 28,
                        "string": "In this task funding information from MED-LINE was used, as golden data, in order to train and evaluate systems."
                    },
                    {
                        "id": 29,
                        "string": "The systems were asked to extract grant information mentioned in the full text, but author-provided information is not necessarily mentioned in the article."
                    },
                    {
                        "id": 30,
                        "string": "Therefore, grant IDs not mentioned in the article were filtered out."
                    },
                    {
                        "id": 31,
                        "string": "This filtering also excluded grant IDs deviating from NLM's general policy of storing grant IDs as published, without any normalization."
                    },
                    {
                        "id": 32,
                        "string": "When an agency was mentioned in the text without a grant ID, it was kept only if it appeared in the list of agencies and abbreviations provided by NLM."
                    },
                    {
                        "id": 33,
                        "string": "Cases of misspellings or alternative naming of agencies were removed."
                    },
                    {
                        "id": 34,
                        "string": "In addition, information for funding agencies that are no longer indexed by NLM was omitted."
                    },
                    {
                        "id": 35,
                        "string": "Consequently, the golden data used in the task consisted of a subset of all funding information mentioned in the articles."
                    },
                    {
                        "id": 36,
                        "string": "During the challenge, a training and a test dataset were prepared."
                    },
                    {
                        "id": 37,
                        "string": "The test set of MED-LINE documents with their full-text available in PubMed Central was released and the participants were asked to extract grant IDs and grant agencies mentioned in each test article."
                    },
                    {
                        "id": 38,
                        "string": "The participating systems were evaluated on (a) the extraction of grant IDs, (b) the extraction of grant agencies and (c) full-grant extraction, i.e."
                    },
                    {
                        "id": 39,
                        "string": "the combination of grant ID and the corresponding funding agency."
                    },
                    {
                        "id": 40,
                        "string": "For this task, 10 teams participated and results from 31 different systems were submitted."
                    },
                    {
                        "id": 41,
                        "string": "In the following paragraphs we describe those systems for which a description was obtained, stressing their key characteristics."
                    },
                    {
                        "id": 42,
                        "string": "An overview of the systems and their approaches can be seen in Table 4 ."
                    },
                    {
                        "id": 43,
                        "string": "Table 4 : Systems and approaches for Task 5a."
                    },
                    {
                        "id": 44,
                        "string": "Systems for which no description was available at the time of writing are omitted."
                    },
                    {
                        "id": 45,
                        "string": "The \"Search system\" and its variants were developed as a UIMA-based text and data mining workflow, where different search strategies were adopted to automatically annotate documents with MeSH terms."
                    },
                    {
                        "id": 46,
                        "string": "On the other hand, the \"MZ\" systems applied Binary Relevance (BR) classification, using TF-IDF features, and Latent Dirichlet allocation (LDA) models with label frequencies per journal as prior frequencies, using regression for threshold prediction."
                    },
                    {
                        "id": 47,
                        "string": "A different approach is adopted by the \"Sequencer\" systems, developed by the team from the Technical University of Darmstadt, that considers the task as a sequenceto-sequence prediction problem and use recurrent neural networks based algorithm to cope with it."
                    },
                    {
                        "id": 48,
                        "string": "The \"DeepMeSH\" systems implement document to vector (d2v) and tf-idf feature embeddings (Peng et al., 2016) , alongside the MESHLabeler system (Liu et al., 2015) that achieved the best scores overall, integrating multiple evidence using learning to rank (LTR)."
                    },
                    {
                        "id": 49,
                        "string": "A similar approach, with regards to the d2v and tf-idf representations of the text, is followed by the \"AUTH\" team."
                    },
                    {
                        "id": 50,
                        "string": "Regarding the learning algorithms they've extended their previous system (Papagiannopoulou et al., 2016) , improving the Labeled LDA and SVM base models, as well as introducing a new ensemble methodology based on label frequencies and multi-label stacking."
                    },
                    {
                        "id": 51,
                        "string": "Last but not least, the team from the University of Vigo developed the \"Iria\" systems."
                    },
                    {
                        "id": 52,
                        "string": "Building upon their previous approach (Ribadas et al., 2014) that uses an Apache Lucene Index to provide most similar citations, they developed two systems that follow a multilabel k-NN approach."
                    },
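A multi-label k-NN step of this kind can be sketched as follows, assuming the most similar citations have already been retrieved (represented here as plain similarity scores rather than via a Lucene index); the data and vote threshold are placeholders:

```python
# Multi-label k-NN sketch: MeSH headings of the k nearest neighbours vote;
# headings passing a vote threshold are assigned. Illustrative data only.
from collections import Counter

def knn_labels(sims, neighbour_labels, k=3, min_votes=2):
    # sims: list of (similarity, index) pairs for candidate citations
    top = sorted(sims, reverse=True)[:k]
    votes = Counter(l for _, i in top for l in neighbour_labels[i])
    return [label for label, v in votes.items() if v >= min_votes]

neighbour_labels = [["Humans", "Aspirin"], ["Humans"], ["Yeast"]]
print(knn_labels([(0.9, 0), (0.7, 1), (0.2, 2)], neighbour_labels))
# -> ['Humans']
```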
                    {
                        "id": 53,
                        "string": "They also incorporated token bigrams and PMI scores to capture relevant multiword terms through a voting ensemble scheme and the ConceptMapper annotator tool, from the Apache UIMA project (Tanenblatt et al., 2010) , to match subject headings with the citation's abstract text."
                    },
                    {
                        "id": 54,
                        "string": "Baselines: During the challenge, two systems served as baselines."
                    },
                    {
                        "id": 55,
                        "string": "The first baseline is a stateof-the-art method called Medical Text Indexer (MTI) (Mork et al., 2014) with recent improvements incorporated as described in (Zavorin et al., 2016) ."
                    },
                    {
                        "id": 56,
                        "string": "MTI is developed by the National Library of Medicine (NLM) and serves as a classification system for articles of MEDLINE, assisting the indexers in the annotation process."
                    },
                    {
                        "id": 57,
                        "string": "The second baseline is an extension of the system MTI, incorporating features of the winning system of the first BioASQ challenge (Tsoumakas et al., 2013) ."
                    },
                    {
                        "id": 58,
                        "string": "Task 5b The question answering task was tackled by 51 different systems, developed by 17 teams."
                    },
                    {
                        "id": 59,
                        "string": "In the first phase, which concerns the retrieval of information required to answer a question, 9 teams with 25 systems participated."
                    },
                    {
                        "id": 60,
                        "string": "In the second phase, where teams are requested to submit exact and ideal answers, 10 teams with 29 different systems participated."
                    },
                    {
                        "id": 61,
                        "string": "Two of the teams participated in both phases."
                    },
                    {
                        "id": 62,
                        "string": "An overview of the technologies employed by each team can be seen in Table 5 ."
                    },
                    {
                        "id": 63,
                        "string": "The \"Basic QA pipeline\" approach is one of the two that participated in both Phases."
                    },
                    {
                        "id": 64,
                        "string": "It uses MetaMap for query expansion, taking into account  the text and the title of each article, and the BM25 probabilistic model (Robertson et al., 1995) in order to match questions with documents, snippets etc."
                    },
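The BM25 model referenced here scores a document for a query as a sum of IDF-weighted, length-normalized term-frequency contributions. A self-contained sketch with common default parameters (k1=1.5, b=0.75; not necessarily this team's settings):

```python
# BM25 scoring sketch: score(Q, D) = sum_t IDF(t) * tf * (k1 + 1) /
# (tf + k1 * (1 - b + b * |D| / avgdl)). Common defaults k1=1.5, b=0.75.
import math

def bm25(query, doc, docs, k1=1.5, b=0.75):
    avgdl = sum(len(d) for d in docs) / len(docs)
    score = 0.0
    for term in query:
        df = sum(term in d for d in docs)                # document frequency
        idf = math.log((len(docs) - df + 0.5) / (df + 0.5) + 1)
        tf = doc.count(term)                             # term frequency in D
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["aspirin", "trial"], ["protein", "folding"], ["aspirin", "dose", "response"]]
print(bm25(["aspirin"], docs[0], docs))
```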
                    {
                        "id": 65,
                        "string": "The same goes for phase B, except for the exact answers, where stop words were removed and the top-k most frequent words were selected."
                    },
                    {
                        "id": 66,
                        "string": "\"Olelo\" is the second approach that tackles both phases of task B."
                    },
                    {
                        "id": 67,
                        "string": "It is built on top of the SAP HANA database and uses various NLP components, such as question processing, document and passage retrieval, answer processing and multidocument summarization based on previous approaches (Schulze et al., 2016) to develop a comprehensive system that retrieves relevant information and provides both exact and ideal answers for biomedical questions."
                    },
                    {
                        "id": 68,
                        "string": "Semantic role labeling (SRL) based extensions were also investigated."
                    },
                    {
                        "id": 69,
                        "string": "One of the teams that participated only in phase A, is \"USTB\" who combined different strategies to enrich query terms."
                    },
                    {
                        "id": 70,
                        "string": "Specifically, sequential dependence models (Metzler and Croft, 2005) , pseudorelevance feedback models, fielded sequential dependence models and divergence from random-ness models are used on the training data to create better search queries."
                    },
                    {
                        "id": 71,
                        "string": "The \"fdu\" systems, as in previous years , use a language model in order to retrieve relevant documents and keyword scoring with word similarity for snippet extraction."
                    },
                    {
                        "id": 72,
                        "string": "The \"UNCC\" team on the other hand, focused mainly on the retrieval of relevant concepts and articles using the Stanford Parser (Chen and Manning, 2014) and semantic indexing."
                    },
                    {
                        "id": 73,
                        "string": "In Phase B, the Macquarie University (MQU) team focused on ideal answers (Molla, 2017), submitting different models ranging from a \"trivial baseline\" of relevant snippets to deep learning under regression settings (Malakasiotis et al., 2015) and neural networks with word embeddings."
                    },
                    {
                        "id": 74,
                        "string": "The Carnegie Mellon University team (\"OAQA\"), focused also on ideal answer generation, building upon previous versions of the \"OAQA\" system."
                    },
                    {
                        "id": 75,
                        "string": "They used extractive summarization techniques and experimented with different biomedical ontologies and algorithms including agglomerative clustering, Maximum Marginal Relevance and sentence compression."
                    },
                    {
                        "id": 76,
                        "string": "They also introduced a novel similarity metric that incorporates both semantic information (using word embeddings) and tf-idf statistics for each sentence/question."
                    },
                    {
                        "id": 77,
                        "string": "Many systems used a modular approach breaking the problem down to question analysis, candidate answer generation and answer ranking."
                    },
                    {
                        "id": 78,
                        "string": "The \"LabZhu\" systems, followed this approach, based on previous years' methodologies ."
                    },
                    {
                        "id": 79,
                        "string": "In particular, they applied rule-based question type analysis and used Standford POS tool and PubTator for candidate answer generation."
                    },
                    {
                        "id": 80,
                        "string": "They also used word frequencies for candidate answer ranking."
                    },
                    {
                        "id": 81,
                        "string": "The \"DeepQA\" systems focused on factoid and list questions, using an extractive QA model, restricting the system to output substrings of the provided text snippets."
                    },
                    {
                        "id": 82,
                        "string": "At the core of their system stands a state-of-the-art neural QA system, namely FastQA (Weissenborn et al., 2017) , extended with biomedical word embeddings."
                    },
                    {
                        "id": 83,
                        "string": "The model was pre-trained on a large-scale opendomain QA dataset, SQuAD (Rajpurkar et al., 2016) , and then the parameters were fine-tuned on the BioASQ training set."
                    },
                    {
                        "id": 84,
                        "string": "Finally, the \"sarrouti\" system, from Morocco's USMBA, uses among others a dictionary approach, term frequencies of UMLS metathesaurus' concepts and the BM25 model."
                    },
                    {
                        "id": 85,
                        "string": "Baselines: For this challenge the open source OAQA system proposed by (Yang et al., 2016) for BioASQ4 was used as a strong baseline."
                    },
                    {
                        "id": 86,
                        "string": "This system, as well as its previous version (Yang et al., 2015) for BioASQ3, had achieved top performance in producing exact answers."
                    },
                    {
                        "id": 87,
                        "string": "The system uses an UIMA based framework to combine different components."
                    },
                    {
                        "id": 88,
                        "string": "Question and snippet parsing is based on ClearNLP."
                    },
                    {
                        "id": 89,
                        "string": "MetaMap, TmTool, C-Value and LingPipe are used for concept identification and UMLS Terminology Services (UTS) for concept retrieval."
                    },
                    {
                        "id": 90,
                        "string": "In addition, identification of concept, document and snippet relevance is based on classifier components and scoring, ranking and reranking techniques are also applied in the final steps."
                    },
                    {
                        "id": 91,
                        "string": "Task 5c In this inaugural year for task c, 3 teams participated with a total of 11 systems."
                    },
                    {
                        "id": 92,
                        "string": "A brief outline of the techniques used by the participating systems is provided in table 6."
                    },
                    {
                        "id": 93,
                        "string": "Systems Approach Simple regions of interest, SVM, regular expressions, hand-made rules, char-distances, ensemble DZG regions of interest, SVM, tf-idf of bigrams, HMMs, MaxEnt, CRFs, ensemble AUTH regions of interest, regular expressions Table 6 : Overview of the methodologies used by the participating systems in Task 5c."
                    },
                    {
                        "id": 94,
                        "string": "The Fudan University team, participated with a series of similar systems (\"Simple\" systems) as well as their ensemble."
                    },
                    {
                        "id": 95,
                        "string": "The general approach included the following steps: First, the articles were parsed and some sections, such as affiliation or references, were removed."
                    },
                    {
                        "id": 96,
                        "string": "Then, using NLP techniques, alongside pre-defined rules, each paragraph was split into sentences."
                    },
                    {
                        "id": 97,
                        "string": "These sentences were classified as positive (i.e."
                    },
                    {
                        "id": 98,
                        "string": "containing grant information) or not, using a linear SVM."
                    },
                    {
                        "id": 99,
                        "string": "The positive sentences were scanned for grant IDs and agencies through the use of regular expressions and hand-made rules."
                    },
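To illustrate the regular-expression step, a sketch with invented patterns; real grant-ID formats vary widely by agency, and the Fudan team's actual rules are not given in the text:

```python
# Regex sketch for grant-ID and agency spotting in sentences classified
# as positive. The patterns below are hypothetical, not the team's rules.
import re

GRANT_ID = re.compile(r"\b(?:R01|U54)\s?[A-Z]{2}\d{6}\b"   # NIH-style IDs
                      r"|\b\d{2}-\d{4,6}\b")               # generic numeric IDs
AGENCY = re.compile(r"\b(?:NIH|NSF|Wellcome Trust)\b")

sentence = "This work was supported by NIH grant R01 GM123456."
print(GRANT_ID.findall(sentence), AGENCY.findall(sentence))
# -> ['R01 GM123456'] ['NIH']
```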
                    {
                        "id": 100,
                        "string": "Finally, multiple classifiers were trained in order to merge grant IDs and agencies into suitable pairs, based on a wide range of features, such as character-level features of the grant ID, the agency in the sentence and the distance between the grant ID and the agency in the sentence."
                    },
                    {
                        "id": 101,
                        "string": "The \"DZG\" systems followed a similar methodology, in order to classify snippets of text as possible grant information sources, implementing a linear SVM with tf-idf vectors of bigrams as input features."
                    },
                    {
                        "id": 102,
                        "string": "However, their methodology differed from that of Fudan in two ways."
                    },
                    {
                        "id": 103,
                        "string": "Firstly, they used an in-house-created dataset consisting of more than 1,600 articles with grant information in order to train their systems."
                    },
                    {
                        "id": 104,
                        "string": "Secondly, the systems deployed were based on a variety of sequential learning models namely conditional random fields (Finkel et al., 2005) , hidden markov models (Collins, 2002) and maximum entropy models (Ratnaparkhi, 1998 )."
                    },
                    {
                        "id": 105,
                        "string": "The final system deployed was a pooling ensemble of these three approaches, in order to maximize recall and exploit complementarity between predictions of different models."
                    },
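A pooling ensemble of this kind can be as simple as taking the union of the individual models' predictions, trading precision for recall; a sketch with placeholder outputs:

```python
# Pooling ensemble sketch: union the grant IDs predicted by the CRF, HMM
# and MaxEnt models to maximize recall. Predictions are placeholders.
crf_ids = {"R01 GM123456"}
hmm_ids = {"R01 GM123456", "09-12345"}
maxent_ids = {"U54 HG000001"}

pooled = crf_ids | hmm_ids | maxent_ids  # recall-oriented union
print(sorted(pooled))
```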
                    {
                        "id": 106,
                        "string": "Likewise, the AUTH team, with systems \"Asclepius\", \"Gallen\" and \"Hippocrates\" emphasized on specific sections of the text that could contain grant support information and extracted grant IDs and agencies using regular expressions."
                    },
                    {
                        "id": 107,
                        "string": "Baselines: For this challenge a baseline was provided by NLM (\"BioASQ Filtering\") which is based on a two-step procedure."
                    },
                    {
                        "id": 108,
                        "string": "First, the system classifies snippets from the full-text, as possible grant support \"zones\" based on the average probability ratio, generated separately by Naive Bayes (Zhang et al., 2009 ) and SVM (Kim et al., 2009) ."
                    },
                    {
                        "id": 109,
                        "string": "Then, the system identified grant IDs and agencies in these selected grant support \"zones\", using mainly heuristic rules, such as regular expressions, especially for detecting uncommon and irregularly formatted grant IDs."
                    },
                    {
                        "id": 110,
                        "string": "Results Task 5a Each of the three batches of task 5a was evaluated independently."
                    },
                    {
                        "id": 111,
                        "string": "The classification performance of the systems was measured using flat and hierarchical evaluation measures (Balikas et al., 2013) ."
                    },
                    {
                        "id": 112,
                        "string": "The micro F-measure (MiF) and the Lowest Common Ancestor F-measure (LCA-F) were used to choose the winners for each batch (Kosmopoulos et al., 2013) ."
                    },
                    {
                        "id": 113,
                        "string": "According to (Demsar, 2006) the appropriate way to compare multiple classification systems over multiple datasets is based on their average rank across all the datasets."
                    },
                    {
                        "id": 114,
                        "string": "On each dataset the system with the best performance gets rank 1.0, the second best rank 2.0 and so on."
                    },
                    {
                        "id": 115,
                        "string": "Table 7: Average system ranks across the batches of Task 5a."
                    },
                    {
                        "id": 116,
                        "string": "A hyphenation symbol (-) is used whenever the system participated in fewer than 4 tests in the batch."
                    },
                    {
                        "id": 117,
                        "string": "Systems with fewer than 4 participations in all batches are omitted."
                    },
                    {
                        "id": 118,
                        "string": "In case two or more systems tie, they all receive the average rank."
                    },
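This ranking scheme (best system gets rank 1.0, tied systems share the average of the ranks they span) can be made concrete with a short, dependency-free sketch; the scores are placeholders:

```python
# Average-rank sketch: best score gets rank 1.0; ties share the mean of
# the ranks they would occupy. Scores below are placeholders.
def average_ranks(scores):
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0.0] * len(scores)
    pos = 0
    while pos < len(order):
        tied = [i for i in order if scores[i] == scores[order[pos]]]
        mean_rank = pos + (len(tied) + 1) / 2  # ranks pos+1 .. pos+len(tied)
        for i in tied:
            ranks[i] = mean_rank
        pos += len(tied)
    return ranks

print(average_ranks([0.62, 0.58, 0.62, 0.40]))  # -> [1.5, 3.0, 1.5, 4.0]
```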
                    {
                        "id": 119,
                        "string": "Table 7 presents the average rank (according to MiF and LCA-F) of each system over all the test sets for the corresponding batches."
                    },
                    {
                        "id": 120,
                        "string": "Note, that the average ranks are calculated for the 4 best results of each system in the batch according to the rules of the challenge."
                    },
                    {
                        "id": 121,
                        "string": "On both test batches and for both flat and hierarchical measures, the DeepMeSH systems (Peng et al., 2016) and the AUTH systems outperform the strong baselines, indicating the importance of the methodologies proposed, including d2v and tf-idf transformations to generate feature embeddings, for semantic indexing."
                    },
                    {
                        "id": 122,
                        "string": "More detailed results can be found in the online results page 3 ."
                    },
                    {
                        "id": 123,
                        "string": "3 http://participants-area.bioasq.org/ results/5a/ Task 5b Phase A: For phase A and for each of the four types of annotations: documents, concepts, snippets and RDF triples, we rank the systems according to the Mean Average Precision (MAP) measure."
                    },
                    {
                        "id": 124,
                        "string": "The final ranking for each batch is calculated as the average of the individual rankings in the different categories."
                    },
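As a reminder of the measure used for this ranking, a sketch of Mean Average Precision; the per-query denominator here (number of relevant items) follows one common convention and may differ from the challenge's exact definition:

```python
# Mean Average Precision sketch: per-query average precision, then the
# mean over queries. Data below is a placeholder.
def average_precision(ranked, relevant):
    hits, precisions = 0, []
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant hit
    return sum(precisions) / max(len(relevant), 1)

runs = [(["d1", "d3", "d2"], {"d1", "d2"}), (["d5", "d4"], {"d4"})]
print(sum(average_precision(r, rel) for r, rel in runs) / len(runs))
```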
                    {
                        "id": 125,
                        "string": "In tables 8 and 9 some indicative results from batch 3 are presented."
                    },
                    {
                        "id": 126,
                        "string": "Full results are available in the online results page of task 5b, phase A 4 ."
                    },
                    {
                        "id": 127,
                        "string": "It is worth noting that document and snippet retrieval for the given questions were the most popular part of the task."
                    },
                    {
                        "id": 128,
                        "string": "Moreover, for different evaluation metrics, there are different systems performing best, indicating that different approaches to the task may be preferable depending on the target Table 9 : Results for document retrieval in batch 3 of phase A of Task 5b."
                    },
                    {
                        "id": 129,
                        "string": "outcome."
                    },
                    {
                        "id": 130,
                        "string": "For example, one can see that the UNCC System 1 performed the best on some unordered measures, namely mean precision and f-measure, however using MAP or GMAP to consider the order of retrieved elements, it is out preformed by other systems, such as the ustb-prir."
                    },
                    {
                        "id": 131,
                        "string": "Additionally, the combination of some of these approaches seem like a promising direction for future research."
                    },
                    {
                        "id": 132,
                        "string": "Phase B: In phase B of Task 5b the systems were asked to produce exact and ideal answers."
                    },
                    {
                        "id": 133,
                        "string": "For ideal answers, the systems will eventually be ranked according to manual evaluation by the BioASQ experts (Balikas et al., 2013) ."
                    },
                    {
                        "id": 134,
                        "string": "Regarding exact answers 5 , the systems were ranked according to accuracy for the yes/no questions, mean reciprocal rank (MRR) for the factoids and mean  F-measure for the list questions."
                    },
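For factoid questions, MRR averages the reciprocal rank of the first correct answer over all questions, contributing 0 when no correct answer is returned. A sketch with placeholder answers:

```python
# MRR sketch: reciprocal rank of the first correct answer, averaged over
# questions. Answer lists and gold sets below are placeholders.
def mrr(ranked_answers, gold):
    total = 0.0
    for answers, correct in zip(ranked_answers, gold):
        for rank, a in enumerate(answers, start=1):
            if a in correct:
                total += 1.0 / rank
                break
    return total / len(ranked_answers)

print(mrr([["BRCA1", "TP53"], ["insulin"]], [{"TP53"}, {"glucagon"}]))
# -> (1/2 + 0) / 2 = 0.25
```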
                    {
                        "id": 135,
                        "string": "Table 10 shows the results for exact answers for the fourth batch of task 5b."
                    },
                    {
                        "id": 136,
                        "string": "The symbol (-) is used when systems don't provide exact answers for a particular type of question."
                    },
                    {
                        "id": 137,
                        "string": "The full results of phase B of task 5b are available online 6 ."
                    },
                    {
                        "id": 138,
                        "string": "From the results presented in Table 10 , it can be seen that systems achieve high scores in the yes/no questions."
                    },
                    {
                        "id": 139,
                        "string": "This was especially in the first batches, where a high imbalance in yes-no classes leaded to trivial baseline solutions being very strong."
                    },
                    {
                        "id": 140,
                        "string": "This was amended in the later batches, as shown in the table for batch 4, where the best systems outper-6 http://participants-area.bioasq.org/ results/5b/phaseB/ form baseline approaches."
                    },
                    {
                        "id": 141,
                        "string": "On the other hand, the performance in factoid and list questions indicates that there is more room for improvement in these types of answer."
                    },
                    {
                        "id": 142,
                        "string": "Task 5c Regarding the evaluation of Task 5c and taking into account the fact that only a subset of grant IDs and agencies mentioned in the full text were included in the ground truth data sets, both for training and testing, micro-recall was the evaluation measure used for all three sub-tasks."
                    },
                    {
                        "id": 143,
                        "string": "This means that each system was assigned a micro-recall score for grant IDs, agencies and full-grants independently and the top-two contenders for each sub-  task were selected as winners."
                    },
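Micro-recall pools true positives and false negatives across all articles before dividing, so articles with many gold grants weigh more. A sketch with placeholder predictions:

```python
# Micro-recall sketch: true positives and false negatives are pooled over
# all articles first. Predictions and gold sets are placeholders.
def micro_recall(predicted, gold):
    tp = sum(len(p & g) for p, g in zip(predicted, gold))
    fn = sum(len(g - p) for p, g in zip(predicted, gold))
    return tp / (tp + fn) if tp + fn else 0.0

pred = [{"R01 GM123456"}, {"09-12345"}]
gold = [{"R01 GM123456", "U54 HG000001"}, {"09-12345"}]
print(micro_recall(pred, gold))  # -> 2 / 3
```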
                    {
                        "id": 144,
                        "string": "The results of the participating systems can be seen in Table 11 ."
                    },
                    {
                        "id": 145,
                        "string": "Firstly, it can be seen that the grant ID extraction task is harder compared to the agency extraction."
                    },
                    {
                        "id": 146,
                        "string": "Moreover, the overall performance of the participants was very good, and certainly better than the baseline system."
                    },
                    {
                        "id": 147,
                        "string": "This indicates that the currently deployed techniques can be improved and as discussed in section 3.3, this can be done through the use of multiple methodologies."
                    },
                    {
                        "id": 148,
                        "string": "Finally, these results, despite being obtained on a filtered subset of the data available, could serve as a springboard to enhance and redeploy the currently implemented systems."
                    },
                    {
                        "id": 149,
                        "string": "Conclusion In this paper, an overview of the fifth BioASQ challenge is presented."
                    },
                    {
                        "id": 150,
                        "string": "The challenge consisted of three tasks: semantic indexing, question answering and funding information extraction."
                    },
                    {
                        "id": 151,
                        "string": "Overall, as in previous years, the best systems were able to outperform the strong baselines provided by the organizers."
                    },
                    {
                        "id": 152,
                        "string": "This suggests that advances over the state of the art were achieved through the BioASQ challenge but also that the benchmark in itself is challenging."
                    },
                    {
                        "id": 153,
                        "string": "Consequently, we believe that the challenge is successfully towards pushing the research frontier in on biomedical information systems."
                    },
                    {
                        "id": 154,
                        "string": "In future editions of the challenge, we aim to provide even more benchmark data derived from a community-driven acquisition process and design a multi-batch scenario for Task 5c similar to the other tasks."
                    },
                    {
                        "id": 155,
                        "string": "Finally, as a concluding remark, it is worth mentioning that the increase in challenge participation this year 7 highlights the healthy growth of the BioASQ community, gathering attention from different teams around the globe and constituting a reference point for biomedical semantic indexing and question answering."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 6
                    },
                    {
                        "section": "Overview of the Tasks",
                        "n": "2",
                        "start": 7,
                        "end": 8
                    },
                    {
                        "section": "Large-scale semantic indexing -5a",
                        "n": "2.1",
                        "start": 9,
                        "end": 16
                    },
                    {
                        "section": "Biomedical semantic QA -5b",
                        "n": "2.2",
                        "start": 17,
                        "end": 21
                    },
                    {
                        "section": "Funding information extraction -5c",
                        "n": "2.3",
                        "start": 22,
                        "end": 57
                    },
                    {
                        "section": "Task 5b",
                        "n": "3.2",
                        "start": 58,
                        "end": 89
                    },
                    {
                        "section": "Task 5c",
                        "n": "3.3",
                        "start": 90,
                        "end": 109
                    },
                    {
                        "section": "Task 5a",
                        "n": "4.1",
                        "start": 110,
                        "end": 122
                    },
                    {
                        "section": "Task 5b",
                        "n": "4.2",
                        "start": 123,
                        "end": 141
                    },
                    {
                        "section": "Task 5c",
                        "n": "4.3",
                        "start": 142,
                        "end": 148
                    },
                    {
                        "section": "Conclusion",
                        "n": "5",
                        "start": 149,
                        "end": 155
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1201-Table7-1.png",
                        "caption": "Table 7: Average system ranks across the batches of the Task 5a. A hyphenation symbol (-) is used whenever the system participated in fewer than 4 tests in the batch. Systems with fewer than 4 participations in all batches are omitted.",
                        "page": 5,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 525.12,
                            "y1": 64.8,
                            "y2": 431.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table1-1.png",
                        "caption": "Table 1: Statistics on test datasets for Task 5a.",
                        "page": 1,
                        "bbox": {
                            "x1": 77.75999999999999,
                            "x2": 284.15999999999997,
                            "y1": 62.879999999999995,
                            "y2": 352.32
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table2-1.png",
                        "caption": "Table 2: Statistics on the training and test datasets of Task 5b. All the numbers for the documents and snippets refer to averages.",
                        "page": 1,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.08,
                            "y1": 536.64,
                            "y2": 647.04
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table8-1.png",
                        "caption": "Table 8: Results for snippet retrieval in batch 3 of phase A of Task 5b.",
                        "page": 6,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 523.1999999999999,
                            "y1": 65.28,
                            "y2": 281.28
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table9-1.png",
                        "caption": "Table 9: Results for document retrieval in batch 3 of phase A of Task 5b.",
                        "page": 6,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 523.1999999999999,
                            "y1": 313.44,
                            "y2": 600.0
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table4-1.png",
                        "caption": "Table 4: Systems and approaches for Task 5a. Systems for which no description was available at the time of writing are omitted.",
                        "page": 2,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 281.28,
                            "y1": 293.76,
                            "y2": 474.24
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table10-1.png",
                        "caption": "Table 10: Results for batch 4 for exact answers in phase B of Task 5b.",
                        "page": 7,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 521.28,
                            "y1": 64.8,
                            "y2": 512.16
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table5-1.png",
                        "caption": "Table 5: Systems and approaches for Task 5b. Systems for which no information was available at the time of writing are omitted.",
                        "page": 3,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 287.03999999999996,
                            "y1": 62.879999999999995,
                            "y2": 380.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table11-1.png",
                        "caption": "Table 11: Micro Recall (MR) results on the test set of Task 5c.",
                        "page": 8,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 523.1999999999999,
                            "y1": 64.8,
                            "y2": 240.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1201-Table6-1.png",
                        "caption": "Table 6: Overview of the methodologies used by the participating systems in Task 5c.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 293.28,
                            "y1": 362.88,
                            "y2": 474.24
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-50"
        },
        {
            "slides": {
                "0": {
                    "title": "Simultaneous Interpretation SI",
                    "text": [
                        "Translation of the spoken word in real time"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Computer Assisted Interpretation CAI",
                    "text": [
                        "How do we ensure",
                        "maximum utility with minimum distraction?"
                    ],
                    "page_nums": [
                        2,
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Estimating Interpreter Performance",
                    "text": [
                        "Dont offer help when they dont need it!",
                        "Estimate how well the interpreter is doing"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Quality Estimation",
                    "text": [
                        "We already do this in Machine Translation!",
                        "Can we apply it to Simultaneous Interpretation?",
                        "QuEst++ is an existing framework for QE (Specia et al., 2015)"
                    ],
                    "page_nums": [
                        5,
                        6,
                        7,
                        8,
                        9,
                        10,
                        11,
                        12
                    ],
                    "images": []
                },
                "4": {
                    "title": "Method",
                    "text": [
                        "QuEst++ baseline features Apply",
                        "Features tailored to interpretation (METEOR)",
                        "Test using 10-fold cross-validation"
                    ],
                    "page_nums": [
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21
                    ],
                    "images": []
                },
                "6": {
                    "title": "Features of interpretation",
                    "text": [
                        "SOURCE: Will the Parliament grant President Dilma Rousseff, on the very first occasion after her groundbaking groundbreaking election and for no sound formal reason, the kind of debate that we usually reserve for people like Mugabe? So, I ask you to remove Brazil from the agenda of the urgencies. (48 words)",
                        "INTERP: Ehm il Parlamento... dopo le elezioni... darem- dar spazio a un dibattito sul ehm sul caso per esempio del presidente Mugabe invece di mettere il Brasile allordine del giorno? (27 words)",
                        "GLOSS: Ehm the Parliament... after the elections... well gi- will give way to a",
                        "debate on the ehm on the case for example of President Mugabe instead of putting Brazil on the agenda?"
                    ],
                    "page_nums": [
                        23,
                        24,
                        25,
                        26,
                        27
                    ],
                    "images": []
                },
                "7": {
                    "title": "SI Model Features",
                    "text": [
                        "Non-specific words - is the interpreter avoiding specific terminology?",
                        "Cognates/loan words - if a word is almost identical in both languages an interpreter shouldnt struggle with it (unless its a false friend!)"
                    ],
                    "page_nums": [
                        28
                    ],
                    "images": []
                },
                "10": {
                    "title": "Future Work",
                    "text": [
                        "Evaluation Metric - finding a metric better aligned with the uniqueness of strategies in SI",
                        "Live system integration - streamlining the system to provide instantaneous feedback",
                        "ASR - evaluate the model on ASR output",
                        "Speech model - enhance the model using prosodic speech features"
                    ],
                    "page_nums": [
                        32
                    ],
                    "images": []
                }
            },
            "paper_title": "Automatic Estimation of Simultaneous Interpreter Performance",
            "paper_id": "1202",
            "paper": {
                "title": "Automatic Estimation of Simultaneous Interpreter Performance",
                "abstract": "Simultaneous interpretation, translation of the spoken word in real-time, is both highly challenging and physically demanding. Methods to predict interpreter confidence and the adequacy of the interpreted message have a number of potential applications, such as in computerassisted interpretation interfaces or pedagogical tools. We propose the task of predicting simultaneous interpreter performance by building on existing methodology for quality estimation (QE) of machine translation output. In experiments over five settings in three language pairs, we extend a QE pipeline to estimate interpreter performance (as approximated by the METEOR evaluation metric) and propose novel features reflecting interpretation strategy and evaluation measures that further improve prediction accuracy. 1",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Simultaneous Interpretation (SI) is an inherently difficult task that carries significant cognitive and attentional burdens."
                    },
                    {
                        "id": 1,
                        "string": "The role of the simultaneous interpreter is to accurately render the source speech in a given target language in a timely and precise manner."
                    },
                    {
                        "id": 2,
                        "string": "Interpreters employ a range of strategies, including generalization and summarization, to convey the source message as efficiently and reliably as possible (He et al., 2016) ."
                    },
                    {
                        "id": 3,
                        "string": "Unfortunately, the interpreter is pitched against the limits of human memory and stamina, and after only minutes of interpreting, the number of errors made by an interpreter begins to increase exponentially (Moser-Mercer et al., 1998) ."
                    },
                    {
                        "id": 4,
                        "string": "We examine the task of estimating simultaneous interpreter performance: automatically predicting when interpreters are interpreting smoothly and when they are struggling."
                    },
                    {
                        "id": 5,
                        "string": "This has several immediate potential applications, one of which being in Computer-Assisted Interpretation (CAI)."
                    },
                    {
                        "id": 6,
                        "string": "CAI is quickly gaining traction in the interpreting community, with software products such as Interpret-Bank (Fantinouli, 2016) deployed in interpreting booths to provide live and interactive terminology support."
                    },
                    {
                        "id": 7,
                        "string": "Figure 1 (b) shows how this might work; both the interpreter and the CAI system receive the source message and the system displays assistive information in the form of terminology and informational support."
                    },
                    {
                        "id": 8,
                        "string": "While this might improve the quality of interpreter output, there is a danger that these systems will provide too much information and increase the cognitive load imposed upon the interpreter (Fantinouli, 2018) ."
                    },
                    {
                        "id": 9,
                        "string": "Intuitively, the ideal level of support depends on current interpreter performance."
                    },
                    {
                        "id": 10,
                        "string": "The system can minimize distraction by providing assistance only when an interpreter is struggling."
                    },
                    {
                        "id": 11,
                        "string": "This level of support could be moderated appropriately if interpreter performance can be accurately predicted."
                    },
                    {
                        "id": 12,
                        "string": "Figure 1 (c) demonstrates how our proposed quality estimation (QE) system receives and evaluates interpreter output, allowing the CAI system to appropriately lower the amount of information passed to the interpreter, maximizing the quality of interpreter output."
                    },
                    {
                        "id": 13,
                        "string": "As a concrete method for estimating interpreter performance, we turn to existing work on QE for machine translation (MT) systems (Specia et al., 2010 (Specia et al., , 2015 , which takes in the source sentence and MT-generated outputs and estimates a measure of quality."
                    },
                    {
                        "id": 14,
                        "string": "In doing so, we arrive at two natural research questions: 1."
                    },
                    {
                        "id": 15,
                        "string": "Do existing methods for performing QE on MT output also allow for accurate estimation of interpreter performance, despite the inherent differences between MT and SI?"
                    },
                    {
                        "id": 16,
                        "string": "2."
                    },
                    {
                        "id": 17,
                        "string": "What unique aspects of the problem of interpreter performance estimation, such as the availability of prosody and other linguistic cues, can be exploited to further improve the accuracy of our predictions?"
                    },
                    {
                        "id": 18,
                        "string": "The remainder of the paper describes methods and experiments on English-Japanese (EN-JA), English-French (EN-FR), and English-Italian (EN-IT) interpretation data attempting to answer these questions."
                    },
                    {
                        "id": 19,
                        "string": "2 Quality Estimation Blatz et al."
                    },
                    {
                        "id": 20,
                        "string": "(2004) first proposed the problem of measuring the quality of MT output as a prediction task, given that existing metrics such as BLEU (Papineni et al., 2002) rely on the availability of reference translations to evaluate MT output quality, which aren't always available."
                    },
                    {
                        "id": 21,
                        "string": "As such, QE has since received widespread attention in the MT community and since 2012 has been included as a task in the Workshop on Statistical Machine Translation (Callison-Burch et al., 2012), using approaches ranging from linear classifiers (Ueffing and Ney, 2007; Luong et al., 2014) to neural models (Martins et al., 2016 (Martins et al., , 2017 ."
                    },
                    {
                        "id": 22,
                        "string": "QuEst++ (Specia et al., 2015) is a well-known QE pipeline that supports word-level, sentencelevel, and document-level QE."
                    },
                    {
                        "id": 23,
                        "string": "Its effectiveness and flexibility make it an attractive candidate for our proposed task."
                    },
                    {
                        "id": 24,
                        "string": "There are two main modules to QuEst++: a feature extractor and a learning module."
                    },
                    {
                        "id": 25,
                        "string": "The feature extractor produces an intermediate representation of the source and translation in a continuous feature vector."
                    },
                    {
                        "id": 26,
                        "string": "The goal of the learning module, given a source and translation pair, is to predict the quality of the translation, either as a label or as a continuous value."
                    },
                    {
                        "id": 27,
                        "string": "This module is trained on example translations that have an assigned score (such as BLEU) and then predicts the score of a new example."
                    },
                    {
                        "id": 28,
                        "string": "QuEst++ offers a range of learning algorithms but defaults to Support Vector Regression for sentence-level QE."
                    },
                    {
                        "id": 29,
                        "string": "Quality Estimation for Interpretation The default, out-of-the-box, sentence-level feature set for QuEst++ includes seventeen features such as number of tokens in source/target utterances, average token length, n-gram frequency, etc."
                    },
                    {
                        "id": 30,
                        "string": "(Specia et al., 2015) ."
                    },
                    {
                        "id": 31,
                        "string": "While this feature set is effective for evaluation of MT output, SI output is inherently different-full of pauses, hesitations, paraphrases, re-orderings and repetitions."
                    },
                    {
                        "id": 32,
                        "string": "In the following sections, we describe our methods to adapt QE to handle these phenomena."
                    },
                    {
                        "id": 33,
                        "string": "Interpretation-specific Features To adapt QE to interpreter output, we augment the baseline feature set with four additional types of features that may indicate a struggling interpreter."
                    },
                    {
                        "id": 34,
                        "string": "Sridhar et al."
                    },
                    {
                        "id": 35,
                        "string": "(2013) propose that interpreters regularly use pauses to gain more time to think and as a cognitive strategy to manage memory constraints."
                    },
                    {
                        "id": 36,
                        "string": "An increased number of hesitations or incomplete words in interpreter output might indicate that an interpreter is struggling to produce accurate output."
                    },
                    {
                        "id": 37,
                        "string": "In our particular case, both corpora we use in experiments are annotated for pauses and partial renditions of words."
                    },
                    {
                        "id": 38,
                        "string": "Ratio of pauses/hesitations/incomplete words: Ratio of non-specific words: Interpreters often compress output by replacing or omitting common nouns to avoid specific terminology (Sridhar et al., 2013) , either to prevent redundancy or to ease cognitive load."
                    },
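Both ratio features reduce to counting marked tokens and normalizing by utterance length. A sketch in which the pause markers and the seed list of non-specific words are invented for illustration (the corpora's actual annotation conventions may differ):

```python
# Sketch of two interpreter features: pause/hesitation ratio and
# non-specific-word ratio. Markers and seed list are illustrative.
PAUSE_MARKS = {"...", "ehm", "uh"}            # hypothetical annotations
NON_SPECIFIC = {"he", "she", "it", "them", "this", "that"}

def ratio(tokens, vocab):
    # fraction of tokens that fall in the given marker/seed vocabulary
    return sum(t.lower() in vocab for t in tokens) / len(tokens)

utt = "Ehm il Parlamento ... dar spazio a un dibattito".split()
print(ratio(utt, PAUSE_MARKS), ratio(utt, NON_SPECIFIC))
```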
                    {
                        "id": 39,
                        "string": "For example: \"The chairman explained the proposal to the delegates\" might be rendered in a target language as \"he explained it to them.\""
                    },
                    {
                        "id": 40,
                        "string": "To capture this, we include a feature that checks for words from a pre-determined seed list of pronouns and demonstrative adjectives."
                    },
                    {
                        "id": 41,
                        "string": "Ratio of 'quasi-'cognates: In related language pairs, often words of a similar root are orthographically similar, for example \"artificial\"(EN), \"artificiel\"(FR) and \"artificiale\"(IT)."
                    },
                    {
                        "id": 42,
                        "string": "Likewise in Japanese, words adapted from English are transcribed in katakana script to indicate their foreign origin."
                    },
                    {
                        "id": 43,
                        "string": "Transliterated words in interpreted speech could represent facilitated translation by language proximity, or an attempt to produce an approximation of a word that the interpreter did not know."
                    },
                    {
                        "id": 44,
                        "string": "We include a feature that counts the number of words that share at least 50% identical orthography (for EN, FR, IT) or are rendered in the interpreter transcript in katakana (JA)."
                    },
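The orthographic half of this feature can be approximated with positional character overlap, and the Japanese half with a katakana range check; the overlap measure below is one plausible reading of the 50% criterion, not necessarily the authors' exact computation:

```python
# Quasi-cognate sketch: flag target words sharing >= 50% of their
# characters with some source word (EN-FR/IT), or written in katakana (JA).
def char_overlap(a, b):
    # position-wise character matches, normalized by the longer word
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def is_katakana(word):
    return all("\u30a0" <= ch <= "\u30ff" for ch in word)

print(char_overlap("artificial", "artificiel"))  # 0.9 -> likely cognate
print(is_katakana("コンピュータ"))                  # True -> loan word
```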
                    {
                        "id": 45,
                        "string": "Ratio of number of words: We further include three features from the bank of features provided with QuEst++ that compare source and target length and the amount of transcribed punctuation."
                    },
                    {
                        "id": 46,
                        "string": "Information about utterance length makes sense in an interpreting scenario, given the aforementioned strategies of omission and compression of information."
                    },
                    {
                        "id": 47,
                        "string": "A list, for example, may be compressed to avoid redundancy or may be an erroneous omission (Barik, 1994) ."
                    },
                    {
                        "id": 48,
                        "string": "Evaluation Metric Novice interpreters are assessed for accuracy on the number of omissions, additions and the inaccurate renditions of lexical items and longer phrases (Altman, 1994) , but recovery of content and correct terminology are highly valued."
                    },
                    {
                        "id": 49,
                        "string": "While no large corpus exists that has been manually annotated with these measures, they align with the phenomena that MT evaluation tries to solve."
                    },
                    {
                        "id": 50,
                        "string": "One important design decision is which evaluation metric to target in our QE system."
                    },
                    {
                        "id": 51,
                        "string": "There is an abundance of evaluation metrics available for MT including WER (Su et al.)"
                    },
                    {
                        "id": 52,
                        "string": ", BLEU (Papineni et al., 2002) , NIST (Doddington, 2002) and ME-TEOR (Denkowski and Lavie, 2014) , all of which compare the similarity between reference translations and translations."
                    },
                    {
                        "id": 53,
                        "string": "Interpreter output is fundamentally different from any reference that we may use in evaluation because interpreters employ a range of economizing strategies such as segmentation, omission, generalization, and reformulation (Riccardi, 2005) ."
                    },
                    {
                        "id": 54,
                        "string": "As such, measuring interpretation quality by some metrics employed in MT such as BLEU can result in artificially low scores (Shimizu et al., 2013) ."
                    },
                    {
                        "id": 55,
                        "string": "To mitigate this, we use METEOR, a more sophisticated MT evaluation metric that considers paraphrases and contentfunction word distinctions, and thus should be better equipped to deal with the disparity between MT and SI."
                    },
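For reference, sentence-level METEOR can be computed with nltk, shown here as one convenient tool choice (the paper does not specify its METEOR implementation); recent nltk versions expect pre-tokenized input and WordNet data:

```python
# Sentence-level METEOR sketch via nltk (one possible tool choice).
# Requires: pip install nltk; then nltk.download("wordnet")
from nltk.translate.meteor_score import meteor_score

reference = "the parliament will debate the case".split()
hypothesis = "parliament will discuss the case".split()
print(meteor_score([reference], hypothesis))
```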
                    {
                        "id": 56,
                        "string": "Better handling of these divergences for evaluation of interpreter output, or fine-grained evaluation based on measures from interpretation studies is an interesting direction for future work."
                    },
                    {
                        "id": 57,
                        "string": "Data: Interpretation Corpora For our EN-JA language data we train the pipeline on combined data from seven TED Talks taken from the NAIST TED SI corpus (Shimizu et al., 2013) ."
                    },
                    {
                        "id": 58,
                        "string": "This corpus provides human transcribed SI output from three interpreters of low, intermediate and high levels of proficiency denoted B-rank, Arank and S-rank respectively, with 559 utterances from each interpreter."
                    },
                    {
                        "id": 59,
                        "string": "The corpus also provides written translations of the source speech, which we use as reference translations when evaluating interpreter output using METEOR."
                    },
                    {
                        "id": 60,
                        "string": "Our EN-FR and EN-IT data are drawn from the EPTIC corpus (Bernardini et al., 2016) , which provides source and interpreter transcripts for speeches from the European Parliament (manually transcribed to include vocal expressions), as well as translations of transcripts of the source speech."
                    },
                    {
                        "id": 61,
                        "string": "The EN-FR and EN-IT datasets contain 739 and 731 utterances respectively."
                    },
                    {
                        "id": 62,
                        "string": "While the EPTIC translations are accurate, they were created from an official transcript that differs significantly in register from the source speech."
                    },
                    {
                        "id": 63,
                        "string": "As a proxy for our experiments, we generated translations of the original speech using Google Translate, which resulted in much more qualitatively reliable ME-TEOR scores than the EPTIC translations."
                    },
                    {
                        "id": 64,
                        "string": "Interpreter Quality Experiments To evaluate the quality of our QE system, we use the Pearson's r correlation between the predicted and true METEOR for each language pair (Graham, 2015) ."
                    },
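                    {
                        "id": 64.1,
                        "string": "Computing this correlation is straightforward; a minimal sketch with toy scores:\n\nfrom scipy.stats import pearsonr\n\n# true METEOR vs. QE-predicted scores for one language pair (toy values)\ntrue_scores = [0.26, 0.08, 0.31, 0.19, 0.44]\npred_scores = [0.22, 0.13, 0.28, 0.25, 0.40]\n\nr, p_value = pearsonr(true_scores, pred_scores)\nprint(f'Pearson r = {r:.3f} (p = {p_value:.3f})')"
                    },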
                    {
                        "id": 65,
                        "string": "As a baseline, we train QuEst++ on the out-of-the-box feature set (Section 2)."
                    },
                    {
                        "id": 66,
                        "string": "We use k-fold cross-validation individually on EN-JA, EN-FR, and EN-IT source-interpreter language pairs with a held-out development set and test set for each fold."
                    },
                    {
                        "id": 67,
                        "string": "For each experiment setting, we run the experiment for each fold (ten iterations for each set) and evaluate average Pearson's r correlation on the development set."
                    },
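                    {
                        "id": 67.1,
                        "string": "A minimal sketch of this evaluation loop (the SVR regressor stands in for the QuEst++ learner, the data are random stand-ins, and the per-fold dev/test split is simplified to a single held-out fold):\n\nimport numpy as np\nfrom scipy.stats import pearsonr\nfrom sklearn.model_selection import KFold\nfrom sklearn.svm import SVR\n\nX = np.random.rand(559, 17)  # one feature vector per utterance (toy shapes)\ny = np.random.rand(559)      # true METEOR score per utterance\n\nscores = []\nfor train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):\n    model = SVR().fit(X[train_idx], y[train_idx])\n    r, _ = pearsonr(y[test_idx], model.predict(X[test_idx]))\n    scores.append(r)\nprint('mean Pearson r over folds:', np.mean(scores))"
                    },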
                    {
                        "id": 68,
                        "string": "In our baseline setting, we extract features based on the default QuEst++ sentence-level feature set (baseline)."
                    },
                    {
                        "id": 69,
                        "string": "We ablate baseline features through cross-validation and remove features relating to bigram and trigram frequency and punctuation frequency in the source utterance, creating baseline trimmed proposed EN-JA(B-rank) 0.514 0.542 0.593 EN-JA(A-rank) 0.487 0.554 0.591 EN-JA(S-rank) 0.325 0.334 0.411 EN-FR 0.631 0.610 0.691 EN-IT 0.569 0.543 0.576 Table 1 : Pearson's r scores for predicted ME-TEOR for baseline, trimmed and proposed feature sets on the test set (highest accuracy for each dataset indicated in bold)."
                    },
                    {
                        "id": 70,
                        "string": "a more effective trimmed model (trimmed)."
                    },
                    {
                        "id": 71,
                        "string": "Subsequently, we add our interpreter features (Section 3.1) and arrive at our proposed model."
                    },
                    {
                        "id": 72,
                        "string": "We then repeat each experiment using the test set data from each fold and compare the resulting average Pearson's r scores."
                    },
                    {
                        "id": 73,
                        "string": "Table 1 shows our primary results comparing the baseline, trimmed, and proposed feature sets."
                    },
                    {
                        "id": 74,
                        "string": "Our first observation is that, even with the baseline feature set, QE obtains respectable correlation scores, proving feasible as a method to predict interpreter performance."
                    },
                    {
                        "id": 75,
                        "string": "Our trimmed feature set performs moderately better than the baseline for Japanese, and slightly under-performs for French and Italian."
                    },
                    {
                        "id": 76,
                        "string": "However, our proposed, interpreter-focused model out-performs in all language settings with notable gains in particular for EN-JA(A-Rank) (+0.104), achieving its highest accuracy on the EN-FR dataset."
                    },
                    {
                        "id": 77,
                        "string": "Over all datasets, the gain of the proposed model is statistically significant at p < 0.05 by the pairwise bootstrap (Koehn, 2004) ."
                    },
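                    {
                        "id": 77.1,
                        "string": "A minimal sketch of such a significance test, adapting the paired bootstrap of Koehn (2004) to correlation (inputs are NumPy arrays; this is an illustration, not the authors' exact procedure):\n\nimport numpy as np\nfrom scipy.stats import pearsonr\n\ndef paired_bootstrap(y_true, pred_a, pred_b, n_samples=1000, seed=0):\n    # Resample utterances with replacement and count how often\n    # system B's Pearson r beats system A's on the resampled set.\n    rng = np.random.default_rng(seed)\n    n, wins = len(y_true), 0\n    for _ in range(n_samples):\n        idx = rng.integers(0, n, size=n)\n        wins += pearsonr(y_true[idx], pred_b[idx])[0] > pearsonr(y_true[idx], pred_a[idx])[0]\n    return wins / n_samples  # a win rate above 0.95 roughly corresponds to p < 0.05"
                    },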
                    {
                        "id": 78,
                        "string": "Results Analysis We further present two analyses: ablation on the full feature set and a qualitative comparison."
                    },
                    {
                        "id": 79,
                        "string": "Table 2 iteratively reduces the feature set by first removing the 'quasi-'cognate feature (w/o cog); specific words (w/o spec); pauses, hesitations, and incomplete words (w/o fill); and finally sentence length and punctuation differences (w/o length)."
                    },
                    {
                        "id": 80,
                        "string": "Relative difference in utterance length appears to aid Japanese and French above other languages."
                    },
                    {
                        "id": 81,
                        "string": "Cognates are particularly useful in EN-FR and EN-IT; this may be indicative of the corpus domain (European Parliament proceedings being rich in Latinate legalese) or of cognate frequency in those languages."
                    },
                    {
                        "id": 82,
                        "string": "In Japanese, cognates were  more indicative of quality for the more skilled interpreter (S-rank)."
                    },
                    {
                        "id": 83,
                        "string": "While pauses and hesitations seem to aid the model in EN-FR and EN-IT, they appear to hinder EN-JA."
                    },
                    {
                        "id": 84,
                        "string": "Below is a qualitative EN-IT example with a METEOR score of 0.079 (being substantially lower than the average METEOR score across all datasets; 0.262)."
                    },
                    {
                        "id": 85,
                        "string": "The baseline model prediction of its score was 0.127, and our proposed model, 0.066: SOURCE: \"Will the Parliament grant President Dilma Rousseff, on the very first occasion after her groundbaking groundbreaking election and for no sound formal reason, the kind of debate that we usually reserve for people like Mugabe?"
                    },
                    {
                        "id": 86,
                        "string": "So, I ask you to remove Brazil from the agenda of the urgencies.\""
                    },
                    {
                        "id": 87,
                        "string": "INTERP: \"Ehm il Parlamento... dopo le elezioni... daremdar spazio a un dibattito sul ehm sul caso per esempio del presidente Mugabe invece di mettere il Brasile all'ordine del giorno?\""
                    },
                    {
                        "id": 88,
                        "string": "GLOSS: \"Ehm the Parliament... after the elections... we'll gi-will give way to a debate on the ehm on the case for example of President Mugabe instead of putting Brazil on the agenda?\""
                    },
                    {
                        "id": 89,
                        "string": "Our model can better capture the issues in this example because it has many interpretation specific qualities (pauses, compression, and omission)."
                    },
                    {
                        "id": 90,
                        "string": "This is an example in which a CAI system might offer assistance to an interpreter struggling to produce an accurate rendition."
                    },
                    {
                        "id": 91,
                        "string": "Conclusion We introduce a novel and effective application of QE to evaluate interpreter output, which could be immediately applied to allow CAI systems to selectively offer assistance to struggling interpreters."
                    },
                    {
                        "id": 92,
                        "string": "This work uses METEOR to evaluate interpreter output, but creation of fine-grained mea-sures to evaluate various aspects of interpreter performance is an interesting avenue for future work."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 28
                    },
                    {
                        "section": "Quality Estimation for Interpretation",
                        "n": "3",
                        "start": 29,
                        "end": 32
                    },
                    {
                        "section": "Interpretation-specific Features",
                        "n": "3.1",
                        "start": 33,
                        "end": 47
                    },
                    {
                        "section": "Evaluation Metric",
                        "n": "3.2",
                        "start": 48,
                        "end": 56
                    },
                    {
                        "section": "Data: Interpretation Corpora",
                        "n": "4",
                        "start": 57,
                        "end": 63
                    },
                    {
                        "section": "Interpreter Quality Experiments",
                        "n": "5",
                        "start": 64,
                        "end": 77
                    },
                    {
                        "section": "Analysis",
                        "n": "5.2",
                        "start": 78,
                        "end": 88
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 89,
                        "end": 92
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1202-Table2-1.png",
                        "caption": "Table 2: Relative difference in Pearson’s r scores for ablated features after removing cognates, specifics, fillers and length difference (cumulative ablation, left to right). Omission and addition are key features distinguishing SI from translation.",
                        "page": 3,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 524.16,
                            "y1": 62.4,
                            "y2": 164.16
                        }
                    },
                    {
                        "filename": "../figure/image/1202-Table1-1.png",
                        "caption": "Table 1: Pearson’s r scores for predicted METEOR for baseline, trimmed and proposed feature sets on the test set (highest accuracy for each dataset indicated in bold).",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 288.0,
                            "y1": 62.879999999999995,
                            "y2": 151.2
                        }
                    },
                    {
                        "filename": "../figure/image/1202-Figure1-1.png",
                        "caption": "Figure 1: Simultaneous interpretation scenarios",
                        "page": 0,
                        "bbox": {
                            "x1": 329.28,
                            "x2": 504.0,
                            "y1": 224.64,
                            "y2": 377.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-51"
        },
        {
            "slides": {
                "0": {
                    "title": "The Word Embedding Pipeline",
                    "text": [
                        "U nlabeled corpus c or pus Unlabeled c orpus corpus",
                        "W2V GloVe Polyglot FastText",
                        "Unlabeled Supervised task corpus",
                        "U nlabeled corpus c or pus c orpus",
                        "Penn TreeBank SemEval OntoNotes Univ. Dependencies",
                        "Tagging Parsing Sentiment NER"
                    ],
                    "page_nums": [
                        1,
                        2,
                        3,
                        4
                    ],
                    "images": []
                },
                "2": {
                    "title": "Actual Pattern",
                    "text": [
                        "Affects supervised tas ks",
                        "Our method - compositional"
                    ],
                    "page_nums": [
                        6,
                        7,
                        8,
                        9,
                        10,
                        11
                    ],
                    "images": []
                },
                "3": {
                    "title": "Sources of OOVs",
                    "text": [
                        "Names Chalabi has increasingly marginalized within Iraq, ...",
                        "Domain-specific jargon Important species (...) include shrimp, (...) and some varieties of flatfish.",
                        "Foreign words This term was first used in German (Hochrenaissance),",
                        "Without George Martin the Beatles would have been just another",
                        "untalented band as Oasis.",
                        "What if Google morphed into GoogleOS?",
                        "Well have four bands, and Big D is cookin. lots of fun and great prizes.",
                        "Typos and other errors",
                        "I dislike this urban society and I want to leave this whole enviroment."
                    ],
                    "page_nums": [
                        12,
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19,
                        20
                    ],
                    "images": []
                },
                "4": {
                    "title": "Common OOV handling techniques",
                    "text": [
                        "task corpus U nlabeled c o rpus Unlabeled c orpus corpus",
                        "One UNK to rule them all",
                        "Trained with embeddings (stochastic unking)",
                        "Add subword model during WE training",
                        "What if we dont have access to the original corpus? (e.g. FastText) OOV"
                    ],
                    "page_nums": [
                        21,
                        22,
                        23,
                        24,
                        25,
                        26,
                        27
                    ],
                    "images": []
                },
                "5": {
                    "title": "Char2Tag",
                    "text": [
                        "Unlabeled U nlabeled corpus Supervised task corpus U nlabeled c o rpus Unlabeled c orpus corpus",
                        "Add subword layer to supervised task",
                        "OOVs benefit from co-trained character model",
                        "Requires large supervised training set for efficient transfer to test set OOVs"
                    ],
                    "page_nums": [
                        28,
                        29,
                        30,
                        31
                    ],
                    "images": []
                },
                "6": {
                    "title": "Enter MIMICK",
                    "text": [
                        "(No context) Unlabeled U nlabeled corpus Supervised task corpus U nlabeled c o rpus Unlabeled c orpus Subword units as inputs corpus",
                        "What data do we have, post-unlabeled corpus?",
                        "Orthography (the way words are spelled)",
                        "Use the former as training objective, latter as input",
                        "Pre-trained vectors as target",
                        "No need to access original unlabeled corpus"
                    ],
                    "page_nums": [
                        32,
                        33,
                        34,
                        35,
                        36,
                        37
                    ],
                    "images": []
                },
                "7": {
                    "title": "MIMICK Training",
                    "text": [
                        "m a k e"
                    ],
                    "page_nums": [
                        38,
                        39,
                        40,
                        41,
                        42,
                        43
                    ],
                    "images": [
                        "figure/image/1217-Figure1-1.png"
                    ]
                },
                "8": {
                    "title": "MIMICK Inference",
                    "text": [
                        "b l a h"
                    ],
                    "page_nums": [
                        44
                    ],
                    "images": [
                        "figure/image/1217-Figure1-1.png"
                    ]
                },
                "9": {
                    "title": "Observation Nearest Neighbors",
                    "text": [
                        "English (OOV Nearest in-vocab words)",
                        "MCT AWS, OTA, APT, PDM",
                        "pesky euphoric, disagreeable, horrid, ghastly",
                        "lawnmower tradesman, bookmaker, postman, hairdresser",
                        "geometric (m.pl., nontrad. spelling) geometric (m.pl.)",
                        "Surface form Syntactic properties Semantics"
                    ],
                    "page_nums": [
                        46,
                        47,
                        48,
                        49,
                        50,
                        51,
                        52,
                        53,
                        54
                    ],
                    "images": []
                },
                "10": {
                    "title": "Intrinsic Evaluation RareWords",
                    "text": [
                        "RareWords similarity task: morphologically-complex, mostly unseen words",
                        "Foreign words Rare(-ish) morphological derivations Nonce words Nonstandard orthography Typos and other errors"
                    ],
                    "page_nums": [
                        56,
                        57,
                        58,
                        59
                    ],
                    "images": []
                },
                "11": {
                    "title": "Extrinsic Evaluation POS Attribute Tagging",
                    "text": [
                        "UD is annotated for POS and morphosyntactic attributes",
                        "Cze: his stated goals",
                        "osoby v pokrocilem veku",
                        "people of advanced age",
                        "Rare(-ish) morphological derivations Nonce words Nonstandard orthography Typos and other errors",
                        "DT NN VBZ VBG POS",
                        "the cat is sitting",
                        "Attributes - same as POS layer",
                        "Negative effect on POS",
                        "Backward LSTM Micro F1"
                    ],
                    "page_nums": [
                        60,
                        61,
                        62,
                        63,
                        64
                    ],
                    "images": []
                },
                "12": {
                    "title": "Language Selection",
                    "text": [
                        "13 Indo-European (7 different branches)",
                        "10 from 8 non-IE branches",
                        "MRLs (e.g. Slavic languages)",
                        "Relatively free word order"
                    ],
                    "page_nums": [
                        65,
                        66,
                        67,
                        68,
                        69,
                        70,
                        71,
                        72,
                        73,
                        74,
                        75,
                        76,
                        77,
                        78
                    ],
                    "images": []
                },
                "13": {
                    "title": "Language Selection contd",
                    "text": [
                        "7 in non-alphabetic scripts",
                        "Ideographic (Chinese) - ~12K characters",
                        "Hebrew, Arabic - no casing, no vowels, syntactic fusion",
                        "Vietnamese - tokens are non-compositional syllables",
                        "OOV rate (UD against Polyglot vocabulary)"
                    ],
                    "page_nums": [
                        79,
                        80,
                        81,
                        82,
                        83,
                        84,
                        85,
                        86,
                        87,
                        88,
                        89
                    ],
                    "images": []
                },
                "14": {
                    "title": "Evaluated Systems",
                    "text": [
                        "NONE: Polyglots default UNK embedding",
                        "the flatf ish is sitt ing",
                        "CHAR2TAG - additional RNN layer",
                        "Char- LSTM Char- LSTM Char- LSTM Char- LSTM",
                        "BOTH: MIMICK + CHAR2TAG"
                    ],
                    "page_nums": [
                        90,
                        91,
                        92,
                        93,
                        94
                    ],
                    "images": []
                },
                "19": {
                    "title": "A Word Model from our Sponsor",
                    "text": [
                        "Our extrinsic results are on tagging",
                        "Please consider us for all your WE use cases!",
                        "IE! Code & models:",
                        "Code compatible with w2v, Polyglot, FastText",
                        "Models for Polyglot also on github",
                        "<1MB each, dynet format",
                        "Learn all OOVs in advance and add to param table, or",
                        "Load into memory and infer on-line"
                    ],
                    "page_nums": [
                        100,
                        101,
                        102,
                        103,
                        104,
                        105,
                        106,
                        107,
                        108,
                        109,
                        110,
                        111,
                        112
                    ],
                    "images": []
                }
            },
            "paper_title": "Mimicking Word Embeddings using Subword RNNs",
            "paper_id": "1217",
            "paper": {
                "title": "Mimicking Word Embeddings using Subword RNNs",
                "abstract": "Word embeddings improve generalization over lexical features by placing each word in a lower-dimensional space, using distributional information obtained from unlabeled data. However, the effectiveness of word embeddings for downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which embeddings do not exist. In this paper, we present MIM-ICK, an approach to generating OOV word embeddings compositionally, by learning a function from spellings to distributional embeddings. Unlike prior work, MIMICK does not require re-training on the original word embedding corpus; instead, learning is performed at the type level. Intrinsic and extrinsic evaluations demonstrate the power of this simple approach. On 23 languages, MIMICK improves performance over a word-based baseline for tagging part-of-speech and morphosyntactic attributes. It is competitive with (and complementary to) a supervised characterbased model in low-resource settings.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction One of the key advantages of word embeddings for natural language processing is that they enable generalization to words that are unseen in labeled training data, by embedding lexical features from large unlabeled datasets into a relatively low-dimensional Euclidean space."
                    },
                    {
                        "id": 1,
                        "string": "These low-dimensional embeddings are typically trained to capture distributional similarity, so that information can be shared among words that tend to appear in similar contexts."
                    },
                    {
                        "id": 2,
                        "string": "However, it is not possible to enumerate the entire vocabulary of any language, and even large unlabeled datasets will miss terms that appear in later applications."
                    },
                    {
                        "id": 3,
                        "string": "The issue of how to handle these out-of-vocabulary (OOV) words poses challenges for embedding-based methods."
                    },
                    {
                        "id": 4,
                        "string": "These challenges are particularly acute when working with lowresource languages, where even unlabeled data may be difficult to obtain at scale."
                    },
                    {
                        "id": 5,
                        "string": "A typical solution is to abandon hope, by assigning a single OOV embedding to all terms that do not appear in the unlabeled data."
                    },
                    {
                        "id": 6,
                        "string": "We approach this challenge from a quasigenerative perspective."
                    },
                    {
                        "id": 7,
                        "string": "Knowing nothing of a word except for its embedding and its written form, we attempt to learn the former from the latter."
                    },
                    {
                        "id": 8,
                        "string": "We train a recurrent neural network (RNN) on the character level with the embedding as the target, and use it later to predict vectors for OOV words in any downstream task."
                    },
                    {
                        "id": 9,
                        "string": "We call this model the MIMICK-RNN, for its ability to read a word's spelling and mimick its distributional embedding."
                    },
                    {
                        "id": 10,
                        "string": "Through nearest-neighbor analysis, we show that vectors learned via this method capture both word-shape features and lexical features."
                    },
                    {
                        "id": 11,
                        "string": "As a result, we obtain reasonable near-neighbors for OOV abbreviations, names, novel compounds, and orthographic errors."
                    },
                    {
                        "id": 12,
                        "string": "Quantitative evaluation on the Stanford RareWord dataset (Luong et al., 2013) provides more evidence that these character-based embeddings capture word similarity for rare and unseen words."
                    },
                    {
                        "id": 13,
                        "string": "As an extrinsic evaluation, we conduct experiments on joint prediction of part-of-speech tags and morphosyntactic attributes for a diverse set of 23 languages, as provided in the Universal Dependencies dataset (De Marneffe et al., 2014) ."
                    },
                    {
                        "id": 14,
                        "string": "Our model shows significant improvement across the board against a single UNK-embedding backoff method, and obtains competitive results against a supervised character-embedding model, which is trained end-to-end on the target task."
                    },
                    {
                        "id": 15,
                        "string": "In low-resource settings, our approach is particularly effective, and is complementary to supervised character embeddings trained from labeled data."
                    },
                    {
                        "id": 16,
                        "string": "The MIMICK-RNN therefore provides a useful new tool for tagging tasks in settings where there is limited labeled data."
                    },
                    {
                        "id": 17,
                        "string": "Models and code are available at www.github.com/ yuvalpinter/mimick ."
                    },
                    {
                        "id": 18,
                        "string": "Related Work Compositional models for embedding rare and unseen words."
                    },
                    {
                        "id": 19,
                        "string": "Several studies make use of morphological or orthographic information when training word embeddings, enabling the prediction of embeddings for unseen words based on their internal structure."
                    },
                    {
                        "id": 20,
                        "string": "Botha and Blunsom (2014) compute word embeddings by summing over embeddings of the morphemes; Luong et al."
                    },
                    {
                        "id": 21,
                        "string": "(2013) construct a recursive neural network over each word's morphological parse; Bhatia et al."
                    },
                    {
                        "id": 22,
                        "string": "(2016) use morpheme embeddings as a prior distribution over probabilistic word embeddings."
                    },
                    {
                        "id": 23,
                        "string": "While morphology-based approaches make use of meaningful linguistic substructures, they struggle with names and foreign language words, which include out-of-vocabulary morphemes."
                    },
                    {
                        "id": 24,
                        "string": "Character-based approaches avoid these problems: for example, Kim et al."
                    },
                    {
                        "id": 25,
                        "string": "(2016) train a recurrent neural network over words, whose embeddings are constructed by convolution over character embeddings; Wieting et al."
                    },
                    {
                        "id": 26,
                        "string": "(2016) learn embeddings of character ngrams, and then sum them into word embeddings."
                    },
                    {
                        "id": 27,
                        "string": "In all of these cases, the model for composing embeddings of subword units into word embeddings is learned by optimizing an objective over a large unlabeled corpus."
                    },
                    {
                        "id": 28,
                        "string": "In contrast, our approach is a post-processing step that can be applied to any set of word embeddings, regardless of how they were trained."
                    },
                    {
                        "id": 29,
                        "string": "This is similar to the \"retrofitting\" approach of Faruqui et al."
                    },
                    {
                        "id": 30,
                        "string": "(2015) , but rather than smoothing embeddings over a graph, we learn a function to build embeddings compositionally."
                    },
                    {
                        "id": 31,
                        "string": "Supervised subword models."
                    },
                    {
                        "id": 32,
                        "string": "Another class of methods learn task-specific character-based word embeddings within end-to-end supervised systems."
                    },
                    {
                        "id": 33,
                        "string": "For example, Santos and Zadrozny (2014) build word embeddings by convolution over char-acters, and then perform part-of-speech (POS) tagging using a local classifier; the tagging objective drives the entire learning process."
                    },
                    {
                        "id": 34,
                        "string": "Ling et al."
                    },
                    {
                        "id": 35,
                        "string": "(2015) propose a multi-level long shortterm memory (LSTM; Hochreiter and Schmidhuber, 1997) , in which word embeddings are built compositionally from an LSTM over characters, and then tagging is performed by an LSTM over words."
                    },
                    {
                        "id": 36,
                        "string": "Plank et al."
                    },
                    {
                        "id": 37,
                        "string": "(2016) show that concatenating a character-level or bit-level LSTM network to a word representation helps immensely in POS tagging."
                    },
                    {
                        "id": 38,
                        "string": "Because these methods learn from labeled data, they can cover only as much of the lexicon as appears in their labeled training sets."
                    },
                    {
                        "id": 39,
                        "string": "As we show, they struggle in several settings: lowresource languages, where labeled training data is scarce; morphologically rich languages, where the number of morphemes is large, or where the mapping from form to meaning is complex; and in Chinese, where the number of characters is orders of magnitude larger than in non-logographic scripts."
                    },
                    {
                        "id": 40,
                        "string": "Furthermore, supervised subword models can be combined with MIMICK, offering additive improvements."
                    },
                    {
                        "id": 41,
                        "string": "Morphosyntactic attribute tagging."
                    },
                    {
                        "id": 42,
                        "string": "We evaluate our method on the task of tagging word tokens for their morphosyntactic attributes, such as gender, number, case, and tense."
                    },
                    {
                        "id": 43,
                        "string": "The task of morpho-syntactic tagging dates back at least to the mid 1990s (Oflazer and Kuruöz, 1994; Hajič and Hladká, 1998) , and interest has been rejuvenated by the availability of large-scale multilingual morphosyntactic annotations through the Universal Dependencies (UD) corpus (De Marneffe et al., 2014) ."
                    },
                    {
                        "id": 44,
                        "string": "For example, Faruqui et al."
                    },
                    {
                        "id": 45,
                        "string": "(2016) propose a graph-based technique for propagating typelevel morphological information across a lexicon, improving token-level morphosyntactic tagging in 11 languages, using an SVM tagger."
                    },
                    {
                        "id": 46,
                        "string": "In contrast, we apply a neural sequence labeling approach, inspired by the POS tagger of Plank et al."
                    },
                    {
                        "id": 47,
                        "string": "(2016) ."
                    },
                    {
                        "id": 48,
                        "string": "MIMICK Word Embeddings We approach the problem of out-of-vocabulary (OOV) embeddings as a generation problem: regardless of how the original embeddings were created, we assume there is a generative wordformbased protocol for creating these embeddings."
                    },
                    {
                        "id": 49,
                        "string": "By training a model over the existing vocabulary, we can later use that model for predicting the embedding of an unseen word."
                    },
                    {
                        "id": 50,
                        "string": "Formally: given a language L, a vocabulary V ⊆ L of size V , and a pre-trained embeddings table W ∈ R V ×d where each word {w k } V k=1 is assigned a vector e k of dimension d, our model is trained to find the function f : L → R d such that the projected function f | V approximates the assignments f (w k ) ≈ e k ."
                    },
                    {
                        "id": 51,
                        "string": "Given such a model, a new word w k * ∈ L \\ V can now be assigned an embedding e k * = f (w k * )."
                    },
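                    {
                        "id": 51.1,
                        "string": "Because training is at the type level, the training data is just the embedding table itself; a minimal sketch (hypothetical helper, assuming the table is a plain word-to-vector dict):\n\ndef make_training_pairs(embedding_table):\n    # Each example pairs a word's character sequence (the input) with its\n    # pre-trained vector (the regression target); no corpus access is needed.\n    return [(list(word), vector) for word, vector in embedding_table.items()]"
                    },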
                    {
                        "id": 52,
                        "string": "Our predictive function of choice is a Word Type Character Bi-LSTM."
                    },
                    {
                        "id": 53,
                        "string": "Given a word with character sequence w = {c i } n 1 , a forward-LSTM and a backward-LSTM are run over the corresponding character embeddings sequence {e (c) i } n 1 ."
                    },
                    {
                        "id": 54,
                        "string": "Let h n f represent the final hidden vector for the forward-LSTM, and let h 0 b represent the final hidden vector for the backward-LSTM."
                    },
                    {
                        "id": 55,
                        "string": "The word embedding is computed by a multilayer perceptron: (1) f (w) = O T · g(T h · [h n f ; h 0 b ] + b h ) + b T , where T h , b h and O T , b T are parameters of affine transformations, and g is a nonlinear elementwise function."
                    },
                    {
                        "id": 56,
                        "string": "The model is presented in Figure 1 ."
                    },
                    {
                        "id": 57,
                        "string": "The training objective is similar to that of Yin and Schütze (2016) ."
                    },
                    {
                        "id": 58,
                        "string": "We match the predicted embeddings f (w k ) to the pre-trained word embeddings e w k , by minimizing the squared Euclidean distance, (2) L = f (w k ) − e w k 2 2 ."
                    },
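                    {
                        "id": 58.1,
                        "string": "A minimal PyTorch sketch of Eqs. (1)-(2) (an illustrative re-implementation, not the authors' DyNet code; the MLP width is an assumption, while char_dim=20, hidden=50, g=tanh and 64-dimensional targets follow the paper):\n\nimport torch\nimport torch.nn as nn\n\nclass MimickRNN(nn.Module):\n    def __init__(self, n_chars, char_dim=20, hidden=50, word_dim=64):\n        super().__init__()\n        self.emb = nn.Embedding(n_chars, char_dim)\n        self.lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)\n        self.hidden_layer = nn.Linear(2 * hidden, hidden)  # T_h, b_h\n        self.out = nn.Linear(hidden, word_dim)             # O_T, b_T\n\n    def forward(self, char_ids):  # char_ids: (batch, word_length)\n        _, (h_n, _) = self.lstm(self.emb(char_ids))\n        h = torch.cat([h_n[0], h_n[1]], dim=-1)  # [h_f ; h_b] of Eq. (1)\n        return self.out(torch.tanh(self.hidden_layer(h)))\n\n# Eq. (2): squared Euclidean distance to the pre-trained embedding\nloss_fn = lambda pred, target: ((pred - target) ** 2).sum()"
                    },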
                    {
                        "id": 59,
                        "string": "By backpropagating from this loss, it is possible to obtain local gradients with respect to the parameters of the LSTMs, the character embeddings, and the output model."
                    },
                    {
                        "id": 60,
                        "string": "The ultimate output of the training phase is the character embeddings matrix C and the parameters of the neural network: M = {C, F, B, T h , b h , O T , b T }, where F, B are the forward and backward LSTM component parameters, respectively."
                    },
                    {
                        "id": 61,
                        "string": "MIMICK Polyglot Embeddings The pretrained embeddings we use in our experiments are obtained from Polyglot (Al-Rfou et al., 2013), a multilingual word embedding effort."
                    },
                    {
                        "id": 62,
                        "string": "Available for dozens of languages, each dataset contains 64-dimension embeddings for the 100,000 most frequent words in a language's training corpus (of variable size), as well as an UNK embedding to be used for OOV words."
                    },
                    {
                        "id": 63,
                        "string": "Even with this vocabulary size, querying words from respective UD corpora (train + dev + test) yields high OOV rates: in at least half of the 23 languages in our experiments (see Section 5), 29.1% or more of the word types do not appear in the Polyglot vocabulary."
                    },
                    {
                        "id": 64,
                        "string": "The token-level median rate is 9.2%."
                    },
                    {
                        "id": 65,
                        "string": "1 Applying our MIMICK algorithm to Polyglot embeddings, we obtain a prediction model for each of the 23 languages."
                    },
                    {
                        "id": 66,
                        "string": "Based on preliminary testing on randomly selected held-out development sets of 1% from each Polyglot vocabulary (with error calculated as in Equation 2), we set the following hyper-parameters for the remainder of the experiments: character embedding dimension = 20; one LSTM layer with 50 hidden units; 60 training epochs with no dropout; nonlinearity function g = tanh."
                    },
                    {
                        "id": 67,
                        "string": "2 We initialize character embeddings randomly, and use DyNet to implement the model (Neubig et al., 2017) ."
                    },
                    {
                        "id": 68,
                        "string": "Nearest-neighbor examination."
                    },
                    {
                        "id": 69,
                        "string": "As a preliminary sanity check for the validity of our protocol, we examined nearest-neighbor samples in languages for which speakers were available: English, Hebrew, Tamil, and Spanish."
                    },
                    {
                        "id": 70,
                        "string": "(b) the model shows robustness to typos (e.g., developiong, corssing); (c) part-of-speech is learned across multiple suffixes (pesky -euphoric, ghastly); (d) word compounding is detected (e.g., lawnmower -bookmaker, postman); (e) semantics are not learned well (as is to be expected from the lack of context in training), but there are surprises (e.g., flatfish -slimy, watery)."
                    },
                    {
                        "id": 71,
                        "string": "word embeddings for all words in the test corpus."
                    },
                    {
                        "id": 72,
                        "string": "VarEmbed estimates a prior distribution over word embeddings, conditional on the morphological composition."
                    },
                    {
                        "id": 73,
                        "string": "For in-vocabulary words, a posterior is estimated from unlabeled data; for outof-vocabulary words, the expected embedding can be obtained from the prior alone."
                    },
                    {
                        "id": 74,
                        "string": "In addition, we compare to FastText (Bojanowski et al., 2016) , a high-vocabulary, high-dimensionality embedding benchmark."
                    },
                    {
                        "id": 75,
                        "string": "The results, shown in Table 3 , demonstrate that the MIMICK RNN recovers about half of the loss in performance incurred by the original Polyglot training model due to out-of-vocabulary words in the \"All pairs\" condition."
                    },
                    {
                        "id": 76,
                        "string": "MIMICK also outperforms VarEmbed."
                    },
                    {
                        "id": 77,
                        "string": "FastText can be considered an upper bound: with a vocabulary that is 25 times larger than the other models, it was missing words from only 44 pairs on this data."
                    },
                    {
                        "id": 78,
                        "string": "Joint Tagging of Parts-of-Speech and Morphosyntactic Attributes The Universal Dependencies (UD) scheme (De Marneffe et al., 2014) features a minimal set of 17 POS tags (Petrov et al., 2012) and supports tagging further language-specific features using attribute-specific inventories."
                    },
                    {
                        "id": 79,
                        "string": "For example, a verb in Turkish could be assigned a value for the evidentiality attribute, one which is absent from Danish."
                    },
                    {
                        "id": 80,
                        "string": "These additional morphosyntactic attributes are marked in the UD dataset as optional per-token attribute-value pairs."
                    },
                    {
                        "id": 81,
                        "string": "Our approach for tagging morphosyntactic attributes is similar to the part-of-speech tagging model of Ling et al."
                    },
                    {
                        "id": 82,
                        "string": "(2015) , who attach a projection layer to the output of a sentence-level bidirectional LSTM."
                    },
                    {
                        "id": 83,
                        "string": "We extend this approach to morphosyntactic tagging by duplicating this projection layer for each attribute type."
                    },
                    {
                        "id": 84,
                        "string": "The input to our multilayer perceptron (MLP) projection network is the hidden state produced for each token in the sentence by an underlying LSTM, and the output is OOV word Nearest neighbors TTGFM '(s/y) will come true', TPTVR '(s/y) will solve', TBTL '(s/y) will cancel', TSIR '(s/y) will remove' GIAVMTRIIM 'geometric(m-pl)'2 ANTVMIIM 'anatomic(m-pl)', GAVMTRIIM 'geometric(m-pl)'1 BQFTNV 'our request' IVFBIHM 'their(m) residents', XTAIHM 'their(m) sins', IRVFTV 'his inheritance' RIC'RDSVN 'Richardson' AVISTRK 'Eustrach', QMINQA 'Kaminka', GVLDNBRG 'Goldenberg'  attribute-specific probability distributions over the possible values for each attribute on each token in the sequence."
                    },
                    {
                        "id": 85,
                        "string": "Formally, for a given attribute a with possible values v ∈ V a , the tagging probability for the i'th word in a sentence is given by: Pr(a w i = v) = (Softmax(φ(h i ))) v , (3) with (4) φ(h i ) = O a W · tanh(W a h · h i + b a h ) + b a W , where h i is the i'th hidden state in the underlying LSTM, and φ(h i ) is a two-layer feedforward neural network, with weights W a h and O a W ."
                    },
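                    {
                        "id": 85.1,
                        "string": "A minimal PyTorch sketch of the per-attribute projection of Eqs. (3)-(4) (illustrative only; the MLP's hidden width is an assumption, as the paper does not specify it here):\n\nimport torch\nimport torch.nn as nn\n\nclass AttributeProjection(nn.Module):\n    # One such head is instantiated per attribute type a.\n    def __init__(self, lstm_dim, n_values):\n        super().__init__()\n        self.W_h = nn.Linear(lstm_dim, lstm_dim)  # W_h^a, b_h^a\n        self.O_w = nn.Linear(lstm_dim, n_values)  # O_W^a, b_W^a\n\n    def forward(self, h_i):  # h_i: the token's LSTM hidden state\n        phi = self.O_w(torch.tanh(self.W_h(h_i)))  # Eq. (4)\n        return torch.softmax(phi, dim=-1)          # Eq. (3)"
                    },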
                    {
                        "id": 86,
                        "string": "We apply a softmax transformation to the output; the value at position v is then equal to the probability of attribute v applying to token w i ."
                    },
                    {
                        "id": 87,
                        "string": "The input to the underlying LSTM is a sequence of word embeddings, which are initialized to the Polyglot vectors when possible, and to MIMICK vectors when necessary."
                    },
                    {
                        "id": 88,
                        "string": "Alternative initializations are considered in the evaluation, as described in Section 5.2."
                    },
                    {
                        "id": 89,
                        "string": "Each tagged attribute sequence (including POS tags) produces a loss equal to the sum of negative log probabilities of the true tags."
                    },
                    {
                        "id": 90,
                        "string": "One way to combine these losses is to simply compute the sum loss."
                    },
                    {
                        "id": 91,
                        "string": "However, many languages have large differences in sparsity across morpho-syntactic attributes, as apparent from Table 4 (rightmost column)."
                    },
                    {
                        "id": 92,
                        "string": "We therefore also compute a weighted sum loss, in which each attribute is weighted by the proportion of training corpus tokens on which it is assigned a non-NONE value."
                    },
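                    {
                        "id": 92.1,
                        "string": "A minimal sketch of computing these attribute weights (hypothetical helper; tokens are represented as attribute-to-value dicts with NONE encoded as a missing or None entry):\n\nfrom collections import Counter\n\ndef attribute_weights(train_tokens):\n    # Weight each attribute's loss by the proportion of training tokens\n    # that carry a non-NONE value for it.\n    counts = Counter(a for tok in train_tokens for a, v in tok.items() if v is not None)\n    return {a: c / len(train_tokens) for a, c in counts.items()}"
                    },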
                    {
                        "id": 93,
                        "string": "Preliminary experiments on development set data were inconclusive across languages and training set sizes, and so we kept the simpler sum loss objective for the remainder of our study."
                    },
                    {
                        "id": 94,
                        "string": "In all cases, part-of-speech tagging was less accurate when learned jointly with morphosyntactic attributes."
                    },
                    {
                        "id": 95,
                        "string": "This may be because the attribute loss acts as POS-unrelated \"noise\" affecting the common LSTM layer and the word embeddings."
                    },
                    {
                        "id": 96,
                        "string": "Experimental Settings The morphological complexity and compositionality of words varies greatly across languages."
                    },
                    {
                        "id": 97,
                        "string": "While a morphologically-rich agglutinative language such as Hungarian contains words that carry many attributes as fully separable morphemes, a sentence in an analytic language such as Vietnamese may have not a single polymorphemic or inflected word in it."
                    },
                    {
                        "id": 98,
                        "string": "To see whether this property is influential on our MIMICK model and its performance in the downstream tagging task, we select languages that comprise a sample of multiple morphological patterns."
                    },
                    {
                        "id": 99,
                        "string": "Language family and script type are other potentially influential factors in an orthography-based approach such as ours, and so we vary along these parameters as well."
                    },
                    {
                        "id": 100,
                        "string": "We also considered language selection recommendations from de Lhoneux and Nivre (2016) and Schluter and Agić (2017) ."
                    },
                    {
                        "id": 101,
                        "string": "As stated above, our approach is built on the Polyglot word embeddings."
                    },
                    {
                        "id": 102,
                        "string": "The intersection of the Polyglot embeddings and the UD dataset (version 1.4) yields 44 languages."
                    },
                    {
                        "id": 103,
                        "string": "Of these, many are under-annotated for morphosyntactic attributes; we select twenty-three sufficiently-tagged languages, with the exception of Indonesian."
                    },
                    {
                        "id": 104,
                        "string": "3 Table 4 presents the selected languages and their typological properties."
                    },
                    {
                        "id": 105,
                        "string": "As an additional proxy for mor- Table 4 : Languages used in tagging evaluation."
                    },
                    {
                        "id": 106,
                        "string": "Languages on the right are Indo-European."
                    },
                    {
                        "id": 107,
                        "string": "*In Vietnamese script, whitespace separates syllables rather than words."
                    },
                    {
                        "id": 108,
                        "string": "phological expressiveness, the rightmost column shows the proportion of UD tokens which are annotated with any morphosyntactic attribute."
                    },
                    {
                        "id": 109,
                        "string": "Metrics As noted above, we use the UD datasets for testing our MIMICK algorithm on 23 languages 4 with the supplied train/dev/test division."
                    },
                    {
                        "id": 110,
                        "string": "We measure partof-speech tagging by overall token-level accuracy."
                    },
                    {
                        "id": 111,
                        "string": "For morphosyntactic attributes, there does not seem to be an agreed-upon metric for reporting performance."
                    },
                    {
                        "id": 112,
                        "string": "Dzeroski et al."
                    },
                    {
                        "id": 113,
                        "string": "(2000) report pertag accuracies on a morphosyntactically tagged corpus of Slovene."
                    },
                    {
                        "id": 114,
                        "string": "Faruqui et al."
                    },
                    {
                        "id": 115,
                        "string": "(2016) report macro-averages of F1 scores of 11 languages from UD 1.1 for the various attributes (e.g., part-ofspeech, case, gender, tense); recall and precision were calculated for the full set of each attribute's values, pooled together."
                    },
                    {
                        "id": 116,
                        "string": "5 Agić et al."
                    },
                    {
                        "id": 117,
                        "string": "(2013) report separately on parts-of-speech and morphosyntactic attribute accuracies in Serbian and Croatian, as well as precision, recall, and F1 scores per tag."
                    },
                    {
                        "id": 118,
                        "string": "Georgiev et al."
                    },
                    {
                        "id": 119,
                        "string": "(2012) report token-level accuracy for exact all-attribute tags (e.g."
                    },
                    {
                        "id": 120,
                        "string": "'Ncmsh' for \"Noun short masculine singular definite\") in Bulgarian, reaching a tagset of size 680."
                    },
                    {
                        "id": 121,
                        "string": "Müller et al."
                    },
                    {
                        "id": 122,
                        "string": "(2013) do the same for six other languages."
                    },
                    {
                        "id": 123,
                        "string": "We report micro F1: each token's value for each attribute is compared separately with the gold labeling, where a correct prediction is a matching non-NONE attribute/value assignment."
                    },
                    {
                        "id": 124,
                        "string": "Recall and 4 When several datasets are available for a language, we use the unmarked corpus."
                    },
                    {
                        "id": 125,
                        "string": "5 Details were clarified in personal communication with the authors."
                    },
                    {
                        "id": 126,
                        "string": "precision are calculated over the entire set, with F1 defined as their harmonic mean."
                    },
                    {
                        "id": 127,
                        "string": "Models We implement and test the following models: No-Char."
                    },
                    {
                        "id": 128,
                        "string": "Word embeddings are initialized from Polyglot models, with unseen words assigned the Polyglot-supplied UNK vector."
                    },
                    {
                        "id": 129,
                        "string": "Following tuning experiments on all languages with cased script, we found it beneficial to first back off to the lowercased form for an OOV word if its embedding exists, and only otherwise assign UNK."
                    },
                    {
                        "id": 130,
                        "string": "MIMICK."
                    },
                    {
                        "id": 131,
                        "string": "Word embeddings are initialized from Polyglot, with OOV embeddings inferred from a MIMICK model (Section 3) trained on the Polyglot embeddings."
                    },
                    {
                        "id": 132,
                        "string": "Unlike the No-Char case, backing off to lowercased embeddings before using the MIMICK output did not yield conclusive benefits and thus we report results for the more straightforward no-backoff implementation."
                    },
                    {
                        "id": 133,
                        "string": "CHAR→TAG."
                    },
                    {
                        "id": 134,
                        "string": "Word embeddings are initialized from Polyglot as in the No-Char model (with lowercase backoff), and appended with the output of a character-level LSTM updated during training (Plank et al., 2016) ."
                    },
                    {
                        "id": 135,
                        "string": "This additional module causes a threefold increase in training time."
                    },
                    {
                        "id": 136,
                        "string": "Both."
                    },
                    {
                        "id": 137,
                        "string": "Word embeddings are initialized as in MIMICK, and appended with the CHAR→TAG LSTM."
                    },
                    {
                        "id": 138,
                        "string": "Other models."
                    },
                    {
                        "id": 139,
                        "string": "Several non-Polyglot embedding models were examined, all performed substantially worse than Polyglot."
                    },
                    {
                        "id": 140,
                        "string": "Two of these are notable: a random-initialization baseline, and a model initialized from FastText embeddings (tested on English)."
                    },
                    {
                        "id": 141,
                        "string": "FastText supplies 300-dimension embeddings for 2.51 million lowercase-only forms, and no UNK vector."
                    },
                    {
                        "id": 142,
                        "string": "6 Both of these embedding models were attempted with and without CHAR→TAG concatenation."
                    },
                    {
                        "id": 143,
                        "string": "Another model, initialized from only MIMICK output embeddings, performed well only on the language with smallest Polyglot training corpus (Latvian)."
                    },
                    {
                        "id": 144,
                        "string": "A Polyglot model where OOVs were initialized using an averaged embedding of all Polyglot vectors, rather than the supplied UNK vector, performed worse than our No-Char baseline on a great majority of the languages."
                    },
                    {
                        "id": 145,
                        "string": "Last, we do not employ type-based tagset restrictions."
                    },
                    {
                        "id": 146,
                        "string": "All tag inventories are computed from the training sets and each tag selection is performed over the full set."
                    },
                    {
                        "id": 147,
                        "string": "Hyperparameters Based on development set experiments, we set the following hyperparameters for all models on all languages: two LSTM layers of hidden size 128, MLP hidden layers of size equal to the number of each attribute's possible values; momentum stochastic gradient descent with 0.01 learning rate; 40 training epochs (80 for 5K settings) with a dropout rate of 0.5."
                    },
                    {
                        "id": 148,
                        "string": "The CHAR→TAG models use 20-dimension character embeddings and a single hidden layer of size 128."
                    },
                    {
                        "id": 149,
                        "string": "Results We report performance in both low-resource and full-resource settings."
                    },
                    {
                        "id": 150,
                        "string": "Low-resource training sets were obtained by randomly sampling training sentences, without replacement, until a predefined token limit was reached."
                    },
                    {
                        "id": 151,
                        "string": "We report the results on the full sets and on N = 5000 tokens in Table 5 (partof-speech tagging accuracy) and POS and morphosyntactic tagging."
                    },
                    {
                        "id": 152,
                        "string": "For POS, the largest margins are in the Slavic languages (Russian, Czech, Bulgarian), where word order is relatively free and thus rich word representations are imperative."
                    },
                    {
                        "id": 153,
                        "string": "Chinese also exhibits impressive improvement across all settings, perhaps due to the large character inventory (> 12,000), for which a model such as MIMICK can learn well-informed embeddings using the large Polyglot vocabulary dataset, overcoming both word-and characterlevel sparsity in the UD corpus."
                    },
                    {
                        "id": 154,
                        "string": "7 In morphosyntactic tagging, gains are apparent for Slavic languages and Chinese, but also for agglutinative languages -especially Tamil and Turkish -where the stable morpheme representation makes it easy for subword modeling to provide a type-level signal."
                    },
                    {
                        "id": 155,
                        "string": "8 To examine the effects on Slavic and agglutinative languages in a more fine-grained view, we present results of multiple training-set size experiments for each model, averaged over five repetitions (with different corpus samples), in Figure 2 ."
                    },
                    {
                        "id": 156,
                        "string": "MIMICK vs. CHAR→TAG."
                    },
                    {
                        "id": 157,
                        "string": "In several languages, the MIMICK algorithm fares better than the CHAR→TAG model on part-of-speech tagging in low-resource settings."
                    },
                    {
                        "id": 158,
                        "string": "Table 7 presents the POS tagging improvements that MIMICK achieves over the pre-trained Polyglot models, with and without CHAR→TAG concatenation, with 10,000 tokens of training data."
                    },
                    {
                        "id": 159,
                        "string": "We obtain statistically significant improvements in most languages, even when CHAR→TAG is included."
                    },
                    {
                        "id": 160,
                        "string": "These improvements are particularly substantial for test-set tokens outside the UD training set, as shown in the right two columns."
                    },
                    {
                        "id": 161,
                        "string": "While test set OOVs are a strength of the CHAR→TAG model (Plank et al., 2016) , in many languages there are still considerable improvements to be obtained from the application of MIMICK initialization."
                    },
                    {
                        "id": 162,
                        "string": "This suggests that with limited training data, the end-to-end CHAR→TAG model is unable to learn a sufficiently accurate representational mapping from orthography."
                    },
                    {
                        "id": 163,
                        "string": "Conclusion We present a straightforward algorithm to infer OOV word embedding vectors from pre-trained, 7 Character coverage in Chinese Polyglot is surprisingly good: only eight characters from the UD dataset are unseen in Polyglot, across more than 10,000 unseen word types."
                    },
                    {
                        "id": 164,
                        "string": "8 Persian is officially classified as agglutinative but it is mostly so with respect to derivations."
                    },
                    {
                        "id": 165,
                        "string": "Its word-level inflections are rare and usually fusional."
                    },
                    {
                        "id": 166,
                        "string": "limited-vocabulary models, without need to access the originating corpus."
                    },
                    {
                        "id": 167,
                        "string": "This method is particularly useful for low-resource languages and tasks with little labeled data available, and in fact is task-agnostic."
                    },
                    {
                        "id": 168,
                        "string": "Our method improves performance over word-based models on annotated sequence-tagging tasks for a large variety of languages across dimensions of family, orthography, and morphology."
                    },
                    {
                        "id": 169,
                        "string": "In addition, we present a Bi-LSTM approach for tagging morphosyntactic attributes at the token level."
                    },
                    {
                        "id": 170,
                        "string": "In this paper, the MIM-ICK model was trained using characters as input, but future work may consider the use of other subword units, such as morphemes, phonemes, or even bitmap representations of ideographic characters (Costa-jussà et al., 2017)."
                    },
                    {
                        "id": 171,
                        "string": "Acknowledgments We thank Umashanthi Pavalanathan, Sandeep Soni, Roi Reichart, and our anonymous reviewers for their valuable input."
                    },
                    {
                        "id": 172,
                        "string": "We thank Manaal Faruqui and Ryan McDonald for their help in understanding the metrics for morphosyntactic tagging."
                    },
                    {
                        "id": 173,
                        "string": "The project was supported by project HDTRA1-15-1-0019 from the Defense Threat Reduction Agency."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 17
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 18,
                        "end": 47
                    },
                    {
                        "section": "MIMICK Word Embeddings",
                        "n": "3",
                        "start": 48,
                        "end": 60
                    },
                    {
                        "section": "MIMICK Polyglot Embeddings",
                        "n": "3.1",
                        "start": 61,
                        "end": 77
                    },
                    {
                        "section": "Joint Tagging of Parts-of-Speech and Morphosyntactic Attributes",
                        "n": "4",
                        "start": 78,
                        "end": 95
                    },
                    {
                        "section": "Experimental Settings",
                        "n": "5",
                        "start": 96,
                        "end": 108
                    },
                    {
                        "section": "Metrics",
                        "n": "5.1",
                        "start": 109,
                        "end": 126
                    },
                    {
                        "section": "Models",
                        "n": "5.2",
                        "start": 127,
                        "end": 146
                    },
                    {
                        "section": "Hyperparameters",
                        "n": "5.3",
                        "start": 147,
                        "end": 148
                    },
                    {
                        "section": "Results",
                        "n": "6",
                        "start": 149,
                        "end": 162
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 163,
                        "end": 169
                    },
                    {
                        "section": "Acknowledgments",
                        "n": "8",
                        "start": 170,
                        "end": 173
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1217-Table4-1.png",
                        "caption": "Table 4: Languages used in tagging evaluation. Languages on the right are Indo-European. *In Vietnamese script, whitespace separates syllables rather than words.",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 523.1999999999999,
                            "y1": 67.2,
                            "y2": 235.2
                        }
                    },
                    {
                        "filename": "../figure/image/1217-Figure1-1.png",
                        "caption": "Figure 1: MIMICK model architecture.",
                        "page": 2,
                        "bbox": {
                            "x1": 318.71999999999997,
                            "x2": 511.2,
                            "y1": 62.879999999999995,
                            "y2": 259.2
                        }
                    },
                    {
                        "filename": "../figure/image/1217-Table6-1.png",
                        "caption": "Table 6: Micro-F1 for morphosyntactic attributes (UD 1.4 Test). Bold (Italic) type indicates significant improvement (degradation) by a bootstrapped Z-test, p < .01, comparing models as in Table 5. Note that the Kazakh (kk) test set has only 78 morphologically tagged tokens.",
                        "page": 7,
                        "bbox": {
                            "x1": 132.96,
                            "x2": 463.2,
                            "y1": 445.44,
                            "y2": 703.1999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1217-Table5-1.png",
                        "caption": "Table 5: POS tagging accuracy (UD 1.4 Test). Bold (Italic) indicates significant improvement (degradation) by McNemar’s test, p < .01, comparing MIMICK to “No-Char”, and “Both” to CHAR→TAG. * For reference, we copy the reported results of Plank et al. (2016)’s analog to CHAR→TAG. Note that these were obtained on UD 1.2, and without jointly tagging morphosyntactic attributes.",
                        "page": 7,
                        "bbox": {
                            "x1": 91.67999999999999,
                            "x2": 504.0,
                            "y1": 74.88,
                            "y2": 352.32
                        }
                    },
                    {
                        "filename": "../figure/image/1217-Table1-1.png",
                        "caption": "Table 1: Nearest-neighbor examples for the English MIMICK model.",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 527.04,
                            "y1": 67.2,
                            "y2": 144.96
                        }
                    },
                    {
                        "filename": "../figure/image/1217-Table7-1.png",
                        "caption": "Table 7: Absolute gain in POS tagging accuracy from using MIMICK for 10,000-token datasets (all tokens for Tamil and Kazakh). Bold denotes statistical significance (McNemar’s test,p < 0.01).",
                        "page": 8,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 284.15999999999997,
                            "y1": 392.64,
                            "y2": 670.0799999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1217-Figure2-1.png",
                        "caption": "Figure 2: Results on agglutinative languages (top) and on Slavic languages (bottom). X-axis is number of training tokens, starting at 500. Error bars are the standard deviations over five random training data subsamples.",
                        "page": 8,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 527.04,
                            "y1": 61.44,
                            "y2": 283.2
                        }
                    },
                    {
                        "filename": "../figure/image/1217-Table2-1.png",
                        "caption": "Table 2: Nearest-neighbor examples for Hebrew (Transcriptions per Sima’an et al. (2001)). ‘s/y’ stands for ‘she/you-m.sg.’; subscripts denote alternative spellings, standard form being ‘X’1.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 523.1999999999999,
                            "y1": 67.2,
                            "y2": 125.28
                        }
                    },
                    {
                        "filename": "../figure/image/1217-Table3-1.png",
                        "caption": "Table 3: Similarity results on the RareWord set, measured as Spearman’s ρ× 100. VarEmbed was trained on a 20-million token dataset, Polyglot on a 1.7B-token dataset.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 288.0,
                            "y1": 178.56,
                            "y2": 281.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-52"
        },
        {
            "slides": {
                "0": {
                    "title": "Span Parsing is SOTA in Constituency Parsing",
                    "text": [
                        "Cross+Huang 2016 introduced Span Parsing",
                        "But with greedy decoding.",
                        "Stern et al. 2017 had Span Parsing with Exact Search and Global Training",
                        "But was too slow: O(n3)",
                        "Can we get the best of both worlds? Cross Huang",
                        "Something that is both fast and accurate?",
                        "Speed New at ACL 2018! Also Span Parsing!"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Both Fast and Accurate",
                    "text": [
                        "Baseline Chart Parser (Stern et al. 2017a)",
                        "Our Linear Time Parser"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": [
                        "figure/image/1224-Figure2-1.png"
                    ]
                },
                "2": {
                    "title": "In this talk we will discuss",
                    "text": [
                        "Linear Time Constituency Parsing using dynamic programming",
                        "Going slower in order to go faster: O(n3) O(n4) O(n)",
                        "Cube Pruning to speed up Incremental Parsing with Dynamic Programming",
                        "From O(n b2) to O(n b log b)",
                        "An improved loss function for Loss-Augmented Decoding",
                        "2nd highest accuracy among single systems trained on PTB only"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Span Parsing",
                    "text": [
                        "Span differences are taken from an encoder",
                        "(in our case: a bi-LSTM)",
                        "A span is scored and labeled by a feed-forward network. s",
                        "The score of a tree is the sum of all the labeled span scores",
                        "(i,j,X)2t s You should eat ice cream /s"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Incremental Span Parsing Example",
                    "text": [
                        "Eat ice cream after lunch VB NN NN IN NN Cross + Huang 2016",
                        "Eat NN NN IN NN",
                        "S-VP ice cream after NN",
                        "NP PP Shift NP",
                        "S Action Label Stack",
                        "Eat ice cream after lunch Reduce S-VP VB NN NN IN NN Cross + Huang"
                    ],
                    "page_nums": [
                        5,
                        6,
                        7,
                        8,
                        9,
                        10,
                        11,
                        12,
                        13,
                        14
                    ],
                    "images": []
                },
                "5": {
                    "title": "How Many Possible Parsing Paths",
                    "text": [
                        "2 actions per state."
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "6": {
                    "title": "Equivalent Stacks",
                    "text": [
                        "Observe that all stacks that end with (i, j) will be treated the same!",
                        "Until (i, j) is popped off.",
                        "So we can treat these as temporarily equivalent, and merge.",
                        "This is our new stack representation. Left Pointers",
                        "Graph-Structured Stack (Tomita Huang Sagae 2010)"
                    ],
                    "page_nums": [
                        16,
                        17,
                        18
                    ],
                    "images": []
                },
                "7": {
                    "title": "Dynamic Programming Merging Stacks",
                    "text": [
                        "Temporarily merging stacks will make our state space polynomial."
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "8": {
                    "title": "Becoming Action Synchronous",
                    "text": [
                        "Shift-Reduce Parsers are traditionally action synchronous.",
                        "This makes beam-search straight forward.",
                        "We will also do the same",
                        "But will show that this will slow down our DP (befo re applying beam -search)"
                    ],
                    "page_nums": [
                        20
                    ],
                    "images": []
                },
                "9": {
                    "title": "Action Synchronous Parsing Example",
                    "text": [
                        "Gold: (0,1) Shift Shift Shift Reduce Reduce Shift Shift Reduce Reduce",
                        "sh sh sh sh sh r r r r",
                        "r r sh r r r",
                        "r r r r r r",
                        "sh r r r sh sh sh",
                        "r r r sh r sh r r r r"
                    ],
                    "page_nums": [
                        21,
                        22,
                        23,
                        24,
                        25,
                        26,
                        27
                    ],
                    "images": []
                },
                "10": {
                    "title": "Runtime Analysis",
                    "text": [
                        "sh sh sh sh sh r r r r",
                        "r r r r r r",
                        "sh r r r sh sh sh",
                        "r r r sh r sh r r r r",
                        "#left pointers per state: O(n)",
                        "Check out the paper for our new theorem: r sh sh",
                        "Thanks to Dezhong Deng!"
                    ],
                    "page_nums": [
                        28,
                        29,
                        30,
                        31,
                        32
                    ],
                    "images": []
                },
                "11": {
                    "title": "Going slower to go faster",
                    "text": [
                        "Our Action-Synchronous algorithm has a slower runtime than CKY!",
                        "However, it also becomes straightforward to prune using beam search.",
                        "So we can achieve a linear runtime in the end.",
                        "sh sh sh sh sh sh sh sh sh sh r r r r r r r r O(n4) O(n4) r r r r r r r r r r r r sh sh r r r r r r sh sh sh sh sh sh r r r r r r sh sh r r sh sh r r r r r r r r r r",
                        "sh sh sh sh sh sh sh sh sh sh r r r r r r r r O(n) O(n) r r r r r r r sh r sh sh r sh r r (approx. (approx. DP) DP) sh r sh r sh r sh"
                    ],
                    "page_nums": [
                        33
                    ],
                    "images": []
                },
                "12": {
                    "title": "Now our runtime is On",
                    "text": [
                        "sh sh sh sh sh r r r r",
                        "r r r r r",
                        "r sh r sh"
                    ],
                    "page_nums": [
                        34
                    ],
                    "images": []
                },
                "13": {
                    "title": "But this On is hiding a constant",
                    "text": [
                        "b states per action step",
                        "O(b) left pointers per state"
                    ],
                    "page_nums": [
                        35,
                        36
                    ],
                    "images": []
                },
                "17": {
                    "title": "Loss Function",
                    "text": [
                        "Counts the incorrectly labeled spans in the tree (Stern et al. 2017)",
                        "Happens to be decomposable, so can even be used to compare partial trees."
                    ],
                    "page_nums": [
                        43
                    ],
                    "images": []
                },
                "18": {
                    "title": "Novel Cross Span Loss",
                    "text": [
                        "We observe that the null label is used in two different ways:",
                        "To facilitate ternary and n-ary branching trees.",
                        "As a default label for incorrect spans that violate other gold spans.",
                        "i j i j",
                        "We modify the loss to account for incorrect spans in the tree.",
                        "Indicates whether (i, j) is crossing a span in the gold tree",
                        "Still decomposable over spans, so can be used to compare partial trees."
                    ],
                    "page_nums": [
                        44,
                        45,
                        46
                    ],
                    "images": []
                },
                "20": {
                    "title": "Comparison with Baseline Chart Parser",
                    "text": [
                        "Model Note F1 (PTB test)",
                        "Stern et al. (2017a) Baseline Chart Parser"
                    ],
                    "page_nums": [
                        48
                    ],
                    "images": []
                },
                "21": {
                    "title": "Comparison to Other Parsers",
                    "text": [
                        "PTB only, Single Model, End-to-End Reranking, Ensemble, Extra Data",
                        "Model Note F1 Model Note F1",
                        "Durett + Klein 2015 Vinyals et al. 2015 Ensemble",
                        "Cross + Huang 2016 Original Span Parser Dyer et al. 2016 Generative Reranking",
                        "Dyer et al. 2016 Discriminative Fried et al. Reranking Ensemble",
                        "Stern et al. 2017a Chart Baseline Parser",
                        "Stern et al. 2017c Separate Decoding",
                        "Our Work Beam 20"
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": []
                }
            },
            "paper_title": "Linear-Time Constituency Parsing with RNNs and Dynamic Programming",
            "paper_id": "1224",
            "paper": {
                "title": "Linear-Time Constituency Parsing with RNNs and Dynamic Programming",
                "abstract": "Recently, span-based constituency parsing has achieved competitive accuracies with extremely simple models by using bidirectional RNNs to model \"spans\". However, the minimal span parser of Stern et al. (2017a) which holds the current state of the art accuracy is a chart parser running in cubic time, O(n 3 ), which is too slow for longer sentences and for applications beyond sentence boundaries such as end-toend discourse parsing and joint sentence boundary detection and parsing. We propose a linear-time constituency parser with RNNs and dynamic programming using graph-structured stack and beam search, which runs in time O(nb 2 ) where b is the beam size. We further speed this up to O(nb log b) by integrating cube pruning. Compared with chart parsing baselines, this linear-time parser is substantially faster for long sentences on the Penn Treebank and orders of magnitude faster for discourse parsing, and achieves the highest F1 accuracy on the Penn Treebank among single model end-to-end systems.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Span-based neural constituency parsing (Cross and Huang, 2016; Stern et al., 2017a) has attracted attention due to its high accuracy and extreme simplicity."
                    },
                    {
                        "id": 1,
                        "string": "Compared with other recent neural constituency parsers (Dyer et al., 2016; Liu and Zhang, 2016; Durrett and Klein, 2015) which use neural networks to model tree structures, the spanbased framework is considerably simpler, only using bidirectional RNNs to model the input sequence and not the output tree."
                    },
                    {
                        "id": 2,
                        "string": "Because of this factorization, the output space is decomposable which enables efficient dynamic programming algorithm such as CKY."
                    },
                    {
                        "id": 3,
                        "string": "But existing span-based parsers suffer from a crucial limitation in terms of search: on the one hand, a greedy span parser (Cross and Huang, 2016 ) is fast (linear-time) but only explores one single path in the exponentially large search space, and on the other hand, a chartbased span parser (Stern et al., 2017a) performs exact search and achieves state-of-the-art accuracy, but in cubic time, which is too slow for longer sentences and for applications that go beyond sentence boundaries such as end-to-end discourse parsing (Hernault et al., 2010; Zhao and Huang, 2017) and integrated sentence boundary detection and parsing (Björkelund et al., 2016) ."
                    },
                    {
                        "id": 4,
                        "string": "We propose to combine the merits of both greedy and chart-based approaches and design a linear-time span-based neural parser that searches over exponentially large space."
                    },
                    {
                        "id": 5,
                        "string": "Following Huang and Sagae (2010) , we perform left-to-right dynamic programming in an action-synchronous style, with (2n − 1) actions (i.e., steps) for a sentence of n words."
                    },
                    {
                        "id": 6,
                        "string": "While previous non-neural work in this area requires sophisticated features (Huang and Sagae, 2010; Mi and Huang, 2015) and thus high time complexity such as O(n 11 ), our states are as simple as : (i, j) where is the step index and (i, j) is the span, modeled using bidirectional RNNs without any syntactic features."
                    },
                    {
                        "id": 7,
                        "string": "This gives a running time of O(n 4 ), with the extra O(n) for step index."
                    },
                    {
                        "id": 8,
                        "string": "We further employ beam search to have a practical runtime of O(nb 2 ) at the cost of exact search where b is the beam size."
                    },
                    {
                        "id": 9,
                        "string": "However, on the Penn Treebank, most sentences are less than 40 words (n < 40), and even with a small beam size of b = 10, the observed complexity of an O(nb 2 ) parser is not exactly linear in n (see Experiments)."
                    },
                    {
                        "id": 10,
                        "string": "To solve this problem, we apply cube pruning (Chiang, 2007; Huang and Chiang, 2007) to improve the runtime to O(nb log b) which renders an observed complexity that is linear in n (with minor extra inexactness)."
                    },
                    {
                        "id": 11,
                        "string": "We make the following contributions: • We design the first neural parser that is both linear time and capable of searching over exponentially large space."
                    },
                    {
                        "id": 12,
                        "string": "1 • We are the first to apply cube pruning to incremental parsing, and achieves, for the first time, the complexity of O(nb log b), i.e., linear in sentence length and (almost) linear in beam size."
                    },
                    {
                        "id": 13,
                        "string": "This leads to an observed complexity strictly linear in sentence length n. • We devise a novel loss function which penalizes wrong spans that cross gold-tree spans, and employ max-violation update (Huang et al., 2012) to train this parser with structured SVM and beam search."
                    },
                    {
                        "id": 14,
                        "string": "• Compared with chart parsing baselines, our parser is substantially faster for long sentences on the Penn Treebank, and orders of magnitude faster for end-to-end discourse parsing."
                    },
                    {
                        "id": 15,
                        "string": "It also achieves the highest F1 score on the Penn Treebank among single model end-to-end systems."
                    },
                    {
                        "id": 16,
                        "string": "• We devise a new formulation of graphstructured stack (Tomita, 1991) which requires no extra bookkeeping, proving a new theorem that gives deep insight into GSS."
                    },
                    {
                        "id": 17,
                        "string": "Preliminaries Span-Based Shift-Reduce Parsing A span-based shift-reduce constituency parser (Cross and Huang, 2016 ) maintains a stack of spans (i, j), and progressively adds a new span each time it takes a shift or reduce action."
                    },
                    {
                        "id": 18,
                        "string": "With (i, j) on top of the stack, the parser can either shift to push the next singleton span (j, j + 1) on the stack, or it can reduce to combine the top two spans, (k, i) and (i, j), forming the larger span (k, j)."
                    },
                    {
                        "id": 19,
                        "string": "After each shift/reduce action, the top-most span is labeled as either a constituent or with a null label ∅, which means that the subsequence is not a subtree in the final decoded parse."
                    },
                    {
                        "id": 20,
                        "string": "Parsing initializes with an empty stack and continues until (0, n) is formed, representing the entire sentence."
                    },
                    {
                        "id": 21,
                        "string": "shift : , j : (c, ) + 1 : j, j + 1 : (c + ξ, ξ) j < n reduce : k, i : (c , v ) : i, j : ( , v) + 1 : k, j : (c + v + σ, v + v + σ) Figure 1: Our shift-reduce deductive system."
                    },
                    {
                        "id": 22,
                        "string": "Here is the step index, c and v are prefix and inside scores."
                    },
                    {
                        "id": 23,
                        "string": "Unlike Huang and Sagae (2010) and Cross and Huang (2016) , ξ and σ are not shift/reduce scores; instead, they are the (best) label scores of the resulting span: ξ = max X s(j, j + 1, X) and σ = max X s(k, j, X) where X is a nonterminal symbol (could be ∅)."
                    },
                    {
                        "id": 24,
                        "string": "Here = − 2(j − i) + 1."
                    },
                    {
                        "id": 25,
                        "string": "Bi-LSTM features To get the feature representation of a span (i, j), we use the output sequence of a bi-directional LSTM (Cross and Huang, 2016; Stern et al., 2017a) ."
                    },
                    {
                        "id": 26,
                        "string": "The LSTM produces f 0 , ..., f n forwards and b n , ..., b 0 backwards outputs, which we concatenate the differences of (f j −f i ) and (b i −b j ) as the representation for span (i, j)."
                    },
                    {
                        "id": 27,
                        "string": "This eliminates the need for complex feature engineering, and can be stored for efficient querying during decoding."
                    },
                    {
                        "id": 28,
                        "string": "Dynamic Programming Score Decomposition Like Stern et al."
                    },
                    {
                        "id": 29,
                        "string": "(2017a) , we also decompose the score of a tree t to be the sum of the span scores: s(t) = (i,j,X)∈t s(i, j, X) (1) = (i,j)∈t max X s((f j − f i ; b i − b j ), X) (2) Note that X is a nonterminal label, a unary chain (e.g., S-VP), or null label ∅."
                    },
                    {
                        "id": 30,
                        "string": "2 In a shift-reduce setting, there are 2n − 1 steps (n shifts and n − 1 reduces) and after each step we take the best label for the resulting span; therefore there are exactly 2n−1 such (labeled) spans (i, j, X) in tree t. Also note that the choice of the label for any span (i, j) is only dependent on (i, j) itself (and not depending on any subtree information), thus the max over label X is independent of other spans, which is a nice property of span-based parsing (Cross and Huang, 2016; Stern et al., 2017a) ."
                    },
                    {
                        "id": 31,
                        "string": "Graph-Struct."
                    },
                    {
                        "id": 32,
                        "string": "Stack w/o Bookkeeping We now reformulate this DP parser in the above section as a shift-reduce parser."
                    },
                    {
                        "id": 33,
                        "string": "We maintain a step index in order to perform action-synchronous beam search (see below)."
                    },
                    {
                        "id": 34,
                        "string": "Figure 1 shows how to represent a parsing stack using only the top span (i, j)."
                    },
                    {
                        "id": 35,
                        "string": "If the top span (i, j) shifts, it produces (j, j + 1), but if it reduces, it needs to know the second last span on the stack, (k, i), which is not represented in the current state."
                    },
                    {
                        "id": 36,
                        "string": "This problem can be solved by graph-structure stack (Tomita, 1991; Huang and Sagae, 2010) , which maintains, for each state p, a set of predecessor states π(p) that p can combine with on the left."
                    },
                    {
                        "id": 37,
                        "string": "This is the way our actual code works (π(p) is implemented as a list of pointers, or \"left pointers\"), but here for simplicity of presentation we devise a novel but easier-to-understand formulation in Fig."
                    },
                    {
                        "id": 38,
                        "string": "1 , where we explicitly represent the set of predecessor states that state : (i, j) can combine with as : (k, i) where = − 2(j − i) + 1, i.e., (i, j) at step can combine with any (k, i) for any k at step ."
                    },
                    {
                        "id": 39,
                        "string": "The rationale behind this new formulation is the following theorem: Theorem 1 The predecessor states π( : (i, j)) are all in the same step = − 2(j − i) + 1."
                    },
                    {
                        "id": 40,
                        "string": "Proof."
                    },
                    {
                        "id": 41,
                        "string": "By induction."
                    },
                    {
                        "id": 42,
                        "string": "This Theorem bring new and deep insights and suggests an alternative implementation that does not require any extra bookkeeping."
                    },
                    {
                        "id": 43,
                        "string": "The time complexity of this algorithm is O(n 4 ) with the extra O(n) due to step index."
                    },
                    {
                        "id": 44,
                        "string": "3 Action-Synchronous Beam Search The incremental nature of our parser allows us to further lower the runtime complexity at the cost of inexact search."
                    },
                    {
                        "id": 45,
                        "string": "At each time step, we maintain the top b parsing states, pruning off the rest."
                    },
                    {
                        "id": 46,
                        "string": "Thus, a candidate parse that made it to the end of decoding had to survive within the top b at every step."
                    },
                    {
                        "id": 47,
                        "string": "With O(n) parsing actions our time complexity becomes linear in the length of the sentence."
                    },
                    {
                        "id": 48,
                        "string": "Cube Pruning However, Theorem 1 suggests that a parsing state p can have up to b predecessor states (\"left pointers\"), i.e., |π(p)| ≤ b because π(p) are all in the same step, a reduce action can produce up to b subsequent new reduced states."
                    },
                    {
                        "id": 49,
                        "string": "With b items on a beam and O(n) actions to take, this gives us an overall complexity of O(nb 2 )."
                    },
                    {
                        "id": 50,
                        "string": "Even though b 2 is a constant, even modest values of b can make b 2 dominate the length of the sentence."
                    },
                    {
                        "id": 51,
                        "string": "4 To improve this at the cost of additional inexactness, we introduce cube pruning to our beam search, where we put candidate actions into a heap and retrieve the top b states to be considered in the next time-step."
                    },
                    {
                        "id": 52,
                        "string": "We heapify the top b shiftmerged states and the top b reduced states."
                    },
                    {
                        "id": 53,
                        "string": "To avoid inserting all b 2 reduced states from the previous beam, we only consider each state's highest scoring left pointer, 5 and whenever we pop a reduced state from the heap, we iterate down its left pointers to insert the next non-duplicate reduced state back into the heap."
                    },
                    {
                        "id": 54,
                        "string": "This process finishes when we pop b items from the heap."
                    },
                    {
                        "id": 55,
                        "string": "The Training We use a Structured SVM approach for training (Stern et al., 2017a; Shi et al., 2017) ."
                    },
                    {
                        "id": 56,
                        "string": "We want the model to score the gold tree t * higher than any other tree t by at least a margin ∆(t, t * ): ∀t, s(t * ) − s(t) ≥ ∆(t, t * )."
                    },
                    {
                        "id": 57,
                        "string": "Note that ∆(t, t) = 0 for any t and ∆(t, t * ) > 0 for any t = t * ."
                    },
                    {
                        "id": 58,
                        "string": "At training time we perform lossaugmented decoding: t = arg max t s ∆ (t) = arg max t s(t) + ∆(t, t * )."
                    },
                    {
                        "id": 59,
                        "string": "4 The average length of a sentence in the Penn Treebank training set is about 24."
                    },
                    {
                        "id": 60,
                        "string": "Even with a beam size of 10, we already have b 2 = 100, which would be a significant factor in our runtime."
                    },
                    {
                        "id": 61,
                        "string": "In practice, each parsing state will rarely have the maximum b left pointers so this ends up being a loose upper-bound."
                    },
                    {
                        "id": 62,
                        "string": "Nevertheless, the beam search should be performed with the input length in mind, or else as b increases we risk losing a linear runtime."
                    },
                    {
                        "id": 63,
                        "string": "5 If each previous beam is sorted, and if the beam search is conducted by going top-to-bottom, then each state's left pointers will implicitly be kept in sorted order."
                    },
                    {
                        "id": 64,
                        "string": "where s ∆ (·) is the loss-augmented score."
                    },
                    {
                        "id": 65,
                        "string": "Ift = t * , then all constraints are satisfied (which implies arg max t s(t) = t * ), otherwise we perform an update by backpropagating from s ∆ (t) − s(t * )."
                    },
                    {
                        "id": 66,
                        "string": "Cross-Span Loss The baseline loss function from Stern et al."
                    },
                    {
                        "id": 67,
                        "string": "(2017a) counts the incorrect labels (i, j, X) in the predicted tree: ∆ base (t, t * ) = (i,j,X)∈t 1 X = t * (i,j) ."
                    },
                    {
                        "id": 68,
                        "string": "Note that X can be null ∅, and t * (i,j) denotes the gold label for span (i, j), which could also be ∅."
                    },
                    {
                        "id": 69,
                        "string": "6 However, there are two cases where t * (i,j) = ∅: a subspan (i, j) due to binarization (e.g., a span combining the first two subtrees in a ternary branching node), or an invalid span in t that crosses a gold span in t * ."
                    },
                    {
                        "id": 70,
                        "string": "In the baseline function above, these two cases are treated equivalently; for example, a span (3, 5, ∅) ∈ t is not penalized even if there is a gold span (4, 6, VP) ∈ t * ."
                    },
                    {
                        "id": 71,
                        "string": "So we revise our loss function as: ∆ new (t, t * ) = (i,j,X)∈t 1 X = t * (i,j) ∨ cross(i, j, t * ) 6 Note that the predicted tree t has exactly 2n − 1 spans but t * has much fewer spans (only labeled spans without ∅)."
                    },
                    {
                        "id": 72,
                        "string": "where cross(i, j, t * ) = ∃ (k, l) ∈ t * , and i < k < j < l or k < i < l < j. Max Violation Updates Given that we maintain loss-augmented scores even for partial trees, we can perform a training update on a given example sentence by choosing to take the loss where it is the greatest along the parse trajectory."
                    },
                    {
                        "id": 73,
                        "string": "At each parsing time-step , the violation is the difference between the highest augmented-scoring parse trajectory up to that point and the gold trajectory (Huang et al., 2012; Yu et al., 2013) ."
                    },
                    {
                        "id": 74,
                        "string": "Note that computing the violation gives us the max-margin loss described above."
                    },
                    {
                        "id": 75,
                        "string": "Taking the largest violation from all time-steps gives us the max-violation loss."
                    },
                    {
                        "id": 76,
                        "string": "Experiments We present experiments on the Penn Treebank (Marcus et al., 1993) and the PTB-RST discourse treebank (Zhao and Huang, 2017) ."
                    },
                    {
                        "id": 77,
                        "string": "In both cases, the training set is shuffled before each epoch, and dropout (Hinton et al., 2012) is employed with probability 0.4 to the recurrent outputs for regularization."
                    },
                    {
                        "id": 78,
                        "string": "Updates with minibatches of size 10 and 1 are used for PTB and the PTB-RST respectively."
                    },
                    {
                        "id": 79,
                        "string": "We use Adam (Kingma and Ba, 2014) with default settings to schedule learning rates for all the weights."
                    },
                    {
                        "id": 80,
                        "string": "To address unknown words during training, we adopt the strategy described by Kiperwasser and Goldberg (Kiperwasser and Goldberg, 2016) ; words in the training set are replaced with the unknown word symbol UNK with probability p unk = Socher et al."
                    },
                    {
                        "id": 81,
                        "string": "(2013) 90.4 Durrett and Klein (2015) 91.1 Cross and Huang (2016) 90.  occurrences of word w in the training corpus."
                    },
                    {
                        "id": 82,
                        "string": "Our system is implemented in Python using the DyNet neural network library (Neubig et al., 2017) ."
                    },
                    {
                        "id": 83,
                        "string": "Penn Treebank We use the Wall Street Journal portion of the Penn Treebank, with the standard split of sections 2-21 for training, 22 for development, and 23 for testing."
                    },
                    {
                        "id": 84,
                        "string": "Tags are provided using the Stanford tagger with 10-way jackknifing."
                    },
                    {
                        "id": 85,
                        "string": "Table 1 shows our development results and overall speeds, while Table 2 compares our test results."
                    },
                    {
                        "id": 86,
                        "string": "We show that a beam size of 20 can be fast while still achieving state-of-the-art performances."
                    },
                    {
                        "id": 87,
                        "string": "Discourse Parsing To measure the tractability of parsing on longer sequences, we also consider experiments on the Table 3 , broken down to focus on the discourse labels."
                    },
                    {
                        "id": 88,
                        "string": "PTB-RST discourse Treebank, a joint discourse and constituency dataset with a combined representation, allowing for parsing at either level (Zhao and Huang, 2017) ."
                    },
                    {
                        "id": 89,
                        "string": "We compare our runtimes out-of-the-box in Figure 3 ."
                    },
                    {
                        "id": 90,
                        "string": "Without any pre-processing, and by treating discourse examples as constituency trees with thousands of words, our trained models represent end-to-end discourse parsing systems."
                    },
                    {
                        "id": 91,
                        "string": "For our overall constituency results in Table 3 , and for discourse results in Table 4 , we adapt the split-point feature described in (Zhao and Huang, 2017) in addition to the base parser."
                    },
                    {
                        "id": 92,
                        "string": "We find that larger beamsizes are required to achieve good discourse scores."
                    },
                    {
                        "id": 93,
                        "string": "Conclusions We have developed a new neural parser that maintains linear time, while still searching over an exponentially large space."
                    },
                    {
                        "id": 94,
                        "string": "We also use cube pruning to further improve the runtime to O(nb log b)."
                    },
                    {
                        "id": 95,
                        "string": "For training, we introduce a new loss function, and achieve state-of-the-art results among singlemodel end-to-end systems."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 16
                    },
                    {
                        "section": "Span-Based Shift-Reduce Parsing",
                        "n": "2.1",
                        "start": 17,
                        "end": 24
                    },
                    {
                        "section": "Bi-LSTM features",
                        "n": "2.2",
                        "start": 25,
                        "end": 27
                    },
                    {
                        "section": "Score Decomposition",
                        "n": "3.1",
                        "start": 28,
                        "end": 31
                    },
                    {
                        "section": "Graph-Struct. Stack w/o Bookkeeping",
                        "n": "3.2",
                        "start": 32,
                        "end": 43
                    },
                    {
                        "section": "Action-Synchronous Beam Search",
                        "n": "3.3",
                        "start": 44,
                        "end": 47
                    },
                    {
                        "section": "Cube Pruning",
                        "n": "3.4",
                        "start": 48,
                        "end": 54
                    },
                    {
                        "section": "Training",
                        "n": "4",
                        "start": 55,
                        "end": 65
                    },
                    {
                        "section": "Cross-Span Loss",
                        "n": "4.1",
                        "start": 66,
                        "end": 71
                    },
                    {
                        "section": "Max Violation Updates",
                        "n": "4.2",
                        "start": 72,
                        "end": 75
                    },
                    {
                        "section": "Experiments",
                        "n": "5",
                        "start": 76,
                        "end": 82
                    },
                    {
                        "section": "Penn Treebank",
                        "n": "5.1",
                        "start": 83,
                        "end": 86
                    },
                    {
                        "section": "Discourse Parsing",
                        "n": "5.2",
                        "start": 87,
                        "end": 91
                    },
                    {
                        "section": "Conclusions",
                        "n": "6",
                        "start": 92,
                        "end": 95
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1224-Table1-1.png",
                        "caption": "Table 1: Comparison of PTB development set results, with the time measured in seconds-persentence. The baseline chart parser is from Stern et al. (2017b), with null-label scores unconstrained to be nonzero, replicating their paper.",
                        "page": 4,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 289.44,
                            "y1": 62.879999999999995,
                            "y2": 150.23999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1224-Table4-1.png",
                        "caption": "Table 4: F1 scores comparing discourse systems. Results correspond to the accuracies in Table 3, broken down to focus on the discourse labels.",
                        "page": 4,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 524.16,
                            "y1": 216.48,
                            "y2": 281.28
                        }
                    },
                    {
                        "filename": "../figure/image/1224-Table2-1.png",
                        "caption": "Table 2: Final PTB Test Results. We compare our models with other (neural) single-model end-toend trained systems.",
                        "page": 4,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 289.44,
                            "y1": 240.48,
                            "y2": 444.0
                        }
                    },
                    {
                        "filename": "../figure/image/1224-Table3-1.png",
                        "caption": "Table 3: Overall test accuracies for PTB-RST discourse treebank. Starred? rows indicate a run that was decoded from the beam 200 model.",
                        "page": 4,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 523.1999999999999,
                            "y1": 62.879999999999995,
                            "y2": 155.04
                        }
                    },
                    {
                        "filename": "../figure/image/1224-Figure1-1.png",
                        "caption": "Figure 1: Our shift-reduce deductive system. Here ` is the step index, c and v are prefix and inside scores. Unlike Huang and Sagae (2010) and Cross and Huang (2016), ξ and σ are not shift/reduce scores; instead, they are the (best) label scores of the resulting span: ξ = maxX s(j, j+1, X) and σ = maxX s(k, j,X) where X is a nonterminal symbol (could be ∅). Here `′ = `− 2(j − i) + 1.",
                        "page": 1,
                        "bbox": {
                            "x1": 314.88,
                            "x2": 515.04,
                            "y1": 64.32,
                            "y2": 208.32
                        }
                    },
                    {
                        "filename": "../figure/image/1224-Figure3-1.png",
                        "caption": "Figure 3: Runtime plot of decoding the discourse treebank training set. The log-log plot on the right shows the cubic complexity of baseline chart parsing. Whereas beam search decoding maintains linear time even for sequences of thousands of words.",
                        "page": 3,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 522.24,
                            "y1": 64.8,
                            "y2": 202.56
                        }
                    },
                    {
                        "filename": "../figure/image/1224-Figure2-1.png",
                        "caption": "Figure 2: Runtime plots of decoding on the training set of the Penn Treebank. The differences between the different algorithms become evident after sentences of length 40. The regression curves have been empirically fitted.",
                        "page": 3,
                        "bbox": {
                            "x1": 84.47999999999999,
                            "x2": 278.4,
                            "y1": 64.8,
                            "y2": 259.68
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-53"
        },
        {
            "slides": {
                "0": {
                    "title": "Query Auto Completion",
                    "text": [
                        "Search engine suggests queries as the user types",
                        "an LSTM to generate completions",
                        "Memory savings over most popular completion",
                        "Handles previously unseen prefixes",
                        "Can we do better by adapting the LM to provide personalized suggestions?"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "3": {
                    "title": "Learning",
                    "text": [
                        "User embeddings, recurrent layer weights and {L, R} tensor learned jointly",
                        "Need online learning to adapt to users that were not previously seen",
                        "In joint training, learn a cold-start embedding for set of infrequent users",
                        "Initialize each users embedding with learned cold-start vector",
                        "After user selects a query, back-propagate and only update the user embedding"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "6": {
                    "title": "Qualitative Comparison",
                    "text": [
                        "What queries are boosted the most after searching for high school",
                        "softball and math homework help?",
                        "high school musical horoscope",
                        "chris brown high school musical",
                        "funnyjunk.com homes for sale",
                        "chat room hair styles",
                        "Queries that most decrease in likelihood with the",
                        "FactorCell include travel agencies and plane tickets."
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "8": {
                    "title": "Conclusions",
                    "text": [
                        "Personalization helps and the benefit increases as more queries are seen",
                        "Stronger adaptation of the recurrent layer (FactorCell) gives better results than concatenating a user vector",
                        "No extra latency/computation due to caching of adapted weight matrix",
                        "Try out the FactorCell on your data"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                }
            },
            "paper_title": "Personalized Language Model for Query Auto-Completion",
            "paper_id": "1225",
            "paper": {
                "title": "Personalized Language Model for Query Auto-Completion",
                "abstract": "Query auto-completion is a search engine feature whereby the system suggests completed queries as the user types. Recently, the use of a recurrent neural network language model was suggested as a method of generating query completions. We show how an adaptable language model can be used to generate personalized completions and how the model can use online updating to make predictions for users not seen during training. The personalized predictions are significantly better than a baseline that uses no user information.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Query auto-completion (QAC) is a feature used by search engines that provides a list of suggested queries for the user as they are typing."
                    },
                    {
                        "id": 1,
                        "string": "For instance, if the user types the prefix \"mete\" then the system might suggest \"meters\" or \"meteorite\" as completions."
                    },
                    {
                        "id": 2,
                        "string": "This feature can save the user time and reduce cognitive load (Cai et al., 2016) ."
                    },
                    {
                        "id": 3,
                        "string": "Most approaches to QAC are extensions of the Most Popular Completion (MPC) algorithm (Bar-Yossef and Kraus, 2011) ."
                    },
                    {
                        "id": 4,
                        "string": "MPC suggests completions based on the most popular queries in the training data that match the specified prefix."
                    },
                    {
                        "id": 5,
                        "string": "One way to improve MPC is to consider additional signals such as temporal information (Shokouhi and Radinsky, 2012; Whiting and Jose, 2014) or information gleaned from a users' past queries (Shokouhi, 2013) ."
                    },
                    {
                        "id": 6,
                        "string": "This paper deals with the latter of those two signals, i.e."
                    },
                    {
                        "id": 7,
                        "string": "personalization."
                    },
                    {
                        "id": 8,
                        "string": "Personalization relies on the fact that query likelihoods are drastically different among different people depending on their needs and interests."
                    },
                    {
                        "id": 9,
                        "string": "Recently, Park and Chiba (2017) suggested a significantly different approach to QAC."
                    },
                    {
                        "id": 10,
                        "string": "In their Cold Start Warm Start 1 bank of america bank of america 2 barnes and noble basketball 3 babiesrus baseball 4 baby names barnes and noble 5 bank one baltimore Table 1 : Top five completions for the prefix \"ba\" for a cold start model with no user knowledge and a warm model that has seen the queries espn, sports news, nascar, yankees, and nba."
                    },
                    {
                        "id": 11,
                        "string": "work, completions are generated from a character LSTM language model instead of by ranking completions retrieved from a database, as in the MPC algorithm."
                    },
                    {
                        "id": 12,
                        "string": "This approach is able to complete queries whose prefixes were not seen during training and has significant memory savings over having to store a large query database."
                    },
                    {
                        "id": 13,
                        "string": "Building on this work, we consider the task of personalized QAC, advancing current methods by combining the obvious advantages of personalization with the effectiveness of a language model in handling rare and previously unseen prefixes."
                    },
                    {
                        "id": 14,
                        "string": "The model must learn how to extract information from a user's past queries and use it to adapt the generative model for that person's future queries."
                    },
                    {
                        "id": 15,
                        "string": "To do this, we leverage recent advances in contextadaptive neural language modeling."
                    },
                    {
                        "id": 16,
                        "string": "In particular, we make use of the recently introduced FactorCell model that uses an embedding vector to additively transform the weights of the language model's recurrent layer with a low-rank matrix (Jaech and Ostendorf, 2017) ."
                    },
                    {
                        "id": 17,
                        "string": "By allowing a greater fraction of the weights to change during personalization, the FactorCell model has advantages over the traditional approach to adaptation of concatenating a context vector to the input of the LSTM (Mikolov and Zweig, 2012) ."
                    },
                    {
                        "id": 18,
                        "string": "Table 1 provides an anecdotal example from  the trained FactorCell model to demonstrate the  intended behavior."
                    },
                    {
                        "id": 19,
                        "string": "The table shows the top five  completions for the prefix \" ba\" in a cold start scenario and again after the user has completed five sports related queries."
                    },
                    {
                        "id": 20,
                        "string": "In the warm start scenario, the \"baby names\" and \"babiesrus\" completions no longer appear in the top five and have been replaced with \"basketball\" and \"baseball\"."
                    },
                    {
                        "id": 21,
                        "string": "The novel aspects of this work are the application of an adaptive language model to the task of QAC personalization and the demonstration of how RNN language models can be adapted to contexts (users) not seen during training."
                    },
                    {
                        "id": 22,
                        "string": "An additional contribution is showing that a richer adaptation framework gives added gains with added data."
                    },
                    {
                        "id": 23,
                        "string": "Model Adaptation depends on learning an embedding for each user, which we discuss in Section 2.1, and then using that embedding to adjust the weights of the recurrent layer, discussed in Section 2.2."
                    },
                    {
                        "id": 24,
                        "string": "Learning User Embeddings During training, we learn an embedding for each of the users."
                    },
                    {
                        "id": 25,
                        "string": "We think of these embeddings as holding latent demographic factors for each user."
                    },
                    {
                        "id": 26,
                        "string": "Users who have less than 15 queries in the training data (around half the users but less than 13% of the queries) are grouped together as a single entity, user 1 , leaving k users."
                    },
                    {
                        "id": 27,
                        "string": "The user embeddings matrix U k×m , where m is the user embedding size, is learned via back-propagation as part of the end-toend model."
                    },
                    {
                        "id": 28,
                        "string": "The embedding for an individual user is the ith row of U and is denoted by u i ."
                    },
                    {
                        "id": 29,
                        "string": "It is important to be able to apply the model to users that are not seen during training."
                    },
                    {
                        "id": 30,
                        "string": "This is done by online updating of the user embeddings during evaluation."
                    },
                    {
                        "id": 31,
                        "string": "When a new person, user k+1 is seen, a new row is added to U and initialized to u 1 ."
                    },
                    {
                        "id": 32,
                        "string": "Each person's user embedding is updated via back-propagation every time they select a query."
                    },
                    {
                        "id": 33,
                        "string": "When doing online updating of the user embeddings, the rest of the model parameters (everything except U) are frozen."
                    },
                    {
                        "id": 34,
                        "string": "Recurrent Layer Adaptation We consider three model architectures which differ only in the method for adapting the recurrent layer."
                    },
                    {
                        "id": 35,
                        "string": "First is the unadapted LM, analogous to the model from Park and Chiba (2017) , which does no personalization."
                    },
                    {
                        "id": 36,
                        "string": "The second architecture was introduced by Mikolov and Zweig (2012) and has been used multiple times for LM personalization (Wen et al., 2013; Huang et al., 2014; Li et al., 2016) ."
                    },
                    {
                        "id": 37,
                        "string": "It works by concatenating a user embedding to the character embedding at every step of the input to the recurrent layer."
                    },
                    {
                        "id": 38,
                        "string": "Jaech and Ostendorf (2017) refer to this model as the ConcatCell and show that it is equivalent to adding a term Vu to adjust the bias of the recurrent layer."
                    },
                    {
                        "id": 39,
                        "string": "The hidden state of a ConcatCell with embedding size e and hidden state size h is given in Equation 1 where σ is the activation function, w t is the character embedding, h t−1 is the previous hidden state, and W ∈ R e+h×h and b ∈ R h are the recurrent layer weight matrix and bias vector."
                    },
                    {
                        "id": 40,
                        "string": "h t = σ([w t , h t−1 ]W + b + Vu) (1) Adapting just the bias vector is a significant limitation."
                    },
                    {
                        "id": 41,
                        "string": "The FactorCell model, (Jaech and Ostendorf, 2017) , remedies this by letting the user embedding transform the weights of the recurrent layer via the use of a low-rank adaptation matrix."
                    },
                    {
                        "id": 42,
                        "string": "The FactorCell uses a weight matrix W = W + A that has been additively transformed by a personalized low-rank matrix A."
                    },
                    {
                        "id": 43,
                        "string": "Because the Fac-torCell weight matrix W is different for each user (See Equation 2), it allows for a much stronger adaptation than what is possible using the more standard ConcatCell model."
                    },
                    {
                        "id": 44,
                        "string": "1 h t = σ([w t , h t−1 ]W + b) (2) The low-rank adaptation matrix A is generated by taking the product between a user's m dimensional embedding and left and right bases tensors, Z L ∈ R m×e+h×r and Z R ∈ R r×h×m as so, A = (u i × 1 Z L )(Z R × 3 u i ) (3) where × i denotes the mode-i tensor product."
                    },
                    {
                        "id": 45,
                        "string": "The above product selects a user specific adaptation matrix by taking a weighted combination of the m rank r matrices held between Z L and Z R ."
                    },
                    {
                        "id": 46,
                        "string": "The rank, r, is a hyperparameter which controls the degree of personalization."
                    },
                    {
                        "id": 47,
                        "string": "Data Our experiments make use of the AOL Query data collected over three months in 2006 (Pass et al., 2006) ."
                    },
                    {
                        "id": 48,
                        "string": "The first six of the ten files were used for training."
                    },
                    {
                        "id": 49,
                        "string": "This contains approximately 12 million queries from 173,000 users for an average of 70 queries per user (median 15)."
                    },
                    {
                        "id": 50,
                        "string": "A set of 240,000 queries from those same users (2% of the data) was reserved for tuning and validation."
                    },
                    {
                        "id": 51,
                        "string": "From the remaining files, one million queries from 30,000 users are used to test the models on a disjoint set of users."
                    },
                    {
                        "id": 52,
                        "string": "Experiments Implementation Details The vocabulary consists of 79 characters including special start and stop tokens."
                    },
                    {
                        "id": 53,
                        "string": "Models were trained for six epochs."
                    },
                    {
                        "id": 54,
                        "string": "The Adam optimizer is used during training with a learning rate of 10 −3 (Kingma and Ba, 2014) ."
                    },
                    {
                        "id": 55,
                        "string": "When updating the user embeddings during evaluation, we found that it is easier to use an optimizer without momentum."
                    },
                    {
                        "id": 56,
                        "string": "We use Adadelta (Zeiler, 2012) and tune the online learning rate to give the best perplexity on a held-out set of 12,000 queries, having previously verified that perplexity is a good indicator of performance on the QAC task."
                    },
                    {
                        "id": 57,
                        "string": "2 The language model is a single-layer characterlevel LSTM with coupled input and forget gates and layer normalization (Melis et al., 2018; Ba et al., 2016) ."
                    },
                    {
                        "id": 58,
                        "string": "We do experiments on two model configurations: small and large."
                    },
                    {
                        "id": 59,
                        "string": "The small models use an LSTM hidden state size of 300 and 20 dimensional user embeddings."
                    },
                    {
                        "id": 60,
                        "string": "The large models use a hidden state size of 600 and 40 dimensional user embeddings."
                    },
                    {
                        "id": 61,
                        "string": "Both sizes use 24 dimensional character embeddings."
                    },
                    {
                        "id": 62,
                        "string": "For the small sized models, we experimented with different values of the FactorCell rank hyperparameter between 30 and 50 dimensions finding that bigger rank is better."
                    },
                    {
                        "id": 63,
                        "string": "The large sized models used a fixed value of 60 for the rank hyperparemeter."
                    },
                    {
                        "id": 64,
                        "string": "During training only and due to limited computational resources, queries are truncated to a length of 40 characters."
                    },
                    {
                        "id": 65,
                        "string": "Prefixes are selected uniformly at random with the constraint that they contain at least two characters in the prefix and that there is at least one character in the completion."
                    },
                    {
                        "id": 66,
                        "string": "To generate completions using beam search, we use a beam width of 100 and a branching factor of 4."
                    },
                    {
                        "id": 67,
                        "string": "Results are reported using mean reciprocal rank (MRR), the standard method of evaluating QAC systems."
                    },
                    {
                        "id": 68,
                        "string": "It is the mean of the reciprocal rank of the true completion in the 2 Code at http://github.com/ajaech/query completion Table 2 : MRR reported for seen and unseen prefixes for small (S) and big (B) models."
                    },
                    {
                        "id": 69,
                        "string": "top ten proposed completions."
                    },
                    {
                        "id": 70,
                        "string": "The reciprocal rank is zero if the true completion is not in the top ten."
                    },
                    {
                        "id": 71,
                        "string": "Neural models are compared against an MPC baseline."
                    },
                    {
                        "id": 72,
                        "string": "Following Park and Chiba (2017), we remove queries seen less than three times from the MPC training data."
                    },
                    {
                        "id": 73,
                        "string": "Table 2 compares the performance of the different models against the MPC baseline on a test set of one million queries from a user population that is disjoint with the training set."
                    },
                    {
                        "id": 74,
                        "string": "Results are presented separately for prefixes that are seen or unseen in the training data."
                    },
                    {
                        "id": 75,
                        "string": "Consistent with prior work, the neural models do better than the MPC baseline."
                    },
                    {
                        "id": 76,
                        "string": "The personalized models are both better than the unadapted one."
                    },
                    {
                        "id": 77,
                        "string": "The FactorCell model is the best overall in both the big and small sized experiments, but the gain is mainly for the seen prefixes."
                    },
                    {
                        "id": 78,
                        "string": "Figure 1 shows the relative improvement in MRR over an unpersonalized model versus the number of queries seen per user."
                    },
                    {
                        "id": 79,
                        "string": "Both the Factor- Cell and the ConcatCell show continued improvement as more queries from each user are seen, and the FactorCell outperforms the ConcatCell by an increasing margin over time."
                    },
                    {
                        "id": 80,
                        "string": "In the long run, we expect that the system will have seen many queries from most users."
                    },
                    {
                        "id": 81,
                        "string": "Therefore, the right side of Figure 1, where the relative gain of FactorCell is up to 2% better than that of the ConcatCell, is more indicative of the potential of these models for active users."
                    },
                    {
                        "id": 82,
                        "string": "Since the data was collected over a limited time frame and half of all users have fifteen or fewer queries, the results in Table 2 do not reflect the full benefit of personalization."
                    },
                    {
                        "id": 83,
                        "string": "Figure 2 shows the MRR for different prefix and query lengths."
                    },
                    {
                        "id": 84,
                        "string": "We find that longer prefixes help the model make longer completions and (more obviously) shorter completions have higher MRR."
                    },
                    {
                        "id": 85,
                        "string": "Comparing the personalized model against the unpersonalized baseline, we see that the biggest gains are for short queries and prefixes of length one or two."
                    },
                    {
                        "id": 86,
                        "string": "Results We found that one reason why the FactorCell outperforms the ConcatCell is that it is able to pick up sooner on the repetitive search behaviors that some users have."
                    },
                    {
                        "id": 87,
                        "string": "This commonly happens for navigational queries where someone searches for the name of their favorite website once or more per day."
                    },
                    {
                        "id": 88,
                        "string": "At the extreme tail there are users who search for nothing but free online poker."
                    },
                    {
                        "id": 89,
                        "string": "Both models do well on these highly predictable users but the Fac-torCell is generally a bit quicker to adapt."
                    },
                    {
                        "id": 90,
                        "string": "We conducted case studies to better understand what information is represented in the user embeddings and what makes the FactorCell different from the ConcatCell."
                    },
                    {
                        "id": 91,
                        "string": "From a cold start user embedding we ran two queries and allowed the model to update the user embedding."
                    },
                    {
                        "id": 92,
                        "string": "Then, we ranked FactorCell ConcatCell 1 high school musical  horoscope  2  chris brown  high school musical  3  funnyjunk.com  homes for sale  4 funbrain.com modular homes 5 chat room hair styles Table 3 : The five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for \"high school softball\" and \"math homework help\"."
                    },
                    {
                        "id": 93,
                        "string": "the most frequent 1,500 queries based on the ratio of their likelihood from before and after updating the user embeddings."
                    },
                    {
                        "id": 94,
                        "string": "Tables 3 and 4 show the queries with the highest relative likelihood of the adapted vs. unadapted models after two related search queries: \"high school softball\" and \"math homework help\" for Table 3 , and \"Prada handbags\" and \"Versace eyewear\" for Table 4 ."
                    },
                    {
                        "id": 95,
                        "string": "In both cases, the Factor-Cell model examples are more semantically coherent than the ConcatCell examples."
                    },
                    {
                        "id": 96,
                        "string": "In the first case, the FactorCell model identifies queries that a high school student might make, including entertainment sources and a celebrity entertainer popular with that demographic."
                    },
                    {
                        "id": 97,
                        "string": "In the second case, the FactorCell model chooses retailers that carry woman's apparel and those that sell home goods."
                    },
                    {
                        "id": 98,
                        "string": "While these companies' brands are not as luxurious as Prada or Versace, most of the top luxury brand names do not appear in the top 1,500 queries and our model may not be capable of being that specific."
                    },
                    {
                        "id": 99,
                        "string": "There is no obvious semantic connection between the highest likelihood ratio phrases for the ConcatCell; it seems to be focusing more on orthography than semantics (e.g."
                    },
                    {
                        "id": 100,
                        "string": "\"home\" in the first example).. Not shown are the queries which experienced the greatest decrease in likelihood."
                    },
                    {
                        "id": 101,
                        "string": "For the \"high school\" case, these included searches for travel agencies and airline ticketswebsites not targeted towards the high school age demographic."
                    },
                    {
                        "id": 102,
                        "string": "Related Work While the standard implementation of MPC can not handle unseen prefixes, there are variants which do have that ability."
                    },
                    {
                        "id": 103,
                        "string": "Park and Chiba (2017) find that the neural LM outperforms MPC even when MPC has been augmented with the approach from Mitra and Craswell (2015) for handling rare FactorCell ConcatCell 1  neiman marcus  craigslist nyc  2  pottery barn  myspace layouts  3  jc penney  verizon wireless  4 verizon wireless jensen ackles 5 bed bath and beyond webster dictionary Table 4 : The five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for \"prada handbags\" and \"versace eyewear\"."
                    },
                    {
                        "id": 104,
                        "string": "prefixes."
                    },
                    {
                        "id": 105,
                        "string": "There has also been work on personalizing MPC (Shokouhi, 2013; Cai et al., 2014) ."
                    },
                    {
                        "id": 106,
                        "string": "We did not compare against these specific models because our goal was to show how personalization can improve the already-proven generative neural model approach."
                    },
                    {
                        "id": 107,
                        "string": "RNN's have also previously been used for the related task of next query suggestion (Sordoni et al., 2015) ."
                    },
                    {
                        "id": 108,
                        "string": "Our results are not directly comparable to Park and Chiba (2017) or Mitra and Craswell (2015) due to differences in the partitioning of the data and the method for selecting random prefixes."
                    },
                    {
                        "id": 109,
                        "string": "Prior work partitions the data by time instead of by user."
                    },
                    {
                        "id": 110,
                        "string": "Splitting by users is necessary in order to properly test personalization over longer time ranges."
                    },
                    {
                        "id": 111,
                        "string": "Wang et al."
                    },
                    {
                        "id": 112,
                        "string": "(2018) show how spelling correction can be integrated into an RNN language model query auto-completion system and how the completions can be generated in real time using a GPU."
                    },
                    {
                        "id": 113,
                        "string": "Our method of updating the model during evaluation resembles work on dynamic evaluation for language modeling (Krause et al., 2017) , but differs in that only the user embeddings (latent demographic factors) are updated."
                    },
                    {
                        "id": 114,
                        "string": "Conclusion and Future Work Our experiments show that the LSTM model can be improved using personalization."
                    },
                    {
                        "id": 115,
                        "string": "The method of adapting the recurrent layer clearly matters and we obtained an advantage by using the FactorCell model."
                    },
                    {
                        "id": 116,
                        "string": "The reason the FactorCell does better is in part attributable to having two to three times as many parameters in the recurrent layer as either the ConcatCell or the unadapted models."
                    },
                    {
                        "id": 117,
                        "string": "By design, the adapted weight matrix W only needs to be computed at most once per query and is reused many thousands of times during beam search."
                    },
                    {
                        "id": 118,
                        "string": "As a result, for a given latency budget, the FactorCell model outperforms the Mikolov and Zweig (2012) model for LSTM adaptation."
                    },
                    {
                        "id": 119,
                        "string": "The cost for updating the user embeddings is similar to the cost of the forward pass and depends on the size of the user embedding, hidden state size, FactorCell rank, and query length."
                    },
                    {
                        "id": 120,
                        "string": "In most cases there will be time between queries for updates, but updates can be less frequent to reduce computational costs."
                    },
                    {
                        "id": 121,
                        "string": "We also showed that language model personalization can be effective even on users who are not seen during training."
                    },
                    {
                        "id": 122,
                        "string": "The benefits of personalization are immediate and increase over time as the system continues to leverage the incoming data to build better user representations."
                    },
                    {
                        "id": 123,
                        "string": "The approach can easily be extended to include time as an additional conditioning factor."
                    },
                    {
                        "id": 124,
                        "string": "We leave the question of whether the results can be improved by combining the language model with MPC for future work."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 22
                    },
                    {
                        "section": "Model",
                        "n": "2",
                        "start": 23,
                        "end": 23
                    },
                    {
                        "section": "Learning User Embeddings",
                        "n": "2.1",
                        "start": 24,
                        "end": 33
                    },
                    {
                        "section": "Recurrent Layer Adaptation",
                        "n": "2.2",
                        "start": 34,
                        "end": 46
                    },
                    {
                        "section": "Data",
                        "n": "3",
                        "start": 47,
                        "end": 51
                    },
                    {
                        "section": "Implementation Details",
                        "n": "4.1",
                        "start": 52,
                        "end": 85
                    },
                    {
                        "section": "Results",
                        "n": "4.2",
                        "start": 86,
                        "end": 101
                    },
                    {
                        "section": "Related Work",
                        "n": "5",
                        "start": 102,
                        "end": 113
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "6",
                        "start": 114,
                        "end": 124
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1225-Figure1-1.png",
                        "caption": "Figure 1: Relative improvement in MRR over the unpersonalized model versus queries seen using the large size models. Plot uses a moving average of width 9 to reduce noise.",
                        "page": 2,
                        "bbox": {
                            "x1": 313.92,
                            "x2": 519.36,
                            "y1": 219.84,
                            "y2": 363.36
                        }
                    },
                    {
                        "filename": "../figure/image/1225-Table2-1.png",
                        "caption": "Table 2: MRR reported for seen and unseen prefixes for small (S) and big (B) models.",
                        "page": 2,
                        "bbox": {
                            "x1": 314.88,
                            "x2": 518.4,
                            "y1": 64.8,
                            "y2": 173.28
                        }
                    },
                    {
                        "filename": "../figure/image/1225-Table4-1.png",
                        "caption": "Table 4: The five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for “prada handbags” and “versace eyewear”.",
                        "page": 4,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 288.0,
                            "y1": 64.8,
                            "y2": 142.56
                        }
                    },
                    {
                        "filename": "../figure/image/1225-Table3-1.png",
                        "caption": "Table 3: The five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for “high school softball” and “math homework help”.",
                        "page": 3,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 64.8,
                            "y2": 142.56
                        }
                    },
                    {
                        "filename": "../figure/image/1225-Figure2-1.png",
                        "caption": "Figure 2: MRR by prefix and query lengths for the large FactorCell and unadapted models with the first 50 queries per user excluded.",
                        "page": 3,
                        "bbox": {
                            "x1": 89.75999999999999,
                            "x2": 272.15999999999997,
                            "y1": 61.44,
                            "y2": 190.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1225-Table1-1.png",
                        "caption": "Table 1: Top five completions for the prefix “ba” for a cold start model with no user knowledge and a warm model that has seen the queries espn, sports news, nascar, yankees, and nba.",
                        "page": 0,
                        "bbox": {
                            "x1": 320.64,
                            "x2": 511.2,
                            "y1": 224.16,
                            "y2": 300.47999999999996
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-54"
        },
        {
            "slides": {
                "1": {
                    "title": "Goal",
                    "text": [
                        "Exploring what specific semantic properties are directly reflected by such embeddings.",
                        "Focusing on a few select aspects of sentence semantics.",
                        "Concurrent related work: Conneau et al. ACL 2018",
                        "(i) Their work studies what you can learn to predict using 100,000 training instances",
                        "(ii) Our goal: Directly study the embeddings (via cosine similarity)",
                        "Zhu, Li & de Melo. Exploring Semantic Properties of Sentence Embeddings"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Approach Contrastive Sentences",
                    "text": [
                        "Minor alterations of a sentence may lead to notable shifts in meaning.",
                        "(i) A rabbit is jumping over the fence ( S",
                        "(ii) A rabbit is hopping over the fence ( S=",
                        "(iii) A rabbit is not jumping over the fence S*",
                        "Zhu, Li & de Melo. Exploring Semantic Properties of Sentence Embeddings"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "4": {
                    "title": "Negation Detection",
                    "text": [
                        "A person is slicing an onion.",
                        "A person is cutting an onion.",
                        "A person is not slicing an onion.",
                        "Zhu, Li & de Melo. Exploring Semantic Properties of Sentence Embeddings",
                        "Average of Word Embeddings is more easier misled by negation.",
                        "Both InferSent and SkipThought succeed in distinguishing unnegated sentences from negated ones.",
                        "Glove Avg P Means Sent2Vec SkipThought InferSent"
                    ],
                    "page_nums": [
                        5,
                        11
                    ],
                    "images": []
                },
                "5": {
                    "title": "Negation Variant",
                    "text": [
                        "A man is not standing on his head under water.",
                        "There is no man standing on his head under water.",
                        "A man is standing on his head under water.",
                        "Zhu, Li & de Melo. Exploring Semantic Properties of Sentence Embeddings",
                        "Both averaging of word embeddings and SkipThought are dismal in terms of the accuracy.",
                        "InferSent appears to have acquired a better understanding of negation quantifiers, as these are commonplace in many NLI datasets.",
                        "Glove Avg P Means Sent2Vec SkipThought InferSent"
                    ],
                    "page_nums": [
                        6,
                        12
                    ],
                    "images": []
                },
                "6": {
                    "title": "Clause Relatedness",
                    "text": [
                        "Octel said the purchase was expected.",
                        "Octel said the purchase was not expected",
                        "Glove Avg P Means Sent2Vec SkipThought InferSent Zhu, Li & de Melo. Exploring Semantic Properties of Sentence Embeddings",
                        "Both SkipThought vectors and InferSent works poorly when sub clause is much shorter than original one.",
                        "Sent2vec best in distinguishing the embedded clause of a sentence from a negation of that sentence."
                    ],
                    "page_nums": [
                        7,
                        13
                    ],
                    "images": []
                },
                "7": {
                    "title": "Argument Sensitivity",
                    "text": [
                        "Francesca teaches Adam to adjust the microphone on his stage",
                        "Adam is taught to adjust the microphone on his stage",
                        "Adam teaches Francesca to adjust the microphone on his stage",
                        "Glove Avg P Means Sent2Vec SkipThought InferSent Zhu, Li & de Melo. Exploring Semantic Properties of Sentence Embeddings",
                        "None of the analyzed approaches prove adept at distinguishing the semantic information from structural information in this case."
                    ],
                    "page_nums": [
                        8,
                        14
                    ],
                    "images": []
                },
                "8": {
                    "title": "Fixed Point Reordering",
                    "text": [
                        "A black dog in the snow is jumping off the ground and catching a stick.",
                        "Fixed Point Inversion(Corrupted Sentence):",
                        "In the snow is jumping off the ground and catching a stick a black dog.",
                        "Glove Avg P Means Sent2Vec SkipThought InferSent Zhu, Li & de Melo. Exploring Semantic Properties of Sentence Embeddings",
                        "Methods based on word embeddings do not encode sufficient word order information into the sentence embeddings.",
                        "SkipThought and InferSent did well when the original sentence and its semantically equivalence share similar structure"
                    ],
                    "page_nums": [
                        9,
                        15
                    ],
                    "images": []
                },
                "9": {
                    "title": "Models and Dataset",
                    "text": [
                        "Dataset Embedding Dim of Sentences From",
                        "Glove Avg Common Crawl Negation Detection SICK, SNLI",
                        "P Means Common Crawl Negation Variant SICK, SNLI",
                        "Sent2Vec English Wiki Clause Relatedness TreebankMSR Penn Paraphrase",
                        "SkipThought Book Corpus Argument Sensitivity SICK, MS Paraphrase",
                        "InferSent SNLI Fixed Point Reordering SICK",
                        "Zhu, Li & de Melo. Exploring Semantic Properties of Sentence Embeddings"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                }
            },
            "paper_title": "Exploring Semantic Properties of Sentence Embeddings",
            "paper_id": "1228",
            "paper": {
                "title": "Exploring Semantic Properties of Sentence Embeddings",
                "abstract": "Neural vector representations are ubiquitous throughout all subfields of NLP. While word vectors have been studied in much detail, thus far only little light has been shed on the properties of sentence embeddings. In this paper, we assess to what extent prominent sentence embedding methods exhibit select semantic properties. We propose a framework that generate triplets of sentences to explore how changes in the syntactic structure or semantics of a given sentence affect the similarities obtained between their sentence embeddings.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Neural vector representations have become ubiquitous in all subfields of natural language processing."
                    },
                    {
                        "id": 1,
                        "string": "For the case of word vectors, important properties of the representations have been studied, including their linear substructures (Mikolov et al., 2013; Levy and Goldberg, 2014) , the linear superposition of word senses (Arora et al., 2016b) , and the nexus to pointwise mutual information scores between co-occurring words (Arora et al., 2016a) ."
                    },
                    {
                        "id": 2,
                        "string": "However, thus far, only little is known about the properties of sentence embeddings."
                    },
                    {
                        "id": 3,
                        "string": "Sentence embedding methods attempt to encode a variablelength input sentence into a fixed length vector."
                    },
                    {
                        "id": 4,
                        "string": "A number of such sentence embedding methods have been proposed in recent years (Le and Mikolov, 2014; Kiros et al., 2015; Wieting et al., 2015; Conneau et al., 2017; Arora et al., 2017) ."
                    },
                    {
                        "id": 5,
                        "string": "Sentence embeddings have mainly been evaluated in terms of how well their cosine similarities mirror human judgments of semantic relatedness, typically with respect to the SemEval Semantic Textual Similarity competitions."
                    },
                    {
                        "id": 6,
                        "string": "The SICK dataset (Marelli et al., 2014) was created to better benchmark the effectiveness of different models across a broad range of challenging lexical, syntactic, and semantic phenomena, in terms of both similarities and the ability to be predictive of entailment."
                    },
                    {
                        "id": 7,
                        "string": "However, even on SICK, oftentimes very shallow methods prove effective at obtaining fairly competitive results (Wieting et al., 2015) ."
                    },
                    {
                        "id": 8,
                        "string": "Adi et al."
                    },
                    {
                        "id": 9,
                        "string": "investigated to what extent different embedding methods are predictive of i) the occurrence of words in the original sentence, ii) the order of words in the original sentence, and iii) the length of the original sentence (Adi et al., 2016 (Adi et al., , 2017 ."
                    },
                    {
                        "id": 10,
                        "string": "inspected neural machine translation systems with regard to their ability to acquire morphology, while Shi et al."
                    },
                    {
                        "id": 11,
                        "string": "(2016) investigated to what extent they learn source side syntax."
                    },
                    {
                        "id": 12,
                        "string": "Wang et al."
                    },
                    {
                        "id": 13,
                        "string": "(2016) argue that the latent representations of advanced neural reading comprehension architectures encode information about predication."
                    },
                    {
                        "id": 14,
                        "string": "Finally, sentence embeddings have also often been investigated in classification tasks such as sentiment polarity or question type classification (Kiros et al., 2015) ."
                    },
                    {
                        "id": 15,
                        "string": "Concurrently with our research, Conneau et al."
                    },
                    {
                        "id": 16,
                        "string": "(2018) investigated to what extent one can learn to classify specific syntactic and semantic properties of sentences using large amounts of training data (100,000 instances) for each property."
                    },
                    {
                        "id": 17,
                        "string": "Overall, still, remarkably little is known about what specific semantic properties are directly reflected by such embeddings."
                    },
                    {
                        "id": 18,
                        "string": "In this paper, we specifically focus on a few select aspects of sentence semantics and inspect to what extent prominent sentence embedding methods are able to capture them."
                    },
                    {
                        "id": 19,
                        "string": "Our framework generates triplets of sentences to explore how changes in the syntactic structure or semantics of a given sentence affect the similarities obtained between their sentence embeddings."
                    },
                    {
                        "id": 20,
                        "string": "Analysis To conduct our analysis, we proceed by generating new phenomena-specific evaluation datasets."
                    },
                    {
                        "id": 21,
                        "string": "Our starting point is that even minor alterations of a sentence may lead to notable shifts in meaning."
                    },
                    {
                        "id": 22,
                        "string": "For instance, a sentence S such as A rabbit is jumping over the fence and a sentence S * such as A rabbit is not jumping over the fence diverge with respect to many of the inferences that they warrant."
                    },
                    {
                        "id": 23,
                        "string": "Even if sentence S * is somewhat less idiomatic than alternative wordings such as There are no rabbits jumping over the fence, we nevertheless expect sentence embedding methods to interpret both correctly, just as humans do."
                    },
                    {
                        "id": 24,
                        "string": "Despite the semantic differences between the two sentences due to the negation, we still expect the cosine similarity between their respective embeddings to be fairly high, in light of their semantic relatedness in touching on similar themes."
                    },
                    {
                        "id": 25,
                        "string": "Hence, only comparing the similarity between sentence pairs of this sort does not easily lend itself to insightful automated analyses."
                    },
                    {
                        "id": 26,
                        "string": "Instead, we draw on another key idea."
                    },
                    {
                        "id": 27,
                        "string": "It is common for two sentences to be semantically close despite differences in their specific linguistic realizations."
                    },
                    {
                        "id": 28,
                        "string": "Building on the previous example, we can construct a further contrasting sentence S + such as A rabbit is hopping over the fence."
                    },
                    {
                        "id": 29,
                        "string": "This sentence is very close in meaning to sentence S, despite minor differences in the choice of words."
                    },
                    {
                        "id": 30,
                        "string": "In this case, we would want for the semantic relatedness between sentences S and S + to be assessed as higher than between sentence S and sentence S * ."
                    },
                    {
                        "id": 31,
                        "string": "We refer to this sort of scheme as sentence triplets."
                    },
                    {
                        "id": 32,
                        "string": "We rely on simple transformations to generate several different sets of sentence triplets."
                    },
                    {
                        "id": 33,
                        "string": "Sentence Modification Schemes In the following, we first describe the kinds of transformations we apply to generate altered sentences."
                    },
                    {
                        "id": 34,
                        "string": "Subsequently, in Section 2.2, we shall consider how to assemble such sentences into sentence triplets of various kinds so as to assess different semantic properties of sentence embeddings."
                    },
                    {
                        "id": 35,
                        "string": "Not-Negation."
                    },
                    {
                        "id": 36,
                        "string": "We negate the original sentence by inserting the negation marker not before the first verb of the original sentence A to generate a new sentence B, including contractions as appropriate, or removing negations when they are already present, as in: A: The young boy is climbing the wall made of rock."
                    },
                    {
                        "id": 37,
                        "string": "B: The young boy isn't climbing the wall made of rock."
                    },
                    {
                        "id": 38,
                        "string": "Quantifier-Negation."
                    },
                    {
                        "id": 39,
                        "string": "We prepend the quantifier expression there is no to original sentences beginning with A to generate new sentences."
                    },
                    {
                        "id": 40,
                        "string": "A: A girl is cutting butter into two pieces."
                    },
                    {
                        "id": 41,
                        "string": "B: There is no girl cutting butter into two pieces."
                    },
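To make the transformation concrete, here is a minimal sketch of the Quantifier-Negation rewrite described above. It only handles sentences that begin with an indefinite article and use the progressive "is VERB-ing" form; the regular expression and function name are illustrative assumptions, not the authors' code.

```python
import re

def quantifier_negation(sentence):
    # Rough heuristic for the pattern above: "A girl is cutting ..."
    # becomes "There is no girl cutting ...". Only the simple
    # progressive form is covered; anything else is left alone.
    m = re.match(r"An? (\w+) is (\w+ing)\b(.*)", sentence)
    if m is None:
        return None  # pattern does not apply
    subject, verb, rest = m.groups()
    return f"There is no {subject} {verb}{rest}"

# quantifier_negation("A girl is cutting butter into two pieces.")
# -> "There is no girl cutting butter into two pieces."
```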
                    {
                        "id": 42,
                        "string": "Synonym Substitution."
                    },
                    {
                        "id": 43,
                        "string": "We substitute the verb in the original sentence with an appropriate synonym to generate a new sentence B."
                    },
                    {
                        "id": 44,
                        "string": "A: The man is talking on the telephone."
                    },
                    {
                        "id": 45,
                        "string": "B: The man is chatting on the telephone."
                    },
                    {
                        "id": 46,
                        "string": "Embedded Clause Extraction."
                    },
                    {
                        "id": 47,
                        "string": "For those sentences containing verbs such as say, think with embedded clauses, we extract the clauses as the new sentence."
                    },
                    {
                        "id": 48,
                        "string": "A: Octel said the purchase was expected."
                    },
                    {
                        "id": 49,
                        "string": "B: The purchase was expected."
                    },
                    {
                        "id": 50,
                        "string": "Passivization."
                    },
                    {
                        "id": 51,
                        "string": "Sentences that are expressed in active voice are changed to passive voice."
                    },
                    {
                        "id": 52,
                        "string": "A: Harley asked Abigail to bake some muffins."
                    },
                    {
                        "id": 53,
                        "string": "B: Abigail is asked to bake some muffins."
                    },
                    {
                        "id": 54,
                        "string": "Argument Reordering."
                    },
                    {
                        "id": 55,
                        "string": "For sentences matching the structure \" somebody verb somebody to do something \", we swap the subject and object of the original sentence A to generate a new sentence B."
                    },
                    {
                        "id": 56,
                        "string": "A: Matilda encouraged Sophia to compete in a match."
                    },
                    {
                        "id": 57,
                        "string": "B: Sophia encouraged Matilda to compete in a match."
                    },
                    {
                        "id": 58,
                        "string": "Fixed Point Inversion."
                    },
                    {
                        "id": 59,
                        "string": "We select a word in the sentence as the pivot and invert the order of words before and after the pivot."
                    },
                    {
                        "id": 60,
                        "string": "The intuition here is that this simple corruption is likely to result in a new sentence that does not properly convey the original meaning, despite sharing the original words in common with it."
                    },
                    {
                        "id": 61,
                        "string": "Hence, these sorts of corruptions can serve as a useful diagnostic."
                    },
                    {
                        "id": 62,
                        "string": "A: A dog is running on concrete and is holding a blue ball B: concrete and is holding a blue ball a dog is running on."
                    },
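The corruption itself amounts to rotating the sentence around the pivot word ("on" in the example above). A minimal sketch, assuming whitespace tokenization; the function name and signature are illustrative:

```python
def fixed_point_inversion(sentence, pivot):
    """Invert the spans before and after the pivot, so that
    "A dog is running on | concrete and is holding a blue ball"
    becomes "concrete and is holding a blue ball a dog is running on."
    `pivot` is the index of the first word of the second span."""
    words = sentence.rstrip(".").split()
    return " ".join(words[pivot:] + words[:pivot]) + "."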
                    {
                        "id": 63,
                        "string": "Sentence Triplet Generation Given the above forms of modified sentences, we induce five evaluation datasets, consisting of triplets of sentences as follows."
                    },
                    {
                        "id": 64,
                        "string": "1."
                    },
                    {
                        "id": 65,
                        "string": "Negation Detection: Original sentence, Synonym Substitution, Not-Negation With this dataset, we seek to explore how well sentence embeddings can distinguish sentences with similar structure and opposite meaning, while using Synonym Substitution as the contrast set."
                    },
                    {
                        "id": 66,
                        "string": "We would want the similarity between the original sentence and the negated sentence to be lower than that between the original sentence and its synonym version."
                    },
                    {
                        "id": 67,
                        "string": "Negation Variants: Quantifier-Negation, Not-Negation, Original sentence In the second dataset, we aim to investigate how well the sentence embeddings reflect negation quantifiers."
                    },
                    {
                        "id": 68,
                        "string": "We posit that the similarity between the Quantifier-Negation and Not-Negation versions should be a bit higher than between either the Not-Negation or the Quantifier-Negation and original sentences."
                    },
                    {
                        "id": 69,
                        "string": "3."
                    },
                    {
                        "id": 70,
                        "string": "Clause Relatedness: Original sentence, Embedded Clause Extraction, Not-Negation In this third set, we want to explore whether the similarity between a sentence and its embedded clause is higher than between a sentence and its negation."
                    },
                    {
                        "id": 71,
                        "string": "Argument Sensitivity: Original sentence, Passivization, Argument Reordering With this last test, we wish to ascertain whether the sentence embeddings succeed in distinguishing semantic information from structural information."
                    },
                    {
                        "id": 72,
                        "string": "Consider, for instance, the following triplet."
                    },
                    {
                        "id": 73,
                        "string": "S: Lilly loves Imogen."
                    },
                    {
                        "id": 74,
                        "string": "S + : Imogen is loved by Lilly."
                    },
                    {
                        "id": 75,
                        "string": "S * : Imogen loves Lilly."
                    },
                    {
                        "id": 76,
                        "string": "Here, S and S + mostly share the same meaning, whereas S + and S * have a similar word order, but do not possess the same specific meaning."
                    },
                    {
                        "id": 77,
                        "string": "If the sentence embeddings focus more on semantic cues, then the similarity between S and S + ought to be larger than that between S + and S * ."
                    },
                    {
                        "id": 78,
                        "string": "If the sentence embedding however is easily misled by matching sentence structures, the opposite will be the case."
                    },
                    {
                        "id": 79,
                        "string": "Fixed Point Reorder: Original sentence, Semantically equivalent sentence, Fixed Point Inversion With this dataset, our objective is to explore how well the sentence embeddings account for shifts in meaning due to the word order in a sentence."
                    },
                    {
                        "id": 80,
                        "string": "We select sentence pairs from the SICK dataset according to their semantic relatedness score and entailment labeling."
                    },
                    {
                        "id": 81,
                        "string": "Sentence pairs with a high relatedness score and the Entailment tag are considered semantically similar sentences."
                    },
                    {
                        "id": 82,
                        "string": "We rely on the Levenshtein Distance as a filter to ensure a structural similarity between the two sentences, i.e., sentence pairs whose Levenshtein Distance is sufficiently high are regarded as eligible."
                    },
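A compact sketch of this filtering step, assuming token-level Levenshtein distance and an illustrative threshold parameter (the source does not report its exact value):

```python
def levenshtein(a, b):
    # Classic single-row dynamic-programming edit distance.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def eligible(s1, s2, threshold):
    # Keep the pair if the distance between the token sequences is
    # sufficiently high, as described in the text.
    return levenshtein(s1.split(), s2.split()) >= threshold
```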
                    {
                        "id": 83,
                        "string": "Additionally, we use the Fixed Point Inversion technique to generate a contrastive sentence."
                    },
                    {
                        "id": 84,
                        "string": "The resulting sentence likely no longer adequately reflects the original meaning."
                    },
                    {
                        "id": 85,
                        "string": "Hence, we expect that, on average, the similarity between the original sentence and the semantically similar sentence should be higher than that between the original sentence and the contrastive version."
                    },
                    {
                        "id": 86,
                        "string": "Experiments We now proceed to describe our experimental evaluation based on this paradigm."
                    },
                    {
                        "id": 87,
                        "string": "Datasets Using the aforementioned triplet generation methods, we create the evaluation datasets listed in Table 1, drawing on source sentences from SICK, Penn Treebank WSJ and MSR Paraphase corpus."
                    },
                    {
                        "id": 88,
                        "string": "Although the process to modify the sentences is automatic, we rely on human annotators to double-check the results for grammaticality and semantics."
                    },
                    {
                        "id": 89,
                        "string": "This is particularly important for synonym substitution, for which we relied on Word-Net (Fellbaum, 1998) ."
                    },
                    {
                        "id": 90,
                        "string": "Unfortunately, not all synonyms are suitable as replacements in a given context."
                    },
                    {
                        "id": 91,
                        "string": "Embedding Methods In our experiments, we compare three particularly prominent sentence embedding methods: 1."
                    },
                    {
                        "id": 92,
                        "string": "GloVe Averaging (GloVe Avg."
                    },
                    {
                        "id": 93,
                        "string": "): The simple approach of taking the average of the word vectors for all words in a sentence."
                    },
                    {
                        "id": 94,
                        "string": "Although this method neglects the order of words entirely, it can fare reasonably well on some of the most commonly invoked forms of evaluation (Wieting et al., 2015; Arora et al., 2017) ."
                    },
                    {
                        "id": 95,
                        "string": "Note that we here rely on regular unweighted GloVe vectors (Pennington et al., 2014) instead of fine-tuned or weighted word vectors."
                    },
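As a reference point, this whole baseline fits in a few lines. A minimal sketch, assuming `glove` is a word-to-vector dictionary (e.g. loaded from the published GloVe files) and simple whitespace tokenization; skipping out-of-vocabulary words is our convention here, not necessarily the authors':

```python
import numpy as np

def glove_avg(sentence, glove):
    # Unweighted average of the word vectors of all in-vocabulary words.
    vecs = [glove[w] for w in sentence.lower().split() if w in glove]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    # Cosine similarity, the relatedness measure used throughout.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```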
                    {
                        "id": 96,
                        "string": "2."
                    },
                    {
                        "id": 97,
                        "string": "Concatenated P-Mean Embeddings (P-Means): Rücklé et al."
                    },
                    {
                        "id": 98,
                        "string": "(2018) proposed concatenating different p-means of multiple kinds of word vectors."
                    },
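A simplified sketch of the idea for a single word-vector space: concatenating a few power means, here only p = 1, +inf and -inf (average, max and min, the numerically unproblematic cases). The published method also uses further values of p and several embedding types.

```python
import numpy as np

def p_means_concat(word_vectors):
    # Stack the word vectors and concatenate the elementwise average,
    # maximum and minimum (the p = 1, +inf and -inf power means).
    v = np.stack(word_vectors)
    return np.concatenate([v.mean(axis=0), v.max(axis=0), v.min(axis=0)])
```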
                    {
                        "id": 99,
                        "string": "3."
                    },
                    {
                        "id": 100,
                        "string": "Sent2Vec: Pagliardini et al."
                    },
                    {
                        "id": 101,
                        "string": "(2018) proposed a method to learn word and n-gram embeddings such that the average of all words and n-grams in a sentence can serve as a highquality sentence vector."
                    },
                    {
                        "id": 102,
                        "string": "The Skip-Thought Vector approach (SkipThought) by Kiros et al."
                    },
                    {
                        "id": 103,
                        "string": "(2015) applies the neighbour prediction intuitions of the word2vec Skip-Gram model at the level of entire sentences, as encoded and decoded via recurrent neural networks."
                    },
                    {
                        "id": 104,
                        "string": "The method trains an encoder to process an input sentence such that the resulting latent representation is optimized for predicting neighbouring sentences via the decoder."
                    },
                    {
                        "id": 105,
                        "string": "(Conneau et al., 2017) is based on supervision from an auxiliary task, namely the Stanford NLI dataset."
                    },
                    {
                        "id": 106,
                        "string": "InferSent Results and Discussion Negation Detection."
                    },
                    {
                        "id": 107,
                        "string": "Table 2 lists the results for the Negation Detection dataset, where S, S + , S * refer to the original, Synonym Substitution, and Not-Negation versions of the sentences, respectively."
                    },
                    {
                        "id": 108,
                        "string": "For each of the considered embedding methods, we first report the average cosine similarity scores between all relevant sorts of pairings of two sentences, i.e."
                    },
                    {
                        "id": 109,
                        "string": "between the original and the Synonym-Substitution sentences (S and S + ), between original and Not-Negated (S and S * ), and between Not-Negated and Synonym-Substitution (S + and S * )."
                    },
                    {
                        "id": 110,
                        "string": "Finally, in the last column, we report the Accuracy, computed as the percentage of sentence triplets for which the proximity relationships were as desired, i.e., the cosine similarity between the original and synonym-substituted versions was higher than the similarity between that same original and its Not-Negation version."
                    },
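This accuracy computation is the same across the five datasets (for Negation Variants the compared pairs differ, but the pattern is identical) and reduces to a few lines. A sketch, assuming an `embed` function mapping a sentence to its vector and the `cosine` helper from the earlier sketch:

```python
def triplet_accuracy(triplets, embed):
    # Fraction of triplets (S, S_plus, S_star) whose proximity
    # relationships come out as desired: sim(S, S+) > sim(S, S*).
    correct = sum(
        cosine(embed(s), embed(s_plus)) > cosine(embed(s), embed(s_star))
        for s, s_plus, s_star in triplets
    )
    return correct / len(triplets)
```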
                    {
                        "id": 111,
                        "string": "On this dataset, we observe that GloVe Avg."
                    },
                    {
                        "id": 112,
                        "string": "is more often than not misled by the introduction of synonyms, although the corresponding word vector typically has a high cosine similarity with the original word's embedding."
                    },
                    {
                        "id": 113,
                        "string": "In contrast, both In-ferSent and SkipThought succeed in distinguishing unnegated sentences from negated ones."
                    },
                    {
                        "id": 114,
                        "string": "Negation Variants."
                    },
                    {
                        "id": 115,
                        "string": "In Table 3 , S, S + , S * refer to the original, Not-Negation, and Quantifier-Negation versions of a sentence, respectively."
                    },
                    {
                        "id": 116,
                        "string": "Accuracy in this problem is defined as percentage of sentence triples whose similarity between S+ and S * is the higher than similarity between S and S+ and S + and S * The results of both averaging of word embeddings."
                    },
                    {
                        "id": 117,
                        "string": "and SkipThought are dismal in terms of the accuracy."
                    },
                    {
                        "id": 118,
                        "string": "InferSent, in contrast, appears to have acquired a better understanding of negation quantifiers, as these are commonplace in many NLI datasets."
                    },
                    {
                        "id": 119,
                        "string": "Clause Relatedness."
                    },
                    {
                        "id": 120,
                        "string": "In Table 4 , S, S + , S * refer to original, Embedded Clause Extraction, and Not-Negation, respectively."
                    },
                    {
                        "id": 121,
                        "string": "Although not particularly more accurate than random guessing, among the considered approaches, Sent2vec fares best in distinguishing the embedded clause of a sentence from a negation of said sentence."
                    },
                    {
                        "id": 122,
                        "string": "For a detailed analysis, we can divide the sentence triplets in this dataset into two categories as exemplified by the following examples: a) Copperweld said it doesn't expect a protracted strike."
                    },
                    {
                        "id": 123,
                        "string": "-Copperweld said it expected a protracted strike."
                    },
                    {
                        "id": 124,
                        "string": "-It doesn't expect a protracted strike."
                    },
                    {
                        "id": 125,
                        "string": "b) \"We made our own decision,\" he said."
                    },
                    {
                        "id": 126,
                        "string": "-\"We didn't make our own decision,\" he said."
                    },
                    {
                        "id": 127,
                        "string": "-We made our own decision."
                    },
                    {
                        "id": 128,
                        "string": "For cases resembling a), the average SkipThought similarity between the sentence and its Not-Negation version is 79.90%, while for cases resembling b), it is 26.71%."
                    },
                    {
                        "id": 129,
                        "string": "The accuracy of SkipThought on cases resembling a is 36.90%, and the accuracy of SkipThought on cases like b is only 0.75% It seems plausible that SkipThought is more sensitive to the word order due to the recurrent architecture."
                    },
                    {
                        "id": 130,
                        "string": "Infersent also achieved better performance on sentences resembling a) compared with sentences resembling b), its accuracy on these two structures is 28.37% and 15.73% respectively."
                    },
                    {
                        "id": 131,
                        "string": "Argument Sensitivity."
                    },
                    {
                        "id": 132,
                        "string": "In Table 5 , S, S + , S * to refer to the original sentence, it Passivization form, and the Argument Reordering version, respectively."
                    },
                    {
                        "id": 133,
                        "string": "Although recurrent architectures are able to consider the order of words, unfortunately, none of the analysed approaches prove adept at distinguishing the semantic information from structural information in this case."
                    },
                    {
                        "id": 134,
                        "string": "Fixed Point Reorder."
                    },
                    {
                        "id": 135,
                        "string": "In Table 6 , S, S + , S * to refer to the original sentence, its semantically equivalent one and Fixed Point Inversion Version."
                    },
                    {
                        "id": 136,
                        "string": "As Table 6 indicates, sentence embeddings based on means (GloVe averages), weighted means (Sent2Vec), or concatenation of p-mean embeddings (P-Means) are unable to distinguish the fixed point inverted sentence from the semantically equivalent one, as they do not encode sufficient word order information into the sentence embeddings."
                    },
                    {
                        "id": 137,
                        "string": "Sent2Vec does consider ngrams but these do not affect the results sufficiently.SkipThought and InferSent did well when the original sentence and its semantically equivalence share similar structure."
                    },
                    {
                        "id": 138,
                        "string": "Conclusion This paper proposes a simple method to inspect sentence embeddings with respect to their semantic properties, analysing three popular embedding methods."
                    },
                    {
                        "id": 139,
                        "string": "We find that both SkipThought and InferSent distinguish negation of a sentence from synonymy."
                    },
                    {
                        "id": 140,
                        "string": "InferSent fares better at identifying semantic equivalence regardless of the order of words and copes better with quantifiers."
                    },
                    {
                        "id": 141,
                        "string": "SkipThoughts is more suitable for tasks in which the semantics of the sentence corresponds to its structure, but it often fails to identify sentences with different word order yet similar meaning."
                    },
                    {
                        "id": 142,
                        "string": "In almost all cases, dedicated sentence embeddings from hidden states a neural network outperform a simple averaging of word embeddings."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 19
                    },
                    {
                        "section": "Analysis",
                        "n": "2",
                        "start": 20,
                        "end": 32
                    },
                    {
                        "section": "Sentence Modification Schemes",
                        "n": "2.1",
                        "start": 33,
                        "end": 62
                    },
                    {
                        "section": "Sentence Triplet Generation",
                        "n": "2.2",
                        "start": 63,
                        "end": 66
                    },
                    {
                        "section": "Negation Variants: Quantifier-Negation, Not-Negation, Original sentence",
                        "n": "2.",
                        "start": 67,
                        "end": 70
                    },
                    {
                        "section": "Argument Sensitivity: Original sentence, Passivization, Argument Reordering",
                        "n": "4.",
                        "start": 71,
                        "end": 78
                    },
                    {
                        "section": "Fixed Point Reorder:",
                        "n": "5.",
                        "start": 79,
                        "end": 83
                    },
                    {
                        "section": "Experiments",
                        "n": "3",
                        "start": 84,
                        "end": 86
                    },
                    {
                        "section": "Datasets",
                        "n": "3.1",
                        "start": 87,
                        "end": 90
                    },
                    {
                        "section": "Embedding Methods",
                        "n": "3.2",
                        "start": 91,
                        "end": 101
                    },
                    {
                        "section": "The Skip-Thought",
                        "n": "4.",
                        "start": 102,
                        "end": 105
                    },
                    {
                        "section": "Results and Discussion",
                        "n": "3.3",
                        "start": 106,
                        "end": 137
                    },
                    {
                        "section": "Conclusion",
                        "n": "4",
                        "start": 138,
                        "end": 142
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1228-Table6-1.png",
                        "caption": "Table 6: Evaluation of Fixed Point Reorder",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 530.4,
                            "y1": 375.84,
                            "y2": 482.88
                        }
                    },
                    {
                        "filename": "../figure/image/1228-Table4-1.png",
                        "caption": "Table 4: Evaluation of Clause Relatedness",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 530.88,
                            "y2": 604.3199999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1228-Table3-1.png",
                        "caption": "Table 3: Evaluation of Negation Variants",
                        "page": 4,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 288.0,
                            "y1": 83.52,
                            "y2": 156.0
                        }
                    },
                    {
                        "filename": "../figure/image/1228-Table5-1.png",
                        "caption": "Table 5: Evaluation of Argument Sensitivity",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 83.52,
                            "y2": 156.0
                        }
                    },
                    {
                        "filename": "../figure/image/1228-Table1-1.png",
                        "caption": "Table 1: Generated Evaluation Datasets",
                        "page": 3,
                        "bbox": {
                            "x1": 89.75999999999999,
                            "x2": 272.15999999999997,
                            "y1": 80.64,
                            "y2": 175.2
                        }
                    },
                    {
                        "filename": "../figure/image/1228-Table2-1.png",
                        "caption": "Table 2: Evaluation of Negation Detection",
                        "page": 3,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 523.1999999999999,
                            "y1": 407.52,
                            "y2": 480.96
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-55"
        },
        {
            "slides": {
                "0": {
                    "title": "Overview of pre reordering systems",
                    "text": [
                        "Reorder input text before translation",
                        "John hits a ball",
                        "John va_nsubj a ball va_obj hits"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Approaches of pre reordering",
                    "text": [
                        "Syntactic pre-reordering without parse tree",
                        "Head-finalization (Isozaki et al., 2010)",
                        "Supervised learning with word alignments",
                        "Automatically learning Rewrite Patterns (Xia and"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Overview of our pre reordering system",
                    "text": [
                        "Head-restructured CFG Parse Tree"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Head restructured CFG Parse Tree",
                    "text": [
                        "Problem of CFG parse tree",
                        "Hard to capture long-distance reordering patterns",
                        "Problem of Dependency parse tree",
                        "Fully lexicalized parse tree leads to a sparse reordering model",
                        "Restructure a CFG parse tree to inject head information into it",
                        "Head word is always lexicalized"
                    ],
                    "page_nums": [
                        5,
                        6
                    ],
                    "images": []
                },
                "4": {
                    "title": "Learning reordering model based on LM",
                    "text": [
                        "Extract tag sequences in golden order",
                        "Head-restructured CFG parse tree",
                        "nsubj prep_by calculated auxpass aux",
                        "Alignments Train a language model on"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "5": {
                    "title": "Finding golden order with word alignments",
                    "text": [
                        "Given a bilingual sentence pair, source-side parse tree and word alignments, the golden order of a node layer is defined as",
                        "Average position (Ranked) a1 a3 a2"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "6": {
                    "title": "Reordering a input parse tree",
                    "text": [
                        "1.List all possible orders for a treelet 3. Select the best order to adjust the treelet",
                        "nsubj dobj hits dobj nsubj hits hits nsubj dobj hits dobj nsubj dobj hits nsubj nsubj hits dobj nsubj dobj hits",
                        "2.Score them with language model"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "7": {
                    "title": "N best reordering",
                    "text": [
                        "Reordered treelets with LM scores",
                        "All 12 possible combinations here",
                        "Selected N-best results by accumulated scores (Cube Pruning is applied in the practice)"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "8": {
                    "title": "In house experiments",
                    "text": [
                        "N-best parse + N- best reorder",
                        "For N-best reorder, 10 candidate reordering results are considered.",
                        "For N-best parse, 30 candidate parse trees are considered.",
                        "We select the final translation by the sum of translation score (given by decoder) and the score of pre-reordering."
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "9": {
                    "title": "N best reordering and N best parse tree inputs",
                    "text": [
                        "Incorporating multiple reordering results and parse trees benefits automatic scores."
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": [
                        "figure/image/1231-Figure3-1.png",
                        "figure/image/1231-Figure4-1.png"
                    ]
                },
                "10": {
                    "title": "Official evaluation results",
                    "text": [
                        "N-best reorder + N-best parse"
                    ],
                    "page_nums": [
                        14,
                        15
                    ],
                    "images": []
                },
                "11": {
                    "title": "Effect of pre ordering",
                    "text": [
                        "Identical ordered sentences increases to 15%"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": [
                        "figure/image/1231-Figure5-1.png"
                    ]
                },
                "12": {
                    "title": "Example of pre reordering",
                    "text": [
                        "the improvement of the life is a large problem of the practical application.",
                        "Reordered input the life of the improvement va_nsubjpass the practical application of a large problem is ."
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "13": {
                    "title": "Review",
                    "text": [
                        "Language model is just a quick solution to the reordering problem, sometimes it fails in simple cases.",
                        "To gain more from forest input, its necessary to integrate it inside the pre-reordering model."
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": []
                }
            },
            "paper_title": "Weblio Pre-reordering Statistical Machine Translation System",
            "paper_id": "1231",
            "paper": {
                "title": "Weblio Pre-reordering Statistical Machine Translation System",
                "abstract": "This paper describes details of the Weblio Pre-reordering Statistical Machine Translation (SMT) System, participated in the English-Japanese translation task of 1st Workshop on Asian Translation (WAT2014). In this system, we applied the pre-reordering method described in (Zhu et al., 2014) , and extended the model to obtain N -best pre-reordering results. We also utilized N -best parse trees simultaneously to explore the potential improvement for pre-reordering system with forest input.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction In this paper, we describe the details of Weblio Pre-reordering Statistical Machine Translation (SMT) System, experiments and some issues we faced."
                    },
                    {
                        "id": 1,
                        "string": "For this SMT system, we applied the pre-reordering method proposed in (Zhu et al., 2014) ."
                    },
                    {
                        "id": 2,
                        "string": "In particular, this method automatically learns pre-reordering model from word alignments and parse trees."
                    },
                    {
                        "id": 3,
                        "string": "Statistical language model is integrated in the pre-reordering model in order to reorder each node layer in parse trees."
                    },
                    {
                        "id": 4,
                        "string": "In the 1st Workshop on Asian Translation (WAT2014) (Nakazawa et al., 2014) , we mainly applied this method in English-Japanese translation subtask."
                    },
                    {
                        "id": 5,
                        "string": "The parse tree we used is head-restructured CFG parse tree for English, which is also proposed in (Zhu et al., 2014) ."
                    },
                    {
                        "id": 6,
                        "string": "After the pre-reordering phase, we trained a conventional Phrase-based model to do the final translation."
                    },
                    {
                        "id": 7,
                        "string": "To make some improvements, we enabled the pre-reordering system to output N -best reordering results."
                    },
                    {
                        "id": 8,
                        "string": "Also, we feed the whole translation pipe line with N -best parse trees generated by Egret parser."
                    },
                    {
                        "id": 9,
                        "string": "As a result, multiple translation hypotheses can be collected for one input sentence."
                    },
                    {
                        "id": 10,
                        "string": "Fi-nally, we select the best hypothesis according to a balanced score."
                    },
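A sketch of this selection step, with hypothetical field names; per the description elsewhere in the paper and slides, the balanced score is the sum of the decoder's translation score and the pre-reordering score:

```python
def select_best(hypotheses):
    # Each hypothesis pairs a decoder translation score with the
    # pre-reordering (LM) score of its reordered input sentence.
    return max(hypotheses,
               key=lambda h: h["translation_score"] + h["reordering_score"])
```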
                    {
                        "id": 11,
                        "string": "In our experiments, the system utilizes N -best pre-reordering results shows the ability to obtain more accurate translation result."
                    },
                    {
                        "id": 12,
                        "string": "After incorporating N -best parse trees, improvements on the automatic evaluation scores are also observed."
                    },
                    {
                        "id": 13,
                        "string": "In section 2 and 3, we briefly describe the method used for tree parsing and pre-reordering."
                    },
                    {
                        "id": 14,
                        "string": "In the remaining sections, we give some details of the experiments of our system."
                    },
                    {
                        "id": 15,
                        "string": "Head-restructured CFG parse tree In order to reorder SVO (subject-verb-object) order into SOV (subject-object-verb) order, correctly reordering long-distance words those play important roles in a sentence is crucial."
                    },
                    {
                        "id": 16,
                        "string": "Thus, the reordering model is required to capture the reordering patterns for those words."
                    },
                    {
                        "id": 17,
                        "string": "Obviously, using dependency tree should be a quick solution for this problem."
                    },
                    {
                        "id": 18,
                        "string": "As in a dependency tree, all closely related words of a specific head word come underneath that head word."
                    },
                    {
                        "id": 19,
                        "string": "All we need to do is to find the best order for the branches under those related words."
                    },
                    {
                        "id": 20,
                        "string": "In particular, if the head word is the root of the whole sentence (usually a verb), then it's reasonable to think each dependent word leads to a branch that contains a important part of the whole sentence."
                    },
                    {
                        "id": 21,
                        "string": "However, using dependency tree naively does not work well in practice."
                    },
                    {
                        "id": 22,
                        "string": "First of all, not all components in a sentence need to be reorder."
                    },
                    {
                        "id": 23,
                        "string": "In the case of English-Japanese translation, noun phrases are usually in the identical order of English."
                    },
                    {
                        "id": 24,
                        "string": "Secondly, some local grammar structures tend to keep their unique order."
                    },
                    {
                        "id": 25,
                        "string": "For example, the combination of an adjective word with a noun usually has the same order in Japanese."
                    },
                    {
                        "id": 26,
                        "string": "But a noun follows the preposition \"of\" in English will appear before it in the order of Japanese."
                    },
                    {
                        "id": 27,
                        "string": "A model merely based on dependency parse trees will be sparse and hard to deal with unknown words correctly."
                    },
                    {
                        "id": 28,
                        "string": "POS (partof-speech) tags and also structural information are still necessary."
                    },
                    {
                        "id": 29,
                        "string": "The approach in (Zhu et al., 2014) addresses this problem by injecting sentence-level dependencies into CFG parse trees."
                    },
                    {
                        "id": 30,
                        "string": "Local grammatical structures are still kept unchanged in the parse tree."
                    },
                    {
                        "id": 31,
                        "string": "This new parse tree is called \"Headrestructured CFG parse tree\" in the original paper."
                    },
                    {
                        "id": 32,
                        "string": "In this paper, we use \"HRCFG tree\" in short to represent it."
                    },
                    {
                        "id": 33,
                        "string": "An example of HRCFG tree is shown in Figure 1 ."
                    },
                    {
                        "id": 34,
                        "string": "In Figure 1 , A normal CFG parse tree is shown in the left, and the corresponding HRCFG tree in the right."
                    },
                    {
                        "id": 35,
                        "string": "In this example, tree components explicitly shows subject, object and verb parts in the sentence."
                    },
                    {
                        "id": 36,
                        "string": "This structure with explicit annotations makes the reordering model easier to capture longdistance reordering patterns."
                    },
                    {
                        "id": 37,
                        "string": "Reordering model integrated with language model The reordering model we used in our MT systems follows the same fashion of the model in (Zhu et al., 2014) ."
                    },
                    {
                        "id": 38,
                        "string": "A language model is integrated to identify the best order of a node layer according to the order of target language."
                    },
                    {
                        "id": 39,
                        "string": "Although the using of language model still involves spare problem and fails to give the correct probability in some cases, it makes the implementation fairly simple."
                    },
                    {
                        "id": 40,
                        "string": "With a bilingual training data and automatically learned word alignments given by GIZA++ (Och and Ney, 2004) , we find the best order for each node layer in all parse trees, make them fit the order of aligned parts in the target sentence."
                    },
                    {
                        "id": 41,
                        "string": "Specifically, for tree nodes n = (n 1 , n 2 , ..., n k ), terminal nodes beneath n i is defined as t i ."
                    },
                    {
                        "id": 42,
                        "string": "Let w i represent a set of words in target side which are aligned with any terminal node in t i ."
                    },
                    {
                        "id": 43,
                        "string": "I this step, for each node layer n, we redetermine the order of n according to the average position of aligned words w i for each node n i ."
                    },
                    {
                        "id": 44,
                        "string": "Then we exports a sequence of reordered nonterminal tags."
                    },
                    {
                        "id": 45,
                        "string": "For some kinds of node, nonterminal tags in the sequences are replaced by head word."
                    },
                    {
                        "id": 46,
                        "string": "After we trained a language model on them, the language model can be used to estimate the likelihood that a tag sequence follows the order of target language."
                    },
                    {
                        "id": 47,
                        "string": "We show an example of this reordering process in Figure 2 ."
                    },
                    {
                        "id": 48,
                        "string": "To reorder the node layer underneath the \"S\" node in Figure 2 , we list all possible orders for the tag sequence \"nsubj hits dobj\" (6 possibilities in all)."
                    },
                    {
                        "id": 49,
                        "string": "Then we use the language model we trained on reordered tag sequences to estimate the probability for each possible order."
                    },
                    {
                        "id": 50,
                        "string": "Finally, it is expected that the sequence \"nsubj dobj hits\" gets the highest language model score, as it is most closer to the order in Japanese."
                    },
                    {
                        "id": 51,
                        "string": "The reordering operation in this fashion is applied to all node layers in the HRCFG tree."
                    },
                    {
                        "id": 52,
                        "string": "We export all terminal words in the reordered parse trees as new training data in the source side."
                    },
                    {
                        "id": 53,
                        "string": "Like Head-finalization (Isozaki et al., 2010) , we also incorporate seed words in the results."
                    },
                    {
                        "id": 54,
                        "string": "So the final reordering result of the sentence shown in Figure  1 will be \"John va nsubjpass a ball va dobj hits\"."
                    },
                    {
                        "id": 55,
                        "string": "N -best reordering In the reordering model we described above, the best order for the whole sentence is actually comprised by all 1 -best orders of every node layers in the parse tree."
                    },
                    {
                        "id": 56,
                        "string": "Although the language model usually works perfectly to give the best reordering result."
                    },
                    {
                        "id": 57,
                        "string": "In some cases, the best reordering result is unclear until the translation phase."
                    },
                    {
                        "id": 58,
                        "string": "We give an example here, for the sentence \"The rocket is launched by NASA\", two plausible reordering results are shown in Table 1 ."
                    },
                    {
                        "id": 59,
                        "string": "The first reordering result in Table 1 is preferred by the reordering model as \"nsubjpass auxpass launched prep by\" is usually reordered into \"nsubjpass prep by launched auxpass\"."
                    },
                    {
                        "id": 60,
                        "string": "Unfortunately, Table 1 , it's hard to find out a best order before translation."
                    },
                    {
                        "id": 61,
                        "string": "Considering N -best reordering results is necessary in order to obtain the best translation result."
                    },
                    {
                        "id": 62,
                        "string": "In our MT system, we implement this feature simply by collecting N -best reordering results for all node layers, and finally rank the reordering results by accumulated language model score."
                    },
                    {
                        "id": 63,
                        "string": "Experiments Experimental settings For our baseline system, we use 1 -best parse trees for training and test."
                    },
                    {
                        "id": 64,
                        "string": "Stanford tokenizer and Berkeley parser (Petrov et al., 2006) are selected in the pre-processing phase in order to produce CFG parse trees."
                    },
                    {
                        "id": 65,
                        "string": "Then we obtain dependency parse trees by applying Stanford rules (Klein and Manning, 2002) to CFG parse trees."
                    },
                    {
                        "id": 66,
                        "string": "HRCFG trees are built upon these two kinds of parse trees."
                    },
                    {
                        "id": 67,
                        "string": "For the Japanese text, we use Kytea (Neubig, Nakata, and Mori, 2011) to tokenize it."
                    },
                    {
                        "id": 68,
                        "string": "Due to the limitation of computational resource, we are only able to train our reordering model on 1.5M bilingual text (with relatively high scores in ASPEC parallel corpus) for English-to-Japanese translation task."
                    },
                    {
                        "id": 69,
                        "string": "We used this trained reordering model to reorder all training data in the source side."
                    },
                    {
                        "id": 70,
                        "string": "We use conventional Phrase-based model implemented in Moses toolkit to finish remaining SMT pipe line."
                    },
                    {
                        "id": 71,
                        "string": "Distortion limit is set to 6 in all our experiments."
                    },
                    {
                        "id": 72,
                        "string": "For system translates forest inputs, we use Egret parser to generate N -best packed forests."
                    },
                    {
                        "id": 73,
                        "string": "We unpack each forest and parse each individual tree to HRCFG tree."
                    },
                    {
                        "id": 74,
                        "string": "For all candidate of parse trees, we reorder them and merge same reordering results."
                    },
                    {
                        "id": 75,
                        "string": "Then for all reordering results we obtained, we translate them and record translation scores given by Moses."
                    },
                    {
                        "id": 76,
                        "string": "Finally, a best translation result is selected out by the sum of translation score and reordering score."
                    },
                    {
                        "id": 77,
                        "string": "Experiment results We carried out several experiments combining the use of N -best parse trees and N -best reordering results."
                    },
                    {
                        "id": 78,
                        "string": "A list of automatic evaluation scores for different system settings are listed in Table 2 ."
                    },
                    {
                        "id": 79,
                        "string": "In particular, for systems marked with \"N -best parse\", 30 parse trees with highest parsing scores are used."
                    },
                    {
                        "id": 80,
                        "string": "For systems marked with \"N -best reorder\", 10 reordering results with highest reordering scores are accepted for each parse tree."
                    },
                    {
                        "id": 81,
                        "string": "That is, for System 4, a maximum of 300 reordering results are generated for one sentence."
                    },
                    {
                        "id": 82,
                        "string": "In WAT2014, we submitted System 3 and System 4 to human evaluation."
                    },
                    {
                        "id": 83,
                        "string": "Note that in Table  2 , our in-house automatic evaluation scores are slightly different from that on the score board of WAT2014 due to different automatic evaluation pipe line we used."
                    },
                    {
                        "id": 84,
                        "string": "Official evaluation scores are listed in Table 3 ."
                    },
                    {
                        "id": 85,
                        "string": "Where \"BASELINE\" refers to Phrase-based SMT system (Koehn, Och, and Marcu, 2003) as the official baseline for human evaluation."
                    },
                    {
                        "id": 86,
                        "string": "Our experiment results shown in Table 2 show that incorporating N -best parse tree and reordering results gained improvements for both BLEU and RIBES metrics."
                    },
                    {
                        "id": 87,
                        "string": "In the official human evaluation, although System 4 achieved better results in automatic evaluations."
                    },
                    {
                        "id": 88,
                        "string": "Human evaluation score of it degraded compared to System 3, which only considers 1 -best parse tree."
                    },
                    {
                        "id": 89,
                        "string": "In Figure 3 and 4, we show the growth of BLEU and RIBES when increasing candidate number considered for N -best parse trees and reordering results."
                    },
                    {
                        "id": 90,
                        "string": "Both BLEU and RIBES scores are tending to converge after we increased the N -best parse tree candidates to 30 for System 2."
                    },
                    {
                        "id": 91,
                        "string": "For System 3, the automatic evaluation scores are still increasing after 10 reordering results are considered."
                    },
                    {
                        "id": 92,
                        "string": "Evaluation for pre-reordering In this section, we evaluate the performance of pre-reordering."
                    },
                    {
                        "id": 93,
                        "string": "Follows the method described in (Isozaki et al., 2010) , we estimate Kendall's τ from word alignments."
                    },
                    {
                        "id": 94,
                        "string": "A comparison of Kendall's τ distribution upon first 1.5M sentences of ASPEC corpus is shown in Figure 5 ."
                    },
                    {
                        "id": 95,
                        "string": "Average Kendall's τ of natural order and adjusted order is 0.30 and 0.71 respectively."
                    },
                    {
                        "id": 96,
                        "string": "Note that in (Isozaki et al., 2010) , the algorithm for estimating Kendall's τ does not take the words with multiple alignments into account."
                    },
                    {
                        "id": 97,
                        "string": "Hence, the graph of Kendall's τ only gives a rough idea of the performance of pre-reordering."
                    },
                    {
                        "id": 98,
                        "string": "In particular, the algorithm skipped 20.30% aligned words for corpus in natural order and 14.06% aligned words for pre-reordered corpus."
                    },
                    {
                        "id": 99,
                        "string": "However, the distribution of Kendall's τ in Figure 5 gives a intuitive picture of the improvements of word order."
                    },
                    {
                        "id": 100,
                        "string": "Sentences which are fully identical in word order increased from 1.8% to 15% after pre-reordering (labeled with \"=1.0\" in Figure 5 )."
                    },
                    {
                        "id": 101,
                        "string": "Error analysis Issues of pre-reordering Although our pre-reordering SMT system is able to produce relatively better translation results compared to baseline SMT systems."
                    },
                    {
                        "id": 102,
                        "string": "In many cases, the translation results still suffer from the defect of the reordering model."
                    },
                    {
                        "id": 103,
                        "string": "As the reordering model described in Section 3 is actually a language model built on sequences mixed with nonterminal tags and words."
                    },
                    {
                        "id": 104,
                        "string": "Involving words in the model makes the reordering more flexible, but also makes the model sparse."
                    },
                    {
                        "id": 105,
                        "string": "For some rare or unknown words, the reordering model usually fails to reorder sentences correctly."
                    },
                    {
                        "id": 106,
                        "string": "In Table 4 , we show 2 reordering samples."
                    },
                    {
                        "id": 107,
                        "string": "In Sample 1, the sentence is correctly reordered."
                    },
                    {
                        "id": 108,
                        "string": "The word \"were\" in English side should be placed in the end of the reordered sentence, which is expected to be translated to \" \" in Japanese."
                    },
                    {
                        "id": 109,
                        "string": "In Sample 2, we replace the verb \"confirmed\" in Sample 1 to \"observed\", then the reordering model the changes were observed → the changes va nsubjpass were observed fails to place the word \"were\" into the rightmost position."
                    },
                    {
                        "id": 110,
                        "string": "The errors like what we show in Table 4 are actually widespread in the reordering results for the ASPEC test corpus."
                    },
                    {
                        "id": 111,
                        "string": "Although in the decoding phase, the lexical distortion model of Phrasebased SMT model can partially mitigate some local errors, some critical errors still can be observed from the final translations."
                    },
                    {
                        "id": 112,
                        "string": "Issues for Context-aware Machine Translation In this section, we describe some efforts for utilizing context information during the translation."
                    },
                    {
                        "id": 113,
                        "string": "We made an attempt to tackle the phrase selection problem for English-Japanese translation."
                    },
                    {
                        "id": 114,
                        "string": "In Japanese, many English words have multiple translations."
                    },
                    {
                        "id": 115,
                        "string": "Especially Japanese words in the form of Katakana usually also have corresponding expressions comprised of Chinese characters."
                    },
                    {
                        "id": 116,
                        "string": "For instance, the phrase \"remote control\" can be translated to both \"ENKAKUSEIGYO\"( ) and \"RIMOKON\"( )."
                    },
                    {
                        "id": 117,
                        "string": "We show the distribution of these two translations across different domains in Figure 6 ."
                    },
                    {
                        "id": 118,
                        "string": "Figure 6 , it's reasonable to think the phrase \"RIMOKON\" is more preferred in domain J, P, Q and R. While in domain N, the two phrases appear almost same times."
                    },
                    {
                        "id": 119,
                        "string": "A simple solution is to make language model more domain-specific."
                    },
                    {
                        "id": 120,
                        "string": "We carried out experiments that simply interpolate general language model and in-domain language model."
                    },
                    {
                        "id": 121,
                        "string": "The experiment results for first three domains are shown in Figure  7 ."
                    },
                    {
                        "id": 122,
                        "string": "Figure 7 , we show the language model perplexity achieved on domain-specific test data using different settings."
                    },
                    {
                        "id": 123,
                        "string": "Different interpolation weights for the in-domain language model are tried."
                    },
                    {
                        "id": 124,
                        "string": "We can see the interpolated language model generally achieves best perplexities when the weight for in-domain language model is set to 0.5."
                    },
                    {
                        "id": 125,
                        "string": "Applying these interpolated language models for translation tasks in corresponding domains should help improving the quality of translation."
                    },
                    {
                        "id": 126,
                        "string": "Conclusion In this paper, we described the reordering model we applied in Weblio Pre-reordering SMT system, and also some efforts to utilize N -best parse trees and N -best reordering results."
                    },
                    {
                        "id": 127,
                        "string": "According to our in-house experiment results, the automatic evaluation scores are generally improved when multiple candidates of parse tree and reordering result are considered."
                    },
                    {
                        "id": 128,
                        "string": "However, in the human evaluation, incorporating N -best parse trees did not gain improvements."
                    },
                    {
                        "id": 129,
                        "string": "As we demonstrated in Section 5.1, the reordering model is still unstable, and fails to work correctly even for some simple cases."
                    },
                    {
                        "id": 130,
                        "string": "Further improvement is required to enable the reordering model to deal with general cases correctly."
                    },
                    {
                        "id": 131,
                        "string": "Then, in Section 5.2, we show interpolating general and in-domain language models can be a quick solution to improve translation quality when domain information is given as context."
                    },
                    {
                        "id": 132,
                        "string": "For future research, we still plan to explore the performance limit of pre-reordering models."
                    },
                    {
                        "id": 133,
                        "string": "With a complex reordering model considers multiple factors of the language, it's still plausible for this approach to grow in performance."
                    },
                    {
                        "id": 134,
                        "string": "Also, as the prereordering model used in this paper is independent of specific language pair, more experiments can be conducted on different language pairs."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 14
                    },
                    {
                        "section": "Head-restructured CFG parse tree",
                        "n": "2",
                        "start": 15,
                        "end": 36
                    },
                    {
                        "section": "Reordering model integrated with language model",
                        "n": "3",
                        "start": 37,
                        "end": 54
                    },
                    {
                        "section": "N -best reordering",
                        "n": "3.1",
                        "start": 55,
                        "end": 62
                    },
                    {
                        "section": "Experimental settings",
                        "n": "4.1",
                        "start": 63,
                        "end": 76
                    },
                    {
                        "section": "Experiment results",
                        "n": "4.2",
                        "start": 77,
                        "end": 91
                    },
                    {
                        "section": "Evaluation for pre-reordering",
                        "n": "4.3",
                        "start": 92,
                        "end": 100
                    },
                    {
                        "section": "Issues of pre-reordering",
                        "n": "5.1",
                        "start": 101,
                        "end": 111
                    },
                    {
                        "section": "Issues for Context-aware Machine Translation",
                        "n": "5.2",
                        "start": 112,
                        "end": 125
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 126,
                        "end": 134
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1231-Table1-1.png",
                        "caption": "Table 1: Two reordering results of the sentence “The rocket is launched by NASA”",
                        "page": 2,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 293.28,
                            "y1": 120.96,
                            "y2": 227.04
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Table2-1.png",
                        "caption": "Table 2: Experiment results for different system settings",
                        "page": 2,
                        "bbox": {
                            "x1": 305.76,
                            "x2": 529.4399999999999,
                            "y1": 284.64,
                            "y2": 352.32
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Table3-1.png",
                        "caption": "Table 3: Official evaluation scores in WAT2014 (kytea used for post-processing)",
                        "page": 2,
                        "bbox": {
                            "x1": 315.84,
                            "x2": 503.03999999999996,
                            "y1": 627.84,
                            "y2": 681.12
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Table4-1.png",
                        "caption": "Table 4: Samples of reordering result",
                        "page": 4,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 293.28,
                            "y1": 108.96,
                            "y2": 201.12
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Figure6-1.png",
                        "caption": "Figure 6: Translation distribution for “remote control” across several categories",
                        "page": 4,
                        "bbox": {
                            "x1": 93.6,
                            "x2": 287.03999999999996,
                            "y1": 570.72,
                            "y2": 705.12
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Figure7-1.png",
                        "caption": "Figure 7: LM perplexity on domain-specific test data using interpolated language models",
                        "page": 4,
                        "bbox": {
                            "x1": 318.71999999999997,
                            "x2": 516.0,
                            "y1": 222.72,
                            "y2": 356.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Figure1-1.png",
                        "caption": "Figure 1: An example of HRCFG tree converted from CFG parse tree (A direct parent nodes of terminal nodes are now include)",
                        "page": 1,
                        "bbox": {
                            "x1": 85.92,
                            "x2": 294.24,
                            "y1": 249.6,
                            "y2": 303.36
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Figure2-1.png",
                        "caption": "Figure 2: An example of HRCFG tree reordering",
                        "page": 1,
                        "bbox": {
                            "x1": 310.56,
                            "x2": 519.36,
                            "y1": 225.6,
                            "y2": 277.44
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Figure4-1.png",
                        "caption": "Figure 4: Growth of RIBES with increasing N - best candidates",
                        "page": 3,
                        "bbox": {
                            "x1": 85.92,
                            "x2": 287.03999999999996,
                            "y1": 461.76,
                            "y2": 596.16
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Figure5-1.png",
                        "caption": "Figure 5: A comparison of Kendall’s τ distribution",
                        "page": 3,
                        "bbox": {
                            "x1": 313.92,
                            "x2": 513.12,
                            "y1": 81.6,
                            "y2": 217.44
                        }
                    },
                    {
                        "filename": "../figure/image/1231-Figure3-1.png",
                        "caption": "Figure 3: Growth of BLEU with increasing N - best candidates",
                        "page": 3,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 287.03999999999996,
                            "y1": 264.0,
                            "y2": 396.47999999999996
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-56"
        },
        {
            "slides": {
                "0": {
                    "title": "What is Cross Language Plagiarism Detection",
                    "text": [
                        "Cross-Language Plagiarism is a plagiarism by translation, i.e. a text has been plagiarized while being translated (manually or automatically).",
                        "From a text in a language L, we must find similar passage(s) in other text(s) from among a set of candidate texts in language L (cross-language textual similarity)."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Why is it so important",
                    "text": [
                        "- McCabe, D. (2010). Students cheating takes a high-tech turn. In Rutgers Business School. - Josephson Institute. (2011). What would honest Abe Lincoln say?"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Research Questions",
                    "text": [
                        "Are Word Embeddings useful for cross-language plagiarism detection?",
                        "Is syntax weighting in distributed representations of sentences useful for the text entailment?",
                        "Are cross-language plagiarism detection methods complementary?"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "State of the Art Methods",
                    "text": [
                        "Length Model, CL-CnG [Mcnamee and Mayfield, 2004, Potthast et al., 2011], Cognateness",
                        "MT-Based Models Translation + Monolingual Analysis [Muhr et al., 2010, Barron-Cedeno, 2012]"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Augmented CL CTS",
                    "text": [
                        "We use DBNary [Serasset, 2015] as linked lexical resource.",
                        "< ) Le chat boit du lait Sp the cat drinks milk",
                        "CL-CTS-WE uses the top 10 closest words in the embeddings model to build the",
                        "BOW of a word;",
                        "A BOW of a sentence is a merge of the BOW of its words;",
                        "Jaccard distance between the two BOW."
                    ],
                    "page_nums": [
                        5,
                        6,
                        7,
                        23
                    ],
                    "images": []
                },
                "5": {
                    "title": "CL WES Cross Language Word Embedding based Similarity",
                    "text": [
                        "This feature is available in MultiVec [Berard et al., 2016] (https://github.com/eske/multivec)",
                        "The similarity between two sentences S and S is calculated by Cosine Distance between the two vectors V and V , built such as:",
                        "ui is the ith word of S; vector is the function which gives the word embedding vector of a word."
                    ],
                    "page_nums": [
                        8,
                        9,
                        10,
                        24
                    ],
                    "images": []
                },
                "6": {
                    "title": "CL WESS Cross Language Word Embedding based Syntax Similarity",
                    "text": [
                        "This feature is available in MultiVec [Berard et al., 2016] (https://github.com/eske/multivec)",
                        "ui is the ith word of S; pos is the function which gives the universal part-of-speech tag of a word; weight is the function which gives the weight of a part-of-speech; vector is the function which gives the word embedding vector of a word; is the scalar product."
                    ],
                    "page_nums": [
                        11,
                        12,
                        13,
                        14,
                        25
                    ],
                    "images": []
                },
                "9": {
                    "title": "Results",
                    "text": [
                        "Decision Tree fusion significantly improves the results.",
                        "CL-CTS-WE: Cross-Language Conceptual Thesaurus-based Similarity with Word-Embedding Table: Average F1 scores of methods applied",
                        "Table: Average F1 scores of methods applied on ENFR sub-corpora.",
                        "CL-WES: Cross-Language Word Embedding-based Similarity",
                        "CL-WESS: Cross-Language Word Embedding-based Syntax Similarity",
                        "CL-C3G: Cross-Language Character 3-Gram"
                    ],
                    "page_nums": [
                        17,
                        18,
                        19,
                        20
                    ],
                    "images": []
                },
                "10": {
                    "title": "Conclusion",
                    "text": [
                        "Augmentation of several baseline approaches using word embeddings instead of lexical resources;",
                        "CL-WESS beats in overall the precedent best state-of-the-art methods;",
                        "Methods are complementary and their fusion significantly helps cross-language textual similarity detection performance;",
                        "Winning method at SemEval-2017 Task 1 track 4a, i.e. the task on",
                        "Spanish-English Cross-lingual Semantic Textual Similarity detection."
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                },
                "11": {
                    "title": "Complementarity",
                    "text": [
                        "Figure: Distribution histograms of CL-CNG (left) and CL-ASA (right) for 1000 positives and"
                    ],
                    "page_nums": [
                        26
                    ],
                    "images": []
                },
                "12": {
                    "title": "Fusions",
                    "text": [
                        "Weighted Average Fusion Decision Tree Fusion C4.5 [Quinlan, 1993]"
                    ],
                    "page_nums": [
                        27
                    ],
                    "images": []
                },
                "14": {
                    "title": "Results at Chunk Level",
                    "text": [
                        "Methods Wikipedia TALN (%) JRC (%) APR (%) Europarl Overall (%)",
                        "Table: Average F1 scores of cross-language similarity detection methods applied on chunk-level",
                        "ENFR sub-corpora 8 folds validation."
                    ],
                    "page_nums": [
                        29
                    ],
                    "images": []
                },
                "15": {
                    "title": "Results at Sentence Level",
                    "text": [
                        "Methods Wikipedia TALN (%) JRC (%) APR (%) Europarl Overall (%)",
                        "Table: Average F1 scores of cross-language similarity detection methods applied on sentence-level ENFR sub-corpora 8 folds validation."
                    ],
                    "page_nums": [
                        30
                    ],
                    "images": []
                }
            },
            "paper_title": "Using Word Embedding for Cross-Language Plagiarism Detection",
            "paper_id": "1235",
            "paper": {
                "title": "Using Word Embedding for Cross-Language Plagiarism Detection",
                "abstract": "This paper proposes to use distributed representation of words (word embeddings) in cross-language textual similarity detection. The main contributions of this paper are the following: (a) we introduce new cross-language similarity detection methods based on distributed representation of words; (b) we combine the different methods proposed to verify their complementarity and finally obtain an overall F 1 score of 89.15% for English-French similarity detection at chunk level (88.5% at sentence level) on a very challenging corpus.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions."
                    },
                    {
                        "id": 1,
                        "string": "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) ."
                    },
                    {
                        "id": 2,
                        "string": "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism."
                    },
                    {
                        "id": 3,
                        "string": "Cross-language plagiarism means plagiarism by translation, i.e."
                    },
                    {
                        "id": 4,
                        "string": "a text has been plagiarized while being translated (manually or automatically)."
                    },
                    {
                        "id": 5,
                        "string": "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source."
                    },
                    {
                        "id": 6,
                        "string": "We investigate how distributed representations of words can help to propose new cross-lingual similarity measures, helpful for plagiarism detection."
                    },
                    {
                        "id": 7,
                        "string": "We use word embeddings (Mikolov et al., 2013) that have shown promising performances for all kinds of NLP tasks, as shown in Upadhyay et al."
                    },
                    {
                        "id": 8,
                        "string": "(2016) , Ammar et al."
                    },
                    {
                        "id": 9,
                        "string": "(2016) and Ghannay et al."
                    },
                    {
                        "id": 10,
                        "string": "(2016) , for instance."
                    },
                    {
                        "id": 11,
                        "string": "Contributions."
                    },
                    {
                        "id": 12,
                        "string": "The main contributions of this paper are the following: • we augment some state-of-the-art methods with the use of word embeddings instead of lexical resources; • we introduce a syntax weighting in distributed representations of sentences, and prove its usefulness for textual similarity detection; • we combine our methods to verify their complementarity and finally obtain an overall F 1 score of 89.15% for English-French similarity detection at chunk level (88.5% at sentence level) on a very challenging corpus (mix of Wikipedia, conference papers, product reviews, Europarl and JRC) while the best method alone hardly reaches F 1 score higher than 50%."
                    },
                    {
                        "id": 13,
                        "string": "Evaluation Conditions Dataset The reference dataset used during our study is the new dataset recently introduced by Ferrero et al."
                    },
                    {
                        "id": 14,
                        "string": "(2016) 1 ."
                    },
                    {
                        "id": 15,
                        "string": "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection."
                    },
                    {
                        "id": 16,
                        "string": "More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, conference papers, product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals) and cover various fields."
                    },
                    {
                        "id": 17,
                        "string": "In this paper, we only use the French and English sub-corpora."
                    },
                    {
                        "id": 18,
                        "string": "Overview of State-of-the-Art Methods Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities."
                    },
                    {
                        "id": 19,
                        "string": "However, textual similarity detection can be used to detect plagiarism."
                    },
                    {
                        "id": 20,
                        "string": "The aim of cross-language textual similarity detection is to estimate if two textual units in different languages express the same or not."
                    },
                    {
                        "id": 21,
                        "string": "We quickly review below the state-of-the-art methods used in this paper, for more details, see Ferrero et al."
                    },
                    {
                        "id": 22,
                        "string": "(2016) ."
                    },
                    {
                        "id": 23,
                        "string": "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model."
                    },
                    {
                        "id": 24,
                        "string": "We use the Potthast et al."
                    },
                    {
                        "id": 25,
                        "string": "(2011) implementation which compares two textual units under their 3-grams vectors representation."
                    },
                    {
                        "id": 26,
                        "string": "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) (Pataki, 2012) aims to measure the semantic similarity using abstract con-cepts from words in textual units."
                    },
                    {
                        "id": 27,
                        "string": "In our implementation, these concepts are given by a linked lexical resource called DBNary (Sérasset, 2015) ."
                    },
                    {
                        "id": 28,
                        "string": "Cross-Language Alignment-based Similarity Analysis (CL-ASA) aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus (Barrón-Cedeño et al."
                    },
                    {
                        "id": 29,
                        "string": "(2008) , Pinto et al."
                    },
                    {
                        "id": 30,
                        "string": "(2009) )."
                    },
                    {
                        "id": 31,
                        "string": "Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model (Gabrilovich and Markovitch, 2007) , which represents the meaning of a document by a vector based on concepts derived from Wikipedia."
                    },
                    {
                        "id": 32,
                        "string": "It was reused by Potthast et al."
                    },
                    {
                        "id": 33,
                        "string": "(2008) in the context of cross-language document retrieval."
                    },
                    {
                        "id": 34,
                        "string": "Translation + Monolingual Analysis (T+MA) consists in translating the two units into the same language, in order to operate a monolingual comparison between them (Barrón-Cedeño, 2012)."
                    },
                    {
                        "id": 35,
                        "string": "We use the Muhr et al."
                    },
                    {
                        "id": 36,
                        "string": "(2010) approach using DBNary (Sérasset, 2015) , followed by monolingual matching based on bags of words."
                    },
                    {
                        "id": 37,
                        "string": "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al."
                    },
                    {
                        "id": 38,
                        "string": "(2016)'s paper."
                    },
                    {
                        "id": 39,
                        "string": "We build a distance matrix of size N x M , with M = 1,000 and N = |S| where S is the evaluated sub-corpus."
                    },
                    {
                        "id": 40,
                        "string": "Each textual unit of S is compared to itself (to its corresponding unit in the target language, since this is cross-lingual similarity detection) and to M -1 other units randomly selected from S. The same unit may be selected several times."
                    },
                    {
                        "id": 41,
                        "string": "Then, a matching score for each comparison performed is obtained, leading to the distance matrix."
                    },
                    {
                        "id": 42,
                        "string": "Thresholding on the matrix is applied to find the threshold giving the best F 1 score."
                    },
                    {
                        "id": 43,
                        "string": "The F 1 score is the harmonic mean of precision and recall."
                    },
                    {
                        "id": 44,
                        "string": "Precision is defined as the proportion of relevant matches (similar cross-language units) retrieved among all the matches retrieved."
                    },
                    {
                        "id": 45,
                        "string": "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve."
                    },
                    {
                        "id": 46,
                        "string": "Each method is applied on each EN-FR sub-corpus for chunk and sentence granularities."
                    },
                    {
                        "id": 47,
                        "string": "For each configuration (i.e."
                    },
                    {
                        "id": 48,
                        "string": "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units."
                    },
                    {
                        "id": 49,
                        "string": "Proposed Methods The main idea of word embeddings is that their representation is obtained according to the context (the words around it)."
                    },
                    {
                        "id": 50,
                        "string": "The words are projected on a continuous space and those with similar context should be close in this multi-dimensional space."
                    },
                    {
                        "id": 51,
                        "string": "A similarity between two word vectors can be measured by cosine similarity."
                    },
                    {
                        "id": 52,
                        "string": "So using wordembeddings for plagiarism detection is appealing since they can be used to calculate similarity between sentences in the same or in two different languages (they capture intrinsically synonymy and morphological closeness)."
                    },
                    {
                        "id": 53,
                        "string": "We use the MultiVec (Berard et al., 2016) toolkit for computing and managing the continuous representations of the texts."
                    },
                    {
                        "id": 54,
                        "string": "It includes word2vec (Mikolov et al., 2013) , paragraph vector (Le and Mikolov, 2014) and bilingual distributed representations (Luong et al., 2015) features."
                    },
                    {
                        "id": 55,
                        "string": "The corpus used to build the vectors is the News Commentary 2 parallel corpus."
                    },
                    {
                        "id": 56,
                        "string": "For training our embeddings, we use CBOW model with a vector size of 100, a window size of 5, a negative sampling parameter of 5, and an alpha of 0.02."
                    },
                    {
                        "id": 57,
                        "string": "Improving Textual Similarity Using Word Embeddings (CL-CTS-WE and CL-WES) We introduce two new methods."
                    },
                    {
                        "id": 58,
                        "string": "First, we propose to replace the lexical resource used in CL-CTS (i.e."
                    },
                    {
                        "id": 59,
                        "string": "DBNary) by distributed representation of words."
                    },
                    {
                        "id": 60,
                        "string": "We call this new implementation CL-CTS-WE."
                    },
                    {
                        "id": 61,
                        "string": "More precisely, CL-CTS-WE uses the top 10 closest words in the embeddings model to build the BOW of a word."
                    },
                    {
                        "id": 62,
                        "string": "Secondly, we implement a more straightforward method (CL-WES), which performs a direct comparison between two sentences in different languages, through the use of word embeddings."
                    },
                    {
                        "id": 63,
                        "string": "It consists in a cosine similarity on distributed representations of the sentences, which are the summation of the embeddings vectors of each word of the sentences."
                    },
                    {
                        "id": 64,
                        "string": "Let U a textual unit, the n words of the unit are represented by u i as: U = {u 1 , u 2 , u 3 , ..., u n } (1) If U x and U y are two textual units in two different languages, CL-WES builds their (bilingual) common representation vectors V x and V y and applies a cosine similarity between them."
                    },
                    {
                        "id": 65,
                        "string": "A distributed representation V of a textual unit U is calculated as follows: V = n i=1 (vector(u i )) (2) where u i is the i th word of the textual unit and vector is the function which gives the word embedding vector of a word."
                    },
                    {
                        "id": 66,
                        "string": "This feature is available in MultiVec 3 (Berard et al., 2016) ."
                    },
                    {
                        "id": 67,
                        "string": "Cross-Language Word Embedding-based Syntax Similarity (CL-WESS) Our next innovation is the improvement of CL-WES by introducing a syntax flavour in it."
                    },
                    {
                        "id": 68,
                        "string": "Let U a textual unit, the n words of the unit are represented by u i as expressed in the formula (1)."
                    },
                    {
                        "id": 69,
                        "string": "First, we syntactically tag U with a part-of-speech tagger (TreeTagger (Schmid, 1994) ) and we normalize the tags with Universal Tagset of Petrov et al."
                    },
                    {
                        "id": 70,
                        "string": "(2012) ."
                    },
                    {
                        "id": 71,
                        "string": "Then, we assign a weight to each type of tag: this weight will be used to compute the final vector representation of the unit."
                    },
                    {
                        "id": 72,
                        "string": "Finally, we optimize the weights with the help of Condor (Berghen and Bersini, 2005) ."
                    },
                    {
                        "id": 73,
                        "string": "Condor applies a Newton's method with a trust region algorithm to determinate the weights that optimize the F 1 score."
                    },
                    {
                        "id": 74,
                        "string": "We use the first two folds of each sub-corpus to determinate the optimal weights."
                    },
                    {
                        "id": 75,
                        "string": "The formula of the syntactic aggregation is: V = n i=1 (weight(pos(u i )).vector(u i )) (3) where u i is the i th word of the textual unit, pos is the function which gives the universal part-ofspeech tag of a word, weight is the function which gives the weight of a part-of-speech, vector is the function which gives the word embedding vector of a word and ."
                    },
                    {
                        "id": 76,
                        "string": "is the scalar product."
                    },
                    {
                        "id": 77,
                        "string": "If U x and U y are two textual units in two different languages, we build their representation vectors V x and V y following the formula (3) instead of (2), and apply a cosine similarity between them."
                    },
                    {
                        "id": 78,
                        "string": "We call this method CL-WESS and we have implemented it in MultiVec (Berard et al., 2016) ."
                    },
                    {
                        "id": 79,
                        "string": "It is important to note that, contrarily to what is done in other tasks such as neural parsing (Chen and Manning, 2014) , we did not use POS information as an additional vector input because we considered it would be more useful to use it to weight the contribution of each word to the sentence representation, according to its morpho-syntactic category."
                    },
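                    {
                        "id": "79a",
                        "string": "A minimal sketch of the aggregation of formula (3), reusing the numpy-based sentence_vector conventions above; pos_weights is the per-tag weight table that the paper tunes with Condor (here it is just an ordinary argument):\ndef cl_wess_vector(tagged_tokens, wv, pos_weights):\n    # Formula (3): V = sum over i of weight(pos(u_i)) * vector(u_i).\n    # tagged_tokens holds (token, universal_pos) pairs, i.e. TreeTagger\n    # output mapped to the Universal Tagset.\n    vecs = [pos_weights.get(pos, 1.0) * wv[tok]\n            for tok, pos in tagged_tokens if tok in wv]\n    return np.sum(vecs, axis=0) if vecs else np.zeros(wv.vector_size)"
                    },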
                    {
                        "id": 80,
                        "string": "Combining multiple methods Weighted Fusion We try to combine our methods to improve crosslanguage similarity detection performance."
                    },
                    {
                        "id": 81,
                        "string": "During weighted fusion, we assign one weight to the similarity score of each method and we calculate a (weighted) composite score."
                    },
                    {
                        "id": 82,
                        "string": "We optimize the distribution of the weights with Condor (Berghen and Bersini, 2005) ."
                    },
                    {
                        "id": 83,
                        "string": "We use the first two folds of each sub-corpus to determinate the optimal weights, while the other eight folds evaluate the fusion."
                    },
                    {
                        "id": 84,
                        "string": "We also try an average fusion, i.e."
                    },
                    {
                        "id": 85,
                        "string": "a weighted fusion where all the weights are equal."
                    },
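                    {
                        "id": "85a",
                        "string": "A minimal sketch of the weighted fusion (with equal weights it becomes the average fusion); the names are illustrative:\ndef weighted_fusion(scores, weights):\n    # scores and weights are dicts keyed by method name; the composite\n    # score is the weighted average of the per-method similarities.\n    total = sum(weights[m] for m in scores)\n    return sum(weights[m] * s for m, s in scores.items()) / total"
                    },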
                    {
                        "id": 86,
                        "string": "Regardless of their capacity to predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e."
                    },
                    {
                        "id": 87,
                        "string": "their ability to correctly separate the positives (similar units) and the negatives (different units) in order to minimize the doubts on the classification."
                    },
                    {
                        "id": 88,
                        "string": "Distribution histograms on Figure 1 highlight the fact that each method has its own fingerprint."
                    },
                    {
                        "id": 89,
                        "string": "Even if two methods look equivalent in term of final performance, their distribution can be different."
                    },
                    {
                        "id": 90,
                        "string": "One explanation is that the methods do not process on the same way."
                    },
                    {
                        "id": 91,
                        "string": "Some methods are lexical-syntax-based, others process by aligning concepts (more semantic) and still others capture context with word vectors."
                    },
                    {
                        "id": 92,
                        "string": "For instance, CL-C3G has a narrow distribution of negatives and a broad distribution for positives (Figure 1 (a) ), whereas the opposite is true for CL-ASA (Figure 1 (b) )."
                    },
                    {
                        "id": 93,
                        "string": "We try to exploit this complementarity using decision tree based fusion."
                    },
                    {
                        "id": 94,
                        "string": "We use the C4.5 algorithm (Quinlan, 1993) implemented in Weka 3.8.0 (Hall et al., 2009 )."
                    },
                    {
                        "id": 95,
                        "string": "The first two folds of each sub-corpus are used to determinate the optimal decision tree and the other eight folds to evaluate the fusion (same protocol as weighted fusion)."
                    },
                    {
                        "id": 96,
                        "string": "While analyzing the trained decision tree, we see that CL-C3G, CL-WESS and CL-CTS-WE are the closest to the root."
                    },
                    {
                        "id": 97,
                        "string": "This confirms their relevance for similarity detection, as well as their complementarity."
                    },
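                    {
                        "id": "97a",
                        "string": "A minimal sketch of the decision tree fusion, using scikit-learn's DecisionTreeClassifier as a stand-in for the Weka C4.5 implementation actually used (the two differ in splitting and pruning details; the random data is only a placeholder for the fold protocol):\nimport numpy as np\nfrom sklearn.tree import DecisionTreeClassifier\n\n# One row per unit pair, one column per method score (8 methods);\n# labels are 1 for matching pairs, 0 otherwise. Folds 1-2 train,\n# folds 3-10 evaluate, as in the weighted fusion protocol.\nrng = np.random.default_rng(0)\nX_train, y_train = rng.random((200, 8)), rng.integers(0, 2, 200)\nX_test = rng.random((50, 8))\n\ntree = DecisionTreeClassifier(criterion='entropy').fit(X_train, y_train)\npredictions = tree.predict(X_test)"
                    },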
                    {
                        "id": 98,
                        "string": "Decision Tree Fusion Results and Discussion Use of word embeddings."
                    },
                    {
                        "id": 99,
                        "string": "We can see in Table 1 that the use of distributed representation of words instead of lexical resources improves CL-CTS (CL-CTS-WE obtains overall performance gain of +3.83% on chunks and +3.19% on sentences)."
                    },
                    {
                        "id": 100,
                        "string": "Despite this improvement, CL-CTS-WE remains less efficient than CL-C3G."
                    },
                    {
                        "id": 101,
                        "string": "While the use of bilingual sentence vector (CL-WES) is simple and elegant, its performance is lower than three state-of-the-art methods."
                    },
                    {
                        "id": 102,
                        "string": "However, its syntactically weighted version (CL-WESS) looks very promising and boosts the CL-WES overall performance by +11.78% on chunks and +14.92% on sentences."
                    },
                    {
                        "id": 103,
                        "string": "Thanks to this improvement, CL-WESS is significantly better than CL-C3G (+2.97% on chunks and +7.01% on sentences) and is the best single method evaluated so far on our corpus."
                    },
                    {
                        "id": 104,
                        "string": "Fusion."
                    },
                    {
                        "id": 105,
                        "string": "Results of the decision tree fusion are reported at both chunk and sentence level in Table 1."
                    },
                    {
                        "id": 106,
                        "string": "Weighted and average fusion are only re-  ported at chunk level."
                    },
                    {
                        "id": 107,
                        "string": "In each case, we combine the 8 previously presented methods (the 5 state-of-the-art and the 3 new methods)."
                    },
                    {
                        "id": 108,
                        "string": "Weighted fusion outperforms the state-of-the-art and the embedding-based methods in any case."
                    },
                    {
                        "id": 109,
                        "string": "Nevertheless, fusion based on a decision tree looks much more efficient."
                    },
                    {
                        "id": 110,
                        "string": "At chunk level, decision tree fusion leads to an overall F 1 score of 89.15% while the precedent best weighted fusion obtains 80.01% and the best single method only obtains 53.73%."
                    },
                    {
                        "id": 111,
                        "string": "The trend is the same at the sentence level where decision tree fusion largely overpasses any other method (88.50% against 56.35% for the best single method)."
                    },
                    {
                        "id": 112,
                        "string": "In our evaluation, the best decision tree, for an overall higher than 85% of correct classification on both levels, involves at a minimum CL-C3G, CL-WESS and CL-CTS-WE."
                    },
                    {
                        "id": 113,
                        "string": "These results confirm that different methods proposed complement each other, and that embeddings are useful for cross-language textual similarity detection."
                    },
                    {
                        "id": 114,
                        "string": "Conclusion and Perspectives We have augmented several baseline approaches using word embeddings."
                    },
                    {
                        "id": 115,
                        "string": "The most promising approach is a cosine similarity on syntactically weighted distributed representation of sentence (CL-WESS), which beats in overall the precedent best state-of-the-art method."
                    },
                    {
                        "id": 116,
                        "string": "Finally, we have also demonstrated that all methods are complementary and their fusion significantly helps crosslanguage textual similarity detection performance."
                    },
                    {
                        "id": 117,
                        "string": "At chunk level, decision tree fusion leads to an overall F 1 score of 89.15% while the precedent best weighted fusion obtains 80.01% and the best single method only obtains 53.73%."
                    },
                    {
                        "id": 118,
                        "string": "The trend is the same at the sentence level where decision tree fusion largely overpasses any other method."
                    },
                    {
                        "id": 119,
                        "string": "Our future short term goal is to work on the improvement of CL-WESS by analyzing the syntactic weights or even adapt them according to the plagiarist's stylometry."
                    },
                    {
                        "id": 120,
                        "string": "We have also made a submission at the SemEval-2017 Task 1, i.e."
                    },
                    {
                        "id": 121,
                        "string": "the task on Semantic Textual Similarity detection."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 12
                    },
                    {
                        "section": "Dataset",
                        "n": "2.1",
                        "start": 13,
                        "end": 17
                    },
                    {
                        "section": "Overview of State-of-the-Art Methods",
                        "n": "2.2",
                        "start": 18,
                        "end": 36
                    },
                    {
                        "section": "Evaluation Protocol",
                        "n": "2.3",
                        "start": 37,
                        "end": 48
                    },
                    {
                        "section": "Proposed Methods",
                        "n": "3",
                        "start": 49,
                        "end": 56
                    },
                    {
                        "section": "Improving Textual Similarity Using",
                        "n": "3.1",
                        "start": 57,
                        "end": 66
                    },
                    {
                        "section": "Cross-Language Word Embedding-based Syntax Similarity (CL-WESS)",
                        "n": "3.2",
                        "start": 67,
                        "end": 79
                    },
                    {
                        "section": "Weighted Fusion",
                        "n": "4.1",
                        "start": 80,
                        "end": 97
                    },
                    {
                        "section": "Results and Discussion",
                        "n": "5",
                        "start": 98,
                        "end": 113
                    },
                    {
                        "section": "Conclusion and Perspectives",
                        "n": "6",
                        "start": 114,
                        "end": 121
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1235-Table1-1.png",
                        "caption": "Table 1: Average F1 scores and confidence intervals of cross-language similarity detection methods applied on EN→FR sub-corpora – 8 folds validation.",
                        "page": 4,
                        "bbox": {
                            "x1": 78.72,
                            "x2": 519.36,
                            "y1": 62.879999999999995,
                            "y2": 309.12
                        }
                    },
                    {
                        "filename": "../figure/image/1235-Figure1-1.png",
                        "caption": "Figure 1: Distribution histograms of two state-ofthe-art methods for 1000 positives and 1000 negatives (mis)matches.",
                        "page": 3,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 287.03999999999996,
                            "y1": 394.56,
                            "y2": 725.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-57"
        },
        {
            "slides": {
                "1": {
                    "title": "NLP4NLP Corpus",
                    "text": [
                        "Presently conduct large scholar analysis of NLP domain",
                        "Production, Collaboration, Citation, Innovation",
                        "Major conferences (ACL, IEEE-ICASSP, ISCA-Interspeech,",
                        "ELRA-LREC, etc.) and Journals (IEEE-TASLP, CL,",
                        "SpeechCom, CSAL, LRE, etc.)",
                        "558 Venues (conferences) / Issues (journals)",
                        "short name # docs format long name language access to content period # venues",
                        "acl conference Association for Computational Linguistics Conference English open access *",
                        "acmtslp journal ACM Transaction on Speech and Language Processing English private access",
                        "alta conference Australasian Language Technology Association English open access *",
                        "anlp conference Applied Natural Language Processing English open access *",
                        "cath journal Computers and the Humanities English private access",
                        "cl journal American Journal of Computational Linguistics English open access *",
                        "coling conference Conference on Computational Linguistics English open access *",
                        "conll conference Computational Natural Language Learning English open access *",
                        "csal journal Computer Speech and Language English private access eacl conference European Chapter of the ACL English open access * emnlp conference Empirical methods in natural language processing English open access * hlt conference Human Language Technology English open access *",
                        "icassps conference IEEE International Conference on Acoustics, Speech and Signal Processing - Speech Track English private access ijcnlp conference International Joint Conference on NLP English open access * inlg conference International Conference on Natural Language Generation English open access * isca conference International Speech Communication Association English open access jep conference Journees d'Etudes sur la Parole French open access * lre journal Language Resources and Evaluation English private access lrec conference Language Resources and Evaluation Conference English open access * ltc conference Language and Technology Conference English private access modulad journal Le Monde des Utilisateurs de L'Analyse des Donnees French open access mts conference Machine Translation Summit English open access muc conference Message Understanding Conference English open access * naacl conference North American Chapter of ACL English open access *",
                        "paclic conference Pacific Asia Conference on Language, Information and Computation English open access * ranlp conference Recent Advances in Natural Language Processing English open access * sem conference Lexical and Computational Semantics / Semantic Evaluation English open access * speechc journal Speech Communication English private access tacl journal Transactions of the Association for Computational Linguistics English open access * tal journal Revue Traitement Automatique du Langage French open access taln conference Traitement Automatique du Langage Naturel French open access * taslp journal IEEE/ACM Transactions on Audio, Speech and Language Processing English private access tipster conference Tipster DARPA text program English open access * trec conference Text Retrieval Conference English open access Total incl. duplicates Total excl. duplicates"
                    ],
                    "page_nums": [
                        2,
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Each year Papers of the focus borrowing papers of the search",
                    "text": [
                        "Focus NLP4NLP (Same year or previous years)",
                        "Self-Reusing Self-Plagiarizing Reusing Plagiarizing"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "4": {
                    "title": "Each year Papers of the focus being borrowed by papers of",
                    "text": [
                        "Focus NLP4NLP (Same year or following years)",
                        "Self-Reused Self-Plagiarized Reused Plagiarized"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "6": {
                    "title": "Raw text versus LP",
                    "text": [
                        "Strategy Backward study Forward study document pairs# after document pairs# document pairs# duplicate pruning",
                        "2. Linguistic processing (LP)"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "10": {
                    "title": "Self Reuse Plagiarism",
                    "text": [
                        "In 61% of the cases, authors do not quote the source paper",
                        "130 papers have both the same title and the same list of authors",
                        "205 papers have the same title",
                        "Some specific cases (largest similarities)",
                        "Republishing the corrigendum of a previously published paper",
                        "Republishing a paper with a small difference in the title and one missing author in the authors list",
                        "Same research center described by the same author in two different conferences, with an overlapping of 90%",
                        "2 papers presented by the same author in 2 successive conferences, the difference being primarily in the name of the 2 systems being presented, that have been funded by the same project agency in 2 different contracts, with an overlapping of 45%",
                        "Used Using acl acmtslp alta anlp cath cl coling conll csal eacl emnlp hlt icassps ijcnlp inlg isca jep lre lrec ltc modulad mts muc naacl paclic ranlp sem speechc tacl tal taln taslp tipster trec",
                        "Total used Total using Difference",
                        "anlp anlp cath cath cl cl coling coling conll conll csal csal eacl eacl emnlp emnlp hlt hlt icassps icassps ijcnlp ijcnlp inlg inlg isca isca jep jep lre lre lrec lrec ltc ltc modulad modulad mts mts muc muc naacl naacl paclic paclic ranlp ranlp sem sem speechc speechc tacl tacl tal tal taln taln taslp taslp tipster tipster trec trec Total using"
                    ],
                    "page_nums": [
                        14,
                        16
                    ],
                    "images": []
                },
                "12": {
                    "title": "Reuse and Plagiarism",
                    "text": [
                        "261 cases : manual checking",
                        "25 have a least one author in common, but with a somehow different spelling, and should therefore be placed in the Self- plagiarism category",
                        "14 correctly quote the source paper, but with variants in the spelling of the authors names, of the papers title or of the conference or journal source, or correctly citing the source paper but forgetting to place it among the references, and should therefore be placed in the Reuse category.",
                        "After manual corrections: 224 cases (0.33% of papers)",
                        "In 52% of the cases, authors do not quote the source paper",
                        "This results in 117 possible cases of plagiarism (0.17%):",
                        "The copying paper cites another reference from the same authors of the source paper",
                        "(typically a previous reference, or a paper published in a Journal) (46 cases)",
                        "Both papers use extracts of a third paper that they both cite (31 cases)",
                        "Authors of the two papers are different, but from same laboratory (typical in industrial laboratories or funding agencies) (11 cases)",
                        "Authors of the two papers previously co-authored papers (typically as supervisor and PhD student or postdoc) but are now in different laboratories (11 cases)",
                        "Authors of the papers are different, but collaborated in the same project which is presented in the two papers (2 cases) The two papers present the same short example, result or definition coming from another source (13 cases) Only 3 remaining cases of possible plagiarism: same paper as a patchwork of 3 other papers, while sharing several references with them.",
                        "Used Using acl acmtslp alta anlp cath cl coling conll csal eacl emnlp hlt icassps ijcnlp inlg isca jep lre lrec ltc modulad mts muc naacl paclic ranlp sem speechc tacl tal taln taslp tipster trec",
                        "Total used Total using Difference",
                        "anlp anlp cath cath cl cl coling coling conll conll csal csal eacl eacl emnlp emnlp hlt hlt icassps icassps ijcnlp ijcnlp inlg inlg isca isca jep jep lre lre lrec lrec ltc ltc modulad modulad mts mts muc muc naacl naacl paclic paclic ranlp ranlp sem sem speechc speechc tacl tacl tal tal taln taln taslp taslp tipster tipster trec trec Total using"
                    ],
                    "page_nums": [
                        17,
                        20,
                        22
                    ],
                    "images": []
                },
                "13": {
                    "title": "Variants in Spelling Authors Name",
                    "text": [
                        "Non-Linear Probability Estimation Method",
                        "Used in HMM for Modeling Frame Correlation",
                        "Qing Guo, Fang Zheng, Jian Wu, and Wenhu Wu",
                        "An New Method Used in HMM for Modeling",
                        "Guo Qing, Zheng Fang, Wu Jian and Wu Wenhu"
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": []
                },
                "14": {
                    "title": "Variants in Spelling References",
                    "text": [
                        "Quoted Reference: Graham W. (2007) an OWL",
                        "Ontology for HPSG proceeding of the ACL 2007",
                        "Correct Reference: Graham Wilcock (2007), An",
                        "OWL Ontology for HPSG",
                        "Quoted Reference: Li Liu, Jianglong He, On the use of orthogonal GMM in speaker verification",
                        "Correct Reference: Li Liu and Jialong He, On the use of orthogonal GMM in speaker recognition"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "18": {
                    "title": "Self Plagiarism or Fair Use",
                    "text": [
                        "(Pamela Samuelson, Comm. of ACM 1994)",
                        "The previous work must be restated to lay the groundwork for a new contribution in the second work,",
                        "Portions of the previous work must be repeated to deal with new evidence or arguments,",
                        "The audience for each work is so different that publishing the same work in different places is necessary to get the message out,",
                        "The authors think they said it so well the first time that it makes no sense to say it differently a second time.",
                        "30% as an upper limit in the reuse of parts of a previously published paper.",
                        "Only 1.3% of NLP4NLP papers go beyond this limit"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "19": {
                    "title": "Plagiarism Right to Quote",
                    "text": [
                        "National legislations usually embody the Berne convention limits in one or more of the following requirements:",
                        "the cited paragraphs are within a reasonable limit,",
                        "<= 10% of the copied / copying papers in France / Canada",
                        "Only 0.05% of NLP4NLP papers go beyond this limit",
                        "the cited paragraphs are clearly marked as quotations and fully referenced, the resulting new work is not just a collection of quotations, but constitutes a fully original work in itself.",
                        "the copied paragraphs must have a function in the goal of the copying paper."
                    ],
                    "page_nums": [
                        26
                    ],
                    "images": []
                }
            },
            "paper_title": "A Study of Reuse and Plagiarism in Speech and Natural Language Processing papers",
            "paper_id": "1246",
            "paper": {
                "title": "A Study of Reuse and Plagiarism in Speech and Natural Language Processing papers",
                "abstract": "The aim of this experiment is to present an easy way to compare fragments of texts in order to detect (supposed) results of copy & paste operations between articles in the domain of Natural Language Processing, including Speech Processing (NLP). The search space of the comparisons is a corpus labelled as NLP4NLP, which includes 34 different sources and gathers a large part of the publications in the NLP field over the past 50 years. This study considers the similarity between the papers of each individual source and the complete set of papers in the whole corpus, according to four different types of relationship (self-reuse, self-plagiarism, reuse and plagiarism) and in both directions: a source paper borrowing a fragment of text from another paper of the collection, or in the reverse direction, fragments of text from the source paper being borrowed and inserted in another paper of the collection.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Everything starts with a copy & paste and, of course the flood of documents that we see today could not exist without the practical ease of copy & paste."
                    },
                    {
                        "id": 1,
                        "string": "This is not new but what is new is that the availability of archives allows us to study a vast amount of papers in our domain (i.e."
                    },
                    {
                        "id": 2,
                        "string": "Natural Language Processing, NLP, both for written and spoken materials) and to figure out the level of reuse and plagiarism in this area."
                    },
                    {
                        "id": 3,
                        "string": "Context Our work comes after the various studies initiated in the Workshop entitled: \"Rediscovering 50 Years of Discoveries in Natural Language Processing\" on the occasion of ACL's 50th anniversary in 2012 [Radev et al 2013] where a group of researchers studied the content of the corpus recorded in the ACL Anthology [Bird et al 2008] ."
                    },
                    {
                        "id": 4,
                        "string": "Among these studies, one was devoted to reuse and it is worth quoting Gupta and Rosso [Gupta et al 2012]: \"It becomes essential to check the authenticity and the novelty of the submitted text before the acceptance."
                    },
                    {
                        "id": 5,
                        "string": "It becomes nearly impossible for a human judge (reviewer) to discover the source of the submitted work, if any, unless the source is already known."
                    },
                    {
                        "id": 6,
                        "string": "Automatic plagiarism detection applications identify such potential sources for the submitted work and based on it a human judge can easily take the decision\"."
                    },
                    {
                        "id": 7,
                        "string": "Let's add that this subject is a specific and active domain ruled yearly by the PAN international plagiarism detection competition 1 ."
                    },
                    {
                        "id": 8,
                        "string": "On our side, we also conducted a specific study of reuse and plagiarism in the papers published at the Language Resources and Evaluation conference (LREC), from 1998 to 2014 [Francopoulo et al 2016]."
                    },
                    {
                        "id": 9,
                        "string": "Objectives Our aim is not to present the state-of-art or to compare the various metrics and algorithms for reuse and plagiarism detection, see [Hoad et al 2003] [HaCohen-Kerner et al 2010] for instance."
                    },
                    {
                        "id": 10,
                        "string": "We position our work as an extrinsic detection, the aim of which is to find near-matches between texts, as opposed to intrinsic detection whose aim is to show that different parts of a presumably single-author text could not have been written by the In contrast, our main objective is to deal with the entry level of the detection."
                    },
                    {
                        "id": 11,
                        "string": "The main question is: Is there a meaningful difference in taking the verbatim raw strings compared with the result of a linguistic parsing?"
                    },
                    {
                        "id": 12,
                        "string": "A secondary objective is to present and study a series of ascertainments about the practices of our specific field."
                    },
                    {
                        "id": 13,
                        "string": "The corpus: NLP4NLP The corpus is a large content of our own research field, i.e."
                    },
                    {
                        "id": 14,
                        "string": "NLP, covering both written and spoken language processing sub-domains and extended to a limited number of corpora, for which Information Retrieval and NLP activities intersect."
                    },
                    {
                        "id": 15,
                        "string": "This corpus was collected at IMMI-CNRS and LIMSI-CNRS (France) and is named NLP4NLP 2 ."
                    },
                    {
                        "id": 16,
                        "string": "It currently contains 65,003 documents coming from various conferences and journals with either public or restricted access."
                    },
                    {
                        "id": 17,
                        "string": "This is a large part of the existing published articles in our field, apart from the workshop proceedings and the published books."
                    },
                    {
                        "id": 18,
                        "string": "The time period spans 50 years from 1965 to 2015."
                    },
                    {
                        "id": 19,
                        "string": "Broadly speaking, and aside from the small corpora, one third comes from the ACL Anthology 3 , one third from the ISCA Archive 4 and one third from IEEE 5 A phase of preprocessing has been applied to represent the various sources in a common format."
                    },
                    {
                        "id": 20,
                        "string": "This format follows the organization of the ACL Anthology with two parts in parallel for each document: the metadata and the content."
                    },
                    {
                        "id": 21,
                        "string": "Each document is labeled with a unique identifier, for instance \"lrec2000_1\" is reified on the hard disk as two files: \"lrec2000_1.bib\" and \"lrec2000_1.pdf\"."
                    },
                    {
                        "id": 22,
                        "string": "For the metadata, we faced four different types of sources with different flavors and character encodings: BibTeX (e.g."
                    },
                    {
                        "id": 23,
                        "string": "ACL Anthology), custom XML (e.g."
                    },
                    {
                        "id": 24,
                        "string": "TALN), database downloads (e.g."
                    },
                    {
                        "id": 25,
                        "string": "IEEE) or HTML program of the conference (e.g."
                    },
                    {
                        "id": 26,
                        "string": "TREC)."
                    },
                    {
                        "id": 27,
                        "string": "We wrote a series of small Java programs to transform these metadata into a common BibTeX format under UTF8."
                    },
                    {
                        "id": 28,
                        "string": "Each file comprises the author names and the title."
                    },
                    {
                        "id": 29,
                        "string": "The file is located in a directory which designates the year and the corpus."
                    },
                    {
                        "id": 30,
                        "string": "Concerning the content, we faced different formats possibly for the same corpus, and the amount of documents being huge, we cannot designate the file type by hand individually."
                    },
                    {
                        "id": 31,
                        "string": "To deal with this, we wrote a program to self-detect the type and sub-type as follows: A small amount of texts are in raw text: we keep them in this format."
                    },
                    {
                        "id": 32,
                        "string": "The vast majority of the documents are in PDF format of different sub-types."
                    },
                    {
                        "id": 33,
                        "string": "First, we used PDFBox 7 to determine the sub-type of the PDF content: when the content is a textual content, we use PDFBox 3 http://aclweb.org/anthology 4 www.isca-speech.org/iscaweb/index.php/archive/online-archive 5 https://www.ieee.org/index.html 6 In the case of a joint conference, the papers are counted twice."
                    },
                    {
                        "id": 34,
                        "string": "This number reduces to 65,003, if we count only once duplicated papers."
                    },
                    {
                        "id": 35,
                        "string": "Similarly, the number of venues is 577 when all venues are counted, but this number reduces to 558 when the 19 joint conferences are counted only once."
                    },
                    {
                        "id": 36,
                        "string": "again to extract the text, possibly with the use of the \"Legion of the Bouncy Castle\" 8 to extract the encrypted content."
                    },
                    {
                        "id": 37,
                        "string": "When the PDF is a text under the form of an image, we use PDFBox to extract the images and then Tesseract OCR 9 to transform the images into a textual content."
                    },
                    {
                        "id": 38,
                        "string": "Then, and after some experiments, two filters are applied to avoid getting rubbish content: The content should be at least 900 characters."
                    },
                    {
                        "id": 39,
                        "string": "The content should be of good quality."
                    },
                    {
                        "id": 40,
                        "string": "In order to evaluate this quality, the content is analyzed by the morphological module of TagParser [Francopoulo 2007], a deep industrial parser based on a broad English lexicon and Global Atlas (a knowledge base containing more than one million words from 18 Wikipedias) [Francopoulo et al."
                    },
                    {
                        "id": 41,
                        "string": "2013 ] to detect out-of-the-vocabulary (OOV) words."
                    },
                    {
                        "id": 42,
                        "string": "Based on the hypothesis that rubbish strings are OOV words, we retain a text when the ratio OOV / number of words is less than 9%."
                    },
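                    {
                        "id": "42a",
                        "string": "A minimal sketch of these two filters, with is_oov standing in for the lexicon lookup of TagParser (an illustrative predicate, not the actual parser API):\ndef keep_text(text, tokens, is_oov):\n    # Filter 1: at least 900 characters of extracted content.\n    if len(text) < 900:\n        return False\n    # Filter 2: ratio of out-of-vocabulary words below 9%.\n    oov = sum(1 for t in tokens if is_oov(t))\n    return oov / max(len(tokens), 1) < 0.09"
                    },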
                    {
                        "id": 43,
                        "string": "We then apply a set of symbolic rules to split the abstract, body and reference section."
                    },
                    {
                        "id": 44,
                        "string": "The file is recorded in XML."
                    },
                    {
                        "id": 45,
                        "string": "It should be noted that we made some experiments with other strategies, given the fact that we are able to compare them with respect to a quantitative evaluation of the quality, as explained before."
                    },
                    {
                        "id": 46,
                        "string": "The first experiment was to use ParsCit 10 [Councill et al."
                    },
                    {
                        "id": 47,
                        "string": "2008 ] but the evaluation of the quality was bad, specially when the content is not pure ASCII."
                    },
                    {
                        "id": 48,
                        "string": "The result on accentuated Latin strings, or Arabic and Russian contents was awful."
                    },
                    {
                        "id": 49,
                        "string": "We also tried Grobid 11 but we did not succeed to run it correctly on Windows."
                    },
                    {
                        "id": 50,
                        "string": "A semi-automatic cleaning process was applied on the metadata in order to avoid false duplicates concerning middle names (for X Y Z, is Y a second given name or the first part of the family name?)"
                    },
                    {
                        "id": 51,
                        "string": "and for this purpose, we use the specific BibTex format where the given name is separated from the family name with a comma."
                    },
                    {
                        "id": 52,
                        "string": "Then typographic variants (e.g."
                    },
                    {
                        "id": 53,
                        "string": "\"Jean-Luc\" versus \"Jean Luc\" or \"Herve\" versus \"Hervé\") were searched in a tedious process and false duplicates were normalized in order to be merged."
                    },
                    {
                        "id": 54,
                        "string": "The resulting number of different authors is 48,894. for more details about the extraction process as well as the solutions for some tricky problems like joint conferences management or abstract / body / reference sections detection."
                    },
                    {
                        "id": 55,
                        "string": "The majority (90%) of the documents come from conferences, the rest coming from journals."
                    },
                    {
                        "id": 56,
                        "string": "The overall number of words is roughly 270M."
                    },
                    {
                        "id": 57,
                        "string": "Initially, the texts are in four languages: English, French, German and Russian."
                    },
                    {
                        "id": 58,
                        "string": "The number of texts in German and Russian is less than 0.5%."
                    },
                    {
                        "id": 59,
                        "string": "They are detected automatically and are ignored."
                    },
                    {
                        "id": 60,
                        "string": "The texts in French are a little bit more numerous (3%), and are kept with the same status as the English ones."
                    },
                    {
                        "id": 61,
                        "string": "This is not a problem as our tool is able to process English and French."
                    },
                    {
                        "id": 62,
                        "string": "The corpus is a collection of documents of a single technical domain, which is NLP in the broad sense, and of course, some conferences are specialized in certain topics like written language processing, spoken language processing, including signal processing, information retrieval or machine translation."
                    },
                    {
                        "id": 63,
                        "string": "Definitions As the terminology is fuzzy and contradictory among the scientific literature, we need first to define four important terms in order to avoid any misunderstanding."
                    },
                    {
                        "id": 64,
                        "string": "The term \"self-reuse\" is used for a copy & paste when the source of the copy has an author who belongs to the group of authors of the text of the paste and when the source is cited."
                    },
                    {
                        "id": 65,
                        "string": "The term \"self-plagiarism\" is used for a copy & paste when the source of the copy has similarly an author who belongs to the group of authors of the text of the paste, but when the source is not cited."
                    },
                    {
                        "id": 66,
                        "string": "The term \"reuse\" is used for a copy & paste when the source of the copy has no author in the group of authors of the paste and when the source is cited."
                    },
                    {
                        "id": 67,
                        "string": "The term \"plagiarism\" is used for a copy & paste when the source of the copy has no author in the group of the paste and when the source is not cited."
                    },
                    {
                        "id": 68,
                        "string": "Said in other words, the terms \"self-reuse\" and \"reuse\" qualify a situation with a proper source citation, on the contrary of \"self-plagiarism\" and \"plagiarism\"."
                    },
                    {
                        "id": 69,
                        "string": "Let's note that in spite of the fact that the term \"self-plagiarism\" seems to be contradictory as authors should be free to use their own wordings, we use this term because it is the usual habit within the community of plagiarism detectionsome authors also use the term \"recycling\", for instance [HaCohen-Kerner et al 2010]."
                    },
                    {
                        "id": 70,
                        "string": "Directions Another point to clarify concerns the expression \"source papers\"."
                    },
                    {
                        "id": 71,
                        "string": "As a convention, we call \"focus\" the corpus corresponding to the source which is studied."
                    },
                    {
                        "id": 72,
                        "string": "The whole NL4NLP collection is the \"search space\"."
                    },
                    {
                        "id": 73,
                        "string": "We examine the copy & paste operations in both directions: we study the configuration with a source paper borrowing fragments of text from other papers of the NLP4NLP collection, in other words, a backward study, and we also study in the reverse direction the fragments of the source paper being borrowed by papers of the NLP4NLP collection, in other words, a forward study."
                    },
                    {
                        "id": 74,
                        "string": "Algorithm Comparison of word sequences has proven to be an effective method for detection of copy For each document of the focus (the source corpus), all the sliding windows 12 of lemmas (typically 5 to 7, excluding punctuations) are built and recorded under the form of a character string key in an index locally to a document."
                    },
                    {
                        "id": 75,
                        "string": "An index gathering all these local indexes is built and is called the \"focus index\"."
                    },
                    {
                        "id": 76,
                        "string": "For each document apart from the focus (i.e."
                    },
                    {
                        "id": 77,
                        "string": "outside the source corpus), all the sliding windows are built and only the windows contained in the focus index are recorded in an index locally to this document."
                    },
                    {
                        "id": 78,
                        "string": "This filtering operation is done to optimize the comparison phase, as there is no need to compare the windows out of the focus index."
                    },
                    {
                        "id": 79,
                        "string": "Then, the keys are compared to compute a similarity overlapping score [Lyon et al 2001] between documents D1 and D2, with the Jaccard distance: score(D1,D2) = shared windows# / union# (D1 windows, D2 windows)."
                    },
                    {
                        "id": 80,
                        "string": "The pairs of documents D1 / D2 are then filtered according to a threshold in order to retain only significant similarity scoring situations."
                    },
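                    {
                        "id": "80a",
                        "string": "A minimal sketch of the windowing and scoring steps (the real system first restricts each outside document to the windows present in the focus index; plain set operations are used here for brevity):\ndef windows(lemmas, n=7):\n    # All sliding windows of n lemmas, keyed as strings.\n    return {' '.join(lemmas[i:i + n]) for i in range(len(lemmas) - n + 1)}\n\ndef jaccard_score(lemmas1, lemmas2, n=7):\n    w1, w2 = windows(lemmas1, n), windows(lemmas2, n)\n    union = w1 | w2\n    return len(w1 & w2) / len(union) if union else 0.0"
                    },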
                    {
                        "id": 81,
                        "string": "Algorithm comments and evaluation In a first implementation, we compared the raw character strings with a segmentation based on space and punctuation."
                    },
                    {
                        "id": 82,
                        "string": "But, due to the fact that the input is the result of PDF formatting, the texts may contain variable caesura for line endings or some little textual variations."
                    },
                    {
                        "id": 83,
                        "string": "Our objective is to compare at a higher level than hyphen variation (there are different sorts of hyphens), caesura (the sequence X/-/endOfLine/Y needs to match an entry XY in the lexicon to distinguish from an hyphen binding a composition), upper/lower case variation, plural, orthographic variation (\"normalise\" versus \"normalize\"), spellchecking (particularly useful when the PDF is an image and when the extraction is of low quality) and abbreviation (\"NP\" versus \"Noun Phrase\" or \"HMM\" versus \"Hidden Markov Model\")."
                    },
                    {
                        "id": 84,
                        "string": "Some rubbish sequence of characters (e.g."
                    },
                    {
                        "id": 85,
                        "string": "a series of hyphens) were also detected and cleaned."
                    },
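                    {
                        "id": "85a",
                        "string": "A crude illustration of one such cleaning, rejoining words split by an end-of-line caesura before comparison (a stand-in for the lexicon-checked rejoining that TagParser performs):\ndef undo_caesura(text):\n    # Rejoin line-end splits such as 'hy-' + newline + 'phenation';\n    # the real pipeline checks the rejoined form against the lexicon.\n    return text.replace('-\\n', '')"
                    },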
                    {
                        "id": 86,
                        "string": "Given that a parser takes all these variations and cleanings into account, we decided to apply a full linguistic parsing, as a second strategy."
                    },
                    {
                        "id": 87,
                        "string": "The syntactic structures and relations are ignored."
                    },
                    {
                        "id": 88,
                        "string": "Then a module for entity linking is called in order to bind different names referring to the same entity, a process often labeled as \"entity linking\" ."
                    },
                    {
                        "id": 89,
                        "string": "Thus \"British National Corpus\" is considered as possibly abbreviated to \"BNC\", as well as less regular names like \"ItalWordNet\" possibly abbreviated to \"IWN\"."
                    },
                    {
                        "id": 90,
                        "string": "Each entry of the Knowledge Base has a canonical form, possibly associated with different variants: the aim is to normalize into a canonical form to neutralize proper noun obfuscations based on variant substitutions."
                    },
                    {
                        "id": 91,
                        "string": "After this processing, only the sentences with at least a verb are considered."
                    },
                    {
                        "id": 92,
                        "string": "We examined the differences between those two strategies concerning all types of copy & paste situations above the threshold, choosing the LREC source as the focus."
                    },
                    {
                        "id": 93,
                        "string": "The results are presented in Table 2 , with the last column adding the two other columns without the duplicates produced by the couples of the same year."
                    },
                    {
                        "id": 94,
                        "string": "The strategy based on linguistic processing provides more pairs (+158) and we examined these differences."
                    },
                    {
                        "id": 95,
                        "string": "Among these pairs, the vast majority (80%) concerns caesura: this is normal because most conferences demand a double column format, so the authors frequently use caesura to save place 13 ."
                    },
                    {
                        "id": 96,
                        "string": "The other differences (20%) are mainly caused by lexical variations and spellchecking."
                    },
                    {
                        "id": 97,
                        "string": "Thus, the results show that using raw texts gives a more \"silent\" system."
                    },
                    {
                        "id": 98,
                        "string": "The drawback is that the computation is much longer 14 , but we think that it is worth the value."
                    },
                    {
                        "id": 99,
                        "string": "Tuning parameters There are three parameters that had to be tuned: the window size, the distance function and the threshold."
                    },
                    {
                        "id": 100,
                        "string": "The main problem we had was that we did not have any gold standard to evaluate the quality specifically on our corpus and the burden to annotate a corpus was too heavy."
                    },
                    {
                        "id": 101,
                        "string": "We therefore decided to start from the parameters presented in the articles related to the PAN contest."
                    },
                    {
                        "id": 102,
                        "string": "We then computed the results, picked a random selection of pairs that we examined and tuned the parameters accordingly."
                    },
                    {
                        "id": 103,
                        "string": "All experiments were conducted with LREC as the focus and NLP4NLP as the search space."
                    },
                    {
                        "id": 104,
                        "string": "In the PAN related articles, different window sizes are used."
                    },
                    {
                        "id": 105,
                        "string": "A window of five is the most frequent one [Kasprzak et al 2010], but our results show that a lot of common sequences like \"the linguistic unit is the\" overload the pairwise score."
                    },
                    {
                        "id": 106,
                        "string": "After some trials, we decided to select a size of seven tokens, in agreement with [Citron and Ginsparg 2014]."
                    },
                    {
                        "id": 107,
                        "string": "Concerning the distance function, the Jaccard distance is frequently used but let's note that other formulas are applicable and documented in the literature."
                    },
                    {
                        "id": 108,
                        "string": "For instance, some authors use an approximation with the following formula: score(D1,D2) = shared windows# / min(D1 windows#, D2 windows#) [Clough et al 2009], which is faster to compute, because there is no need to compute the union."
                    },
                    {
                        "id": 109,
                        "string": "Given that computation time is not a problem for us, we kept the most used function which is the Jaccard distance."
                    },
                    {
                        "id": 110,
                        "string": "Concerning the threshold, we tried thresholds of 0.03 and 0.04 (3 to 4%) and we compared the results."
                    },
                    {
                        "id": 111,
                        "string": "The last value gave more significant results, as it reduced noise, while still allowing to detect meaningful pairs of similar papers."
                    },
                    {
                        "id": 112,
                        "string": "After running the first trials, we discovered that using the Jaccard distance resulted in considering as similar a set of two papers, one of them being of small content."
                    },
                    {
                        "id": 113,
                        "string": "This may be the case for invited talks, for example, when the author only provide a short abstract."
                    },
                    {
                        "id": 114,
                        "string": "In this case, a simple acknowledgement to the same institution may produce a similarity score higher than the threshold."
                    },
                    {
                        "id": 115,
                        "string": "The same happens for some eldest papers when the OCR produced a truncated document."
                    },
                    {
                        "id": 116,
                        "string": "In order to solve this problem, we added a second threshold on the minimum number of shared windows that we set at 50 after considering the corresponding erroneous cases."
                    },
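The full pairwise test tuned in this section (7-token windows, Jaccard distance, a 0.04 score threshold and a minimum of 50 shared windows) can be sketched as follows; this is a simplified illustration under those stated parameters, not the authors' implementation:

```python
# Sketch of the tuned similarity test: 7-token windows, Jaccard
# distance with a 0.04 threshold, plus a second threshold of at least
# 50 shared windows to filter out very short documents.

def windows(tokens, size=7):
    """Set of contiguous token n-grams ('windows') of the given size."""
    return {tuple(tokens[i:i + size]) for i in range(len(tokens) - size + 1)}

def jaccard(w1, w2):
    """Jaccard similarity between two window sets."""
    return len(w1 & w2) / len(w1 | w2) if (w1 | w2) else 0.0

def containment(w1, w2):
    """Faster alternative mentioned above [Clough et al 2009]."""
    m = min(len(w1), len(w2))
    return len(w1 & w2) / m if m else 0.0

def is_similar(tokens1, tokens2, threshold=0.04, min_shared=50):
    w1, w2 = windows(tokens1), windows(tokens2)
    # The min_shared test catches short texts (e.g. invited-talk
    # abstracts) whose Jaccard score passes the threshold by accident.
    return len(w1 & w2) >= min_shared and jaccard(w1, w2) >= threshold
```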
                    {
                        "id": 117,
                        "string": "Special considerations concerning authorship and citations As previously explained, our aim is to distinguish a copy & paste fragment associated with a citation compared to a fragment without any citation."
                    },
                    {
                        "id": 118,
                        "string": "To this end, we proceed with an approximation: we do not bind exactly the anchor in the text, but we parse the reference section and consider that, globally to the text, the document cites (or not) the other document."
                    },
                    {
                        "id": 119,
                        "string": "Due to the fact, that we have proper author identification for each document, the corpus forms a complex web of citations."
                    },
                    {
                        "id": 120,
                        "string": "We are thus able to distinguish self-reuse versus self-plagiarism and reuse versus plagiarism."
                    },
                    {
                        "id": 121,
                        "string": "We are in a situation slightly different from METER where the references are not Precision about the anteriority test Given the fact that some papers and drafts of papers can circulate among researchers before the official published date, it is impossible to verify exactly when a document is issued; moreover we do not have any more detailed time indication than the year, as we don't know the precise date of submission."
                    },
                    {
                        "id": 122,
                        "string": "This is why we also consider the same year within the comparisons."
                    },
                    {
                        "id": 123,
                        "string": "In this case, it is difficult to determine which are the borrowing and borrowed papers, and in some cases they may even have been written simultaneously."
                    },
                    {
                        "id": 124,
                        "string": "However, if one paper cites a second one, while it is not cited by the second one, it may serve as a sign to consider it as being the borrowing paper."
                    },
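The resulting four-way categorization (shared author or not, source quoted or not) reduces to a simple decision rule; a minimal sketch with illustrative names:

```python
# Sketch of the four categories used below: a pair of similar papers is
# classified by whether the two papers share at least one author and
# whether the borrowing paper references the source paper (approximated
# by parsing the reference section, as explained above).

def categorize(authors_a: set, authors_b: set, cites_source: bool) -> str:
    if authors_a & authors_b:
        return "self-reuse" if cites_source else "self-plagiarism"
    return "reuse" if cites_source else "plagiarism"

# categorize({"A. Smith"}, {"A. Smith", "B. Jones"}, False) -> "self-plagiarism"
```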
                    {
                        "id": 125,
                        "string": "Resulting files The program computes a detailed result for each individual source as an HTML page where all similar pairs of documents are listed with their similarity score, with the common fragments displayed as red highlighted snippets and HTML links back to the original 67,937 documents 15 ."
                    },
                    {
                        "id": 126,
                        "string": "For each of the 4 categories (Self-reuse, Self-Plagiarism, Reuse and Plagiarism), the program produces the list of couples of \"similar\" papers according to our criteria, with their similarity score, and the global results in the form of matrices displaying the number of papers 14 It takes 25 hours instead of 3 hours on a mid-range mono-processor Xeon E3-1270 V2 with 32G of RAM."
                    },
                    {
                        "id": 127,
                        "string": "15 But the space limitations do not allow to present these results in lengthy details."
                    },
                    {
                        "id": 128,
                        "string": "Furthermore, we do not want to display personal results."
                    },
                    {
                        "id": 129,
                        "string": "that are similar in each couple of the 34 sources, in the forward and backward directions (the using sources are on the X axis, while the used sources are on the Y axis)."
                    },
                    {
                        "id": 130,
                        "string": "The total of used and using papers, and the difference between those totals, are presented, while the 7 (Table 3) or 5 (Table 4 ) top using or used sources are indicated in green."
                    },
                    {
                        "id": 131,
                        "string": "We conducted a manual checking of the couples of papers showing a very high similarity: the 14 couples that showed a similarity of 1 were the duplication of a paper due to an error in editing the proceedings of a conference."
                    },
                    {
                        "id": 132,
                        "string": "We also found after those first trials erroneous results of the OCR for some eldest papers which resulted in files containing several papers, in full or in fragments, or where blanks were inserted after each individual character."
                    },
                    {
                        "id": 133,
                        "string": "We excluded those 86 documents from the corpus being considered."
                    },
                    {
                        "id": 134,
                        "string": "Checking those results, we also mentioned several cases where the author was the same, but with a different spelling, or where references were properly quoted, but with a different wording, a different spelling (American English versus British English, for example) or an improper reference to the source."
                    },
                    {
                        "id": 135,
                        "string": "We had to manually correct those cases, and move the corresponding couples of papers in the correct category (from reuse or plagiarism to self-reuse or self-plagiarism in the case of authors names, from plagiarism to reuse, in the case of references)."
                    },
                    {
                        "id": 136,
                        "string": "Table 3 provides the results of merging self-reuse (authors reusing their own text while quoting the source paper) and self-plagiarism (authors reusing their own text without quoting the source paper)."
                    },
                    {
                        "id": 137,
                        "string": "As we see, it is a rather frequent phenomenon, with a total of 12,493 documents (i.e."
                    },
                    {
                        "id": 138,
                        "string": "18% of the 67,937 documents!)."
                    },
                    {
                        "id": 139,
                        "string": "In 61% of the cases (7,650 self-plagiarisms over 12,493), the authors do not quote the source paper."
                    },
                    {
                        "id": 140,
                        "string": "We found that 205 papers have exactly the same title, and that 130 papers have both the same title and the same list of authors!"
                    },
                    {
                        "id": 141,
                        "string": "Also 3,560 papers have exactly the same list of authors."
                    },
                    {
                        "id": 142,
                        "string": "Given the large number of documents, it is impossible to conduct a manual checking of all the couples."
                    },
                    {
                        "id": 143,
                        "string": "We see that the most used sources are the large conferences: ISCA, IEEE-ICASSP, ACL, COLING, HLT, EMNLP and LREC."
                    },
                    {
                        "id": 144,
                        "string": "The most using sources are not only those large conferences, but also the journals: IEEE-Transactions on Acoustics, Speech and Language Processing (and its various avatars) (TASLP), Computer Speech and Language (CSAL), Computational Linguistics (CL) and Speech Com."
                    },
                    {
                        "id": 145,
                        "string": "If we consider the balance between the using and the used sources, we clearly see that the flow of papers goes from conferences to journals."
                    },
                    {
                        "id": 146,
                        "string": "The largest flows of self-reuse and self-plagiarism concern ISCA and ICASSP, in both directions, but especially from ISCA to ICASSP, ICASSP and ISCA to TASLP (also in the reverse direction) and to CSAL, ISCA to Speech Com, ACL to Computational Linguistics, ISCA to LREC and EMNLP to ACL."
                    },
                    {
                        "id": 147,
                        "string": "If we want to study the influence a given conference (or journal) has on another one, we must however recall that these figures are raw figures in terms of number of documents, and we must not forget that some conferences (or journals) are much bigger than others."
                    },
                    {
                        "id": 148,
                        "string": "For instance, LREC is a conference with more than 4,500 documents compared to LRE which is a journal with only 308 documents."
                    },
                    {
                        "id": 149,
                        "string": "If we relate the number of published papers that reuse another paper to the total number of published papers, we may see that 17% of the LRE papers (52 over 308) use content coming from the LREC conferences, without quoting them in 66% of the cases."
                    },
                    {
                        "id": 150,
                        "string": "Also the frequency of the conferences (annual or biennial) and the calendar (date of the conference and of the submission deadline) may influence the flow of papers between the sources."
                    },
                    {
                        "id": 151,
                        "string": "The similarity scores range from 4% to 97% (Fig."
                    },
                    {
                        "id": 152,
                        "string": "1) ."
                    },
                    {
                        "id": 153,
                        "string": "We see that about 4,500 couples of papers have a similarity score equal or superior to 10%; about 900 (1.3% of the total number of papers) have a score superior or equal to 30%."
                    },
                    {
                        "id": 154,
                        "string": "Looking at the ones with the largest similarity score, we found a few examples of important variants in the spelling of the same authors' names, and cases of republishing the corrigendum of a previously published paper or of republishing a paper with a small difference in the title and one missing author in the authors' list."
                    },
                    {
                        "id": 155,
                        "string": "In one case, the same research center is described by the same author in two different conferences with an overlapping of 90%."
                    },
                    {
                        "id": 156,
                        "string": "In another case, the difference of the two papers is primarily in the name of the systems being presented, funded by the same project agency in two different contracts, while the description has a 45% overlap!"
                    },
                    {
                        "id": 157,
                        "string": "1  501  1001  1501  2001  2501  3001  3501  4001  4501  5001  5501  6001  6501  7001  7501  8001  8501  9001  9501  10001  10501  11001  Used   Using  acl  acmtslp  alta  anlp  cath  cl  coling  conll  csal  eacl  emnlp  hlt  icassps  ijcnlp  inlg  isca  jep  lre  lrec  ltc  modulad  mts  muc  naacl  paclic  ranlp  sem  speechc  tacl  tal  taln  taslp  tipster  trec  Total used  Total using  Difference   acl  22  8  1  4  8 136 78 25 31 22 83 85  29 31  7  48  0 20 71  4  0 19  1 51  8  5 26  1  2  0  0  24  4  9  863  625  238 Table 3 ."
                    },
                    {
                        "id": 158,
                        "string": "Self-reuse and Self-Plagiarism Matrix, with indication of the 7 most using and used sources."
                    },
                    {
                        "id": 159,
                        "string": "Self-reuse and Self-Plagiarism Used Using  acl  acmtslp  alta  anlp  cath  cl  coling  conll  csal  eacl  emnlp  hlt  icassps  ijcnlp  inlg  isca  jep  lre  lrec  ltc  modulad  mts  muc  naacl  paclic  ranlp  sem  speechc  tacl  tal  taln  taslp  tipster  trec  Total used  Total using  Difference   acl  1  0  0  0  1  1  2  2  0  0  4  3  0  3 Table 4 provides the results of merging reuse (authors reusing fragments of the texts of other authors while quoting the source paper) and plagiarism (authors reusing fragments of the texts of other authors without quoting the source paper)."
                    },
                    {
                        "id": 160,
                        "string": "As we see, there are very few cases altogether."
                    },
                    {
                        "id": 161,
                        "string": "Only 261 papers (i.e."
                    },
                    {
                        "id": 162,
                        "string": "less than 0.4% of the 67,937 documents) reuse a fragment of papers written by other authors that they quote."
                    },
                    {
                        "id": 163,
                        "string": "In 60% of the cases (156 plagiarisms over 261), the authors do not quote the source paper, but these possible cases of plagiarism only represent 0.23% of the total number of papers."
                    },
                    {
                        "id": 164,
                        "string": "Given those small numbers, we were able to conduct a manual checking of those couples."
                    },
                    {
                        "id": 165,
                        "string": "Among the couple papers placed in the \"Reuse\" category, it appeared that 12 have a least one author in common, but with a somehow different spelling and should therefore be placed in the \"Self-reuse\" category."
                    },
                    {
                        "id": 166,
                        "string": "Among the couples of papers placed in the \"Plagiarism\" category, 25 have a least one author in common, but with a somehow different spelling and should therefore be placed in the \"Self-plagiarism\" category and 14 correctly quote the source paper, but with variants in the spelling of the authors' names, of the paper's title or of the conference or journal source or forgetting to place the source paper in the references and should therefore be placed in the \"Reuse\" category."
                    },
                    {
                        "id": 167,
                        "string": "It therefore resulted in 107 cases of \"reuse\" and 117 possible cases of plagiarism (0.17% of the papers) that we studied more closely."
                    },
                    {
                        "id": 168,
                        "string": "We found the following explanations: The paper cites another reference from the same authors of the source paper (typically a previous reference, or a paper published in a Journal) (46 cases) Both papers use extracts of a third paper that they both cite (31 cases) The authors of the two papers are different, but from the same laboratory (typically in industrial laboratories or funding agencies) (11 cases) The authors previously co-authored papers (typically as supervisor and PhD student or postdoc) but are now in a different laboratory (11 cases) The authors of the papers are different, but collaborated in the same project which is presented in the two papers (2 cases) The two papers present the same short example, result or definition coming from another source (13 cases) If we exclude those cases, only 3 cases of possible plagiarism remain that correspond to the same paper which appears as a patchwork of 3 other papers, while sharing several references with them."
                    },
                    {
                        "id": 169,
                        "string": "The similarity scores range from 4% to 42% (Fig."
                    },
                    {
                        "id": 170,
                        "string": "2) ."
                    },
                    {
                        "id": 171,
                        "string": "Only 34 couples of papers have a similarity score equal or higher than 10%."
                    },
                    {
                        "id": 172,
                        "string": "For example, the couple showing the highest similarity score comprises a paper published in 1998 and a paper published in 2000 which both describe Chart parsing using the words of the initial paper published 20 years earlier in 1980, that they both properly quote."
                    },
                    {
                        "id": 173,
                        "string": "Among the three remaining possible cases of plagiarism, the highest similarity score is 10%, with a shared window of 200 tokens."
                    },
                    {
                        "id": 174,
                        "string": "Fig."
                    },
                    {
                        "id": 175,
                        "string": "2 Similarity scores of the couples detected as reuse / plagiarism Time delay between publication and reuse We now consider the duration between the publication of a paper and its reuse (in all 4 categories) in another publication."
                    },
                    {
                        "id": 176,
                        "string": "It appears that 38% of the similar papers were published on the same year, 71% within the next year, 83% over 2 years and 93% over 3 years (Figure 3 and 4 )."
                    },
                    {
                        "id": 177,
                        "string": "Only 7% reuse material from an earlier period."
                    },
                    {
                        "id": 178,
                        "string": "The average duration is 1.22 years."
                    },
                    {
                        "id": 179,
                        "string": "30% of the similar papers published on the same year concern the couple of conferences ISCA-ICASSP."
                    },
                    {
                        "id": 180,
                        "string": "We now consider the reuse of conference papers in journal papers ( Figures 5 and 6 )."
                    },
                    {
                        "id": 181,
                        "string": "We observe here a similar time schedule, with a delay of one year: 12% of the reused papers were published on the same year, 41% within the next year, 68% over 2 years, 85% over 3 years and 93% over 4 years."
                    },
                    {
                        "id": 182,
                        "string": "Only 7% reuse material from an earlier period."
                    },
                    {
                        "id": 183,
                        "string": "The average duration is 2.07 years."
                    },
                    {
                        "id": 184,
                        "string": "Discussion The first obvious ascertainment is that self-reusing is much more important than reusing the content of others."
                    },
                    {
                        "id": 185,
                        "string": "With a comparable threshold of 0.04, when we consider the total of the two directions, there are 4843 self-reuse and 7650 self-plagiarism detected pairs, compared with 105 reuse and 156 plagiarism detected pairs."
                    },
                    {
                        "id": 186,
                        "string": "Globally, the source papers are quoted only in 39% of the cases on average, a percentage which falls down from 39% to 23% if the papers are published on the same year."
                    },
                    {
                        "id": 187,
                        "string": "Plagiarism may raise legal issues if it violates copyright, but the right to quote 16 exists in certain conditions: \"National legislations usually embody the Berne convention limits in one or more of the following requirements: the cited paragraphs are within a reasonable limit, clearly marked as quotations and fully referenced, the resulting new work is not just a collection of quotations, but constitutes a fully original work in itself\", we could also add that the cited paragraph must have a function in the goal of the citing paper."
                    },
                    {
                        "id": 188,
                        "string": "Obviously, most of the cases reported in this paper comply with the right to quote."
                    },
                    {
                        "id": 189,
                        "string": "The limits of the cited paragraph vary from country to country."
                    },
                    {
                        "id": 190,
                        "string": "In France and Canada, for example, a limit of 10% of both the copying and copied texts seems to be acceptable."
                    },
                    {
                        "id": 191,
                        "string": "As we've seen, we stay within those limits in all cases in NLP4NLP."
                    },
                    {
                        "id": 192,
                        "string": "Self-reuse and self-plagiarism are of a different nature."
                    },
                    {
                        "id": 193,
                        "string": "Let's recall that they concern papers that have at least one author in common."
                    },
                    {
                        "id": 194,
                        "string": "Of course, a copy & paste operation is easy and frequent but there is another phenomena to take into account which is difficult to distinguish from copy & paste: this is the style of the author."
                    },
                    {
                        "id": 195,
                        "string": "Everybody has habits to formulate its ideas, and, even on a long period, most authors seem to keep the same chunks of prepared words."
                    },
                    {
                        "id": 196,
                        "string": "As we've seen, almost 40% of the cases concern papers that are published on the same year: authors submit two similar papers at two different conferences on the same year, and publish the two papers in both conferences if both are accepted."
                    },
                    {
                        "id": 197,
                        "string": "It is very difficult to prevent those cases as none of the papers are published when the other is submitted."
                    },
                    {
                        "id": 198,
                        "string": "Another frequent case is the publication of a paper in a journal after its publication in a conference."
                    },
                    {
                        "id": 199,
                        "string": "Here also, it is a natural and usual process, sometimes even encouraged by the journal editors after a pre-selection of the best papers in a conference."
                    },
                    {
                        "id": 200,
                        "string": "As a tentative to moderate these figures and to justify self-reuse and self-plagiarism of previously published material, it is worth quoting Pamela Samuelson [Samuelson 1994]: The previous work must be restated to lay the groundwork for a new contribution in the second work, Portions of the previous work must be repeated to deal with new evidence or arguments, The audience for each work is so different that publishing the same work in different places is necessary to get the message out, The authors think they said it so well the first time that it makes no sense to say it differently a second time."
                    },
                    {
                        "id": 201,
                        "string": "She considers that 30% is an upper limit in the reuse of parts of a previously published paper."
                    },
                    {
                        "id": 202,
                        "string": "We believe that following these two sets of principles regarding (self) reuse and plagiarism will help maintaining an ethical behavior in our community."
                    },
                    {
                        "id": 203,
                        "string": "Further developments A limitation of our approach is that it fails to identify copy & paste when the original text has been strongly altered."
                    },
                    {
                        "id": 204,
                        "string": "Our study of graphical variations of a common meaning is presently limited to geographical variants, technical abbreviations (e.g."
                    },
                    {
                        "id": 205,
                        "string": "HMM versus Hidden Markov Model) and resource names aliases from the LRE Map."
                    },
                    {
                        "id": 206,
                        "string": "We plan to deal with \"rogeting\" which is the practice of replacing words with supposedly synonymous alternatives in order to disguise plagiarism 17 Another direction of improvement is to isolate and ignore tables in order to reduce noise, but this is a complex task as documented in [Frey et al 2015]."
                    },
                    {
                        "id": 207,
                        "string": "Let's note that this is not a big problem in our approach, as we ignore sentences without any verb and as verbs are not very frequent within a table."
                    },
                    {
                        "id": 208,
                        "string": "More generally, we could also study the position and rhetorical structure of the copy & paste in order to identify and justify their function."
                    },
                    {
                        "id": 209,
                        "string": "We may finally explore whether copy & paste is more common for non native English speakers, given that it is frequent that they publish first in their native language at a national conference and then in English in an international conference or an international journal, in order to broaden their audience."
                    },
                    {
                        "id": 210,
                        "string": "Conclusions To our knowledge, this paper is the first which reports results on the study of copy & paste operations on corpora of NLP archives of this size."
                    },
                    {
                        "id": 211,
                        "string": "Based on a simple method of n-gram comparison after text processing using NLP, this method is easy to implement."
                    },
                    {
                        "id": 212,
                        "string": "Of course, this process makes a large number of pairwise comparisons (65,000*65,000), which still represents a practical computing limitation."
                    },
                    {
                        "id": 213,
                        "string": "As our measures show, self-reuse and self-plagiarism are common practices."
                    },
                    {
                        "id": 214,
                        "string": "This is not specific to our field and is certainly related to the current tendency which is called \"salami-slicing\" publication caused by the publishand-perish demand 18 ."
                    },
                    {
                        "id": 215,
                        "string": "But we gladly notice that plagiarism is very uncommon in our community."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1.",
                        "start": 0,
                        "end": 2
                    },
                    {
                        "section": "Context",
                        "n": "2.",
                        "start": 3,
                        "end": 8
                    },
                    {
                        "section": "Objectives",
                        "n": "3.",
                        "start": 9,
                        "end": 12
                    },
                    {
                        "section": "The corpus: NLP4NLP",
                        "n": "4.",
                        "start": 13,
                        "end": 62
                    },
                    {
                        "section": "Definitions",
                        "n": "5.",
                        "start": 63,
                        "end": 69
                    },
                    {
                        "section": "Directions",
                        "n": "6.",
                        "start": 70,
                        "end": 73
                    },
                    {
                        "section": "Algorithm",
                        "n": "7.",
                        "start": 74,
                        "end": 80
                    },
                    {
                        "section": "Algorithm comments and evaluation",
                        "n": "8.",
                        "start": 81,
                        "end": 98
                    },
                    {
                        "section": "Tuning parameters",
                        "n": "9.",
                        "start": 99,
                        "end": 116
                    },
                    {
                        "section": "Special considerations concerning authorship and citations",
                        "n": "10.",
                        "start": 117,
                        "end": 120
                    },
                    {
                        "section": "Precision about the anteriority test",
                        "n": "11.",
                        "start": 121,
                        "end": 124
                    },
                    {
                        "section": "Resulting files",
                        "n": "12.",
                        "start": 125,
                        "end": 158
                    },
                    {
                        "section": "Self-reuse and Self-Plagiarism",
                        "n": "13.",
                        "start": 159,
                        "end": 174
                    },
                    {
                        "section": "Time delay between publication and reuse",
                        "n": "15.",
                        "start": 175,
                        "end": 183
                    },
                    {
                        "section": "Discussion",
                        "n": "16.",
                        "start": 184,
                        "end": 202
                    },
                    {
                        "section": "Further developments",
                        "n": "17.",
                        "start": 203,
                        "end": 209
                    },
                    {
                        "section": "Conclusions",
                        "n": "18.",
                        "start": 210,
                        "end": 215
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1246-Figure1-1.png",
                        "caption": "Fig. 1 Similarity scores of the couples detected as self-reuse / self-plagiarism",
                        "page": 5,
                        "bbox": {
                            "x1": 191.51999999999998,
                            "x2": 400.32,
                            "y1": 636.0,
                            "y2": 755.04
                        }
                    },
                    {
                        "filename": "../figure/image/1246-Table1-1.png",
                        "caption": "Table 1. Detail of NLP4NLP, with the convention that an asterisk indicates that the corpus is in the ACL Anthology.",
                        "page": 1,
                        "bbox": {
                            "x1": 68.64,
                            "x2": 526.0799999999999,
                            "y1": 112.8,
                            "y2": 504.47999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1246-Table3-1.png",
                        "caption": "Table 3. Self-reuse and Self-Plagiarism Matrix, with indication of the 7 most using and used sources.",
                        "page": 6,
                        "bbox": {
                            "x1": 55.68,
                            "x2": 774.24,
                            "y1": 70.08,
                            "y2": 438.24
                        }
                    },
                    {
                        "filename": "../figure/image/1246-Table4-1.png",
                        "caption": "Table 4. Reuse and Plagiarism Matrix, with indication of the 5 most using and used sources",
                        "page": 7,
                        "bbox": {
                            "x1": 66.72,
                            "x2": 774.24,
                            "y1": 82.08,
                            "y2": 443.52
                        }
                    },
                    {
                        "filename": "../figure/image/1246-Table2-1.png",
                        "caption": "Table 2. Comparison of the two strategies on the LREC corpus",
                        "page": 3,
                        "bbox": {
                            "x1": 172.79999999999998,
                            "x2": 422.4,
                            "y1": 612.9599999999999,
                            "y2": 670.0799999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1246-Figure2-1.png",
                        "caption": "Fig. 2 Similarity scores of the couples detected as reuse / plagiarism",
                        "page": 8,
                        "bbox": {
                            "x1": 149.76,
                            "x2": 442.08,
                            "y1": 504.0,
                            "y2": 701.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-58"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation",
                    "text": [
                        "Processing long, complex sentences is hard!",
                        "Children, people with reading disabilities, L2 learners",
                        "Sentence level NLP systems:",
                        "Koehn & Knowles, 2017 Can we automatically break a complex sentence into several simple ones while preserving its meaning?"
                    ],
                    "page_nums": [
                        1,
                        2,
                        3,
                        4,
                        5,
                        6,
                        7
                    ],
                    "images": []
                },
                "1": {
                    "title": "The Split and Rephrase Task",
                    "text": [
                        "Narayan, Gardent, Cohen & Shimorina, EMNLP 2017",
                        "Dataset, evaluation method, baseline models",
                        "Task definition: complex sentence -> several simple sentences with the same meaning",
                        "Alan Bean joined NASA in 1963 where he became a member of the Apollo 12 mission along with Alfred Worden as back up pilot and David Scott as commander .",
                        "Alan Bean served as a crew member of Apollo 12 . Alfred Worden was the backup pilot of Apollo 12 . Apollo 12 was commanded by David Scott . Alan Bean was selected by Nasa in 1963",
                        "Requires (a) identifying independent semantic units (b) rephrasing those units to single sentences"
                    ],
                    "page_nums": [
                        8,
                        9,
                        10,
                        11,
                        12,
                        13,
                        14,
                        15
                    ],
                    "images": []
                },
                "2": {
                    "title": "This Work",
                    "text": [
                        "We show that simple neural models seem to perform very on the original benchmark due to memorization of the training set",
                        "We propose a more challenging data split for the task to discourage memorization",
                        "We perform automatic evaluation and error analysis on the new benchmark, showing that the task is still far from being solved"
                    ],
                    "page_nums": [
                        16,
                        17,
                        18,
                        19
                    ],
                    "images": []
                },
                "3": {
                    "title": "WebSplit Dataset Construction",
                    "text": [
                        "<Alan_Bean | nationality | United_States, Alan_Bean | mission | Apollo_12, Alan_Bean | NASA selection | 1963> A lan Bean, born in the United St tes, was selected Alan Bean, born in the United States, was selected Alan Bean, born in the United States, was selected by NASA in 1963 and served as a crew member of by NASA in 1963 and served as a crew member o f by NASA in 1963 and served Apollo as 12. a crew member of Apollo 12. Apollo 12.",
                        "(facts from DBpedia) Simple Sentences",
                        "A lan Bean is a US national. A lan Bean is a US national. Alan Bean is a US national.",
                        "<Alan_Bean | mission | Apollo_12> Alan Bean was on the crew of Apollo 12. A lan Bean was on the crew of Apollo 12. Alan Bean was on the crew of Apollo 12.",
                        "<Alan_Bean | NASA selection | 1963> A lan Bean was hired by NASA in 1963. Alan Bean was hired by NASA in 1963. Alan Bean was hired by NASA in 1963.",
                        "Sets of RDF triples Complex Sentences",
                        "Matching via RDFs ~1M examples"
                    ],
                    "page_nums": [
                        20,
                        21,
                        22,
                        23,
                        24,
                        25,
                        26
                    ],
                    "images": []
                },
                "5": {
                    "title": "Preliminary Results",
                    "text": [
                        "Our simple seq2seq baseline outperform all but one of the baselines from",
                        "seq2seq (ours) seq2seq split-multi hybrid multi-seq2seq split-seq2seq",
                        "Text Only Text + RDFs",
                        "Their best baselines were using the RDF structures as additional information",
                        "Do the simple seq2seq model really performs so well? seq2seq (ours) seq2seq split-multi"
                    ],
                    "page_nums": [
                        33,
                        34,
                        35,
                        36
                    ],
                    "images": []
                },
                "6": {
                    "title": "BLEU can be Misleading",
                    "text": [
                        "In spite of the high BLEU scores, our neural models suffer from:",
                        "Missing facts - appeared in the input but not in the output",
                        "Unsupported facts - appeared in the output but not in the input",
                        "Repeated facts - appeared several times in the output"
                    ],
                    "page_nums": [
                        37,
                        38,
                        39,
                        40,
                        41
                    ],
                    "images": [
                        "figure/image/1248-Table3-1.png"
                    ]
                },
                "7": {
                    "title": "A Closer Look",
                    "text": [
                        "Visualizing the attention weights we find an unexpected pattern",
                        "The network mainly attends to a single token instead of spreading the attention",
                        "This token was usually a part of the first mentioned entity",
                        "Consistent among different input examples"
                    ],
                    "page_nums": [
                        42,
                        43,
                        44,
                        45,
                        46,
                        47,
                        48
                    ],
                    "images": [
                        "figure/image/1248-Figure1-1.png"
                    ]
                },
                "8": {
                    "title": "Testing for Over Memorization",
                    "text": [
                        "In this stage we suspect that the network heavily memorizes entity-fact pairs",
                        "We test this by introducing it with inputs consisting of repeated entities alone",
                        "The network indeed generates facts it memorized about those specific entities"
                    ],
                    "page_nums": [
                        49,
                        50,
                        51,
                        52,
                        53
                    ],
                    "images": []
                },
                "9": {
                    "title": "Searching for the Cause Dataset Artifacts",
                    "text": [
                        "The original dataset included overlap between the training/development/test sets",
                        "When looking at the complex sentences side, there is no overlap",
                        "On the other hand, most of the simple sentences did overlap (~90%)",
                        "Dev Dev Complex Simple",
                        "Complex Train Simple target",
                        "Test Test Complex Simple",
                        "Makes memorization very effective leakage from train on the target side"
                    ],
                    "page_nums": [
                        54,
                        55,
                        56,
                        57,
                        58
                    ],
                    "images": []
                },
                "10": {
                    "title": "New Data Split",
                    "text": [
                        "To remedy this, we construct a new data split by using the RDF information:",
                        "Ensuring that all RDF relation types appear in the training set (enable generalization)",
                        "Ensuring that no RDF triple (fact) appears in two different sets (reduce memorization)",
                        "The resulting dataset has no overlapping simple sentences",
                        "Original Split New Split unique dev simple sentences in train unique test simple sentences in train",
                        "% dev vocabulary in train",
                        "% test vocabulary in train",
                        "Has more unknown symbols in dev/test need better models!"
                    ],
                    "page_nums": [
                        59,
                        60,
                        61,
                        62,
                        63,
                        64
                    ],
                    "images": []
                },
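A sketch of one way to build a split satisfying the two constraints above, assuming each example carries its set of RDF triples; the greedy assignment policy and all names are assumptions for illustration, not the authors' exact procedure:

```python
import random

# Illustrative construction of a split where every RDF relation type
# occurs in train and no RDF triple is shared across train/dev/test.
def split_by_rdf(examples, dev_frac=0.1, test_frac=0.1, seed=0):
    """examples: objects with a .triples set of (subject, relation, object)."""
    rng = random.Random(seed)
    triple_split, train_relations = {}, set()

    # 1. Assign each distinct triple to exactly one split; the first
    #    triple seen for each relation type is forced into train.
    for ex in examples:
        for t in sorted(ex.triples):
            if t in triple_split:
                continue
            if t[1] not in train_relations:
                triple_split[t] = "train"
                train_relations.add(t[1])
            else:
                r = rng.random()
                triple_split[t] = ("dev" if r < dev_frac else
                                   "test" if r < dev_frac + test_frac else
                                   "train")

    # 2. Keep an example only if all of its triples landed in the same
    #    split, so no fact leaks across splits.
    splits = {"train": [], "dev": [], "test": []}
    for ex in examples:
        owners = {triple_split[t] for t in ex.triples}
        if len(owners) == 1:
            splits[next(iter(owners))].append(ex)
    return splits
```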
                "11": {
                    "title": "Copy Mechanism",
                    "text": [
                        "To help with the increase in unknown words in the harder split, we incorporate a copy mechanism",
                        "Uses a copy switch - feed-forward NN component with a sigmoid-activated scalar output",
                        "Controls the interpolation of the softmax probabilities and the copy probabilities over the input tokens in each decoder step",
                        "copy switch attention weights (copy) 1 - copy switch softmax output"
                    ],
                    "page_nums": [
                        65,
                        66,
                        67,
                        68,
                        69
                    ],
                    "images": []
                },
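A minimal PyTorch-style sketch of this interpolation; the tensor shapes and the `switch_layer` module (a linear layer producing the scalar switch) are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

# Copy switch as described above: a sigmoid-activated scalar g computed
# from the decoder state interpolates between copying (attention weights
# scattered onto the vocabulary ids of the source tokens) and generating
# (softmax over the vocabulary).
# Assumed shapes: h (batch, hidden), logits (batch, vocab),
#                 attn (batch, src_len), src_ids (batch, src_len) int64.
def copy_interpolate(h, logits, attn, src_ids, switch_layer):
    g = torch.sigmoid(switch_layer(h))                   # (batch, 1)
    gen = F.softmax(logits, dim=-1)                      # (batch, vocab)
    copy = torch.zeros_like(gen).scatter_add(-1, src_ids, attn)
    return g * copy + (1.0 - g) * gen                    # final distribution

# switch_layer could be, e.g., torch.nn.Linear(hidden_size, 1).
```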
                "12": {
                    "title": "Results New Split",
                    "text": [
                        "Baseline seq2seq models completely break (BLEU < on the new split",
                        "original split new split",
                        "Copy mechanism helps to generalize",
                        "Much lower than the original benchmark - memorization was crucial for the high BLEU"
                    ],
                    "page_nums": [
                        70,
                        71,
                        72,
                        73
                    ],
                    "images": []
                },
                "14": {
                    "title": "Error Analysis",
                    "text": [
                        "On the original split the models did very well (due to memorization) with up to 91% correct simple sentences correct missing repeated unsupported",
                        "original split new split",
                        "On the new benchmark the best model got only up to",
                        "The task is much more challenging then previously demonstrated"
                    ],
                    "page_nums": [
                        76,
                        77,
                        78,
                        79
                    ],
                    "images": []
                },
                "16": {
                    "title": "More Broadly",
                    "text": [
                        "Creating datasets is hard!",
                        "Think how models can cheat\"",
                        "Create a challenging evaluation environment to capture generalization",
                        "Look for leakage of train to dev/test",
                        "Numbers can be misleading!",
                        "Look at the data",
                        "Look at the model"
                    ],
                    "page_nums": [
                        85,
                        86,
                        87,
                        88,
                        89,
                        90,
                        91,
                        92,
                        93
                    ],
                    "images": []
                }
            },
            "paper_title": "Split and Rephrase: Better Evaluation and a Stronger Baseline",
            "paper_id": "1248",
            "paper": {
                "title": "Split and Rephrase: Better Evaluation and a Stronger Baseline",
                "abstract": "Splitting and rephrasing a complex sentence into several shorter sentences that convey the same meaning is a challenging problem in NLP. We show that while vanilla seq2seq models can reach high scores on the proposed benchmark (Narayan et al., 2017), they suffer from memorization of the training set which contains more than 89% of the unique simple sentences from the validation and test sets. To aid this, we present a new train-development-test data split and neural models augmented with a copymechanism, outperforming the best reported baseline by 8.68 BLEU and fostering further progress on the task.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Processing long, complex sentences is challenging."
                    },
                    {
                        "id": 1,
                        "string": "This is true either for humans in various circumstances (Inui et al., 2003; Watanabe et al., 2009; De Belder and Moens, 2010) or in NLP tasks like parsing (Tomita, 1986; McDonald and Nivre, 2011; Jelínek, 2014) and machine translation (Chandrasekar et al., 1996; Pouget-Abadie et al., 2014; Koehn and Knowles, 2017 )."
                    },
                    {
                        "id": 2,
                        "string": "An automatic system capable of breaking a complex sentence into several simple sentences that convey the same meaning is very appealing."
                    },
                    {
                        "id": 3,
                        "string": "A recent work by Narayan et al."
                    },
                    {
                        "id": 4,
                        "string": "(2017) introduced a dataset, evaluation method and baseline systems for the task, naming it \"Split-and-Rephrase\"."
                    },
                    {
                        "id": 5,
                        "string": "The dataset includes 1,066,115 instances mapping a single complex sentence to a sequence of sentences that express the same meaning, together with RDF triples that describe their semantics."
                    },
                    {
                        "id": 6,
                        "string": "They considered two system setups: a text-to-text setup that does not use the accompany-ing RDF information, and a semantics-augmented setup that does."
                    },
                    {
                        "id": 7,
                        "string": "They report a BLEU score of 48.9 for their best text-to-text system, and of 78.7 for the best RDF-aware one."
                    },
                    {
                        "id": 8,
                        "string": "We focus on the text-totext setup, which we find to be more challenging and more natural."
                    },
                    {
                        "id": 9,
                        "string": "We begin with vanilla SEQ2SEQ models with attention (Bahdanau et al., 2015) and reach an accuracy of 77.5 BLEU, substantially outperforming the text-to-text baseline of Narayan et al."
                    },
                    {
                        "id": 10,
                        "string": "(2017) and approaching their best RDF-aware method."
                    },
                    {
                        "id": 11,
                        "string": "However, manual inspection reveal many cases of unwanted behaviors in the resulting outputs: (1) many resulting sentences are unsupported by the input: they contain correct facts about relevant entities, but these facts were not mentioned in the input sentence; (2) some facts are repeated-the same fact is mentioned in multiple output sentences; and (3) some facts are missingmentioned in the input but omitted in the output."
                    },
                    {
                        "id": 12,
                        "string": "The model learned to memorize entity-fact pairs instead of learning to split and rephrase."
                    },
                    {
                        "id": 13,
                        "string": "Indeed, feeding the model with examples containing entities alone without any facts about them causes it to output perfectly phrased but unsupported facts (Table 3) ."
                    },
                    {
                        "id": 14,
                        "string": "Digging further, we find that 99% of the simple sentences (more than 89% of the unique ones) in the validation and test sets also appear in the training set, which-coupled with the good memorization capabilities of SEQ2SEQ models and the relatively small number of distinct simple sentences-helps to explain the high BLEU score."
                    },
                    {
                        "id": 15,
                        "string": "To aid further research on the task, we propose a more challenging split of the data."
                    },
                    {
                        "id": 16,
                        "string": "We also establish a stronger baseline by extending the SEQ2SEQ approach with a copy mechanism, which was shown to be helpful in similar tasks (Gu et al., 2016; Merity et al., 2017; See et al., 2017) ."
                    },
                    {
                        "id": 17,
                        "string": "On the original split, our models outperform the count  unique  RDF entities  32,186  925  RDF relations  16,093  172  complex sentences  1,066,115 5,544  simple sentences  5,320,716 9,552  train complex sentences 886,857  4,438  train simple sentences  4,451,959 8,840  dev complex sentences  97,950  554  dev simple sentences  475,337  3,765  test complex best baseline of Narayan et al."
                    },
                    {
                        "id": 18,
                        "string": "(2017) by up to 8.68 BLEU, without using the RDF triples."
                    },
                    {
                        "id": 19,
                        "string": "On the new split, the vanilla SEQ2SEQ models break completely, while the copy-augmented models perform better."
                    },
                    {
                        "id": 20,
                        "string": "In parallel to our work, an updated version of the dataset was released (v1.0), which is larger and features a train/test split protocol which is similar to our proposal."
                    },
                    {
                        "id": 21,
                        "string": "We report results on this dataset as well."
                    },
                    {
                        "id": 22,
                        "string": "The code and data to reproduce our results are available on Github."
                    },
                    {
                        "id": 23,
                        "string": "1 We encourage future work on the split-and-rephrase task to use our new data split or the v1.0 split instead of the original one."
                    },
                    {
                        "id": 24,
                        "string": "Preliminary Experiments Task Definition In the split-and-rephrase task we are given a complex sentence C, and need to produce a sequence of simple sentences T 1 , ..., T n , n ≥ 2, such that the output sentences convey all and only the information in C. As additional supervision, the split-and-rephrase dataset associates each sentence with a set of RDF triples that describe the information in the sentence."
                    },
                    {
                        "id": 25,
                        "string": "Note that the number of simple sentences to generate is not given as part of the input."
                    },
                    {
                        "id": 26,
                        "string": "Experimental Details We focus on the task of splitting a complex sentence into several simple ones without access to the corresponding RDF triples in either train or test time."
                    },
                    {
                        "id": 27,
                        "string": "For evaluation we follow Narayan et al."
                    },
                    {
                        "id": 28,
                        "string": "(2017) and compute the averaged individual multi-reference BLEU score for each prediction."
                    },
                    {
                        "id": 29,
                        "string": "2 We split each prediction to 1 https://github.com/biu-nlp/ sprp-acl2018 2 Note that this differs from \"normal\" multi-reference BLEU (as implemented in multi-bleu.pl) since the number of references differs among the instances in the test-  sentences 3 and report the average number of simple sentences in each prediction, and the average number of tokens for each simple sentence."
                    },
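A minimal sketch of this evaluation protocol, assuming tokenized predictions and per-instance reference lists; the use of NLTK's sentence_bleu and the smoothing choice are assumptions, not the authors' exact scoring script:

```python
# Averaged individual multi-reference BLEU: each prediction is scored against
# the references of its own complex sentence (whose number may vary per
# instance), and the per-instance scores are averaged.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def averaged_multi_ref_bleu(predictions, references_per_instance):
    # predictions: list of token lists; references_per_instance: list of
    # lists of token lists
    smooth = SmoothingFunction().method1  # guard against zero n-gram overlaps
    scores = [
        sentence_bleu(refs, pred, smoothing_function=smooth)
        for pred, refs in zip(predictions, references_per_instance)
    ]
    return 100.0 * sum(scores) / len(scores)
```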
                    {
                        "id": 30,
                        "string": "We train vanilla sequence-to-sequence models with attention (Bahdanau et al., 2015) as implemented in the OPENNMT-PY toolkit (Klein et al., 2017) ."
                    },
                    {
                        "id": 31,
                        "string": "4 Our models only differ in the LSTM cell size (128, 256 and 512, respectively)."
                    },
                    {
                        "id": 32,
                        "string": "See the supplementary material for training details and hyperparameters."
                    },
                    {
                        "id": 33,
                        "string": "We compare our models to the baselines proposed in Narayan et al."
                    },
                    {
                        "id": 34,
                        "string": "(2017) ."
                    },
                    {
                        "id": 35,
                        "string": "HYBRIDSIMPL and SEQ2SEQ are text-to-text models, while the other reported baselines additionally use the RDF information."
                    },
                    {
                        "id": 36,
                        "string": "Results As shown in Table 2 , our 3 models obtain higher BLEU scores then the SEQ2SEQ baseline, with up to 28.35 BLEU improvement, despite being single-layer models vs. the 3-layer models used in Narayan et al."
                    },
                    {
                        "id": 37,
                        "string": "(2017) ."
                    },
                    {
                        "id": 38,
                        "string": "A possible explanation for this discrepancy is the SEQ2SEQ baseline using a dropout rate of 0.8, while we use 0.3 and only apply it on the LSTM outputs."
                    },
                    {
                        "id": 39,
                        "string": "Our results are also better than the MUL-TISEQ2SEQ and SPLIT-MULTISEQ2SEQ models, which use explicit RDF information."
                    },
                    {
                        "id": 40,
                        "string": "We also present the macro-average 5 number of sim-   Analysis We begin analyzing the results by manually inspecting the model's predictions on the validation set."
                    },
                    {
                        "id": 41,
                        "string": "This reveals three common kinds of mistakes as demonstrated in Table 3 : unsupported facts, repetitions, and missing facts."
                    },
                    {
                        "id": 42,
                        "string": "All the unsupported facts seem to be related to entities mentioned in the source sentence."
                    },
                    {
                        "id": 43,
                        "string": "Inspecting the attention weights (Figure 1 ) reveals a worrying trend: throughout the prediction, the model focuses heavily on the first word in of the first entity (\"A wizard of Mars\") while paying little attention to other cues like \"hardcover\", \"Diane\" and references of a specific complex sentence, and then average these numbers."
                    },
                    {
                        "id": 44,
                        "string": "\"the ISBN number\"."
                    },
                    {
                        "id": 45,
                        "string": "This explains the abundance of \"hallucinated\" unsupported facts: rather than learning to split and rephrase, the model learned to identify entities, and spit out a list of facts it had memorized about them."
                    },
                    {
                        "id": 46,
                        "string": "To validate this assumption, we count the number of predicted sentences which appeared as-is in the training data."
                    },
                    {
                        "id": 47,
                        "string": "We find that 1645 out of the 1693 (97.16%) predicted sentences appear verbatim in the training set."
                    },
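This memorization check amounts to simple set membership against the training data; a sketch with illustrative file names:

```python
# Count how many predicted simple sentences appear verbatim in training data.
train_simple = set(line.strip() for line in open("train.simple"))
predicted = [line.strip() for line in open("predictions.txt")]

verbatim = sum(1 for sent in predicted if sent in train_simple)
print(f"{verbatim}/{len(predicted)} "
      f"({100.0 * verbatim / len(predicted):.2f}%) appear in training data")
```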
                    {
                        "id": 48,
                        "string": "Table  1 gives more detailed statistics on the WEBSPLIT dataset."
                    },
                    {
                        "id": 49,
                        "string": "To further illustrate the model's recognize-andspit strategy, we compose inputs containing an entity string which is duplicated three times, as shown in the bottom two rows of Table 3 ."
                    },
                    {
                        "id": 50,
                        "string": "As expected, the model predicted perfectly phrased and correct facts about the given entities, although these facts are clearly not supported by the input."
                    },
                    {
                        "id": 51,
                        "string": "New Data-split The original data-split is not suitable for measuring generalization, as it is susceptible to \"cheating\" by fact memorization."
                    },
                    {
                        "id": 52,
                        "string": "We construct a new train-development-test split to better reflect our expected behavior from a split-and-rephrase model."
                    },
                    {
                        "id": 53,
                        "string": "We split the data into train, development and test sets by randomly dividing the 5,554 distinct complex sentences across the sets, while using the provided RDF information to ensure that: 1."
                    },
                    {
                        "id": 54,
                        "string": "Every possible RDF relation (e.g., BORNIN, LOCATEDIN) is represented in the training set (and may appear also in the other sets)."
                    },
                    {
                        "id": 55,
                        "string": "2."
                    },
                    {
                        "id": 56,
                        "string": "Every RDF triplet (a complete fact) is represented only in one of the splits."
                    },
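A greedy sketch of one way to realize these two constraints; the `instances` layout (complex sentence mapped to its set of (relation, subject, object) triples) and the greedy assignment strategy are assumptions, not the authors' released script:

```python
# Split complex sentences so that (1) every relation is seen in training and
# (2) every triple is owned by exactly one split.
import random

def split_data(instances, ratios=(0.8, 0.1, 0.1), seed=0):
    rng = random.Random(seed)
    order = list(instances)
    rng.shuffle(order)
    names = ("train", "dev", "test")
    splits = {n: [] for n in names}
    target = dict(zip(names, ratios))
    triple_home = {}       # triple -> split that owns it (constraint 2)
    seen_relations = set()
    for sent in order:
        triples = instances[sent]
        owners = {triple_home[t] for t in triples if t in triple_home}
        if len(owners) > 1:
            continue  # triples owned by different splits: discard instance
        if owners:
            dest = next(iter(owners))
        elif any(rel not in seen_relations for (rel, _, _) in triples):
            dest = "train"  # constraint 1: unseen relations go to train
        else:
            # fill whichever split is furthest below its target proportion
            done = sum(len(s) for s in splits.values()) + 1
            dest = min(names, key=lambda n: len(splits[n]) / done - target[n])
        splits[dest].append(sent)
        for t in triples:
            triple_home[t] = dest
        seen_relations.update(rel for (rel, _, _) in triples)
    return splits
```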
                    {
                        "id": 57,
                        "string": "While the set of complex sentences is still divided roughly to 80%/10%/10% as in the original split, now there are nearly no simple sentences in  We believe this split strikes a good balance between challenge and feasibility: to succeed, a model needs to learn to identify relations in the complex sentence, link them to their arguments, and produce a rephrasing of them."
                    },
                    {
                        "id": 58,
                        "string": "However, it is not required to generalize to unseen relations."
                    },
                    {
                        "id": 59,
                        "string": "6 The data split and scripts for creating it are available on Github."
                    },
                    {
                        "id": 60,
                        "string": "7 Statistics describing the data split are detailed in Table 4 ."
                    },
                    {
                        "id": 61,
                        "string": "Copy-augmented Model To better suit the split-and-rephrase task, we augment the SEQ2SEQ models with a copy mechanism."
                    },
                    {
                        "id": 62,
                        "string": "Such mechanisms have proven to be beneficial in similar tasks like abstractive summarization (Gu et al., 2016; See et al., 2017) and language modeling (Merity et al., 2017) ."
                    },
                    {
                        "id": 63,
                        "string": "We hypothesize that biasing the model towards copying will improve performance, as many of the words in the simple sentences (mostly corresponding to entities) appear in the complex sentence, as evident by the relatively high BLEU scores for the SOURCE baseline in Table 2 ."
                    },
                    {
                        "id": 64,
                        "string": "Copying is modeled using a \"copy switch\" probability p(z) computed by a sigmoid over a learned composition of the decoder state, the context vector and the last output embedding."
                    },
                    {
                        "id": 65,
                        "string": "It interpolates the p sof tmax distribution over the target vocabulary and a copy distribution p copy over the source sentence tokens."
                    },
                    {
                        "id": 66,
                        "string": "p copy is simply the computed attention weights."
                    },
                    {
                        "id": 67,
                        "string": "Once the above distribu- 6 The updated dataset (v1.0, published by Narayan et al."
                    },
                    {
                        "id": 68,
                        "string": "after this work was accepted) follows (2) above, but not (1)."
                    },
                    {
                        "id": 69,
                        "string": "7 https://github.com/biu-nlp/ sprp-acl2018 p(w) = p(z = 1)p copy (w) + p(z = 0)p sof tmax (w) In case w is not present in the output vocabulary, we set p sof tmax (w) = 0."
                    },
                    {
                        "id": 70,
                        "string": "We refer the reader to See et al."
                    },
                    {
                        "id": 71,
                        "string": "(2017) for a detailed discussion regarding the copy mechanism."
                    },
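For concreteness, a minimal PyTorch-style sketch of the interpolation p(w) = p(z = 1) p_copy(w) + p(z = 0) p_softmax(w) described above. The class name, tensor shapes, and the scatter-based realization of p_copy are illustrative assumptions, not the authors' or OpenNMT's implementation:

```python
# Copy-switch interpolation: p(z) is a sigmoid over a learned composition of
# decoder state, context vector and last output embedding; p_copy places the
# attention weights at the vocabulary positions of the source tokens.
import torch
import torch.nn as nn

class CopySwitch(nn.Module):
    def __init__(self, hidden_size, emb_size):
        super().__init__()
        self.linear = nn.Linear(2 * hidden_size + emb_size, 1)

    def forward(self, dec_state, context, last_emb, p_softmax, attn, src_ids):
        # dec_state, context: (B, H); last_emb: (B, E)
        # p_softmax: (B, V); attn: (B, S); src_ids: (B, S) vocab ids of source
        p_z = torch.sigmoid(self.linear(
            torch.cat([dec_state, context, last_emb], dim=-1)))  # (B, 1)
        # scatter attention mass onto vocabulary ids (source OOV handling via
        # an extended vocabulary is omitted in this sketch)
        p_copy = torch.zeros_like(p_softmax).scatter_add_(1, src_ids, attn)
        return p_z * p_copy + (1.0 - p_z) * p_softmax
```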
                    {
                        "id": 72,
                        "string": "Experiments and Results Models with larger capacities may have greater representation power, but also a stronger tendency to memorize the training data."
                    },
                    {
                        "id": 73,
                        "string": "We therefore perform experiments with copy-enhanced models of varying LSTM widths (128, 256 and 512) ."
                    },
                    {
                        "id": 74,
                        "string": "We train the models using the negative log likelihood of p(w) as the objective."
                    },
                    {
                        "id": 75,
                        "string": "Other than the copy mechanism, we keep the settings identical to those in Section 2."
                    },
                    {
                        "id": 76,
                        "string": "We train models on the original split, our proposed data split and the v1.0 split."
                    },
                    {
                        "id": 77,
                        "string": "in spite of it being larger (1,331,515 vs. 886,857 examples) , indicating that merely adding data will not solve the task."
                    },
                    {
                        "id": 78,
                        "string": "Results Analysis We inspect the models' predictions for the first 20 complex sentences of the original and new validation sets in Table 7 ."
                    },
                    {
                        "id": 79,
                        "string": "We mark each simple sentence as being \"correct\" if it contains all and only relevant information, \"unsupported\" if it contains facts not present in the source, and \"repeated\" if it repeats information from a previous sentence."
                    },
                    {
                        "id": 80,
                        "string": "We also count missing facts."
                    },
                    {
                        "id": 81,
                        "string": "Figure  2 shows the attention weights of the COPY512 model for the same sentence in Figure 1 ."
                    },
                    {
                        "id": 82,
                        "string": "Reassuringly, the attention is now distributed more evenly over the input symbols."
                    },
                    {
                        "id": 83,
                        "string": "On the new splits, all models perform catastrophically."
                    },
                    {
                        "id": 84,
                        "string": "Table 6 shows outputs from the COPY512 model when trained on the new split."
                    },
                    {
                        "id": 85,
                        "string": "On the original split, while SEQ2SEQ128 mainly suffers from missing information, perhaps due to insufficient memorization capacity, SEQ2SEQ512 generated the most unsupported sentences, due to overfitting or memorization."
                    },
                    {
                        "id": 86,
                        "string": "The overall number of issues is clearly reduced in the copy-augmented models."
                    },
                    {
                        "id": 87,
                        "string": "Conclusions We demonstrated that a SEQ2SEQ model can obtain high scores on the original split-and-rephrase task while not actually learning to split-andrephrase."
                    },
                    {
                        "id": 88,
                        "string": "We propose a new and more challenging data-split to remedy this, and demonstrate that the cheating SEQ2SEQ models fail miserably on the new split."
                    },
                    {
                        "id": 89,
                        "string": "Augmenting the SEQ2SEQ models with a copy-mechanism improves performance on both data splits, establishing a new competitive baseline for the task."
                    },
                    {
                        "id": 90,
                        "string": "Yet, the split-and-rephrase task (on the new split) is still far from being solved."
                    },
                    {
                        "id": 91,
                        "string": "We strongly encourage future research to evaluate on our proposed split or on the recently released version 1.0 of the dataset, which is larger and also addresses the overlap issues mentioned here."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 23
                    },
                    {
                        "section": "Preliminary Experiments",
                        "n": "2",
                        "start": 24,
                        "end": 50
                    },
                    {
                        "section": "New Data-split",
                        "n": "3",
                        "start": 51,
                        "end": 60
                    },
                    {
                        "section": "Copy-augmented Model",
                        "n": "4",
                        "start": 61,
                        "end": 71
                    },
                    {
                        "section": "Experiments and Results",
                        "n": "5",
                        "start": 72,
                        "end": 86
                    },
                    {
                        "section": "Conclusions",
                        "n": "6",
                        "start": 87,
                        "end": 91
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1248-Figure1-1.png",
                        "caption": "Figure 1: SEQ2SEQ512’s attention weights. Horizontal: input. Vertical: predictions.",
                        "page": 2,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 274.08,
                            "y1": 321.59999999999997,
                            "y2": 528.96
                        }
                    },
                    {
                        "filename": "../figure/image/1248-Table3-1.png",
                        "caption": "Table 3: Predictions from a vanilla SEQ2SEQ model, illustrating unsupported facts, missing facts and repeated facts. The last two rows show inputs we composed to demonstrate that the models memorize entity-fact pairs.",
                        "page": 2,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 523.1999999999999,
                            "y1": 61.44,
                            "y2": 178.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1248-Table6-1.png",
                        "caption": "Table 6: Predictions from the COPY512 model, trained on the new data split.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 523.1999999999999,
                            "y1": 61.44,
                            "y2": 147.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1248-Table7-1.png",
                        "caption": "Table 7: Results of the manual analysis, showing the number of simple sentences with unsupported facts (unsup.), repeated facts, missing facts and correct facts, for 20 complex sentences from the original and new validation sets.",
                        "page": 4,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 528.0,
                            "y1": 179.51999999999998,
                            "y2": 351.36
                        }
                    },
                    {
                        "filename": "../figure/image/1248-Figure2-1.png",
                        "caption": "Figure 2: Attention weights from the COPY512 model for the same input as in Figure 1.",
                        "page": 4,
                        "bbox": {
                            "x1": 89.75999999999999,
                            "x2": 275.03999999999996,
                            "y1": 521.76,
                            "y2": 728.16
                        }
                    },
                    {
                        "filename": "../figure/image/1248-Table1-1.png",
                        "caption": "Table 1: Statistics for the WEBSPLIT dataset.",
                        "page": 1,
                        "bbox": {
                            "x1": 87.84,
                            "x2": 272.15999999999997,
                            "y1": 61.44,
                            "y2": 214.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1248-Table2-1.png",
                        "caption": "Table 2: BLEU scores, simple sentences per complex sentence (#S/C) and tokens per simple sentence (#T/S), as computed over the test set. SOURCE are the complex sentences and REFERENCE are the reference rephrasings from the test set. Models marked with * use the semantic RDF triples.",
                        "page": 1,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 510.24,
                            "y1": 61.44,
                            "y2": 193.92
                        }
                    },
                    {
                        "filename": "../figure/image/1248-Table4-1.png",
                        "caption": "Table 4: Statistics for the RDF-based data split",
                        "page": 3,
                        "bbox": {
                            "x1": 86.88,
                            "x2": 271.2,
                            "y1": 61.44,
                            "y2": 214.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1248-Table5-1.png",
                        "caption": "Table 5: Results over the test sets of the original, our proposed split and the v1.0 split",
                        "page": 3,
                        "bbox": {
                            "x1": 325.44,
                            "x2": 514.0799999999999,
                            "y1": 61.44,
                            "y2": 277.92
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-59"
        },
        {
            "slides": {
                "0": {
                    "title": "Overview Research question",
                    "text": [
                        "Can orthographic (spelling) information enable better word translations in low-resource contexts?",
                        "Languages with common ancestors and/or borrowing exhibit increased lexical similarity",
                        "Spelling of words can carry signal for translation",
                        "Low-resource pairs are most in need of additional signal"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "1": {
                    "title": "Overview Task and general approach",
                    "text": [
                        "Bilingual lexicon induction: single-word translations",
                        "Operate on word embeddings",
                        "Haghigi et al. (2008): orthographic features",
                        "Mikolov et al. (2013): word2vec, linear mapping"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Baseline Artetxe et al 2017",
                    "text": [
                        "Start with dictionary D (inferred from numerals)",
                        "Learn matrix W minimizing Euclidean distance between target (Z) and mapped source (XW) embeddings of pairs in D",
                        "Use nearest neighbors as entries in new dictionary"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Baseline Artetxe et al 2017 Problems",
                    "text": [
                        "Language English Word Baselines Prediction Reference",
                        "German unevenly gleichmaig (evenly) ungleichmaig",
                        "German Ethiopians Afrikaner (Africans) Athiopier",
                        "Italian autumn primavera (spring) autunno",
                        "Finnish Latvians ukrainalaiset (Ukrainians) latvialaiset",
                        "Suffers from clustering problems present in word2vec",
                        "Similar distributions similar embeddings",
                        "Hints of correct translation present in spelling"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Proposed modifications",
                    "text": [
                        "Use normalized edit distance in nearest-neighbor calculation",
                        "During dictionary induction, distances between similarly-spelled words are reduced",
                        "Extend embedding vectors with character counts",
                        "Extend vectors with scaled counts of letters in both languages alphabets (scale constant k",
                        "Word d1 d2 aba"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Quantitative results",
                    "text": [
                        "English Word Translation Accuracy",
                        "German Italian Target Language Finnish",
                        "Best when combined; largest contribution from embedding extension",
                        "Improvement less pronounced for English-Finnish (linguistic dissimilarity)"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "6": {
                    "title": "Qualitative results",
                    "text": [
                        "Language English Word Baselines Prediction Our Prediction",
                        "German unevenly gleichmaig (evenly) ungleichmaig",
                        "German Ethiopians Afrikaner (Africans) Athiopier",
                        "Italian autumn primavera (spring) autunno",
                        "Finnish Latvians ukrainalaiset (Ukrainians) latvialaiset",
                        "Use orthographic information to disambiguate semantic clusters",
                        "Significant gains in adequacy"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "7": {
                    "title": "Conclusion",
                    "text": [
                        "Orthographic information can improve unsupervised bilingual lexicon induction, especially for language pairs with high lexical similarity.",
                        "These techniques can be incorporated into other embedding-based frameworks."
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "8": {
                    "title": "Results with Identity",
                    "text": [
                        "English Word Translation Accuracy w/ Identity",
                        "German Italian Target Language Finnish"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "9": {
                    "title": "Proof of optimal W",
                    "text": [
                        "W = arg min DijXiW Zj2",
                        "= arg min XiW (DZ )i2",
                        "= arg max Tr(XWZD) W"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "10": {
                    "title": "Proof of optimal W continued",
                    "text": [
                        "W = arg max Tr(XWZD)",
                        "= arg max Tr(ZDXW",
                        "= arg max Tr(UV W [UV SVD(ZDX",
                        "= arg max Tr(V WU)"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                }
            },
            "paper_title": "Orthographic Features for Bilingual Lexicon Induction",
            "paper_id": "1250",
            "paper": {
                "title": "Orthographic Features for Bilingual Lexicon Induction",
                "abstract": "Recent embedding-based methods in bilingual lexicon induction show good results, but do not take advantage of orthographic features, such as edit distance, which can be helpful for pairs of related languages. This work extends embedding-based methods to incorporate these features, resulting in significant accuracy gains for related languages.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Over the past few years, new methods for bilingual lexicon induction have been proposed that are applicable to low-resource language pairs, for which very little sentence-aligned parallel data is available."
                    },
                    {
                        "id": 1,
                        "string": "Parallel data can be very expensive to create, so methods that require less of it or that can utilize more readily available data are desirable."
                    },
                    {
                        "id": 2,
                        "string": "One prevalent strategy involves creating multilingual word embeddings, where each language's vocabulary is embedded in the same latent space (Vulić and Moens, 2013; Mikolov et al., 2013a; Artetxe et al., 2016) ; however, many of these methods still require a strong cross-lingual signal in the form of a large seed dictionary."
                    },
                    {
                        "id": 3,
                        "string": "More recent work has focused on reducing that constraint."
                    },
                    {
                        "id": 4,
                        "string": "Vulić and Moens (2016) and Vulic and Korhonen (2016) use document-aligned data to learn bilingual embeddings instead of a seed dictionary."
                    },
                    {
                        "id": 5,
                        "string": "Artetxe et al."
                    },
                    {
                        "id": 6,
                        "string": "(2017) use a very small, automatically-generated seed lexicon of identical numerals as the initialization in an iterative self-learning framework to learn a linear mapping between monolingual embedding spaces; Zhang et al."
                    },
                    {
                        "id": 7,
                        "string": "(2017) use an adversarial training method to learn a similar mapping."
                    },
                    {
                        "id": 8,
                        "string": "Lample et al."
                    },
                    {
                        "id": 9,
                        "string": "(2018a) use a series of techniques to align monolingual embedding spaces in a completely unsupervised way; their method is used by Lample et al."
                    },
                    {
                        "id": 10,
                        "string": "(2018b) as the initialization for a completely unsupervised machine translation system."
                    },
                    {
                        "id": 11,
                        "string": "These recent advances in unsupervised bilingual lexicon induction show promise for use in low-resource contexts."
                    },
                    {
                        "id": 12,
                        "string": "However, none of them make use of linguistic features of the languages themselves (with the arguable exception of syntactic/semantic information encoded in the word embeddings)."
                    },
                    {
                        "id": 13,
                        "string": "This is in contrast to work that predates many of these embedding-based methods that leveraged linguistic features such as edit distance and orthographic similarity: Dyer et al."
                    },
                    {
                        "id": 14,
                        "string": "(2011) and Berg-Kirkpatrick et al."
                    },
                    {
                        "id": 15,
                        "string": "(2010) investigate using linguistic features for word alignment, and Haghighi et al."
                    },
                    {
                        "id": 16,
                        "string": "(2008) use linguistic features for unsupervised bilingual lexicon induction."
                    },
                    {
                        "id": 17,
                        "string": "These features can help identify words with common ancestry (such as the English-Italian pair agile-agile) and borrowed words (macaronimaccheroni)."
                    },
                    {
                        "id": 18,
                        "string": "The addition of linguistic features led to increased performance in these earlier models, especially for related languages, yet these features have not been applied to more modern methods."
                    },
                    {
                        "id": 19,
                        "string": "In this work, we extend the modern embeddingbased approach of Artetxe et al."
                    },
                    {
                        "id": 20,
                        "string": "(2017) with orthographic information in order to leverage similarities between related languages for increased accuracy in bilingual lexicon induction."
                    },
                    {
                        "id": 21,
                        "string": "Background This work is directly based on the work of Artetxe et al."
                    },
                    {
                        "id": 22,
                        "string": "(2017) ."
                    },
                    {
                        "id": 23,
                        "string": "Following their work, let X ∈ R |Vs|×d and Z ∈ R |Vt|×d be the word embedding matrices of two distinct languages, referred to respectively as the source and target, such that each row corresponds to the d-dimensional embedding of a single word."
                    },
                    {
                        "id": 24,
                        "string": "We refer to the ith row of one of these matrices as X i * or Z i * ."
                    },
                    {
                        "id": 25,
                        "string": "The vocabularies for each language are V s and V t , respectively."
                    },
                    {
                        "id": 26,
                        "string": "Also let D ∈ {0, 1} |Vs|×|Vt| be a binary matrix representing a dictionary such that D ij = 1 if the ith word in the source language is aligned with the jth word in the target language."
                    },
                    {
                        "id": 27,
                        "string": "We wish to find a mapping matrix W ∈ R d×d that maps source embeddings onto their aligned target embeddings."
                    },
                    {
                        "id": 28,
                        "string": "Artetxe et al."
                    },
                    {
                        "id": 29,
                        "string": "(2017) define the optimal mapping matrix W * with the following equation, W * = arg min W i j D ij X i * W − Z j * 2 which minimizes the sum of the squared Euclidean distances between mapped source embeddings and their aligned target embeddings."
                    },
                    {
                        "id": 30,
                        "string": "By normalizing and mean-centering X and Z, and enforcing that W be an orthogonal matrix (W T W = I), the above formulation becomes equivalent to maximizing the dot product between the mapped source embeddings and target embeddings, such that W * = arg max W Tr(XW Z T D T ) where Tr(·) is the trace operator, the sum of all diagonal entries."
                    },
                    {
                        "id": 31,
                        "string": "The optimal solution to this equa- tion is W * = U V T , where X T DZ = U ΣV T is the singular value decomposition of X T DZ."
                    },
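This closed-form (Procrustes) solution lends itself to a direct NumPy sketch; representing the dictionary D as a dense 0/1 matrix is an assumption for clarity (an index-based form would be used at scale):

```python
# Orthogonal W maximizing Tr(X W Z^T D^T): W* = U V^T with X^T D Z = U S V^T.
import numpy as np

def optimal_mapping(X, Z, D):
    # X: (|Vs|, d), Z: (|Vt|, d), D: (|Vs|, |Vt|) binary dictionary matrix
    U, _, Vt = np.linalg.svd(X.T @ D @ Z)
    return U @ Vt
```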
                    {
                        "id": 32,
                        "string": "This formulation requires a seed dictionary."
                    },
                    {
                        "id": 33,
                        "string": "To reduce the need for a large seed dictionary, Artetxe et al."
                    },
                    {
                        "id": 34,
                        "string": "(2017) propose an iterative, self-learning framework that determines W as above, uses it to calculate a new dictionary D, and then iterates until convergence."
                    },
                    {
                        "id": 35,
                        "string": "In the dictionary induction step, they set D ij = 1 if j = arg max k (X i * W ) · Z k * and D ij = 0 otherwise."
                    },
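A NumPy sketch of this dictionary induction step, under the same dense-matrix assumption as above:

```python
# Align each source word to its nearest target neighbor under the mapped
# dot-product similarity: j = argmax_k (X_i W) . Z_k.
import numpy as np

def induce_dictionary(X, Z, W):
    sim = (X @ W) @ Z.T            # (|Vs|, |Vt|) similarity matrix
    best = sim.argmax(axis=1)
    D = np.zeros_like(sim)
    D[np.arange(len(best)), best] = 1.0
    return D
```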
                    {
                        "id": 36,
                        "string": "We propose two methods for extending this system using orthographic information, described in the following two sections."
                    },
                    {
                        "id": 37,
                        "string": "Orthographic Extension of Word Embeddings This method augments the embeddings for all words in both languages before using them in the self-learning framework of Artetxe et al."
                    },
                    {
                        "id": 38,
                        "string": "(2017) ."
                    },
                    {
                        "id": 39,
                        "string": "To do this, we append to each word's embedding a vector of length equal to the size of the union of the two languages' alphabets."
                    },
                    {
                        "id": 40,
                        "string": "Each position in this vector corresponds to a single letter, and its value is set to the count of that letter within the spelling of the word."
                    },
                    {
                        "id": 41,
                        "string": "This letter count vector is then scaled by a constant before being appended to the base word embedding."
                    },
                    {
                        "id": 42,
                        "string": "After appending, the resulting augmented vector is normalized to have magnitude 1."
                    },
                    {
                        "id": 43,
                        "string": "Mathematically, let A be an ordered set of characters (an alphabet), containing all characters appearing in both language's alphabets: A = A source ∪ A target Let O source and O target be the orthographic extension matrices for each language, containing counts of the characters appearing in each word w i , scaled by a constant factor c e : O ij = c e · count(A j , w i ), O ∈ {O source , O target } Then, we concatenate the embedding matrices and extension matrices: X = [X; O source ], Z = [Z; O target ] Finally, in the normalized embedding matrices X and Z , each row has magnitude 1: X i * = X i * X i * , Z i * = Z i * Z i * These new matrices are used in place of X and Z in the self-learning process."
                    },
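A NumPy sketch of this extension under the definitions above; the function name is illustrative, and the default c_e = 1/8 anticipates the value selected on development data later in the paper:

```python
# Append scaled character counts over the union alphabet to each embedding,
# then renormalize every row to unit length.
import numpy as np
from collections import Counter

def extend_embeddings(emb, words, alphabet, c_e=1.0 / 8):
    # emb: (|V|, d); words: list of |V| strings; alphabet: ordered characters
    idx = {ch: i for i, ch in enumerate(alphabet)}
    O = np.zeros((len(words), len(alphabet)))
    for r, w in enumerate(words):
        for ch, n in Counter(w).items():
            if ch in idx:
                O[r, idx[ch]] = c_e * n
    ext = np.hstack([emb, O])
    return ext / np.linalg.norm(ext, axis=1, keepdims=True)
```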
                    {
                        "id": 44,
                        "string": "Orthographic Similarity Adjustment This method modifies the similarity score for each word pair during the dictionary induction phase of the self-learning framework of Artetxe et al."
                    },
                    {
                        "id": 45,
                        "string": "(2017) , which uses the dot product of two words' embeddings to quantify similarity."
                    },
                    {
                        "id": 46,
                        "string": "We modify this similarity score by adding a measure of orthographic similarity, which is a function of the normalized string edit distance of the two words."
                    },
                    {
                        "id": 47,
                        "string": "The normalized edit distance is defined as the Levenshtein distance (L(·, ·)) (Levenshtein, 1966) divided by the length of the longer word."
                    },
                    {
                        "id": 48,
                        "string": "The Levenshtein distance represents the minimum number of insertions, deletions, and substitutions required to transform one word into the other."
                    },
                    {
                        "id": 49,
                        "string": "The normalized edit distance function is denoted as NL(·, ·)."
                    },
                    {
                        "id": 50,
                        "string": "NL(w 1 , w 2 ) = L(w 1 , w 2 ) max(|w 1 |, |w 2 |) We define the orthographic similarity of two words w 1 and w 2 as log(2.0−NL(w 1 , w 2 ))."
                    },
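A direct Python rendering of these two definitions, using a standard dynamic-programming Levenshtein distance:

```python
# Normalized edit distance NL and the orthographic similarity log(2 - NL).
import math

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def orthographic_similarity(w1, w2):
    nl = levenshtein(w1, w2) / max(len(w1), len(w2))
    return math.log(2.0 - nl)
```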
                    {
                        "id": 51,
                        "string": "These similarity scores are used to form an orthographic similarity matrix S, where each entry corresponds to a source-target word pair."
                    },
                    {
                        "id": 52,
                        "string": "Each entry is first scaled by a constant factor c s ."
                    },
                    {
                        "id": 53,
                        "string": "This matrix is added to the standard similarity matrix, XW Z T ."
                    },
                    {
                        "id": 54,
                        "string": "S ij = c s ·log(2.0−NL(w i , w j )), w i ∈ V s , w j ∈ V t The vocabulary for each language is 200,000 words, so computing a similarity score for each pair would involve 40 billion edit distance calculations."
                    },
                    {
                        "id": 55,
                        "string": "Also, the vast majority of word pairs are orthographically very dissimilar, resulting in a normalized edit distance close to 1 and an orthographic similarity close to 0, having little to no effect on the overall estimated similarity."
                    },
                    {
                        "id": 56,
                        "string": "Therefore, we only calculate the edit distance for a subset of possible word pairs."
                    },
                    {
                        "id": 57,
                        "string": "Thus, the actual orthographic similarity matrix that we use is as follows: S ij = S ij w i , w j ∈ symDelete(V t ,V s ,k) 0 otherwise This subset of word pairs was chosen using an adaptation of the Symmetric Delete spelling correction algorithm described by Garbe (2012) , which we denote as symDelete(·,·,·)."
                    },
                    {
                        "id": 58,
                        "string": "This algorithm takes as arguments the target vocabulary, source vocabulary, and a constant k, and identifies all source-target word pairs that are identical after k or fewer deletions from each word; that is, all pairs where each is reachable from the other with no more than k insertions and k deletions."
                    },
                    {
                        "id": 59,
                        "string": "For example, the Italian-English pair modernomodern will be identified with k = 1, and the pair tollerante-tolerant will be identified with k = 2."
                    },
                    {
                        "id": 60,
                        "string": "The algorithm works by computing all strings formed by k or fewer deletions from each target word, stores them in a hash table, then does the same for each source word and generates sourcetarget pairs that share an entry in the hash table."
                    },
                    {
                        "id": 61,
                        "string": "The complexity of this algorithm can be expressed as O(|V |l k ), where V = V t ∪ V s is the combined vocabulary and l is the length of the longest word in V ."
                    },
                    {
                        "id": 62,
                        "string": "This is linear with respect to the vocabulary size, as opposed to the quadratic complexity required for computing the entire matrix."
                    },
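A sketch of the symDelete heuristic as described above: index every string reachable from a target word by at most k deletions, probe the index with the source words' deletion variants, and keep pairs sharing a variant. Function names and the max_len cutoff (30, matching the experiments below) are illustrative:

```python
# Candidate pair generation: words identical after at most k deletions each.
from itertools import combinations

def deletion_variants(word, k):
    variants = {word}
    for n in range(1, min(k, len(word)) + 1):
        for idxs in combinations(range(len(word)), n):
            drop = set(idxs)
            variants.add("".join(c for i, c in enumerate(word)
                                 if i not in drop))
    return variants

def sym_delete(target_vocab, source_vocab, k=1, max_len=30):
    index = {}
    for t in target_vocab:
        if len(t) <= max_len:
            for v in deletion_variants(t, k):
                index.setdefault(v, set()).add(t)
    pairs = set()
    for s in source_vocab:
        if len(s) <= max_len:
            for v in deletion_variants(s, k):
                for t in index.get(v, ()):
                    pairs.add((s, t))
    return pairs
```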
                    {
                        "id": 63,
                        "string": "However, the algorithm is sensitive to both word length and the choice of k. In our experiments, we found that ignoring all words of length greater than 30 allowed the algorithm to complete very quickly while skipping less than 0.1% of the data."
                    },
                    {
                        "id": 64,
                        "string": "We also used small values of k (0 < k < 4), and used k = 1 for our final results, finding no significant benefit from using a larger value."
                    },
                    {
                        "id": 65,
                        "string": "Experiments We use the datasets used by Artetxe et al."
                    },
                    {
                        "id": 66,
                        "string": "(2017) , consisting of three language pairs: English-Italian, English-German, and English-Finnish."
                    },
                    {
                        "id": 67,
                        "string": "The English-Italian dataset was introduced in Dinu and Baroni (2014) ; the other datasets were created by Artetxe et al."
                    },
                    {
                        "id": 68,
                        "string": "(2017) ."
                    },
                    {
                        "id": 69,
                        "string": "Each dataset includes monolingual word embeddings (trained with word2vec (Mikolov et al., 2013b) ) for both languages and a bilingual dictionary, separated into a training and test set."
                    },
                    {
                        "id": 70,
                        "string": "We do not use the training set as the input dictionary to the system, instead using an automatically-generated dictionary consisting only of numeral identity translations (such as 2-2, 3-3, et cetera) as in Artetxe et al."
                    },
                    {
                        "id": 71,
                        "string": "(2017) ."
                    },
                    {
                        "id": 72,
                        "string": "1 However, because the methods presented in this work feature tunable hyperparameters, we use a portion of the training set as devel- Results and Discussion For our experiments with orthographic extension of word embeddings, each embedding was extended by the size of the union of the alphabets of both languages."
                    },
                    {
                        "id": 73,
                        "string": "The size of this union was 199 for English-Italian, 200 for English-German, and 287 for English-Finnish."
                    },
                    {
                        "id": 74,
                        "string": "These numbers are perhaps unintuitively high."
                    },
                    {
                        "id": 75,
                        "string": "However, the corpora include many other characters, including diacritical markings and various symbols (%, [, !, etc.)"
                    },
                    {
                        "id": 76,
                        "string": "that are an indication that tokenization of the data could be improved."
                    },
                    {
                        "id": 77,
                        "string": "We did not filter these characters in this work."
                    },
                    {
                        "id": 78,
                        "string": "For our experiments with orthographic similarity adjustment, the heuristic identified approximately 2 million word pairs for each language pair out of a possible 40 billion, resulting in significant computation savings."
                    },
                    {
                        "id": 79,
                        "string": "Figure 1 shows the results on the development data."
                    },
                    {
                        "id": 80,
                        "string": "Based on these results, we selected c e = 1 8 and c s = 1 as our hyperparameters."
                    },
                    {
                        "id": 81,
                        "string": "The local optima were not identical for all three languages, but we felt that these values struck the best compromise among them."
                    },
                    {
                        "id": 82,
                        "string": "Table 1 compares our methods against the system of Artetxe et al."
                    },
                    {
                        "id": 83,
                        "string": "(2017) , using scaling factors selected based on development data results."
                    },
                    {
                        "id": 84,
                        "string": "Because approximately 20% of source-target pairs in the dictionary were identical, we also extended all systems to guess the identity translation if the source word appeared in the target vocabulary."
                    },
                    {
                        "id": 85,
                        "string": "This improved accuracy in most cases, with some exceptions for English-Italian."
                    },
                    {
                        "id": 86,
                        "string": "We also experimented with both methods together, and found that this was the best of the settings that did not include the identity translation component; with the identity component included, however, the embedding extension method alone was best for English-Finnish."
                    },
                    {
                        "id": 87,
                        "string": "The fact that Finnish is the only language here that is not in the Indo-European family (and has fewer words borrowed from English or its ancestors) may explain why the performance trends for English-Finnish were different than those of the other two language pairs."
                    },
                    {
                        "id": 88,
                        "string": "In addition to identifying orthographically similar words, the extension method is capable of learning a mapping between source and target letters, which could partially explain its improved performance over our edit distance method."
                    },
                    {
                        "id": 89,
                        "string": "Table 2 shows some correct translations from our system that were missed by the baseline."
                    },
                    {
                        "id": 90,
                        "string": "Conclusion and Future Work In this work, we presented two techniques (which can be combined) for improving embedding-based bilingual lexicon induction for related languages using orthographic information and no parallel data, allowing their use with low-resource language pairs."
                    },
                    {
                        "id": 91,
                        "string": "These methods increased accuracy in our experiments, with both the combined and embedding extension methods providing significant gains over the baseline system."
                    },
                    {
                        "id": 92,
                        "string": "In the future, we want to extend this work to related languages with different alphabets (experimenting with transliteration or phonetic transcription) and to extend other unsupervised bilingual lexicon induction systems, such as that of Lample et al."
                    },
                    {
                        "id": 93,
                        "string": "(2018a) ."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 20
                    },
                    {
                        "section": "Background",
                        "n": "2",
                        "start": 21,
                        "end": 36
                    },
                    {
                        "section": "Orthographic Extension of Word Embeddings",
                        "n": "3",
                        "start": 37,
                        "end": 43
                    },
                    {
                        "section": "Orthographic Similarity Adjustment",
                        "n": "4",
                        "start": 44,
                        "end": 64
                    },
                    {
                        "section": "Experiments",
                        "n": "5",
                        "start": 65,
                        "end": 71
                    },
                    {
                        "section": "Results and Discussion",
                        "n": "6",
                        "start": 72,
                        "end": 89
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "7",
                        "start": 90,
                        "end": 93
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1250-Figure1-1.png",
                        "caption": "Figure 1: Performance on development data vs. scaling factors ce and cs. The lowest tested value for both was 10−6.",
                        "page": 2,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 523.1999999999999,
                            "y1": 65.75999999999999,
                            "y2": 191.04
                        }
                    },
                    {
                        "filename": "../figure/image/1250-Table2-1.png",
                        "caption": "Table 2: Examples of pairs correctly identified by our embedding extension method that were incorrectly translated by the system of Artetxe et al. (2017). Our system can disambiguate semantic clusters created by word2vec.",
                        "page": 3,
                        "bbox": {
                            "x1": 128.64,
                            "x2": 469.44,
                            "y1": 228.48,
                            "y2": 305.28
                        }
                    },
                    {
                        "filename": "../figure/image/1250-Table1-1.png",
                        "caption": "Table 1: Comparison of methods on test data. Scaling constants ce and cs were selected based on performance on development data over all three language pairs. The last two rows report the results of using both methods together.",
                        "page": 3,
                        "bbox": {
                            "x1": 113.75999999999999,
                            "x2": 484.32,
                            "y1": 62.879999999999995,
                            "y2": 162.23999999999998
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-60"
        },
        {
            "slides": {
                "0": {
                    "title": "Semantic Graphs",
                    "text": [
                        "WordNet-like resources are curated to describe relations between word senses",
                        "The graph is directed",
                        "Edges have form <S, r, T>: <zebra, is-a, equine>",
                        "Still, some relations are symmetric"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "2": {
                    "title": "Incorporating a Global View",
                    "text": [
                        "We want to avoid unreasonable graphs",
                        "Imposing hard constraints isnt flexible enough",
                        "Only takes care of impossible graphs",
                        "We still want the local signal to matter - its very strong.",
                        "Our solution: an additive, learnable global graph score",
                        "Score(<zebra, hypernym, equine>| WordNet) =",
                        "slocal(edge) + (sglobal(WN edge), sglobal(WN))"
                    ],
                    "page_nums": [
                        6,
                        7
                    ],
                    "images": []
                },
                "3": {
                    "title": "Global Graph Score",
                    "text": [
                        "Based on a framework called Exponential Random Graph Model (ERGM)",
                        "The score sglobal(WN) is derived from a log-linear distribution across possible graphs that have a fixed number n of nodes",
                        "OK. What are the features?"
                    ],
                    "page_nums": [
                        8,
                        9
                    ],
                    "images": []
                },
                "5": {
                    "title": "Graph Motifs multiple relations",
                    "text": [
                        "(some) joint blue/orange motifs:"
                    ],
                    "page_nums": [
                        15,
                        16,
                        17,
                        18
                    ],
                    "images": []
                },
                "6": {
                    "title": "ERGM Training",
                    "text": [
                        "Estimating the scores for all possible graphs to obtain a probability distribution is implausible",
                        "Number of possible directed graphs with n nodes: O(exp(n2))",
                        "Estimation begins to be hard at ~n=100 for R=1. In WordNet: n = 40K, R",
                        "Unlike other structured problems, theres no known dynamic programming algorithm either",
                        "What can we do?",
                        "Decompose score over dyads (node pairs) in graph",
                        "Draw and score negative sample graphs"
                    ],
                    "page_nums": [
                        19,
                        20,
                        21
                    ],
                    "images": []
                },
                "7": {
                    "title": "Max Margin Markov Graph Model M3GM",
                    "text": [
                        "Sample negative graphs from the local neighborhood of the true WN",
                        "Loss = Max {0, 1 + score(negative sample)",
                        "Its important to choose an appropriate proposal distribution (source of the negative samples)",
                        "We want to make things hard for the scorer"
                    ],
                    "page_nums": [
                        22,
                        23,
                        24,
                        25,
                        26,
                        27,
                        28
                    ],
                    "images": [
                        "figure/image/1261-Table4-1.png"
                    ]
                },
                "8": {
                    "title": "Evaluation",
                    "text": [
                        "No reciprocal relations (hypernym hyponym)",
                        "Still includes symmetric relations",
                        "Rule baseline - take symmetric if exists in train",
                        "Used in all models as default for symmetric relations DistMult",
                        "Synset embeddings - averaged from FastText",
                        "M3GM (re-rank top 100 from local)",
                        "Metrics - MRR, H@10 transE"
                    ],
                    "page_nums": [
                        29,
                        30,
                        31
                    ],
                    "images": []
                },
                "10": {
                    "title": "Feature Analysis",
                    "text": [
                        "Motifs with heavy positive weights:",
                        "Motifs with heavy negative weights:",
                        "Target of both has_part and verb_group",
                        "Seen in training data",
                        "Derivations occur in the abstract parts of the graph",
                        "(bodega canteen vs. shop)"
                    ],
                    "page_nums": [
                        33,
                        34,
                        35,
                        36,
                        37,
                        38
                    ],
                    "images": []
                },
                "11": {
                    "title": "Future Work",
                    "text": [
                        "Multilingual transfers of semantic graphs align embeddings / translate concepts",
                        "Can we introduce global features to help?"
                    ],
                    "page_nums": [
                        39,
                        40,
                        41
                    ],
                    "images": []
                }
            },
            "paper_title": "Predicting Semantic Relations using Global Graph Properties",
            "paper_id": "1261",
            "paper": {
                "title": "Predicting Semantic Relations using Global Graph Properties",
                "abstract": "Semantic graphs, such as WordNet, are resources which curate natural language on two distinguishable layers. On the local level, individual relations between synsets (semantic building blocks) such as hypernymy and meronymy enhance our understanding of the words used to express their meanings. Globally, analysis of graph-theoretic properties of the entire net sheds light on the structure of human language as a whole. In this paper, we combine global and local properties of semantic graphs through the framework of Max-Margin Markov Graph Models (M3GM), a novel extension of Exponential Random Graph Model (ERGM) that scales to large multi-relational graphs. We demonstrate how such global modeling improves performance on the local task of predicting semantic relations between synsets, yielding new state-ofthe-art results on the WN18RR dataset, a challenging version of WordNet link prediction in which \"easy\" reciprocal cases are removed. In addition, the M3GM model identifies multirelational motifs that are characteristic of wellformed lexical semantic ontologies.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Semantic graphs, such as WordNet (Fellbaum, 1998) , encode the structural qualities of language as a representation of human knowledge."
                    },
                    {
                        "id": 1,
                        "string": "On the local level, they describe connections between specific semantic concepts, or synsets, through individual edges representing relations such as hypernymy ('is-a') or meronymy ('is-part-of'); on the global level, they encode emergent regular properties in the induced relation graphs."
                    },
                    {
                        "id": 2,
                        "string": "Local properties have been subject to extensive study in recent years via the task of relation prediction, where individual edges are found based mostly on distributional methods that embed synsets and relations into a vector space (e.g."
                    },
                    {
                        "id": 3,
                        "string": "Socher et al., 2013; Bordes et al., 2013; Neelakantan et al., 2015) ."
                    },
                    {
                        "id": 4,
                        "string": "In contrast, while the structural regularity and significance of global aspects of semantic graphs is well-attested (Sigman and Cecchi, 2002) , global properties have rarely been used in prediction settings."
                    },
                    {
                        "id": 5,
                        "string": "In this paper, we show how global semantic graph features can facilitate in local tasks such as relation prediction."
                    },
                    {
                        "id": 6,
                        "string": "To motivate this approach, consider the hypothetical hypernym graph fragments in Figure 1 : in (a), the semantic concept (synset) 'catamaran' has a single hypernym, 'boat'."
                    },
                    {
                        "id": 7,
                        "string": "This is a typical property across a standard hypernym graph."
                    },
                    {
                        "id": 8,
                        "string": "In (b), the synset 'cat' has two hypernyms, an unlikely event."
                    },
                    {
                        "id": 9,
                        "string": "While a local relation prediction model might mistake the relation between 'cat' and 'boat' to be plausible, for whatever reason, a high-order graphstructure-aware model should be able to discard it based on the knowledge that a synset should not have more than one hypernym."
                    },
                    {
                        "id": 10,
                        "string": "In (c), an impossible situation arises: a cycle in the hypernym graph leads each of the participating synsets to be predicted by transitivity as its own hypernym, contrary to the relation's definition."
                    },
                    {
                        "id": 11,
                        "string": "However, a purely local model has no explicit mechanism for rejecting such an outcome."
                    },
                    {
                        "id": 12,
                        "string": "In this paper, we examine the effect of global graph properties on the link structure via the WordNet relation prediction task."
                    },
                    {
                        "id": 13,
                        "string": "Our hypothesis is that features extracted from the entire graph can help constrain local predictions to structurally sound ones (Guo et al., 2007) ."
                    },
                    {
                        "id": 14,
                        "string": "Such features are often manifested as aggregate counts of small subgraph structures, known as motifs, such as the number of nodes with two or more outgoing edges, or the number of cycles of length 3."
                    },
                    {
                        "id": 15,
                        "string": "Returning to the example in Figure 1 , each of these features will be affected when graphs (b) and (c) are evaluated, respectively."
                    },
                    {
                        "id": 16,
                        "string": "To estimate weights on local and global graph features, we build on the Exponential Random Graph Model (ERGM), a log-linear model over networks utilizing global graph features (Holland and Leinhardt, 1981) ."
                    },
                    {
                        "id": 17,
                        "string": "In ERGMs, the likelihood of a graph is computed by exponentiating a weighted sum of the features, and then normalizing over all possible graphs."
                    },
                    {
                        "id": 18,
                        "string": "This normalization term grows exponentially in the number of nodes, and in general cannot be decomposed into smaller parts."
                    },
                    {
                        "id": 19,
                        "string": "Approximations are therefore necessary to fit ERGMs on graphs with even a few dozen nodes, and the largest known ERGMs scale only to thousands of nodes (Schmid and Desmarais, 2017) ."
                    },
                    {
                        "id": 20,
                        "string": "This is insufficient for WordNet, which has an order of 10 5 nodes."
                    },
                    {
                        "id": 21,
                        "string": "We extend the ERGM framework in several ways."
                    },
                    {
                        "id": 22,
                        "string": "First, we replace the maximum likelihood objective with a margin-based objective, which compares the observed network against alternative networks; we call the resulting model the Max-Margin Markov Graph Model (M3GM), drawing on ideas from structured prediction (Taskar et al., 2004) ."
                    },
                    {
                        "id": 23,
                        "string": "The gradient of this loss is approximated by importance sampling over candidate negative edges, using a local relational model as a proposal distribution."
                    },
                    {
                        "id": 24,
                        "string": "The complexity of each epoch of estimation is thus linear in the number of edges, making it possible to scale up to the 10 5 nodes in WordNet."
                    },
                    {
                        "id": 25,
                        "string": "1 Second, we address the multi-relational nature of semantic graphs, by incorporating a combinatorial set of labeled motifs."
                    },
                    {
                        "id": 26,
                        "string": "Finally, we link graph-level relational features with distributional information, by combining the M3GM with a dyad-level model over word sense embeddings."
                    },
                    {
                        "id": 27,
                        "string": "We train M3GM as a re-ranker, which we apply to a a strong local-feature baseline on the WN18RR dataset (Dettmers et al., 2018) ."
                    },
                    {
                        "id": 28,
                        "string": "This yields absolute improvements of 3-4 points on all commonly-used metrics."
                    },
                    {
                        "id": 29,
                        "string": "Model inspection reveals that M3GM assigns importance to features from all relations, and captures some interesting inter-relational properties that lend insight into the overall structure of WordNet."
                    },
                    {
                        "id": 30,
                        "string": "2 Related Work Relational prediction in semantic graphs."
                    },
                    {
                        "id": 31,
                        "string": "Recent approaches to relation prediction in semantic graphs generally start by embedding the semantic concepts into a shared space and modeling relations by some operator that induces a score for an embedding pair input."
                    },
                    {
                        "id": 32,
                        "string": "We use several of these techniques as base models (Nickel et al., 2011; Bordes et al., 2013; Yang et al., 2014) ; detailed description of these methods is postponed to Section 3.2."
                    },
                    {
                        "id": 33,
                        "string": "Socher et al."
                    },
                    {
                        "id": 34,
                        "string": "(2013) generalize over the approach of Nickel et al."
                    },
                    {
                        "id": 35,
                        "string": "(2011) by using a bilinear tensor which assigns multiple parameters for each relation; Shi and Weninger (2017) project the node embeddings in a translational model similar to that of Bordes et al."
                    },
                    {
                        "id": 36,
                        "string": "(2013) ; Dettmers et al."
                    },
                    {
                        "id": 37,
                        "string": "(2018) apply a convolutional neural network by reshaping synset embeddings to 2-dimensional matrices."
                    },
                    {
                        "id": 38,
                        "string": "None of these embedding-based approaches incorporate structural information; in general, improvements in embedding-based methods are expected to be complementary to our approach."
                    },
                    {
                        "id": 39,
                        "string": "Some recent works compose single edges into more intricate motifs, such as Guu et al."
                    },
                    {
                        "id": 40,
                        "string": "(2015) , who define a task of path prediction and compose various functions to solve it."
                    },
                    {
                        "id": 41,
                        "string": "They find that compositionalized bilinear models perform best on WordNet."
                    },
                    {
                        "id": 42,
                        "string": "Minervini et al."
                    },
                    {
                        "id": 43,
                        "string": "(2017) train link-prediction models against an adversary that produces examples which violate structural constraints such as symmetry and transitivity."
                    },
                    {
                        "id": 44,
                        "string": "Another line of work builds on local neighborhoods of relation interactions and automatic detection of relations from syntactically parsed text (Riedel et al., 2013; ."
                    },
                    {
                        "id": 45,
                        "string": "Schlichtkrull et al."
                    },
                    {
                        "id": 46,
                        "string": "(2017) use Graph Convolutional Networks to predict relations while considering high-order neighborhood properties of the nodes in question."
                    },
                    {
                        "id": 47,
                        "string": "In general, these methods aggregate information over local neighborhoods, but do not explicitly model structural motifs."
                    },
                    {
                        "id": 48,
                        "string": "Our model introduces interaction features between relations (e.g., hypernyms and meronyms) for the goal of relation prediction."
                    },
                    {
                        "id": 49,
                        "string": "To our knowledge, this is the first time that relation interaction is explicitly modeled into a relation prediction task."
                    },
                    {
                        "id": 50,
                        "string": "Within the ERGM framework, Lu et al."
                    },
                    {
                        "id": 51,
                        "string": "(2010) train a limited set of combinatory path features for social network link prediction."
                    },
                    {
                        "id": 52,
                        "string": "Scaling exponential random graph models."
                    },
                    {
                        "id": 53,
                        "string": "The problem of approximating the denominator of the ERGM probability has been an active research topic for several decades."
                    },
                    {
                        "id": 54,
                        "string": "Two common approximation methods exist in the literature."
                    },
                    {
                        "id": 55,
                        "string": "In Maximum Pseudolikelihood Estimation (MPLE; Strauss and Ikeda, 1990 ), a graph's probability is decomposed into a product of the probability for each edge, which in turn is computed based on the ERGM feature difference between the graph excluding the edge and the full graph."
                    },
                    {
                        "id": 56,
                        "string": "Monte Carlo Maximum Likelihood Estimation (MCMLE; Snijders, 2002) follows a sampling logic, where a large number of graphs is randomly generated from the overall space under the intuition that the sum of their scores would give a good approximation for the total score mass."
                    },
                    {
                        "id": 57,
                        "string": "The probability for the observed graph is then estimated following normalization conditioned on the sampling distribution, and its precision increases as more samples are gathered."
                    },
                    {
                        "id": 58,
                        "string": "Recent work found that applying a parametric bootstrap can increase the reliability of MPLE, while retaining its superiority in training speed (Schmid and Desmarais, 2017) ."
                    },
                    {
                        "id": 59,
                        "string": "Despite this result, we opted for an MCMLE-based approach for M3GM, mainly due to the ability to keep the number of edges constant in each sampled graph."
                    },
                    {
                        "id": 60,
                        "string": "This property is important in our setup, since local edge scores added or removed to the overall graph score can occasionally dominate the objective function, giving unintended importance to the overall edge count."
                    },
                    {
                        "id": 61,
                        "string": "Max-Margin Markov Graph Models Consider a graph G = (V, E), where V is a set of vertices and E = {(s i , t i )} |E| i=1 is a set of directed edges."
                    },
                    {
                        "id": 62,
                        "string": "The ERGM scoring function defines a probability over G |V | , the set of all graphs with |V | nodes."
                    },
                    {
                        "id": 63,
                        "string": "This probability is defined as a loglinear function, P ERGM (G) ∝ ψ ERGM (G) = exp θ T f (G) , (1) where f is a feature function, from graphs to a vector of feature counts."
                    },
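To make eq. (1) concrete, here is a minimal sketch of the unnormalized ERGM log-score; the graph representation and the motif-counting functions are assumptions of this illustration, not the paper's implementation.

```python
import numpy as np

def ergm_log_score(graph, theta, motif_counters):
    """Unnormalized ERGM log-score: log psi(G) = theta^T f(G).

    `motif_counters` is a list of functions, each counting one motif
    (e.g. cycles of length 2) in `graph`; exponentiate the result to
    obtain psi(G) from eq. (1).
    """
    f = np.array([count(graph) for count in motif_counters])
    return theta @ f
```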
                    {
                        "id": 64,
                        "string": "Features are typically counts of motifs -small subgraph structures -as described in the introduction."
                    },
                    {
                        "id": 65,
                        "string": "The vector θ is the parameter to estimate."
                    },
                    {
                        "id": 66,
                        "string": "In this section we discuss our adaptation of this model to the domain of semantic graphs, leveraging their idiosyncratic properties."
                    },
                    {
                        "id": 67,
                        "string": "Semantic graphs are composed of multiple relation types, which the feature space needs to accommodate; their nodes are linguistic constructs (semantic concepts) associated with complex interpretations, which can benefit the graph representation through incorporating their embeddings in R d into a new scoring model."
                    },
                    {
                        "id": 68,
                        "string": "We then present our M3GM framework to perform reliable and efficient parameter estimation on the new model."
                    },
                    {
                        "id": 69,
                        "string": "Graph Motifs as Features Based on common practice in ERGM feature extraction (e.g., Morris et al., 2008) , we select the following graph features as a basis: • Total edge count; • Number of cycles of length k, for k ∈ {2, 3}; • Number of nodes with exactly k outgoing (incoming) edges, for k ∈ {1, 2, 3}; • Number of nodes with at least k outgoing (incoming) edges, for k ∈ {1, 2, 3}; • Number of paths of length 2; • Transitivity: the proportion of length-2 paths u → v → w where an edge u → w also exists."
                    },
                    {
                        "id": 70,
                        "string": "Semantic graphs are multigraphs, where multiple relationships (hypernymy, meronymy, derivation, etc.)"
                    },
                    {
                        "id": 71,
                        "string": "are overlaid atop a common set of nodes."
                    },
                    {
                        "id": 72,
                        "string": "For each relation r in the relation inventory R, we denote its edge set as E r , and redefine E = r∈R E r , the union of all labeled edges."
                    },
                    {
                        "id": 73,
                        "string": "Some relations do not produce a connected graph, while others may coincide with each other frequently, possibly in regular but intricate patterns: for example, derivation relations tend to occur between synsets in the higher, more abstract levels of the hypernym graph."
                    },
                    {
                        "id": 74,
                        "string": "We represent this complexity by expanding the feature space to include relation-sensitive combinatory motifs."
                    },
                    {
                        "id": 75,
                        "string": "For each feature template from the basis list above, we extract features for all possible combinations of relation types existing in the graph."
                    },
                    {
                        "id": 76,
                        "string": "Depending on the feature type, these could be relation singletons, pairs, or triples; they may be order-sensitive or order-insensitive."
                    },
                    {
                        "id": 77,
                        "string": "For example: • A combinatory 'transitivity' feature will be extracted for the proportion of paths u hypernym − −−−−−→ v meronym − −−−−− → w where an edge u has part − −−−− → w also exists."
                    },
                    {
                        "id": 78,
                        "string": "• A combinatory '2-outgoing' feature will be extracted for the number of nodes with exactly one derivation and one has part."
                    },
                    {
                        "id": 79,
                        "string": "The number of features thus scales in O(|R| K ) for a feature basis which involves up to K edges in any feature, and so our 17 basis features (with K = 3) generate a combinatory feature set with roughly 3,000 features for the 11-relation version of WordNet used in our experiments (see Section 4.1)."
                    },
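A sketch of how such a combinatory feature space might be enumerated; the template names are hypothetical, and for simplicity every template is treated as order-sensitive, so the count here is an upper bound on the paper's roughly 3,000 features.

```python
from itertools import product

# Hypothetical basis templates mapped to the number K of edges each involves.
BASIS_TEMPLATES = {"edge_count": 1, "cycle_2": 2, "out_exactly_2": 2, "transitivity": 3}

def combinatory_features(relations, templates=BASIS_TEMPLATES):
    """Expand each basis template into one labeled feature per relation tuple."""
    features = []
    for name, k in templates.items():
        for combo in product(relations, repeat=k):  # O(|R|^k) variants per template
            features.append((name, combo))
    return features

# With 11 WordNet relations and K up to 3, this yields thousands of labeled motifs.
```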
                    {
                        "id": 80,
                        "string": "Local Score Component In classical ERGM application domains such as social media or biological networks, nodes tend to have little intrinsic distinction, or at least little meaningful intrinsic information that may be extracted prior to applying the model."
                    },
                    {
                        "id": 81,
                        "string": "In semantic graphs, however, the nodes represent synsets, which are associated with information that is both valuable to predicting the graph structure and approximable using unsupervised techniques such as embedding into a common d-dimensional vector space based on copious amounts of available data."
                    },
                    {
                        "id": 82,
                        "string": "We thus modify the traditional scoring function from eq."
                    },
                    {
                        "id": 83,
                        "string": "(1) to include node-specific information, by introducing a relation-specific association op- erator A (r) : V × V → R: ψ ERGM+ (G) = = exp   θ T f (G) + r∈R (s,t)∈Er A (r) (s, t)   ."
                    },
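A sketch of the combined score of eq. (2), separating the global motif term from the summed local association scores; the argument shapes and names are illustrative assumptions.

```python
import numpy as np

def log_psi_ergm_plus(feature_counts, theta, assoc, edges_by_relation):
    """log psi_ERGM+(G) = theta^T f(G) + sum_r sum_{(s,t) in E_r} A^(r)(s, t).

    `assoc[r]` is a per-relation association operator such as those
    described next (TransE, BiLin, DistMult).
    """
    global_term = theta @ np.asarray(feature_counts)
    local_term = sum(assoc[r](s, t)
                     for r, edges in edges_by_relation.items()
                     for (s, t) in edges)
    return global_term + local_term
```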
                    {
                        "id": 84,
                        "string": "(2) The association operator generalizes various models from the relation prediction literature: TransE (Bordes et al., 2013) embeds each relation r into a vector in the shared space, representing a 'difference' between sources and targets, to compute the association score under a translational objective, A (r) TRANSE (s, t) = − e s + e r − e t ."
                    },
                    {
                        "id": 85,
                        "string": "BiLin (Nickel et al., 2011) embeds relations into full-rank matrices, computing the score by a bilinear multiplication, A (r) BILIN (s, t) = e T s W r e t ."
                    },
                    {
                        "id": 86,
                        "string": "DistMult (Yang et al., 2014 ) is a special case of BiLin where the relation matrices are diagonal, reducing the computation to a ternary dot product, A (r) DISTMULT (s, t) = e s , e r , e t = d i=1 e s i e r i e t i ."
                    },
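The three association operators reduce to a few lines each; a sketch assuming NumPy vectors e_s, e_r, e_t and a relation matrix W_r.

```python
import numpy as np

def transe(e_s, e_r, e_t):
    # A_TransE(s, t) = -||e_s + e_r - e_t||: translational objective.
    return -np.linalg.norm(e_s + e_r - e_t)

def bilin(e_s, W_r, e_t):
    # A_BiLin(s, t) = e_s^T W_r e_t: full-rank bilinear form.
    return e_s @ W_r @ e_t

def distmult(e_s, e_r, e_t):
    # A_DistMult(s, t) = <e_s, e_r, e_t>: ternary dot product (diagonal W_r).
    return float(np.sum(e_s * e_r * e_t))
```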
                    {
                        "id": 87,
                        "string": "Parameter Estimation The probabilistic formulation of ERGM requires the computation of a normalization term that sums over all possible graphs with a given number of nodes, G N ."
                    },
                    {
                        "id": 88,
                        "string": "The set of such graphs grows at a rate that is super-exponential in the number of nodes, making exact computation intractable even for networks that are orders of magnitude smaller than semantic graphs like WordNet."
                    },
                    {
                        "id": 89,
                        "string": "One solution is to approximate probability using a variant of the Monte Carlo Maximum Likelihood Estimation (MCMLE) produce, log P (G) ≈ log ψ(G) − log |G |V | | M M G ∼G |V | ψ(G), (3) where M is the number of networksG sampled from G |V | , the space of all (multirelational) edge sets on nodes V ."
                    },
                    {
                        "id": 90,
                        "string": "EachG is referred to as a negative sample, and the goal of estimation is to assign low scores to these samples, in comparison with the score assigned to the observed network G. Network samples can be obtained using edgewise negative sampling."
                    },
                    {
                        "id": 91,
                        "string": "For each edge s r − → t in the training network G, we remove it temporarily and consider T alternative edges, keeping the source s and relation r constant, and sampling a targett from a proposal distribution Q."
                    },
                    {
                        "id": 92,
                        "string": "Every such substitution produces a new graphG, G =G ∪ {s r − →t} \\ {s r − → t}."
                    },
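A sketch of edgewise negative sampling as in eq. (4); the uniform draw is only a placeholder for the proposal distribution Q defined below.

```python
import random

def edgewise_negatives(edges, target_vocab, T):
    """For each edge (s, r, t), yield T corrupted edges (s, r, t_neg), t_neg != t.

    Each corrupted edge corresponds to a negative graph
    G~ = G u {s -r-> t_neg} \\ {s -r-> t}.
    """
    for (s, r, t) in edges:
        for _ in range(T):
            t_neg = random.choice(target_vocab)
            if t_neg != t:
                yield (s, r, t), (s, r, t_neg)
```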
                    {
                        "id": 93,
                        "string": "(4) Large-margin objective."
                    },
                    {
                        "id": 94,
                        "string": "Rather than approximating the log probability, as in MCMLE estimation, we propose a margin loss objective: the log score for each negative sampleG should be below the log score for G by a margin of at least 1."
                    },
                    {
                        "id": 95,
                        "string": "This motivates the hinge loss, L(Θ,G; G) = 1 − log ψ ERGM+ (G) + log ψ ERGM+ (G) + , (5) where (x) + = max(0, x)."
                    },
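The hinge loss of eq. (5) in one line, as a sketch over precomputed log-scores:

```python
def m3gm_hinge_loss(log_psi_true, log_psi_negative):
    """Eq. (5): the observed graph must outscore each negative sample
    by a log-space margin of at least 1."""
    return max(0.0, 1.0 - log_psi_true + log_psi_negative)
```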
                    {
                        "id": 96,
                        "string": "Recall that the scoring function ψ ERGM+ includes both the local association score for the alternative edge and the global graph features for the resulting graph."
                    },
                    {
                        "id": 97,
                        "string": "However, it is not necessary to recompute all association scores; we need only subtract the association score for the deleted edge s r − → t, and add the association score for the sampled edge s r − →t."
                    },
                    {
                        "id": 98,
                        "string": "The overall loss function is the sum over N = |E|×T negative samples, {G (i) } N i=1 , plus an L 2 regularizer on the model parameters, L(Θ; G) = λ||Θ|| 2 2 + N i=1 L(Θ,G (i) )."
                    },
                    {
                        "id": 99,
                        "string": "(6) Proposal distribution."
                    },
                    {
                        "id": 100,
                        "string": "The proposal distribution Q used to sample negative edges is defined to be proportional to the local association scores of edges not present in the training graph: Q(t | s, r, G) ∝ 0 s r − →t ∈ G A (r) (s,t) s r − →t / ∈ G ."
                    },
                    {
                        "id": 101,
                        "string": "(7) By preferring edges that have high association scores, the negative sampler helps push the M3GM parameters away from likely false positives."
                    },
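A sketch of sampling from Q in eq. (7). Because raw association scores may be negative (e.g. under TransE), this sketch normalizes them with a softmax, which is an assumption; the paper only specifies proportionality for absent edges and zero mass for present ones.

```python
import numpy as np

def sample_negative_target(s, r, assoc, nodes, existing_targets):
    """Draw t~ with zero probability on edges already in G and probability
    increasing with the local association score A^(r)(s, t~) otherwise."""
    scores = np.array([assoc(s, t) if t not in existing_targets else -np.inf
                       for t in nodes], dtype=float)
    probs = np.exp(scores - scores[np.isfinite(scores)].max())  # exp(-inf) = 0
    probs /= probs.sum()
    return nodes[np.random.choice(len(nodes), p=probs)]
```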
                    {
                        "id": 102,
                        "string": "Relation Prediction We evaluate M3GM on the relation graph edge prediction task."
                    },
                    {
                        "id": 103,
                        "string": "3 Data for this task consists of a set of labeled edges, i.e."
                    },
                    {
                        "id": 104,
                        "string": "tuples of the form (s, r, t), where s and t denote source and target entities, respectively."
                    },
                    {
                        "id": 105,
                        "string": "Given an edge from an evaluation set, two prediction instances are created by hiding the source and target side, in turn."
                    },
                    {
                        "id": 106,
                        "string": "The predictor is then evaluated on its ability to predict the hidden entity, given the other entity and the relation type."
                    },
                    {
                        "id": 107,
                        "string": "4 WN18RR Dataset A popular relation prediction dataset for WordNet is the subset curated as WN18 (Bordes et al., 2013 (Bordes et al., , 2014 , containing 18 relations for about 41,000 synsets extracted from WordNet 3.0."
                    },
                    {
                        "id": 108,
                        "string": "It has been noted that this dataset suffers from considerable leakage: edges from reciprocal relations such as hypernym / hyponym appear in one direction in the training set and in the opposite direction in dev / test (Socher et al., 2013; Dettmers et al., 2018) ."
                    },
                    {
                        "id": 109,
                        "string": "This allows trivial rule-based baselines to achieve high performance."
                    },
                    {
                        "id": 110,
                        "string": "To alleviate this concern, Dettmers et al."
                    },
                    {
                        "id": 111,
                        "string": "(2018) released the WN18RR set, removing seven relations altogether."
                    },
                    {
                        "id": 112,
                        "string": "However, even this dataset retains four symmetric relation types: also see, derivationally related form, similar to, and verb group."
                    },
                    {
                        "id": 113,
                        "string": "These symmetric relations can be exploited by defaulting to a simple rulebased predictor."
                    },
                    {
                        "id": 114,
                        "string": "Metrics We report the following metrics, common in ranking tasks and in relation prediction in particular: MR, the Mean Rank of the desired entity; MRR, Mean Reciprocal Rank, the main evaluation metric; and H@k, the proportion of Hits (true entities) found in the top k of the lists, for k ∈ {1, 10}."
                    },
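For reference, a sketch computing these metrics from the 1-based rank of each true entity:

```python
def ranking_metrics(ranks):
    """MR, MRR, and Hits@k from 1-based ranks of the desired entities."""
    n = float(len(ranks))
    return {
        "MR": sum(ranks) / n,
        "MRR": sum(1.0 / r for r in ranks) / n,
        "H@1": sum(1 for r in ranks if r <= 1) / n,
        "H@10": sum(1 for r in ranks if r <= 10) / n,
    }
```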
                    {
                        "id": 115,
                        "string": "Unlike some prior work, we do not type-restrict the possible relation predictions (so, e.g., a verb group link may select a noun, and that would count against the model)."
                    },
                    {
                        "id": 116,
                        "string": "Systems We evaluate a single-rule baseline, three association models, and two variants of the M3GM re-ranker trained on top of the best-performing association baseline."
                    },
                    {
                        "id": 117,
                        "string": "RULE We include a single-rule baseline that predicts a relation between s and t in the evaluation set if the same relation was encountered between t and s in the training set."
                    },
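A sketch of the single-rule baseline, assuming edges are (source, relation, target) triples:

```python
def make_rule_baseline(train_edges):
    """Predict relation r between s and t iff (t, r, s) was seen in training."""
    seen = set(train_edges)
    return lambda s, r, t: (t, r, s) in seen
```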
                    {
                        "id": 118,
                        "string": "All other models revert to this baseline for the four symmetric relations."
                    },
                    {
                        "id": 119,
                        "string": "Association Models The next group of systems compute local scores for entity-relation triplets."
                    },
                    {
                        "id": 120,
                        "string": "They all encode entities into embeddings e. Each of these systems, in addition to being evaluated as a baseline, is also used for computing association scores in M3GM, both in the proposal distribution (see Section 3.3) and for creating lists to be re-ranked (see below): TRANSE, BILIN, DISTMULT."
                    },
                    {
                        "id": 121,
                        "string": "For detailed descriptions, see Section 3.2."
                    },
                    {
                        "id": 122,
                        "string": "Max-Margin Markov Graph Model The M3GM is applied as a re-ranker."
                    },
                    {
                        "id": 123,
                        "string": "For each relation and source (target), the top K candidate targets (sources) are retrieved based on the local association scores."
                    },
                    {
                        "id": 124,
                        "string": "Each candidate edge is introduced into the graph, and the score ψ ERGM+ (G) is used to re-rank the top-K list."
                    },
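A sketch of the re-ranking step, including the per-relation weighting described just below; `graph_score` would evaluate the global score with the candidate edge inserted, and all names here are illustrative.

```python
def rerank(candidates, assoc_score, graph_score, alpha=0.5):
    """Re-rank the top-K candidate edges by a convex combination of the
    global graph score (weight alpha) and the local association score
    (weight 1 - alpha); alpha can be tuned per relation on dev MRR."""
    scored = [(alpha * graph_score(c) + (1.0 - alpha) * assoc_score(c), c)
              for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]
```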
                    {
                        "id": 125,
                        "string": "We add a variant to this protocol where the graph score and association score are weighted by α and 1 − α, repsectively, before being summed."
                    },
                    {
                        "id": 126,
                        "string": "We tune a separate α r for each relation type, using the development set's mean reciprocal rank (MRR)."
                    },
                    {
                        "id": 127,
                        "string": "These hyperparameter values offer further insight into where the M3GM signal benefits relation prediction most (see Section 6)."
                    },
                    {
                        "id": 128,
                        "string": "Since we do not apply the model to the symmetric relations (scored by the RULE baseline), they are excluded from the sampling protocol described in eq."
                    },
                    {
                        "id": 129,
                        "string": "(5), although their edges do contribute to the combinatory graph feature vector f ."
                    },
                    {
                        "id": 130,
                        "string": "Our default setting backpropagates loss into only the graph weight vector θ."
                    },
                    {
                        "id": 131,
                        "string": "We experiment with a model variant which backpropagates into the association model and synset embeddings as well."
                    },
                    {
                        "id": 132,
                        "string": "Synset Embeddings For the association component of our model, we require embedding representations for WordNet synsets."
                    },
                    {
                        "id": 133,
                        "string": "While unsupervised word embedding techniques go a long way in representing wordforms (Collobert et al., 2011; Mikolov et al., 2013; Pennington et al., 2014) , they are not immediately applicable to the semantically-precise domain of synsets."
                    },
                    {
                        "id": 134,
                        "string": "We explore two methods of transforming pre-trained word embeddings into synset embeddings."
                    },
                    {
                        "id": 135,
                        "string": "Averaging."
                    },
                    {
                        "id": 136,
                        "string": "A straightforward way of using word embeddings to create synset embeddings is to collect the words representing the synset as surface form within the WordNet dataset and average their embeddings (Socher et al., 2013) ."
                    },
                    {
                        "id": 137,
                        "string": "We apply this method to pre-trained GloVe embeddings (Pennington et al., 2014) and pre-trained FastText embeddings (Bojanowski et al., 2017) , averaging over the set of all wordforms in all lemmas for each synset, and performing a caseinsensitive query on the embedding dictionary."
                    },
                    {
                        "id": 138,
                        "string": "For example, the synset 'determine.v.01' lists the following lemmas: 'determine', 'find', 'find out', 'ascertain'."
                    },
                    {
                        "id": 139,
                        "string": "Its vector is initialized as 1 5 (e determine + 2 · e f ind + e out + e ascertain )."
                    },
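A sketch of the averaging scheme, matching the 'determine.v.01' example above (note that 'find' is counted twice because it appears both alone and inside 'find out'); the fallback to a random vector for uncovered synsets follows the paper's description.

```python
import numpy as np

def average_synset_embedding(lemmas, word_vectors, dim=300):
    """Average case-insensitive word embeddings over all tokens of all lemmas."""
    tokens = [tok.lower() for lemma in lemmas for tok in lemma.split()]
    vecs = [word_vectors[tok] for tok in tokens if tok in word_vectors]
    if not vecs:  # ~1.3% of synsets have no covered token and are randomly initialized
        return np.random.randn(dim)
    return np.mean(vecs, axis=0)
```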
                    {
                        "id": 140,
                        "string": "AutoExtend retrofitting + Mimick."
                    },
                    {
                        "id": 141,
                        "string": "AutoExtend is a method developed specifically for embedding WordNet synsets (Rothe and Schütze, 2015) , in which pre-trained word embeddings are retrofitted to the tripartite relation graph connecting wordforms, lemmas, and synsets."
                    },
                    {
                        "id": 142,
                        "string": "The resulting synset embeddings occupy the same space as the word embeddings."
                    },
                    {
                        "id": 143,
                        "string": "However, some Word-Net senses are not represented in the underlying set of pre-trained word embeddings."
                    },
                    {
                        "id": 144,
                        "string": "5 To handle these cases, we trained a character-based model called MIMICK, which learns to predict embeddings for out-of-vocabulary items based on their spellings (Pinter et al., 2017) ."
                    },
                    {
                        "id": 145,
                        "string": "We do not modify the spelling conventions of WordNet synsets before passing them to Mimick, so e.g."
                    },
                    {
                        "id": 146,
                        "string": "'mask.n.02' (the second synset corresponding to 'mask' as a noun) acts as the input character sequence as is."
                    },
                    {
                        "id": 147,
                        "string": "Random initialization."
                    },
                    {
                        "id": 148,
                        "string": "In preliminary experiments, we attempted training the association models using randomly-initialized embeddings."
                    },
                    {
                        "id": 149,
                        "string": "These proved to be substantially weaker than distributionally-informed embeddings and we do not report their performance in the results section."
                    },
                    {
                        "id": 150,
                        "string": "We view this finding as strong evidence to support the necessity of a distributional signal in a typelevel semantic setup."
                    },
                    {
                        "id": 151,
                        "string": "Setup Following tuning experiments, we train the association models on synset embeddings with d = 300, using a negative log-likelihood loss function over 10 negative samples and iterating over symmetric relations once every five epochs."
                    },
                    {
                        "id": 152,
                        "string": "We optimize the loss using AdaGrad with η = 0.01, and perform early stopping based on the development set mean reciprocal rank."
                    },
                    {
                        "id": 153,
                        "string": "M3GM is trained in four epochs using AdaGrad with η = 0.1."
                    },
                    {
                        "id": 154,
                        "string": "We set M3GM's rerank list size K = 100 and, following tuning, the regularization parameter λ = 0.01 and negative sample count per edge T = 10."
                    },
                    {
                        "id": 155,
                        "string": "Our models are all implemented in DyNet (Neubig et al., 2017) ."
                    },
                    {
                        "id": 156,
                        "string": "Table 1 presents the results on the development set."
                    },
                    {
                        "id": 157,
                        "string": "Lines 1-3 depict the results for local models using averaged FastText embedding initialization, showing that the best performance in terms of MRR and top-rank hits is achieved by TRANSE."
                    },
                    {
                        "id": 158,
                        "string": "Mean Rank does not align with the other metrics; this is an interpretable tradeoff, as both BILIN and DISTMULT have an inherent preference for correlated synset embeddings, giving a stronger fallback for cases where the relation embedding is completely off, but allowing less freedom for separating strong cases from correlated false positives, compared to a translational objective."
                    },
                    {
                        "id": 159,
                        "string": "Results Effect of global score."
                    },
                    {
                        "id": 160,
                        "string": "There is a clear advantage to re-ranking the top local candidates using the score signal from the M3GM model (line 4)."
                    },
                    {
                        "id": 161,
                        "string": "These results are further improved when the graph score is weighted against the association component per relation (line 5)."
                    },
                    {
                        "id": 162,
                        "string": "We obtain similar improvements when re-ranking the predictions from DISTMULT and BILIN."
                    },
                    {
                        "id": 163,
                        "string": "Dettmers et al."
                    },
                    {
                        "id": 164,
                        "string": "(2018) ."
                    },
                    {
                        "id": 165,
                        "string": "The M3GM training procedure is not useful in fine-tuning the association model via backpropagation: this degrades the association scores for true edges in the evaluation set, dragging the reranked results along with them to about a 2-point drop relative to the untuned variant."
                    },
                    {
                        "id": 166,
                        "string": "Table 2 shows that our main results transfer onto the test set, with even a slightly larger margin."
                    },
                    {
                        "id": 167,
                        "string": "This could be the result of the greater edge density of the combined training and dev graphs, which enhance the global coherence of the graph structure captured by M3GM features."
                    },
                    {
                        "id": 168,
                        "string": "To support this theory, we tested the M3GM model trained on only the training set, and its test set performance was roughly one point worse on all metrics, as compared with the model trained on the training+dev data."
                    },
                    {
                        "id": 169,
                        "string": "Synset embedding initialization."
                    },
                    {
                        "id": 170,
                        "string": "We trained association models initialized on AutoEx-tend+Mimick vectors (see Section 4.4)."
                    },
                    {
                        "id": 171,
                        "string": "Their performance, inferior to averaged FastText vectors by about 1-2 MRR points on the dev set, is somewhat at odds with findings from previous experiments on WordNet (Guu et al., 2015) ."
                    },
                    {
                        "id": 172,
                        "string": "We believe the decisive factor in our result is the size of the training corpus used to create FastText embeddings, along with the increase in resulting vocabulary coverage."
                    },
                    {
                        "id": 173,
                        "string": "Out of 124,819 lemma tokens participating in 41,105 synsets, 118,051 had embeddings available (94.6%; type-level coverage 88.1%)."
                    },
                    {
                        "id": 174,
                        "string": "Only 530 synsets (1.3%) finished this initialization process with no embedding and were assigned random vectors."
                    },
                    {
                        "id": 175,
                        "string": "AutoExtend, fit for embeddings from Mikolov et al."
                    },
                    {
                        "id": 176,
                        "string": "(2013) which were trained on a smaller corpus, offers a weaker signal: 13,377 synsets (32%) had no vector and needed Mimick initialization."
                    },
                    {
                        "id": 177,
                        "string": "Positive 1 s member meronym − −−−−−−−−−−− → t 2 s has part −−−−−→ t 3 s hypernym −−−−−−→ t derivationally related f orm − −−−−−−−−−−−−−−−−−→ u Negative 4 s hypernym −−−−−−→ t 5 s hypernym ← −−−−−− → t 6 s member meronym − −−−−−−−−−−− → t instance hypernym − −−−−−−−−−−−− → u 7 s1 has part −−−−−→ t verb group ← −−−−−− − s2 Graph Analysis As a consequence of the empirical experiment, we aim to find out what M3GM has learned about WordNet."
                    },
                    {
                        "id": 178,
                        "string": "Table 3 presents a sample of topweighted motifs."
                    },
                    {
                        "id": 179,
                        "string": "Lines 1 and 2 demonstrate that the model prefers a broad scattering of targets for the member meronym and has part relations 6 , which are flat and top-downwards hierarchical, respectively, while line 4 shows that a multitude of unique hypernyms is undesired, as expected from a bottom-upwards hierarchical relation."
                    },
                    {
                        "id": 180,
                        "string": "Line 5 enforces the asymmetry of the hypernym relation."
                    },
                    {
                        "id": 181,
                        "string": "Lines 3, 6, and 7 hint at deeper interactions between the different relation types."
                    },
                    {
                        "id": 182,
                        "string": "Line 3 shows that the model assigns positive weights to hypernyms which have derivationally-related forms, suggesting that the derivational equivalence classes in the graph tend to exist in the higher, more abstract levels of the hypernym hierarchy, as noted in Section 3.1."
                    },
                    {
                        "id": 183,
                        "string": "Line 6 captures a semantic conflict: synsets located in the lower, specific levels of the graph can be specified either as instances of abstract concepts 7 , or as members of less specific concrete classes, but not as both."
                    },
                    {
                        "id": 184,
                        "string": "Line 7 may have captured a nodal property -since part of is a relation which holds between nouns, and verb group holds between verbs, this negative weight assignment may be the manifestation of a part-of-speech uniqueness constraint."
                    },
                    {
                        "id": 185,
                        "string": "In addition, in features 3 and 7 we see the importance of symmetric relations (here derivationally related form 6 Example edges: 'America' → 'American', 'face' → 'mouth', respectively."
                    },
                    {
                        "id": 186,
                        "string": "7 Example instance hypernym edge: 'Rome' → 'national capital'."
                    },
                    {
                        "id": 187,
                        "string": "and verb group, respectively), which manage to be represented in the graph model despite not being directly trained on."
                    },
                    {
                        "id": 188,
                        "string": "Table 4 presents examples of relation targets successfully re-ranked thanks to these features."
                    },
                    {
                        "id": 189,
                        "string": "The first false connection created a new unique hypernym, 'garden lettuce', downgraded by the graph score through incrementing the count of negatively-weighted feature 4."
                    },
                    {
                        "id": 190,
                        "string": "In the second case, 'vienna' was brought from rank 10 to rank 1 since it incremented the count for the positivelyweighted feature 2, whereas all targets ranked above it by the local model were already has parts, mostly of 'europe'."
                    },
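                    {
                        "id": "note-190",
                        "string": "Illustrative sketch (editor addition, not the paper's implementation) of the re-ranking mechanism behind these examples: the graph score is a weighted sum of motif counts, so a candidate edge can be scored by the change in counts it induces (e.g., creating a new unique hypernym increments the negatively-weighted feature 4). The count_motifs function and the set-of-triples graph representation are assumptions.",
                        "code": [
                            "def graph_score(motif_counts, weights):",
                            "    # weighted sum of motif counts over the whole graph",
                            "    return sum(weights[m] * c for m, c in motif_counts.items())",
                            "",
                            "def delta_graph_score(edges, candidate, count_motifs, weights):",
                            "    # score change from adding one candidate (source, relation, target);",
                            "    # edges is a set of triples, count_motifs a hypothetical motif counter",
                            "    before = graph_score(count_motifs(edges), weights)",
                            "    after = graph_score(count_motifs(edges | {candidate}), weights)",
                            "    return after - before"
                        ]
                    },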
                    {
                        "id": 191,
                        "string": "The α r values weighing the importance of M3GM scores in the overall function, found per relation through grid search over the development set, are presented in Table 5 ."
                    },
                    {
                        "id": 192,
                        "string": "It appears that for all but two relations, the best-performing model preferred the signal from the graph features to that from the association model (α r > 0.5)."
                    },
                    {
                        "id": 193,
                        "string": "Based on the surface properties of the different relation graphs, the decisive factor seems to be that synset domain topic of and has part pertain mostly to very common concepts, offering good local signal from the synset embeddings, whereas the rest include many long-tail, low-frequency synsets that require help from global features to detect regularity."
                    },
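                    {
                        "id": "note-193",
                        "string": "Illustrative sketch (editor addition) of the per-relation tuning, assuming a convex interpolation of the graph and association scores; the paper states only that α_r weighs the M3GM score against the local score and is found by grid search over the dev set, and mrr_fn is a hypothetical dev-set MRR routine.",
                        "code": [
                            "import numpy as np",
                            "",
                            "def combined_score(assoc, graph, alpha_r):",
                            "    # assumed convex interpolation, tuned separately for each relation r",
                            "    return alpha_r * graph + (1.0 - alpha_r) * assoc",
                            "",
                            "def grid_search_alpha(dev_examples, mrr_fn, grid=np.linspace(0.0, 1.0, 11)):",
                            "    # pick the alpha_r that maximizes dev-set MRR (mrr_fn is hypothetical)",
                            "    return max(grid, key=lambda a: mrr_fn(dev_examples, a))"
                        ]
                    },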
                    {
                        "id": 194,
                        "string": "Conclusion This paper presents a novel method for reasoning about semantic graphs like WordNet, combining the distributional coherence between individual entity pairs with the structural coherence of network motifs."
                    },
                    {
                        "id": 195,
                        "string": "Applied as a re-ranker, this method substantially improves performance on link prediction."
                    },
                    {
                        "id": 196,
                        "string": "Our analysis of results from Table 3 , lines 6 and 7, suggests that adding graph motifs which qualify their adjacent nodes in terms of syntactic function or semantic category may prove useful."
                    },
                    {
                        "id": 197,
                        "string": "From a broader perspective, M3GM can do more as a probabilistic model than predict individual edges."
                    },
                    {
                        "id": 198,
                        "string": "For example, consider the problem of linking a new entity into a semantic graph, given only the vector embedding."
                    },
                    {
                        "id": 199,
                        "string": "This task involves adding multiple edges simultaneously, while maintaining structural coherence."
                    },
                    {
                        "id": 200,
                        "string": "Our model is capable of scoring bundles of new edges, and in future work, we plan to explore the possibility of combining M3GM with a search algorithm, to automatically extend existing knowledge graphs by linking in one or more new entities."
                    },
                    {
                        "id": 201,
                        "string": "We also plan to explore multilingual applications."
                    },
                    {
                        "id": 202,
                        "string": "To some extent, the structural parameters estimated by M3GM are not specific to English: for example, hypernymy cannot be symmetric in any language."
                    },
                    {
                        "id": 203,
                        "string": "If the structural parameters estimated from English WordNet are transferable to other languages, then the combination of M3GM and multilingual word embeddings could facilitate the creation and extension of large-scale semantic resources across many languages (Fellbaum and Vossen, 2012; Bond and Foster, 2013; Lafourcade, 2007) ."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 29
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 30,
                        "end": 60
                    },
                    {
                        "section": "Max-Margin Markov Graph Models",
                        "n": "3",
                        "start": 61,
                        "end": 68
                    },
                    {
                        "section": "Graph Motifs as Features",
                        "n": "3.1",
                        "start": 69,
                        "end": 79
                    },
                    {
                        "section": "Local Score Component",
                        "n": "3.2",
                        "start": 80,
                        "end": 86
                    },
                    {
                        "section": "Parameter Estimation",
                        "n": "3.3",
                        "start": 87,
                        "end": 101
                    },
                    {
                        "section": "Relation Prediction",
                        "n": "4",
                        "start": 102,
                        "end": 106
                    },
                    {
                        "section": "WN18RR Dataset",
                        "n": "4.1",
                        "start": 107,
                        "end": 112
                    },
                    {
                        "section": "Metrics",
                        "n": "4.2",
                        "start": 113,
                        "end": 115
                    },
                    {
                        "section": "Systems",
                        "n": "4.3",
                        "start": 116,
                        "end": 116
                    },
                    {
                        "section": "RULE",
                        "n": "4.3.1",
                        "start": 117,
                        "end": 118
                    },
                    {
                        "section": "Association Models",
                        "n": "4.3.2",
                        "start": 119,
                        "end": 121
                    },
                    {
                        "section": "Max-Margin Markov Graph Model",
                        "n": "4.3.3",
                        "start": 122,
                        "end": 131
                    },
                    {
                        "section": "Synset Embeddings",
                        "n": "4.4",
                        "start": 132,
                        "end": 150
                    },
                    {
                        "section": "Setup",
                        "n": "4.5",
                        "start": 151,
                        "end": 158
                    },
                    {
                        "section": "Results",
                        "n": "5",
                        "start": 159,
                        "end": 177
                    },
                    {
                        "section": "Graph Analysis",
                        "n": "6",
                        "start": 178,
                        "end": 193
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 194,
                        "end": 203
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1261-Table4-1.png",
                        "caption": "Table 4: Successful M3GM re-ranking examples.",
                        "page": 7,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 62.4,
                            "y2": 138.24
                        }
                    },
                    {
                        "filename": "../figure/image/1261-Table3-1.png",
                        "caption": "Table 3: Select heavyweight features (motifs) following best dev set training using M3GM. Circled nodes count towards the motif.",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 297.12,
                            "y1": 62.879999999999995,
                            "y2": 213.12
                        }
                    },
                    {
                        "filename": "../figure/image/1261-Table5-1.png",
                        "caption": "Table 5: Graph score weights found for relations on the dev set. Zero means graph score is not considered at all for this relation, one means only it is considered.",
                        "page": 7,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 528.0,
                            "y1": 197.76,
                            "y2": 260.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1261-Table1-1.png",
                        "caption": "Table 1: Results on development set (all metrics except MR are x100). M3GM lines use TRANSE as their association model. In M3GMαr , the graph component is tuned post-hoc against the local component per relation.",
                        "page": 6,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 283.2,
                            "y1": 62.4,
                            "y2": 156.96
                        }
                    },
                    {
                        "filename": "../figure/image/1261-Table2-1.png",
                        "caption": "Table 2: Main results on test set. † These models were not re-implemented, and are reported as in Nguyen et al. (2018) and in Dettmers et al. (2018).",
                        "page": 6,
                        "bbox": {
                            "x1": 319.68,
                            "x2": 511.2,
                            "y1": 62.4,
                            "y2": 166.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1261-Figure1-1.png",
                        "caption": "Figure 1: Probable (a) and improbable (b-c) structures in a hypothetical hypernym graph.",
                        "page": 0,
                        "bbox": {
                            "x1": 336.0,
                            "x2": 494.4,
                            "y1": 247.67999999999998,
                            "y2": 526.0799999999999
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-61"
        },
        {
            "slides": {
                "0": {
                    "title": "Geolocation Prediction",
                    "text": [
                        "Predict a loca8on of a person",
                        "My house is at",
                        "Vancouver. Vancouver geoloca8on predic8on city name an SNS message"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Our Approach",
                    "text": [
                        "Geoloca8on predic8on with neural networks",
                        "AttentionM\u0001 AttentionL\u0001 AttentionD\u0001 AttentionN\u0001 Timezone Embedding\u0001",
                        "City User W ord Em bedding\u0001 Embedding\u0001 Embedding\u0001",
                        "Inpu ts m essages, metadata, and u ser network",
                        "Atte ntionM\u0001 Text processes with AttentionL\u0001 AttentionD\u0001 AttentionN\u0001",
                        "R NN+APen8on Timezone Embedding\u0001",
                        "AttentionU\u0001 with APen 8on",
                        "TEXT\u0001 Quite a comp lex"
                    ],
                    "page_nums": [
                        2,
                        3,
                        4,
                        5,
                        6
                    ],
                    "images": [
                        "figure/image/1264-Figure1-1.png"
                    ]
                },
                "2": {
                    "title": "Geolocation Prediction Target",
                    "text": [
                        "A popular target in previous works (Cheng et al.,",
                        "Ground-truth loca8ons with geotags"
                    ],
                    "page_nums": [
                        7,
                        8
                    ],
                    "images": []
                },
                "3": {
                    "title": "Metadata",
                    "text": [
                        "Descrip8on, loca8on, 8mezone, etc.",
                        "State-of-the-art performances combined with texts",
                        "Descrip0on I work as a researcher at XXX",
                        "Loca0on I live in Canada.",
                        "TwiPer user Timezone America/Vancouver"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "4": {
                    "title": "User Network",
                    "text": [
                        "Men8on network, friend network, etc.",
                        "Predic8on with label propaga8on",
                        "State-of-the-art performances combined with texts",
                        "Men8on user 1 Men8on user 2"
                    ],
                    "page_nums": [
                        10,
                        11
                    ],
                    "images": []
                },
                "5": {
                    "title": "Model",
                    "text": [
                        "AttentionM\u0001 AttentionL\u0001 AttentionD\u0001 AttentionN\u0001 Timezone Embedding\u0001",
                        "City User Word Em bedding\u0001 Embedding\u0001 Embedding\u0001",
                        "M essages Metadata User network"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": [
                        "figure/image/1264-Figure1-1.png"
                    ]
                },
                "8": {
                    "title": "TEXT and META Component",
                    "text": [
                        "AttentionM\u0001 AttentionL\u0001 AttentionD\u0001 AttentionN\u0001 Timezone Text processes Embedding\u0001",
                        "(singl e te xt) Embedding RNNM\u0001 RNNL\u0001 RNND\u0001",
                        "City User Word Em bedding\u0001 Embedding\u0001 Embedding\u0001",
                        "M essages Metadata (loca8on, d escrip8on mezone)"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": [
                        "figure/image/1264-Figure1-1.png"
                    ]
                },
                "9": {
                    "title": "USERNET Component 1",
                    "text": [
                        "AttentionM\u0001 AttentionL\u0001 AttentionD\u0001 AttentionN\u0001 Timezone Embedding\u0001",
                        "City User Word Em bedding\u0001 Embedding\u0001 Embedding\u0001"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": [
                        "figure/image/1264-Figure1-1.png"
                    ]
                },
                "10": {
                    "title": "USERNET Component 2",
                    "text": [
                        "Embedding for each user",
                        "Embedding for each ground-truth city of a user City User",
                        "unknown for unavailable cases",
                        "APen8on over N users",
                        "linked user 1\u0001 User current Network\u0001 linked user\u0001 user N\u0001"
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "11": {
                    "title": "Unified Processes",
                    "text": [
                        "AttentionTL\u0001 USERNET\u0001 User network",
                        "AttentionM\u0001 AttentionL\u0001 AttentionD\u0001 Timezone Embedding\u0001",
                        "City User Word Em bedding\u0001 Embedding\u0001 Embedding\u0001"
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": [
                        "figure/image/1264-Figure1-1.png"
                    ]
                },
                "12": {
                    "title": "Data 1",
                    "text": [
                        "Uni-direc8onal men8on network (Rahimi et al.,",
                        "Dataset users + one-hop users",
                        "Set undirected edges for men8ons",
                        "Bob set edge Bob"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "13": {
                    "title": "Data 2",
                    "text": [
                        "* Restricted edges to sa8sfy one of the following condi8ons:",
                        "Both users have ground truth loca8ons.",
                        "One user has a ground truth loca8on and another user is men8oned 5 8mes or more."
                    ],
                    "page_nums": [
                        20
                    ],
                    "images": [
                        "figure/image/1264-Table1-1.png"
                    ]
                },
                "14": {
                    "title": "Baselines and Sub models",
                    "text": [
                        "LR X logistic (Rahimi regression, et al. 2015a) k-d tree",
                        "MADCEL-B-LR X X logistic regression, k-d tree,",
                        "LR-STACK X X X logistic regression, k-d tree, stacking, Modified Adsorption",
                        "SUB-NN-UNET X X TEXT, USERNET",
                        "SUB-NN-META X X TEXT&META",
                        "Proposed Model X X X Full model"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                },
                "16": {
                    "title": "Results TwiPerUS",
                    "text": [
                        "accuracy@161 * significant improvement with 5% significance level"
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": [
                        "figure/image/1264-Table2-1.png"
                    ]
                },
                "17": {
                    "title": "Results W NUT",
                    "text": [
                        "Baselines (reported)\u0001 Jayasinghe et al. (2016)",
                        "accuracy@161 ** significant improvement with 1% significance level"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": [
                        "figure/image/1264-Table4-1.png"
                    ]
                },
                "18": {
                    "title": "Analysis of Attention layers 1",
                    "text": [
                        "Analysis of APen8on layers (1)",
                        "TwiPerUS pro bability FCUN\u0001 densit y func8ons AttentionUN\u0001 FCU\u0001",
                        "AttentionTL\u0001 Timeline & USERNET\u0001",
                        "Atte ntionM\u0001 AttentionL\u0001 AttentionD\u0001 AttentionN\u0001 Timezo ne description\u0001 timezone\u0001 Metadata Embedd ing\u0001 RNNM\u0001 RNNL\u0001 RNND\u0001 City User W ord Em bedding\u0001 Embedding\u0001 Embedding\u0001"
                    ],
                    "page_nums": [
                        25,
                        26
                    ],
                    "images": [
                        "figure/image/1264-Figure1-1.png",
                        "figure/image/1264-Figure5-1.png",
                        "figure/image/1264-Figure4-1.png"
                    ]
                },
                "19": {
                    "title": "Analysis of Attention layers 2",
                    "text": [
                        "Analysis of APen8on layers (2)",
                        "probability FCUN\u0001 density func8on",
                        "Attention TL\u0001 User&UserNetw oUSrEkRNET\u0001",
                        "AttentionM\u0001 AttentionL\u0001 AttentionD\u0001 AttentionN\u0001 Timezone Embedding\u0001 RNNM\u0001 RNNL\u0001 RNND\u0001",
                        "City User W ord Em bedding\u0001 Embedding\u0001 Embedding\u0001"
                    ],
                    "page_nums": [
                        27,
                        28
                    ],
                    "images": [
                        "figure/image/1264-Figure1-1.png",
                        "figure/image/1264-Figure5-1.png",
                        "figure/image/1264-Table7-1.png"
                    ]
                },
                "21": {
                    "title": "Future Works",
                    "text": [
                        "An extension of the proposed model",
                        "Introduc8on of temporal state",
                        "Capture loca8on changes like travel",
                        "Applica8on to different tasks",
                        "For example, gender analysis or age analysis",
                        "Some metadata may not be effec8ve"
                    ],
                    "page_nums": [
                        30
                    ],
                    "images": []
                },
                "24": {
                    "title": "Model Configuration 1",
                    "text": [
                        "Attention context vector size\u0001",
                        "Max tweet number per user\u0001"
                    ],
                    "page_nums": [
                        47
                    ],
                    "images": [
                        "figure/image/1264-Table5-1.png"
                    ]
                },
                "26": {
                    "title": "Training Time",
                    "text": [
                        "GeForce GTX Titan X"
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": []
                }
            },
            "paper_title": "Unifying Text, Metadata, and User Network Representations with a Neural Network for Geolocation Prediction",
            "paper_id": "1264",
            "paper": {
                "title": "Unifying Text, Metadata, and User Network Representations with a Neural Network for Geolocation Prediction",
                "abstract": "We propose a novel geolocation prediction model using a complex neural network. Our model unifies text, metadata, and user network representations with an attention mechanism to overcome previous ensemble approaches. In an evaluation using two open datasets, the proposed model exhibited a maximum 3.8% increase in accuracy and a maximum of 6.6% increase in ac-curacy@161 against previous models. We further analyzed several intermediate layers of our model, which revealed that their states capture some statistical characteristics of the datasets.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Social media sites have become a popular source of information to analyze current opinions of numerous people."
                    },
                    {
                        "id": 1,
                        "string": "Many researchers have worked to realize various automated analytical methods for social media because manual analysis of such vast amounts of data is difficult."
                    },
                    {
                        "id": 2,
                        "string": "Geolocation prediction is one such analytical method that has been studied widely to predict a user location or a document location."
                    },
                    {
                        "id": 3,
                        "string": "Location information is crucially important information for analyses such as disaster analysis (Sakaki et al., 2010) , disease analysis (Culotta, 2010) , and political analysis (Tumasjan et al., 2010) ."
                    },
                    {
                        "id": 4,
                        "string": "Such information is also useful for analyses such as sentiment analysis (Martínez-Cámara et al., 2014) and user attribute analysis (Rao et al., 2010) to undertake detailed region-specific analyses."
                    },
                    {
                        "id": 5,
                        "string": "Geolocation prediction has been performed for Wikipedia (Overell, 2009 ), Flickr (Serdyukov et al., 2009; Crandall et al., 2009 ), Facebook (Backstrom et al., 2010) , and Twitter (Cheng et al., 2010; Eisenstein et al., 2010) ."
                    },
                    {
                        "id": 6,
                        "string": "Among these sources, Twitter is often preferred because of its characteristics, which are suited for geolocation prediction."
                    },
                    {
                        "id": 7,
                        "string": "First, some tweets include geotags, which are useful as ground truth locations."
                    },
                    {
                        "id": 8,
                        "string": "Secondly, tweets include metadata such as timezones and self-declared locations that can facilitate geolocation prediction."
                    },
                    {
                        "id": 9,
                        "string": "Thirdly, a user network is obtainable by consideration of the interaction between two users as a network link."
                    },
                    {
                        "id": 10,
                        "string": "Herein, we propose a neural network model to tackle geolocation prediction in Twitter."
                    },
                    {
                        "id": 11,
                        "string": "Past studies have combined text, metadata, and user network information with ensemble approaches (Han et al., 2013 (Han et al., , 2014 Rahimi et al., 2015a; Jayasinghe et al., 2016) to achieve state-of-the-art performance."
                    },
                    {
                        "id": 12,
                        "string": "Our model combines text, metadata, and user network information using a complex neural network."
                    },
                    {
                        "id": 13,
                        "string": "Neural networks have recently shown effectiveness to capture complex representations combining simpler representations from large-scale datasets (Goodfellow et al., 2016) ."
                    },
                    {
                        "id": 14,
                        "string": "We intend to obtain unified text, metadata, and user network representations with an attention mechanism  that is superior to the earlier ensemble approaches."
                    },
                    {
                        "id": 15,
                        "string": "The contributions of this paper are the following: 1."
                    },
                    {
                        "id": 16,
                        "string": "We propose a neural network model that learns unified text, metadata, and user network representations with an attention mechanism."
                    },
                    {
                        "id": 17,
                        "string": "2."
                    },
                    {
                        "id": 18,
                        "string": "We show that the proposed model outperforms the previous ensemble approaches in two open datasets."
                    },
                    {
                        "id": 19,
                        "string": "3."
                    },
                    {
                        "id": 20,
                        "string": "We analyze some components of the proposed model to gain insight into the unification processes of the model."
                    },
                    {
                        "id": 21,
                        "string": "Our model specifically emphasizes geolocation prediction in Twitter to use benefits derived from the characteristics described above."
                    },
                    {
                        "id": 22,
                        "string": "However, our model can be readily extended to other social media analyses such as user attribute analysis and political analysis, which can benefit from metadata and user network information."
                    },
                    {
                        "id": 23,
                        "string": "In subsequent sections of this paper, we explain the related works in four perspectives in Section 2."
                    },
                    {
                        "id": 24,
                        "string": "The proposed neural network model is described in Section 3 along with two open datasets that we used for evaluations in Section 4."
                    },
                    {
                        "id": 25,
                        "string": "Details of an evaluation are reported in Section 5 with discussions in Section 6."
                    },
                    {
                        "id": 26,
                        "string": "Finally, Section 7 concludes the paper with some future directions."
                    },
                    {
                        "id": 27,
                        "string": "2 Related Works 2.1 Text-based Approach Probability distributions of words over locations have been used to estimate the geolocations of users."
                    },
                    {
                        "id": 28,
                        "string": "Maximum likelihood estimation approaches (Cheng et al., 2010 and language modeling approaches minimizing KL-divergence (Wing and Baldridge, 2011; Kinsella et al., 2011; Roller et al., 2012) have succeeded in predicting user locations using word distributions."
                    },
                    {
                        "id": 29,
                        "string": "Topic modeling approaches to extract latent topics with geographical regions (Eisenstein et al., 2010 (Eisenstein et al., , 2011 Hong et al., 2012; Ahmed et al., 2013) have also been explored considering word distributions."
                    },
                    {
                        "id": 30,
                        "string": "Supervised machine learning methods with word features are also popular in text-based geolocation prediction."
                    },
                    {
                        "id": 31,
                        "string": "Multinomial Naive Bayes (Han et al., 2012 (Han et al., , 2014 Wing and Baldridge, 2011) , logistic regression (Wing and Baldridge, 2014; Han et al., 2014) , hierarchical logistic regression (Wing and Baldridge, 2014) , and a multilayer neural network with stacked denoising autoencoder (Liu and Inkpen, 2015) have realized geolocation prediction from text."
                    },
                    {
                        "id": 32,
                        "string": "A semi-supervised machine learning approach by Cha et al."
                    },
                    {
                        "id": 33,
                        "string": "(2015) has also been produced using a sparse-coding and dictionary learning."
                    },
                    {
                        "id": 34,
                        "string": "User-network-based Approach Social media often include interactions of several kinds among users."
                    },
                    {
                        "id": 35,
                        "string": "These interactions can be regarded as links that form a network among users."
                    },
                    {
                        "id": 36,
                        "string": "Several studies have used such user network information to predict geolocation."
                    },
                    {
                        "id": 37,
                        "string": "Backstrom et al."
                    },
                    {
                        "id": 38,
                        "string": "(2010) introduced a probabilistic model to predict the location of a user using friendship information in Facebook."
                    },
                    {
                        "id": 39,
                        "string": "Friend and follower information in Twitter were used to predict user locations with a most frequent friend algorithm (Davis Jr. et al., 2011) , a unified descriptive model (Li et al., 2012b) , location-based generative models (Li et al., 2012a) , dynamic Bayesian networks (Sadilek et al., 2012) , a support vector machine (Rout et al., 2013) , and maximum likelihood estimation (McGee et al., 2013) ."
                    },
                    {
                        "id": 40,
                        "string": "Mention information in Twitter is also used with label propagation models (Jurgens, 2013; Compton et al., 2014) and an energy and social local coefficient model (Kong et al., 2014) ."
                    },
                    {
                        "id": 41,
                        "string": "Jurgens et al."
                    },
                    {
                        "id": 42,
                        "string": "(2015) compared nine user-network-based approaches targeting Twitter, controlling data conditions."
                    },
                    {
                        "id": 43,
                        "string": "Metadata-based Approach Metadata such as location fields are useful as effective clues to predict geolocation."
                    },
                    {
                        "id": 44,
                        "string": "Hecht et al."
                    },
                    {
                        "id": 45,
                        "string": "(2011) reported that decent accuracy of geolocation prediction can be achieved using location fields."
                    },
                    {
                        "id": 46,
                        "string": "Approaches to combine metadata with texts are also proposed to extend text-based approaches."
                    },
                    {
                        "id": 47,
                        "string": "Combinatory approaches such as a dynamically weighted ensemble method (Mahmud et al., 2012) , polygon stacking (Schulz et al., 2013) , stacking (Han et al., 2013 (Han et al., , 2014 , and average pooling with a neural network (Miura et al., 2016) have strengthened geolocation prediction."
                    },
                    {
                        "id": 48,
                        "string": "Combinatory Approach Extending User-network-based Approach Several attempts have been made to combine usernetwork-based approaches with other approaches."
                    },
                    {
                        "id": 49,
                        "string": "A text-based approach with logistic regression was combined with label propagation approaches to enhance geolocation prediction (Rahimi et al., 2015a (Rahimi et al., ,b, 2016 ."
                    },
                    {
                        "id": 50,
                        "string": "Jayasinghe et al."
                    },
                    {
                        "id": 51,
                        "string": "(2016) combined nine components including text-based approaches, metadata-based approaches, and a usernetwork-based approach with a cascade ensemble method."
                    },
                    {
                        "id": 52,
                        "string": "Comparisons with Proposed Model A model we propose in Section 3 which combines text, metadata, and user network information with a neural network, can be regarded as an alternative to approaches using text and metadata (Mahmud et al., 2012; Schulz et al., 2013; Han et al., 2013 Han et al., , 2014 Miura et al., 2016) , approaches with text and user network information (Rahimi et al., 2015a,b) , and an approach with text, metadata, and user network information (Jayasinghe et al., 2016) ."
                    },
                    {
                        "id": 53,
                        "string": "In Section 5, we demonstrate that our model outperforms earlier models."
                    },
                    {
                        "id": 54,
                        "string": "In terms of machine learning methods, our model is a neural network model that shares some similarity with previous neural network models (Liu and Inkpen, 2015; Miura et al., 2016) ."
                    },
                    {
                        "id": 55,
                        "string": "Our model and these previous models have two key differences."
                    },
                    {
                        "id": 56,
                        "string": "First, our model integrates user network information along with other information."
                    },
                    {
                        "id": 57,
                        "string": "Secondly, our model combines text and metadata with an attention mechanism ."
                    },
                    {
                        "id": 58,
                        "string": "Figure 1 presents an overview of our model: a complex neural network for classification with a city as a label."
                    },
                    {
                        "id": 59,
                        "string": "For each user, the model accepts inputs of messages, a location field, a description field, a timezone, linked users, and the cities of linked users."
                    },
                    {
                        "id": 60,
                        "string": "Model Proposed Model User network information is incorporated by city embeddings and user embeddings of linked users."
                    },
                    {
                        "id": 61,
                        "string": "User embeddings are introduced along with city embeddings because linked users with city information 1 are limited."
                    },
                    {
                        "id": 62,
                        "string": "We chose to let the model learn geolocation representations of linked users directly via user embeddings."
                    },
                    {
                        "id": 63,
                        "string": "The model can be broken down to several components, details of which are described in Section 3.1.1-3.1.4."
                    },
                    {
                        "id": 64,
                        "string": "Text Component We describe the text component of the model, which is the \"TEXT\" section in Figure 1 ."
                    },
                    {
                        "id": 65,
                        "string": "Figure 2 presents an overview of the text component."
                    },
                    {
                        "id": 66,
                        "string": "The component consists of a recurrent neural network (RNN) (Graves, 2012) layer and attention layers."
                    },
                    {
                        "id": 67,
                        "string": "An input of the component is a timeline of a user, which consists of messages in a time sequence."
                    },
                    {
                        "id": 68,
                        "string": "As an implementation of RNN, we used Gated Recurrent Unit (GRU)  with a bidirectional setting."
                    },
                    {
                        "id": 69,
                        "string": "In the RNN layer, word embeddings x of a message are processed with the following transition functions: Word Embedding Attention M computes a message representation m as a weighted sum of g t with weight α t : z t = σ (W z x t + U z h t−1 + b z ) (1) r t = σ (W r x t + U r h t−1 + b r ) (2) h t = tanh (W h x t + U h (r t ⊙ h t−1 ) + b h ) (3) h t = (1 − z t ) ⊙ h t−1 + z t ⊙h t (4) where z t is an update gate, r t is a reset gate,h t is a candidate state, h t is a state, W z , W r , W h , U z , U r , U h are weight matrices, b z , b r , b x 1 x T … input h 1 bi-directional recurrent states … g 1 g 2 g T RNN features … x 2 u 1 context vectors + … α 1 g 1 α 2 g 2 α T g T Attention features m u 2 u T Attention Layer RNN Layer m = ∑ t α t g t (5) α t = exp ( v T α u t ) ∑ t exp (v T α u t ) (6) u t = tanh (W α g t + b α ) (7) where v α is a weight vector, W α is a weight matrix, and b α a bias vector."
                    },
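                    {
                        "id": "note-69",
                        "string": "Illustrative sketch (editor addition) of Eqs. (1)-(7), assuming PyTorch and toy dimensions (the paper does not specify a framework): a bidirectional GRU produces the features g_t, and an attention layer pools them into the message representation m.",
                        "code": [
                            "import torch",
                            "import torch.nn as nn",
                            "",
                            "class AttentionPool(nn.Module):",
                            "    # Eqs. (5)-(7): u_t = tanh(W g_t + b), alpha = softmax(v . u), m = sum alpha_t g_t",
                            "    def __init__(self, dim):",
                            "        super().__init__()",
                            "        self.proj = nn.Linear(dim, dim)               # W_alpha, b_alpha",
                            "        self.context = nn.Linear(dim, 1, bias=False)  # v_alpha",
                            "",
                            "    def forward(self, g):                             # g: (T, dim) RNN features",
                            "        u = torch.tanh(self.proj(g))                  # Eq. (7)",
                            "        alpha = torch.softmax(self.context(u).squeeze(-1), dim=0)  # Eq. (6)",
                            "        return (alpha.unsqueeze(-1) * g).sum(dim=0)   # Eq. (5)",
                            "",
                            "rnn = nn.GRU(input_size=100, hidden_size=50, bidirectional=True)  # Eqs. (1)-(4)",
                            "x = torch.randn(20, 1, 100)           # T=20 word embeddings, batch of 1",
                            "g, _ = rnn(x)                         # bidirectional states, shape (20, 1, 100)",
                            "m = AttentionPool(100)(g.squeeze(1))  # message representation m, shape (100,)"
                        ]
                    },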
                    {
                        "id": 70,
                        "string": "u t is an attention context vector calculated from g t with a single fullyconnected layer (Eq."
                    },
                    {
                        "id": 71,
                        "string": "7)."
                    },
                    {
                        "id": 72,
                        "string": "u t is normalized with softmax to obtain α t as a probability (Eq."
                    },
                    {
                        "id": 73,
                        "string": "6)."
                    },
                    {
                        "id": 74,
                        "string": "The message representation m is passed to the second attention layer Attention TL to obtain a timeline representation from message representations."
                    },
                    {
                        "id": 75,
                        "string": "Text and Metadata Component We describe text and metadata components of the model, which is the \"TEXT&META\" section in Figure 1 ."
                    },
                    {
                        "id": 76,
                        "string": "This component considers the following three types of metadata along with text: location a text field in which a user is allowed to write the user location freely, description a text field a user can use for self-description, and timezone a selective field from which a user can choose a timezone."
                    },
                    {
                        "id": 77,
                        "string": "Note that certain percentages of these fields are not available 2 , and unknown tokens are used for inputs in such cases."
                    },
                    {
                        "id": 78,
                        "string": "We process location fields and description fields similarly to messages using an RNN layer and an attention layer."
                    },
                    {
                        "id": 79,
                        "string": "Because there is only one location and one description per user, a second attention layer is not required, as it is in the text component."
                    },
                    {
                        "id": 80,
                        "string": "We also chose to share word embeddings among the messages, the location, and the description processes because these inputs are all textual information."
                    },
                    {
                        "id": 81,
                        "string": "For the timezone, an embedding is assigned for each timezone value."
                    },
                    {
                        "id": 82,
                        "string": "A processed timeline representation, a location representation, and a description representation are then passed to the attention layer Attention U with a timezone representation."
                    },
                    {
                        "id": 83,
                        "string": "Attention U combines these four representations and outputs a user representation."
                    },
                    {
                        "id": 84,
                        "string": "This combination is done as in Attention TL with four representations as g 1 ."
                    },
                    {
                        "id": 85,
                        "string": "."
                    },
                    {
                        "id": 86,
                        "string": "."
                    },
                    {
                        "id": 87,
                        "string": "g 4 in Eq."
                    },
                    {
                        "id": 88,
                        "string": "5."
                    },
                    {
                        "id": 89,
                        "string": "User Network Component We describe the user network component of the model, which is the \"USERNET\" section in Figure 1 ."
                    },
                    {
                        "id": 90,
                        "string": "Figure 3 presents an overview of the user network component."
                    },
                    {
                        "id": 91,
                        "string": "The model has two inputs linked cities and linked users."
                    },
                    {
                        "id": 92,
                        "string": "Users connected with a user network are extracted as linked users."
                    },
                    {
                        "id": 93,
                        "string": "We treat their cities 3 as linked cities."
                    },
                    {
                        "id": 94,
                        "string": "Linked cities and linked users are assigned with city embeddings c and user embeddings a respectively."
                    },
                    {
                        "id": 95,
                        "string": "c and a are then processed to output p = c ⊕ a, where ⊕ is an element-wise addition operator."
                    },
                    {
                        "id": 96,
                        "string": "p is then passed to the subsequent attention layer Attention N to obtain a user network representa- Construction of the User Network We construct mention networks (Jurgens, 2013; Compton et al., 2014; Rahimi et al., 2015a,b) from datasets as user networks."
                    },
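                    {
                        "id": "note-96",
                        "string": "Illustrative sketch (editor addition) of the user network inputs, assuming PyTorch and toy vocabulary sizes: linked cities and linked users are embedded and added element-wise to form p, which Attention_N (same form as the attention sketch above) then pools into a user network representation.",
                        "code": [
                            "import torch",
                            "import torch.nn as nn",
                            "",
                            "city_emb = nn.Embedding(400, 64)    # hypothetical city vocabulary, index 0 = unknown",
                            "user_emb = nn.Embedding(10000, 64)  # hypothetical linked-user vocabulary",
                            "",
                            "cities = torch.tensor([3, 0, 17])   # cities of 3 linked users (0 = unknown)",
                            "users = torch.tensor([42, 7, 99])   # the linked users themselves",
                            "p = city_emb(cities) + user_emb(users)  # p = c (+) a, element-wise addition",
                            "# Attention_N then pools p over the N linked users into one representation."
                        ]
                    },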
                    {
                        "id": 97,
                        "string": "To do so, we follow the approach of Rahimi et al."
                    },
                    {
                        "id": 98,
                        "string": "(2015a) and Rahimi et al."
                    },
                    {
                        "id": 99,
                        "string": "(2015b) who use uni-directional mention to set edges of a mention network."
                    },
                    {
                        "id": 100,
                        "string": "An edge is set between the two users nodes if a user mentions another user."
                    },
                    {
                        "id": 101,
                        "string": "The number of unidirectional mention edges for TwitterUS and W-NUT can be found in Table 1 ."
                    },
                    {
                        "id": 102,
                        "string": "The uni-directional setting results to large numbers of edges, which often are computationally expensive to process."
                    },
                    {
                        "id": 103,
                        "string": "We restricted edges to satisfy one of the following conditions to reduce the size: (1) both users have ground truth locations or (2) one user has a ground truth location and another user is mentioned 5 times or more in a training set."
                    },
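                    {
                        "id": "note-103",
                        "string": "Illustrative sketch (editor addition) of the two edge-keeping conditions, with hypothetical dict inputs (has_location: user -> bool, mention_count: user -> training-set mention count).",
                        "code": [
                            "def keep_edge(u, v, has_location, mention_count, min_mentions=5):",
                            "    # condition (1): both endpoints have ground-truth locations",
                            "    if has_location[u] and has_location[v]:",
                            "        return True",
                            "    # condition (2): one endpoint is located and the other is",
                            "    # mentioned at least min_mentions times in the training set",
                            "    if has_location[u] and mention_count[v] >= min_mentions:",
                            "        return True",
                            "    if has_location[v] and mention_count[u] >= min_mentions:",
                            "        return True",
                            "    return False"
                        ]
                    },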
                    {
                        "id": 104,
                        "string": "The number of reduced-edges with these conditions in TwitterUS and W-NUT can be confirmed in Table 1 ."
                    },
                    {
                        "id": 105,
                        "string": "Evaluation 5.1 Implemented Baselines 5.1.1 LR LR is an l 1 -regularized logistic regression model with k-d tree regions (Roller et al., 2012) used in Rahimi et al."
                    },
                    {
                        "id": 106,
                        "string": "(2015a) ."
                    },
                    {
                        "id": 107,
                        "string": "The model uses tfidf weighted bag-of-words unigrams for features."
                    },
                    {
                        "id": 108,
                        "string": "This model is simple, but it has shown state-ofthe-art performance in cases when only text is available."
                    },
                    {
                        "id": 109,
                        "string": "MADCEL-B-LR MADCEL-B-LR, a model presented by (Rahimi et al., 2015a) , combines LR with Modified Adsorption (MAD) (Talukdar and Crammer, 2009) ."
                    },
                    {
                        "id": 110,
                        "string": "MAD is a graph-based label propagation algorithm that optimizes an objective with a prior term, a smoothness term, and an uninformativeness term."
                    },
                    {
                        "id": 111,
                        "string": "LR is combined with MAD by introducing LR results as dongle nodes to MAD."
                    },
                    {
                        "id": 112,
                        "string": "This model includes an algorithm for the construction of a mention network."
                    },
                    {
                        "id": 113,
                        "string": "The algorithm removes celebrity users 5 and collapses a mention network 6 ."
                    },
                    {
                        "id": 114,
                        "string": "We use binary edges for user network edges because they performed slightly better than weighted edges by accuracy@161 metric in Rahimi et al."
                    },
                    {
                        "id": 115,
                        "string": "(2015a) ."
                    },
                    {
                        "id": 116,
                        "string": "LR-STACK LR-STACK is an ensemble learning model that combines four LR classifiers (LR-MSG, LR-LOC, LR-DESC, LR-TZ) with an l 2 -regularized logistic regression meta-classifier (LR-2ND)."
                    },
                    {
                        "id": 117,
                        "string": "LR-MSG, LR-LOC, LR-DESC, and LR-TZ respectively use messages, location fields, description fields, and timezones as their inputs."
                    },
                    {
                        "id": 118,
                        "string": "This model is similar to the stacking (Wolpert, 1992) approach taken in Han et al."
                    },
                    {
                        "id": 119,
                        "string": "(2013) and Han et al."
                    },
                    {
                        "id": 120,
                        "string": "(2014) , which showed superior performance compared to a feature concatenation approach."
                    },
                    {
                        "id": 121,
                        "string": "The model takes the following three steps to combine text and metadata: Step 1 LR-MSG, LR-LOC, LR-DESC, and LR-TZ are trained using a training set, Step 2 the outputs of the four classifiers on the training set are obtained with 10-fold cross validation, and Step 3 LR-2ND is trained using the outputs of the four classifiers."
                    },
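                    {
                        "id": "note-121",
                        "string": "Illustrative sketch (editor addition) of the three stacking steps, assuming scikit-learn; the per-source feature matrices are hypothetical stand-ins for the message, location, description, and timezone features.",
                        "code": [
                            "import numpy as np",
                            "from sklearn.linear_model import LogisticRegression",
                            "from sklearn.model_selection import cross_val_predict",
                            "",
                            "def train_lr_stack(feature_sets, y):",
                            "    # feature_sets: [X_msg, X_loc, X_desc, X_tz], each (n_users, n_feats)",
                            "    base = [LogisticRegression(penalty='l1', solver='liblinear')",
                            "            for _ in feature_sets]",
                            "    # Step 2: out-of-fold class probabilities via 10-fold cross validation",
                            "    meta_X = np.hstack([",
                            "        cross_val_predict(clf, X, y, cv=10, method='predict_proba')",
                            "        for clf, X in zip(base, feature_sets)",
                            "    ])",
                            "    # Step 1: fit the four base classifiers on the full training set",
                            "    for clf, X in zip(base, feature_sets):",
                            "        clf.fit(X, y)",
                            "    # Step 3: fit the l2-regularized meta-classifier LR-2ND on stacked outputs",
                            "    meta = LogisticRegression(penalty='l2').fit(meta_X, y)",
                            "    return base, meta"
                        ]
                    },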
                    {
                        "id": 122,
                        "string": "MADCEL-B-LR-STACK MADCEL-B-LR-STACK is a combined model of MADCEL-B-LR and LR-STACK."
                    },
                    {
                        "id": 123,
                        "string": "LR-STACK results are introduced as dongle nodes to MAD instead of LR results to combine text, metadata, and network information."
                    },
                    {
                        "id": 124,
                        "string": "Model Configurations 5.2.1 Text Processor We applied a lower case conversion, a unicode normalization, a Twitter user name normalization, and a URL normalization for text pre-processing."
                    },
                    {
                        "id": 125,
                        "string": "The pre-processed text is then segmented using Twokenizer (Owoputi et al., 2013) to obtain words."
                    },
                    {
                        "id": 126,
                        "string": "Pre-training of Embeddings We pre-trained word embeddings using messages, location fields, and description fields of a training set using fastText (Bojanowski et al., 2016) with the skip-gram algorithm."
                    },
                    {
                        "id": 127,
                        "string": "We also pre-trained user embeddings using the non-reduced mention network described in Section 4.2 of a training set with LINE (Tang et al., 2015) ."
                    },
                    {
                        "id": 128,
                        "string": "The detail of pre-training parameters are described in Appendix A.1."
                    },
                    {
                        "id": 129,
                        "string": "Neural Network Optimization We chose an objective function of our models to cross-entropy loss."
                    },
                    {
                        "id": 130,
                        "string": "l 2 regularization was applied to the RNN layers, the attention context vectors, and the FC layers of our models to avoid overfitting."
                    },
                    {
                        "id": 131,
                        "string": "The objective function was minimized through stochastic gradient descent over shuffled mini-batches with Adam (Kingma and Ba, 2014)."
                    },
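                    {
                        "id": "131a",
                        "string": "A PyTorch-style sketch of this optimization setup (illustrative; model and loader are hypothetical, and weight_decay here applies the l2 penalty to all parameters rather than only the RNN, attention, and FC layers):\n\nimport torch\n\nloss_fn = torch.nn.CrossEntropyLoss()  # cross-entropy objective\n# Adam over shuffled mini-batches; the l2 strength shown is an assumed value.\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)\nfor inputs, targets in loader:\n    optimizer.zero_grad()\n    loss = loss_fn(model(inputs), targets)\n    loss.backward()\n    optimizer.step()"
                    },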
                    {
                        "id": 132,
                        "string": "Model Parameters The layers and the embeddings in our models have unit size and embedding dimension parameters."
                    },
                    {
                        "id": 133,
                        "string": "Our models and the baseline models have regularization parameter α, which is sensitive to a dataset."
                    },
                    {
                        "id": 134,
                        "string": "The baseline models have additional k-d tree bucket size c, celebrity threshold t, and MAD parameters µ 1 , µ 2 , and µ 3 , which are also data sensitive."
                    },
                    {
                        "id": 135,
                        "string": "We chose optimal values for these parameters in terms of accuracy with a grid search using the development sets of TwitterUS and W-NUT."
                    },
                    {
                        "id": 136,
                        "string": "Details of the parameter selection strategies and the selected values are described in Appendix A.2."
                    },
                    {
                        "id": 137,
                        "string": "Metrics We evaluate the models in the following four commonly used metrics in geolocation prediction: accuracy the percentage of correctly predicted cities, accuracy@161 a relaxed accuracy that takes prediction errors within 161 km as correct predictions, median error distance median value of error distances in predictions, and mean error distance mean value of error distances in predictions."
                    },
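                    {
                        "id": "137a",
                        "string": "The four metrics can be computed as in the following sketch (illustrative; predicted and gold coordinates are hypothetical (lat, lon) pairs in degrees, and error distance is taken as great-circle distance):\n\nimport numpy as np\n\ndef haversine_km(lat1, lon1, lat2, lon2):\n    # Great-circle distance in kilometres.\n    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))\n    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2\n    return 2 * 6371.0 * np.arcsin(np.sqrt(a))\n\ndef geolocation_metrics(pred_cities, gold_cities, pred_coords, gold_coords):\n    dists = np.array([haversine_km(p[0], p[1], g[0], g[1]) for p, g in zip(pred_coords, gold_coords)])\n    return {'accuracy': float(np.mean([p == g for p, g in zip(pred_cities, gold_cities)])),\n            'accuracy@161': float(np.mean(dists <= 161.0)),  # errors within 161 km count as correct\n            'median_error_km': float(np.median(dists)),\n            'mean_error_km': float(np.mean(dists))}"
                    },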
                    {
                        "id": 138,
                        "string": "Table 2 : Performances of our models and the baseline models on TwitterUS."
                    },
                    {
                        "id": 139,
                        "string": "Significance tests were performed between models with same Sign."
                    },
                    {
                        "id": 140,
                        "string": "Test IDs."
                    },
                    {
                        "id": 141,
                        "string": "The shaded lines represent values copied from related papers."
                    },
                    {
                        "id": 142,
                        "string": "Asterisks denote significant improvements against paired counterparts with 1% confidence (**) and 5% confidence (*)."
                    },
                    {
                        "id": 143,
                        "string": "Model Sign."
                    },
                    {
                        "id": 144,
                        "string": "Test ID Accuracy Accuracy @161 Error Distance Median Mean Baselines ( Table 3 : Performance of our models and baseline models on W-NUT."
                    },
                    {
                        "id": 145,
                        "string": "The same notations as those in Table 2 are used in this table."
                    },
                    {
                        "id": 146,
                        "string": "Table 2 presents results of our models and the implemented baseline models on TwitterUS."
                    },
                    {
                        "id": 147,
                        "string": "We also list values from earlier reports (Han et al., 2012; Wing and Baldridge, 2014; Rahimi et al., 2015a Rahimi et al., ,b, 2016 to make our results readily comparable with past reported values."
                    },
                    {
                        "id": 148,
                        "string": "Result Performance on TwitterUS We performed some statistical significance tests among model pairs that share the same inputs."
                    },
                    {
                        "id": 149,
                        "string": "The values in the Sign."
                    },
                    {
                        "id": 150,
                        "string": "Test ID column of Table  2 represent the IDs of these pairs."
                    },
                    {
                        "id": 151,
                        "string": "As a preparation of statistical significance tests, accuracies, accuracy@161s, and error distances of each test user were calculated for each model pair."
                    },
                    {
                        "id": 152,
                        "string": "Twosided Fisher-Pittman Permutation tests were used for testing accuracy and accuracy@161."
                    },
                    {
                        "id": 153,
                        "string": "Mood's median test was used for testing error distance in terms of median."
                    },
                    {
                        "id": 154,
                        "string": "Paired t-tests were used for testing error distance in terms of mean."
                    },
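                    {
                        "id": "154a",
                        "string": "A sketch of these per-user significance tests (illustrative; assumes scipy, with placeholder per-user arrays standing in for two models' real outputs):\n\nimport numpy as np\nfrom scipy import stats\n\ndef paired_permutation_test(a, b, n_resamples=10000, seed=0):\n    # Two-sided Fisher-Pitman permutation test on the mean paired difference.\n    rng = np.random.default_rng(seed)\n    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)\n    observed = abs(diff.mean())\n    signs = rng.choice([-1.0, 1.0], size=(n_resamples, diff.size))\n    null = np.abs((signs * diff).mean(axis=1))\n    return float((null >= observed).mean())\n\nacc_a = np.array([1, 0, 1, 1, 0]); acc_b = np.array([1, 0, 0, 1, 0])  # placeholder 0/1 correctness\nerr_a = np.array([12.0, 300.0, 45.0, 8.0, 520.0])  # placeholder error distances (km)\nerr_b = np.array([20.0, 350.0, 40.0, 9.0, 610.0])\np_acc = paired_permutation_test(acc_a, acc_b)               # accuracy / accuracy@161\nstat, p_median, med, tbl = stats.median_test(err_a, err_b)  # Mood's median test\nt_stat, p_mean = stats.ttest_rel(err_a, err_b)              # paired t-test on mean error distance"
                    },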
                    {
                        "id": 155,
                        "string": "We confirmed the significance of improvements in accuracy@161 and mean distance error for all of our models."
                    },
                    {
                        "id": 156,
                        "string": "Three of our models also improved in terms of accuracy."
                    },
                    {
                        "id": 157,
                        "string": "Especially, the proposed model achieved a 2.8% increase in accuracy and a 2.4% increase in accuracy@161 against the counterpart baseline model MADCEL-B-LR-STACK."
                    },
                    {
                        "id": 158,
                        "string": "One negative result we found was the median error distance between SUB-NN-META and LR-STACK."
                    },
                    {
                        "id": 159,
                        "string": "The baseline model LR-STACK performed 4.5 km significantly better than our model."
                    },
                    {
                        "id": 160,
                        "string": "Table 3 presents the results of our models and the implemented baseline models on W-NUT."
                    },
                    {
                        "id": 161,
                        "string": "As for TwitterUS, we listed values from Miura et al."
                    },
                    {
                        "id": 162,
                        "string": "(2016) and Jayasinghe et al."
                    },
                    {
                        "id": 163,
                        "string": "(2016) ."
                    },
                    {
                        "id": 164,
                        "string": "We tested the significance of these results in the same way as we did for TwitterUS."
                    },
                    {
                        "id": 165,
                        "string": "We confirmed significant improvement in the four metrics for all of our models."
                    },
                    {
                        "id": 166,
                        "string": "The proposed model achieved a 4.8% increase in accuracy and a 6.6% increase in accuracy@161 against the counterpart baseline model MADCEL-B-LR-STACK."
                    },
                    {
                        "id": 167,
                        "string": "The accuracy is 3.8% higher against the previously reported best value (Jayasinghe et al., 2016) which combined texts, metadata, and user network information with an ensemble method."
                    },
                    {
                        "id": 168,
                        "string": "6 Discussion 6.1 Analyses of Attention Probabilities Performance on W-NUT Unification Strategies In the evaluation, the proposed model has implicitly shown effectiveness at unifying text, metadata, and user network representations through improvements in the four metrics."
                    },
                    {
                        "id": 169,
                        "string": "However, details of the unification processes are not clear from the model outputs because they are merely the probabilities of estimated locations."
                    },
                    {
                        "id": 170,
                        "string": "To gain insight into the unification processes, we analyzed the states of two attention layers: Attention U and Attention UN in Figure 1 ."
                    },
                    {
                        "id": 171,
                        "string": "Figure 4 presents the estimated probability density functions (PDFs) of the four input representations for Attention U ."
                    },
                    {
                        "id": 172,
                        "string": "These PDFs are estimated with kernel density estimation from the development sets of TwitterUS and W-NUT, where all four representations are available."
                    },
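                    {
                        "id": "172a",
                        "string": "A sketch of this density estimation (illustrative; assumes scipy, with placeholder data standing in for the real attention probabilities of one representation over development users):\n\nimport numpy as np\nfrom scipy.stats import gaussian_kde\n\nprobs = np.random.default_rng(0).beta(2.0, 5.0, size=1000)  # placeholder attention weights in [0, 1]\nkde = gaussian_kde(probs)          # Gaussian kernel density estimate\ngrid = np.linspace(0.0, 1.0, 200)\npdf = kde(grid)                    # estimated density over the attention range"
                    },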
                    {
                        "id": 173,
                        "string": "From the PDFs, it is apparent that the model assigns higher probabilities to time line representations than to other three representations in TwitterUS compared to W-NUT."
                    },
                    {
                        "id": 174,
                        "string": "This finding is reasonable because timelines in TwitterUS consist of more tweets (tweet/user in Table 1 ) and are likely to be more informative than in W-NUT."
                    },
                    {
                        "id": 175,
                        "string": "Figure 5 presents the estimated PDFs of user network representations for Attention UN ."
                    },
                    {
                        "id": 176,
                        "string": "These PDFs are estimated from the development sets of TwitterUS and W-NUT, where both input representations are available."
                    },
                    {
                        "id": 177,
                        "string": "Strong preference of network representation for TwitterUS against W-NUT is found in the PDFs."
                    },
                    {
                        "id": 178,
                        "string": "This finding is intuitive because TwitterUS has substantially more user network edges (reduced-edge/user in Table 1 ) than W-NUT, which is likely to benefit more from user network information."
                    },
                    {
                        "id": 179,
                        "string": "Attention Patterns We further analyzed the proposed model by clustering attention probabilities to capture typical attention patterns."
                    },
                    {
                        "id": 180,
                        "string": "For each user, we assigned six attention probabilities of Attention U and Attention UN as features for a clustering."
                    },
                    {
                        "id": 181,
                        "string": "A kmeans clustering was performed over these users with 9 clusters."
                    },
                    {
                        "id": 182,
                        "string": "The clustering clearly separated the users to 5 clusters for TwitterUS users and 4 clusters for W-NUT users."
                    },
                    {
                        "id": 183,
                        "string": "We extracted typical users of each cluster by selecting the closest users of the cluster centroids."
                    },
                    {
                        "id": 184,
                        "string": "Figure 6 shows a clustering result and the attention probabilities of these users."
                    },
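                    {
                        "id": "184a",
                        "string": "A sketch of this attention-pattern clustering (illustrative; assumes scikit-learn, with placeholder rows standing in for each user's six attention probabilities from AttentionU and AttentionUN):\n\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\nfeatures = np.random.default_rng(0).random((500, 6))  # placeholder attention probabilities\nkm = KMeans(n_clusters=9, n_init=10, random_state=0).fit(features)\n# Typical user of each cluster: the member closest to its centroid.\ntypical_users = []\nfor c in range(km.n_clusters):\n    members = np.where(km.labels_ == c)[0]\n    d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)\n    typical_users.append(int(members[np.argmin(d)]))"
                    },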
                    {
                        "id": 185,
                        "string": "These attention probabilities can be considered as typical attention patterns of the proposed model and match with the previously estimated PDFs."
                    },
                    {
                        "id": 186,
                        "string": "For example, cluster 2 and 3 represent an attention pattern that processes users by balancing the representations of locations along with the representations of timelines."
                    },
                    {
                        "id": 187,
                        "string": "Additionally, the location probabilities in this pattern are in the right tail region of the location PDF."
                    },
                    {
                        "id": 188,
                        "string": "Limitations of Proposed Model City Prediction The evaluation produced improvements in most of our models in the four metrics."
                    },
                    {
                        "id": 189,
                        "string": "One exception we found was the median distance error between SUB-NN-META and LR-STACKING in TwitterUS."
                    },
                    {
                        "id": 190,
                        "string": "Because the median distance error of SUB-NN-META was quite low (46.8 km), we Table 4 denotes this oracle performance."
                    },
                    {
                        "id": 191,
                        "string": "The oracle mean error distance is 31.4 km."
                    },
                    {
                        "id": 192,
                        "string": "Its standard deviation is 30.1."
                    },
                    {
                        "id": 193,
                        "string": "Note that ground truth locations of TwitterUS are geotags and will not exactly match the oracle city centers."
                    },
                    {
                        "id": 194,
                        "string": "These oracle values imply that the current median error distances are close to the lower bound of the city classification approach and that they are difficult to improve."
                    },
                    {
                        "id": 195,
                        "string": "Errors with High Confidences The proposed model still contains 28-30% errors even in accuracy@161."
                    },
                    {
                        "id": 196,
                        "string": "A qualitative analysis of errors with high confidences was performed to investigate cases that the model fails."
                    },
                    {
                        "id": 197,
                        "string": "We found two common types of error in the error analysis."
                    },
                    {
                        "id": 198,
                        "string": "The first is a case when a location field is incorrect due to a reason such as a house move."
                    },
                    {
                        "id": 199,
                        "string": "For example, the model predicted \"Hong Kong\" for a user with a location field of \"Hong Kong\" but has the gold location of \"Toronto\"."
                    },
                    {
                        "id": 200,
                        "string": "The second is a case when a user tweets a place name of a travel."
                    },
                    {
                        "id": 201,
                        "string": "For example, the model predicted \"San Francisco\" for a user who tweeted about a travel to \"San Francisco\" but has the gold location of \"Boston\"."
                    },
                    {
                        "id": 202,
                        "string": "These two types of error are difficult to handle with the current architecture of the proposed model."
                    },
                    {
                        "id": 203,
                        "string": "The architecture only supports single location field which disables the model to track location changes."
                    },
                    {
                        "id": 204,
                        "string": "The architecture also treats each tweet independently which forbids the model to express a temporal state like traveling."
                    },
                    {
                        "id": 205,
                        "string": "Conclusion As described in this paper, we proposed a complex neural network model for geolocation prediction."
                    },
                    {
                        "id": 206,
                        "string": "The model unifies text, metadata, and user network information."
                    },
                    {
                        "id": 207,
                        "string": "The model achieved the maximum of a 3.8% increase in accuracy and a maximum of 6.6% increase in accuracy@161 against several previous state-of-the-art models."
                    },
                    {
                        "id": 208,
                        "string": "We further analyzed the states of several attention layers, which revealed that the probabilities assigned to timeline representations and user network representations match to some statistical characteristics of datasets."
                    },
                    {
                        "id": 209,
                        "string": "As future works of this study, we are planning to expand the proposed model to handle multiple locations and a temporal state to capture location changes and states like traveling."
                    },
                    {
                        "id": 210,
                        "string": "Additionally, we plan to apply the proposed model to other social media analyses such as gender analysis and age analysis."
                    },
                    {
                        "id": 211,
                        "string": "In these analyses, metadata like location fields and timezones may not be effective like in geolocation prediction."
                    },
                    {
                        "id": 212,
                        "string": "However, a user network is known to include various user attributes information including gender and age (McPherson et al., 2001) which suggests the unification of text and user network information to result in a success as in geolocation prediction."
                    },
                    {
                        "id": 213,
                        "string": "A Supplemental Materials A.1 Parameters of Embedding Pre-training Word embeddings were pre-trained with the parameters of learning rate=0.025, window size=5, negative sample size=5, and epoch=5."
                    },
                    {
                        "id": 214,
                        "string": "User embeddings were pre-trained with the parameters of initial learning rate=0.025, order=2, negative sample size=5, and training sample size=100M."
                    },
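                    {
                        "id": "214a",
                        "string": "For the word embeddings, the stated parameters map onto the fastText Python API roughly as follows (a sketch; corpus.txt is a hypothetical file holding the tokenized messages, location fields, and description fields):\n\nimport fasttext\n\n# Skip-gram pre-training with the parameters listed above.\nmodel = fasttext.train_unsupervised('corpus.txt', model='skipgram', lr=0.025, ws=5, neg=5, epoch=5)"
                    },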
                    {
                        "id": 215,
                        "string": "A.2 Model Parameters and Parameter Selection Strategies Unit Sizes, Embedding Dimensions, and a Max Tweet Number The layers and the embeddings in our models have unit size and embedding dimension parameters."
                    },
                    {
                        "id": 216,
                        "string": "We also restricted the maximum number of tweets per user for TwitterUS to reduce memory footprints."
                    },
                    {
                        "id": 217,
                        "string": "Table 5 shows the values for these parameters."
                    },
                    {
                        "id": 218,
                        "string": "Smaller values were set for TwitterUS because TwitterUS is approximately 2.6 times larger in terms of tweet number."
                    },
                    {
                        "id": 219,
                        "string": "It was computationally expensive to process TwiiterUS in the same settings as W-NUT."
                    },
                    {
                        "id": 220,
                        "string": "Regularization Parameters and Bucket Sizes We chose optimal values of α using a grid search with the development sets of TwitterUS and W-NUT."
                    },
                    {
                        "id": 221,
                        "string": "The range of α was set as the following: α ∈ {1e −4 , 5e −5 , 1e −5 , 5e −6 , 1e −6 , 5e −7 , 1e −7 , 5e −8 , 1e −8 }."
                    },
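                    {
                        "id": "221a",
                        "string": "A minimal sketch of the grid search over α (illustrative; train_and_eval is a hypothetical helper that trains a model with the given α on the training set and returns development-set accuracy):\n\nalphas = [1e-4, 5e-5, 1e-5, 5e-6, 1e-6, 5e-7, 1e-7, 5e-8, 1e-8]\nbest_alpha, best_acc = None, -1.0\nfor alpha in alphas:\n    acc = train_and_eval(alpha=alpha)  # hypothetical training/evaluation helper\n    if acc > best_acc:\n        best_alpha, best_acc = alpha, acc"
                    },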
                    {
                        "id": 222,
                        "string": "We also chose optimal values of c using grid search with the development sets of TwitterUS and W-NUT for the baseline models."
                    },
                    {
                        "id": 223,
                        "string": "The range of c was set as the following for TwitterUS: c ∈ {50, 100, 150, 200, 250, 300, 339}."
                    },
                    {
                        "id": 224,
                        "string": "The following was set for W-NUT: c ∈ {100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1500, 2000, 2500, 3000, 3028} ."
                    },
                    {
                        "id": 225,
                        "string": "Table 6 presents selected values of α and c. For LR-STACK and MADCEl-B-LR-STACK, different parameters of α and c were selected for each logistic regression classifier."
                    },
                    {
                        "id": 226,
                        "string": "MAD Parameters and Celebrity Threshold The MAD parameters µ 1 , µ 2 , and µ 3 and celebrity threshold t were also chosen using grid search with the development sets of TwitterUS and W-NUT."
                    },
                    {
                        "id": 227,
                        "string": "The ranges of µ 1 , µ 2 , and µ 3 were set as the following: µ 1 ∈ {1.0}, µ 2 ∈ {0.001, 0.01, 0.1, 1.0, 10.0}, µ 3 ∈ {0.0, 0.001, 0.01, 0.1, 1.0, 10.0}."
                    },
                    {
                        "id": 228,
                        "string": "The range of t for TwitterUS was set as t ∈ {2, ."
                    },
                    {
                        "id": 229,
                        "string": "."
                    },
                    {
                        "id": 230,
                        "string": "."
                    },
                    {
                        "id": 231,
                        "string": ", 16}."
                    },
                    {
                        "id": 232,
                        "string": "The range of t for W-NUT was set Table 6 : Regularization parameters and bucket sizes selected for our models and baseline models."
                    },
                    {
                        "id": 233,
                        "string": "Table 7 : MAD parameters and celebrity threshold selected for baseline models."
                    },
                    {
                        "id": 234,
                        "string": "as t ∈ {2, ."
                    },
                    {
                        "id": 235,
                        "string": "."
                    },
                    {
                        "id": 236,
                        "string": "."
                    },
                    {
                        "id": 237,
                        "string": ", 6}."
                    },
                    {
                        "id": 238,
                        "string": "Table 6 presents selected values of µ 1 , µ 2 , µ 3 , and t."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 33
                    },
                    {
                        "section": "User-network-based Approach",
                        "n": "2.2",
                        "start": 34,
                        "end": 42
                    },
                    {
                        "section": "Metadata-based Approach",
                        "n": "2.3",
                        "start": 43,
                        "end": 47
                    },
                    {
                        "section": "Combinatory Approach Extending",
                        "n": "2.4",
                        "start": 48,
                        "end": 51
                    },
                    {
                        "section": "Comparisons with Proposed Model",
                        "n": "2.5",
                        "start": 52,
                        "end": 59
                    },
                    {
                        "section": "Proposed Model",
                        "n": "3.1",
                        "start": 60,
                        "end": 63
                    },
                    {
                        "section": "Text Component",
                        "n": "3.1.1",
                        "start": 64,
                        "end": 74
                    },
                    {
                        "section": "Text and Metadata Component",
                        "n": "3.1.2",
                        "start": 75,
                        "end": 88
                    },
                    {
                        "section": "User Network Component",
                        "n": "3.1.3",
                        "start": 89,
                        "end": 95
                    },
                    {
                        "section": "Construction of the User Network",
                        "n": "4.2",
                        "start": 96,
                        "end": 104
                    },
                    {
                        "section": "Evaluation",
                        "n": "5",
                        "start": 105,
                        "end": 108
                    },
                    {
                        "section": "MADCEL-B-LR",
                        "n": "5.1.2",
                        "start": 109,
                        "end": 115
                    },
                    {
                        "section": "LR-STACK",
                        "n": "5.1.3",
                        "start": 116,
                        "end": 121
                    },
                    {
                        "section": "MADCEL-B-LR-STACK",
                        "n": "5.1.4",
                        "start": 122,
                        "end": 122
                    },
                    {
                        "section": "Model Configurations 5.2.1 Text Processor",
                        "n": "5.2",
                        "start": 123,
                        "end": 125
                    },
                    {
                        "section": "Pre-training of Embeddings",
                        "n": "5.2.2",
                        "start": 126,
                        "end": 128
                    },
                    {
                        "section": "Neural Network Optimization",
                        "n": "5.2.3",
                        "start": 129,
                        "end": 131
                    },
                    {
                        "section": "Model Parameters",
                        "n": "5.2.4",
                        "start": 132,
                        "end": 136
                    },
                    {
                        "section": "Metrics",
                        "n": "5.2.5",
                        "start": 137,
                        "end": 147
                    },
                    {
                        "section": "Result Performance on TwitterUS",
                        "n": "5.3",
                        "start": 148,
                        "end": 167
                    },
                    {
                        "section": "Unification Strategies",
                        "n": "6.1.1",
                        "start": 168,
                        "end": 178
                    },
                    {
                        "section": "Attention Patterns",
                        "n": "6.1.2",
                        "start": 179,
                        "end": 187
                    },
                    {
                        "section": "City Prediction",
                        "n": "6.2.1",
                        "start": 188,
                        "end": 194
                    },
                    {
                        "section": "Errors with High Confidences",
                        "n": "6.2.2",
                        "start": 195,
                        "end": 204
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 205,
                        "end": 238
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1264-Table2-1.png",
                        "caption": "Table 2: Performances of our models and the baseline models on TwitterUS. Significance tests were performed between models with same Sign. Test IDs. The shaded lines represent values copied from related papers. Asterisks denote significant improvements against paired counterparts with 1% confidence (**) and 5% confidence (*).",
                        "page": 6,
                        "bbox": {
                            "x1": 98.88,
                            "x2": 498.24,
                            "y1": 61.44,
                            "y2": 240.0
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Table3-1.png",
                        "caption": "Table 3: Performance of our models and baseline models on W-NUT. The same notations as those in Table 2 are used in this table.",
                        "page": 6,
                        "bbox": {
                            "x1": 98.88,
                            "x2": 498.24,
                            "y1": 302.88,
                            "y2": 441.12
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Figure1-1.png",
                        "caption": "Figure 1: Overview of the proposed model. RNN denotes a recurrent neural network layer. FC denotes a fully connected layer. The striped layers are message-level processes. ⊕ represents element-wise addition.",
                        "page": 2,
                        "bbox": {
                            "x1": 138.72,
                            "x2": 458.4,
                            "y1": 62.879999999999995,
                            "y2": 321.12
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Table6-1.png",
                        "caption": "Table 6: Regularization parameters and bucket sizes selected for our models and baseline models.",
                        "page": 12,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 527.04,
                            "y1": 224.64,
                            "y2": 444.0
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Table7-1.png",
                        "caption": "Table 7: MAD parameters and celebrity threshold selected for baseline models.",
                        "page": 12,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 527.04,
                            "y1": 484.79999999999995,
                            "y2": 601.92
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Table5-1.png",
                        "caption": "Table 5: Unit sizes, embedding dimensions, and max tweet numbers of our models.",
                        "page": 12,
                        "bbox": {
                            "x1": 310.56,
                            "x2": 521.28,
                            "y1": 61.44,
                            "y2": 185.28
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Figure4-1.png",
                        "caption": "Figure 4: Estimated probability density functions of the four representations in AttentionU.",
                        "page": 7,
                        "bbox": {
                            "x1": 66.72,
                            "x2": 299.03999999999996,
                            "y1": 64.8,
                            "y2": 246.23999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Figure5-1.png",
                        "caption": "Figure 5: Estimated probability density functions of user network representations in AttentionUN.",
                        "page": 7,
                        "bbox": {
                            "x1": 353.76,
                            "x2": 479.03999999999996,
                            "y1": 63.839999999999996,
                            "y2": 161.28
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Figure3-1.png",
                        "caption": "Figure 3: Overview of the user network component with a detailed description of the elementwise addition and AttentionN.",
                        "page": 3,
                        "bbox": {
                            "x1": 301.44,
                            "x2": 525.12,
                            "y1": 62.879999999999995,
                            "y2": 244.32
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Figure2-1.png",
                        "caption": "Figure 2: Overview of the text component with detailed description of RNNM and AttentionM.",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 296.15999999999997,
                            "y1": 63.839999999999996,
                            "y2": 240.0
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Table4-1.png",
                        "caption": "Table 4: Error distance values in TwitterUS with oracle predictions. σ in the table denotes the standard deviation.",
                        "page": 8,
                        "bbox": {
                            "x1": 110.88,
                            "x2": 251.04,
                            "y1": 232.79999999999998,
                            "y2": 274.08
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Figure6-1.png",
                        "caption": "Figure 6: A k-means clustering result and the attention probabilities of users that are closest to the cluster centroids. The underlined values are the max values of the two datasets for each column.",
                        "page": 8,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 522.24,
                            "y1": 67.67999999999999,
                            "y2": 192.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1264-Table1-1.png",
                        "caption": "Table 1: Some properties of TwitterUS (train) and W-NUT (train). We were able to obtain approximately 70–78% of the full datasets because of accessibility changes in Twitter.",
                        "page": 4,
                        "bbox": {
                            "x1": 100.8,
                            "x2": 261.12,
                            "y1": 61.44,
                            "y2": 179.04
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-62"
        },
        {
            "slides": {
                "0": {
                    "title": "Structured Prediction Reviewed",
                    "text": [
                        "z Shareholders took their money",
                        "s s their money took their took money their took s , s s",
                        "s.t. z forms a tree"
                    ],
                    "page_nums": [
                        9,
                        10,
                        11
                    ],
                    "images": []
                },
                "1": {
                    "title": "Linear Programming Formulation",
                    "text": [
                        "z Shareholders took their money",
                        "s.t. z forms a tree",
                        "s s took took money their"
                    ],
                    "page_nums": [
                        12,
                        13
                    ],
                    "images": []
                },
                "2": {
                    "title": "Backprop",
                    "text": [
                        "s s took took money their",
                        "s.t. z forms a tree",
                        "z Shareholders took theirmoney rzL",
                        "s their took rsL",
                        "We have: rzL We need: rsL",
                        "z = argmax z>s"
                    ],
                    "page_nums": [
                        15,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22
                    ],
                    "images": []
                },
                "3": {
                    "title": "Some Geometry",
                    "text": [
                        "Straight-through Estimator (STE): rsL rzL",
                        "q Shareholders took theirmoney"
                    ],
                    "page_nums": [
                        23,
                        24,
                        25,
                        26,
                        27,
                        28
                    ],
                    "images": []
                },
                "4": {
                    "title": "Algorithm",
                    "text": [
                        "z Shareholders took their money rzL",
                        "Parser z argmax z> s",
                        "s s took took money their s.t. z forms a tree rsL",
                        "s their took rsL z\u0000 q"
                    ],
                    "page_nums": [
                        29,
                        30,
                        31,
                        32,
                        33
                    ],
                    "images": []
                },
                "6": {
                    "title": "Applications",
                    "text": [
                        "Shareholders took theirmoney L1 Shareholders took theirmoney",
                        "Joint learning Induce latent structures",
                        "Training data Training data",
                        "argmax Parser rL1 argmax Parser",
                        "Downstream task r\u0000L2 Downstream task r\u0000L",
                        "Loss L2 Loss L"
                    ],
                    "page_nums": [
                        36,
                        37,
                        38
                    ],
                    "images": []
                },
                "8": {
                    "title": "SemEval 15 Micro averaged labeled F1",
                    "text": [
                        "Neurbo Pipeline STE Structured Att. SPIGOT",
                        "Hard decision z N/A"
                    ],
                    "page_nums": [
                        43,
                        44,
                        45,
                        46
                    ],
                    "images": []
                },
                "9": {
                    "title": "Semantic Parsing for Sentiment Classification",
                    "text": [
                        "Semantic graph arg1 poss",
                        "Martins et al., arg max Semantic Parser",
                        "Shareholders took theirmoney BiLSTM+MLP",
                        "took: arg1 took:arg2; their:poss",
                        "Concat head token and role Classifier"
                    ],
                    "page_nums": [
                        47,
                        48
                    ],
                    "images": []
                },
                "10": {
                    "title": "Stanford Sentiment Treebank accuracy",
                    "text": [
                        "BiLSTM Pipeline STE SPIGOT"
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": []
                }
            },
            "paper_title": "Backpropagating through Structured Argmax using a SPIGOT",
            "paper_id": "1278",
            "paper": {
                "title": "Backpropagating through Structured Argmax using a SPIGOT",
                "abstract": "We introduce the structured projection of intermediate gradients optimization technique (SPIGOT), a new method for backpropagating through neural networks that include hard-decision structured predictions (e.g., parsing) in intermediate layers. SPIGOT requires no marginal inference, unlike structured attention networks (Kim et al., 2017) and some reinforcement learning-inspired solutions (Yogatama et al., 2017) . Like socalled straight-through estimators (Hinton, 2012), SPIGOT defines gradient-like quantities associated with intermediate nondifferentiable operations, allowing backpropagation before and after them; SPIGOT's proxy aims to ensure that, after a parameter update, the intermediate structure will remain well-formed. We experiment on two structured NLP pipelines: syntactic-then-semantic dependency parsing, and semantic parsing followed by sentiment classification. We show that training with SPIGOT leads to a larger improvement on the downstream task than a modularly-trained pipeline, the straight-through estimator, and structured attention, reaching a new state of the art on semantic dependency parsing.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Learning methods for natural language processing are increasingly dominated by end-to-end differentiable functions that can be trained using gradient-based optimization."
                    },
                    {
                        "id": 1,
                        "string": "Yet traditional NLP often assumed modular stages of processing that formed a pipeline; e.g., text was tokenized, then tagged with parts of speech, then parsed into a phrase-structure or dependency tree, then semantically analyzed."
                    },
                    {
                        "id": 2,
                        "string": "Pipelines, which make \"hard\" (i.e., discrete) decisions at each stage, appear to be incompatible with neural learning, leading many researchers to abandon earlier-stage processing."
                    },
                    {
                        "id": 3,
                        "string": "Inspired by findings that continue to see benefit from various kinds of linguistic or domain-specific preprocessing (He et al., 2017; Oepen et al., 2017; Ji and Smith, 2017) , we argue that pipelines can be treated as layers in neural architectures for NLP tasks."
                    },
                    {
                        "id": 4,
                        "string": "Several solutions are readily available: • Reinforcement learning (most notably the REINFORCE algorithm; Williams, 1992) , and structured attention (SA; Kim et al., 2017) ."
                    },
                    {
                        "id": 5,
                        "string": "These methods replace argmax with a sampling or marginalization operation."
                    },
                    {
                        "id": 6,
                        "string": "We note two potential downsides of these approaches: (i) not all argmax-able operations have corresponding sampling or marginalization methods that are efficient, and (ii) inspection of intermediate outputs, which could benefit error analysis and system improvement, is more straightforward for hard decisions than for posteriors."
                    },
                    {
                        "id": 7,
                        "string": "• The straight-through estimator (STE; Hinton, 2012) treats discrete decisions as if they were differentiable and simply passes through gradients."
                    },
                    {
                        "id": 8,
                        "string": "While fast and surprisingly effective, it ignores constraints on the argmax problem, such as the requirement that every word has exactly one syntactic parent."
                    },
                    {
                        "id": 9,
                        "string": "We will find, experimentally, that the quality of intermediate representations degrades substantially under STE."
                    },
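                    {
                        "id": "9a",
                        "string": "A sketch of the straight-through estimator as a PyTorch autograd function (illustrative, not the paper's code): the forward pass makes a hard one-hot decision per row, and the backward pass copies gradients through unchanged, ignoring the argmax's constraints:\n\nimport torch\n\nclass STEArgmax(torch.autograd.Function):\n    @staticmethod\n    def forward(ctx, scores):\n        # Hard one-hot decision along the last dimension.\n        hard = torch.zeros_like(scores)\n        hard.scatter_(-1, scores.argmax(dim=-1, keepdim=True), 1.0)\n        return hard\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        return grad_output  # pass gradients straight through"
                    },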
                    {
                        "id": 10,
                        "string": "This paper introduces a new method, the structured projection of intermediate gradients optimization technique (SPIGOT; §2), which defines a proxy for the gradient of a loss function with respect to the input to argmax."
                    },
                    {
                        "id": 11,
                        "string": "Unlike STE's gradient proxy, SPIGOT aims to respect the constraints in the argmax problem."
                    },
                    {
                        "id": 12,
                        "string": "SPIGOT can be applied with any intermediate layer that is expressible as a constrained maximization problem, and whose feasible set can be projected onto."
                    },
                    {
                        "id": 13,
                        "string": "We show empirically that SPIGOT works even when the maximization and the projection are done approximately."
                    },
                    {
                        "id": 14,
                        "string": "We offer two concrete architectures that employ structured argmax as an intermediate layer: semantic parsing with syntactic parsing in the middle, and sentiment analysis with semantic parsing in the middle ( §3)."
                    },
                    {
                        "id": 15,
                        "string": "These architectures are trained using a joint objective, with one part using data for the intermediate task, and the other using data for the end task."
                    },
                    {
                        "id": 16,
                        "string": "The datasets are not assumed to overlap at all, but the parameters for the intermediate task are affected by both parts of the training data."
                    },
                    {
                        "id": 17,
                        "string": "Our experiments ( §4) show that our architecture improves over a state-of-the-art semantic dependency parser, and that SPIGOT offers stronger performance than a pipeline, SA, and STE."
                    },
                    {
                        "id": 18,
                        "string": "On sentiment classification, we show that semantic parsing offers improvement over a BiLSTM, more so with SPIGOT than with alternatives."
                    },
                    {
                        "id": 19,
                        "string": "Our analysis considers how the behavior of the intermediate parser is affected by the end task ( §5)."
                    },
                    {
                        "id": 20,
                        "string": "Our code is open-source and available at https:// github.com/Noahs-ARK/SPIGOT."
                    },
                    {
                        "id": 21,
                        "string": "Method Our aim is to allow a (structured) argmax layer in a neural network to be treated almost like any other differentiable function."
                    },
                    {
                        "id": 22,
                        "string": "This would allow us to place, for example, a syntactic parser in the middle of a neural network, so that the forward calculation simply calls the parser and passes the parse tree to the next layer, which might derive syntactic features for the next stage of processing."
                    },
                    {
                        "id": 23,
                        "string": "The challenge is in the backward computation, which is key to learning with standard gradientbased methods."
                    },
                    {
                        "id": 24,
                        "string": "When its output is discrete as we assume here, argmax is a piecewise constant function."
                    },
                    {
                        "id": 25,
                        "string": "At every point, its gradient is either zero or undefined."
                    },
                    {
                        "id": 26,
                        "string": "So instead of using the true gradient, we will introduce a proxy for the gradient of the loss function with respect to the inputs to argmax, allowing backpropagation to proceed through the argmax layer."
                    },
                    {
                        "id": 27,
                        "string": "Our proxy is designed as an improvement to earlier methods (discussed below) that completely ignore constraints on the argmax operation."
                    },
                    {
                        "id": 28,
                        "string": "It accomplishes this through a projec-tion of the gradients."
                    },
                    {
                        "id": 29,
                        "string": "We first lay out notation, and then briefly review max-decoding and its relaxation ( §2.1)."
                    },
                    {
                        "id": 30,
                        "string": "We define SPIGOT in §2.2, and show how to use it to backpropagate through NLP pipelines in §2.3."
                    },
                    {
                        "id": 31,
                        "string": "Notation."
                    },
                    {
                        "id": 32,
                        "string": "Our discussion centers around two tasks: a structured intermediate task followed by an end task, where the latter considers the outputs of the former (e.g., syntactic-then-semantic parsing)."
                    },
                    {
                        "id": 33,
                        "string": "Inputs are denoted as x, and end task outputs as y."
                    },
                    {
                        "id": 34,
                        "string": "We use z to denote intermediate structures derived from x."
                    },
                    {
                        "id": 35,
                        "string": "We will often refer to the intermediate task as \"decoding\", in the structured prediction sense."
                    },
                    {
                        "id": 36,
                        "string": "It seeks an output z = argmax z∈Z S from the feasible set Z, maximizing a (learned, parameterized) scoring function S for the structured intermediate task."
                    },
                    {
                        "id": 37,
                        "string": "L denotes the loss of the end task, which may or may not also involve structured predictions."
                    },
                    {
                        "id": 38,
                        "string": "We use ∆ k−1 = {p ∈ R k | 1 p = 1, p ≥ 0} to denote the (k − 1)-dimensional simplex."
                    },
                    {
                        "id": 39,
                        "string": "We denote the domain of binary variables as B = {0, 1}, and the unit interval as U = [0, 1]."
                    },
                    {
                        "id": 40,
                        "string": "By projection of a vector v onto a set A, we mean the closest point in A to v, measured by Euclidean distance: proj A (v) = argmin v ∈A v − v 2 ."
                    },
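                    {
                        "id": "40a",
                        "string": "As a concrete instance of such a projection, the Euclidean projection onto the simplex Δ^{k−1} can be computed with a standard sort-and-threshold algorithm in the style of Duchi et al. (2008); a sketch, not the paper's code:\n\nimport numpy as np\n\ndef project_to_simplex(v):\n    # Returns argmin over p in the simplex of ||p - v||_2.\n    u = np.sort(v)[::-1]  # sort descending\n    css = np.cumsum(u)\n    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]\n    theta = (css[rho] - 1.0) / (rho + 1.0)\n    return np.maximum(v - theta, 0.0)"
                    },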
                    {
                        "id": 41,
                        "string": "Relaxed Decoding Decoding problems are typically decomposed into a collection of \"parts\", such as arcs in a dependency tree or graph."
                    },
                    {
                        "id": 42,
                        "string": "In such a setup, each element of z, z i , corresponds to one possible part, and z i takes a boolean value to indicate whether the part is included in the output structure."
                    },
                    {
                        "id": 43,
                        "string": "The scoring function S is assumed to decompose into a vector s(x) of part-local, input-specific scores: z = argmax z∈Z S(x, z) = argmax z∈Z z s(x) (1) In the following, we drop s's dependence on x for clarity."
                    },
                    {
                        "id": 44,
                        "string": "In many NLP problems, the output space Z can be specified by linear constraints (Roth and Yih, 2004) : A z ψ ≤ b, (2) where ψ are auxiliary variables (also scoped by argmax), together with integer constraints (typically, each z i ∈ B)."
                    },
                    {
                        "id": 45,
                        "string": "Figure 1 : The original feasible set Z (red vertices), is relaxed into a convex polytope P (the area encompassed by blue edges)."
                    },
                    {
                        "id": 46,
                        "string": "Left: making a gradient update toẑ makes it step outside the polytope, and it is projected back to P, resulting in the projected pointz."
                    },
                    {
                        "id": 47,
                        "string": "∇ s L is then along the edge."
                    },
                    {
                        "id": 48,
                        "string": "Right: updatingẑ keeps it within P, and thus ∇ s L = η∇ẑL."
                    },
                    {
                        "id": 49,
                        "string": "The problem in Equation 1 can be NP-complete in general, so the {0, 1} constraints are often relaxed to [0, 1] to make decoding tractable (Martins et al., 2009) ."
                    },
                    {
                        "id": 50,
                        "string": "Then the discrete combinatorial problem over Z is transformed into the optimization of a linear objective over a convex polytope P ={p ∈ R d |Ap≤b}, which is solvable in polynomial time (Bertsimas and Tsitsiklis, 1997) ."
                    },
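                    {
                        "id": "50a",
                        "string": "As an illustration of relaxed decoding, the linear objective over P can be handed to an off-the-shelf LP solver; a toy sketch with SciPy (the instance is ours, assuming two mutually exclusive parts plus a free one):\nimport numpy as np\nfrom scipy.optimize import linprog\n\ns = np.array([2.0, 1.0, 0.5])    # part scores s(x)\nA = np.array([[1.0, 1.0, 0.0]])  # p_1 + p_2 <= 1 (mutual exclusion)\nb = np.array([1.0])\n# maximize s . p over P = {p | Ap <= b, 0 <= p <= 1}, i.e. minimize -s . p\nres = linprog(-s, A_ub=A, b_ub=b, bounds=(0.0, 1.0))\nprint(res.x)  # a vertex of P, here [1. 0. 1.]"
                    },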
                    {
                        "id": 51,
                        "string": "This is not necessary in some cases, where the argmax can be solved exactly with dynamic programming."
                    },
                    {
                        "id": 52,
                        "string": "From STE to SPIGOT We now view structured argmax as an activation function that takes a vector of input-specific partscores s and outputs a solutionẑ."
                    },
                    {
                        "id": 53,
                        "string": "For backpropagation, to calculate gradients for parameters of s, the chain rule defines: ∇ s L = J ∇ẑL, (3) where the Jacobian matrix J = ∂ẑ ∂s contains the derivative of each element ofẑ with respect to each element of s. Unfortunately, argmax is a piecewise constant function, so its Jacobian is either zero (almost everywhere) or undefined (in the case of ties)."
                    },
                    {
                        "id": 54,
                        "string": "One solution, taken in structured attention, is to replace the argmax with marginal inference and a softmax function, so thatẑ encodes probability distributions over parts (Kim et al., 2017; Liu and Lapata, 2018) ."
                    },
                    {
                        "id": 55,
                        "string": "As discussed in §1, there are two reasons to avoid this modification."
                    },
                    {
                        "id": 56,
                        "string": "Softmax can only be used when marginal inference is feasible, by sum-product algorithms for example (Eisner, 2016; Friesen and Domingos, 2016) ; in general marginal inference can be #P-complete."
                    },
                    {
                        "id": 57,
                        "string": "Further, a soft intermediate layer will be less amenable to inspection by anyone wishing to understand and improve the model."
                    },
                    {
                        "id": 58,
                        "string": "In another line of work, argmax is augmented with a strongly-convex penalty on the solutions (Martins and Astudillo, 2016; Amos and Kolter, 2017; Niculae and Blondel, 2017; Niculae et al., 2018; Mensch and Blondel, 2018) ."
                    },
                    {
                        "id": 59,
                        "string": "However, their approaches require solving a relaxation even when exact decoding is tractable."
                    },
                    {
                        "id": 60,
                        "string": "Also, the penalty will bias the solutions found by the decoder, which may be an undesirable conflation of computational and modeling concerns."
                    },
                    {
                        "id": 61,
                        "string": "A simpler solution is the STE method (Hinton, 2012), which replaces the Jacobian matrix in Equation 3 by the identity matrix."
                    },
                    {
                        "id": 62,
                        "string": "This method has been demonstrated to work well when used to \"backpropagate\" through hard threshold functions Friesen and Domingos, 2018) and categorical random variables (Jang et al., 2016; Choi et al., 2017) ."
                    },
                    {
                        "id": 63,
                        "string": "Consider for a moment what we would do ifẑ were a vector of parameters, rather than intermediate predictions."
                    },
                    {
                        "id": 64,
                        "string": "In this case, we are seeking points in Z that minimize L; denote that set of minimizers by Z * ."
                    },
                    {
                        "id": 65,
                        "string": "Given ∇ẑL and step size η, we would updateẑ to beẑ − η∇ẑL."
                    },
                    {
                        "id": 66,
                        "string": "This update, however, might not return a value in the feasible set Z, or even (if we are using a linear relaxation) the relaxed set P. SPIGOT therefore introduces a projection step that aims to keep the \"updated\"ẑ in the feasible set."
                    },
                    {
                        "id": 67,
                        "string": "Of course, we do not directly updateẑ; we continue backpropagation through s and onward to the parameters."
                    },
                    {
                        "id": 68,
                        "string": "But the projection step nonetheless alters the parameter updates in the way that our proxy for \"∇ s L\" is defined."
                    },
                    {
                        "id": 69,
                        "string": "The procedure is defined as follows: Due to the convexity of P, the projected pointz will always be unique, and is guaranteed to be no farther thanp from any point in Z * (Luenberger and Ye, 2015)."
                    },
                    {
                        "id": 70,
                        "string": "1 Compared to STE, SPIGOT in-volves a projection and limits ∇ s L to a smaller space to satisfy constraints."
                    },
                    {
                        "id": 71,
                        "string": "See Figure 1 for an illustration."
                    },
                    {
                        "id": 72,
                        "string": "p =ẑ − η∇ẑL, (4a) z = proj P (p), (4b) ∇ s L ẑ −z."
                    },
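                    {
                        "id": "72a",
                        "string": "Equations 4a-4c translate directly into code; a minimal sketch (names are ours; proj is any Euclidean projection onto P, e.g. proj_box above; under STE one would instead pass grad_z back unchanged):\nimport numpy as np\n\ndef spigot_grad(z_hat, grad_z, proj, eta=1.0):\n    # (4a) gradient step on the decoded structure\n    p_hat = z_hat - eta * grad_z\n    # (4b) project back onto the relaxed polytope P\n    z_tilde = proj(p_hat)\n    # (4c) proxy gradient with respect to the part scores s\n    return z_hat - z_tilde"
                    },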
                    {
                        "id": 73,
                        "string": "(4c) When efficient exact solutions (such as dynamic programming) are available, they can be used."
                    },
                    {
                        "id": 74,
                        "string": "Yet, we note that SPIGOT does not assume the argmax operation is solved exactly."
                    },
                    {
                        "id": 75,
                        "string": "Backpropagation through Pipelines Using SPIGOT, we now devise an algorithm to \"backpropagate\" through NLP pipelines."
                    },
                    {
                        "id": 76,
                        "string": "In these pipelines, an intermediate task's output is fed into an end task for use as features."
                    },
                    {
                        "id": 77,
                        "string": "The parameters of the complete model are divided into two parts: denote the parameters of the intermediate task model by φ (used to calculate s), and those in the end task model as θ."
                    },
                    {
                        "id": 78,
                        "string": "2 As introduced earlier, the end-task loss function to be minimized is L, which depends on both φ and θ. Algorithm 1 describes the forward and backward computations."
                    },
                    {
                        "id": 79,
                        "string": "It takes an end task training pair x, y , along with the intermediate task's feasible set Z, which is determined by x."
                    },
                    {
                        "id": 80,
                        "string": "It first runs the intermediate model and decodes to get intermediate structureẑ, just as in a standard pipeline."
                    },
                    {
                        "id": 81,
                        "string": "Then forward propagation is continued into the end-task model to compute loss L, usingẑ to define input features."
                    },
                    {
                        "id": 82,
                        "string": "Backpropagation in the endtask model computes ∇ θ L and ∇ẑL, and ∇ s L is then constructed using Equations 4."
                    },
                    {
                        "id": 83,
                        "string": "Backpropagation then continues into the intermediate model, computing ∇ φ L. Due to its flexibility, SPIGOT is applicable to many training scenarios."
                    },
                    {
                        "id": 84,
                        "string": "When there is no x, z training data for the intermediate task, SPIGOT can be used to induce latent structures for the end-task (Yogatama et al., 2017; Kim et al., 2017; Choi et al., 2017 , inter alia)."
                    },
                    {
                        "id": 85,
                        "string": "When intermediate-task training data is available, one can use SPIGOT to adopt joint learning by minimizing an interpolation of L (on end-task data x, y ) and an intermediate-task loss function L (on intermediate task data x, z )."
                    },
                    {
                        "id": 86,
                        "string": "This is the setting in our experiments; note that we do not assume any overlap in the training examples for the two tasks."
                    },
                    {
                        "id": 87,
                        "string": "Algorithm 1 Forward and backward computation with SPIGOT."
                    },
                    {
                        "id": 88,
                        "string": "1: procedure SPIGOT(x, y, Z) 2: Construct A, b such that Z = {p ∈ Z d | Ap ≤ b} 3: P ← {p ∈ R d | Ap ≤ b} Relaxation 4: Forwardprop and compute s φ (x) 5:ẑ ← argmax z∈Z z s φ (x) Intermediate decoding 6: Forwardprop and compute L given x, y, andẑ 7: Backprop and compute ∇ θ L and ∇ẑL 8:z ← proj P (ẑ − η∇ẑL) Projection 9: ∇sL ←ẑ −z 10: Backprop and compute ∇ φ L 11: end procedure considered in this work, arc-factored unlabeled dependency parsing and first-order semantic dependency parsing."
                    },
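                    {
                        "id": "88a",
                        "string": "Algorithm 1 can be wrapped as a custom autograd operation so that lines 8-9 replace the (zero almost everywhere) argmax Jacobian; a sketch in PyTorch (the paper's experiments used DyNet; decode and project are user-supplied callables, and the class name is ours):\nimport torch\n\nclass SpigotArgmax(torch.autograd.Function):\n    # Structured argmax in the forward pass, SPIGOT in the backward pass.\n    @staticmethod\n    def forward(ctx, s, decode, project, eta):\n        z_hat = decode(s)  # line 5: exact intermediate decoding\n        ctx.save_for_backward(z_hat)\n        ctx.project, ctx.eta = project, eta\n        return z_hat\n\n    @staticmethod\n    def backward(ctx, grad_z):\n        (z_hat,) = ctx.saved_tensors\n        z_tilde = ctx.project(z_hat - ctx.eta * grad_z)  # line 8\n        # line 9: the proxy gradient flows back into the scores s\n        return z_hat - z_tilde, None, None, None"
                    },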
                    {
                        "id": 89,
                        "string": "In early experiments we observe that for both tasks, projecting with respect to all constraints of their original formulations using a generic quadratic program solver was prohibitively slow."
                    },
                    {
                        "id": 90,
                        "string": "Therefore, we construct relaxed polytopes by considering only a subset of the constraints."
                    },
                    {
                        "id": 91,
                        "string": "3 The projection then decomposes into a series of singly constrained quadratic programs (QP), each of which can be efficiently solved in linear time."
                    },
                    {
                        "id": 92,
                        "string": "The two approximate projections discussed here are used in backpropagation only."
                    },
                    {
                        "id": 93,
                        "string": "In the forward pass, we solve the decoding problem using the models' original decoding algorithms."
                    },
                    {
                        "id": 94,
                        "string": "Arc-factored unlabeled dependency parsing."
                    },
                    {
                        "id": 95,
                        "string": "For unlabeled dependency trees, we impose [0, 1] constraints and single-headedness constraints."
                    },
                    {
                        "id": 96,
                        "string": "4 Formally, given a length-n input sentence, excluding self-loops, an arc-factored parser considers d = n(n − 1) candidate arcs."
                    },
                    {
                        "id": 97,
                        "string": "Let i→j denote an arc from the ith token to the jth, and σ(i→j) denote its index."
                    },
                    {
                        "id": 98,
                        "string": "We construct the relaxed feasible set by: P DEP =    p ∈ U d i =j p σ(i→j) = 1, ∀j    , (5) i.e., we consider each token j individually, and force single-headedness by constraining the number of arcs incoming to j to sum to 1."
                    },
                    {
                        "id": 99,
                        "string": "Algorithm 2 summarizes the procedure to project onto P DEP ."
                    },
                    {
                        "id": 100,
                        "string": "Line 3 forms a singly constrained QP, and can be solved in O(n) time (Brucker, 1984) ."
                    },
                    {
                        "id": 101,
                        "string": "Algorithm 2 Projection onto the relaxed polytope P DEP for dependency tree structures."
                    },
                    {
                        "id": 102,
                        "string": "Let bold σ(·→j) denote the index set of arcs incoming to j."
                    },
                    {
                        "id": 103,
                        "string": "For a vector v, we use v σ(·→j) to denote vector [v k ] k∈σ(·→j) ."
                    },
                    {
                        "id": 104,
                        "string": "1: procedure DEPPROJ(p)"
                    },
                    {
                        "id": 105,
                        "string": "2: for j = 1, 2, ..., n do"
                    },
                    {
                        "id": 106,
                        "string": "3: z̃_σ(·→j) ← proj_{∆^{n−2}}(p_σ(·→j))\n4: end for\n5: return z̃\n6: end procedure"
                    },
                    {
                        "id": 107,
                        "string": "First-order semantic dependency parsing."
                    },
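                    {
                        "id": "107a",
                        "string": "A sketch of the per-token projection in Algorithm 2 (we use the standard O(n log n) sort-based simplex projection for clarity; Brucker (1984) gives an O(n) algorithm; sigma_in is our name for the per-token index sets):\nimport numpy as np\n\ndef proj_simplex(v):\n    # Euclidean projection onto the probability simplex.\n    u = np.sort(v)[::-1]\n    css = np.cumsum(u) - 1.0\n    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]\n    theta = css[rho] / (rho + 1.0)\n    return np.maximum(v - theta, 0.0)\n\ndef dep_proj(p, sigma_in):\n    # Project each token's block of incoming-arc values independently.\n    z = np.empty_like(p)\n    for idx in sigma_in:  # idx = indices sigma(.->j) for token j\n        z[idx] = proj_simplex(p[idx])\n    return z"
                    },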
                    {
                        "id": 108,
                        "string": "Semantic dependency parsing uses labeled bilexical dependencies to represent sentence-level semantics (Oepen et al., 2014 (Oepen et al., , 2015 (Oepen et al., , 2016 ."
                    },
                    {
                        "id": 109,
                        "string": "Each dependency is represented by a labeled directed arc from a head token to a modifier token, where the arc label encodes broadly applicable semantic relations."
                    },
                    {
                        "id": 110,
                        "string": "Figure 2 diagrams a semantic graph from the DELPH-IN MRS-derived dependencies (DM), together with a syntactic tree."
                    },
                    {
                        "id": 111,
                        "string": "We use a state-of-the-art semantic dependency parser (Peng et al., 2017) that considers three types of parts: heads, unlabeled arcs, and labeled arcs."
                    },
                    {
                        "id": 112,
                        "string": "Let σ(i → j) denote the index of the arc from i to j with semantic role ."
                    },
                    {
                        "id": 113,
                        "string": "In addition to [0, 1] constraints, we constrain that the predictions for labeled arcs sum to the prediction of their associated unlabeled arc: P SDP p ∈ U d p σ(i →j) = p σ(i→j) , ∀i = j ."
                    },
                    {
                        "id": 114,
                        "string": "(6) This ensures that exactly one label is predicted if and only if its arc is present."
                    },
                    {
                        "id": 115,
                        "string": "The projection onto P SDP can be solved similarly to Algorithm 2."
                    },
                    {
                        "id": 116,
                        "string": "We drop the determinism constraint imposed by Peng et al."
                    },
                    {
                        "id": 117,
                        "string": "(2017) in the backward computation."
                    },
                    {
                        "id": 118,
                        "string": "Experiments We empirically evaluate our method with two sets of experiments: using syntactic tree structures in semantic dependency parsing, and using semantic dependency graphs in sentiment classification."
                    },
                    {
                        "id": 119,
                        "string": "Syntactic-then-Semantic Parsing In this experiment we consider an intermediate syntactic parsing task, followed by seman- tic dependency parsing as the end task."
                    },
                    {
                        "id": 120,
                        "string": "We first briefly review the neural network architectures for the two models ( §4.1.1), and then introduce the datasets ( §4.1.2) and baselines ( §4.1.3)."
                    },
                    {
                        "id": 121,
                        "string": "Architectures Syntactic dependency parser."
                    },
                    {
                        "id": 122,
                        "string": "For intermediate syntactic dependencies, we use the unlabeled arc-factored parser of Kiperwasser and Goldberg (2016) ."
                    },
                    {
                        "id": 123,
                        "string": "It uses bidirectional LSTMs (BiLSTM) to encode the input, followed by a multilayerperceptron (MLP) to score each potential dependency."
                    },
                    {
                        "id": 124,
                        "string": "One notable modification is that we replace their use of Chu-Liu/Edmonds' algorithm (Chu and Liu, 1965; Edmonds, 1967) with the Eisner algorithm (Eisner, 1996 (Eisner, , 2000 , since our dataset is in English and mostly projective."
                    },
                    {
                        "id": 125,
                        "string": "Semantic dependency parser."
                    },
                    {
                        "id": 126,
                        "string": "We use the basic model of Peng et al."
                    },
                    {
                        "id": 127,
                        "string": "(2017) (denoted as NEUR-BOPARSER) as the end model."
                    },
                    {
                        "id": 128,
                        "string": "It is a first-order parser, and uses local factors for heads, unlabeled arcs, and labeled arcs."
                    },
                    {
                        "id": 129,
                        "string": "NEURBOPARSER does not use syntax."
                    },
                    {
                        "id": 130,
                        "string": "It first encodes an input sentence with a two-layer BiLSTM, and then computes part scores with two-layer tanh-MLPs."
                    },
                    {
                        "id": 131,
                        "string": "Inference is conducted with AD 3 ."
                    },
                    {
                        "id": 132,
                        "string": "To add syntactic features to NEURBOPARSER, we concatenate a token's contextualized representation to that of its syntactic head, predicted by the intermediate parser."
                    },
                    {
                        "id": 133,
                        "string": "Formally, given length-n input sentence, we first run a BiLSTM."
                    },
                    {
                        "id": 134,
                        "string": "We use the concatenation of the two hidden representations h j = [ − → h j ; ← − h j ] at each position j as the contextualized token representations."
                    },
                    {
                        "id": 135,
                        "string": "We then concatenate h j with the representation of its head h HEAD(j) by h j = [h j ; h HEAD(j) ] =   h j ; i =jẑ σ(i→j) h i   , (7) whereẑ ∈ B n(n−1) is a binary encoding of the tree structure predicted by by the intermediate parser."
                    },
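                    {
                        "id": "135a",
                        "string": "A sketch of the concatenation in Equation 7 (names are ours; H holds the BiLSTM states and z_hat the predicted arc indicators, with sigma mapping an arc (i, j) to its index):\nimport numpy as np\n\ndef head_features(H, z_hat, sigma):\n    # h-tilde_j = [h_j ; sum over i != j of z-hat_sigma(i->j) * h_i]\n    n, d = H.shape\n    out = np.zeros((n, 2 * d))\n    for j in range(n):\n        head = sum(z_hat[sigma(i, j)] * H[i] for i in range(n) if i != j)\n        out[j] = np.concatenate([H[j], head])\n    return out"
                    },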
                    {
                        "id": 136,
                        "string": "We then use h j anywhere h j would have been used in NEURBOPARSER."
                    },
                    {
                        "id": 137,
                        "string": "In backpropagation, we compute ∇ẑL with an automatic differentiation toolkit (DyNet; Neubig et al., 2017) ."
                    },
                    {
                        "id": 138,
                        "string": "We note that this approach can be generalized to convolutional neural networks over graphs Duvenaud et al., 2015; Kipf and Welling, 2017, inter alia) , recurrent neural networks along paths Roth and Lapata, 2016, inter alia) or dependency trees (Tai et al., 2015) ."
                    },
                    {
                        "id": 139,
                        "string": "We choose to use concatenations to control the model's complexity, and thus to better understand which parts of the model work."
                    },
                    {
                        "id": 140,
                        "string": "We refer the readers to Kiperwasser and Goldberg (2016) and Peng et al."
                    },
                    {
                        "id": 141,
                        "string": "(2017) for further details of the parsing models."
                    },
                    {
                        "id": 142,
                        "string": "Training procedure."
                    },
                    {
                        "id": 143,
                        "string": "Following previous work, we minimize structured hinge loss (Tsochantaridis et al., 2004) for both models."
                    },
                    {
                        "id": 144,
                        "string": "We jointly train both models from scratch, by randomly sampling an instance from the union of their training data at each step."
                    },
                    {
                        "id": 145,
                        "string": "In order to isolate the effect of backpropagation, we do not share any parameters between the two models."
                    },
                    {
                        "id": 146,
                        "string": "5 Implementation details are summarized in the supplementary materials."
                    },
                    {
                        "id": 147,
                        "string": "Datasets • For semantic dependencies, we use the English dataset from SemEval 2015 Task 18 (Oepen et al., 2015) ."
                    },
                    {
                        "id": 148,
                        "string": "Among the three formalisms provided by the shared task, we consider DELPH-IN MRS-derived dependencies (DM) and Prague Semantic Dependencies (PSD)."
                    },
                    {
                        "id": 149,
                        "string": "6 It includes §00-19 of the WSJ corpus as training data, §20 and §21 for development and in-domain test data, resulting in a 33,961/1,692/1,410 train/dev./test split, and 5 Parameter sharing has proved successful in many related tasks (Collobert and Weston, 2008; Søgaard and Goldberg, 2016; Ammar et al., 2016; Swayamdipta et al., 2016 Swayamdipta et al., , 2017 , and could be easily combined with our approach."
                    },
                    {
                        "id": 150,
                        "string": "6 We drop the third (PAS) because its structure is highly predictable from parts-of-speech, making it less interesting."
                    },
                    {
                        "id": 151,
                        "string": "(Marcus et al., 1993) ."
                    },
                    {
                        "id": 152,
                        "string": "To avoid data leak, we depart from standard split and use §20 and §21 as development and test data, and the remaining sections as training data."
                    },
                    {
                        "id": 153,
                        "string": "The number of training/dev./test instances is 40,265/2,012/1,671."
                    },
                    {
                        "id": 154,
                        "string": "Baselines We compare to the following baselines: • A pipelined system (PIPELINE)."
                    },
                    {
                        "id": 155,
                        "string": "The pretrained parser achieves 92.9 test unlabeled attachment score (UAS)."
                    },
                    {
                        "id": 156,
                        "string": "8 • Structured attention networks (SA; Kim et al., 2017) ."
                    },
                    {
                        "id": 157,
                        "string": "We use the inside-outside algorithm (Baker, 1979) to populate z with arcs' marginal probabilities, use log-loss as the objective in training the intermediate parser."
                    },
                    {
                        "id": 158,
                        "string": "• The straight-through estimator (STE; Hinton, 2012) , introduced in §2.2."
                    },
                    {
                        "id": 159,
                        "string": "Empirical Results Table 1 compares the semantic dependency parsing performance of SPIGOT to all five baselines."
                    },
                    {
                        "id": 160,
                        "string": "FREDA3 (Peng et al., 2017) is a state-of-the-art variant of NEURBOPARSER that is trained using multitask learning to jointly predict three different semantic dependency graph formalisms."
                    },
                    {
                        "id": 161,
                        "string": "Like the basic NEURBOPARSER model that we build from, FREDA3 does not use any syntax."
                    },
                    {
                        "id": 162,
                        "string": "Strong DM performance is achieved in a more recent work by using joint learning and an ensemble (Peng et al., 2018) , which is beyond fair comparisons to the models discussed here."
                    },
                    {
                        "id": 163,
                        "string": "We found that using syntactic information improves semantic parsing performance: using pipelined syntactic head features brings 0.5-1.4% absolute labeled F 1 improvement to NEUR-BOPARSER."
                    },
                    {
                        "id": 164,
                        "string": "Such improvements are smaller compared to previous works, where dependency path and syntactic relation features are included (Almeida and Ribeyre et al., 2015; , indicating the potential to get better performance by using more syntactic information, which we leave to future work."
                    },
                    {
                        "id": 165,
                        "string": "Both STE and SPIGOT use hard syntactic features."
                    },
                    {
                        "id": 166,
                        "string": "By allowing backpropation into the intermediate syntactic parser, they both consistently outperform PIPELINE."
                    },
                    {
                        "id": 167,
                        "string": "On the other hand, when marginal syntactic tree structures are used, SA outperforms PIPELINE only on the out-of-domain PSD test set, and improvements under other cases are not observed."
                    },
                    {
                        "id": 168,
                        "string": "Compared to STE, SPIGOT outperforms STE on DM by more than 0.3% absolute labeled F 1 , both in-domain and out-of-domain."
                    },
                    {
                        "id": 169,
                        "string": "For PSD, SPIGOT achieves similar performance to STE on in-domain test set, but has a 0.5% absolute labeled F 1 improvement on out-of-domain data, where syntactic parsing is less accurate."
                    },
                    {
                        "id": 170,
                        "string": "tecture achieves 93.5 UAS when trained and evaluated with the standard split, close to the results reported by Kiperwasser and Goldberg (2016) ."
                    },
                    {
                        "id": 171,
                        "string": "Semantic Dependencies for Sentiment Classification Our second experiment uses semantic dependency graphs to improve sentiment classification performance."
                    },
                    {
                        "id": 172,
                        "string": "We are not aware of any efficient algorithm that solves marginal inference for semantic dependency graphs under determinism constraints, so we do not include a comparison to SA."
                    },
                    {
                        "id": 173,
                        "string": "Architectures Here we use NEURBOPARSER as the intermediate model, as described in §4.1.1, but with no syntactic enhancements."
                    },
                    {
                        "id": 174,
                        "string": "Sentiment classifier."
                    },
                    {
                        "id": 175,
                        "string": "We first introduce a baseline that does not use any structural information."
                    },
                    {
                        "id": 176,
                        "string": "It learns a one-layer BiLSTM to encode the input sentence, and then feeds the sum of all hidden states into a two-layer ReLU-MLP."
                    },
                    {
                        "id": 177,
                        "string": "To use semantic dependency features, we concatenate a word's BiLSTM-encoded representation to the averaged representation of its heads, together with the corresponding semantic roles, similarly to that in Equation 7."
                    },
                    {
                        "id": 178,
                        "string": "9 Then the concatenation is fed into an affine transformation followed by a ReLU activation."
                    },
                    {
                        "id": 179,
                        "string": "The rest of the model is kept the same as the BiLSTM baseline."
                    },
                    {
                        "id": 180,
                        "string": "Training procedure."
                    },
                    {
                        "id": 181,
                        "string": "We use structured hinge loss to train the semantic dependency parser, and log-loss for the sentiment classifier."
                    },
                    {
                        "id": 182,
                        "string": "Due to the discrepancy in the training data size of the two tasks (33K vs. 7K), we pre-train a semantic dependency parser, and then adopt joint training together with the classifier."
                    },
                    {
                        "id": 183,
                        "string": "In the joint training stage, we randomly sample 20% of the semantic dependency training instances each epoch."
                    },
                    {
                        "id": 184,
                        "string": "Implementations are detailed in the supplementary materials."
                    },
                    {
                        "id": 185,
                        "string": "Datasets For semantic dependencies, we use the DM dataset introduced in §4.1.2."
                    },
                    {
                        "id": 186,
                        "string": "We consider a binary classification task using the Stanford Sentiment Treebank (Socher et al., 2013) ."
                    },
                    {
                        "id": 187,
                        "string": "It consists of roughly 10K movie review sentences from Rotten Tomatoes."
                    },
                    {
                        "id": 188,
                        "string": "The full dataset includes a rating on a scale from 1 to 5 for each constituent (including the full sentences), resulting in more than 200K instances."
                    },
                    {
                        "id": 189,
                        "string": "Following previous work (Iyyer et al., 2015) , we only use full-sentence  instances, with neutral instances excluded (3s) and the remaining four rating levels converted to binary \"positive\" or \"negative\" labels."
                    },
                    {
                        "id": 190,
                        "string": "This results in a 6,920/872/1,821 train/dev./test split."
                    },
                    {
                        "id": 191,
                        "string": "Empirical Results Table 2 compares our SPIGOT method to three baselines."
                    },
                    {
                        "id": 192,
                        "string": "Pipelined semantic dependency predictions brings 0.9% absolute improvement in classification accuracy, and SPIGOT outperforms all baselines."
                    },
                    {
                        "id": 193,
                        "string": "In this task STE achieves slightly worse performance than a fixed pre-trained PIPELINE."
                    },
                    {
                        "id": 194,
                        "string": "Analysis We examine here how the intermediate model is affected by the end-task training signal."
                    },
                    {
                        "id": 195,
                        "string": "Is the endtask signal able to \"overrule\" intermediate predictions?"
                    },
                    {
                        "id": 196,
                        "string": "We use the syntactic-then-semantic parsing model ( §4.1) as a case study."
                    },
                    {
                        "id": 197,
                        "string": "Table 3 compares a pipelined system to one jointly trained using SPIGOT."
                    },
                    {
                        "id": 198,
                        "string": "We consider the development set instances where both syntactic and semantic annotations are available, and partition them based on whether the two systems' syntactic predictions agree (SAME), or not (DIFF)."
                    },
                    {
                        "id": 199,
                        "string": "The second group includes sentences with much lower syntactic parsing accuracy (91.3 vs. 97.4 UAS), and SPIGOT further reduces this to 89.6."
                    },
                    {
                        "id": 200,
                        "string": "Even though these changes hurt syntactic parsing accuracy, they lead to a 1.1% absolute gain in labeled F 1 for semantic parsing."
                    },
                    {
                        "id": 201,
                        "string": "Furthermore, SPIGOT Table 3 : Syntactic parsing performance (in unlabeled attachment score, UAS) and DM semantic parsing performance (in labeled F 1 ) on different groups of the development data."
                    },
                    {
                        "id": 202,
                        "string": "Both systems predict the same syntactic parses for instances from SAME, and they disagree on instances from DIFF ( §5)."
                    },
                    {
                        "id": 203,
                        "string": "tree, we consider three cases: (a) h is a head of m in the semantic graph; (b) h is a modifier of m in the semantic graph; (c) h is the modifier of m in the semantic graph."
                    },
                    {
                        "id": 204,
                        "string": "The first two reflect modifications to the syntactic parse that rearrange semantically linked words to be neighbors."
                    },
                    {
                        "id": 205,
                        "string": "Under (c), the semantic parser removes a syntactic dependency that reverses the direction of a semantic dependency."
                    },
                    {
                        "id": 206,
                        "string": "These cases account for 17.6%, 10.9%, and 12.8%, respectively (41.2% combined) of the total changes."
                    },
                    {
                        "id": 207,
                        "string": "Making these changes, of course, is complicated, since they often require other modifications to maintain well-formedness of the tree."
                    },
                    {
                        "id": 208,
                        "string": "Figure 2 gives an example."
                    },
                    {
                        "id": 209,
                        "string": "Related Work Joint learning in NLP pipelines."
                    },
                    {
                        "id": 210,
                        "string": "To avoid cascading errors, much effort has been devoted to joint decoding in NLP pipelines (Habash and Rambow, 2005; Cohen and Smith, 2007; Goldberg and Tsarfaty, 2008; Lewis et al., 2015; Zhang et al., 2015, inter alia) ."
                    },
                    {
                        "id": 211,
                        "string": "However, joint inference can sometimes be prohibitively expensive."
                    },
                    {
                        "id": 212,
                        "string": "Recent advances in representation learning facilitate exploration in the joint learning of multiple tasks by sharing parameters (Collobert and Weston, 2008; Blitzer et al., 2006; Finkel and Manning, 2010; Zhang and Weiss, 2016; Hashimoto et al., 2017, inter alia) ."
                    },
                    {
                        "id": 213,
                        "string": "Differentiable optimization."
                    },
                    {
                        "id": 214,
                        "string": "Gould et al."
                    },
                    {
                        "id": 215,
                        "string": "(2016) review the generic approaches to differentiation in bi-level optimization (Bard, 2010; Kunisch and Pock, 2013) ."
                    },
                    {
                        "id": 216,
                        "string": "Amos and Kolter (2017) extend their efforts to a class of subdifferentiable quadratic programs."
                    },
                    {
                        "id": 217,
                        "string": "However, they both require that the intermediate objective has an invertible Hessian, limiting their application in NLP."
                    },
                    {
                        "id": 218,
                        "string": "In another line of work, the steps of a gradient-based optimization procedure are unrolled into a single computation graph (Stoyanov et al., 2011; Domke, 2012; Goodfellow et al., 2013; Brakel et al., 2013) ."
                    },
                    {
                        "id": 219,
                        "string": "This comes at a high computational cost due to the second-order derivative computation during backpropagation."
                    },
                    {
                        "id": 220,
                        "string": "Moreover, constrained optimization problems (like many NLP problems) often require projection steps within the procedure, which can be difficult to differentiate through (Belanger and McCallum, 2016; Belanger et al., 2017) ."
                    },
                    {
                        "id": 221,
                        "string": "Conclusion We presented SPIGOT, a novel approach to backpropagating through neural network architectures that include discrete structured decisions in intermediate layers."
                    },
                    {
                        "id": 222,
                        "string": "SPIGOT devises a proxy for the gradients with respect to argmax's inputs, employing a projection that aims to respect the constraints in the intermediate task."
                    },
                    {
                        "id": 223,
                        "string": "We empirically evaluate our method with two architectures: a semantic parser with an intermediate syntactic parser, and a sentiment classifier with an intermediate semantic parser."
                    },
                    {
                        "id": 224,
                        "string": "Experiments show that SPIGOT achieves stronger performance than baselines under both settings, and outperforms stateof-the-art systems on semantic dependency parsing."
                    },
                    {
                        "id": 225,
                        "string": "Our implementation is available at https: //github.com/Noahs-ARK/SPIGOT."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 20
                    },
                    {
                        "section": "Method",
                        "n": "2",
                        "start": 21,
                        "end": 40
                    },
                    {
                        "section": "Relaxed Decoding",
                        "n": "2.1",
                        "start": 41,
                        "end": 51
                    },
                    {
                        "section": "From STE to SPIGOT",
                        "n": "2.2",
                        "start": 52,
                        "end": 74
                    },
                    {
                        "section": "Backpropagation through Pipelines",
                        "n": "2.3",
                        "start": 75,
                        "end": 115
                    },
                    {
                        "section": "Experiments",
                        "n": "4",
                        "start": 116,
                        "end": 118
                    },
                    {
                        "section": "Syntactic-then-Semantic Parsing",
                        "n": "4.1",
                        "start": 119,
                        "end": 120
                    },
                    {
                        "section": "Architectures",
                        "n": "4.1.1",
                        "start": 121,
                        "end": 146
                    },
                    {
                        "section": "Datasets",
                        "n": "4.1.2",
                        "start": 147,
                        "end": 153
                    },
                    {
                        "section": "Baselines",
                        "n": "4.1.3",
                        "start": 154,
                        "end": 158
                    },
                    {
                        "section": "Empirical Results",
                        "n": "4.1.4",
                        "start": 159,
                        "end": 170
                    },
                    {
                        "section": "Semantic Dependencies for Sentiment Classification",
                        "n": "4.2",
                        "start": 171,
                        "end": 172
                    },
                    {
                        "section": "Architectures",
                        "n": "4.2.1",
                        "start": 173,
                        "end": 184
                    },
                    {
                        "section": "Datasets",
                        "n": "4.2.2",
                        "start": 185,
                        "end": 190
                    },
                    {
                        "section": "Empirical Results",
                        "n": "4.2.3",
                        "start": 191,
                        "end": 193
                    },
                    {
                        "section": "Analysis",
                        "n": "5",
                        "start": 194,
                        "end": 208
                    },
                    {
                        "section": "Related Work",
                        "n": "6",
                        "start": 209,
                        "end": 220
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 221,
                        "end": 225
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1278-Figure1-1.png",
                        "caption": "Figure 1: The original feasible set Z (red vertices), is relaxed into a convex polytope P (the area encompassed by blue edges). Left: making a gradient update to ẑ makes it step outside the polytope, and it is projected back to P , resulting in the projected point z̃. ∇sL is then along the edge. Right: updating ẑ keeps it within P , and thus∇sL = η∇ẑL.",
                        "page": 2,
                        "bbox": {
                            "x1": 86.88,
                            "x2": 269.28,
                            "y1": 61.44,
                            "y2": 120.96
                        }
                    },
                    {
                        "filename": "../figure/image/1278-Table1-1.png",
                        "caption": "Table 1: Semantic dependency parsing performance in both unlabeled (UF ) and labeled (LF ) F1 scores. Bold font indicates the best performance. Peng et al. (2017) does not report UF .",
                        "page": 5,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 524.16,
                            "y1": 62.4,
                            "y2": 381.12
                        }
                    },
                    {
                        "filename": "../figure/image/1278-Figure2-1.png",
                        "caption": "Figure 2: A development instance annotated with both gold DM semantic dependency graph (red arcs on the top), and gold syntactic dependency tree (blue arcs at the bottom). A pretrained syntactic parser predicts the same tree as the gold; the semantic parser backpropagates into the intermediate syntactic parser, and changes the dashed blue arcs into dashed red arcs (§5).",
                        "page": 4,
                        "bbox": {
                            "x1": 333.59999999999997,
                            "x2": 516.0,
                            "y1": 97.44,
                            "y2": 115.19999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1278-Table2-1.png",
                        "caption": "Table 2: Test accuracy of sentiment classification on Stanford Sentiment Treebank. Bold font indicates the best performance.",
                        "page": 7,
                        "bbox": {
                            "x1": 120.96,
                            "x2": 241.44,
                            "y1": 62.4,
                            "y2": 148.32
                        }
                    },
                    {
                        "filename": "../figure/image/1278-Table3-1.png",
                        "caption": "Table 3: Syntactic parsing performance (in unlabeled attachment score, UAS) and DM semantic parsing performance (in labeled F1) on different groups of the development data. Both systems predict the same syntactic parses for instances from SAME, and they disagree on instances from DIFF (§5).",
                        "page": 7,
                        "bbox": {
                            "x1": 320.64,
                            "x2": 512.16,
                            "y1": 61.44,
                            "y2": 133.92
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-63"
        },
        {
            "slides": {
                "0": {
                    "title": "Inferring Character State",
                    "text": [
                        "The band instructor told the band to Players",
                        "He often stopped the music when players were off-tone. frustrated",
                        "annoyed They grew tired and started playing",
                        "worse after a while.",
                        "The instructor was furious and threw",
                        "angry his chair. afraid",
                        "He cancelled practice and expected us to perform tomorrow. stressed"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Reasoning about Naieve Psychology",
                    "text": [
                        "New Story Commonsense Dataset:",
                        "Open text + psychology theory",
                        "Complete chains of mental states of characters",
                        "Implied changes to characters"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "How do we represent naive psychology",
                    "text": [
                        "The band instructor told the band to start playing.",
                        "He often stopped the music when players were off-tone.",
                        "To create a good harmony",
                        "Anger feels feels frustrated"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Naieve Psychology Annotations",
                    "text": [
                        "Causal source to actions"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Motivation Maslow Hierarchy of Needs 1943",
                    "text": [
                        "She sat down on the couch and instantly fell asleep.",
                        "She sat down to eat lunch."
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "5": {
                    "title": "Motivation Reiss Categories 2004",
                    "text": [
                        "Esteem She sat down on the couch",
                        "and instantly fell asleep.",
                        "Food She sat down to eat lunch."
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "Emotional Reaction Plutchik 1980",
                    "text": [
                        "Their favorite uncle died.",
                        "Suddenly, they heard a loud noise."
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "7": {
                    "title": "Implicit Mental State Changes",
                    "text": [
                        "The band instructor told the band to start playing.",
                        "He often stopped the music when players were off-tone.",
                        "They grew tired and started playing worse after a while.",
                        "The instructor was furious and threw his chair.",
                        "How are players affected? implicitly involved inference in these cases"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "8": {
                    "title": "Tracking Mental States",
                    "text": [
                        "The band instructor told the band to start playing.",
                        "He often stopped the music when players were off-tone.",
                        "They grew tired and started playing worse after a while.",
                        "The instructor was furious and threw his chair.",
                        "He cancelled practice and expected us to perform tomorrow.",
                        "Why does the instructor cancel practice? based on previous info need to incorporate context"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "10": {
                    "title": "Full Annotation Chain",
                    "text": [
                        "Sarah gets attacked by a shark.",
                        "Sarah fights off the shark.",
                        "Sarah escapes the attack.",
                        "Is Sarah taking action: Yes",
                        "Stability to escape to safety",
                        "Does the Shark have a reaction?"
                    ],
                    "page_nums": [
                        11,
                        12,
                        13,
                        14
                    ],
                    "images": []
                },
                "11": {
                    "title": "Data Collection Summary",
                    "text": [
                        "Over 300k low-level annotations for 15k stories from ROC training set",
                        "Open-text Open-text + categories"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "12": {
                    "title": "Annotated Data Distributions Motivation",
                    "text": [
                        "Fair amount of diversity in the open-text",
                        "~1/3 have positive motivation change:",
                        "Sampled Explanations Open-text % Annotations where selected",
                        "meet goal; to look nice",
                        "to support his friends",
                        "be employed; stay dry"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "13": {
                    "title": "Annotated Data Distributions Emotion",
                    "text": [
                        "Lots of happy stories",
                        "~2/3 have positive emotion change:",
                        "Explanations % Annotations where selected"
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": []
                },
                "15": {
                    "title": "Task 1 Explanation Generation",
                    "text": [
                        "Explain mental state of character using natural language",
                        "The band instructor told the band to start playing.",
                        "Story Text Excerpt + Character"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "16": {
                    "title": "Modeling",
                    "text": [
                        "Story Text + Character",
                        "Encoders - LSTM, CNN, REN, NPN",
                        "Decoder for generation: single layer",
                        "Decoder for categorization: logistic regression cat = !$`abb()"
                    ],
                    "page_nums": [
                        20,
                        26
                    ],
                    "images": []
                },
                "17": {
                    "title": "Encoding Modules",
                    "text": [
                        "Given entity and line (and entity-specific context sentences",
                        "CNN, LSTM: encode last line and context -- concatenate"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                },
                "18": {
                    "title": "Entity Modeling",
                    "text": [
                        "Recurrent Entity Networks (Henaff et al 2017)",
                        "Store separate memory cells for each story character",
                        "Update after each sentence with sentence-based hidden states",
                        "Neural Process Networks (Bosselut et al 2018)",
                        "Also has separate representations for each character",
                        "Updates after each sentence using learned action embeddings"
                    ],
                    "page_nums": [
                        22
                    ],
                    "images": []
                },
                "19": {
                    "title": "Explanation Generation Set up",
                    "text": [
                        "Evaluation: Cosine similarity of generated response to reference",
                        "Random baseline: Select random answer from dev set",
                        "Words for describing intent/emotion are close in embedding space"
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": []
                },
                "20": {
                    "title": "Explanation Generation Results",
                    "text": [
                        "Cos. Similarity to Reference",
                        "Motivation (VE) Emotion (VE)",
                        "Random LSTM CNN REN NPN"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "21": {
                    "title": "Predicting psychological categories for mental state",
                    "text": [
                        "Task 2 Mental State Classification",
                        "The band instructor told the band to start playing. anticipation",
                        "Story Text Excerpt + Character Theory categories"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "22": {
                    "title": "State Classification Results",
                    "text": [
                        "LSTM perform best on motivation categories",
                        "Entity modeling has slight improvement in Plutchik Maslow Reiss Plutchik",
                        "Random LSTM CNN REN NPN"
                    ],
                    "page_nums": [
                        27
                    ],
                    "images": []
                },
                "23": {
                    "title": "Further Improvement",
                    "text": [
                        "Random LSTM CNN REN NPN"
                    ],
                    "page_nums": [
                        28
                    ],
                    "images": []
                },
                "24": {
                    "title": "Effect of Entity Specific Context",
                    "text": [
                        "Including previous lines from context that include entity",
                        "F1 w/ and w/o context",
                        "Entity specific context: improves all models F1",
                        "CNN CNN w/ context"
                    ],
                    "page_nums": [
                        29
                    ],
                    "images": []
                },
                "27": {
                    "title": "Performance Per Category",
                    "text": [
                        "Very concrete sets of actions (physiological F1: 40% )"
                    ],
                    "page_nums": [
                        32
                    ],
                    "images": []
                },
                "28": {
                    "title": "Future Work",
                    "text": [
                        "Outside Knowledge: Help with infrequent classes and subtle implied changes",
                        "Social Commonsense: Help with inferring mental state especially in more contextual cases",
                        "Potential Applications: Improving language models, chat systems, natural language understanding"
                    ],
                    "page_nums": [
                        33
                    ],
                    "images": []
                }
            },
            "paper_title": "Modeling Naive Psychology of Characters in Simple Commonsense Stories",
            "paper_id": "1279",
            "paper": {
                "title": "Modeling Naive Psychology of Characters in Simple Commonsense Stories",
                "abstract": "Understanding a narrative requires reading between the lines and reasoning about the unspoken but obvious implications about events and people's mental states -a capability that is trivial for humans but remarkably hard for machines. To facilitate research addressing this challenge, we introduce a new annotation framework to explain naive psychology of story characters as fully-specified chains of mental states with respect to motivations and emotional reactions. Our work presents a new largescale dataset with rich low-level annotations and establishes baseline performance on several new tasks, suggesting avenues for future research.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Understanding a story requires reasoning about the causal links between the events in the story and the mental states of the characters, even when those relationships are not explicitly stated."
                    },
                    {
                        "id": 1,
                        "string": "As shown by the commonsense story cloze shared task (Mostafazadeh et al., 2017) , this reasoning is remarkably hard for both statistical and neural machine readers -despite being trivial for humans."
                    },
                    {
                        "id": 2,
                        "string": "This stark performance gap between humans and machines is not surprising as most powerful language models have been designed to effectively learn local fluency patterns."
                    },
                    {
                        "id": 3,
                        "string": "Consequently, they generally lack the ability to abstract away from surface patterns in text to model more complex implied dynamics, such as intuiting characters' mental states or predicting their plausible next actions."
                    },
                    {
                        "id": 4,
                        "string": "In this paper, we construct a new annotation formalism to densely label commonsense short stories (Mostafazadeh et al., 2016) in terms of the mental states of the characters."
                    },
                    {
                        "id": 5,
                        "string": "The result-The band instructor told the band to start playing."
                    },
                    {
                        "id": 6,
                        "string": "He often stopped the music when players were off-tone."
                    },
                    {
                        "id": 7,
                        "string": "They grew tired and started playing worse after a while."
                    },
                    {
                        "id": 8,
                        "string": "The instructor was furious and threw his chair."
                    },
                    {
                        "id": 9,
                        "string": "Figure 1 : A story example with partial annotations for motivations (dashed) and emotional reactions (solid) ."
                    },
                    {
                        "id": 10,
                        "string": "Open text explanations are in black (e.g., \"frustrated\") and formal theory labels are in blue with brackets (e.g., \"[esteem]\")."
                    },
                    {
                        "id": 11,
                        "string": "ing dataset offers three unique properties."
                    },
                    {
                        "id": 12,
                        "string": "First, as highlighted in Figure 1 , the dataset provides a fully-specified chain of motivations and emotional reactions for each story character as preand post-conditions of events."
                    },
                    {
                        "id": 13,
                        "string": "Second, the annotations include state changes for entities even when they are not mentioned directly in a sentence (e.g., in the fourth sentence in Figure 1 , players would feel afraid as a result of the instructor throwing a chair), thereby capturing implied effects unstated in the story."
                    },
                    {
                        "id": 14,
                        "string": "Finally, the annotations encompass both formal labels from multiple theories of psychology (Maslow, 1943; Reiss, 2004; Plutchik, 1980) as well as open text descriptions of motivations and emotions, providing a comprehensive mapping between open text explanations and label categories (e.g., \"to spend time with her son\" Reiss) and Emotional Reaction (Plutchik) ."
                    },
                    {
                        "id": 15,
                        "string": "!"
                    },
                    {
                        "id": 16,
                        "string": "Maslow's category love)."
                    },
                    {
                        "id": 17,
                        "string": "Our corpus 1 spans across 15k stories, amounting to 300k low-level annotations for around 150k character-line pairs."
                    },
                    {
                        "id": 18,
                        "string": "Using our new corpus, we present baseline performance on two new tasks focusing on mental state tracking of story characters: categorizing motivations and emotional reactions using theory labels, as well as describing motivations and emotional reactions using open text."
                    },
                    {
                        "id": 19,
                        "string": "Empirical results demonstrate that existing neural network models including those with explicit or latent entity representations achieve promising results."
                    },
                    {
                        "id": 20,
                        "string": "Mental State Representations Understanding people's actions, motivations, and emotions has been a recurring research focus across several disciplines including philosophy and psychology (Schachter and Singer, 1962; Burke, 1969; Lazarus, 1991; Goldman, 2015) ."
                    },
                    {
                        "id": 21,
                        "string": "We draw from these prior works to derive a set of categorical labels for annotating the step-by-step causal dynamics between the mental states of story characters and the events they experience."
                    },
                    {
                        "id": 22,
                        "string": "Motivation Theories We use two popular theories of motivation: the \"hierarchy of needs\" of Maslow (1943) and the \"basic motives\" of Reiss (2004) to compile 5 coarse-grained and 19 fine-grained motivation categories, shown in Figure 2 ."
                    },
                    {
                        "id": 23,
                        "string": "Maslow's \"hierarchy of needs\" are comprised of five categories, ranging from physiological needs to spiritual growth, which we use as coarse-level categories."
                    },
                    {
                        "id": 24,
                        "string": "Reiss (2004) proposes 19 more fine-grained categories that provide a more informative range of motivations."
                    },
                    {
                        "id": 25,
                        "string": "For example, even though they both relate to the physiological needs Maslow category, the food and rest motives from Reiss (2004) are very different."
                    },
                    {
                        "id": 26,
                        "string": "While the Reiss theory allows for finergrained annotations of motivation, the larger set of abstract concepts can be overwhelming for annotators."
                    },
                    {
                        "id": 27,
                        "string": "Motivated by Straker (2013) , we design a hybrid approach, where Reiss labels are annotated as sub-categories of Maslow categories."
                    },
                    {
                        "id": 28,
                        "string": "Emotion Theory Among several theories of emotion, we work with the \"wheel of emotions\" of Plutchik (1980) , as it has been a common choice in prior literature on emotion categorization (Mohammad and Turney, 2013; Zhou et al., 2016) ."
                    },
                    {
                        "id": 29,
                        "string": "We use the eight basic emotional dimensions as illustrated in Figure 2 ."
                    },
                    {
                        "id": 30,
                        "string": "Mental State Explanations In addition to the motivation and emotion categories derived from psychology theories, we also obtain open text descriptions of character mental states."
                    },
                    {
                        "id": 31,
                        "string": "These open text descriptions allow learning computational models that can explain the mental states of characters in natural language, which is likely to be more accessible and informative to end users than having theory categories alone."
                    },
                    {
                        "id": 32,
                        "string": "Collecting both theory categories and open text also allows us to learn the automatic mappings between the two, which generalizes the previous work of Mohammad and Turney (2013) on emotion category mappings."
                    },
                    {
                        "id": 33,
                        "string": "Annotation Framework In this study, we choose to annotate the simple commonsense stories introduced by Mostafazadeh et al."
                    },
                    {
                        "id": 34,
                        "string": "(2016) ."
                    },
                    {
                        "id": 35,
                        "string": "Despite their simplicity, these stories pose a significant challenge to natural language understanding models (Mostafazadeh et al., 2017 In addition, they depict multiple interactions between story characters, presenting rich opportunities to reason about character motivations and reactions."
                    },
                    {
                        "id": 36,
                        "string": "Furthermore, there are more than 98k such stories currently available covering a wide range of everyday scenarios."
                    },
                    {
                        "id": 37,
                        "string": "Unique Challenges While there have been a variety of annotated resources developed on the related topics of sentiment analysis (Mohammad and Turney, 2013; Deng and Wiebe, 2015) , entity tracking (Hoffart et al., 2011; Weston et al., 2015) , and story understanding (Goyal et al., 2010; Ouyang and McKeown, 2015; Lukin et al., 2016) , our study is the first to annotate the full chains of mental state effects for story characters."
                    },
                    {
                        "id": 38,
                        "string": "This poses several unique challenges as annotations require (1) interpreting discourse (2) understanding implicit causal effects, and (3) understanding formal psychology theory categories."
                    },
                    {
                        "id": 39,
                        "string": "In prior literature, annotations of this complexity have typically been performed by experts (Deng and Wiebe, 2015; Ouyang and McKeown, 2015) ."
                    },
                    {
                        "id": 40,
                        "string": "While reliable, these annotations are prohibitively expensive to scale up."
                    },
                    {
                        "id": 41,
                        "string": "Therefore, we introduce a new annotation framework that pipelines a set of smaller isolated tasks as illustrated in Figure 3 ."
                    },
                    {
                        "id": 42,
                        "string": "All annotations were collected using crowdsourced workers from Amazon Mechanical Turk."
                    },
                    {
                        "id": 43,
                        "string": "Annotation Pipeline We describe the components and workflow of the full annotation pipeline shown in Figure 3 below."
                    },
                    {
                        "id": 44,
                        "string": "The example story in the figure is used to illustrate the output of various steps in the pipeline (full annotations for this example are in the appendix)."
                    },
                    {
                        "id": 45,
                        "string": "(1) Entity Resolution The first task in the pipeline aims to discover (1) the set of characters E i in each story i and (2) the set of sentences S ij in which a specific character j 2 E i is ex-plicitly mentioned."
                    },
                    {
                        "id": 46,
                        "string": "For example, in the story in Figure 3 , the characters identified by annotators are \"I/me\" and \"My cousin\", whom appear in sentences {1, 4, 5} and {1, 2, 3, 4, 5}, respectively."
                    },
                    {
                        "id": 47,
                        "string": "We use S ij to control the workflow of later parts of the pipeline by pruning future tasks for sentences that are not tied to characters."
                    },
                    {
                        "id": 48,
                        "string": "Because S ij is used to prune follow-up tasks, we take a high recall strategy to include all sentences that at least one annotator selected."
                    },
                    {
                        "id": 49,
                        "string": "(2a) Action Resolution The next task identifies whether a character j appearing in a sentence k is taking any action to which a motivation can be attributed."
                    },
                    {
                        "id": 50,
                        "string": "We perform action resolution only for sentences k 2 S ij ."
                    },
                    {
                        "id": 51,
                        "string": "In the running example, we would want to know that the cousin in line 2 is not doing anything intentional, allowing us to omit this line in the next pipeline stage (3a) where a character's motives are annotated."
                    },
                    {
                        "id": 52,
                        "string": "Description of state (e.g., \"Alex is feeling blue\") or passive event participation (e.g., \"Alex trips\") are not considered volitional acts for which the character may have an underlying motive."
                    },
                    {
                        "id": 53,
                        "string": "For each line and story character pair, we obtain 4 annotations."
                    },
                    {
                        "id": 54,
                        "string": "Because pairs can still be filtered out in the next stage of annotation, we select a generous threshold where only 2 annotators must vote that an intentional action took place for the sentence to be used as an input to the motivation annotation task (3a)."
                    },
                    {
                        "id": 55,
                        "string": "(2b) Affect Resolution This task aims to identify all of the lines where a story character j has an emotional reaction."
                    },
                    {
                        "id": 56,
                        "string": "Importantly, it is often possible to infer the emotional reaction of a character j even when the character does not explicitly appear in a sentence k. For instance, in Figure 3 , we want to annotate the narrator's reaction to line 2 even though they are not mentioned because their emotional response is inferrable."
                    },
                    {
                        "id": 57,
                        "string": "We obtain 4 an- Figure 4 : Examples of open-text explanations that annotators provided corresponding with the categories they selected."
                    },
                    {
                        "id": 58,
                        "string": "The bars on the right of the categories represent the percentage of lines where annotators selected that category (out of those character-line pairs with positive motivation/emotional reaction)."
                    },
                    {
                        "id": 59,
                        "string": "notations per character per line."
                    },
                    {
                        "id": 60,
                        "string": "The lines with at least 2 annotators voting are used as input for the next task: (3b) emotional reaction."
                    },
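A minimal sketch of the vote-threshold pruning used between pipeline stages (2a)/(2b) and (3a)/(3b), assuming 4 judgments per character-line pair and a keep-threshold of 2 positive votes as stated above; all names here are illustrative, not from the authors' code:

```python
from collections import defaultdict

def prune_pairs(votes, min_yes=2):
    """Keep character-line pairs that at least `min_yes` annotators flagged.

    votes: iterable of (character, line, bool) judgments from the action
    (2a) or affect (2b) resolution tasks.
    """
    tally = defaultdict(int)
    for character, line, is_yes in votes:
        tally[(character, line)] += int(is_yes)
    return {pair for pair, n in tally.items() if n >= min_yes}

# Pairs with >= 2 positive votes move on to motivation (3a) or emotion (3b).
kept = prune_pairs([("cousin", 2, False), ("cousin", 2, False),
                    ("narrator", 2, True), ("narrator", 2, True)])
```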
                    {
                        "id": 61,
                        "string": "(3a) Motivation We use the output from the action resolution stage (2a) to ask workers to annotate character motives in lines where they intentionally initiate an event."
                    },
                    {
                        "id": 62,
                        "string": "We provide 3 annotators a line from a story, the preceding lines, and a specific character."
                    },
                    {
                        "id": 63,
                        "string": "They are asked to produce a free response sentence describing what causes the character's behavior in that line and to select the most related Maslow categories and Reiss subcategories."
                    },
                    {
                        "id": 64,
                        "string": "In Figure 3 , an annotator described the motivation of the narrator in line 1 as wanting \"to have company\" and then selected the love (Maslow) and family (Reiss) as categorical labels."
                    },
                    {
                        "id": 65,
                        "string": "Because many annotators are not familiar with motivational theories, we require them to complete a tutorial the first time they attempt the task."
                    },
                    {
                        "id": 66,
                        "string": "(3b) Emotional Reaction Simultaneously, we use the output from the affect resolution stage (2b) to ask workers what the emotional response of a character is immediately following a line in which they are affected."
                    },
                    {
                        "id": 67,
                        "string": "As with the motives, we give 3 annotators a line from a story, its previous context, and a specific character."
                    },
                    {
                        "id": 68,
                        "string": "We ask them to describe in open text how the character will feel following the event in the sentence (up to three emotions)."
                    },
                    {
                        "id": 69,
                        "string": "As a follow-up, we ask workers to compare their free responses against Plutchik categories by using 3-point likert ratings."
                    },
                    {
                        "id": 70,
                        "string": "In Figure 3 , we include a response for the emotional reaction of the narrator in line 1."
                    },
                    {
                        "id": 71,
                        "string": "Even though the narrator was not mentioned directly in that line, an annotator recorded that they will react to their cousin being a slob by feeling \"annoyed\" and selected the Plutchik categories for sadness, disgust and anger."
                    },
                    {
                        "id": 72,
                        "string": "Dataset Statistics and Insights Cost The tasks corresponding to the theory category assignments are the hardest and most expensive in the pipeline (⇠$4 per story)."
                    },
                    {
                        "id": 73,
                        "string": "Therefore, we obtain theory category labels only for a third of our annotated stories, which we assign to the development and test sets."
                    },
                    {
                        "id": 74,
                        "string": "The training data is annotated with a shortened pipeline with only open text descriptions of motivations and emotional reactions from two workers (⇠$1 per story)."
                    },
                    {
                        "id": 75,
                        "string": "Scale Our dataset to date includes a total of 300k low-level annotations for motivation and emotion across 15,000 stories (randomly selected from the ROC story training set)."
                    },
                    {
                        "id": 76,
                        "string": "It covers over 150,000 character-line pairs, in which 56k character-line pairs have an annotated motivation and 105k have an annotated change in emotion (i.e."
                    },
                    {
                        "id": 77,
                        "string": "a label other than none)."
                    },
                    {
                        "id": 78,
                        "string": "Table 1 shows the break down across training, development, and test splits."
                    },
                    {
                        "id": 79,
                        "string": "Figure 4 shows the frequency of different labels being selected for motivational and emotional categories in cases with positive change."
                    },
                    {
                        "id": 80,
                        "string": "Agreements For quality control, we removed workers who consistently produced low-quality work, as discussed in the Appendix."
                    },
                    {
                        "id": 81,
                        "string": "In the categorization sets (Maslow, Reiss and Plutchik), we compare the performance of annotators by treating each individual category as a binary label (1 Serenity 0.01 0.03 0.06 0.27 0.06 0.01 -0.07 -0.07 -0.04 -0.09 -0.07 0.02 -0.07 -0.04 -0.08 -0.10 -0.07 0.01 -0.02 0.07 -0.05 0.02 -0.10 0.15 Curiosity -0.01 -0.02 0.13 0.03 0.40 -0.01 -0.04 -0.05 -0.04 -0.01 -0.08 -0.01 -0.16 -0.06 -0.07 -0.14 -0.01 -0.11 -0.12 -0.04 -0.10 0.08 -0.12 -0.07 Esteem Other -1.00 0.14 0.04 0.02 0.04 0.31 -0.05 0.03 0.10 -0.12 0.05 -1.00 -0.07 0.08 -1.00 -1.00 -1.00 -0.14 -0.05 -0.07 0.03 -1.00 -0.09 -0.04   if they included the category in their set of responses) and averaging the agreement per category."
                    },
                    {
                        "id": 82,
                        "string": "For Plutchik scores, we count 'moderately associated' ratings as agreeing with 'highly' associated' ratings."
                    },
                    {
                        "id": 83,
                        "string": "The percent agreement and Krippendorff's alpha are shown in Table 2 ."
                    },
                    {
                        "id": 84,
                        "string": "We also compute the percent agreement between the individual annotations and the majority labels."
                    },
                    {
                        "id": 85,
                        "string": "2 These scores are difficult to interpret by themselves, however, as annotator agreement in our categorization system has a number of properties that are not accounted for by these metrics (disagreement preferences -joy and trust are closer than joy and anger -that are difficult to quantify in a principled way, hierarchical categories map-ping Reiss subcategories from Maslow categories, skewed category distributions that inflate PPA and deflate KA scores, and annotators that could select multiple labels for the same examples)."
                    },
                    {
                        "id": 86,
                        "string": "To provide a clearer understanding of agreement within this dataset, we create aggregated confusion matrices for annotator pairs."
                    },
                    {
                        "id": 87,
                        "string": "First, we sum the counts of combinations of answers between all paired annotations (excluding none labels)."
                    },
                    {
                        "id": 88,
                        "string": "If an annotator selected multiple categories, we split the count uniformly among the selected categories."
                    },
                    {
                        "id": 89,
                        "string": "We compute NPMI over the total confusion matrix."
                    },
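A minimal numpy sketch of the NPMI computation over the aggregated pairwise confusion counts described above (the function name and the -1 convention for never co-selected label pairs are ours, though the -1.00 cells in the reported matrix suggest the same convention):

```python
import numpy as np

def npmi_confusion(counts: np.ndarray) -> np.ndarray:
    """NPMI over an aggregated annotator confusion matrix.

    counts[i, j]: how often one annotator chose label i while the paired
    annotator chose label j (multi-label picks split uniformly, as above).
    """
    joint = counts / counts.sum()               # p(i, j)
    p_i = joint.sum(axis=1, keepdims=True)      # marginal p(i)
    p_j = joint.sum(axis=0, keepdims=True)      # marginal p(j)
    with np.errstate(divide="ignore", invalid="ignore"):
        npmi = np.log(joint / (p_i * p_j)) / -np.log(joint)
    npmi[joint == 0] = -1.0                     # never co-selected
    return npmi
```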
                    {
                        "id": 90,
                        "string": "In Figure 5 , we show the NPMI confusion matrix for motivational categories."
                    },
                    {
                        "id": 91,
                        "string": "In the motivation annotations, we find the highest scores on the diagonal (i.e., Reiss agreement), with most confusions occurring between Reiss motives in the same Maslow category (outlined black in Figure 5 )."
                    },
                    {
                        "id": 92,
                        "string": "Other disagreements generally involve Reiss subcategories that are thematically similar, such as serenity (mental relaxation) and rest (physical relaxation)."
                    },
                    {
                        "id": 93,
                        "string": "We provide this analysis for Plutchik categories in the appendix, finding high scores along the diagonal with disagreements typically occurring between categories in a \"positive emotion\" cluster (joy, trust) or a \"negative emotion\" cluster (anger, disgust,sadness)."
                    },
                    {
                        "id": 94,
                        "string": "Tasks The multiple modes covered by the annotations in this new dataset allow for multiple new tasks to be explored."
                    },
                    {
                        "id": 95,
                        "string": "We outline three task types below, covering a total of eight tasks on which to evaluate."
                    },
                    {
                        "id": 96,
                        "string": "Open-text Explanation Figure 6 : General model architectures for three new task types Differences between task type inputs and outputs are summarized in Figure 6 ."
                    },
                    {
                        "id": 97,
                        "string": "State Classification The three primary tasks involve categorizing the psychological states of story characters for each of the label sets (Maslow, Reiss, Plutchik) collected for the dev and test splits of our dataset."
                    },
                    {
                        "id": 98,
                        "string": "In each classification task, a model is given a line of the story (along with optional preceding context lines) and a character and predicts the motivation (or emotional reaction)."
                    },
                    {
                        "id": 99,
                        "string": "A binary label is predicted for each of the Maslow needs, Reiss motives or Plutchik categories."
                    },
                    {
                        "id": 100,
                        "string": "Annotation Classification Because the dev and test sets contain paired classification labels and free text explanations, we propose three tasks where a model must predict the correct Maslow/Reiss/Plutchik label given an emotional reaction or motivation explanation."
                    },
                    {
                        "id": 101,
                        "string": "Explanation Generation Finally, we can use the free text explanations to train models to describe the psychological state of a character in free text (examples in Figure 4 )."
                    },
                    {
                        "id": 102,
                        "string": "These explanations allow for two conditional generation tasks where the model must generate the words describing the emotional reaction or motivation of the character."
                    },
                    {
                        "id": 103,
                        "string": "Baseline Models The general model architectures for the three tasks are shown in Figure 6 ."
                    },
                    {
                        "id": 104,
                        "string": "We describe each model component below."
                    },
                    {
                        "id": 105,
                        "string": "The state classification and explanation generation models could be trained separately or in a multi-task set-up."
                    },
                    {
                        "id": 106,
                        "string": "In the state classification and explanation generation tasks, a model is given a line from a story x_s containing N words {w_0^s, w_1^s, ..., w_N^s} from vocabulary V, a character in that story e_j ∈ E where E is the set of characters in the story, and (optionally) the preceding sentences in the story C = {x_0, ..., x_{s-1}} containing words from vocabulary V."
                    },
                    {
                        "id": 113,
                        "string": "A representation for a character's psychological state is encoded as: h e = Encoder(x s , C[e j ]) (1) where C[e j ] corresponds to the concatenated subset of sentences in C where e j appears."
                    },
                    {
                        "id": 114,
                        "string": "Encoders While the end classifier or decoder is different for each task, we use the same set of encoders based on word embeddings, common neural network architectures, or memory networks to formulate a representation of the sentence and character, h e ."
                    },
                    {
                        "id": 115,
                        "string": "Unless specified, h_e is computed by encoding separate vector representations for the sentence (x_s → h_s) and character-specific context (C[e_j] → h_c) and concatenating these encodings (h_e = [h_c; h_s])."
                    },
                    {
                        "id": 118,
                        "string": "We describe the encoders below: TF-IDF We learn a TD-IDF model on the full training corpus of Mostafazadeh et al."
                    },
                    {
                        "id": 119,
                        "string": "(2016) (excluding the stories in our dev/test sets)."
                    },
                    {
                        "id": 120,
                        "string": "To encode the sentence, we extract TF-IDF features for its words, yielding v s 2 R V ."
                    },
                    {
                        "id": 121,
                        "string": "A projection and nonlinearity is applied to these features to yield h s : h s = (W s v s + b s ) (2) where W s 2 R d⇥H ."
                    },
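A minimal PyTorch sketch of Eqs. 1-2 for the TF-IDF encoder, including the [h_c; h_s] concatenation described above; the sigmoid stands in for the unspecified nonlinearity, and all names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class TfidfEncoder(nn.Module):
    """Sketch of h = sigma(W v + b) (Eq. 2) for sentence and context."""

    def __init__(self, vocab_size: int, hidden: int):
        super().__init__()
        self.proj_sent = nn.Linear(vocab_size, hidden)  # W_s, b_s
        self.proj_ctx = nn.Linear(vocab_size, hidden)   # separate weights for C[e_j]

    def forward(self, v_s: torch.Tensor, v_c: torch.Tensor) -> torch.Tensor:
        h_s = torch.sigmoid(self.proj_sent(v_s))  # sentence encoding
        h_c = torch.sigmoid(self.proj_ctx(v_c))   # character-specific context
        return torch.cat([h_c, h_s], dim=-1)      # h_e = [h_c; h_s] (Eq. 1)
```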
                    {
                        "id": 122,
                        "string": "The character vector h c is encoded in the same way on sentences in the context pertaining to the character."
                    },
                    {
                        "id": 123,
                        "string": "GloVe We extract pretrained Glove vectors (Pennington et al., 2014) h s = CNN(v s ) (3) where CNN represents the categorization model from (Kim, 2014) ."
                    },
                    {
                        "id": 124,
                        "string": "The character vector h c is encoded in the same way with a separate CNN."
                    },
                    {
                        "id": 125,
                        "string": "Implementation details are provided in the appendix."
                    },
                    {
                        "id": 126,
                        "string": "LSTM A two-layer bi-LSTM encodes the sentence words and concatenates the final time step hidden states from both directions to yield h s ."
                    },
                    {
                        "id": 127,
                        "string": "The character vector h c is encoded the same way."
                    },
                    {
                        "id": 128,
                        "string": "REN We use the \"tied\" recurrent entity network from Henaff et al."
                    },
                    {
                        "id": 129,
                        "string": "(2017) ."
                    },
                    {
                        "id": 130,
                        "string": "A memory cell m is initialized for each of the J characters in the story, E = {e_0, ..., e_J}."
                    },
                    {
                        "id": 134,
                        "string": "The REN reads documents one sentence at a time and updates m j for e j 2 E after reading each sentence."
                    },
                    {
                        "id": 135,
                        "string": "Unlike the previous encoders, all sentences of the context C are given to the REN along with the sentence x s ."
                    },
                    {
                        "id": 136,
                        "string": "The model learns to distribute encoded information to the correct memory cells."
                    },
                    {
                        "id": 137,
                        "string": "The representation passed to the downstream model is: h e = {m j } s (5) where {m j } s is the memory vector in the cell corresponding to e j after reading x s ."
                    },
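A toy sketch of the per-character memory bookkeeping behind Eq. 5; the GRU cell is a placeholder for the REN's learned gating, so this shows only the read-update-readout pattern, not Henaff et al.'s actual update rule:

```python
import torch
import torch.nn as nn

class CharacterMemory(nn.Module):
    """One memory row m_j per story character, updated sentence by sentence."""

    def __init__(self, num_chars: int, dim: int):
        super().__init__()
        self.init_mem = nn.Parameter(torch.zeros(num_chars, dim))  # initial m_j
        self.update = nn.GRUCell(dim, dim)  # placeholder for the REN gating

    def forward(self, sent_vecs: torch.Tensor) -> torch.Tensor:
        mem = self.init_mem
        for s in sent_vecs:                           # one encoded sentence at a time
            mem = self.update(s.expand_as(mem), mem)  # update every cell -> {m_j}_s
        return mem                                    # h_e = mem[j] for character e_j
```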
                    {
                        "id": 138,
                        "string": "Implementation details are provided in the appendix."
                    },
                    {
                        "id": 139,
                        "string": "NPN We also include the neural process network from Bosselut et al."
                    },
                    {
                        "id": 140,
                        "string": "(2018) with \"tied\" entities, but \"untied\" actions that are not grounded to particular concepts."
                    },
                    {
                        "id": 141,
                        "string": "The memory is initialized and accessed similarly as the REN."
                    },
                    {
                        "id": 142,
                        "string": "Exact implementation details are provided in the appendix."
                    },
                    {
                        "id": 143,
                        "string": "State Classifier Once the sentence-character encoding h e is extracted, the state classifier predicts a binary label y z for every category z 2 Z where Z is the set of category labels for a particular psychological theory (e.g., disgust, surprise, etc."
                    },
                    {
                        "id": 144,
                        "string": "in the Plutchik wheel)."
                    },
                    {
                        "id": 145,
                        "string": "We use logistic regression as a classifier: y z = (W z h e + b z ) (6) where W z and b z are a label-specific set of weights and biases for classifying each label z 2 Z."
                    },
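Since Eq. 6 is an independent logistic regression per label, a single linear layer whose rows are the (W_z, b_z) pairs implements the whole classifier; a minimal sketch with illustrative names:

```python
import torch
import torch.nn as nn

class StateClassifier(nn.Module):
    """Predicts y_z = sigmoid(W_z h_e + b_z) for every label z in Z (Eq. 6)."""

    def __init__(self, enc_dim: int, num_labels: int):
        super().__init__()
        self.linear = nn.Linear(enc_dim, num_labels)  # one (W_z, b_z) per output row

    def forward(self, h_e: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.linear(h_e))        # one probability per label
```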
                    {
                        "id": 146,
                        "string": "Explanation Generator The explanation generator is a single-layer LSTM (Hochreiter and Schmidhuber, 1997 ) that receives the encoded sentence-character representation h e and predicts each word y t in the explanation using the same method from ."
                    },
                    {
                        "id": 147,
                        "string": "Implementation details are provided in the appendix."
                    },
                    {
                        "id": 148,
                        "string": "Annotation Classifier For annotation classification tasks, words from open-text explanations are encoded with TF-IDF features."
                    },
                    {
                        "id": 149,
                        "string": "The same classifier architecture from Section 5.2 is used to predict the labels."
                    },
                    {
                        "id": 150,
                        "string": "6 Experimental Setup Training State Classification The dev set D is split into two portions of 80% (D 1 ) and 20% (D 2 )."
                    },
                    {
                        "id": 151,
                        "string": "D 1 is used to train the classifier and encoder."
                    },
                    {
                        "id": 152,
                        "string": "D 2 is used to tune hyperparameters."
                    },
                    {
                        "id": 153,
                        "string": "The model is trained to minimize the weighted binary cross entropy of predicting a class label y z for each class z: L = Z X z=1 z y z logŷ z +(1 z )(1 y z ) log(1 ŷ z ) (7) where Z is the number of labels in each of the three classifications tasks and z is defined as: z = 1 e p P (yz) (8) where P (y z ) is the marginal class probability of a positive label for z in the training set."
                    },
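A sketch of the weighted binary cross entropy of Eqs. 7-8 as reconstructed above; the exact form of λ_z (read here from a garbled extraction as 1 − exp(−√P(y_z))) should be checked against the original paper:

```python
import torch

def weighted_bce(y_hat: torch.Tensor, y: torch.Tensor, p_pos: torch.Tensor) -> torch.Tensor:
    """Eq. 7 with per-class weights lambda_z (Eq. 8).

    y_hat, y: (batch, Z) predicted probabilities and binary labels;
    p_pos: (Z,) marginal probability of a positive label per class.
    """
    lam = 1.0 - torch.exp(-torch.sqrt(p_pos))  # lambda_z, our reading of Eq. 8
    eps = 1e-8
    per_label = lam * y * torch.log(y_hat + eps) \
        + (1.0 - lam) * (1.0 - y) * torch.log(1.0 - y_hat + eps)
    return -per_label.sum(dim=-1).mean()
```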
                    {
                        "id": 154,
                        "string": "Annotation Classification The dev set is split in the same manner as for state classification."
                    },
                    {
                        "id": 155,
                        "string": "The TF-IDF features are trained on the set of training annotations D t coupled with those from D 1 ."
                    },
                    {
                        "id": 156,
                        "string": "The model must minimize the same loss as in Equation 7."
                    },
                    {
                        "id": 157,
                        "string": "Details are provided in the appendix."
                    },
                    {
                        "id": 158,
                        "string": "Explanation Generation We use the training set of open annotations to train a model to predict explanations."
                    },
                    {
                        "id": 159,
                        "string": "The decoder is trained to minimize the negative loglikelihood of predicting each word in the explanation of a character's state: L gen = T X t=1 log P (y t |y 0 , ..., y t 1 , h e ) (9) where h e is the sentence-character representation produced by an encoder from Section 5.1."
                    },
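Eq. 9 is the standard teacher-forced negative log-likelihood; a minimal sketch for a single explanation (shapes and names are ours):

```python
import torch

def generation_nll(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """-sum_t log P(y_t | y_0..y_{t-1}, h_e) under teacher forcing (Eq. 9).

    logits: (T, V) decoder scores at each step given the gold prefix;
    targets: (T,) gold word ids of the explanation.
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    return -log_probs[torch.arange(targets.size(0)), targets].sum()
```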
                    {
                        "id": 160,
                        "string": "Metrics Classification For the state and annotation classification task, we report the micro-averaged precision (P), recall (R), and F1 score of the Plutchik, Maslow, and Reiss prediction tasks."
                    },
                    {
                        "id": 161,
                        "string": "We report the results of selecting a label at random in the top two rows of Table 3 ."
                    },
                    {
                        "id": 162,
                        "string": "Note that random is low because the distribution of positive instances for each (Henaff et al., 2017) 26.24 42.14 32.34 16.79 22.20 19.12 26.22 33.26 29.32 + Explanation Training 26.85 44.78 33.57 16.73 26.55 20.53 25.30 37.30 30.15 NPN (Bosselut et al., 2018) 24.27 44.16 31.33 13.13 26.44 17.55 21.98 37.31 27.66 + Explanation Training 26.60 39.17 31.69 15.75 20.34 17.75 24.33 40.10 30.29 Generation Because explanations tend to be short sequences (Figure 4 ) with high levels of synonymy, traditional metrics such as BLEU are inadequate for evaluating generation quality."
                    },
                    {
                        "id": 163,
                        "string": "We use the vector average and vector extrema metrics from Liu et al."
                    },
                    {
                        "id": 164,
                        "string": "(2016) computed using the Glove vectors of generated and reference words."
                    },
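A numpy sketch of the two embedding-based metrics from Liu et al. (2016) as used here, operating on lists of per-word GloVe vectors (function names are ours):

```python
import numpy as np

def _cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def vector_average(gen_vecs, ref_vecs) -> float:
    """Cosine similarity of the mean word vectors of the two sequences."""
    return _cosine(np.mean(gen_vecs, axis=0), np.mean(ref_vecs, axis=0))

def vector_extrema(gen_vecs, ref_vecs) -> float:
    """Per dimension, keep the most extreme value across words, then compare."""
    def extrema(vecs):
        vecs = np.asarray(vecs)
        idx = np.abs(vecs).argmax(axis=0)  # index of most extreme word per dim
        return vecs[idx, np.arange(vecs.shape[1])]
    return _cosine(extrema(gen_vecs), extrema(ref_vecs))
```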
                    {
                        "id": 165,
                        "string": "We report results in Table 5 on the dev set and compare to a baseline that randomly samples an example from the dev set as a generated sequence."
                    },
                    {
                        "id": 166,
                        "string": "Ablations Story Context vs. No Context Our dataset is motivated by the importance of interpreting story context to categorize emotional reactions and motivations of characters."
                    },
                    {
                        "id": 167,
                        "string": "To test this importance, we ablate h c , the representation of the context sentences pertaining to the character, as an input to the state classifier for each encoder (except the REN and NPN)."
                    },
                    {
                        "id": 168,
                        "string": "In Table 3 , this ablation is the first row for each encoder presented."
                    },
                    {
                        "id": 169,
                        "string": "Explanation Pretraining Because the state classification and explanation generation tasks use the same models to encode the story, we explore initializing a classification encoder with parameters trained on the generation task."
                    },
                    {
                        "id": 170,
                        "string": "For the CNN, LSTM, and REN encoders, we pretrain a generator to produce emotion or motivation explana-tions."
                    },
                    {
                        "id": 171,
                        "string": "We use the parameters from the emotion or motivation explanation generators to initialize the Plutchik or Maslow/Reiss classifiers respectively."
                    },
                    {
                        "id": 172,
                        "string": "Experimental Results State Classification We show results on the test set for categorizing Maslow, Reiss, and Plutchik states in Table 3 ."
                    },
                    {
                        "id": 173,
                        "string": "Despite the difficulty of the task, all models outperform the random baseline."
                    },
                    {
                        "id": 174,
                        "string": "Interestingly, the performance boost from adding entity-specific contextual information (i.e., not ablating h c ) indicates that the models learn to condition on a character's previous experience to classify its mental state at the current time step."
                    },
                    {
                        "id": 175,
                        "string": "This effect can be seen in a story about a man whose flight is cancelled."
                    },
                    {
                        "id": 176,
                        "string": "The model without context predicts the same emotional reactions for the man, his wife and the pilot, but with context correctly predicts that the pilot will not have a reaction while predicting that the man and his wife will feel sad."
                    },
                    {
                        "id": 177,
                        "string": "For the CNN, LSTM, REN, and NPN models, we also report results from pretraining encoder parameters using the free response annotations from the training set."
                    },
                    {
                        "id": 178,
                        "string": "This pretraining offers a clear performance boost for all models on all three prediction tasks, showing that the parameters of the encoder can be pretrained on auxiliary tasks providing emotional and motivational state signal."
                    },
                    {
                        "id": 179,
                        "string": "The best performing models in each task are most effective at predicting Maslow physiological needs, Reiss food motives, and Plutchik reactions of joy."
                    },
                    {
                        "id": 180,
                        "string": "The relative ease of predicting motivations Table 4 shows that a simple model can learn to map open text responses to categorical labels."
                    },
                    {
                        "id": 181,
                        "string": "This further supports our hypothesis that pretraining a classification model on the free-response annotations could be helpful in boosting performance on the category prediction."
                    },
                    {
                        "id": 182,
                        "string": "Explanation Generation Finally, we provide results for the task of generating explanations of motivations and emotions in Table 5 ."
                    },
                    {
                        "id": 183,
                        "string": "Because the explanations are closely tied to emotional and motivation states, the randomly selected explanation can often be close in embedding space to the reference explanations, making the random baseline fairly competitive."
                    },
                    {
                        "id": 184,
                        "string": "However, all models outperform the strong baseline on both metrics, indicating that the generated short explanations are closer semantically to the reference annotation."
                    },
                    {
                        "id": 185,
                        "string": "Related work Mental State Annotations Incorporating emotion theories into NLP tasks has been explored in previous projects."
                    },
                    {
                        "id": 186,
                        "string": "Ghosh et al."
                    },
                    {
                        "id": 187,
                        "string": "(2017) modulate language model distributions by increasing the probability of words that express certain affective LIWC (Tausczik and Pennebaker, 2016) categories."
                    },
                    {
                        "id": 188,
                        "string": "More generally, various projects tackle the problem of generating text from a set of attributes like sentiment or generic-ness (Ficler and Goldberg, 2017; Dong et al., 2017) ."
                    },
                    {
                        "id": 189,
                        "string": "Similarly, there is also a body of research in reasoning about commonsense stories and discourse (Li and Jurafsky, 2017; Mostafazadeh et al., 2016) or detecting emotional stimuli in stories (Gui et al., 2017) ."
                    },
                    {
                        "id": 190,
                        "string": "Previous work in plot units (Lehnert, 1981) developed formalisms for affect and mental state in story narratives that included motivations and reactions."
                    },
                    {
                        "id": 191,
                        "string": "In our work, we collect mental state annotations for stories to used as a new resource in this space."
                    },
                    {
                        "id": 192,
                        "string": "Modeling Entity State Recently, novel works in language modeling (Ji et al., 2017; Yang et al., 2016) , question answering (Henaff et al., 2017) , and text generation (Kiddon et al., 2016; Bosselut et al., 2018) have shown that modeling entity state explicitly can boost performance while providing a preliminary interface for interpreting a model's prediction."
                    },
                    {
                        "id": 193,
                        "string": "Entity modeling in these works, however, was limited to tracking entity reference (Kiddon et al., 2016; Yang et al., 2016; Ji et al., 2017) , recognizing entity state similarity (Henaff et al., 2017) or predicting simple attributes from entity states (Bosselut et al., 2018) ."
                    },
                    {
                        "id": 194,
                        "string": "Our work provides a new dataset for tracking emotional reactions and motivations of characters in stories."
                    },
                    {
                        "id": 195,
                        "string": "Conclusion We present a large scale dataset as a resource for training and evaluating mental state tracking of characters in short commonsense stories."
                    },
                    {
                        "id": 196,
                        "string": "This dataset contains over 300k low-level annotations for character motivations and emotional reactions."
                    },
                    {
                        "id": 197,
                        "string": "We provide benchmark results on this new resource."
                    },
                    {
                        "id": 198,
                        "string": "Importantly, we show that modeling character-specific context and pretraining on freeresponse data can boost labeling performance."
                    },
                    {
                        "id": 199,
                        "string": "While our work only use information present in our dataset, we view our dataset as a future testbed for evaluating models trained on any number of resources for learning common sense about emotional reactions and motivations."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 18
                    },
                    {
                        "section": "Mental State Representations",
                        "n": "2",
                        "start": 19,
                        "end": 21
                    },
                    {
                        "section": "Motivation Theories",
                        "n": "2.1",
                        "start": 22,
                        "end": 26
                    },
                    {
                        "section": "Emotion Theory",
                        "n": "2.2",
                        "start": 27,
                        "end": 29
                    },
                    {
                        "section": "Mental State Explanations",
                        "n": "2.3",
                        "start": 30,
                        "end": 32
                    },
                    {
                        "section": "Annotation Framework",
                        "n": "3",
                        "start": 33,
                        "end": 42
                    },
                    {
                        "section": "Annotation Pipeline",
                        "n": "3.1",
                        "start": 43,
                        "end": 71
                    },
                    {
                        "section": "Dataset Statistics and Insights",
                        "n": "3.2",
                        "start": 72,
                        "end": 93
                    },
                    {
                        "section": "Tasks",
                        "n": "4",
                        "start": 94,
                        "end": 102
                    },
                    {
                        "section": "Baseline Models",
                        "n": "5",
                        "start": 103,
                        "end": 113
                    },
                    {
                        "section": "Encoders",
                        "n": "5.1",
                        "start": 114,
                        "end": 142
                    },
                    {
                        "section": "State Classifier",
                        "n": "5.2",
                        "start": 143,
                        "end": 145
                    },
                    {
                        "section": "Explanation Generator",
                        "n": "5.3",
                        "start": 146,
                        "end": 146
                    },
                    {
                        "section": "Annotation Classifier",
                        "n": "5.4",
                        "start": 147,
                        "end": 149
                    },
                    {
                        "section": "Training",
                        "n": "6.1",
                        "start": 150,
                        "end": 159
                    },
                    {
                        "section": "Metrics",
                        "n": "6.2",
                        "start": 160,
                        "end": 165
                    },
                    {
                        "section": "Ablations",
                        "n": "6.3",
                        "start": 166,
                        "end": 171
                    },
                    {
                        "section": "Experimental Results",
                        "n": "7",
                        "start": 172,
                        "end": 184
                    },
                    {
                        "section": "Related work",
                        "n": "8",
                        "start": 185,
                        "end": 194
                    },
                    {
                        "section": "Conclusion",
                        "n": "9",
                        "start": 195,
                        "end": 199
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1279-Figure6-1.png",
                        "caption": "Figure 6: General model architectures for three new task types",
                        "page": 5,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 284.15999999999997,
                            "y1": 64.32,
                            "y2": 217.92
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Figure2-1.png",
                        "caption": "Figure 2: Theories of Motivation (Maslow and Reiss) and Emotional Reaction (Plutchik).",
                        "page": 1,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 487.68,
                            "y1": 70.56,
                            "y2": 199.2
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Figure3-1.png",
                        "caption": "Figure 3: The annotation pipeline for the fine-grained annotations with an example story.",
                        "page": 2,
                        "bbox": {
                            "x1": 105.6,
                            "x2": 486.24,
                            "y1": 66.24,
                            "y2": 188.64
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Table3-1.png",
                        "caption": "Table 3: State Classification Results",
                        "page": 7,
                        "bbox": {
                            "x1": 117.6,
                            "x2": 479.03999999999996,
                            "y1": 62.4,
                            "y2": 288.96
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Table1-1.png",
                        "caption": "Table 1: Annotated data statistics for each dataset",
                        "page": 3,
                        "bbox": {
                            "x1": 312.96,
                            "x2": 530.4,
                            "y1": 241.92,
                            "y2": 322.56
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Figure4-1.png",
                        "caption": "Figure 4: Examples of open-text explanations that annotators provided corresponding with the categories they selected. The bars on the right of the categories represent the percentage of lines where annotators selected that category (out of those character-line pairs with positive motivation/emotional reaction).",
                        "page": 3,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 522.24,
                            "y1": 61.44,
                            "y2": 177.12
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Table4-1.png",
                        "caption": "Table 4: F1 scores of predicting correct category labels from free response annotations",
                        "page": 8,
                        "bbox": {
                            "x1": 91.67999999999999,
                            "x2": 267.36,
                            "y1": 62.4,
                            "y2": 102.24
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Table5-1.png",
                        "caption": "Table 5: Vector average and extrema scores for generation of annotation explanations",
                        "page": 8,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 278.4,
                            "y1": 148.32,
                            "y2": 261.12
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Table2-1.png",
                        "caption": "Table 2: Agreement Statistics (PPA = Pairwise percent agreement of worker responses per binary category, KA= Krippendorff’s Alpha)",
                        "page": 4,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 285.12,
                            "y1": 310.56,
                            "y2": 443.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1279-Figure5-1.png",
                        "caption": "Figure 5: NPMI confusion matrix on motivational categories for all annotator pairs with color scaling for legibility. The highest values are generally along diagonal or within Maslow categories (outlined in black). We highlight a few common points of disagreement between thematically similar categories.",
                        "page": 4,
                        "bbox": {
                            "x1": 88.8,
                            "x2": 511.2,
                            "y1": 63.36,
                            "y2": 241.92
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-64"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation 1 Satire or not",
                    "text": [
                        "Satire & Research Goals Model/Data Experiments & Results Conclusion",
                        "After years of ghting there",
                        "nally is a settlement",
                        "between the Gema and",
                        "Youtube . It became known today , that in future every music video is allowed to be played back in Germany again, as long as the audio is removed",
                        "University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Motivation 2 Satire or not",
                    "text": [
                        "Satire & Research Goals Model/Data Experiments & Results Conclusion",
                        "Erfurt ( dpo ) It is an organization which operates outside of law and order, funds numerous NPD operatives and is to a not inconsiderable extent involved in the series of murders of the so-called",
                        "DPA is a German news agency",
                        "DPO does not exist (in this context). University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Satire",
                    "text": [
                        "Form of art to critize in an entertaining manner",
                        "Stylistic devices include humor, irony, sarcasm",
                        "Goal: Mimic regular news in diction",
                        "Its not misinformation or desinformation (fake news):",
                        "Articles typically contain satire markers",
                        "(similar to irony or sarcasm)",
                        "Automatically distinguish satirical news from regular news",
                        "Challenging task (even for humans)",
                        "University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Previous Work",
                    "text": [
                        "Satire & Research Goals Model/Data Experiments & Results Conclusion",
                        "Created data sets which are automatically labeled from publication source",
                        "Potential limitation: Models might learn characteristics of publication sources instead of actual characteristics of satire",
                        "(evaluation is not faulty, they use dierent publication sources for validation than for training)",
                        "Bad generalization to unseen publication sources?",
                        "Interpretation of models (regarding concepts of satire) misleading?",
                        "University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Our Contributions",
                    "text": [
                        "Satire & Research Goals Model/Data Experiments & Results Conclusion",
                        "We propose adversarial training: Improve robustness of model against confounding variable of publication sources",
                        "We show that adversarial training is crucial for the model to pay attention to satire instead of publication characteristics",
                        "We publish a large German data set for satire detection.",
                        "First dataset in German",
                        "First dataset including publication sources",
                        "Largest resource for satire detection so far",
                        "University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "Data Collection and Selection",
                    "text": [
                        "Satire & Research Goals Model/Data Experiments & Results Conclusion",
                        "Der Spiegel, Der Standard, Die Zeit, Suddeutsche Zeitung",
                        "Der Enthuller, Eulenspiegel, Nordd. Nach., Der Postillon,",
                        "Satirepatzer, Die Tagespresse, Titanic, Welt (Satire), Der",
                        "Zeitspiegel, Eine Zeitung, Zynismus24",
                        "Publication #Articles Article Sent. Title",
                        "University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "7": {
                    "title": "Research Question 1 Performance",
                    "text": [
                        "Satire & Research Goals Model/Data Experiments & Results Conclusion",
                        "How does a decrease in publication classication performance through adversarial training aect the satire classication performance?",
                        "University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "8": {
                    "title": "Research Question 2 Attention Weights",
                    "text": [
                        "Satire & Research Goals Model/Data Experiments & Results Conclusion",
                        "Is adversarial training eective for avoiding that the model pays most attention to the characteristics of publication source rather than actual satire?",
                        "Erfurt ( dpo ) - It is an organization which operates outside of law and order , funds numerous NPD operatives and is to a not inconsiderable extent involved in the series of murders of the so called Zwickauer Zelle .",
                        "numerous NPD operatives and is to a not inconsiderable extent involved in the series of murders of the so called Zwickauer Zelle .",
                        "After discussed all , , the whereof proposal the to Union allow hopes family for reunion an off-putting only inclusive effect mothers-in-law . is being",
                        "advAfter all , the proposal to allow family reunion only inclusive mothers-in-law is being discussed , whereof the Union hopes for an off-putting effect .",
                        "University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "9": {
                    "title": "Conclusion and Availability",
                    "text": [
                        "Satire & Research Goals Model/Data Experiments & Results Conclusion",
                        "Observation: Satire detection models learn characteristics of publication sources",
                        "Adversarial training to control for this confounding variable",
                        "Considerable reduction of publication identication performance while satire detection remains on comparable levels",
                        "Attention weights show eectiveness of our approach",
                        "First German dataset for satire detection",
                        "Dataset and code available at: http://www.ims.uni-stuttgart.de/data/germansatire",
                        "University of Stuttgart McHardy/Adel/Klinger June 3rd, 2019"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                }
            },
            "paper_title": "Adversarial Training for Satire Detection: Controlling for Confounding Variables",
            "paper_id": "1282",
            "paper": {
                "title": "Adversarial Training for Satire Detection: Controlling for Confounding Variables",
                "abstract": "The automatic detection of satire vs. regular news is relevant for downstream applications (for instance, knowledge base population) and to improve the understanding of linguistic characteristics of satire. Recent approaches build upon corpora which have been labeled automatically based on article sources. We hypothesize that this encourages the models to learn characteristics for different publication sources (e.g., \"The Onion\" vs. \"The Guardian\") rather than characteristics of satire, leading to poor generalization performance to unseen publication sources. We therefore propose a novel model for satire detection with an adversarial component to control for the confounding variable of publication source. On a large novel data set collected from German news (which we make available to the research community), we observe comparable satire classification performance and, as desired, a considerable drop in publication classification performance with adversarial training. Our analysis shows that the adversarial component is crucial for the model to learn to pay attention to linguistic properties of satire.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Satire is a form of art used to criticize in an entertaining manner (cf."
                    },
                    {
                        "id": 1,
                        "string": "Sulzer, 1771, p."
                    },
                    {
                        "id": 2,
                        "string": "995ff.)"
                    },
                    {
                        "id": 3,
                        "string": "."
                    },
                    {
                        "id": 4,
                        "string": "It makes use of different stylistic devices, e.g., humor, irony, sarcasm, exaggerations, parody or caricature (Knoche, 1982; Colletta, 2009 )."
                    },
                    {
                        "id": 5,
                        "string": "The occurrence of harsh, offensive or banal and funny words is typical (Golbert, 1962; Brummack, 1971) ."
                    },
                    {
                        "id": 6,
                        "string": "Satirical news are written with the aim of mimicking regular news in diction."
                    },
                    {
                        "id": 7,
                        "string": "In contrast to misinformation and disinformation (Thorne and Vlachos, 2018) , it does not have the intention of fooling the readers into actually believing something wrong in order to manipulate their opinion."
                    },
                    {
                        "id": 8,
                        "string": "* Work was done at University of Stuttgart."
                    },
                    {
                        "id": 9,
                        "string": "The task of satire detection is to automatically distinguish satirical news from regular news."
                    },
                    {
                        "id": 10,
                        "string": "This is relevant, for instance, for downstream applications, such that satirical articles can be ignored in knowledge base population."
                    },
                    {
                        "id": 11,
                        "string": "Solving this problem computationally is challenging."
                    },
                    {
                        "id": 12,
                        "string": "Even human readers are sometimes not able to precisely recognize satire (Allcott and Gentzkow, 2017) ."
                    },
                    {
                        "id": 13,
                        "string": "Thus, an automatic system for satire detection is both relevant for downstream applications and could help humans to better understand the characteristics of satire."
                    },
                    {
                        "id": 14,
                        "string": "Previous work mostly builds on top of corpora of news articles which have been labeled automatically based on the publication source (e.g., \"The New York Times\" articles would be labeled as regular while \"The Onion\" articles as satire 1 )."
                    },
                    {
                        "id": 15,
                        "string": "We hypothesize that such distant labeling approach leads to the model mostly representing characteristics of the publishers instead of actual satire."
                    },
                    {
                        "id": 16,
                        "string": "This has two main issues: First, interpretation of the model to obtain a better understanding of concepts of satire would be misleading, and second, generalization of the model to unseen publication sources would be harmed."
                    },
                    {
                        "id": 17,
                        "string": "We propose a new model with adversarial training to control for the confounding variable of publication sources, i.e., we debias the model."
                    },
                    {
                        "id": 18,
                        "string": "Our experiments and analysis show that (1) the satire detection performance stays comparable when the adversarial component is included, and (2) that adversarial training is crucial for the model to pay attention to satire instead of publication characteristics."
                    },
                    {
                        "id": 19,
                        "string": "(3), we publish a large German data set for satire detection which is a) the first data set in German, b) the first data set including publication sources, enabling the experiments at hand, and c) the largest resource for satire detection so far."
                    },
                    {
                        "id": 20,
                        "string": "2 2 Previous Work Previous work tackled the task of automatic English satire detection with handcrafted features, for instance, the validity of the context of entity mentions (Burfoot and Baldwin, 2009 ), or the coherence of a story (Goldwasser and Zhang, 2016) ."
                    },
                    {
                        "id": 21,
                        "string": "Rubin et al."
                    },
                    {
                        "id": 22,
                        "string": "(2016) use distributions of parts-ofspeech, sentiment, and exaggerations."
                    },
                    {
                        "id": 23,
                        "string": "In contrast to these approaches, our model uses only word embeddings as input representations."
                    },
                    {
                        "id": 24,
                        "string": "Our work is therefore similar to Yang et al."
                    },
                    {
                        "id": 25,
                        "string": "(2017) and De Sarkar et al."
                    },
                    {
                        "id": 26,
                        "string": "(2018) who also use artificial neural networks to predict if a given text is satirical or regular news."
                    },
                    {
                        "id": 27,
                        "string": "They develop a hierarchical model of convolutional and recurrent layers with attention over paragraphs or sentences."
                    },
                    {
                        "id": 28,
                        "string": "We follow this line of work but our model is not hierarchical and introduces less parameters."
                    },
                    {
                        "id": 29,
                        "string": "We apply attention to words instead of sentences or paragraphs, accounting for the fact that satire might be expressed on a sub-sentence level."
                    },
                    {
                        "id": 30,
                        "string": "Adversarial training is popular to improve the robustness of models."
                    },
                    {
                        "id": 31,
                        "string": "Originally introduced by Goodfellow et al."
                    },
                    {
                        "id": 32,
                        "string": "(2014) as generative adversarial networks with a generative and a discriminative component, Ganin et al."
                    },
                    {
                        "id": 33,
                        "string": "(2016) show that a related concept can also be used for domain adaptation: A domain-adversarial neural network consists of a classifier for the actual class labels and a domain discriminator."
                    },
                    {
                        "id": 34,
                        "string": "The two components share the same feature extractor and are trained in a minimax optimization algorithm with gradient reversal: The sign of the gradient of the domain discriminator is flipped when backpropagating to the feature extractor."
                    },
                    {
                        "id": 35,
                        "string": "Building upon the idea of eliminating domain-specific input representations, Wadsworth et al."
                    },
                    {
                        "id": 36,
                        "string": "(2018) debias input representations for recidivism prediction, or income prediction (Edwards and Storkey, 2016; Beutel et al., 2017; Madras et al., 2018; Zhang et al., 2018) ."
                    },
                    {
                        "id": 37,
                        "string": "Debiasing mainly focuses on word embeddings, e.g., to remove gender bias from embeddings (Bolukbasi et al., 2016) ."
                    },
                    {
                        "id": 38,
                        "string": "Despite previous positive results with adversarial training, a recent study by Elazar and Goldberg (2018) calls for being cautious and not blindly trusting adversarial training for debiasing."
                    },
                    {
                        "id": 39,
                        "string": "We therefore analyze whether it is possible at all to use adversarial training in another setting, namely to control for the confounding variable of publication sources in satire detection (see Section 3.1)."
                    },
                    {
                        "id": 40,
                        "string": "3 Methods for Satire Classification Limitations of Previous Methods The data set used by Yang et al."
                    },
                    {
                        "id": 41,
                        "string": "(2017) and De Sarkar et al."
                    },
                    {
                        "id": 42,
                        "string": "(2018) consists of text from 14 satirical and 6 regular news websites."
                    },
                    {
                        "id": 43,
                        "string": "Although the satire sources in train, validation, and test sets did not overlap, the sources of regular news were not split up according to the different data sets (Yang et al., 2017) ."
                    },
                    {
                        "id": 44,
                        "string": "We hypothesize that this enables the classifier to learn which articles belong to which publication of regular news and classify everything else as satire, given that one of the most frequent words is the name of the website itself (see Section 4.1)."
                    },
                    {
                        "id": 45,
                        "string": "Unfortunately, we cannot analyze this potential limitation since their data set does not contain any information on the publication source 3 ."
                    },
                    {
                        "id": 46,
                        "string": "Therefore, we create a new corpus in German (see Section 4.1) including this information and investigate our hypothesis on it."
                    },
                    {
                        "id": 47,
                        "string": "Model Motivated by our hypothesis in Section 3.1, we propose to consider two different classification problems (satire detection and publication identification) with a shared feature extractor."
                    },
                    {
                        "id": 48,
                        "string": "Figure 1 provides an overview of our model."
                    },
                    {
                        "id": 49,
                        "string": "We propose to train the publication identifier as an adversary."
                    },
                    {
                        "id": 50,
                        "string": "Feature Extractor Following De Sarkar et al."
                    },
                    {
                        "id": 51,
                        "string": "(2018) , we only use word embeddings and no further handcrafted features to represent the input."
                    },
                    {
                        "id": 52,
                        "string": "We pretrain word embeddings of 300 dimensions on the whole corpus using word2vec (Mikolov et al., 2013) ."
                    },
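A minimal sketch of pretraining 300-dimensional word2vec embeddings on a corpus with gensim; parameter names follow gensim 4.x, and everything beyond the 300 dimensions stated in the text is an assumption.

```python
from gensim.models import Word2Vec

# Toy tokenized articles standing in for the German news corpus.
corpus = [["every", "music", "video", "is", "satire"],
          ["regular", "news", "article"]]
model = Word2Vec(sentences=corpus, vector_size=300, window=5, min_count=1)
vec = model.wv["satire"]  # 300-dimensional embedding
```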
                    {
                        "id": 53,
                        "string": "The feature generator f takes the embeddings of the words of each article as input for a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) , followed by a self-attention layer as proposed by Lin et al."
                    },
                    {
                        "id": 54,
                        "string": "(2017) ."
                    },
                    {
                        "id": 55,
                        "string": "We refer to the union of all the parameters of the feature extractor as θ f in the following."
                    },
                    {
                        "id": 56,
                        "string": "Satire Detector The gray part of Figure 1 shows the model part for our main task -satire detection."
                    },
                    {
                        "id": 57,
                        "string": "The satire detector feeds the representation from the feature extractor into a softmax layer and performs a binary classification task (satire: yes or no)."
                    },
                    {
                        "id": 58,
                        "string": "Note that, in contrast to De Sarkar et al."
                    },
                    {
                        "id": 59,
                        "string": "(2018) , we classify satire solely on the document level, as this is sufficient to analyze the impact of the adversarial component and the influence of the publication source."
                    },
                    {
                        "id": 60,
                        "string": "Publication Identifier The second classification branch of our model aims at identifying the publication source of the input."
                    },
                    {
                        "id": 61,
                        "string": "Similar to the satire detector, the publication identifier consists of a single softmax layer which gets the extracted features as an input."
                    },
                    {
                        "id": 62,
                        "string": "It then performs a multi-class classification task since our dataset consists of 15 publication sources (see Table 1 )."
                    },
                    {
                        "id": 63,
                        "string": "Adversarial Training Let θ f be the parameters of the feature extractors and θ s and θ p be the parameters of the satire detector and the publication identifier, respectively."
                    },
                    {
                        "id": 64,
                        "string": "The objective function for satire detection is J s = −E (x,ys)∼p data log P θ f ∪θs (y s , x) , (1) while the objective for publication identification is J p = −E (x,yp)∼p data log P θ f ∪θp (y p , x) ."
                    },
                    {
                        "id": 65,
                        "string": "(2) Note that the parameters of the feature extractor θ f are part of both model parts."
                    },
                    {
                        "id": 66,
                        "string": "Since our goal is to control for the confounding variable of publication sources, we train the publication identifier as an adversary: The parameters of the classification part θ p are updated to optimize the publication identification while the parameters of the shared feature generator θ f are updated to fool the publication identifier."
                    },
                    {
                        "id": 67,
                        "string": "This leads to the following update equations for the parameters θ s := θ s − η ∂J s ∂θ s (3) θ p := θ p − η ∂J p ∂θ p (4) θ f := θ f − η ∂J s ∂θ f − λ ∂J p ∂θ f (5) with η being the learning rate and λ being a weight for the reversed gradient that is tuned on the development set."
                    },
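A minimal PyTorch sketch of the update rules (3)-(5): a gradient-reversal layer flips and scales (by λ) the publication identifier's gradient before it reaches the shared feature extractor, so a single backward pass realizes equation (5). Module shapes are illustrative stand-ins, not the authors' released code.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip and scale the gradient flowing back into the feature extractor.
        return -ctx.lam * grad_output, None

# Stand-ins for the shared feature extractor and the two heads.
feature_extractor = nn.Sequential(nn.Linear(300, 600), nn.Tanh())
satire_head = nn.Linear(600, 2)        # satire: yes / no
publication_head = nn.Linear(600, 15)  # 15 publication sources

opt = torch.optim.Adam(
    list(feature_extractor.parameters())
    + list(satire_head.parameters())
    + list(publication_head.parameters()),
    lr=1e-4,
)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 300)                # toy batch of document representations
y_satire = torch.randint(0, 2, (32,))
y_pub = torch.randint(0, 15, (32,))

feats = feature_extractor(x)
# J_s reaches theta_f and theta_s normally (eqs. 3, 5); J_p reaches theta_p
# normally (eq. 4) but theta_f only through the reversal (the -lambda term).
loss = loss_fn(satire_head(feats), y_satire) \
     + loss_fn(publication_head(GradReverse.apply(feats, 0.2)), y_pub)
opt.zero_grad()
loss.backward()
opt.step()
```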
                    {
                        "id": 68,
                        "string": "Figure 1 depicts Figure 1 : Architecture of the model."
                    },
                    {
                        "id": 69,
                        "string": "The gray area on the left shows the satire detector; the white area on the right is the adversary (publication identifier); the gradient flow with and without adversarial training is shown with blue arrows pointing upwards."
                    },
                    {
                        "id": 70,
                        "string": "∂ J s ∂ θ s ∂ J s ∂ θ f ∂ J p ∂θ p −λ ∂ J p ∂ θ f sources of the corpus, consisting of almost 330k articles."
                    },
                    {
                        "id": 71,
                        "string": "The corpus contains articles published between January 1st, 2000 and May 1st, 2018."
                    },
                    {
                        "id": 72,
                        "string": "Each publication has individual typical phrases and different most common words."
                    },
                    {
                        "id": 73,
                        "string": "Among the most common words is typically the name of each publication, e.g., \"Der Spiegel\" has \"SPIEGEL\" as fifth and \"Der Postillon\" \"Postillon\" as third most common word."
                    },
                    {
                        "id": 74,
                        "string": "We did not delete those words to keep the dataset as realistic as possible."
                    },
                    {
                        "id": 75,
                        "string": "We randomly split the data set into training, development (dev) and test (80/10/10 %) with the same label distributions in all sets."
                    },
                    {
                        "id": 76,
                        "string": "Given the comparable large size of the corpus, we opt for using a well-defined test set for reproducability of our experiments in contrast to a crossvalidation setting."
                    },
                    {
                        "id": 77,
                        "string": "Research questions."
                    },
                    {
                        "id": 78,
                        "string": "We discuss two questions."
                    },
                    {
                        "id": 79,
                        "string": "RQ1: How does a decrease in publication classification performance through adversarial training affect the satire classification performance?"
                    },
                    {
                        "id": 80,
                        "string": "RQ2: Is adversarial training effective for avoiding that the model pays most attention to the characteristics of publication source rather than actual satire?"
                    },
                    {
                        "id": 81,
                        "string": "Baseline."
                    },
                    {
                        "id": 82,
                        "string": "As a baseline model, we train the satire detector part (gray area in Figure 1 ) on the satire task."
                    },
                    {
                        "id": 83,
                        "string": "Then, we freeze the weights of the feature extractor and train the publication classifier on top of it."
                    },
                    {
                        "id": 84,
                        "string": "In addition, we use a majority baseline model which predicts the most common class."
                    },
                    {
                        "id": 85,
                        "string": "Hyperparameters."
                    },
                    {
                        "id": 86,
                        "string": "We cut the input sentences to a maximum length of 500 words."
                    },
                    {
                        "id": 87,
                        "string": "This enables us to fully represent almost all satire articles and capture most of the content of the regular articles while keeping the training time low."
                    },
                    {
                        "id": 88,
                        "string": "As mentioned before, we represent the input words with 300 dimensional embeddings."
                    },
                    {
                        "id": 89,
                        "string": "The feature extractor consists of a biLSTM layer with 300 hidden units in each direction and a self-attention layer with an internal hidden representation of 600."
                    },
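For concreteness, a rough PyTorch reconstruction of the feature extractor as described (300-dim embeddings, biLSTM with 300 hidden units per direction, Lin et al. (2017)-style self-attention with a 600-dim internal representation); this is a sketch inferred from the text, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=300, att_dim=600):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        # Self-attention scores: w2 . tanh(W1 h_t), one score per token.
        self.w1 = nn.Linear(2 * hidden, att_dim, bias=False)
        self.w2 = nn.Linear(att_dim, 1, bias=False)

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))     # (B, T, 600)
        scores = self.w2(torch.tanh(self.w1(h)))  # (B, T, 1)
        alpha = torch.softmax(scores, dim=1)      # attention over words
        return (alpha * h).sum(dim=1)             # (B, 600) document vector

docs = torch.randint(0, 1000, (32, 500))  # 32 articles, cut to 500 words
features = FeatureExtractor(vocab_size=1000)(docs)
```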
                    {
                        "id": 90,
                        "string": "For training, we use Adam (Kingma and Ba, 2014) with an initial learning rate of 0.0001 and a decay rate of 10 −6 ."
                    },
                    {
                        "id": 91,
                        "string": "We use mini-batch gradient descent training with a batch size of 32 and alternating batches of the two branches of our model."
                    },
                    {
                        "id": 92,
                        "string": "We avoid overfitting by early stopping based on the satire F1 score on the development set."
                    },
                    {
                        "id": 93,
                        "string": "Evaluation."
                    },
                    {
                        "id": 94,
                        "string": "For evaluating satire detection, we use precision, recall and F1 score of the satire class."
                    },
                    {
                        "id": 95,
                        "string": "For publication identification, we calculate a weighted macro precision, recall and F1 score, i.e., a weighted sum of class-specific scores with weights determined by the class distribution."
                    },
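One way to compute these scores, assuming scikit-learn: average="binary" for the satire class, average="weighted" for the class-distribution-weighted publication scores. The snippet is illustrative, not the authors' evaluation code.

```python
from sklearn.metrics import precision_recall_fscore_support

# Satire detection: precision/recall/F1 of the positive (satire) class.
y_true_sat, y_pred_sat = [1, 0, 1, 1, 0], [1, 0, 0, 1, 1]
p_s, r_s, f_s, _ = precision_recall_fscore_support(
    y_true_sat, y_pred_sat, average="binary", zero_division=0)

# Publication identification: weighted macro scores over the 15 classes.
y_true_pub, y_pred_pub = [0, 3, 3, 7, 1, 3], [0, 3, 1, 7, 1, 1]
p_p, r_p, f_p, _ = precision_recall_fscore_support(
    y_true_pub, y_pred_pub, average="weighted", zero_division=0)
```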
                    {
                        "id": 96,
                        "string": "Selection of Hyperparameter λ Table 2 (upper part) shows results for different values of λ, the hyperparameter of adversarial training, on dev."
                    },
                    {
                        "id": 97,
                        "string": "For λ ∈ {0.2, 0.3, 0.5}, the results are comparably, with λ = 0.2 performing best for satire detection."
                    },
                    {
                        "id": 98,
                        "string": "Setting λ = 0.7 leads to a performance drop for satire but also to F 1 = 0 for publication classification."
                    },
                    {
                        "id": 99,
                        "string": "Hence, we chose λ = 0.2 (the best performing model on satire classification) and λ = 0.7 (the worst performing model on publication identification) to investigate RQ1."
                    },
                    {
                        "id": 100,
                        "string": "Results (RQ1) The bottom part of Table 2 shows the results on test data."
                    },
                    {
                        "id": 101,
                        "string": "The majority baseline fails since the corpus contains more regular than satirical news articles."
                    },
                    {
                        "id": 102,
                        "string": "In comparison to the baseline model without adversarial training (no adv), the model with λ = 0.2 achieves a comparable satire classification performance."
                    },
                    {
                        "id": 103,
                        "string": "As expected, the publication identification performance drops, especially the precision declines from 44.2 % to 30.8 %."
                    },
                    {
                        "id": 104,
                        "string": "Thus, a model which is punished for identifying publication sources can still learn to identify satire."
                    },
                    {
                        "id": 105,
                        "string": "Similar to the results on dev, the recall of the model with λ = 0.7 drops to (nearly) 0 %."
                    },
                    {
                        "id": 106,
                        "string": "In this case, the satire classification performance also drops."
                    },
                    {
                        "id": 107,
                        "string": "This suggests that there are overlapping features (cues) for both satire and publication classification."
                    },
                    {
                        "id": 108,
                        "string": "This indicates that the two tasks cannot be entirely untangled."
                    },
                    {
                        "id": 109,
                        "string": "Analysis (RQ2) To address RQ2, we analyze the results and attention weights of the baseline model and our model with adversarial training."
                    },
                    {
                        "id": 110,
                        "string": "Shift in Publication Identification The baseline model (no adv) mostly predicts the correct publication for a given article (in 55.7 % of the cases)."
                    },
                    {
                        "id": 111,
                        "string": "The model with λ = 0.2 mainly (in 98.2 % of the cases) predicts the most common publication in our corpus (\"Süddeutsche Zeitung\")."
                    },
                    {
                        "id": 112,
                        "string": "The model with λ = 0.7 shifts the majority of predictions (98.7 %) to a rare class (namely \"Eine Zeitung\"), leading to its bad performance."
                    },
                    {
                        "id": 113,
                        "string": "English translation: no adv After all , the proposal to allow family reunion only inclusive mothers-in-law is being discussed , whereof the Union hopes for an off-putting effect ."
                    },
                    {
                        "id": 114,
                        "string": "adv After all , the proposal to allow family reunion only inclusive mothers-in-law is being discussed , whereof the Union hopes for an off-putting effect ."
                    },
                    {
                        "id": 115,
                        "string": "Figure 2 exemplifies the attention weights for a selection of satirical instances."
                    },
                    {
                        "id": 116,
                        "string": "In the first example the baseline model (no adv) focuses on a single word (\"dpo\" as a parody of the German newswire \"dpa\") which is unique to the publication the article was picked from (\"Der Postillon\")."
                    },
                    {
                        "id": 117,
                        "string": "In comparison the model using adversarial training (λ = 0.2) ignores this word completely and pays attention to \"die Mordserie\" (\"series of murders\") instead."
                    },
                    {
                        "id": 118,
                        "string": "In the second example, there are no words unique to a publication and the baseline spreads the attention evenly across all words."
                    },
                    {
                        "id": 119,
                        "string": "In contrast, the model with adversarial training is able to find cues for satire, being humor in this example (\"family reunion [for refugees] is only allowed including mothers-in-law\")."
                    },
                    {
                        "id": 120,
                        "string": "Interpretation of Attention Weights Conclusion and Future Work We presented evidence that simple neural networks for satire detection learn to recognize characteristics of publication sources rather than satire and proposed a model that uses adversarial training to control for this effect."
                    },
                    {
                        "id": 121,
                        "string": "Our results show a considerable reduction of publication identification performance while the satire detection remains on comparable levels."
                    },
                    {
                        "id": 122,
                        "string": "The adversarial component enables the model to pay attention to linguistic characteristics of satire."
                    },
                    {
                        "id": 123,
                        "string": "Future work could investigate the effect of other potential confounding variables in satire detection, such as the distribution of time and region of the articles."
                    },
                    {
                        "id": 124,
                        "string": "Further, we propose to perform more quantitative but also more qualitative analysis to better understand the behaviour of the two classifier configurations in comparison."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 39
                    },
                    {
                        "section": "Limitations of Previous Methods",
                        "n": "3.1",
                        "start": 40,
                        "end": 46
                    },
                    {
                        "section": "Model",
                        "n": "3.2",
                        "start": 47,
                        "end": 49
                    },
                    {
                        "section": "Feature Extractor",
                        "n": "3.2.1",
                        "start": 50,
                        "end": 55
                    },
                    {
                        "section": "Satire Detector",
                        "n": "3.2.2",
                        "start": 56,
                        "end": 59
                    },
                    {
                        "section": "Publication Identifier",
                        "n": "3.2.3",
                        "start": 60,
                        "end": 62
                    },
                    {
                        "section": "Adversarial Training",
                        "n": "3.2.4",
                        "start": 63,
                        "end": 95
                    },
                    {
                        "section": "Selection of Hyperparameter λ",
                        "n": "4.2",
                        "start": 96,
                        "end": 99
                    },
                    {
                        "section": "Results (RQ1)",
                        "n": "5",
                        "start": 100,
                        "end": 106
                    },
                    {
                        "section": "Analysis (RQ2)",
                        "n": "6",
                        "start": 107,
                        "end": 109
                    },
                    {
                        "section": "Shift in Publication Identification",
                        "n": "6.1",
                        "start": 110,
                        "end": 119
                    },
                    {
                        "section": "Conclusion and Future Work",
                        "n": "7",
                        "start": 120,
                        "end": 124
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1282-Figure1-1.png",
                        "caption": "Figure 1: Architecture of the model. The gray area on the left shows the satire detector; the white area on the right is the adversary (publication identifier); the gradient flow with and without adversarial training is shown with blue arrows pointing upwards.",
                        "page": 2,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 522.24,
                            "y1": 62.879999999999995,
                            "y2": 229.92
                        }
                    },
                    {
                        "filename": "../figure/image/1282-Figure2-1.png",
                        "caption": "Figure 2: Attention weight examples for satirical articles, with and without adversary.",
                        "page": 4,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 286.08,
                            "y1": 418.56,
                            "y2": 487.2
                        }
                    },
                    {
                        "filename": "../figure/image/1282-Table2-1.png",
                        "caption": "Table 2: Results on dev and independent test data.",
                        "page": 3,
                        "bbox": {
                            "x1": 317.76,
                            "x2": 515.04,
                            "y1": 62.4,
                            "y2": 204.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1282-Table1-1.png",
                        "caption": "Table 1: Corpus statistics (average length in words)",
                        "page": 3,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 281.28,
                            "y1": 62.879999999999995,
                            "y2": 280.32
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-65"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation",
                    "text": [
                        "Good translation preserves the meaning of the sentence.",
                        "Neural MT learns to represent the sentence.",
                        "Is the representation meaningful in some sense?"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Gist of our idea",
                    "text": [
                        "1. Train variants of NMT to obtain sentence representations.",
                        "2. Evaluate all such representations semantically.",
                        "3. Relate performance in MT and in semantics."
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Evaluating sentence representations",
                    "text": [
                        "prediction tasks for evaluating sentence embeddings",
                        "focus on semantics (recently, linguistics task added, too).",
                        "HyTER paraphrases (Dreyer and Marcu, 2014)"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Evaluation through classification",
                    "text": [
                        "SentEval Classification Tasks an ambitious and moving but bleak film . | and that makes all the difference . | rarely , a movie is more than a movie . +> H | the movie is well done , but slow . | | | the pianist is polanski 's best film . i ||",
                        "and that makes all the difference .",
                        "rarely , a movie is more than a movie . +> H | > 1*x C4)",
                        "the movie is well done , but slow .",
                        "the pianist is polanski 's best film .",
                        "e Solo: movies sentiment, product review polarity, question type...",
                        "A square full of people and life . is . =>. E v",
                        "The square is busy . Hifi",
                        "The couple is at a restaurant . > LM ~. N Xx",
                        "A cute couple at a club Hl |",
                        "A white dog bounding through snow Ew > F Cv",
                        "e Paired: natural language inference, semantic equivalence e 10 classification tasks in total, we report them as AvgAcc",
                        "A cute couple at a club || 7",
                        "oO 4k-55k training examples, with testset or 10-fold crosseval."
                    ],
                    "page_nums": [
                        5,
                        6,
                        7,
                        8,
                        9
                    ],
                    "images": []
                },
                "4": {
                    "title": "Evaluation through similarity",
                    "text": [
                        "7 similarity tasks: pairs of sentences + human judgement",
                        "I think it probably depends on your money. It depends on your country.",
                        "Yes, you should mention your experience. Yes, you should make a resume",
                        "Hope this is what you are looking for. Is this the kind of thing you're looking for?",
                        "with training set, sent. similarity predicted by regression, without training set, cosine similarity used as sent. sim., ultimately, the predicted sent. similarity is correlated with the golden truth.",
                        "In sum, we report them as AvgSim."
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "7": {
                    "title": "Cluster separation Davies Bouldin index",
                    "text": [
                        "find the least well-"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "8": {
                    "title": "Paraphrase retrieval task NN",
                    "text": [
                        "ee ----@ nearest neighbor a KON and check",
                        "e oe the same cluster"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "9": {
                    "title": "Classification task",
                    "text": [
                        "Remove some points from the clusters.",
                        "Train an LDA classifier with the remaining points.",
                        "Classify the removed points back."
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "10": {
                    "title": "Sequence to sequence with attention",
                    "text": [
                        "ij: weight of the jth encoder state for the ith decoder state"
                    ],
                    "page_nums": [
                        17
                    ],
                    "images": [
                        "figure/image/1292-Figure1-1.png"
                    ]
                },
                "12": {
                    "title": "Multi head inner attention",
                    "text": [
                        "ij: weight of the jth encoder state for the ith column of MT",
                        "concatenate columns of MT",
                        "linear projection of columns to control embedding size"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": [
                        "figure/image/1292-Figure1-1.png"
                    ]
                },
                "13": {
                    "title": "Proposed NMT architectures",
                    "text": [
                        "ATTN-CTX ATTN-ATTN (compound att.)",
                        "decoder operates on entire embedding decoder selects components of embedding"
                    ],
                    "page_nums": [
                        22,
                        23
                    ],
                    "images": [
                        "figure/image/1292-Figure1-1.png"
                    ]
                },
                "14": {
                    "title": "Evaluated NMT models",
                    "text": [
                        "FINAL, FINAL-CTX: no attention",
                        "AVGPOOL, MAXPOOL: pooling instead of attention",
                        "ATTN-CTX: inner attention, constant context vector",
                        "ATTN-ATTN: inner attention, decoder attention",
                        "TRF-ATTN-ATTN: Transformer with inner attention",
                        "translation from English (to Czech or German), evaluating embeddings of English (source) sentences"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "16": {
                    "title": "Sample Results representation eval encs",
                    "text": [
                        "Selected models trained for translation from English to Czech. InferSent and GloVe- BOW are trained on monolingual (English) data.",
                        "Baselines are hard to beat.",
                        "Attention harms the performance."
                    ],
                    "page_nums": [
                        29,
                        30,
                        31,
                        32
                    ],
                    "images": []
                },
                "17": {
                    "title": "Full Results correlations excluding Transformer",
                    "text": [
                        "BLEU vs. other metrics:"
                    ],
                    "page_nums": [
                        33,
                        34
                    ],
                    "images": [
                        "figure/image/1292-Figure2-1.png"
                    ]
                },
                "19": {
                    "title": "Summary",
                    "text": [
                        "Proposed NMT architecture combining the benefit of attention and one $&!#* vector representing the whole sentence.",
                        "Evaluated the obtained sentence embeddings using a wide range of semantic tasks.",
                        "The better the translation, the worse performance in meaning representation.",
                        "Heads divide sentence equidistantly, not logically.",
                        "Heads divide sentence eJoqiuni doisutr antly, not logically.",
                        "JNLE Special Issue on Sentence Representations:"
                    ],
                    "page_nums": [
                        41,
                        42,
                        43,
                        44,
                        45
                    ],
                    "images": []
                },
                "20": {
                    "title": "InferSent multi task training in OCs thesis only",
                    "text": [
                        "e |dea: produce better representations by jointly training",
                        "NMT with other tasks",
                        "e Proxy: predict InferSent embeddings as the auxiliary task",
                        "L= Ly + alomse",
                        "L tar mT, target"
                    ],
                    "page_nums": [
                        46
                    ],
                    "images": []
                },
                "21": {
                    "title": "Multi task training results encs",
                    "text": [
                        "<I Multitask Inactive, a",
                        "Small loss in BLEU xc. maxeooy, Sometimes gain iN AVGACC cexe. 4000, 4h)",
                        "<4 Multitask Active, a"
                    ],
                    "page_nums": [
                        47,
                        48,
                        49
                    ],
                    "images": []
                },
                "22": {
                    "title": "Multi task training results ende",
                    "text": [
                        "SEE Ne Een Sennen aoe",
                        "en-de results less stable (much smaller vocabulary).",
                        "<1 Multitask Inactive, a <4 Multitask Active, a=",
                        "e Generally promising. L",
                        "e Further exploration of a values |...",
                        "and datasets needed. L"
                    ],
                    "page_nums": [
                        50,
                        51,
                        52,
                        53
                    ],
                    "images": []
                }
            },
            "paper_title": "Are BLEU and Meaning Representation in Opposition?",
            "paper_id": "1292",
            "paper": {
                "title": "Are BLEU and Meaning Representation in Opposition?",
                "abstract": "One of possible ways of obtaining continuous-space sentence representations is by training neural machine translation (NMT) systems. The recent attention mechanism however removes the single point in the neural network from which the source sentence representation can be extracted. We propose several variations of the attentive NMT architecture bringing this meeting point back. Empirical evaluation suggests that the better the translation quality, the worse the learned sentence representations serve in a wide range of classification and similarity tasks.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Deep learning has brought the possibility of automatically learning continuous representations of sentences."
                    },
                    {
                        "id": 1,
                        "string": "On the one hand, such representations can be geared towards particular tasks such as classifying the sentence in various aspects (e.g."
                    },
                    {
                        "id": 2,
                        "string": "sentiment, register, question type) or relating the sentence to other sentences (e.g."
                    },
                    {
                        "id": 3,
                        "string": "semantic similarity, paraphrasing, entailment)."
                    },
                    {
                        "id": 4,
                        "string": "On the other hand, we can aim at \"universal\" sentence representations, that is representations performing reasonably well in a range of such tasks."
                    },
                    {
                        "id": 5,
                        "string": "Regardless the evaluation criterion, the representations can be learned either in an unsupervised way (from simple, unannotated texts) or supervised, relying on manually constructed training sets of sentences equipped with annotations of the appropriate type."
                    },
                    {
                        "id": 6,
                        "string": "A different approach is to obtain sentence representations from training neural machine translation models (Hill et al., 2016) ."
                    },
                    {
                        "id": 7,
                        "string": "Since Hill et al."
                    },
                    {
                        "id": 8,
                        "string": "(2016) , NMT has seen substantial advances in translation quality and it is thus natural to ask how these improvements affect the learned representations."
                    },
                    {
                        "id": 9,
                        "string": "One of the key technological changes was the introduction of \"attention\" , making it even the very central component in the network (Vaswani et al., 2017) ."
                    },
                    {
                        "id": 10,
                        "string": "Attention allows the NMT system to dynamically choose which parts of the source are most important when deciding on the current output token."
                    },
                    {
                        "id": 11,
                        "string": "As a consequence, there is no longer a static vector representation of the sentence available in the system."
                    },
                    {
                        "id": 12,
                        "string": "In this paper, we remove this limitation by proposing a novel encoder-decoder architecture with a structured fixed-size representation of the input that still allows the decoder to explicitly focus on different parts of the input."
                    },
                    {
                        "id": 13,
                        "string": "In other words, our NMT system has both the capacity to attend to various parts of the input and to produce static representations of input sentences."
                    },
                    {
                        "id": 14,
                        "string": "We train this architecture on English-to-German and English-to-Czech translation and evaluate the learned representations of English on a wide range of tasks in order to assess its performance in learning \"universal\" meaning representations."
                    },
                    {
                        "id": 15,
                        "string": "In Section 2, we briefly review recent efforts in obtaining sentence representations."
                    },
                    {
                        "id": 16,
                        "string": "In Section 3, we introduce a number of variants of our novel architecture."
                    },
                    {
                        "id": 17,
                        "string": "Section 4 describes some standard and our own methods for evaluating sentence representations."
                    },
                    {
                        "id": 18,
                        "string": "Section 5 then provides experimental results: translation and representation quality."
                    },
                    {
                        "id": 19,
                        "string": "The relation between the two is discussed in Section 6."
                    },
                    {
                        "id": 20,
                        "string": "Related Work The properties of continuous sentence representations have always been of interest to researchers working on neural machine translation."
                    },
                    {
                        "id": 21,
                        "string": "In the first works on RNN sequence-to-sequence models,  and Sutskever et al."
                    },
                    {
                        "id": 22,
                        "string": "(2014) Table 1 : Different RNN-based architectures and their properties."
                    },
                    {
                        "id": 23,
                        "string": "Legend: \"pooling\" -vectors combined by mean or max (AVGPOOL, MAXPOOL); \"sent."
                    },
                    {
                        "id": 24,
                        "string": "emb.\""
                    },
                    {
                        "id": 25,
                        "string": "-sentence embedding, i.e."
                    },
                    {
                        "id": 26,
                        "string": "the fixed-size sentence representation computed by the encoder."
                    },
                    {
                        "id": 27,
                        "string": "\"init\" -initial decoder state."
                    },
                    {
                        "id": 28,
                        "string": "\"ctx\" -context vector, i.e."
                    },
                    {
                        "id": 29,
                        "string": "input for the decoder cell."
                    },
                    {
                        "id": 30,
                        "string": "\"input for att.\""
                    },
                    {
                        "id": 31,
                        "string": "-input for the decoder attention."
                    },
                    {
                        "id": 32,
                        "string": "provided visualizations of the phrase and sentence embedding spaces and observed that they reflect semantic and syntactic structure to some extent."
                    },
                    {
                        "id": 33,
                        "string": "Hill et al."
                    },
                    {
                        "id": 34,
                        "string": "(2016) perform a systematic evaluation of sentence representation in different models, including NMT, by applying them to various sentence classification tasks and by relating semantic similarity to closeness in the representation space."
                    },
                    {
                        "id": 35,
                        "string": "Shi et al."
                    },
                    {
                        "id": 36,
                        "string": "(2016) investigate the syntactic properties of representations learned by NMT systems by predicting sentence-and word-level syntactic labels (e.g."
                    },
                    {
                        "id": 37,
                        "string": "tense, part of speech) and by generating syntax trees from these representations."
                    },
                    {
                        "id": 38,
                        "string": "Schwenk and Douze (2017) aim to learn language-independent sentence representations using NMT systems with multiple source and target languages."
                    },
                    {
                        "id": 39,
                        "string": "They do not consider the attention mechanism and evaluate primarily by similarity scores of the learned representations for similar sentences (within or across languages)."
                    },
                    {
                        "id": 40,
                        "string": "Model Architectures Our proposed model architectures differ in (a) which encoder states are considered in subsequent processing, (b) how they are combined, and (c) how they are used in the decoder."
                    },
                    {
                        "id": 41,
                        "string": "Table 1 summarizes all the examined configurations of RNN-based models."
                    },
                    {
                        "id": 42,
                        "string": "The first three (ATTN, FINAL, FINAL-CTX) correspond roughly to the standard sequence-to-sequence models, , Sutskever et al."
                    },
                    {
                        "id": 43,
                        "string": "(2014) and , resp."
                    },
                    {
                        "id": 44,
                        "string": "The last column (ATTN-ATTN) is our main proposed architecture: compound attention, described here in Section 3.1."
                    },
                    {
                        "id": 45,
                        "string": "In addition to RNN-based models, we experiment with the Transformer model, see Section 3.3. s 1 s 2 s 3 s T � + c 3 − → h 1 ← − h 1 − → h 2 ← − h 2 − → h 3 ← − h 3 − → h T ← − h T + α 21 α 22 α 23 α 2T ."
                    },
                    {
                        "id": 51,
                        "string": "Figure 1 : An illustration of compound attention with 4 attention heads."
                    },
                    {
                        "id": 52,
                        "string": "The figure shows the computations that result in the decoder state s 3 (in addition, each state s i depends on the previous target token y i−1 )."
                    },
                    {
                        "id": 53,
                        "string": "Note that the matrix M is the same for all positions in the output sentence and it can thus serve as the source sentence representation."
                    },
                    {
                        "id": 54,
                        "string": "Compound Attention Our compound attention model incorporates attention in both the encoder and the decoder, Fig."
                    },
                    {
                        "id": 55,
                        "string": "1 ."
                    },
                    {
                        "id": 56,
                        "string": "Encoder with inner attention."
                    },
                    {
                        "id": 57,
                        "string": "First, we process the input sequence x 1 , x 2 , ."
                    },
                    {
                        "id": 58,
                        "string": "."
                    },
                    {
                        "id": 59,
                        "string": "."
                    },
                    {
                        "id": 60,
                        "string": ", x T using a bidirectional recurrent network with gated recurrent units (GRU, Cho et al., 2014) : − → h t = −−→ GRU(x t , − − → h t−1 ), ← − h t = ←−− GRU(x t , ← − − h t+1 ), h t = [ − → h t , ← − h t ]."
                    },
                    {
                        "id": 61,
                        "string": "We denote by u the combined number of units in the two RNNs, i.e."
                    },
                    {
                        "id": 62,
                        "string": "the dimensionality of h t ."
                    },
                    {
                        "id": 63,
                        "string": "Next, our goal is to combine the states (h 1 , h 2 , ."
                    },
                    {
                        "id": 64,
                        "string": "."
                    },
                    {
                        "id": 65,
                        "string": "."
                    },
                    {
                        "id": 66,
                        "string": ", h T ) = H of the encoder into a vector of fixed dimensionality that represents the entire sentence."
                    },
                    {
                        "id": 67,
                        "string": "Traditional seq2seq models concatenate the final states of both encoder RNNs ( − → h T and ← − h 1 ) to obtain the sentence representation (denoted as FINAL in Table 1 )."
                    },
                    {
                        "id": 68,
                        "string": "Another option is to combine all encoder states using the average or maximum over time (Collobert and Weston, 2008; Schwenk and Douze, 2017 ) (AVGPOOL and MAXPOOL in Table 1 and following)."
                    },
                    {
                        "id": 69,
                        "string": "We adopt an alternative approach, which is to use inner attention 1 (Liu et al., 2016; Li et al., 2016) to compute several weighted averages of the encoder states (Lin et al., 2017) ."
                    },
                    {
                        "id": 70,
                        "string": "The main motivation for incorporating these multiple \"views\" of the state sequence is that it removes the need for the RNN cell to accumulate the representation of the whole sentence as it processes the input, and therefore it should have more capacity for modeling local dependencies."
                    },
                    {
                        "id": 71,
                        "string": "Specifically, we fix a number r, the number of attention heads, and compute an r ×T matrix A of attention weights α jt , representing the importance of position t in the input for the j th attention head."
                    },
                    {
                        "id": 72,
                        "string": "We then use this matrix to compute r weighted sums of the encoder states, which become the rows of a new matrix M : M = AH."
                    },
                    {
                        "id": 73,
                        "string": "(1) A vector representation of the source sentence (the \"sentence embedding\") can be obtained by flattening the matrix M ."
                    },
                    {
                        "id": 74,
                        "string": "In our experiments, we project the encoder states h 1 , h 2 , ."
                    },
                    {
                        "id": 75,
                        "string": "."
                    },
                    {
                        "id": 76,
                        "string": "."
                    },
                    {
                        "id": 77,
                        "string": ", h T down to a given dimensionality before applying Eq."
                    },
                    {
                        "id": 78,
                        "string": "(1), so that we can control the size of the representation."
                    },
                    {
                        "id": 79,
                        "string": "Following Lin et al."
                    },
                    {
                        "id": 80,
                        "string": "(2017) , we compute the attention matrix by feeding the encoder states to a two-layer feed-forward network: A = softmax(U tanh(W H � )), (2) where W and U are weight matrices of dimensions d × u and r × d, respectively (d is the number of hidden units); the softmax function is applied along the second dimension, i.e."
                    },
                    {
                        "id": 81,
                        "string": "across the encoder states."
                    },
                    {
                        "id": 82,
                        "string": "Attentive decoder."
                    },
                    {
                        "id": 83,
                        "string": "In vanilla seq2seq models with a fixed-size sentence representation, the decoder is usually conditioned on this representation via the initial RNN state."
                    },
                    {
                        "id": 84,
                        "string": "We propose to instead leverage the structured sentence embedding by applying attention to its components."
                    },
                    {
                        "id": 85,
                        "string": "This is no different from the classical attention mechanism used in NMT , except that it acts on this fixed-size representation instead of the sequence of encoder states."
                    },
                    {
                        "id": 86,
                        "string": "In the i th decoding step, the attention mechanism computes a distribution {β ij } r j=1 over the r components of the structured representation."
                    },
                    {
                        "id": 87,
                        "string": "This is then used to weight these components to obtain the context vector c i , which in turn is used to update the decoder state."
                    },
                    {
                        "id": 88,
                        "string": "Again, we can write this in matrix form as C = BM, (3) where B = (β ij ) T � ,r i=1,j=1 is the attention matrix and C = (c i , c 2 , ."
                    },
                    {
                        "id": 89,
                        "string": "."
                    },
                    {
                        "id": 90,
                        "string": "."
                    },
                    {
                        "id": 91,
                        "string": ", c T � ) are the context vectors."
                    },
                    {
                        "id": 92,
                        "string": "Note that by combining Eqs."
                    },
                    {
                        "id": 93,
                        "string": "(1) and (3) , we get C = (BA)H. (4) Hence, the composition of the encoder and decoder attentions (the \"compound attention\") defines an implicit alignment between the source and the target sequence."
                    },
                    {
                        "id": 94,
                        "string": "From this viewpoint, our model can be regarded as a restriction of the conventional attention model."
                    },
                    {
                        "id": 95,
                        "string": "The decoder uses a conditional GRU cell (cGRU att ; Sennrich et al., 2017) , which consists of two consecutively applied GRU blocks."
                    },
                    {
                        "id": 96,
                        "string": "The first block processes the previous target token y i−1 , while the second block receives the context vector c i and predicts the next target token y i ."
                    },
                    {
                        "id": 97,
                        "string": "Constant Context Compared to the FINAL model, the compound attention architecture described in the previous section undoubtedly benefits from the fact that the decoder is presented with information from the encoder (i.e."
                    },
                    {
                        "id": 98,
                        "string": "the context vectors c i ) in every decoding step."
                    },
                    {
                        "id": 99,
                        "string": "To investigate this effect, we include baseline models where we replace all context vectors c i with the entire sentence embedding (indicated by the suffix \"-CTX\" in Table 1 )."
                    },
                    {
                        "id": 100,
                        "string": "Specifically, we provide either the flattened matrix M (for models with inner attention; ATTN or ATTN-CTX), the final state of the encoder (FINAL-CTX), or the result of mean-or max-pooling (*POOL-CTX) as a constant input to the decoder cell."
                    },
                    {
                        "id": 101,
                        "string": "Transformer with Inner Attention The Transformer (Vaswani et al., 2017) is a recently proposed model based entirely on feedforward layers and attention."
                    },
                    {
                        "id": 102,
                        "string": "It consists of an encoder and a decoder, each with 6 layers, consisting of multi-head attention on the previous layer and a position-wise feed-forward network."
                    },
                    {
                        "id": 103,
                        "string": "In order to introduce a fixed-size sentence representation into the model, we modify it by adding inner attention after the last encoder layer."
                    },
                    {
                        "id": 104,
                        "string": "The attention in the decoder then operates on the components of this representation (i.e."
                    },
                    {
                        "id": 105,
                        "string": "the rows of the matrix M )."
                    },
                    {
                        "id": 106,
                        "string": "This variation on the Transformer model corresponds to the ATTN-ATTN column in Table 1 and is therefore denoted TRF-ATTN-ATTN."
                    },
                    {
                        "id": 107,
                        "string": "Representation Evaluation Continuous sentence representations can be evaluated in many ways, see e.g."
                    },
                    {
                        "id": 108,
                        "string": "Kiros et al."
                    },
                    {
                        "id": 109,
                        "string": "(2015) , Conneau et al."
                    },
                    {
                        "id": 110,
                        "string": "(2017) or the RepEval workshops."
                    },
                    {
                        "id": 111,
                        "string": "2 We evaluate our learned representations with classification and similarity tasks from SentEval (Section 4.1) and by examining clusters of sentence paraphrase representations (Section 4.2)."
                    },
                    {
                        "id": 112,
                        "string": "SentEval We perform evaluation on 10 classification and 7 similarity tasks using the SentEval 3 (Conneau et al., 2017) evaluation tool."
                    },
                    {
                        "id": 113,
                        "string": "This is a superset of the tasks from Kiros et al."
                    },
                    {
                        "id": 114,
                        "string": "(2015) ."
                    },
                    {
                        "id": 115,
                        "string": "Table 2 describes the classification tasks (number of classes, data size, task type and an example) and Table 3 lists the similarity tasks."
                    },
                    {
                        "id": 116,
                        "string": "The similarity (relatedness) datasets contain pairs of sentences labeled with a real-valued similarity score."
                    },
                    {
                        "id": 117,
                        "string": "A given sentence representation model is evaluated either by learning to directly predict this score given the respective sentence embeddings (\"regression\"), or by computing the cosine similarity of the embeddings (\"similarity\") without the need of any training."
                    },
                    {
                        "id": 118,
                        "string": "In both cases, Pearson and Spearman correlation of the predictions with the gold ratings is reported."
                    },
                    {
                        "id": 119,
                        "string": "See Dolan et al."
                    },
                    {
                        "id": 120,
                        "string": "(2004) for details on MRPC and Hill et al."
                    },
                    {
                        "id": 121,
                        "string": "(2016) for the remaining tasks."
                    },
                    {
                        "id": 122,
                        "string": "Paraphrases We also evaluate the representation of paraphrases."
                    },
                    {
                        "id": 123,
                        "string": "We use two paraphrase sources for this purpose: COCO and HyTER Networks."
                    },
                    {
                        "id": 124,
                        "string": "COCO (Common Objects in Context; Lin et al., 2014) is an object recognition and image captioning dataset, containing 5 captions for each image."
                    },
                    {
                        "id": 125,
                        "string": "We extracted the captions from its validation set to form a set of 5 × 5k = 25k sentences grouped by the source image."
                    },
                    {
                        "id": 126,
                        "string": "The average sentence length is 11 tokens and the vocabulary size is 9k types."
                    },
                    {
                        "id": 127,
                        "string": "HyTER Networks (Dreyer and Marcu, 2014 ) are large finite-state networks representing a sub-set of all possible English translations of 102 Arabic and 102 Chinese sentences."
                    },
                    {
                        "id": 128,
                        "string": "The networks were built by manually based on reference sentences in Arabic, Chinese and English."
                    },
                    {
                        "id": 129,
                        "string": "Each network contains up to hundreds of thousands of possible translations of the given source sentence."
                    },
                    {
                        "id": 130,
                        "string": "We randomly generated 500 translations for each source sentence, obtaining a corpus of 102k sentences grouped into 204 clusters, each containing 500 paraphrases."
                    },
                    {
                        "id": 131,
                        "string": "The average length of the 102k English sentences is 28 tokens and the vocabulary size is 11k token types."
                    },
                    {
                        "id": 132,
                        "string": "For every model, we encode each dataset to obtain a set of sentence embeddings with cluster labels."
                    },
                    {
                        "id": 133,
                        "string": "We then compute the following metrics: Cluster classification accuracy (denoted \"Cl\")."
                    },
                    {
                        "id": 134,
                        "string": "We remove 1 point (COCO) or half of the points (HyTER) from each cluster, and fit an LDA classifier on the rest."
                    },
                    {
                        "id": 135,
                        "string": "We then compute the accuracy of the classifier on the removed points."
                    },
                    {
                        "id": 136,
                        "string": "Nearest-neighbor paraphrase retrieval accuracy (NN)."
                    },
                    {
                        "id": 137,
                        "string": "For each point, we find its nearest neighbor according to cosine or L 2 distance, and count how often the neighbor lies in the same cluster as the original point."
                    },
                    {
                        "id": 138,
                        "string": "Inverse Davies-Bouldin index (iDB)."
                    },
                    {
                        "id": 139,
                        "string": "The Davies-Bouldin index (Davies and Bouldin, 1979) measures cluster separation."
                    },
                    {
                        "id": 140,
                        "string": "For every pair of clusters, we compute the ratio R ij of their combined scatter (average L 2 distance to the centroid) S i + S j and the L 2 distance of their centroids d ij , then average the maximum values for all clusters: R ij = S i + S j d ij (5) DB = 1 N N � i=1 max j� =i R ij (6) The lower the DB index, the better the separation."
                    },
                    {
                        "id": 141,
                        "string": "To match with the rest of our metrics, we take its inverse: iDB = 1 DB ."
                    },
                    {
                        "id": 142,
                        "string": "Experimental Results We trained English-to-German and English-to-Czech NMT models using Neural Monkey 4 (Helcl and Libovický, 2017a) ."
                    },
                    {
                        "id": 143,
                        "string": "In the following, we distinguish these models using the code of the target language, i.e."
                    },
                    {
                        "id": 144,
                        "string": "de or cs."
                    },
                    {
                        "id": 145,
                        "string": "The de models were trained on the Multi30K multilingual image caption dataset (Elliott et al., 4 https://github.com/ufal/neuralmonkey 2016), extended by Helcl and Libovický (2017b) , who acquired additional parallel data using backtranslation (Sennrich et al., 2016) and perplexitybased selection (Yasuda et al., 2008) ."
                    },
                    {
                        "id": 146,
                        "string": "This extended dataset contains 410k sentence pairs, with the average sentence length of 12 ± 4 tokens in English."
                    },
                    {
                        "id": 147,
                        "string": "We train each model for 20 epochs with the batch size of 32."
                    },
                    {
                        "id": 148,
                        "string": "We truecased the training data as well as all data we evaluate on."
                    },
                    {
                        "id": 149,
                        "string": "For German, we employed Neural Monkey's reversible pre-processing scheme, which expands contractions and performs morphological segmentation of determiners."
                    },
                    {
                        "id": 150,
                        "string": "We used a vocabulary of at most 30k tokens for each language (no subword units)."
                    },
                    {
                        "id": 151,
                        "string": "The cs models were trained on CzEng 1.7 (Bojar et al., 2016) ."
                    },
                    {
                        "id": 152,
                        "string": "5 We used byte-pair encoding (BPE) with a vocabulary of 30k sub-word units, shared for both languages."
                    },
                    {
                        "id": 153,
                        "string": "For English, the average sentence length is 15 ± 19 BPE tokens and the original vocabulary size is 1.9M."
                    },
                    {
                        "id": 154,
                        "string": "We performed 1 training epoch with the batch size of 128 on the entire training section (57M sentence pairs)."
                    },
                    {
                        "id": 155,
                        "string": "The datasets for both de and cs models come with their respective development and test sets of sentence pairs, which we use for the evaluation of translation quality."
                    },
                    {
                        "id": 156,
                        "string": "(We use 1k randomly selected sentence pairs from CzEng 1.7 dtest as a development set."
                    },
                    {
                        "id": 157,
                        "string": "For evaluation, we use the entire etest.)"
                    },
                    {
                        "id": 158,
                        "string": "We also evaluate the InferSent model 6 (Conneau et al., 2017) as pre-trained on the natural language inference (NLI) task."
                    },
                    {
                        "id": 159,
                        "string": "InferSent has been shown to achieve state-of-the-art results on the SentEval tasks."
                    },
                    {
                        "id": 160,
                        "string": "We also include a bag-ofwords baseline (GloVe-BOW) obtained by averaging GloVe 7 word vectors (Pennington et al., 2014) ."
                    },
                    {
                        "id": 161,
                        "string": "Translation Quality We estimate translation quality of the various models using single-reference case-sensitive BLEU (Papineni et al., 2002) as implemented in Neural Monkey (the reference implementation is mteval-v13a.pl from NIST or Moses)."
                    },
                    {
                        "id": 162,
                        "string": "Tables 4 and 5 provide the results on the two datasets."
                    },
                    {
                        "id": 163,
                        "string": "The cs dataset is much larger and the training takes much longer."
                    },
                    {
                        "id": 164,
                        "string": "We were thus able to experiment with only a subset of the possible model configurations."
                    },
                    {
                        "id": 165,
                        "string": "Table 5 : Translation quality of cs models."
                    },
                    {
                        "id": 166,
                        "string": "The columns \"Size\" and \"Heads\" specify the total size of sentence representation and the number of heads of encoder inner attention."
                    },
                    {
                        "id": 167,
                        "string": "In both cases, the best performing is the ATTN Bahdanau et al."
                    },
                    {
                        "id": 168,
                        "string": "model, followed by Transformer (de only) and our ATTN-ATTN (compound attention)."
                    },
                    {
                        "id": 169,
                        "string": "The non-attentive FINAL Cho et al."
                    },
                    {
                        "id": 170,
                        "string": "is the worst, except cs-MAXPOOL."
                    },
                    {
                        "id": 171,
                        "string": "For 5 selected cs models, we also performed the WMT-style 5-way manual ranking on 200 sentence pairs."
                    },
                    {
                        "id": 172,
                        "string": "The annotations are interpreted as simulated pairwise comparisons."
                    },
                    {
                        "id": 173,
                        "string": "For each model, the final score is the number of times the model was judged better than the other model in the pair."
                    },
                    {
                        "id": 174,
                        "string": "Tied pairs are excluded."
                    },
                    {
                        "id": 175,
                        "string": "The results, shown in Table 5, confirm the automatic evaluation results."
                    },
                    {
                        "id": 176,
                        "string": "We also checked the relation between BLEU and the number of heads and representation size."
                    },
                    {
                        "id": 177,
                        "string": "While there are many exceptions, the general ten-dency is that the larger the representation or the more heads, the higher the BLEU score."
                    },
                    {
                        "id": 178,
                        "string": "The Pearson correlation between BLEU and the number of heads is 0.87 for cs and 0.31 for de."
                    },
                    {
                        "id": 179,
                        "string": "SentEval Due to the large number of SentEval tasks, we present the results abridged in two different ways: by reporting averages (Table 6 ) and by showing only the best models in comparison with other methods ( Table 7) ."
                    },
                    {
                        "id": 180,
                        "string": "The full results can be found in the supplementary material."
                    },
                    {
                        "id": 181,
                        "string": "Table 6 provides averages of the classification and similarity results, along with the results of selected tasks (SNLI, SICK-E)."
                    },
                    {
                        "id": 182,
                        "string": "As the baseline for classifications tasks, we assign the most frequent class to all test examples."
                    },
                    {
                        "id": 183,
                        "string": "8 The de models are generally worse, most likely due to the higher OOV rate and overall simplicity of the training sentences."
                    },
                    {
                        "id": 184,
                        "string": "On cs, we see a clear pattern that more heads hurt the performance."
                    },
                    {
                        "id": 185,
                        "string": "The de set has more variations to consider but the results are less conclusive."
                    },
                    {
                        "id": 186,
                        "string": "For the similarity results, it is worth noting that cs-ATTN-ATTN performs very well with 1 attention head but fails miserably with more heads."
                    },
                    {
                        "id": 187,
                        "string": "Otherwise, the relation to the number of heads is less clear."
                    },
                    {
                        "id": 188,
                        "string": "Table 7 compares our strongest models with other approaches on all tasks."
                    },
                    {
                        "id": 189,
                        "string": "Besides InferSent and GloVe-BOW, we include SkipThought as evaluated by Conneau et al."
                    },
                    {
                        "id": 190,
                        "string": "(2017) , and the NMTbased embeddings by Hill et al."
                    },
                    {
                        "id": 191,
                        "string": "(2016) trained on the English-French WMT15 dataset (this is the best result reported by Hill et al."
                    },
                    {
                        "id": 192,
                        "string": "for NMT) ."
                    },
                    {
                        "id": 193,
                        "string": "We see that the supervised InferSent clearly outperforms all other models in all tasks except for MRPC and TREC."
                    },
                    {
                        "id": 194,
                        "string": "Results by Hill et al."
                    },
                    {
                        "id": 195,
                        "string": "are always lower than our best setups, except MRPC and TREC again."
                    },
                    {
                        "id": 196,
                        "string": "On classification tasks, our models are outperformed even by GloVe-BOW, except for the NLI tasks (SICK-E and SNLI) where cs-FINAL-CTX is better."
                    },
                    {
                        "id": 197,
                        "string": "Table 6 : Abridged SentEval and paraphrase evaluation results."
                    },
                    {
                        "id": 198,
                        "string": "Full results in supplementary material."
                    },
                    {
                        "id": 199,
                        "string": "AvgAcc is the average of all 10 SentEval classification tasks (see Table S1 in supplementary material), AvgSim averages all 7 similarity tasks (see Table S2 )."
                    },
                    {
                        "id": 200,
                        "string": "Hy-and CO-stand for HyTER and COCO, respectively."
                    },
                    {
                        "id": 201,
                        "string": "\"H.\" is the number of attention heads."
                    },
                    {
                        "id": 202,
                        "string": "We give the out-of-vocabulary (OOV) rate and the perplexity of a 4-gram language model (LM) trained on the English side of the respective parallel corpus and evaluated on all available data for a given task."
                    },
                    {
                        "id": 203,
                        "string": "than L 2 distance."
                    },
                    {
                        "id": 204,
                        "string": "We therefore do not list L 2based results (except in the supplementary material)."
                    },
                    {
                        "id": 205,
                        "string": "This evaluation seems less stable and discerning than the previous two, but we can again confirm the victory of InferSent followed by our nonattentive cs models."
                    },
                    {
                        "id": 206,
                        "string": "cs and de models are no longer clearly separated."
                    },
                    {
                        "id": 207,
                        "string": "Paraphrase Scores Discussion To assess the relation between the various measures of sentence representations and translation quality as estimated by BLEU, we plot a heatmap of Pearson correlations in Fig. 2."
                    },
                    {
                        "id": 209,
                        "string": "As one example, Fig. 3 details the cs models' BLEU scores and AvgAcc."
                    },
                    {
                        "id": 211,
                        "string": "A good sign is that on the cs dataset, most metrics of representation are positively correlated (the pairwise Pearson correlation is 0.78 ± 0.32 on average), the outlier being TREC (−0.16 ± 0.16 correlation with the other metrics on average) On the other hand, most representation metrics correlate with BLEU negatively (−0.57±0.31) on cs."
                    },
                    {
                        "id": 212,
                        "string": "The pattern is less pronounced but still clear also on the de dataset."
                    },
                    {
                        "id": 213,
                        "string": "A detailed understanding of what the learned representations contain is difficult."
                    },
                    {
                        "id": 214,
                        "string": "We can only speculate that if the NMT model has some capability for following the source sentence superficially, it will use it and spend its capacity on closely matching the target sentences rather than on deriving some representation of meaning which would reflect e.g. semantic similarity."
                    },
                    {
                        "id": 216,
                        "string": "We assume that this can be a direct consequence of NMT being trained for cross entropy: putting the exact word forms in exact positions as the target sentence requires."
                    },
                    {
                        "id": 217,
                        "string": "Performing well in single-reference BLEU is not an indication that the system understands the meaning but rather that it can maximize the chance of producing the n-grams required by the reference."
                    },
                    {
                        "id": 218,
                        "string": "The negative correlation between the number of attention heads and the representation metrics from Fig. 3 (−0.81 ± 0.12 for cs and −0.18 ± 0.19 for de, on average) can be partly explained by the following observation."
                    },
                    {
                        "id": 220,
                        "string": "We plotted the induced alignments (e.g. Fig. 4) and noticed that the heads tend to \"divide\" the sentence into segments."
                    },
                    {
                        "id": 223,
                        "string": "While one would hope that the segments correspond to some meaningful units of the sentence (e.g. subject, predicate, object), we failed to find any such interpretation for ATTN-ATTN and for cs models in general."
                    },
                    {
                        "id": 225,
                        "string": "Instead, the heads divide the source sentence more or less equidistantly, as documented by Fig. 5."
                    },
                    {
                        "id": 227,
                        "string": "Such a multi-headed sentence representation is then less fit for representing e.g."
                    },
                    {
                        "id": 228,
                        "string": "paraphrases where the subject and object swap their position due to passivization, because their representations are then accessed by different heads, and thus end up in different parts of the sentence embedding vector."
                    },
                    {
                        "id": 229,
                        "string": "For de-ATTN-CTX models, we observed a much  flatter distribution of attention weights for each head and, unlike in the other models, we were often able to identify a head focusing on the main verb."
                    },
                    {
                        "id": 230,
                        "string": "This difference between ATTN-ATTN and some ATTN-CTX models could be explained by the fact that in the former, the decoder is oblivious to the ordering of the heads (because of decoder attention), and hence it may not be useful for a given head to look for a specific syntactic or semantic role."
                    },
                    {
                        "id": 231,
                        "string": "Conclusion We presented a novel variation of attentive NMT models Vaswani et al., 2017) that again provides a single meeting point with a continuous representation of the source sentence."
                    },
                    {
                        "id": 232,
                        "string": "We evaluated these representations with a number of measures reflecting how well the meaning of the source sentence is captured."
                    },
                    {
                        "id": 233,
                        "string": "While our proposed \"compound attention\" leads to translation quality not much worse than the fully attentive model, it generally does not perform well in the meaning representation."
                    },
                    {
                        "id": 234,
                        "string": "Quite on the contrary, the better the BLEU score, the worse the meaning representation."
                    },
                    {
                        "id": 235,
                        "string": "We believe that this observation is important for representation learning where bilingual MT now seems less likely to provide useful data, but perhaps more so for MT itself, where the struggle towards a high single-reference BLEU score (or even worse, cross entropy) leads to systems that refuse to consider the meaning of the sentence."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 19
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 20,
                        "end": 39
                    },
                    {
                        "section": "Model Architectures",
                        "n": "3",
                        "start": 40,
                        "end": 53
                    },
                    {
                        "section": "Compound Attention",
                        "n": "3.1",
                        "start": 54,
                        "end": 96
                    },
                    {
                        "section": "Constant Context",
                        "n": "3.2",
                        "start": 97,
                        "end": 100
                    },
                    {
                        "section": "Transformer with Inner Attention",
                        "n": "3.3",
                        "start": 101,
                        "end": 106
                    },
                    {
                        "section": "Representation Evaluation",
                        "n": "4",
                        "start": 107,
                        "end": 111
                    },
                    {
                        "section": "SentEval",
                        "n": "4.1",
                        "start": 112,
                        "end": 121
                    },
                    {
                        "section": "Paraphrases",
                        "n": "4.2",
                        "start": 122,
                        "end": 141
                    },
                    {
                        "section": "Experimental Results",
                        "n": "5",
                        "start": 142,
                        "end": 160
                    },
                    {
                        "section": "Translation Quality",
                        "n": "5.1",
                        "start": 161,
                        "end": 178
                    },
                    {
                        "section": "SentEval",
                        "n": "5.2",
                        "start": 179,
                        "end": 206
                    },
                    {
                        "section": "Discussion",
                        "n": "6",
                        "start": 207,
                        "end": 230
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 231,
                        "end": 235
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1292-Table4-1.png",
                        "caption": "Table 4: Translation quality of de models.",
                        "page": 5,
                        "bbox": {
                            "x1": 87.84,
                            "x2": 274.08,
                            "y1": 61.44,
                            "y2": 312.96
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Table5-1.png",
                        "caption": "Table 5: Translation quality of cs models.",
                        "page": 5,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 287.03999999999996,
                            "y1": 344.64,
                            "y2": 466.08
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Table1-1.png",
                        "caption": "Table 1: Different RNN-based architectures and their properties. Legend: “pooling” – vectors combined by mean or max (AVGPOOL, MAXPOOL); “sent. emb.” – sentence embedding, i.e. the fixed-size sentence representation computed by the encoder. “init” – initial decoder state. “ctx” – context vector, i.e. input for the decoder cell. “input for att.” – input for the decoder attention.",
                        "page": 1,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 521.28,
                            "y1": 65.75999999999999,
                            "y2": 144.0
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Figure1-1.png",
                        "caption": "Figure 1: An illustration of compound attention with 4 attention heads. The figure shows the computations that result in the decoder state s3 (in addition, each state si depends on the previous target token yi−1). Note that the matrix M is the same for all positions in the output sentence and it can thus serve as the source sentence representation.",
                        "page": 1,
                        "bbox": {
                            "x1": 326.88,
                            "x2": 504.47999999999996,
                            "y1": 223.67999999999998,
                            "y2": 447.35999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Table6-1.png",
                        "caption": "Table 6: Abridged SentEval and paraphrase evaluation results. Full results in supplementary material. AvgAcc is the average of all 10 SentEval classification tasks (see Table S1 in supplementary material), AvgSim averages all 7 similarity tasks (see Table S2). Hy- and CO- stand for HyTER and COCO, respectively. “H.” is the number of attention heads. We give the out-of-vocabulary (OOV) rate and the perplexity of a 4-gram language model (LM) trained on the English side of the respective parallel corpus and evaluated on all available data for a given task.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 63.839999999999996,
                            "y2": 420.0
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Table7-1.png",
                        "caption": "Table 7: Comparison of state-of-the-art SentEval results with our best models and the Glove-BOW baseline. “H.” is the number of attention heads. Reprinted results are marked with †, others are our measurements.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 515.52,
                            "y2": 709.92
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Figure3-1.png",
                        "caption": "Figure 3: BLEU vs. AvgAcc for cs models.",
                        "page": 7,
                        "bbox": {
                            "x1": 311.52,
                            "x2": 520.8,
                            "y1": 69.6,
                            "y2": 234.72
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Figure2-1.png",
                        "caption": "Figure 2: Pearson correlations. Upper triangle: de models, lower triangle: cs models. Positive values shown in shades of green. For similarity tasks, only the Pearson (not Spearman) coefficient is represented.",
                        "page": 7,
                        "bbox": {
                            "x1": 76.32,
                            "x2": 286.08,
                            "y1": 67.67999999999999,
                            "y2": 289.44
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Table2-1.png",
                        "caption": "Table 2: SentEval classification tasks. Tasks without a test set use 10-fold cross-validation.",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 63.36,
                            "y2": 223.2
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Table3-1.png",
                        "caption": "Table 3: SentEval semantic relatedness tasks.",
                        "page": 3,
                        "bbox": {
                            "x1": 94.56,
                            "x2": 267.36,
                            "y1": 262.08,
                            "y2": 342.24
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Figure4-1.png",
                        "caption": "Figure 4: Alignment between a source sentence (left) and the output (right) as represented in the ATTN-ATTN model with 8 heads and size of 1000. Each color represents a different head; the stroke width indicates the alignment weight; weights ≤ 0.01 pruned out. (Best viewed in color.)",
                        "page": 8,
                        "bbox": {
                            "x1": 78.72,
                            "x2": 283.68,
                            "y1": 69.12,
                            "y2": 406.08
                        }
                    },
                    {
                        "filename": "../figure/image/1292-Figure5-1.png",
                        "caption": "Figure 5: Attention weight by relative position in the source sentence (average over dev set excluding sentences shorter than 8 tokens). Same model as in Fig. 4. Each plot corresponds to one head.",
                        "page": 8,
                        "bbox": {
                            "x1": 313.44,
                            "x2": 517.92,
                            "y1": 70.56,
                            "y2": 264.48
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-66"
        },
        {
            "slides": {
                "1": {
                    "title": "Unsupervised Models for Language",
                    "text": [
                        "Discrete Latent Variable Autoencoders",
                        "Model the discreteness of language",
                        "Sampling is not differentiable",
                        "REINFORCE: sample inefficient and unstable"
                    ],
                    "page_nums": [
                        3,
                        4
                    ],
                    "images": []
                },
                "2": {
                    "title": "Contributions",
                    "text": [
                        "Model Supervision Abstractive Differentiable Latent",
                        "Fully unsupervised and abstractive",
                        "Fully differentiable (continuous approximations)",
                        "Human-readable compressions via LM prior",
                        "User-defined flexible compression ratio",
                        "SOTA in unsupervised sentence compression"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "3": {
                    "title": "SEQ3 Overview",
                    "text": [
                        "Reconstruction loss: distill input into the latent sequence",
                        "LM Prior loss: human-readable compressions",
                        "Compressor Minimize DKL between Compressor and LM:",
                        "Topic loss: similar topic as input",
                        "vx: IDF-weighted average of esi",
                        "Reconstructor vy: average of eci",
                        "Length constraints: user-defined shorter length",
                        "Length-aware decoder initializat ion",
                        "Countdown Encoder inputs Decoder",
                        "3. Explicit length penalty"
                    ],
                    "page_nums": [
                        6,
                        7,
                        8,
                        9,
                        10,
                        11,
                        12,
                        13,
                        14,
                        15,
                        16,
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23,
                        24,
                        25
                    ],
                    "images": []
                },
                "4": {
                    "title": "Differentiable Sampling",
                    "text": [
                        "Forward-pass: Discrete embedding (Gumbel-max trick)",
                        "Backward-pass: Mixture of embeddings (Gumbel-softmax approx.)"
                    ],
                    "page_nums": [
                        26,
                        27,
                        28
                    ],
                    "images": []
                },
                "5": {
                    "title": "Experimental Setup",
                    "text": [
                        "Train LM (LM prior) Train seq3",
                        "Never exposed to target sentences (compressions)",
                        "Vocabulary: 15K most frequent words in source sentences",
                        "Average F1 of ROUGE-1, ROUGE-2, ROUGE-L"
                    ],
                    "page_nums": [
                        29
                    ],
                    "images": []
                },
                "6": {
                    "title": "Results on Gigaword",
                    "text": [
                        "Supervision Model R-1 R-2 R-L",
                        "Unsupervised Pretrained Generator (Wang & Lee,2018)",
                        "Table: Results on (English) Gigaword for sentence compression."
                    ],
                    "page_nums": [
                        30,
                        31
                    ],
                    "images": []
                },
                "7": {
                    "title": "Ablation",
                    "text": [
                        "Both topic and LM losses work in synergy",
                        "LM prior loss: how words should be included",
                        "Topic loss: what words to include"
                    ],
                    "page_nums": [
                        32
                    ],
                    "images": []
                },
                "8": {
                    "title": "Model Outputs",
                    "text": [
                        "the central election commission ( cec ) on monday decided that taiwan will hold another election of national assembly members",
                        "national <unk> election scheduled for may",
                        "the central election commission ( cec ) announced elections",
                        "INPUT dave bassett resigned as manager of struggling english pre-",
                        "mier league side nottingham forest on saturday after they were",
                        "knocked out of the f.a. cup in the third round according to local reports on saturday",
                        "forest manager bassett quits",
                        "dave bassett resigned as manager of struggling english premier league side UNK forest on knocked round press"
                    ],
                    "page_nums": [
                        33
                    ],
                    "images": []
                },
                "9": {
                    "title": "Conclusions and Future Work",
                    "text": [
                        "Fully differentiable seq2seq2seq (seq3) autoencoder",
                        "SOTA in unsupervised abstractive sentence compression",
                        "Topic loss is essential for convergence",
                        "LM prior improves readability",
                        "Next Step: unsupervised machine translation",
                        "Machine Translation Dialogue Text to Code",
                        "the big black cat A: What do you want to do tonight? sort a list of numbers",
                        "B: Lets go for a movie!"
                    ],
                    "page_nums": [
                        34,
                        35
                    ],
                    "images": []
                },
                "10": {
                    "title": "Differentiable Sampling Extended",
                    "text": [
                        "Soft-argmax: Weighted sum of embeddings from peaked softmax",
                        "= argmax(ai i), i Gumbel",
                        "y = softmax(ai i), i Gumbel",
                        "Gumbel-softmax: Differentiable approximation to sampling",
                        "Straight-Through: forward-pass: one-hot, backward-pass: soft"
                    ],
                    "page_nums": [
                        38,
                        39,
                        40,
                        41
                    ],
                    "images": []
                },
                "11": {
                    "title": "Out of Vocabulary OOV Words",
                    "text": [
                        "We copy OOV words using the approach of Fevry and Phang (2018).",
                        "Simpler alternative to pointer networks (See et al., 2017).",
                        "We use a set of special OOV tokens: oov1, oov2, . . . , oovN",
                        "We replace the ith unknown word in the input with the oovi token.",
                        "If all the OOV tokens are used, we use the generic UNK token.",
                        "In inference, we replace the special tokens with the original words.",
                        "RAW John arrived in Rome yesterday. While in Rome, John had fun.",
                        "INPUT oov1 arrived in oov2 yesterday. While in oov2, oov1 had fun."
                    ],
                    "page_nums": [
                        42
                    ],
                    "images": []
                },
                "12": {
                    "title": "Temperature for Gumbel Softmax",
                    "text": [
                        "Temperature does not affect the forward pass, but it affects gradients.",
                        "Havrylov & Titov (2017) tune bound",
                        "In our experiments the learned temperature lead to instability."
                    ],
                    "page_nums": [
                        43
                    ],
                    "images": []
                },
                "13": {
                    "title": "Implementation Details",
                    "text": [
                        "Decoders: 2-layer unidirectional LSTM with size",
                        "Embedding: initialize with 100d GloVe (Pennington et al., 2014)",
                        "Tied encoders of the compressor and reconstructor.",
                        "Shared embedding layer for all encoders and decoders.",
                        "Tied embedding-output layers of both decoders."
                    ],
                    "page_nums": [
                        44
                    ],
                    "images": []
                },
                "14": {
                    "title": "Length Control",
                    "text": [
                        "Sample target length M.",
                        "Decoders state length-aware initialization."
                    ],
                    "page_nums": [
                        45,
                        46,
                        47,
                        48
                    ],
                    "images": []
                },
                "15": {
                    "title": "Results on DUC Shared Tasks",
                    "text": [
                        "Table: Results on the DUC-2004"
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": [
                        "figure/image/1293-Table2-1.png"
                    ]
                },
                "16": {
                    "title": "Model Output Extra",
                    "text": [
                        "INPUT the american sailors who thwarted somali pirates flew home",
                        "to the u.s. on wednesday but without their captain , who was still aboard a navy destroyer after being rescued from the hijackers",
                        "GOLD us sailors who thwarted pirate hijackers fly home",
                        "SEQ3 the american sailors who foiled somali pirates flew home"
                    ],
                    "page_nums": [
                        50
                    ],
                    "images": []
                }
            },
            "paper_title": "SEQ 3 : Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression",
            "paper_id": "1293",
            "paper": {
                "title": "SEQ 3 : Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression",
                "abstract": "Neural sequence-to-sequence models are currently the dominant approach in several natural language processing tasks, but require large parallel corpora. We present a sequenceto-sequence-to-sequence autoencoder (SEQ 3 ), consisting of two chained encoder-decoder pairs, with words used as a sequence of discrete latent variables. We apply the proposed model to unsupervised abstractive sentence compression, where the first and last sequences are the input and reconstructed sentences, respectively, while the middle sequence is the compressed sentence. Constraining the length of the latent word sequences forces the model to distill important information from the input. A pretrained language model, acting as a prior over the latent sequences, encourages the compressed sentences to be human-readable. Continuous relaxations enable us to sample from categorical distributions, allowing gradient-based optimization, unlike alternatives that rely on reinforcement learning. The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Neural sequence-to-sequence models (SEQ2SEQ) perform impressively well in several natural language processing tasks, such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2015) or syntactic constituency parsing (Vinyals et al., 2015) ."
                    },
                    {
                        "id": 1,
                        "string": "However, they require massive parallel training datasets (Koehn and Knowles, 2017) ."
                    },
                    {
                        "id": 2,
                        "string": "Consequently there has been extensive work on utilizing non-parallel corpora to boost the performance of SEQ2SEQ models (Sennrich et al., 2016; Gülçehre et al., 2015) , mostly in neural machine translation where models that require absolutely no parallel corpora have also been pro- posed (Artetxe et al., 2018; Lample et al., 2018b) ."
                    },
                    {
                        "id": 3,
                        "string": "Unsupervised (or semi-supervised) SEQ2SEQ models have also been proposed for summarization tasks with no (or small) parallel text-summary sets, including unsupervised sentence compression."
                    },
                    {
                        "id": 4,
                        "string": "Current models, however, barely reach lead-N baselines (Fevry and Phang, 2018; Wang and Lee, 2018) , and/or are non-differentiable (Wang and Lee, 2018; Miao and Blunsom, 2016) , thus relying on reinforcement learning, which is unstable and inefficient."
                    },
                    {
                        "id": 5,
                        "string": "By contrast, we propose a sequence-to-sequence-to-sequence autoencoder, dubbed SEQ 3 , that can be trained end-to-end via gradient-based optimization."
                    },
                    {
                        "id": 6,
                        "string": "SEQ 3 employs differentiable approximations for sampling from categorical distributions (Maddison et al., 2017; Jang et al., 2017) , which have been shown to outperform reinforcement learning (Havrylov and Titov, 2017) ."
                    },
                    {
                        "id": 7,
                        "string": "Therefore it is a generic framework which can be easily extended to other tasks, e.g., machine translation and semantic parsing via task-specific losses."
                    },
                    {
                        "id": 8,
                        "string": "In this work, as a first step, we apply SEQ 3 to unsupervised abstractive sentence compression."
                    },
                    {
                        "id": 9,
                        "string": "SEQ 3 ( §2) comprises two attentional encoder-decoder (Bahdanau et al., 2015) pairs (Fig. 1): a compressor C and a reconstructor R."
                    },
                    {
                        "id": 10,
                        "string": "C ( §2.1) receives an input text x = ⟨x_1, ..., x_N⟩ of N words, and generates a summary y = ⟨y_1, ..., y_M⟩ of M words (M < N), y being a latent variable."
                    },
                    {
                        "id": 17,
                        "string": "R and C communicate only through the discrete words of the summary y ( §2.2)."
                    },
                    {
                        "id": 18,
                        "string": "R ( §2.3) produces a sequence x̂ = ⟨x̂_1, ..., x̂_N⟩ of N words from y, trying to minimize a reconstruction loss $L_R(x, \\hat{x})$ ( §2.5)."
                    },
                    {
                        "id": 19,
                        "string": "Figure 2: More detailed illustration of SEQ 3."
                    },
                    {
                        "id": 22,
                        "string": "The compressor (C) produces a summary from the input text, and the reconstructor (R) tries to reproduce the input from the summary."
                    },
                    {
                        "id": 23,
                        "string": "R and C comprise an attentional encoder-decoder each, and communicate only through the (discrete) words of the summary."
                    },
                    {
                        "id": 24,
                        "string": "The LM prior incentivizes C to produce human-readable summaries, while topic loss rewards summaries with similar topicindicating words as the input text."
                    },
                    {
                        "id": 26,
                        "string": "A pretrained language model acts as a prior on y, introducing an additional loss L P (x, y) that encourages SEQ 3 to produce human-readable summaries."
                    },
                    {
                        "id": 27,
                        "string": "A third loss L T (x, y) rewards summaries y with similar topic-indicating words as x."
                    },
                    {
                        "id": 28,
                        "string": "Experiments ( §3) on the Gigaword sentence compression dataset (Rush et al., 2015) and the DUC-2003 and DUC-2004 shared tasks (Over et al., 2007) produce promising results."
                    },
                    {
                        "id": 29,
                        "string": "Our contributions are: (1) a fully differentiable sequence-to-sequence-to-sequence (SEQ 3 ) autoencoder that can be trained without parallel data via gradient optimization; (2) an application of SEQ 3 to unsupervised abstractive sentence compression, with additional task-specific loss functions; (3) state of the art performance in unsupervised abstractive sentence compression."
                    },
                    {
                        "id": 30,
                        "string": "This work is a step towards exploring the potential of SEQ 3 in other tasks, such as machine translation."
                    },
                    {
                        "id": 31,
                        "string": "Proposed Model Compressor The bottom left part of Fig. 2 illustrates the internals of the compressor C. An embedding layer projects the source sequence x to the word embeddings e^s = ⟨e^s_1, ..., e^s_N⟩, which are then encoded by a bidirectional RNN, producing h^s = ⟨h^s_1, ..., h^s_N⟩."
                    },
                    {
                        "id": 39,
                        "string": "Each h s t is the concatenation of the corresponding left-to-right and right-to-left states (outputs in LSTMs) of the bi-RNN."
                    },
                    {
                        "id": 40,
                        "string": "$h^s_t = [\\overrightarrow{\\mathrm{RNN}}_s(e^s_t, \\overrightarrow{h}^s_{t-1}); \\overleftarrow{\\mathrm{RNN}}_s(e^s_t, \\overleftarrow{h}^s_{t+1})]$. To generate the summary y, we employ the attentional RNN decoder of Luong et al. (2015), with their global attention and input feeding."
                    },
                    {
                        "id": 42,
                        "string": "Concretely, at each timestep (t ∈ {1, ..., M}) we compute a probability distribution a_i over all the states h^s_1, ..., h^s_N of the source encoder conditioned on the current state h^c_t of the compressor's decoder to produce a context vector c_t."
                    },
                    {
                        "id": 49,
                        "string": "a i = softmax(h s i ⊺ W a h c t ), c t = N ∑ i=1 a i h s i The matrix W a is learned."
                    },
                    {
                        "id": 50,
                        "string": "We obtain a probability distribution for y t over the vocabulary V by combining c t and the current state h c t of the decoder."
                    },
                    {
                        "id": 51,
                        "string": "o c t = tanh(W o [c t ; h c t ] + b o ) (1) u c t = W v o c t + b v (2) p(y t |y <t , x) = softmax(u c t ) (3) W o , b o , W v , b v are learned."
                    },
                    {
                        "id": 52,
                        "string": "c t is also used when updating the state h c t of the decoder, along with the embedding e c t of y t and a countdown argument M − t (scaled by a learnable w d ) indicating the number of the remaining words of the summary (Fevry and Phang, 2018; Kikuchi et al., 2016) ."
                    },
                    {
                        "id": 53,
                        "string": "$h^c_{t+1} = \\overrightarrow{\\mathrm{RNN}}_c(h^c_t, e^c_t, c_t, w_d(M - t))$ (4)"
                    },
                    {
                        "id": 54,
                        "string": "For each input x = ⟨x_1, ..., x_N⟩, we obtain a target length M for the summary y = ⟨y_1, ..., y_M⟩ by sampling (and rounding) from a uniform distribution U(αN, βN); α, β are hyper-parameters (α < β < 1); we set M = 5, if the sampled M is smaller."
                    },
                    {
                        "id": 60,
                        "string": "Sampling M, instead of using a static compression ratio, allows us to train a model capable of producing summaries with varying (e.g., user-specified) compression ratios."
                    },
                    {
                        "id": 61,
                        "string": "Controlling the output length in encoder-decoder architectures has been explored in machine translation (Kikuchi et al., 2016) and summarization (Fan et al., 2018) ."
                    },
                    {
                        "id": 62,
                        "string": "Differentiable Word Sampling To generate the summary, we need to sample its words y t from the categorical distributions p(y t |y <t , x), which is a non-differentiable process."
                    },
                    {
                        "id": 63,
                        "string": "Soft-Argmax Instead of sampling y_t, a simple workaround during training is to pass as input to the next timestep of C's decoder and to the corresponding timestep of R's encoder a weighted sum of all the vocabulary's (V) word embeddings, using a peaked softmax function (Goyal et al., 2017): $e^c_t = \\sum_{i}^{|V|} e(w_i)\\, \\mathrm{softmax}(u^c_t / \\tau)$ (5), where u^c_t is the unnormalized score in Eq. 2 (i.e., the logit) of each word w_i and τ ∈ (0, ∞) is the temperature."
                    },
                    {
                        "id": 65,
                        "string": "As τ → 0 most of the probability mass in Eq. 5 goes to the most probable word, hence the operation approaches the arg max."
                    },
                    {
                        "id": 67,
                        "string": "Gumbel-Softmax We still want to be able to perform sampling, though, as it has the benefit of adding stochasticity and facilitating exploration of the parameter space."
                    },
                    {
                        "id": 68,
                        "string": "Hence, we use the Gumbel-Softmax (GS) reparametrization trick (Maddison et al., 2017; Jang et al., 2017) as a low variance approximation of sampling from categorical distributions."
                    },
                    {
                        "id": 69,
                        "string": "Sampling a specific word y_t from the softmax (Eq. 3) is equivalent to adding (element-wise) to the logits an independent noise sample ξ from the Gumbel distribution¹ and taking the arg max: $y_t \\sim \\mathrm{softmax}(u^c_t) \\leftrightarrow y_t = \\arg\\max(u^c_t + \\xi)$ (6)"
                    },
                    {
                        "id": 70,
                        "string": "Therefore, using the GS trick, Eq. 5 becomes: $e^c_t = \\sum_{i}^{|V|} e(w_i)\\, \\mathrm{softmax}((u^c_t + \\xi) / \\tau)$ (7)"
                    },
                    {
                        "id": 71,
                        "string": "Straight-Through Both relaxations lead to mixtures of embeddings, which do not correspond to actual words."
                    },
                    {
                        "id": 72,
                        "string": "Even though this enables the compressor to communicate with the reconstructor using continuous values, thus fully utilizing the available embedding space, ultimately our aim is to constrain them to communicate using only natural language."
                    },
                    {
                        "id": 73,
                        "string": "In addition, an unwanted discrepancy is created between training (continuous embeddings) and test time (discrete embeddings)."
                    },
                    {
                        "id": 74,
                        "string": "We alleviate these problems with the Straight-Through estimator (ST) (Bengio et al., 2013) ."
                    },
                    {
                        "id": 75,
                        "string": "Specifically, in the forward pass of training we discretize ẽ^c_t by using the arg max (Eq. 6), whereas in the backward pass we compute the gradients using the GS (Eq. 7)."
                    },
                    {
                        "id": 78,
                        "string": "This is a biased estimator due 1 ξi = − log(− log(xi)), xi ∼ U (0, 1) to the mismatch between the forward and backward passes, but works well in practice."
                    },
                    {
                        "id": 79,
                        "string": "ST GS reportedly outperforms scheduled sampling (Goyal et al., 2017) and converges faster than reinforcement learning (Havrylov and Titov, 2017) ."
                    },
                    {
                        "id": 80,
                        "string": "Reconstructor The reconstructor (upper right of Fig. 2) works like the compressor, but its encoder operates on the embeddings e^c_1, ..., e^c_M of the words y_1, ..., y_M of the summary (exact embeddings of the sampled words y_t in the forward pass, approximate differentiable embeddings in the backward pass)."
                    },
                    {
                        "id": 88,
                        "string": "Decoder Initialization We initialize the hidden state of each decoder using a transformation of the concatenation $[\\overrightarrow{h}^s_N; \\overleftarrow{h}^s_1]$ of the last hidden states (from the two directions) of its bidirectional encoder and a length vector, following Mallinson et al. (2018)."
                    },
                    {
                        "id": 90,
                        "string": "The length vector for the decoder of the compressor C consists of the target summary length M, scaled by a learnable parameter w v , and the compression ratio M N ."
                    },
                    {
                        "id": 91,
                        "string": "h c 0 = tanh(W c [ − → h s N ; ← − h s 1 ; w v M ; M N ]) W c is a trainable hidden layer."
                    },
                    {
                        "id": 92,
                        "string": "The decoder of the reconstructor R is initialized similarly."
                    },
                    {
                        "id": 93,
                        "string": "Loss Functions Reconstruction Loss $L_R(x, \\hat{x})$ is the (negative) log-likelihood assigned by the (decoder of) R to the input (correctly reconstructed) words x = ⟨x_1, ..., x_N⟩, where p_R is the distribution of R: $L_R(x, \\hat{x}) = -\\sum_{i=1}^{N} \\log p_R(\\hat{x}_i = x_i)$."
                    },
                    {
                        "id": 94,
                        "string": "We do not expect $L_R(x, \\hat{x})$ to decrease to zero, as there is information loss through the compression."
                    },
                    {
                        "id": 97,
                        "string": "However, we expect it to drive the compressor to produce such sentences that will increase the likelihood of the target words in the reconstruction."
                    },
                    {
                        "id": 98,
                        "string": "LM Prior Loss To ensure that the summaries y are readable, we pretrain an RNN language model (see Appendix) on the source texts of the full training set."
                    },
                    {
                        "id": 99,
                        "string": "We compute the Kullback-Leibler divergence $D_{KL}$ between the probability distributions of the (decoder of the) compressor ($p(y_t | y_{<t}, x)$, Eq. 3) and the language model ($p_{LM}(y_t | y_{<t}, x)$)."
                    },
                    {
                        "id": 101,
                        "string": "Similar priors have been used in sentence compression (Miao and Blunsom, 2016) and agent communication (Havrylov and Titov, 2017) ."
                    },
                    {
                        "id": 102,
                        "string": "We also use the following task-specific losses."
                    },
                    {
                        "id": 103,
                        "string": "Topic Loss Words with high TF-IDF scores are indicative of the topic of a text (Ramos et al., 2003; Erkan and Radev, 2004) ."
                    },
                    {
                        "id": 104,
                        "string": "To encourage the compressor to preserve in the summary y the topicindicating words of the input x, we compute the TF-IDF-weighted average v x of the word embeddings of x and the average v y of the word embeddings of y and use their cosine distance as an ad- ditional loss L T = 1 − cos(v x , v y )."
                    },
                    {
                        "id": 105,
                        "string": "v x = N ∑ i=1 IDF(x i ) e s i ∑ N t=1 IDF(x t ) v y = 1 M M ∑ i=1 e c i (Using TF-IDF in v y did not help.)"
                    },
                    {
                        "id": 106,
                        "string": "All IDF scores are computed on the training set."
                    },
                    {
                        "id": 107,
                        "string": "Length Penalty A fourth loss $L_L$ (not shown in Fig. 1) helps the (decoder of the) compressor to predict the end-of-sequence (EOS) token at the target summary length M. $L_L$ is the cross-entropy between the distributions $p(y_t | y_{<t}, x)$ (Eq. 3) of the compressor at t = M + 1 and onward, with the one-hot distribution of the EOS token."
                    },
                    {
                        "id": 110,
                        "string": "Modeling Details Parameter Sharing We tie the weights of layers encoding similar information, to reduce the number of trainable parameters."
                    },
                    {
                        "id": 111,
                        "string": "First, we use a shared embedding layer for the encoders and decoders, initialized with 100-dimensional GloVe embeddings (Pennington et al., 2014) ."
                    },
                    {
                        "id": 112,
                        "string": "Additionally, we tie the shared embedding layer with the output layers of both decoders (Press and Wolf, 2017; Inan et al., 2017) ."
                    },
                    {
                        "id": 113,
                        "string": "Finally, we tie the encoders of the compressor and reconstructor (see Appendix)."
                    },
                    {
                        "id": 114,
                        "string": "OOVs Out-of-vocabulary words are handled as in Fevry and Phang (2018) (see Appendix)."
                    },
                    {
                        "id": 115,
                        "string": "Experiments Datasets We train SEQ 3 on the Gigaword sentence compression dataset (Rush et al., 2015) ."
                    },
                    {
                        "id": 116,
                        "string": "2 It consists of pairs, each containing the first sentence of a news article (x) and the article's headline (y), a total of 3.8M/189k/1951 train/dev/test pairs."
                    },
                    {
                        "id": 117,
                        "string": "We also test (without retraining) SEQ 3 on DUC-2003 and DUC-2004 shared tasks (Over et al., 2007) , containing 624/500 news articles each, paired with 4 reference summaries capped at 75 bytes."
                    },
                    {
                        "id": 118,
                        "string": "Methods compared We evaluated SEQ 3 and an ablated version of SEQ 3 ."
                    },
                    {
                        "id": 119,
                        "string": "We only used the article 2 github.com/harvardnlp/sent-summary sentences (sources) of the training pairs from Gigaword to train SEQ 3 ; our model is never exposed to target headlines (summaries) during training or evaluation, i.e., it is completely unsupervised."
                    },
                    {
                        "id": 120,
                        "string": "Our code is publicly available."
                    },
                    {
                        "id": 121,
                        "string": "3 We compare SEQ 3 to other unsupervised sentence compression models."
                    },
                    {
                        "id": 122,
                        "string": "We note that the extractive model of Miao and Blunsom (2016) relies on a pre-trained attention model using at least 500K parallel sentences, which is crucial to mitigate the inefficiency of sampling-based variational inference and REINFORCE."
                    },
                    {
                        "id": 123,
                        "string": "Therefore it is not comparable, as it is semi-supervised."
                    },
                    {
                        "id": 124,
                        "string": "The results of the extractive model of Fevry and Phang (2018) are also not comparable, as they were obtained on a different, not publicly available test set."
                    },
                    {
                        "id": 125,
                        "string": "We note, however, that they report that their system performs worse than the LEAD-8 baseline in ROUGE-2 and ROUGE-L on Gigaword."
                    },
                    {
                        "id": 126,
                        "string": "The only directly comparable unsupervised model is the abstractive 'Pretrained Generator' of Wang and Lee (2018) ."
                    },
                    {
                        "id": 127,
                        "string": "The version of 'Adversarial REINFORCE' that Wang and Lee (2018) consider unsupervised is actually weakly supervised, since its discriminator was exposed to the summaries of the same sources the rest of the model was trained on."
                    },
                    {
                        "id": 128,
                        "string": "As baselines, we use LEAD-8 for Gigaword, which simply selects the first 8 words of the source, and PREFIX for DUC, which includes the first 75 bytes of the source article."
                    },
                    {
                        "id": 129,
                        "string": "We also compare to supervised abstractive sentence compression methods (Tables 1-3 )."
                    },
                    {
                        "id": 130,
                        "string": "Following previous work, we report the average F1 of ROUGE-1, ROUGE-2, ROUGE-L (Lin, 2004) ."
                    },
                    {
                        "id": 131,
                        "string": "We implemented SEQ 3 with LSTMs (see Appendix) and during inference we perform greedy-sampling."
                    },
                    {
                        "id": 132,
                        "string": "Results Table 1 reports the Gigaword results."
                    },
                    {
                        "id": 133,
                        "string": "SEQ 3 outperforms the unsupervised Pretrained Generator across all metrics by a large margin."
                    },
                    {
                        "id": 134,
                        "string": "It also surpasses LEAD-8."
                    },
                    {
                        "id": 135,
                        "string": "If we remove the LM prior, performance drops, esp."
                    },
                    {
                        "id": 136,
                        "string": "in ROUGE-2 and ROUGE-L."
                    },
                    {
                        "id": 137,
                        "string": "This makes sense, since the pretrained LM rewards correct word order."
                    },
                    {
                        "id": 138,
                        "string": "We also tried removing the topic loss, but the model failed to converge and results were extremely poor (Table 1) ."
                    },
                    {
                        "id": 139,
                        "string": "Topic loss acts as a bootstrap mechanism, biasing the compressor to generate words that maintain the topic of the input text."
                    },
                    {
                        "id": 140,
                        "string": "This greatly reduces variance due to sampling in early stages of training, alleviating the need to pretrain individual ABS (Rush et al., 2015) 29.55 11.32 26.42 SEASS (Zhou et al., 2017) 36.15 17.54 33.63 words-lvt5k-1sent (Nallapati et al., 2016) 36 (Rush et al., 2015) 28 (Zajic et al., 2007 ) 25.12 6.46 20.12 Woodsend et al."
                    },
                    {
                        "id": 141,
                        "string": "(2010 22 6 17 ABS (Rush et al., 2015) 28  components, unlike works that rely on reinforcement learning (Miao and Blunsom, 2016; Wang and Lee, 2018) ."
                    },
                    {
                        "id": 142,
                        "string": "Overall, both losses work in synergy, with the topic loss driving what and the LM prior loss driving how words should be included in the summary."
                    },
                    {
                        "id": 143,
                        "string": "SEQ 3 behaves similarly on DUC-2003 and DUC-2004 (Tables 2-3) , although it was trained on Gigaword."
                    },
                    {
                        "id": 144,
                        "string": "In DUC-2003, however, it does not surpass the PREFIX baseline."
                    },
                    {
                        "id": 145,
                        "string": "Finally, Fig."
                    },
                    {
                        "id": 146,
                        "string": "3 illustrates three randomly sampled outputs of SEQ 3 on Gigaword."
                    },
                    {
                        "id": 147,
                        "string": "In the first one, SEQ 3 copies several words esp."
                    },
                    {
                        "id": 148,
                        "string": "from the beginning of the input (hence the high ROUGE-L) exhibiting extractive capabilities, though still being adequately abstractive (bold words denote paraphrases)."
                    },
                    {
                        "id": 149,
                        "string": "In the second one, SEQ 3 showcases its true abstractive power by paraphrasing and compressing multi-word expressions to single content words more heavily, still without losing the overall meaning."
                    },
                    {
                        "id": 150,
                        "string": "In the last example, SEQ 3 progressively becomes ungrammatical though interestingly retaining some content words from the input."
                    },
                    {
                        "id": 151,
                        "string": "Limitations and Future Work The model tends to copy the first words of the input sentence in the compressed text (Fig."
                    },
                    {
                        "id": 152,
                        "string": "3 )."
                    },
                    {
                        "id": 153,
                        "string": "We input: the american sailors who thwarted somali pirates flew home to the u.s. on wednesday but without their captain , who was still aboard a navy destroyer after being rescued from the hijackers ."
                    },
                    {
                        "id": 154,
                        "string": "gold: us sailors who thwarted pirate hijackers fly home SEQ 3 : the american sailors who foiled somali pirates flew home after crew hijacked ."
                    },
                    {
                        "id": 155,
                        "string": "input: the central election commission -lrb-cec -rrb-on monday decided that taiwan will hold another election of national assembly members on may # ."
                    },
                    {
                        "id": 156,
                        "string": "gold: national <unk> election scheduled for may input: dave bassett resigned as manager of struggling english premier league side nottingham forest on saturday after they were knocked out of the f.a."
                    },
                    {
                        "id": 157,
                        "string": "cup in the third round , according to local reports on saturday ."
                    },
                    {
                        "id": 158,
                        "string": "gold: forest manager bassett quits ."
                    },
                    {
                        "id": 159,
                        "string": "hypothesize that since the reconstructor is autoregressive, i.e., each word is conditioned on the previous one, errors occurring early in the generated sequence have cascading effects."
                    },
                    {
                        "id": 160,
                        "string": "This inevitably encourages the compressor to select the first words of the input."
                    },
                    {
                        "id": 161,
                        "string": "A possible workaround might be to modify SEQ 3 so that the first encoder-decoder pair would turn the inputs to longer sequences, and the second encoder-decoder would compress them trying to reconstruct the original inputs."
                    },
                    {
                        "id": 162,
                        "string": "In future work, we plan to explore the potential of SEQ 3 in other tasks, such as unsupervised machine translation (Lample et al., 2018a; Artetxe et al., 2018) and caption generation ."
                    },
                    {
                        "id": 163,
                        "string": "A Appendix A.1 Temperature for Gumbel-Softmax Even though the value of the temperature τ does not affect the forward pass, it greatly affects the gradient computation and therefore the learning process."
                    },
                    {
                        "id": 164,
                        "string": "Jang et al."
                    },
                    {
                        "id": 165,
                        "string": "(2017) propose to anneal τ during training towards zero."
                    },
                    {
                        "id": 166,
                        "string": "Gulcehre et al."
                    },
                    {
                        "id": 167,
                        "string": "(2017) propose to learn τ as a function of the compressor's decoder state h c t , in order to reduce hyperparameter tuning: τ (h c t ) = 1 log(1 + exp(w ⊺ τ h c t )) + 1 (8) where w τ is a trainable parameter and τ (h c t ) ∈ (0, 1)."
                    },
                    {
                        "id": 168,
                        "string": "Havrylov and Titov (2017) In our experiments, we had convergence problems with the learned temperature technique."
                    },
                    {
                        "id": 169,
                        "string": "We found that the compressor preferred values close to the upper bound, which led to unstable training, forcing us to set τ 0 > 1 to stabilize the training process."
                    },
                    {
                        "id": 170,
                        "string": "Our findings align with the behavior reported by Gu et al."
                    },
                    {
                        "id": 171,
                        "string": "(2018) ."
                    },
                    {
                        "id": 172,
                        "string": "Consequently, we follow their choice and fix τ = 0.5, which worked well in practice."
                    },
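A sketch of straight-through Gumbel-Softmax sampling with the fixed τ = 0.5 the authors settled on; PyTorch also ships F.gumbel_softmax, but the manual version below makes the temperature's role explicit:

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=0.5, hard=True):
    """logits: (batch, V) unnormalized scores over the vocabulary."""
    g = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)  # Gumbel(0, 1)
    y = F.softmax((logits + g) / tau, dim=-1)        # τ shapes the soft (backward) path
    if hard:
        # straight-through: the forward pass is discrete, so τ only affects gradients
        y_hard = F.one_hot(y.argmax(-1), logits.size(-1)).float()
        y = (y_hard - y).detach() + y
    return y
```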
                    {
                        "id": 173,
                        "string": "A.2 Out of Vocabulary (OOV) Words The vocabulary of our experiments comprises the 15k most frequent words of Gigaword's training input texts (without looking at their summaries)."
                    },
                    {
                        "id": 174,
                        "string": "To handle OOVs, we adopt the approach of Fevry and Phang (2018) , which can be thought of as a simpler form of copying compared to pointer networks (See et al., 2017) ."
                    },
                    {
                        "id": 175,
                        "string": "We use a small set (10 in our experiments)"
                    },
                    {
                        "id": 176,
                        "string": "of special OOV tokens"
                    },
                    {
                        "id": 177,
                        "string": "OOV_1, OOV_2, ..., OOV_10,"
                    },
                    {
                        "id": 178,
                        "string": "whose embeddings are updated during learning."
                    },
                    {
                        "id": 179,
                        "string": "Given an input text"
                    },
                    {
                        "id": 180,
                        "string": "x = ⟨x_1, ..., x_N⟩,"
                    },
                    {
                        "id": 181,
                        "string": "we replace (before feeding x to SEQ3) each unknown word x_i with the first unused (for the particular x) OOV token,"
                    },
                    {
                        "id": 182,
                        "string": "taking care to use the same OOV token for all the occurrences of the same unknown word in x."
                    },
                    {
                        "id": 183,
                        "string": "For example, if 'John' and 'Rome' are not in the vocabulary, then \"John arrived in Rome yesterday."
                    },
                    {
                        "id": 184,
                        "string": "While in Rome, John had fun.\""
                    },
                    {
                        "id": 185,
                        "string": "becomes \"OOV 1 arrived in OOV 2 yesterday."
                    },
                    {
                        "id": 186,
                        "string": "While in OOV 2 , OOV 1 had fun.\""
                    },
                    {
                        "id": 187,
                        "string": "If a new unknown word x i is encountered in x and all the available OOV tokens have been used, x i is replaced by 'UNK', whose embedding is also updated during learning."
                    },
                    {
                        "id": 188,
                        "string": "The OOV tokens (and 'UNK') are included in the vocabulary, and SEQ 3 learns to predict them as summary words, in effect copying the corresponding unknown words of x."
                    },
                    {
                        "id": 189,
                        "string": "At test time, we replace the OOV tokens with the corresponding unknown words."
                    },
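The replacement procedure can be sketched as follows (the function, token strings, and return values are our choices):

```python
def replace_oovs(tokens, vocab, n_oov=10):
    """Map each unknown word to the first unused OOV_k token, reusing the
    same token for repeated occurrences; overflow words become 'UNK'."""
    mapping, out = {}, []
    for w in tokens:
        if w in vocab:
            out.append(w)
        else:
            if w not in mapping:
                k = len(mapping) + 1
                mapping[w] = f"OOV{k}" if k <= n_oov else "UNK"
            out.append(mapping[w])
    return out, mapping  # the mapping restores the original words at test time

# "John arrived in Rome yesterday. While in Rome, John had fun."
# -> "OOV1 arrived in OOV2 yesterday. While in OOV2, OOV1 had fun."
```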
                    {
                        "id": 190,
                        "string": "A.3 Reconstruction Word Drop Our model is an instance of Variational Auto-Encoders (VAE) (Kingma and Welling, 2014)."
                    },
                    {
                        "id": 191,
                        "string": "A common problem in VAEs is that the reconstructor tends to disregard the latent variable."
                    },
                    {
                        "id": 192,
                        "string": "We weaken the reconstructor R, in order to force it to fully utilize the latent sequence y to generatex."
                    },
                    {
                        "id": 193,
                        "string": "To this end, we employ word dropout as in Bowman et al."
                    },
                    {
                        "id": 194,
                        "string": "(2016) and randomly drop a percentage of the input words, thus forcing R to rely solely on y to make good reconstructions."
                    },
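A minimal sketch of this word dropout; replacing dropped words with 'UNK' is our assumption (the text says only that a percentage of input words is dropped), with p = 0.5 as reported in §A.4:

```python
import random

def word_dropout(tokens, unk="UNK", p=0.5):
    """Replace each input word with UNK with probability p, so the
    reconstructor must lean on the latent summary y."""
    return [unk if random.random() < p else w for w in tokens]
```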
                    {
                        "id": 195,
                        "string": "A.4 Implementation and Hyper-parameters We implemented SEQ 3 in PyTorch (Paszke et al., 2017) ."
                    },
                    {
                        "id": 196,
                        "string": "All the RNNs are LSTMs (Hochreiter and Schmidhuber, 1997) ."
                    },
                    {
                        "id": 197,
                        "string": "We use a shared encoder for the compressor and the reconstructor, consisting of a two-layer bidirectional LSTM with size 300 per direction."
                    },
                    {
                        "id": 198,
                        "string": "We use separate decoders for the compressor and the reconstructor; each decoder is a two-layer unidirectional LSTM with size 300."
                    },
                    {
                        "id": 199,
                        "string": "The (shared) embedding layer of the compressor and the reconstructor is initialized with 100-dimensional GloVe embeddings (Pennington et al., 2014) and is tied with the output (projec-tion) layers of the decoders and jointly finetuned during training."
                    },
                    {
                        "id": 200,
                        "string": "We apply layer normalization (Ba et al., 2016) to the context vectors (Eq."
                    },
                    {
                        "id": 201,
                        "string": "1) of the compressor and the reconstructor."
                    },
                    {
                        "id": 202,
                        "string": "We apply word dropout ( §A.3) to the reconstructor with p = 0.5."
                    },
                    {
                        "id": 203,
                        "string": "During training, the summary length M is sampled from U (0.4 N, 0.6 N) ; during testing, M = 0.5 N. The four losses are summed, λs being scalar hyper-parameters."
                    },
                    {
                        "id": 204,
                        "string": "L = λ R L R + λ P L P + λ T L T + λ L L L We set λ R = λ T = 1, λ L = λ P = 0.1."
                    },
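In code, the objective is just the stated weighted sum (function and argument names are ours):

```python
def total_loss(L_R, L_P, L_T, L_L,
               lam_R=1.0, lam_P=0.1, lam_T=1.0, lam_L=0.1):
    # λ_R = λ_T = 1 and λ_L = λ_P = 0.1, as reported above
    return lam_R * L_R + lam_P * L_P + lam_T * L_T + lam_L * L_L
```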
                    {
                        "id": 205,
                        "string": "We use the Adam (Kingma and  optimizer, with batch size 128 and the default learning rate 0.001."
                    },
                    {
                        "id": 206,
                        "string": "The network is trained for 5 epochs."
                    },
                    {
                        "id": 207,
                        "string": "LM Prior The pretrained language model is a two-layer LSTM of size 1024 per layer."
                    },
                    {
                        "id": 208,
                        "string": "It uses its own embedding layer of size 256, which is randomly initialized and updated when training the language model."
                    },
                    {
                        "id": 209,
                        "string": "We apply dropout with p = 0.2 to the embedding layer and dropout with p = 0.5 to the LSTM layers."
                    },
                    {
                        "id": 210,
                        "string": "We use Adam (Kingma and  with batch size 128 and the network is trained for 30 epochs."
                    },
                    {
                        "id": 211,
                        "string": "The learning rate is set initially to 0.001 and is multiplied with γ = 0.5 every 10 epochs."
                    },
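This schedule (initial learning rate 0.001, multiplied by γ = 0.5 every 10 epochs, 30 epochs total) maps onto a standard PyTorch StepLR; the parameters and training loop below are placeholders:

```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder LM parameters
opt = torch.optim.Adam(params, lr=1e-3)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

for epoch in range(30):
    # ... one epoch of language-model training with batch size 128 ...
    opt.step()    # placeholder update
    sched.step()  # halves the learning rate every 10 epochs
```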
                    {
                        "id": 212,
                        "string": "Evaluation Following Chopra et al."
                    },
                    {
                        "id": 213,
                        "string": "(2016) , we filter out pairs with empty headlines from the test set."
                    },
                    {
                        "id": 214,
                        "string": "We employ the PYROUGE package with \"-m -n 2 -w 1.2\" to compute ROUGE scores."
                    },
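With the pyrouge wrapper around ROUGE-1.5.5, the evaluation call would look roughly like this; the directories and filename patterns are placeholders, and only the quoted flags come from the paper:

```python
from pyrouge import Rouge155

r = Rouge155()
r.system_dir = "outputs/"        # system summaries, one file per test example
r.model_dir = "references/"      # gold headlines
r.system_filename_pattern = r"(\d+).txt"
r.model_filename_pattern = "#ID#.txt"
print(r.convert_and_evaluate(rouge_args="-m -n 2 -w 1.2"))
```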
                    {
                        "id": 215,
                        "string": "We use the provided tokenizations of the Gigaword and DUC-2003 , DUC-2004 datasets."
                    },
                    {
                        "id": 216,
                        "string": "All hyper-parameters were tuned on the development set."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 30
                    },
                    {
                        "section": "Compressor",
                        "n": "2.1",
                        "start": 31,
                        "end": 61
                    },
                    {
                        "section": "Differentiable Word Sampling",
                        "n": "2.2",
                        "start": 62,
                        "end": 79
                    },
                    {
                        "section": "Reconstructor",
                        "n": "2.3",
                        "start": 80,
                        "end": 87
                    },
                    {
                        "section": "Decoder Initialization",
                        "n": "2.4",
                        "start": 88,
                        "end": 92
                    },
                    {
                        "section": "Loss Functions",
                        "n": "2.5",
                        "start": 93,
                        "end": 109
                    },
                    {
                        "section": "Modeling Details",
                        "n": "2.6",
                        "start": 110,
                        "end": 114
                    },
                    {
                        "section": "Experiments",
                        "n": "3",
                        "start": 115,
                        "end": 150
                    },
                    {
                        "section": "Limitations and Future Work",
                        "n": "4",
                        "start": 151,
                        "end": 216
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1293-Table1-1.png",
                        "caption": "Table 1: Average results on the (English) Gigaword dataset for abstractive sentence compression methods.",
                        "page": 4,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 514.0799999999999,
                            "y1": 63.839999999999996,
                            "y2": 168.0
                        }
                    },
                    {
                        "filename": "../figure/image/1293-Table2-1.png",
                        "caption": "Table 2: Averaged results on the DUC-2003 dataset; the top part reports results of supervised systems.",
                        "page": 4,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 284.15999999999997,
                            "y1": 202.56,
                            "y2": 249.12
                        }
                    },
                    {
                        "filename": "../figure/image/1293-Figure3-1.png",
                        "caption": "Figure 3: Good/bad example summaries on Gigaword.",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 522.24,
                            "y1": 202.56,
                            "y2": 428.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1293-Table3-1.png",
                        "caption": "Table 3: Averaged results on the DUC-2004 dataset; the top part reports results of supervised systems.",
                        "page": 4,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 284.15999999999997,
                            "y1": 289.44,
                            "y2": 358.08
                        }
                    },
                    {
                        "filename": "../figure/image/1293-Figure4-1.png",
                        "caption": "Figure 4: Plot of Eq. 9, with different values for the upper bound τ0.",
                        "page": 7,
                        "bbox": {
                            "x1": 78.24,
                            "x2": 284.15999999999997,
                            "y1": 361.44,
                            "y2": 538.56
                        }
                    },
                    {
                        "filename": "../figure/image/1293-Figure2-1.png",
                        "caption": "Figure 2: More detailed illustration of SEQ3. The compressor (C) produces a summary from the input text, and the reconstructor (R) tries to reproduce the input from the summary. R and C comprise an attentional encoder-decoder each, and communicate only through the (discrete) words of the summary. The LM prior incentivizes C to produce human-readable summaries, while topic loss rewards summaries with similar topicindicating words as the input text.",
                        "page": 1,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 289.44,
                            "y1": 68.64,
                            "y2": 271.2
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-67"
        },
        {
            "slides": {
                "0": {
                    "title": "Power laws of natural language",
                    "text": [
                        "2. Burstiness About how the words are aligned",
                        "Words occur in clusters These can be analyzed through power laws",
                        "Occurrences of words fluctuate",
                        "Todays talk is about quantifying the degree of fluctuation.",
                        "How these could be useful will be presented at the end."
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Fluctuation underlying text Look at variance in t",
                    "text": [
                        "Any words (any word, any set of words) occur in clusters",
                        "Occurrences of rare words in Moby Dick (below 3162th)",
                        "Two ways of analysis",
                        "Long range correlation weaknesses"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Any words any word any set of words occur in clusters",
                    "text": [
                        "Fluctuation underlying text Look at variance in",
                        "Occurrences of rare words in Moby Dick (below 3162th)",
                        "Variance is larger when events are clustered vs. random",
                        "Two ways of analysis",
                        "Fluctuation Analysis (Ebeling 1994) variance w.r.t.",
                        "Taylors analysis Our achievements Long range correlation variance w.r.t. mean"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Taylors law Smith 1938 Taylor 1961",
                    "text": [
                        "Power law between standard deviation and mean of event occurrences within (space or) time",
                        "Empirically (but is of course possible, too)",
                        "Empirically known to hold in vast fields (Eisler, 2007) ecology, life science, physics, finance, human dynamics",
                        "The only application to language is",
                        "Gerlach & Altmann (2014) not really Taylor analysis",
                        "We devised a new method based on the original concept of Taylors law"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Our method",
                    "text": [
                        "1 For every word kind C count its number of occurrence within given length",
                        "Estimate using the least squares method in log scale",
                        "2 Obtain mean C and standard deviation C of C.",
                        "@ log C log C'",
                        "3 Plot C and C for all words. CF0"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "5": {
                    "title": "Taylors law of natural language",
                    "text": [
                        "- - Moby Taylors Here, Every English, vocabulary Dick point law 250k in",
                        "5000. is size log a words, word 20k scale words kind",
                        "Taylor exponent corresponds to gradient of log -log plot.",
                        "Taylors law in log scale"
                    ],
                    "page_nums": [
                        6,
                        7
                    ],
                    "images": []
                },
                "6": {
                    "title": "Theoretical analysis of the exponent",
                    "text": [
                        "if all words are independent and identically distributed (i.i.d.).",
                        "Taylor Exponent because shuffled text is equivalent to i.i.d. process.",
                        "if words always co-occur with the same proportion. ex) Suppose that = {0,1}, and occurs always twice as"
                    ],
                    "page_nums": [
                        8,
                        9
                    ],
                    "images": []
                },
                "7": {
                    "title": "Taylors law for other data",
                    "text": [
                        "Lisp, crawled and parsed",
                        "dear this platform truck insert up xload things and hand unless let"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "8": {
                    "title": "Datasets",
                    "text": [
                        "Newspapers 3 (En,Zh,Ja) WSJ",
                        "Tagged Wiki 1 (En+tag) enwiki8",
                        "CHILDES 10(En, Fr, Thomas (English)",
                        "Program Codes C++, Lisp, Haskell,"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "9": {
                    "title": "Taylor exponents of various data kind",
                    "text": [
                        "None of the real texts showed the exponent 0.5"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "10": {
                    "title": "Summary thus far",
                    "text": [
                        "Taylors law holds in vast fields including natural/social science",
                        "Taylors law also holds in languages and other linguistic related sequential data",
                        "Taylor exponent shows the degree of co-occurrence among words",
                        "Taylor exponent differs among text categories",
                        "(No such quality for Zipfs law, Heaps law)",
                        "How can our results be useful?",
                        "Do machine generated texts produce"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "11": {
                    "title": "Machine generated text by n grams",
                    "text": [
                        "bigrams of Moby Dick"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": [
                        "figure/image/1294-Figure4-1.png"
                    ]
                },
                "12": {
                    "title": "Machine generated texts by character based LSTM language model",
                    "text": [
                        "Stacked LSTM (3 LSTM layers)",
                        "Distribution of following character",
                        "Learning: Shakespeare by naive setting",
                        "Generation: Probabilistic generation of succeeding characters",
                        "128 preceding characters State-of the art models present different results"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "13": {
                    "title": "Texts generated by machine translation",
                    "text": [
                        "Les Miserables translated by",
                        "Google translator (in English)",
                        "Fluctuation that derives from the context is provided by the source text"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                }
            },
            "paper_title": "Taylor's Law for Human Linguistic Sequences",
            "paper_id": "1294",
            "paper": {
                "title": "Taylor's Law for Human Linguistic Sequences",
                "abstract": "Taylor's law describes the fluctuation characteristics underlying a system in which the variance of an event within a time span grows by a power law with respect to the mean. Although Taylor's law has been applied in many natural and social systems, its application for language has been scarce. This article describes a new quantification of Taylor's law in natural language and reports an analysis of over 1100 texts across 14 languages. The Taylor exponents of written natural language texts were found to exhibit almost the same value. The exponent was also compared for other language-related data, such as the child-directed speech, music, and programming language code. The results show how the Taylor exponent serves to quantify the fundamental structural complexity underlying linguistic time series. The article also shows the applicability of these findings in evaluating language models.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Taylor's law characterizes how the variance of the number of events for a given time and space grows with respect to the mean, forming a power law."
                    },
                    {
                        "id": 1,
                        "string": "It is a quantification method for the clustering behavior of a system."
                    },
                    {
                        "id": 2,
                        "string": "Since the pioneering studies of this concept (Smith, 1938; Taylor, 1961) , a substantial number of studies have been conducted across various domains, including ecology, life science, physics, finance, and human dynamics, as well summarized in (Eisler, Bartos, and Kertész, 2007) ."
                    },
                    {
                        "id": 3,
                        "string": "More recently, Cohen and Xu (2015) reported Taylor exponents for random sampling from various distributions, and Calif and Schmitt (2015) reported Taylor's law in wind energy data using a non-parametric regression."
                    },
                    {
                        "id": 4,
                        "string": "Those two papers also refer to research about Taylor's law in a wide range of fields."
                    },
                    {
                        "id": 5,
                        "string": "Despite such diverse application across domains, there has been little analysis based on Taylor's law in studying natural language."
                    },
                    {
                        "id": 6,
                        "string": "The only such report, to the best of our knowledge, is Gerlach and Altmann (2014) , but they measured the mean and variance by means of the vocabulary size within a document."
                    },
                    {
                        "id": 7,
                        "string": "This approach essentially differs from the original concept of Taylor analysis, which fundamentally counts the number of events, and thus the theoretical background of Taylor's law as presented in Eisler, Bartos, and Kertész (2007) cannot be applied to interpret the results."
                    },
                    {
                        "id": 8,
                        "string": "For the work described in this article, we applied Taylor's law for texts, in a manner close to the original concept."
                    },
                    {
                        "id": 9,
                        "string": "We considered lexical fluctuation within texts, which involves the cooccurrence and burstiness of word alignment."
                    },
                    {
                        "id": 10,
                        "string": "The results can thus be interpreted according to the analytical results of Taylor's law, as described later."
                    },
                    {
                        "id": 11,
                        "string": "We found that the Taylor exponent is indeed a characteristic of texts and is universal across various kinds of texts and languages."
                    },
                    {
                        "id": 12,
                        "string": "These results are shown here for data including over 1100 singleauthor texts across 14 languages and large-scale newspaper data."
                    },
                    {
                        "id": 13,
                        "string": "Moreover, we found that the Taylor exponents for other symbolic sequential data, including child-directed speech, programming language code, and music, differ from those for written natural language texts, thus distinguishing different kinds of data sources."
                    },
                    {
                        "id": 14,
                        "string": "The Taylor exponent in this sense could categorize and quantify the structural complexity of language."
                    },
                    {
                        "id": 15,
                        "string": "The Chomsky hierarchy (Chomsky, 1956 ) is, of course, the most important framework for such categorization."
                    },
                    {
                        "id": 16,
                        "string": "The Taylor exponent is another way to quantify the complexity of natural language: it allows for continuous quantification based on lexical fluctuation."
                    },
                    {
                        "id": 17,
                        "string": "Since the Taylor exponent can quantify and characterize one aspect of natural language, our findings are applicable in computational linguistics to assess language models."
                    },
                    {
                        "id": 18,
                        "string": "At the end of this article, in §5, we report how the most basic character-based long short-term memory (LSTM) unit produces texts with a Taylor exponent of 0.50, equal to that of a sequence of independent and identically distributed random variables (an i.i.d."
                    },
                    {
                        "id": 19,
                        "string": "sequence)."
                    },
                    {
                        "id": 20,
                        "string": "This shows how such models are limited in producing consistent co-occurrence among words, as compared with a real text."
                    },
                    {
                        "id": 21,
                        "string": "Taylor analysis thus provides a possible direction to reconsider the limitations of language models."
                    },
                    {
                        "id": 22,
                        "string": "Related Work This work can be situated as a study to quantify the complexity underlying texts."
                    },
                    {
                        "id": 23,
                        "string": "As summarized in (Tanaka-Ishii and Aihara, 2015), measures for this purpose include the entropy rate (Takahira, Tanaka-Ishii, and Lukasz, 2016; Bentz et al., 2017) and those related to the scaling behaviors of natural language."
                    },
                    {
                        "id": 24,
                        "string": "Regarding the latter, certain power laws are known to hold universally in linguistic data."
                    },
                    {
                        "id": 25,
                        "string": "The most famous among these are Zipf's law (Zipf, 1965) and Heaps' law (Heaps, 1978) ."
                    },
                    {
                        "id": 26,
                        "string": "Other, different kinds of power laws from Zipf's law are obtained through various methods of fluctuation analysis, but the question of how to quantify the fluctuation existing in language data has been controversial."
                    },
                    {
                        "id": 27,
                        "string": "Our work is situated as one such case of fluctuation analysis."
                    },
                    {
                        "id": 28,
                        "string": "In real data, the occurrence timing of a particular event is often biased in a bursty, clustered manner, and fluctuation analysis quantifies the degree of this bias."
                    },
                    {
                        "id": 29,
                        "string": "Originally, this was motivated by a study of how floods of the Nile River occur in clusters (i.e., many floods coming after an initial flood) (Hurst, 1951) ."
                    },
                    {
                        "id": 30,
                        "string": "Such clustering phenomena have been widely reported in both natural and social domains (Eisler, Bartos, and Kertész, 2007) ."
                    },
                    {
                        "id": 31,
                        "string": "Fluctuation analysis for language originates in (Ebeling and Pöeschel, 1994) , which applied the approach to characters."
                    },
                    {
                        "id": 32,
                        "string": "That work corresponds to observing the average of the variances of each character's number of occurrences within a time span."
                    },
                    {
                        "id": 33,
                        "string": "Their method is strongly related to ours but different from two viewpoints: (1) Taylor analysis considers the variance with respect to the mean, rather than time; and (2) Taylor analysis does not average results over all elements."
                    },
                    {
                        "id": 34,
                        "string": "Because of these differences, the method in (Ebeling and Pöeschel, 1994) cannot distinguish real texts from an i.i.d."
                    },
                    {
                        "id": 35,
                        "string": "process when applied to word sequences (Takahashi and Tanaka-Ishii, 2018) ."
                    },
                    {
                        "id": 36,
                        "string": "Event clustering phenomena cause a sequence to resemble itself in a self-similar manner."
                    },
                    {
                        "id": 37,
                        "string": "Therefore, studies of the fluctuation underlying a sequence can take another form of long-range correlation analysis, to consider the similarity between two subsequences underlying a time series."
                    },
                    {
                        "id": 38,
                        "string": "This approach requires a function to calculate the similarity of two sequences, and the autocorrelation function (ACF) is the main function considered."
                    },
                    {
                        "id": 39,
                        "string": "Since the ACF only applies to numerical data, both Altmann, Pierrehumbert, and Motter (2009) and Tanaka-Ishii and Bunde (2016) applied long-range correlation analysis by transforming text into intervals and showed how natural language texts are long-range correlated."
                    },
                    {
                        "id": 40,
                        "string": "Another recent work (Lin and Tegmark, 2016) proposed using mutual information instead of the ACF."
                    },
                    {
                        "id": 41,
                        "string": "Mutual information, however, cannot detect the long-range correlation underlying texts."
                    },
                    {
                        "id": 42,
                        "string": "All these works studied correlation phenomena via only a few texts and did not show any underlying universality with respect to data and language types."
                    },
                    {
                        "id": 43,
                        "string": "One reason is that analysis methods for long-range correlation are nontrivial to apply to texts."
                    },
                    {
                        "id": 44,
                        "string": "Overall, the analysis based on Taylor's law in the present work belongs to the former approach of fluctuation analysis and shows the law's vast applicability and stability for written texts and even beyond, quantifying universal complexity underlying human linguistic sequences."
                    },
                    {
                        "id": 45,
                        "string": "Measuring the Taylor Exponent Proposed method Given a set of elements W (words),"
                    },
                    {
                        "id": 46,
                        "string": "let X = X_1, X_2, ..., X_N"
                    },
                    {
                        "id": 47,
                        "string": "be a discrete time series"
                    },
                    {
                        "id": 48,
                        "string": "of length N, where X_i ∈ W"
                    },
                    {
                        "id": 49,
                        "string": "for all i = 1, 2, ..., N,"
                    },
                    {
                        "id": 50,
                        "string": "i.e.,"
                    },
                    {
                        "id": 51,
                        "string": "each X_i represents a word."
                    },
                    {
                        "id": 52,
                        "string": "For a given segment length ∆t ∈ N (a positive integer), a data sample X is segmented by the length ∆t."
                    },
                    {
                        "id": 53,
                        "string": "The number of occurrences of a specific word w k ∈ W is counted for every segment, and the mean µ k and standard deviation σ k across segments are obtained."
                    },
                    {
                        "id": 54,
                        "string": "Doing this for all word kinds"
                    },
                    {
                        "id": 55,
                        "string": "w_1, ..., w_|W| ∈ W"
                    },
                    {
                        "id": 56,
                        "string": "gives the distribution"
                    },
                    {
                        "id": 57,
                        "string": "of σ with respect to µ."
                    },
                    {
                        "id": 58,
                        "string": "Following a previous work (Eisler, Bartos, and Kertész, 2007) , in this article Taylor's law is defined to hold when µ and σ are correlated by a power law in the following way: σ ∝ µ α ."
                    },
                    {
                        "id": 59,
                        "string": "(1) Experimentally, the Taylor exponent α is known to take a value within the range of 0.5 ≤ α ≤ 1.0 across a wide variety of domains as reported in (Eisler, Bartos, and Kertész, 2007) , including finance, meteorology, agriculture, and biology."
                    },
                    {
                        "id": 60,
                        "string": "Mathematically, it is analytically proven that α = 0.5 for an i.i.d process, and the proof is included as Supplementary Material."
                    },
                    {
                        "id": 61,
                        "string": "On the other hand, α = 1.0 when all segments always contain the same proportion of the elements of W ."
                    },
                    {
                        "id": 62,
                        "string": "For example, suppose that W = {a, b}."
                    },
                    {
                        "id": 63,
                        "string": "If b always occurs twice as often as a in all segments (e.g., three a and six b in one segment, two a and four b in another, etc."
                    },
                    {
                        "id": 64,
                        "string": "), then both the mean and standard deviation for b are twice those for a, so the exponent is 1.0."
                    },
                    {
                        "id": 65,
                        "string": "In a real text, this cannot occur for all W , so α < 1.0 for natural language text."
                    },
                    {
                        "id": 66,
                        "string": "Nevertheless, for a subset of words in W , this could happen, especially for a template-like sequence."
                    },
                    {
                        "id": 67,
                        "string": "For instance, consider a programming statement: while (i < 1000) do i-."
                    },
                    {
                        "id": 68,
                        "string": "Here, the words while and do always occur once, whereas i always occurs twice."
                    },
                    {
                        "id": 69,
                        "string": "This example shows that the exponent indicates how consistently words depend on each other in W , i.e., how words co-occur systematically in a coherent manner, thus indicating that the Taylor exponent is partly related to grammaticality."
                    },
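The two analytical extremes can be checked numerically; the snippet below (ours, not from the paper) fits α on synthetic per-segment count matrices:

```python
import numpy as np

def fit_alpha(counts):
    """counts: (num_segments, num_word_kinds) per-segment occurrence counts."""
    mu, sigma = counts.mean(axis=0), counts.std(axis=0)
    keep = (mu > 0) & (sigma > 0)            # drop degenerate points before the log fit
    alpha, _ = np.polyfit(np.log(mu[keep]), np.log(sigma[keep]), 1)
    return alpha

rng = np.random.default_rng(0)

# i.i.d. words with Zipf-like probabilities: alpha comes out near 0.5
p = 1.0 / np.arange(1, 1001)
p /= p.sum()
print(fit_alpha(rng.multinomial(5620, p, size=200)))   # ~ 0.5

# every segment keeps the same proportions between word kinds: alpha = 1.0
base = rng.integers(1, 10, size=(200, 1)).astype(float)
print(fit_alpha(base * np.arange(1, 11)))              # = 1.0
```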
                    {
                        "id": 70,
                        "string": "To measure the Taylor exponent α, the mean and standard deviation are computed for every word kind 1 and then plotted in log-log coordinates."
                    },
                    {
                        "id": 71,
                        "string": "The number of points in this work was the number of different words."
                    },
                    {
                        "id": 72,
                        "string": "We fitted the points to a linear function in log-log coordinates by the least-squares method."
                    },
                    {
                        "id": 73,
                        "string": "We naturally took the logarithm of both cµ α and σ to estimate the exponent, because Taylor's law is a power law."
                    },
                    {
                        "id": 74,
                        "string": "The coefficientĉ, and exponentα are then estimated as the 1 In this work, words are not lemmatized, e.g."
                    },
                    {
                        "id": 75,
                        "string": "\"say,\" \"said,\" and \"says\" are all considered different words."
                    },
                    {
                        "id": 76,
                        "string": "This was chosen so in this work because the Taylor exponent considers systematic co-occurrence of words, and idiomatic phrases should thus be considered in their original forms."
                    },
                    {
                        "id": 77,
                        "string": "following: c,α = arg min c,α ϵ(c, α), ϵ(c, α) = 1 |W | |W | ∑ k=1 (log σ k − log cµ α k ) 2 ."
                    },
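Putting the procedure together, a sketch of the measurement: segment the sequence by Δt, count each word kind per segment, and fit log σ against log µ by least squares (Δt = 5620 as used in the experiments below):

```python
import numpy as np
from collections import Counter

def taylor_exponent(words, dt=5620):
    """words: list of tokens (not lemmatized); dt: segment length Δt."""
    segs = [words[i:i + dt] for i in range(0, len(words) - dt + 1, dt)]
    vocab = {w: j for j, w in enumerate(sorted(set(words)))}
    counts = np.zeros((len(segs), len(vocab)))
    for i, seg in enumerate(segs):
        for w, c in Counter(seg).items():
            counts[i, vocab[w]] = c
    mu, sigma = counts.mean(axis=0), counts.std(axis=0)
    keep = (mu > 0) & (sigma > 0)
    alpha, log_c = np.polyfit(np.log(mu[keep]), np.log(sigma[keep]), 1)
    return alpha
```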
                    {
                        "id": 78,
                        "string": "This fit function could be a problem depending on the distribution of errors between the data points and the regression line."
                    },
                    {
                        "id": 79,
                        "string": "As seen later, the error distribution seems to differ with the kind of data: for a random source the error seems Gaussian, and so the above formula is relevant, whereas for real data, the distribution is biased."
                    },
                    {
                        "id": 80,
                        "string": "Changing the fit function according to the data source, however, would cause other essential problems for fair comparison."
                    },
                    {
                        "id": 81,
                        "string": "Here, because Cohen and Xu (2015) reported that most empirical works on Taylor's law used least-squares regression (including their own), this work also uses the above scheme 2 , with the error defined as ϵ(ĉ,α)."
                    },
                    {
                        "id": 82,
                        "string": "large representative archives, parsed, and stripped of natural language comments), and 12 pieces of musical data (long symphonies and so forth, transformed from MIDI into text with the software SMF2MML 5 , with annotations removed)."
                    },
                    {
                        "id": 83,
                        "string": "As for the randomized data listed in the last block, we took the text of Moby Dick and generated 10 different shuffled samples and bigramgenerated sequences."
                    },
                    {
                        "id": 84,
                        "string": "We also introduced LSTMgenerated texts to consider the utility of our findings, as explained in §5."
                    },
                    {
                        "id": 85,
                        "string": "Figure 1 shows typical distributions for natural language texts, with two single-author texts ((a) 5 http://shaw.la.coocan.jp/smf2mml/ and (b)) and two multiple-author texts (newspapers, (c) and (d)), in English and Chinese, respectively."
                    },
                    {
                        "id": 86,
                        "string": "The segment size was ∆t = 5620 words 6 , i.e., each segment had 5620 words and the horizontal axis indicates the averaged frequency of a specific word within a segment of 5620 words."
                    },
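For concreteness, here is a minimal sketch of this segment-based measurement (an illustration under stated assumptions, not the authors' implementation; `words` stands for the tokenized text, and words with zero variance would need filtering before the log-log fit).

```python
# Sketch of the Taylor analysis: split the token sequence into
# non-overlapping segments of dt words and measure, for every word kind,
# the mean and standard deviation of its within-segment frequency.
from collections import Counter
import numpy as np

def taylor_points(words, dt=5620):
    counts = [Counter(words[i:i + dt])
              for i in range(0, len(words) - dt + 1, dt)]
    points = {}
    for w in set(words):
        freqs = np.array([c[w] for c in counts], dtype=float)
        points[w] = (freqs.mean(), freqs.std())  # (mu_k, sigma_k)
    return points
```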
                    {
                        "id": 87,
                        "string": "Data Taylor Exponents for Real Data The points at the upper right represent the most frequent words, whereas those at the lower left represent the least frequent."
                    },
                    {
                        "id": 88,
                        "string": "Although the plots exhibited different distributions, they could globally be considered roughly aligned in a power-law manner."
                    },
                    {
                        "id": 89,
                        "string": "This finding is non-trivial, as seen in other analyses based on Taylor's law (Eisler, Bartos, and Kertész, 2007) ."
                    },
                    {
                        "id": 90,
                        "string": "The exponent α was almost the same even though English and Chinese are different languages using different kinds of script."
                    },
                    {
                        "id": 91,
                        "string": "As explained in §3.1, the Taylor exponent indicates the degree of consistent co-occurrence among words."
                    },
                    {
                        "id": 92,
                        "string": "The value of 0.58 obtained here suggests that the words of natural language texts are not strongly or consistently coherent with respect to each other."
                    },
                    {
                        "id": 93,
                        "string": "Nevertheless, the value is well above 0.5, and for the real data listed in Table 1 (first to third blocks), not a single sample gave an exponent as low as 0.5."
                    },
                    {
                        "id": 94,
                        "string": "Although the overall global tendencies in Figure 1 followed power laws, many points deviated significantly from the regression lines."
                    },
                    {
                        "id": 95,
                        "string": "The words with the greatest fluctuation were often keywords."
                    },
                    {
                        "id": 96,
                        "string": "For example, among words in Moby Dick with large µ, those with the largest σ included whale, captain, and sailor, whereas those with the smallest σ included functional words such as to, that, and with."
                    },
                    {
                        "id": 97,
                        "string": "The Taylor exponent depended only slightly on the data size."
                    },
                    {
                        "id": 98,
                        "string": "Figure 2 shows this dependency for the two largest data sets used, The New York Times (NYT, 1.5 billion words) and The Mainichi (24 years) newspapers."
                    },
                    {
                        "id": 99,
                        "string": "Figure 2 caption: Taylor exponent α̂ (vertical axis) calculated for the two largest texts: The New York Times and The Mainichi newspapers."
                    },
                    {
                        "id": 100,
                        "string": "Figure 2 caption: To evaluate the exponent's dependence on the text size, parts of each text were taken and the exponents were calculated for those parts, with points taken logarithmically."
                    },
                    {
                        "id": 101,
                        "string": "Figure 2 caption: The window size was ∆t = 5620."
                    },
                    {
                        "id": 102,
                        "string": "Figure 2 caption: As the text size grew, the Taylor exponent slightly decreased."
                    },
                    {
                        "id": 103,
                        "string": "When the data size was increased, the exponent exhibited a slight tendency to decrease."
                    },
                    {
                        "id": 104,
                        "string": "For the NYT, the decrease seemed to have a lower limit, as the figure shows that the exponent stabilized at around 10 7 words."
                    },
                    {
                        "id": 105,
                        "string": "The reason for this decrease can be explained as follows."
                    },
                    {
                        "id": 106,
                        "string": "The Taylor exponent becomes larger when some words occur in a clustered manner."
                    },
                    {
                        "id": 107,
                        "string": "Making the text size larger increases the number of segments (since ∆t was fixed in this experiment)."
                    },
                    {
                        "id": 108,
                        "string": "If the number of clusters does not increase as fast as the increase in the number of segments, then the number of clusters per segment becomes smaller, leading to a smaller exponent."
                    },
                    {
                        "id": 109,
                        "string": "In other words, the influence of each consecutive co-occurrence of a particular word decays slightly as the overall text size grows."
                    },
                    {
                        "id": 110,
                        "string": "Analysis of different kinds of data showed how the Taylor exponent differed according to the data source."
                    },
                    {
                        "id": 111,
                        "string": "Figure 3 shows plots for samples from enwiki8 (tagged Wikipedia), the child-directed speech of Thomas (taken from CHILDES), programming language data sets, and music."
                    },
                    {
                        "id": 112,
                        "string": "The distributions appear different from those for the natural language texts, and the exponents were significantly larger."
                    },
                    {
                        "id": 113,
                        "string": "This means that these data sets contained expressions with fixed forms much more frequently than did the natural language texts."
                    },
                    {
                        "id": 114,
                        "string": "Figure 4 summarizes the overall picture among the different data sources."
                    },
                    {
                        "id": 115,
                        "string": "The median and quantiles of the Taylor exponent were calculated for the different kinds of data listed in Table 1 ."
                    },
                    {
                        "id": 116,
                        "string": "The first two boxes show results with an exponent of 0.50."
                    },
                    {
                        "id": 117,
                        "string": "These results were each obtained from 10 random samples of the randomized sequences."
                    },
                    {
                        "id": 118,
                        "string": "We will return to these results in the next section."
                    },
                    {
                        "id": 119,
                        "string": "The remaining boxes show results for real data."
                    },
                    {
                        "id": 120,
                        "string": "The exponents for texts from Project Gutenberg ranged from 0.53 to 0.68."
                    },
                    {
                        "id": 121,
                        "string": "Figure 5 shows a histogram of these texts with respect to the value of α."
                    },
                    {
                        "id": 122,
                        "string": "The number of texts decreased significantly at a value of 0.63, showing that the distribution of the Taylor exponent was rather tight."
                    },
                    {
                        "id": 123,
                        "string": "The kinds of texts at the upper limit of exponents for Project Gutenberg included structured texts of fixed style, such as dictionaries, lists of histories, and Bibles."
                    },
                    {
                        "id": 124,
                        "string": "The majority of texts were in English, followed by French and then other languages, as listed in Table 1 ."
                    },
                    {
                        "id": 125,
                        "string": "Whether α distinguishes languages is a difficult question."
                    },
                    {
                        "id": 126,
                        "string": "The histogram suggests that Chinese texts exhibited larger values than did texts in Indo-European languages."
                    },
                    {
                        "id": 127,
                        "string": "We conducted a statistical test to evaluate whether this difference was significant as compared to English."
                    },
                    {
                        "id": 128,
                        "string": "Since the numbers of texts were very different, we used the non-parametric statistical test of the Brunner-Munzel method, among various possible methods, to test a null hypothesis of whether α was equal for the two distributions (Brunner and Munzel, 2000) ."
                    },
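SciPy ships an implementation of this test; a hedged usage sketch follows, where the exponent arrays are hypothetical placeholders for the per-text values of α̂, not the values measured in the paper.

```python
# Sketch of the Brunner-Munzel significance test on Taylor exponents.
import numpy as np
from scipy.stats import brunnermunzel

alpha_english = np.array([0.55, 0.57, 0.58, 0.60, 0.56])  # hypothetical
alpha_chinese = np.array([0.60, 0.61, 0.62, 0.63, 0.64])  # hypothetical

stat, p = brunnermunzel(alpha_english, alpha_chinese)
print(p < 0.01)  # True would reject equality at the 0.01 level
```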
                    {
                        "id": 129,
                        "string": "The p-value for Chinese was p = 1.24 × 10 −16 , thus rejecting the null hypothesis at the significance level of 0.01."
                    },
                    {
                        "id": 130,
                        "string": "This confirms that α was generally larger for Chinese texts than for English texts."
                    },
                    {
                        "id": 131,
                        "string": "Similarly, the null hypothesis was rejected for Finnish and French, but it was accepted for German and Japanese at the 0.01 significance level."
                    },
                    {
                        "id": 132,
                        "string": "Since Japanese was accepted despite its large difference from English, we could not conclude whether the Taylor exponent distinguishes languages."
                    },
                    {
                        "id": 133,
                        "string": "Turning to the last four columns of Figure 4 , representing the enwiki8, child-directed speech (CHILDES), programming language, and music data, the Taylor exponents clearly differed from those of the natural language texts."
                    },
                    {
                        "id": 134,
                        "string": "Given the template-like nature of these four data sources, the results were somewhat expected."
                    },
                    {
                        "id": 135,
                        "string": "The kind of data thus might be distinguishable using the Taylor exponent."
                    },
                    {
                        "id": 136,
                        "string": "To confirm this, however, would require assembling a larger data set."
                    },
                    {
                        "id": 137,
                        "string": "Applying this approach with Twitter data and adult utterances would produce interesting results and remains for our future work."
                    },
                    {
                        "id": 138,
                        "string": "The Taylor exponent also differed according to ∆t, and Figure 6 shows the dependence ofα on ∆t."
                    },
                    {
                        "id": 139,
                        "string": "For each kind of data shown in Figure 4 , the mean exponent is plotted for various ∆t."
                    },
                    {
                        "id": 140,
                        "string": "As reported in (Eisler, Bartos, and Kertész, 2007) , the exponent is known to grow when the segment size gets larger."
                    },
                    {
                        "id": 141,
                        "string": "The reason is that words occur in a bursty, clustered manner at all length scales: no matter how large the segment size becomes, a segment will include either many or few instances of a given word, leading to larger variance growth."
                    },
                    {
                        "id": 142,
                        "string": "This phenomenon suggests how word cooccurrences in natural language are self-similar."
                    },
                    {
                        "id": 143,
                        "string": "The Taylor exponent is initially 0.5 when the segment size is very small."
                    },
                    {
                        "id": 144,
                        "string": "This can be analytically explained as follows (Eisler, Bartos, and Kertész, 2007) ."
                    },
                    {
                        "id": 145,
                        "string": "Consider the case of ∆t=1."
                    },
                    {
                        "id": 146,
                        "string": "Let n be the frequency of a particular word in a segment."
                    },
                    {
                        "id": 147,
                        "string": "We have ⟨n⟩ ≪ 1.0, because the possibility of a specific word appearing in a segment becomes very small."
                    },
                    {
                        "id": 148,
                        "string": "Because ⟨n⟩² ≈ 0, σ² = ⟨n²⟩ − ⟨n⟩² ≈ ⟨n²⟩."
                    },
                    {
                        "id": 149,
                        "string": "Because n = 1 or 0 (with ∆t=1), ⟨n²⟩ = ⟨n⟩ = µ."
                    },
                    {
                        "id": 150,
                        "string": "Thus, σ² ≈ µ."
                    },
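Restating the derivation compactly (a worked consolidation of the steps above):

```latex
% The \Delta t = 1 argument of Eisler, Bartos, and Kertész (2007).
\begin{align*}
  \langle n\rangle \ll 1 &\;\Rightarrow\; \langle n\rangle^{2} \approx 0,\\
  \sigma^{2} = \langle n^{2}\rangle - \langle n\rangle^{2}
      &\;\approx\; \langle n^{2}\rangle,\\
  n \in \{0,1\} &\;\Rightarrow\; \langle n^{2}\rangle = \langle n\rangle = \mu,\\
  \text{hence}\; \sigma^{2} \approx \mu
      &\;\Rightarrow\; \sigma \propto \mu^{0.5}.
\end{align*}
```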
                    {
                        "id": 151,
                        "string": "Overall, the results show the possibility of applying Taylor's exponent to quantify the complexity underlying coherence among words."
                    },
                    {
                        "id": 152,
                        "string": "Figure 4 caption: Box plots of the Taylor exponents for different kinds of data."
                    },
                    {
                        "id": 153,
                        "string": "Figure 4 caption: Each point represents one sample, and samples from the same kind of data are contained in each box plot."
                    },
                    {
                        "id": 154,
                        "string": "Figure 4 caption: The first two boxes are for the randomized data, while the remaining boxes are for real data, including both the natural language texts and language-related sequences."
                    },
                    {
                        "id": 155,
                        "string": "Figure 4 caption: Each box ranges between the quantiles, with the middle line indicating the median, the whiskers showing the maximum and minimum, and some extreme values lying beyond."
                    },
                    {
                        "id": 156,
                        "string": "Figure 5 caption: Histogram of Taylor exponents for long texts in Project Gutenberg (1129 texts)."
                    },
                    {
                        "id": 157,
                        "string": "Figure 5 caption: The legend indicates the languages, in frequency order."
                    },
                    {
                        "id": 158,
                        "string": "Figure 5 caption: Each bar shows the number of texts with that value of α̂."
                    },
                    {
                        "id": 159,
                        "string": "Figure 5 caption: Because of the skew of languages in the original conception of Project Gutenberg, the majority of the texts are in English, shown in blue, whereas texts in other languages are shown in other colors."
                    },
                    {
                        "id": 160,
                        "string": "Figure 5 caption: The histogram shows how the Taylor exponent ranged fairly tightly around the mean, and natural language texts with an exponent larger than 0.63 were rare."
                    },
                    {
                        "id": 161,
                        "string": "Grammatical complexity was formalized by Chomsky via the Chomsky hierarchy (Chomsky, 1956) , which describes grammar via rewriting rules."
                    },
                    {
                        "id": 162,
                        "string": "The constraints placed on the rules distinguish four different levels of grammar: regular, context-free, context-sensitive, and phrase structure."
                    },
                    {
                        "id": 163,
                        "string": "As indicated in (Badii and Politi, 1997) , however, this does not quantify the complexity on a continuous scale."
                    },
                    {
                        "id": 164,
                        "string": "For example, we might want to quantify the complexity of child-directed speech as compared to that of adults, and this could be addressed in only a limited way through the Chomsky hierarchy."
                    },
                    {
                        "id": 165,
                        "string": "Another point is that the hierarchy is sentence-based and does not consider fluctuation in the kinds of words appearing."
                    },
                    {
                        "id": 166,
                        "string": "Evaluation of Machine-Generated Text by the Taylor Exponent The main contribution of this paper is the findings of Taylor's law behavior for real texts as presented thus far."
                    },
                    {
                        "id": 167,
                        "string": "This section explains the applicability of these findings, through results obtained with baseline language models."
                    },
                    {
                        "id": 168,
                        "string": "As mentioned previously, i.i.d."
                    },
                    {
                        "id": 169,
                        "string": "mathematical processes have a Taylor exponent of 0.50."
                    },
                    {
                        "id": 170,
                        "string": "We show here that, even if a process is not trivially i.i.d., the exponent often takes a value of 0.50 for random processes, including texts produced by standard language models such as n-gram based models."
                    },
                    {
                        "id": 171,
                        "string": "Figure 6 caption: Growth of α̂ with respect to ∆t, averaged across data sets within each data kind."
                    },
                    {
                        "id": 172,
                        "string": "Figure 6 caption: The plot labeled \"random\" shows the average for the two datasets of randomized text from Moby Dick (shuffled and bigrams, as explained in §5)."
                    },
                    {
                        "id": 173,
                        "string": "Figure 6 caption: Since this analysis required a large amount of computation, for the large data sets (such as newspaper and programming language data), 4 million words were taken from each kind of data and used here."
                    },
                    {
                        "id": 174,
                        "string": "Figure 6 caption: When ∆t was small, the Taylor exponent was close to 0.5, as theoretically described in the main text."
                    },
                    {
                        "id": 175,
                        "string": "Figure 6 caption: As ∆t was increased, the value of α̂ grew."
                    },
                    {
                        "id": 176,
                        "string": "Figure 6 caption: The maximum ∆t was about 10,000, or about one-tenth of the length of one long literary text."
                    },
                    {
                        "id": 177,
                        "string": "Figure 6 caption: For the kinds of data investigated here, α̂ grew almost linearly."
                    },
                    {
                        "id": 178,
                        "string": "Figure 6 caption: The results show that, at a given ∆t, the Taylor exponent has some capability to distinguish different kinds of text data."
                    },
                    {
                        "id": 179,
                        "string": "A more complete work in this direction is reported in (Takahashi and Tanaka-Ishii, 2018) ."
                    },
                    {
                        "id": 180,
                        "string": "Figure 7 shows samples from each of two simple random processes."
                    },
                    {
                        "id": 181,
                        "string": "Figure 7a shows the behavior of a shuffled text of Moby Dick."
                    },
                    {
                        "id": 182,
                        "string": "Figure 8 caption: Taylor analysis for two texts produced by standard neural language models: (a) text produced by a 3-layer stacked character-based LSTM that learned the complete works of Shakespeare; and (b) a machine-translated text of Les Misérables (originally in French, translated into English) from a neural language model."
                    },
                    {
                        "id": 183,
                        "string": "Obviously, since the sequence was almost i.i.d."
                    },
                    {
                        "id": 184,
                        "string": "following a Zipf distribution, the Taylor exponent was 0.50."
                    },
                    {
                        "id": 185,
                        "string": "Given that the Taylor exponent becomes larger for a sequence with words dependent on each other, as explained in §3, we would expect that a sequence generated by an n-gram model would exhibit an exponent larger than 0.50."
                    },
                    {
                        "id": 186,
                        "string": "The simplest such model is the bigram model, so a sequence of 300,000 words was probabilistically generated using a bigram model of Moby Dick."
                    },
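A minimal sketch of such a generator (illustrative; `words` stands for the tokenized source text, and the sampling details are assumptions rather than the authors' exact procedure):

```python
# Sketch: sample a word sequence from the empirical bigram distribution.
import random
from collections import defaultdict

def generate_bigram_text(words, length=300_000, seed=0):
    rng = random.Random(seed)
    successors = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        successors[w1].append(w2)  # empirical successor multiset
    out = [rng.choice(words)]
    while len(out) < length:
        nxt = successors.get(out[-1])
        out.append(rng.choice(nxt) if nxt else rng.choice(words))
    return out
```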
                    {
                        "id": 187,
                        "string": "Figure 7b shows the Taylor analysis, revealing that the exponent remained 0.50."
                    },
                    {
                        "id": 188,
                        "string": "This result does not depend much on the quality of the individual samples."
                    },
                    {
                        "id": 189,
                        "string": "The first and second box plots in Figure 4 show the distribution of exponents for 10 different samples for the shuffled and bigram-generated texts, respectively."
                    },
                    {
                        "id": 190,
                        "string": "The exponents were all around 0.50, with small variance."
                    },
                    {
                        "id": 191,
                        "string": "State-of-the-art language models are based on neural models, and they are mainly evaluated by perplexity and in terms of the performance of individual applications."
                    },
                    {
                        "id": 192,
                        "string": "Since their architecture is complex, quality evaluation has become an issue."
                    },
                    {
                        "id": 193,
                        "string": "One possible improvement would be to use an evaluation method that qualitatively differs from judging application performance."
                    },
                    {
                        "id": 194,
                        "string": "One such method is to verify whether the properties underlying natural language hold for texts generated by language models."
                    },
                    {
                        "id": 195,
                        "string": "The Taylor exponent is one such possibility, among various properties of natural language texts."
                    },
                    {
                        "id": 196,
                        "string": "As a step toward this approach, Figure 8 shows two results produced by neural language models."
                    },
                    {
                        "id": 197,
                        "string": "Figure 8a shows the result for a sample of 2 million characters produced by a stan-dard (three-layer) stacked character-based LSTM unit that learned the complete works of Shakespeare."
                    },
                    {
                        "id": 198,
                        "string": "The model was optimized to minimize the cross-entropy with a stochastic gradient algorithm to predict the next character from the previous 128 characters."
                    },
                    {
                        "id": 199,
                        "string": "See (Takahashi and Tanaka-Ishii, 2017) for the details of the experimental settings."
                    },
                    {
                        "id": 200,
                        "string": "The Taylor exponent of the generated text was 0.50."
                    },
                    {
                        "id": 201,
                        "string": "This indicates that the character-level language model could not capture or reproduce the word-level clustering behavior in text."
                    },
                    {
                        "id": 202,
                        "string": "This analysis sheds light on the quality of the language model, separate from the prediction accuracy."
                    },
                    {
                        "id": 203,
                        "string": "The application of Taylor's law for a wider range of language models appears in (Takahashi and Tanaka-Ishii, 2018) ."
                    },
                    {
                        "id": 204,
                        "string": "Briefly, state-of-theart word-level language models can generate text whose Taylor exponent is larger than 0.50 but smaller than that of the dataset used for training."
                    },
                    {
                        "id": 205,
                        "string": "This indicates both the capability of modeling burstiness in text and the room for improvement."
                    },
                    {
                        "id": 206,
                        "string": "Also, the perplexity values correlate well with the Taylor exponents."
                    },
                    {
                        "id": 207,
                        "string": "Therefore, Taylor exponent can reasonably serve for evaluating machinegenerated text."
                    },
                    {
                        "id": 208,
                        "string": "In contrast to character-level neural language models, neural-network-based machine translation (NMT) models are, in fact, capable of maintaining the burstiness of the original text."
                    },
                    {
                        "id": 209,
                        "string": "Figure 8b shows the Taylor analysis for a machinetranslated text of Les Misérables (from French to English), obtained from Google NMT (Wu et al., 2016) ."
                    },
                    {
                        "id": 210,
                        "string": "We split the text into 5000-character portions because of the API's limitation (See (Takahashi and Tanaka-Ishii, 2017) for the details)."
                    },
                    {
                        "id": 211,
                        "string": "As is expected and desirable, the translated text retains the clustering behavior of the original text, as the Taylor exponent of 0.57 is equivalent to that of the original text."
                    },
                    {
                        "id": 212,
                        "string": "Conclusion We have proposed a method to analyze whether a natural language text follows Taylor's law, a scaling property quantifying the degree of consistent co-occurrence among words."
                    },
                    {
                        "id": 213,
                        "string": "In our method, a sequence of words is divided into given segments, and the mean and standard deviation of the frequency of every kind of word are measured."
                    },
                    {
                        "id": 214,
                        "string": "The law is considered to hold when the standard deviation varies with the mean according to a power law, thus giving the Taylor exponent."
                    },
                    {
                        "id": 215,
                        "string": "Theoretically, an i.i.d."
                    },
                    {
                        "id": 216,
                        "string": "process has a Taylor exponent of 0.5, whereas larger exponents indicate sequences in which words co-occur systematically."
                    },
                    {
                        "id": 217,
                        "string": "Using over 1100 texts across 14 languages, we showed that written natural language texts follow Taylor's law, with the exponent distributed around 0.58."
                    },
                    {
                        "id": 218,
                        "string": "This value differed greatly from the exponents for other data sources: enwiki8 (tagged Wikipedia, 0.63), child-directed speech (CHILDES, around 0.68), and programming language and music data (around 0.79)."
                    },
                    {
                        "id": 219,
                        "string": "These Taylor exponents imply that a written text is more complex than programming source code or music with regard to fluctuation of its components."
                    },
                    {
                        "id": 220,
                        "string": "None of the real data exhibited an exponent equal to 0.5."
                    },
                    {
                        "id": 221,
                        "string": "We conducted more detailed analysis varying the data size and the segment size."
                    },
                    {
                        "id": 222,
                        "string": "Taylor's law and its exponent can also be applied to evaluate machine-generated text."
                    },
                    {
                        "id": 223,
                        "string": "We showed that a character-based LSTM language model generated text with a Taylor exponent of 0.5."
                    },
                    {
                        "id": 224,
                        "string": "This indicates one limitation of that model."
                    },
                    {
                        "id": 225,
                        "string": "Our future work will include an analysis using other kinds of data, such as Twitter data and adult utterances, and a study of how Taylor's law relates to grammatical complexity for different sequences."
                    },
                    {
                        "id": 226,
                        "string": "Another direction will be to apply fluctuation analysis in formulating a statistical test to evaluate the structural complexity underlying a sequence."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 21
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 22,
                        "end": 44
                    },
                    {
                        "section": "Proposed method",
                        "n": "3.1",
                        "start": 45,
                        "end": 86
                    },
                    {
                        "section": "Taylor Exponents for Real Data",
                        "n": "4",
                        "start": 87,
                        "end": 165
                    },
                    {
                        "section": "Evaluation of Machine-Generated Text by the Taylor Exponent",
                        "n": "5",
                        "start": 166,
                        "end": 211
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 212,
                        "end": 226
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1294-Figure3-1.png",
                        "caption": "Figure 3: Examples of Taylor’s law for alternative data sets listed in Table 1: enwiki8 (tag-annotated Wikipedia), Thomas (longest in CHILDES), Lisp source code, and the music of Bach’s St Matthew Passion. These examples exhibited larger Taylor exponents than did typical natural language texts.",
                        "page": 5,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.08,
                            "y1": 61.44,
                            "y2": 268.32
                        }
                    },
                    {
                        "filename": "../figure/image/1294-Figure4-1.png",
                        "caption": "Figure 4: Box plots of the Taylor exponents for different kinds of data. Each point represents one sample, and samples from the same kind of data are contained in each box plot. The first two boxes are for the randomized data, while the remaining boxes are for real data, including both the natural language texts and language-related sequences. Each box ranges between the quantiles, with the middle line indicating the median, the whiskers showing the maximum and minimum, and some extreme values lying beyond.",
                        "page": 6,
                        "bbox": {
                            "x1": 116.64,
                            "x2": 481.44,
                            "y1": 61.44,
                            "y2": 264.0
                        }
                    },
                    {
                        "filename": "../figure/image/1294-Figure5-1.png",
                        "caption": "Figure 5: Histogram of Taylor exponents for long texts in Project Gutenberg (1129 texts). The legend indicates the languages, in frequency order. Each bar shows the number of texts with that value of α̂. Because of the skew of languages in the original conception of Project Gutenberg, the majority of the texts are in English, shown in blue, whereas texts in other languages are shown in other colors. The histogram shows how the Taylor exponent ranged fairly tightly around the mean, and natural language texts with an exponent larger than 0.63 were rare.",
                        "page": 6,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 285.12,
                            "y1": 359.52,
                            "y2": 568.3199999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1294-Figure8-1.png",
                        "caption": "Figure 8: Taylor analysis for two texts produced by standard neural language models: (a) a stacked LSTM model that learned the complete works of Shakespeare; and (b) a machine translation of Les Misérables (originally in French, translated into English), from a neural language model.",
                        "page": 7,
                        "bbox": {
                            "x1": 310.56,
                            "x2": 522.24,
                            "y1": 61.44,
                            "y2": 174.72
                        }
                    },
                    {
                        "filename": "../figure/image/1294-Figure6-1.png",
                        "caption": "Figure 6: Growth of α̂ with respect to ∆t, averaged across data sets within each data kind. The plot labeled “random” shows the average for the two datasets of randomized text from Moby Dick (shuffled and bigrams, as explained in §5). Since this analysis required a large amount of computation, for the large data sets (such as newspaper and programming language data), 4 million words were taken from each kind of data and used here. When ∆t was small, the Taylor exponent was close to 0.5, as theoretically described in the main text. As ∆t was increased, the value of α̂ grew. The maximum ∆t was about 10,000, or about one-tenth of the length of one long literary text. For the kinds of data investigated here, α̂ grew almost linearly. The results show that, at a given ∆t, the Taylor exponent has some capability to distinguish different kinds of text data.",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 62.879999999999995,
                            "y2": 225.12
                        }
                    },
                    {
                        "filename": "../figure/image/1294-Figure7-1.png",
                        "caption": "Figure 7: Taylor analysis of a shuffled text of Moby Dick and a randomized text generated by a bigram model. Both exhibited an exponent of 0.50.",
                        "page": 7,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.08,
                            "y1": 490.56,
                            "y2": 583.68
                        }
                    },
                    {
                        "filename": "../figure/image/1294-Table1-1.png",
                        "caption": "Table 1: Data we used in this article. For each dataset, length is the number of words, vocabulary is the number of different words. For detail of the data kind, see §3.2.",
                        "page": 3,
                        "bbox": {
                            "x1": 58.559999999999995,
                            "x2": 537.12,
                            "y1": 99.84,
                            "y2": 513.12
                        }
                    },
                    {
                        "filename": "../figure/image/1294-Figure1-1.png",
                        "caption": "Figure 1: Examples of Taylor’s law for natural language texts. Moby Dick and Hong Lou Meng are representative of single-author texts, and the two newspapers are representative of multipleauthor texts, in English and Chinese, respectively. Each point represents a kind of word. The values of σ and µ for each word kind are plotted across texts within segments of size ∆t = 5620. The Taylor exponents obtained by the least-squares method were all around 0.58.",
                        "page": 4,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.08,
                            "y1": 61.44,
                            "y2": 249.6
                        }
                    },
                    {
                        "filename": "../figure/image/1294-Figure2-1.png",
                        "caption": "Figure 2: Taylor exponent α̂ (vertical axis) calculated for the two largest texts: The New York Times and The Mainichi newspapers. To evaluate the exponent’s dependence on the text size, parts of each text were taken and the exponents were calculated for those parts, with points taken logarithmically. The window size was ∆t = 5620. As the text size grew, the Taylor exponent slightly decreased.",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 61.44,
                            "y2": 208.32
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-68"
        },
        {
            "slides": {
                "1": {
                    "title": "Tabular QA Visual QA Reading Comprehension",
                    "text": [
                        "Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a",
                        "Super Bowl at age 39. The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denvers Executive Vice",
                        "President of Football Operations and",
                        "Q: How many medals did India win? Q: How symmetrical are the white Q: What is the name of the",
                        "A: bricks on either side of the building?",
                        "A: very quarterback who was 38 in Super Bowl",
                        "Neural Programmer (2016) Kazemi and Elqursh (2017) model. Yu et al (2018) model. accuracy on WikiTableQuestions (state of the art) on VQA 1.0 dataset (state of the art = 66.7%) F-1 score on SQuAD (state of the art)",
                        "Have the models read the question carefully?"
                    ],
                    "page_nums": [
                        2,
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Visual QA",
                    "text": [
                        "Kazemi and Elqursh (2017) model.",
                        "Q: How asymmetrical are the white bricks on either side of the building?",
                        "Q: How big are the white bricks on either side of the building?",
                        "Q: How fast are the bricks speaking on either side of the building? A: very"
                    ],
                    "page_nums": [
                        4,
                        5,
                        6,
                        7,
                        8
                    ],
                    "images": []
                },
                "3": {
                    "title": "QA over tables",
                    "text": [
                        "33.5% validation accuracy on WikiTableQuestions dataset (state of the art)",
                        "Q: Which country won the most medals?",
                        "Neural Programmer: max(total), print(nation)",
                        "Q: Which country won the most number of medals?",
                        "Neural Programmer: max(bronze), print(nation)"
                    ],
                    "page_nums": [
                        9,
                        10,
                        11
                    ],
                    "images": []
                },
                "6": {
                    "title": "Attributions",
                    "text": [
                        "Problem statement: Attribute a complex deep networks prediction to input",
                        "features, relative to a certain baseline (informationless) input",
                        "E.g. : attribute an object recognition networks prediction to its pixels,",
                        "a text sentiment networks prediction to individual words",
                        "Explain F(input) - F(baseline) in terms of input features"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "8": {
                    "title": "Visual QA attributions",
                    "text": [
                        "Q: How symmetrical are the white bricks on either side of the building?",
                        "red: high attribution blue: negative attribution gray: near-zero attribution"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                },
                "9": {
                    "title": "Overstability",
                    "text": [
                        "Drop all words from the dataset except ones which are frequently top attributions",
                        "E.g. How many players scored more than 10 goals? How many",
                        "Visual QA Neural Programmer",
                        "color, many, what, how, doing, or, where, there, many, tm_token, how, number, total, after,"
                    ],
                    "page_nums": [
                        22,
                        23
                    ],
                    "images": [
                        "figure/image/1299-Figure4-1.png"
                    ]
                },
                "11": {
                    "title": "Stopword deletion attack",
                    "text": [
                        "Delete contentless words from the question",
                        "show, tell, did, me, my, our, are, is, were, this, on, would, and, for, should, be,",
                        "by, based, in, of, bring, with, to, from, whole, being, been, want, wanted, as, can, see, doing, got, sorted, draw, listed, chart, only",
                        "Neural Programmers accuracy falls from 33.5% to 28.5%"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "12": {
                    "title": "Subject ablation attack",
                    "text": [
                        "Replace the subject of a question with a low-attribution noun from the vocabulary",
                        "What is the man doing? What is the tweet doing?",
                        "How many children are there? How many tweet are there?",
                        "VQ A models response remains same 75.6% of the time on questions that it originally answered correctly"
                    ],
                    "page_nums": [
                        26
                    ],
                    "images": []
                },
                "16": {
                    "title": "Summary",
                    "text": [
                        "An attribution-based workflow to look inside and understand weaknesses of a model",
                        "Explained how overstability manifests - QA networks do not focus on the right words!",
                        "Crafted adversarial examples and improved Jia and Liang (2017)s attacks",
                        "Deep learning practitioners can easily use attributions to look inside models",
                        "Adding soft network constraints",
                        "E.g. add bias to attention vector so as to limit the influence of how, what, etc.",
                        "Informed enrichment of datasets",
                        "E.g. add more questions with word symmetrical such that answer is not very"
                    ],
                    "page_nums": [
                        31
                    ],
                    "images": []
                }
            },
            "paper_title": "Did the Model Understand the Question?",
            "paper_id": "1299",
            "paper": {
                "title": "Did the Model Understand the Question?",
                "abstract": "We analyze state-of-the-art deep learning models for three tasks: question answering on (1) images, (2) tables, and (3) passages of text. Using the notion of attribution (word importance), we find that these deep networks often ignore important question terms. Leveraging such behavior, we perturb questions to craft a variety of adversarial examples. Our strongest attacks drop the accuracy of a visual question answering model from 61.1% to 19%, and that of a tabular question answering model from 33.5% to 3.3%. Additionally, we show how attributions can strengthen attacks proposed by Jia and Liang (2017) on paragraph comprehension models. Our results demonstrate that attributions can augment standard measures of accuracy and empower investigation of model performance. When a model is accurate but for the wrong reasons, attributions can surface erroneous logic in the model that indicates inadequacies in the test data.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Recently, deep learning has been applied to a variety of question answering tasks."
                    },
                    {
                        "id": 1,
                        "string": "For instance, to answer questions about images (e.g."
                    },
                    {
                        "id": 2,
                        "string": "(Kazemi and Elqursh, 2017) ), tabular data (e.g."
                    },
                    {
                        "id": 3,
                        "string": "(Neelakantan et al., 2017) ), and passages of text (e.g."
                    },
                    {
                        "id": 4,
                        "string": "(Yu et al., 2018) )."
                    },
                    {
                        "id": 5,
                        "string": "Developers, end-users, and reviewers (in academia) would all like to understand the capabilities of these models."
                    },
                    {
                        "id": 6,
                        "string": "The standard way of measuring the goodness of a system is to evaluate its error on a test set."
                    },
                    {
                        "id": 7,
                        "string": "High accuracy is indicative of a good model only if the test set is representative of the underlying realworld task."
                    },
                    {
                        "id": 8,
                        "string": "Most tasks have large test and training sets, and it is hard to manually check that they are representative of the real world."
                    },
                    {
                        "id": 9,
                        "string": "In this paper, we propose techniques to analyze the sensitivity of a deep learning model to question words."
                    },
                    {
                        "id": 10,
                        "string": "We do this by applying attribution (as discussed in section 3), and generating adversarial questions."
                    },
                    {
                        "id": 11,
                        "string": "Here is an illustrative example: recall Visual Question Answering (Agrawal et al., 2015) where the task is to answer questions about images."
                    },
                    {
                        "id": 12,
                        "string": "Consider the question \"how symmetrical are the white bricks on either side of the building?\""
                    },
                    {
                        "id": 13,
                        "string": "(corresponding image in Figure 1 )."
                    },
                    {
                        "id": 14,
                        "string": "The system that we study gets the answer right (\"very\")."
                    },
                    {
                        "id": 15,
                        "string": "But, we find (using an attribution approach) that the system relies on only a few of the words like \"how\" and \"bricks\"."
                    },
                    {
                        "id": 16,
                        "string": "Indeed, we can construct adversarial questions about the same image that the system gets wrong."
                    },
                    {
                        "id": 17,
                        "string": "For instance, \"how spherical are the white bricks on either side of the building?\""
                    },
                    {
                        "id": 18,
                        "string": "returns the same answer (\"very\")."
                    },
                    {
                        "id": 19,
                        "string": "A key premise of our work is that most humans have expertise in question answering."
                    },
                    {
                        "id": 20,
                        "string": "Even if they cannot manually check that a dataset is representative of the real world, they can identify important question words, and anticipate their function in question answering."
                    },
                    {
                        "id": 21,
                        "string": "Our Contributions We follow an analysis workflow to understand three question answering models."
                    },
                    {
                        "id": 22,
                        "string": "There are two steps."
                    },
                    {
                        "id": 23,
                        "string": "First, we apply Integrated Gradients (henceforth, IG) (Sundararajan et al., 2017) to attribute the systems' predictions to words in the questions."
                    },
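For reference, a hedged sketch of Integrated Gradients under the usual Riemann-sum approximation; `grad_fn`, which returns the gradient of the prediction score at an input embedding, is a hypothetical helper, not part of the paper's code.

```python
# Sketch of Integrated Gradients (Sundararajan et al., 2017): average
# gradients along the straight-line path from a baseline to the input,
# then scale by the input-baseline difference.
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]  # skip the baseline end
    avg_grad = np.mean(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad  # per-dimension attribution
```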
                    {
                        "id": 24,
                        "string": "We propose visualizations of attributions to make analysis easy."
                    },
                    {
                        "id": 25,
                        "string": "Second, we identify weaknesses (e.g., relying on unimportant words) in the networks' logic as exposed by the attributions, and leverage them to craft adversarial questions."
                    },
                    {
                        "id": 26,
                        "string": "A key contribution of this work is an overstability test for question answering networks."
                    },
                    {
                        "id": 27,
                        "string": "Jia and Liang (2017) showed that reading comprehension networks are overly stable to semantics-altering edits to the passage."
                    },
                    {
                        "id": 28,
                        "string": "In this work, we find that such overstability also applies to questions."
                    },
                    {
                        "id": 29,
                        "string": "Furthermore, this behavior can be seen in visual and tabular question answering networks as well."
                    },
                    {
                        "id": 30,
                        "string": "We use attributions to a define a general-purpose test for measuring the extent of the overstability (sections 4.3 and 5.3)."
                    },
                    {
                        "id": 31,
                        "string": "It involves measuring how a network's accuracy changes as words are systematically dropped from questions."
                    },
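A minimal sketch of such an overstability test (illustrative; `predict` and `examples` are hypothetical stand-ins for the model under test and its evaluation data):

```python
# Sketch: delete every question word outside a kept vocabulary and
# measure how the model's accuracy changes.
def overstability_accuracy(examples, predict, kept_words):
    correct = 0
    for question, context, gold in examples:
        reduced = " ".join(w for w in question.split() if w in kept_words)
        correct += int(predict(reduced, context) == gold)
    return correct / len(examples)
```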
                    {
                        "id": 32,
                        "string": "We emphasize that, in contrast to modelindependent adversarial techniques such as that of Jia and Liang (2017) , our method exploits the strengths and weaknesses of the model(s) at hand."
                    },
                    {
                        "id": 33,
                        "string": "This allows our attacks to have a high success rate."
                    },
                    {
                        "id": 34,
                        "string": "Additionally, using insights derived from attributions we were able to improve the attack success rate of Jia and Liang (2017) (section 6.2)."
                    },
                    {
                        "id": 35,
                        "string": "Such extensive use of attributions in crafting adversarial examples is novel to the best of our knowledge."
                    },
                    {
                        "id": 36,
                        "string": "Next, we provide an overview of our results."
                    },
                    {
                        "id": 37,
                        "string": "In each case, we evaluate a pre-trained model on new inputs."
                    },
                    {
                        "id": 38,
                        "string": "We keep the networks' parameters intact."
                    },
                    {
                        "id": 39,
                        "string": "Visual QA (section 4): The task is to answer questions about images."
                    },
                    {
                        "id": 40,
                        "string": "We analyze the deep network in Kazemi and Elqursh (2017) ."
                    },
                    {
                        "id": 41,
                        "string": "We find that the network ignores many question words, relying largely on the image to produce answers."
                    },
                    {
                        "id": 42,
                        "string": "For instance, we show that the model retains more than 50% of its original accuracy even when every word that is not \"color\" is deleted from all questions in the validation set."
                    },
                    {
                        "id": 43,
                        "string": "We also show that the model under-relies on important question words (e.g., nouns), and that attaching content-free prefixes (e.g., \"in not many words, ...\") to questions drops the accuracy from 61.1% to 19%."
                    },
                    {
                        "id": 48,
                        "string": "QA on tables (section 5): We analyze a system called Neural Programmer (henceforth, NP) (Neelakantan et al., 2017) that answers questions on tabular data."
                    },
                    {
                        "id": 49,
                        "string": "NP determines the answer to a question by selecting a sequence of operations to apply on the accompanying table (akin to an SQL query; details in section 5)."
                    },
                    {
                        "id": 50,
                        "string": "We find that these operation selections are more influenced by content-free words (e.g., \"in\", \"at\", \"the\") in questions than by important words such as nouns or adjectives."
                    },
                    {
                        "id": 52,
                        "string": "Dropping all content-free words reduces the validation accuracy of the network from 33.5% 1 to 28.5%."
                    },
                    {
                        "id": 53,
                        "string": "Similar to Visual QA, we show that attaching content-free phrases (e.g., \"in not a lot of words\") to the question drops the network's accuracy from 33.5% to 3.3%."
                    },
                    {
                        "id": 54,
                        "string": "We also find that NP often gets the answer right for the wrong reasons."
                    },
                    {
                        "id": 55,
                        "string": "For instance, for the question \"which nation earned the most gold medals?\", one of the operations selected by NP is \"first\" (pick the first row of the table)."
                    },
                    {
                        "id": 57,
                        "string": "Its answer is right only because the table happens to be arranged in order of rank."
                    },
                    {
                        "id": 58,
                        "string": "We quantify this weakness by evaluating NP on the set of perturbed tables generated by Pasupat and Liang (2016) and find that its accuracy drops from 33.5% to 23%."
                    },
                    {
                        "id": 59,
                        "string": "Finally, we show an extreme form of overstability where the table itself induces a large bias in the network regardless of the question."
                    },
                    {
                        "id": 60,
                        "string": "For instance, we found that in tables about Olympic medal counts, NP was predisposed to selecting the \"prev\" operator."
                    },
                    {
                        "id": 61,
                        "string": "Reading comprehension (Section 6): The task is to answer questions about paragraphs of text."
                    },
                    {
                        "id": 62,
                        "string": "We analyze the network by Yu et al. (2018)."
                    },
                    {
                        "id": 64,
                        "string": "Again, we find that the network often ignores words that should be important."
                    },
                    {
                        "id": 65,
                        "string": "Jia and Liang (2017) proposed attacks wherein sentences are added to paragraphs that ought not to change the network's answers, but sometimes do."
                    },
                    {
                        "id": 66,
                        "string": "Our main finding is that these attacks are more likely to succeed when an added sentence includes all the question words that the model found important (for the original paragraph)."
                    },
                    {
                        "id": 67,
                        "string": "For instance, we find that attacks are 50% more likely to be successful when the added sentence includes top-attributed nouns in the question."
                    },
                    {
                        "id": 68,
                        "string": "This insight should allow the construction of more successful attacks and better training data sets."
                    },
                    {
                        "id": 69,
                        "string": "In summary, we find that all networks ignore important parts of questions."
                    },
                    {
                        "id": 70,
                        "string": "One can fix this by either improving training data, or introducing an inductive bias."
                    },
                    {
                        "id": 71,
                        "string": "Our analysis workflow is helpful in both cases."
                    },
                    {
                        "id": 72,
                        "string": "It would also make sense to expose end-users to attribution visualizations."
                    },
                    {
                        "id": 73,
                        "string": "Knowing which words were ignored, or which operations the words were mapped to, can help the user decide whether to trust a system's response."
                    },
                    {
                        "id": 74,
                        "string": "Related Work We are motivated by Jia and Liang (2017) ."
                    },
                    {
                        "id": 75,
                        "string": "As they discuss, \"the extent to which [reading comprehension systems] truly understand language remains unclear\"."
                    },
                    {
                        "id": 76,
                        "string": "The contrast between Jia and Liang (2017) and our work is instructive."
                    },
                    {
                        "id": 77,
                        "string": "Their main contribution is to fix the evaluation of reading comprehension systems by augmenting the test set with adversarially constructed examples."
                    },
                    {
                        "id": 78,
                        "string": "(As they point out in Section 4.6 of their paper, this does not necessarily fix the model; the model may simply learn to circumvent the specific attack underlying the adversarial examples.)"
                    },
                    {
                        "id": 79,
                        "string": "Their method is independent of the specification of the model at hand."
                    },
                    {
                        "id": 80,
                        "string": "They use crowdsourcing to craft passage perturbations intended to fool the network, and then query the network to test their effectiveness."
                    },
                    {
                        "id": 81,
                        "string": "In contrast, we propose improving the analysis of question answering systems."
                    },
                    {
                        "id": 82,
                        "string": "Our method peeks into the logic of a network to identify highattribution question terms."
                    },
                    {
                        "id": 83,
                        "string": "Often there are several important question terms (e.g., nouns, adjectives) that receive tiny attribution."
                    },
                    {
                        "id": 84,
                        "string": "We leverage this weakness and perturb questions to craft targeted attacks."
                    },
                    {
                        "id": 85,
                        "string": "While Jia and Liang (2017) focus exclusively on systems for the reading comprehension task, we analyze one system each for three different tasks."
                    },
                    {
                        "id": 86,
                        "string": "Our method also helps improve the efficacy Jia and Liang (2017)'s attacks; see table 4 for examples."
                    },
                    {
                        "id": 87,
                        "string": "Our analysis technique is specific to deep-learning-based systems, whereas theirs is not."
                    },
                    {
                        "id": 88,
                        "string": "We could use many other methods instead of Integrated Gradients (IG) to attribute a deep network's prediction to its input features (Baehrens et al., 2010; Simonyan et al., 2013; Shrikumar et al., 2016; Binder et al., 2016; Springenberg et al., 2014)."
                    },
                    {
                        "id": 89,
                        "string": "One could also use model-agnostic techniques like Ribeiro et al. (2016b)."
                    },
                    {
                        "id": 90,
                        "string": "We choose IG for its ease and efficiency of implementation (it requires just a few gradient calls) and for its axiomatic justification (see Sundararajan et al. (2017) for a detailed comparison with other attribution methods)."
                    },
                    {
                        "id": 93,
                        "string": "Recently, there have been a number of techniques for crafting and defending against adversarial attacks on image-based deep learning models (cf. Goodfellow et al. (2015))."
                    },
                    {
                        "id": 94,
                        "string": "They are based on the oversensitivity of models, i.e., tiny, imperceptible perturbations of the image can change a model's response."
                    },
                    {
                        "id": 97,
                        "string": "In contrast, our attacks are based on models' over-reliance on few question words even when other words should matter."
                    },
                    {
                        "id": 98,
                        "string": "We discuss task-specific related work in corresponding sections (sections 4 to 6)."
                    },
                    {
                        "id": 99,
                        "string": "Integrated Gradients (IG) We employ an attribution technique called Integrated Gradients (IG) (Sundararajan et al., 2017) to isolate question words that a deep learning system uses to produce an answer."
                    },
                    {
                        "id": 100,
                        "string": "Formally, suppose a function $F : \\mathbb{R}^n \\to [0, 1]$ represents a deep network, and an input $x = (x_1, \\ldots, x_n) \\in \\mathbb{R}^n$."
                    },
                    {
                        "id": 101,
                        "string": "An attribution of the prediction at input $x$ relative to a baseline input $x'$ is a vector $A_F(x, x') = (a_1, \\ldots, a_n) \\in \\mathbb{R}^n$, where $a_i$ is the contribution of $x_i$ to the prediction $F(x)$."
                    },
                    {
                        "id": 102,
                        "string": "One can think of $F$ as the probability of a specific response."
                    },
                    {
                        "id": 103,
                        "string": "$x_1, \\ldots, x_n$ are the question words; to be precise, they are the vector representations of these terms."
                    },
                    {
                        "id": 104,
                        "string": "The attributions $a_1, \\ldots, a_n$ are the influences (blame assignments) of the variables $x_1, \\ldots, x_n$ on the probability $F$."
                    },
                    {
                        "id": 121,
                        "string": "Notice that attributions are defined relative to a special, uninformative input called the baseline."
                    },
                    {
                        "id": 122,
                        "string": "In this paper, we use an empty question as the baseline, that is, a sequence of word embeddings corresponding to the padding value."
                    },
                    {
                        "id": 123,
                        "string": "Note that the context (image, table, or passage) of the baseline $x'$ is set to be that of $x$; only the question is set to empty."
                    },
                    {
                        "id": 124,
                        "string": "We now describe how IG produces attributions."
                    },
                    {
                        "id": 125,
                        "string": "Intuitively, as we interpolate between the baseline and the input, the prediction moves along a trajectory, from uncertainty to certainty (the final probability)."
                    },
                    {
                        "id": 126,
                        "string": "At each point on this trajectory, one can use the gradient of the function F with respect to the input to attribute the change in probability back to the input variables."
                    },
                    {
                        "id": 127,
                        "string": "IG simply aggregates the gradients of the probability with respect to the input along this trajectory using a path integral."
                    },
                    {
                        "id": 128,
                        "string": "Definition 1 (Integrated Gradients) Given an input $x$ and baseline $x'$, the integrated gradient along the $i$-th dimension is defined as follows:"
                    },
                    {
                        "id": 129,
                        "string": "$\\mathrm{IG}_i(x, x') ::= (x_i - x'_i) \\times \\int_{\\alpha=0}^{1} \\frac{\\partial F(x' + \\alpha (x - x'))}{\\partial x_i} \\, d\\alpha$ (here $\\frac{\\partial F(x)}{\\partial x_i}$ is the gradient of $F$ along the $i$-th dimension at $x$)."
                    },
                    {
                        "id": 130,
                        "string": "IG satisfies the condition that the attributions sum to the difference between the probabilities at the input and the baseline."
                    },
                    {
                        "id": 131,
                        "string": "We call a variable uninfluential if, all else fixed, varying it does not change the output probability."
                    },
                    {
                        "id": 132,
                        "string": "IG satisfies the property that uninfluential variables do not get any attribution."
                    },
                    {
                        "id": 133,
                        "string": "Conversely, influential variables always get some attribution."
                    },
                    {
                        "id": 134,
                        "string": "Attributions for a linear combination of two functions $F_1$ and $F_2$ are a linear combination of the attributions for $F_1$ and $F_2$."
                    },
                    {
                        "id": 135,
                        "string": "Finally, IG satisfies the condition that symmetric variables get equal attributions."
                    },
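                    {
                        "id": 135.1,
                        "string": "As an illustration, here is a minimal NumPy sketch of this computation, approximating the integral with a midpoint Riemann sum; 'grad_fn' and the embedded input and baseline are hypothetical placeholders, not the authors' code:"
                    },
                    {
                        "id": 135.2,
                        "string": "import numpy as np\n\ndef integrated_gradients(grad_fn, x, x0, steps=50):\n    # grad_fn(z) returns the gradient of F at z (same shape as x).\n    # x is the embedded input; x0 is the all-padding baseline.\n    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]\n    avg_grad = np.zeros_like(x)\n    for a in alphas:\n        avg_grad += grad_fn(x0 + a * (x - x0))\n    avg_grad /= steps\n    # Scale the averaged gradients by (input - baseline); the attributions\n    # then sum to approximately F(x) - F(x0).\n    return (x - x0) * avg_grad"
                    },
                    {
                        "id": 135.3,
                        "string": "Attributions to a word can then be obtained by summing over its embedding coordinates, yielding one attribution score per question word."
                    },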
                    {
                        "id": 136,
                        "string": "In this work, we validate the use of IG empirically via question perturbations."
                    },
                    {
                        "id": 137,
                        "string": "We observe that perturbing high-attribution terms changes the networks' response (sections 4.4 and 5.5)."
                    },
                    {
                        "id": 138,
                        "string": "Conversely, perturbing terms that receive a low attribution does not change the network's response (sections 4.3 and 5.3)."
                    },
                    {
                        "id": 139,
                        "string": "We use these observations to craft attacks against the network by perturbing instances where generic words (e.g., \"a\", \"the\") receive high attribution or contentful words receive low attribution."
                    },
                    {
                        "id": 140,
                        "string": "4 Visual Question Answering 4.1 Task, model, and data The Visual Question Answering task (Agrawal et al., 2015; Teney et al., 2017; Kazemi and Elqursh, 2017; Ben-younes et al., 2017; Zhu et al., 2016) requires a system to answer questions about images (fig. 1)."
                    },
                    {
                        "id": 142,
                        "string": "We analyze the deep network from Kazemi and Elqursh (2017) ."
                    },
                    {
                        "id": 143,
                        "string": "It achieves 61.1% accuracy on the validation set (the state of the art (Fukui et al., 2016) achieves 66.7%)."
                    },
                    {
                        "id": 144,
                        "string": "We chose this model for its easy reproducibility."
                    },
                    {
                        "id": 145,
                        "string": "The VQA 1.0 dataset (Agrawal et al., 2015) consists of 614,163 questions posed over 204,721 images (3 questions per image)."
                    },
                    {
                        "id": 146,
                        "string": "The images were taken from COCO (Lin et al., 2014) , and the questions and answers were crowdsourced."
                    },
                    {
                        "id": 147,
                        "string": "The network in Kazemi and Elqursh (2017) treats question answering as a classification task wherein the classes are 3000 most frequent answers in the training data."
                    },
                    {
                        "id": 148,
                        "string": "The input question is tokenized, embedded and fed to a multi-layer LSTM."
                    },
                    {
                        "id": 149,
                        "string": "The states of the LSTM attend to a featurized version of the image, and ultimately produce a probability distribution over the answer classes."
                    },
                    {
                        "id": 150,
                        "string": "Observations We applied IG and attributed the top selected answer class to input question words."
                    },
                    {
                        "id": 151,
                        "string": "The baseline for a given input instance is the image and an Question: how symmetrical are the white bricks on either side of the building Prediction: very Ground truth: very empty question 2 ."
                    },
                    {
                        "id": 152,
                        "string": "We omit instances where the top answer class predicted by the network remains the same even when the question is emptied (i.e., the baseline input)."
                    },
                    {
                        "id": 153,
                        "string": "This is because IG attributions are not informative when the input and the baseline have the same prediction."
                    },
                    {
                        "id": 154,
                        "string": "A visualization of the attributions is shown in fig. 1."
                    },
                    {
                        "id": 156,
                        "string": "Notice that very few words have high attribution."
                    },
                    {
                        "id": 157,
                        "string": "We verified that altering the low attribution words in the question does not change the network's answer."
                    },
                    {
                        "id": 158,
                        "string": "For instance, the following questions still return \"very\" as the answer: \"how spherical are the white bricks on either side of the building\", \"how soon are the bricks fading on either side of the building\", \"how fast are the bricks speaking on either side of the building\"."
                    },
                    {
                        "id": 159,
                        "string": "On analyzing attributions across examples, we find that most of the highly attributed words are words such as \"there\", \"what\", \"how\", \"doing\"they are usually the less important words in questions."
                    },
                    {
                        "id": 160,
                        "string": "In section 4.3 we describe a test to measure the extent to which the network depends on such words."
                    },
                    {
                        "id": 161,
                        "string": "We also find that informative words in the question (e.g., nouns) often receive very low attribution, indicating a weakness on part of the network."
                    },
                    {
                        "id": 162,
                        "string": "In Section 4.4, we describe various attacks that exploit this weakness."
                    },
                    {
                        "id": 163,
                        "string": "Overstability test To determine the set of question words that the network finds most important, we isolate words that most frequently occur as top attributed words in questions."
                    },
                    {
                        "id": 164,
                        "string": "We then drop all words except these and compute the accuracy."
                    },
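                    {
                        "id": 164.1,
                        "string": "A minimal sketch of this test, assuming a hypothetical 'predict' function and a precomputed ranking of top-attributed words (not the authors' code):"
                    },
                    {
                        "id": 164.2,
                        "string": "def overstability_curve(data, predict, top_words, keep_sizes):\n    # data: (question, context, answer) triples; the context is an\n    # image, a table, or a passage, depending on the task.\n    # top_words: vocabulary ranked by how often a word is top-attributed.\n    accuracies = []\n    for k in keep_sizes:\n        keep = set(top_words[:k])\n        correct = 0\n        for question, context, answer in data:\n            pruned = ' '.join(w for w in question.split() if w in keep)\n            correct += (predict(pruned, context) == answer)\n        accuracies.append(correct / len(data))\n    return accuracies"
                    },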
                    {
                        "id": 165,
                        "string": "Figure 2 shows how the accuracy changes as the size of this isolated set is varied from 0 to 5305."
                    },
                    {
                        "id": 166,
                        "string": "We find that just one word is enough for the model to achieve more than 50% of its final accuracy."
                    },
                    {
                        "id": 167,
                        "string": "That word is \"color\"."
                    },
                    {
                        "id": 168,
                        "string": "Note that even when empty questions are passed as input to the network, its accuracy remains at about 44.3% of its original accuracy."
                    },
                    {
                        "id": 169,
                        "string": "This shows that the model is largely reliant on the image for producing the answer."
                    },
                    {
                        "id": 170,
                        "string": "The accuracy increases (almost) monotonically with the size of the isolated set."
                    },
                    {
                        "id": 171,
                        "string": "The top 6 words in the isolated set are \"color\", \"many\", \"what\", \"is\", \"there\", and \"how\"."
                    },
                    {
                        "id": 172,
                        "string": "We suspect that generic words like these are used to determine the type of the answer."
                    },
                    {
                        "id": 173,
                        "string": "The network then uses the type to choose between a few answers it can give for the image."
                    },
                    {
                        "id": 174,
                        "string": "Attacks Attributions reveal that the network relies largely on generic words in answering questions (section 4.3)."
                    },
                    {
                        "id": 175,
                        "string": "This is a weakness in the network's logic."
                    },
                    {
                        "id": 176,
                        "string": "We now describe a few attacks against the network that exploit this weakness."
                    },
                    {
                        "id": 177,
                        "string": "Subject ablation attack In this attack, we replace the subject of a question with a specific noun that consistently receives low attribution across questions."
                    },
                    {
                        "id": 178,
                        "string": "We then determine, among the questions that the network originally answered correctly, what percentage result in the same answer after the ablation."
                    },
                    {
                        "id": 179,
                        "string": "We repeat this process for different nouns; specifically, \"fits\", \"childhood\", \"copyrights\", \"mornings\", \"disorder\", \"importance\", \"topless\", \"critter\", \"jumper\", \"tweet\", and average the result."
                    },
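                    {
                        "id": 179.1,
                        "string": "A rough sketch of this procedure; 'predict' and 'get_subject' (e.g., a dependency-parser lookup) are hypothetical helpers, and the noun list is the one quoted above:"
                    },
                    {
                        "id": 179.2,
                        "string": "LOW_ATTRIBUTION_NOUNS = ['fits', 'childhood', 'copyrights', 'mornings',\n                         'disorder', 'importance', 'topless', 'critter',\n                         'jumper', 'tweet']\n\ndef subject_ablation_rate(correct_data, predict, get_subject):\n    # correct_data: (question, image) pairs the model originally got right.\n    rates = []\n    for noun in LOW_ATTRIBUTION_NOUNS:\n        same = 0\n        for question, image in correct_data:\n            original = predict(question, image)\n            ablated = question.replace(get_subject(question), noun)\n            same += (predict(ablated, image) == original)\n        rates.append(same / len(correct_data))\n    return sum(rates) / len(rates)  # averaged over the ten nouns"
                    },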
                    {
                        "id": 180,
                        "string": "We find that, among the set of questions that the network originally answered correctly, 75.6% of the questions return the same answer despite the subject replacement."
                    },
                    {
                        "id": 181,
                        "string": "Prefix attack In this attack, we attach content-free phrases to questions."
                    },
                    {
                        "id": 182,
                        "string": "The phrases are manually crafted using generic words that the network finds important (section 4.3)."
                    },
                    {
                        "id": 183,
                        "string": "Table 1 (top half) shows the resulting accuracy for three prefixes -\"in not a lot of words\", \"what is the answer to\", and \"in not many words\"."
                    },
                    {
                        "id": 184,
                        "string": "All of these phrases nearly halve the model's accuracy."
                    },
                    {
                        "id": 185,
                        "string": "The union of the three attacks drops the model's accuracy from 61.1% to 19%."
                    },
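                    {
                        "id": 185.1,
                        "string": "A sketch of how the union of the three attacks can be scored, with a hypothetical 'predict' function; a question survives the union only if the model stays correct under every prefix:"
                    },
                    {
                        "id": 185.2,
                        "string": "PREFIXES = ['in not a lot of words', 'what is the answer to',\n            'in not many words']\n\ndef union_prefix_attack_accuracy(data, predict, prefixes=PREFIXES):\n    # data: (question, image, answer) triples.\n    correct = 0\n    for question, image, answer in data:\n        correct += all(predict(p + ' ' + question, image) == answer\n                       for p in prefixes)\n    return correct / len(data)"
                    },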
                    {
                        "id": 186,
                        "string": "We note that the attributions computed for the network were crucial in crafting the prefixes."
                    },
                    {
                        "id": 187,
                        "string": "For instance, we find that other prefixes like \"tell me\", \"answer this\" and \"answer this for me\" do not drop the accuracy by much; see table 1 (bottom half)."
                    },
                    {
                        "id": 188,
                        "string": "The union of these three ineffective prefixes drops the accuracy from 61.1% to only 46.9%."
                    },
                    {
                        "id": 189,
                        "string": "Per attributions, words present in these prefixes are not deemed important by the network."
                    },
                    {
                        "id": 190,
                        "string": "Related work Agrawal et al. (2016) analyze several VQA models."
                    },
                    {
                        "id": 192,
                        "string": "Among other attacks, they test the models on question fragments of telescopically increasing length."
                    },
                    {
                        "id": 193,
                        "string": "They observe that VQA models often arrive at the same answer by looking at a small fragment of the question."
                    },
                    {
                        "id": 194,
                        "string": "Our stability analysis in section 4.3 explains, and intuitively subsumes this; indeed, several of the top attributed words appear in the prefix, while important words like \"color\" often occur in the middle of the question."
                    },
                    {
                        "id": 195,
                        "string": "Our analysis enables additional attacks, for instance, replacing the question subject with low-attribution nouns."
                    },
                    {
                        "id": 195.1,
                        "string": "Related analyses examine the VQA data, identify deficiencies, and propose data augmentation to reduce over-representation of certain question/answer types."
                    },
                    {
                        "id": 196,
                        "string": "Goyal et al. (2017) propose the VQA 2.0 dataset, which has pairs of similar images that have different answers to the same question."
                    },
                    {
                        "id": 197,
                        "string": "We note that our method can be used to improve these datasets by identifying inputs where models ignore several words."
                    },
                    {
                        "id": 198,
                        "string": "Huang et al. (2017) evaluate the robustness of VQA models by appending questions with semantically similar questions."
                    },
                    {
                        "id": 200,
                        "string": "Our prefix attacks in section 4.4 are in a similar vein and perhaps a more natural and targeted approach."
                    },
                    {
                        "id": 201,
                        "string": "Finally, Fong and Vedaldi (2017) use perturbations to explain the predictions of image networks."
                    },
                    {
                        "id": 201.1,
                        "string": "5 Question Answering over Tables 5.1 Task, model, and data We now analyze question answering over tables, based on the WikiTableQuestions benchmark dataset (Pasupat and Liang, 2015)."
                    },
                    {
                        "id": 202,
                        "string": "The dataset has 22033 questions posed over 2108 tables scraped from Wikipedia."
                    },
                    {
                        "id": 203,
                        "string": "Answers are either contents of table cells or some table aggregations."
                    },
                    {
                        "id": 204,
                        "string": "Models performing QA on tables translate the question into a structured program (akin to an SQL query) which is then executed on the table to produce the answer."
                    },
                    {
                        "id": 205,
                        "string": "We analyze a model called Neural Programmer (NP) (Neelakantan et al., 2017) ."
                    },
                    {
                        "id": 206,
                        "string": "NP is the state of the art among models that are weakly supervised, i.e., supervised using the final answer instead of the correct structured program."
                    },
                    {
                        "id": 207,
                        "string": "It achieves 33.5% accuracy on the validation set."
                    },
                    {
                        "id": 208,
                        "string": "NP translates the input into a structured program consisting of four operator and table column selections."
                    },
                    {
                        "id": 209,
                        "string": "An example of such a program is \"reset (score), reset (score), min (score), print (name)\", where the output is the name of the person who has the lowest score."
                    },
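                    {
                        "id": 209.1,
                        "string": "To make the program structure concrete, here is a hypothetical, heavily simplified executor for such operator sequences; the real NP operators are selected softly and have more involved semantics:"
                    },
                    {
                        "id": 209.2,
                        "string": "def run_program(table, ops):\n    # table: list of row dicts; ops: list of (operator, column) pairs.\n    # Hard, toy semantics for a few operators, for illustration only.\n    rows = list(table)\n    answer = None\n    for op, col in ops:\n        if op == 'reset':\n            rows = list(table)   # select all rows again\n        elif op == 'prev':\n            rows = rows[:-1]     # drop the last selected row\n        elif op == 'first':\n            rows = rows[:1]      # keep only the first row\n        elif op == 'min':\n            m = min(r[col] for r in rows)\n            rows = [r for r in rows if r[col] == m]\n        elif op == 'print':\n            answer = [r[col] for r in rows]\n    return answer"
                    },
                    {
                        "id": 209.3,
                        "string": "Under these toy semantics, run_program(table, [('reset', None), ('reset', None), ('min', 'score'), ('print', 'name')]) returns the name(s) in the row(s) with the lowest score."
                    },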
                    {
                        "id": 210,
                        "string": "Observations We applied IG to attribute operator and column selection to question words."
                    },
                    {
                        "id": 211,
                        "string": "NP preprocesses inputs and whenever applicable, appends symbols tm token, cm token to questions that signify matches between a question and the accom-panying table."
                    },
                    {
                        "id": 212,
                        "string": "These symbols are treated the same as question words."
                    },
                    {
                        "id": 213,
                        "string": "NP also computes priors for column selection using question-table matches."
                    },
                    {
                        "id": 214,
                        "string": "These vectors, tm and cm, are passed as additional inputs to the neural network."
                    },
                    {
                        "id": 215,
                        "string": "In the baseline for IG, we use an empty question, and zero vectors for column selection priors 3 ."
                    },
                    {
                        "id": 216,
                        "string": "We visualize the attributions using an alignment matrix; such matrices are commonly used in the analysis of translation models (fig. 3)."
                    },
                    {
                        "id": 218,
                        "string": "Observe that the operator \"first\" is used when the question is asking for a superlative."
                    },
                    {
                        "id": 219,
                        "string": "Further, we see that the word \"gold\" is a trigger for this operator."
                    },
                    {
                        "id": 220,
                        "string": "We investigate implications of this behavior in the following sections."
                    },
                    {
                        "id": 221,
                        "string": "Overstability test Similar to the test we did for Visual QA (section 4.3), we check for overstability in NP by looking at accuracy as a function of the vocabulary size."
                    },
                    {
                        "id": 222,
                        "string": "We treat table match annotations tm token, cm token and the out-of-vocab token (unk ) as part of the vocabulary."
                    },
                    {
                        "id": 223,
                        "string": "The results are in fig. 4."
                    },
                    {
                        "id": 224,
                        "string": "We see that the curve is similar to that of Visual QA (fig. 2)."
                    },
                    {
                        "id": 227,
                        "string": "Just 5 words (along with the column selection priors) are sufficient for the model to reach more than 50% of its final accuracy on the validation set."
                    },
                    {
                        "id": 228,
                        "string": "These five words are: \"many\", \"number\", \"tm token\", \"after\", and \"total\"."
                    },
                    {
                        "id": 229,
                        "string": "Table-specific default programs We saw in the previous section that the model relies on only a few words in producing correct answers."
                    },
                    {
                        "id": 230,
                        "string": "[Figure 4: words are chosen in descending order of how frequently they appear as top attributions to question terms; the x-axis is on log scale, except near zero where it is linear; just 5 words are necessary for the network to reach more than 50% of its final accuracy.]"
                    },
                    {
                        "id": 231,
                        "string": "An extreme case of overstability is when the operator sequences produced by the model are independent of the question."
                    },
                    {
                        "id": 234,
                        "string": "We find that if we supply an empty question as an input, i.e., the output is a function only of the table, then the distribution over programs is quite skewed."
                    },
                    {
                        "id": 235,
                        "string": "We call these programs table-specific default programs."
                    },
                    {
                        "id": 235.1,
                        "string": "For each default program, we used IG to attribute operator and column selections to column names, and show the ten most frequently occurring ones across tables in the validation set (table 2)."
                    },
                    {
                        "id": 236,
                        "string": "Here is an insight from this analysis: NP uses the combination \"reset, prev\" to exclude the last row of the table from answer computation."
                    },
                    {
                        "id": 237,
                        "string": "The default program corresponding to \"reset, prev, max, print\" has attributions to column names such as \"rank\", \"gold\", \"silver\", \"bronze\", \"nation\", \"year\"."
                    },
                    {
                        "id": 238,
                        "string": "These column names indicate medal tallies and usually have a \"total\" row."
                    },
                    {
                        "id": 239,
                        "string": "If the table happens not to have a \"total\" row, the model may  produce an incorrect answer."
                    },
                    {
                        "id": 240,
                        "string": "We now describe attacks that add or drop content-free words from the question, and cause NP to produce the wrong answer."
                    },
                    {
                        "id": 241,
                        "string": "These attacks leverage the attribution analysis."
                    },
                    {
                        "id": 242,
                        "string": "Attacks Question concatenation attacks In these attacks, we either suffix or prefix contentfree phrases to questions."
                    },
                    {
                        "id": 243,
                        "string": "The phrases are crafted using irrelevant trigger words for operator selections (supplementary material, table 5)."
                    },
                    {
                        "id": 244,
                        "string": "We manually ensure that the phrases are content-free."
                    },
                    {
                        "id": 245,
                        "string": "Table 3 describes our results."
                    },
                    {
                        "id": 246,
                        "string": "The first 4 phrases use irrelevant trigger words and result in a large drop in accuracy."
                    },
                    {
                        "id": 247,
                        "string": "For instance, the first phrase uses \"not\" which is a trigger for \"next\", \"last\", and \"min\", and the second uses \"same\" which is a trigger for \"next\" and \"mfe\"."
                    },
                    {
                        "id": 248,
                        "string": "The four phrases combined results in the model's accuracy going down from 33.5% to 3.3%."
                    },
                    {
                        "id": 249,
                        "string": "The first two phrases alone drop the accuracy to 5.6%."
                    },
                    {
                        "id": 250,
                        "string": "The next set of phrases use words that receive low attribution across questions, and are hence non-triggers for any operator."
                    },
                    {
                        "id": 251,
                        "string": "The resulting drop in accuracy on using these phrases is relatively low."
                    },
                    {
                        "id": 252,
                        "string": "Combined, they result in the model's accuracy dropping from 33.5% to 27.1%."
                    },
                    {
                        "id": 253,
                        "string": "Stop word deletion attacks We find that sometimes an operator is selected based on stop words like: \"a\", \"at\", \"the\", etc."
                    },
                    {
                        "id": 254,
                        "string": "For instance, in the question \"what ethnicity is at the top?\", the operator \"next\" is triggered on the word \"at\"."
                    },
                    {
                        "id": 256,
                        "string": "Dropping the word \"at\" from the question changes the operator selection and causes NP to return the wrong answer."
                    },
                    {
                        "id": 257,
                        "string": "We drop stop words from questions in the validation dataset that were originally answered correctly and test NP on them."
                    },
                    {
                        "id": 258,
                        "string": "The stop words to be dropped were manually selected 4 and are shown in Figure 5 in the supplementary material."
                    },
                    {
                        "id": 259,
                        "string": "By dropping stop words, the accuracy drops from 33.5% to 28.5%."
                    },
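                    {
                        "id": 259.1,
                        "string": "A sketch of this attack; 'predict' is a hypothetical function, and the stop-word set here is an illustrative subset of the manually selected list:"
                    },
                    {
                        "id": 259.2,
                        "string": "STOP_WORDS = {'a', 'at', 'the', 'in', 'of', 'is'}  # illustrative subset\n\ndef drop_stop_words(question, stop_words=STOP_WORDS):\n    return ' '.join(w for w in question.split() if w not in stop_words)\n\ndef stop_word_attack_accuracy(correct_data, predict):\n    # correct_data: (question, table, answer) triples the model\n    # originally answered correctly.\n    correct = 0\n    for question, table, answer in correct_data:\n        correct += (predict(drop_stop_words(question), table) == answer)\n    return correct / len(correct_data)"
                    },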
                    {
                        "id": 260,
                        "string": "Selecting operators based on stop words is not robust."
                    },
                    {
                        "id": 261,
                        "string": "In real world search queries, users often phrase questions without stop words, trading grammatical correctness for conciseness."
                    },
                    {
                        "id": 262,
                        "string": "For instance, the user may simply say \"top ethnicity\"."
                    },
                    {
                        "id": 263,
                        "string": "It may be possible to defend against such examples by generating synthetic training data, and re-training the network on it."
                    },
                    {
                        "id": 264,
                        "string": "Row reordering attacks We found that NP often got the question right by leveraging artifacts of the table."
                    },
                    {
                        "id": 265,
                        "string": "For instance, the operators for the question \"which nation earned the most gold medals\" are \"reset\", \"prev\", \"first\" and \"print\"."
                    },
                    {
                        "id": 266,
                        "string": "The \"prev\" operator essentially excludes the last row from the answer computation."
                    },
                    {
                        "id": 267,
                        "string": "It gets the answer right for two reasons: (1) the answer is not in the last row, and (2) rows are sorted by the values in the column \"gold\"."
                    },
                    {
                        "id": 268,
                        "string": "In general, a question answering system should not rely on row ordering in tables."
                    },
                    {
                        "id": 269,
                        "string": "To quantify the extent of such biases, we used a perturbed version of WikiTableQuestions validation dataset as described in Pasupat and Liang (2016) 5 and evaluated the existing NP model on it (there was no re-training involved here)."
                    },
                    {
                        "id": 270,
                        "string": "We found that NP has only 23% accuracy on it, in constrast to an accuracy of 33.5% on the original validation dataset."
                    },
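                    {
                        "id": 270.1,
                        "string": "In the same spirit, a rough sketch of a row-reordering probe with a hypothetical 'predict' function; note that naive shuffling can invalidate genuinely order-dependent answers, which is why Pasupat and Liang (2016) construct their perturbed tables more carefully:"
                    },
                    {
                        "id": 270.2,
                        "string": "import random\n\ndef shuffled_table_accuracy(data, predict, seed=0):\n    # data: (question, table, answer) triples; a table is a list of rows.\n    rng = random.Random(seed)\n    correct = 0\n    for question, table, answer in data:\n        shuffled = list(table)\n        rng.shuffle(shuffled)  # answers should be order-invariant\n        correct += (predict(question, shuffled) == answer)\n    return correct / len(data)"
                    },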
                    {
                        "id": 271,
                        "string": "One approach to making the network robust to row-reordering attacks is to train against perturbed tables."
                    },
                    {
                        "id": 272,
                        "string": "This may also help the model generalize better."
                    },
                    {
                        "id": 273,
                        "string": "Indeed, Mudrakarta et al. (2018) note that the state-of-the-art strongly supervised model on WikiTableQuestions (Krishnamurthy et al., 2017) enjoys a 7% gain in its final accuracy by leveraging perturbed tables during training."
                    },
                    {
                        "id": 275,
                        "string": "6 Reading Comprehension 6.1 Task, model, and data The reading comprehension task involves identifying a span from a context paragraph as an answer to a question."
                    },
                    {
                        "id": 276,
                        "string": "The SQuAD dataset (Rajpurkar et al., 2016) for machine reading comprehension contains 107.7K query-answer pairs, with 87.5K for training, 10.1K for validation, and another 10.1K for testing."
                    },
                    {
                        "id": 277,
                        "string": "Deep learning methods are quite successful on this problem, with the state-of-the-art F1 score of 84.6 achieved by Yu et al. (2018); we analyze their model."
                    },
                    {
                        "id": 279,
                        "string": "Analyzing adversarial examples Recall the adversarial attacks proposed by Jia and Liang (2017) for reading comprehension systems."
                    },
                    {
                        "id": 280,
                        "string": "Their attack ADDSENT appends sentences to the paragraph that resemble an answer to the question without changing the ground truth."
                    },
                    {
                        "id": 281,
                        "string": "See the second column of table 4 for a few examples."
                    },
                    {
                        "id": 282,
                        "string": "We investigate the effectiveness of their attacks using attributions."
                    },
                    {
                        "id": 283,
                        "string": "We analyze 100 examples generated by the ADDSENT method in Jia and Liang (2017) , and find that an adversarial sentence is successful in fooling the model in two cases: First, a contentful word in the question gets low/zero attribution and the adversarially added sentence modifies that word."
                    },
                    {
                        "id": 284,
                        "string": "E.g., in the question \"Who did Kubiak take the place of after Super Bowl XXIV?\", the word \"Super\" gets low attribution."
                    },
                    {
                        "id": 287,
                        "string": "Adding \"After Champ Bowl XXV, Crowton took the place of Jeff Dean\" changes the prediction for the model."
                    },
                    {
                        "id": 288,
                        "string": "Second, a contentful word in the question that is not present in the context."
                    },
                    {
                        "id": 289,
                        "string": "E.g., in the question \"Where hotel did the Panthers stay at?\", \"hotel\" is not present in the context."
                    },
                    {
                        "id": 292,
                        "string": "Adding \"The Vikings stayed at Chicago hotel.\" changes the prediction for the model."
                    },
                    {
                        "id": 294,
                        "string": "On the flip side, an adversarial sentence is unsuccessful when a contentful word in the question having high attribution is not present in the added sentence."
                    },
                    {
                        "id": 295,
                        "string": "E.g., for \"Where according to gross state product does Victoria rank in Australia?\", \"Australia\" receives high attribution."
                    },
                    {
                        "id": 296,
                        "string": "Adding \"According to net state product, Adelaide ranks 7 in New Zealand.\" does not fool the model."
                    },
                    {
                        "id": 297,
                        "string": "[Table 4 residue: \"What period was 2.5 million years ago?\" / \"The period of Plasticean era was 2.5 billion years ago.\" / \"The period of Plasticean era was 1.5 billion years ago.\" (as a prefix)]"
                    },
                    {
                        "id": 303,
                        "string": "However, retaining \"Australia\" in the adversarial sentence does change the model's prediction."
                    },
                    {
                        "id": 304,
                        "string": "Predicting the effectiveness of attacks Next we correlate attributions with efficacy of the ADDSENT attacks."
                    },
                    {
                        "id": 305,
                        "string": "We analyzed 1000 (question, attack phrase) instances 7 where Yu et al."
                    },
                    {
                        "id": 306,
                        "string": "(2018) model has the correct baseline prediction."
                    },
                    {
                        "id": 307,
                        "string": "Of the 1000 cases, 508 are able to fool the model, while 492 are not."
                    },
                    {
                        "id": 308,
                        "string": "We split the examples into two groups."
                    },
                    {
                        "id": 309,
                        "string": "The first group has examples where a noun or adjective in the question has high attribution, but is missing from the adversarial sentence and the rest are in the second group."
                    },
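                    {
                        "id": "309-sketch",
                        "string": "A minimal Python sketch (editorial addition, not from the paper) of the grouping criterion above. The tokens, POS tags, attribution scores, and the threshold are assumed inputs, e.g. from a tagger and Integrated Gradients.",
                        "code": [
                            "def in_first_group(q_tokens, q_pos, q_attr, attack_tokens, thresh=0.5):",
                            "    # First group: some noun/adjective in the question has high",
                            "    # attribution but does not appear in the adversarial sentence.",
                            "    attack = set(w.lower() for w in attack_tokens)",
                            "    return any(p in ('NOUN', 'ADJ') and a >= thresh",
                            "               and w.lower() not in attack",
                            "               for w, p, a in zip(q_tokens, q_pos, q_attr))"
                        ]
                    },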
                    {
                        "id": 310,
                        "string": "Our attribution analysis suggests that we should find more failed examples in the first group."
                    },
                    {
                        "id": 311,
                        "string": "That is indeed the case."
                    },
                    {
                        "id": 312,
                        "string": "The first group has 63% failed examples, while the second has only 40%."
                    },
                    {
                        "id": 313,
                        "string": "Recall that the attack sentences were constructed by (a) generating a sentence that answers the question, (b) replacing all the adjectives and nouns with antonyms, and named entities by the nearest word in GloVe word vector space (Pennington et al., 2014) and (c) crowdsourcing to check that the new sentence is grammatically correct."
                    },
                    {
                        "id": 314,
                        "string": "This suggests a use of attributions to improve the effectiveness of the attacks, namely ensuring that question words that the model thinks are important are left untouched in step (b) (we note that other changes in should be carried out)."
                    },
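                    {
                        "id": "314-sketch",
                        "string": "A hedged Python sketch (editorial addition) of the suggested change to ADDSENT step (b): skip the antonym / nearest-neighbor substitution for question words with high attribution, so that the model's important words survive into the attack sentence. replace_fn and the threshold are hypothetical.",
                        "code": [
                            "def attack_substitute(tokens, attr, replace_fn, thresh=0.5):",
                            "    # Leave high-attribution words untouched; substitute the rest",
                            "    # as in the original ADDSENT step (b).",
                            "    return [w if a >= thresh else replace_fn(w)",
                            "            for w, a in zip(tokens, attr)]"
                        ]
                    },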
                    {
                        "id": 315,
                        "string": "In table 4, 7 data sourced from https:// worksheets.codalab.org/worksheets/ 0xc86d3ebe69a3427d91f9aaa63f7d1e7d/ we show a few examples where an original attack did not fool the model, but preserving a noun with high attribution did."
                    },
                    {
                        "id": 316,
                        "string": "Conclusion We analyzed three question answering models using an attribution technique."
                    },
                    {
                        "id": 317,
                        "string": "Attributions helped us identify weaknesses of these models more effectively than conventional methods (based on validation sets)."
                    },
                    {
                        "id": 318,
                        "string": "We believe that a workflow that uses attributions can aid the developer in iterating on model quality more effectively."
                    },
                    {
                        "id": 319,
                        "string": "While the attacks in this paper may seem unrealistic, they do expose real weaknesses that affect the usage of a QA product."
                    },
                    {
                        "id": 320,
                        "string": "Under-reliance on important question terms is not safe."
                    },
                    {
                        "id": 321,
                        "string": "We also believe that other QA models may share these weaknesses."
                    },
                    {
                        "id": 322,
                        "string": "Our attribution-based methods can be directly used to gauge the extent of such problems."
                    },
                    {
                        "id": 323,
                        "string": "Additionally, our perturbation attacks (sections 4.4 and 5.5) serve as empirical validation of attributions."
                    },
                    {
                        "id": 324,
                        "string": "Reproducibility Code to generate attributions and reproduce our results is freely available at:"
                    },
                    {
                        "id": 325,
                        "string": "https://github.com/pramodkaushik/acl18_results"
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 20
                    },
                    {
                        "section": "Our Contributions",
                        "n": "1.1",
                        "start": 21,
                        "end": 73
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 74,
                        "end": 98
                    },
                    {
                        "section": "Integrated Gradients (IG)",
                        "n": "3",
                        "start": 99,
                        "end": 149
                    },
                    {
                        "section": "Observations",
                        "n": "4.2",
                        "start": 150,
                        "end": 162
                    },
                    {
                        "section": "Overstability test",
                        "n": "4.3",
                        "start": 163,
                        "end": 173
                    },
                    {
                        "section": "Attacks",
                        "n": "4.4",
                        "start": 174,
                        "end": 200
                    },
                    {
                        "section": "Task, model, and data",
                        "n": "5.1",
                        "start": 201,
                        "end": 209
                    },
                    {
                        "section": "Observations",
                        "n": "5.2",
                        "start": 210,
                        "end": 220
                    },
                    {
                        "section": "Overstability test",
                        "n": "5.3",
                        "start": 221,
                        "end": 228
                    },
                    {
                        "section": "Table-specific default programs",
                        "n": "5.4",
                        "start": 229,
                        "end": 278
                    },
                    {
                        "section": "Analyzing adversarial examples",
                        "n": "6.2",
                        "start": 279,
                        "end": 303
                    },
                    {
                        "section": "Predicting the effectiveness of attacks",
                        "n": "6.3",
                        "start": 304,
                        "end": 315
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 316,
                        "end": 325
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1299-Figure3-1.png",
                        "caption": "Figure 3: Visualization of attributions. Question words, preprocessing tokens and column selection priors on the Yaxis. Along the X-axis are operator and column selections with their baseline counterparts in parentheses. Operators and columns not affecting the final answer, and those which are same as their baseline counterparts, are given zero attribution.",
                        "page": 5,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 169.44,
                            "y2": 307.2
                        }
                    },
                    {
                        "filename": "../figure/image/1299-Table2-1.png",
                        "caption": "Table 2: Attributions to column names for table-specific default programs (programs returned by NP on empty input questions). See supplementary material, table 6 for the full list. These results are indication that the network is predisposed towards picking certain operators solely based on the table.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 558.24,
                            "y1": 67.2,
                            "y2": 155.04
                        }
                    },
                    {
                        "filename": "../figure/image/1299-Figure4-1.png",
                        "caption": "Figure 4: Accuracy as a function of vocabulary size. The words are chosen in the descending order of their frequency appearance as top attributions to question terms. The X-axis is on logscale, except near zero where it is linear. Note that just 5 words are necessary for the network to reach more than 50% of its final accuracy.",
                        "page": 6,
                        "bbox": {
                            "x1": 79.67999999999999,
                            "x2": 269.28,
                            "y1": 219.84,
                            "y2": 346.08
                        }
                    },
                    {
                        "filename": "../figure/image/1299-Table3-1.png",
                        "caption": "Table 3: Neural Programmer (Neelakantan et al., 2017): Left: Validation accuracy when attack phrases are concatenated to the question. (Original: 33.5%)",
                        "page": 6,
                        "bbox": {
                            "x1": 324.96,
                            "x2": 508.32,
                            "y1": 202.56,
                            "y2": 336.96
                        }
                    },
                    {
                        "filename": "../figure/image/1299-Figure1-1.png",
                        "caption": "Figure 1: Visual QA (Kazemi and Elqursh, 2017): Visualization of attributions (word importances) for a question that the network gets right. Red indicates high attribution, blue negative attribution, and gray near-zero attribution. The colors are determined by attributions normalized w.r.t the maximum magnitude of attributions among the question’s words.",
                        "page": 3,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 530.88,
                            "y1": 61.44,
                            "y2": 145.92
                        }
                    },
                    {
                        "filename": "../figure/image/1299-Table4-1.png",
                        "caption": "Table 4: ADDSENT attacks that failed to fool the model. With modifications to preserve nouns with high attributions, these are successful in fooling the model. Question words that receive high attribution are colored red (intensity indicates magnitude).",
                        "page": 8,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 521.28,
                            "y1": 67.2,
                            "y2": 247.2
                        }
                    },
                    {
                        "filename": "../figure/image/1299-Table1-1.png",
                        "caption": "Table 1: VQA network (Kazemi and Elqursh, 2017): Accuracy for prefix attacks; original accuracy is 61.1%.",
                        "page": 4,
                        "bbox": {
                            "x1": 339.84,
                            "x2": 493.44,
                            "y1": 62.4,
                            "y2": 197.28
                        }
                    },
                    {
                        "filename": "../figure/image/1299-Figure2-1.png",
                        "caption": "Figure 2: VQA network (Kazemi and Elqursh, 2017): Accuracy as a function of vocabulary size, relative to its original accuracy. Words are chosen in the descending order of how frequently they appear as top attributions. The X-axis is on logscale, except near zero where it is linear.",
                        "page": 4,
                        "bbox": {
                            "x1": 79.67999999999999,
                            "x2": 269.28,
                            "y1": 131.51999999999998,
                            "y2": 258.24
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-69"
        },
        {
            "slides": {
                "0": {
                    "title": "ICLR 2018 Neural Language Modeling by Jointly Learning Syntax and Lexicon",
                    "text": [
                        "Supervised Constituency Parsing with Syntactic Distance?"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "1": {
                    "title": "Chart Neural Parsers Transition based Neural Parsers",
                    "text": [
                        "Complexity of CYK is O(n^3).",
                        "Incompleted tree (the shift and reduce steps may not match).",
                        "The model is never exposed to its own mistakes during training"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "2": {
                    "title": "Intuitions",
                    "text": [
                        "Only the order of split (or combination) matters for reconstructing the tree.",
                        "Can we model the order directly?"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Tree to Distance",
                    "text": [
                        "The height for each non-terminal node is the maximum height of its children plus 1"
                    ],
                    "page_nums": [
                        9,
                        10
                    ],
                    "images": []
                },
                "6": {
                    "title": "Distance to Tree",
                    "text": [
                        "Split point for each bracket is the one with maximum distance."
                    ],
                    "page_nums": [
                        11,
                        12
                    ],
                    "images": []
                },
                "7": {
                    "title": "Framework for inferring the distances and labels",
                    "text": [
                        "Labels for non-leaf nodes",
                        "Labels for leaf nodes"
                    ],
                    "page_nums": [
                        14,
                        19,
                        20
                    ],
                    "images": []
                },
                "8": {
                    "title": "Inferring the distances",
                    "text": [
                        "<s> She enjoys| | playing tennis a </s>"
                    ],
                    "page_nums": [
                        15,
                        16
                    ],
                    "images": []
                },
                "9": {
                    "title": "Pairwise learning to rank loss for distances",
                    "text": [
                        "a variant of hinge loss",
                        "While di > dj While di < dj"
                    ],
                    "page_nums": [
                        17,
                        18
                    ],
                    "images": []
                },
                "10": {
                    "title": "Inferring the Labels",
                    "text": [
                        "on to fu fu te"
                    ],
                    "page_nums": [
                        21,
                        22,
                        23
                    ],
                    "images": [
                        "figure/image/1305-Figure3-1.png"
                    ]
                },
                "18": {
                    "title": "One more thing",
                    "text": [
                        "The research in rank loss is well-studied in the topic of",
                        "Models that are good at learning these syntactic distances are not widely known until the rediscovery of LSTM in 2013 (Graves 2013).",
                        "Efficient regularization methods for LSTM didnt become mature until"
                    ],
                    "page_nums": [
                        33
                    ],
                    "images": []
                }
            },
            "paper_title": "Straight to the Tree: Constituency Parsing with Neural Syntactic Distance",
            "paper_id": "1305",
            "paper": {
                "title": "Straight to the Tree: Constituency Parsing with Neural Syntactic Distance",
                "abstract": "In this work, we propose a novel constituency parsing scheme. The model predicts a vector of real-valued scalars, named syntactic distances, for each split position in the input sentence. The syntactic distances specify the order in which the split points will be selected, recursively partitioning the input, in a top-down fashion. Compared to traditional shiftreduce parsing schemes, our approach is free from the potential problem of compounding errors, while being faster and easier to parallelize. Our model achieves competitive performance amongst single model, discriminative parsers in the PTB dataset and outperforms previous models in the CTB dataset. * Equal contribution. Corresponding authors: yikang.shen@umontreal.ca, zhouhan.lin@umontreal.ca. † Work done while at Microsoft Research, Montreal.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Devising fast and accurate constituency parsing algorithms is an important, long-standing problem in natural language processing."
                    },
                    {
                        "id": 1,
                        "string": "Parsing has been useful for incorporating linguistic prior in several related tasks, such as relation extraction, paraphrase detection (Callison-Burch, 2008) , and more recently, natural language inference (Bowman et al., 2016) and machine translation (Eriguchi et al., 2017) ."
                    },
                    {
                        "id": 2,
                        "string": "Neural network-based approaches relying on dense input representations have recently achieved competitive results for constituency parsing (Vinyals et al., 2015; Cross and Huang, 2016; Liu and Zhang, 2017b; Stern et al., 2017a) ."
                    },
                    {
                        "id": 3,
                        "string": "Generally speaking, either these approaches produce the parse tree sequentially, by governing Figure 1: An example of how syntactic distances (d1 and d2) describe the structure of a parse tree: consecutive words with larger predicted distance are split earlier than those with smaller distances, in a process akin to divisive clustering."
                    },
                    {
                        "id": 4,
                        "string": "the sequence of transitions in a transition-based parser (Nivre, 2004; Zhu et al., 2013; Chen and Manning, 2014; Cross and Huang, 2016) , or use a chart-based approach by estimating non-linear potentials and performing exact structured inference by dynamic programming (Finkel et al., 2008; Durrett and Klein, 2015; Stern et al., 2017a) ."
                    },
                    {
                        "id": 5,
                        "string": "Transition-based models decompose the structured prediction problem into a sequence of local decisions."
                    },
                    {
                        "id": 6,
                        "string": "This enables fast greedy decoding but also leads to compounding errors because the model is never exposed to its own mistakes during training (Daumé et al., 2009 )."
                    },
                    {
                        "id": 7,
                        "string": "Solutions to this problem usually complexify the training procedure by using structured training through beamsearch (Weiss et al., 2015; Andor et al., 2016) and dynamic oracles (Goldberg and Nivre, 2012; Cross and Huang, 2016) ."
                    },
                    {
                        "id": 8,
                        "string": "On the other hand, chartbased models can incorporate structured loss functions during training and benefit from exact inference via the CYK algorithm but suffer from higher computational cost during decoding (Durrett and Klein, 2015; Stern et al., 2017a) ."
                    },
                    {
                        "id": 9,
                        "string": "In this paper, we propose a novel, fully-parallel model for constituency parsing, based on the concept of \"syntactic distance\", recently introduced by (Shen et al., 2017) for language modeling."
                    },
                    {
                        "id": 10,
                        "string": "To construct a parse tree from a sentence, one can proceed in a top-down manner, recursively splitting larger constituents into smaller constituents, where the order of the splits defines the hierarchical structure."
                    },
                    {
                        "id": 11,
                        "string": "The syntactic distances are defined for each possible split point in the sentence."
                    },
                    {
                        "id": 12,
                        "string": "The order induced by the syntactic distances fully specifies the order in which the sentence needs to be recursively split into smaller constituents (Figure 1) : in case of a binary tree, there exists a oneto-one correspondence between the ordering and the tree."
                    },
                    {
                        "id": 13,
                        "string": "Therefore, our model is trained to reproduce the ordering between split points induced by the ground-truth distances by means of a margin rank loss (Weston et al., 2011) ."
                    },
                    {
                        "id": 14,
                        "string": "Crucially, our model works in parallel: the estimated distance for each split point is produced independently from the others, which allows for an easy parallelization in modern parallel computing architectures for deep learning, such as GPUs."
                    },
                    {
                        "id": 15,
                        "string": "Along with the distances, we also train the model to produce the constituent labels, which are used to build the fully labeled tree."
                    },
                    {
                        "id": 16,
                        "string": "Our model is fully parallel and thus does not require computationally expensive structured inference during training."
                    },
                    {
                        "id": 17,
                        "string": "Mapping from syntactic distances to a tree can be efficiently done in O(n log n), which makes the decoding computationally attractive."
                    },
                    {
                        "id": 18,
                        "string": "Despite our strong conditional independence assumption on the output predictions, we achieve good performance for single model discriminative parsing in PTB (91.8 F1) and CTB (86.5 F1) matching, and sometimes outperforming, recent chart-based and transition-based parsing models."
                    },
                    {
                        "id": 19,
                        "string": "Syntactic Distances of a Parse Tree In this section, we start from the concept of syntactic distance introduced in Shen et al."
                    },
                    {
                        "id": 20,
                        "string": "(2017) for unsupervised parsing via language modeling and we extend it to the supervised setting."
                    },
                    {
                        "id": 21,
                        "string": "We propose two algorithms, one to convert a parse tree into a compact representation based on distances between consecutive words, and another to map the inferred representation back to a complete parse tree."
                    },
                    {
                        "id": 22,
                        "string": "The representation will later be used for supervised training."
                    },
                    {
                        "id": 23,
                        "string": "We formally define the syntactic distances of a parse tree as follows: d l , c l , t l , h l ← Distance(child l ) 10: d r , c r , t r , h r ← Distance(child r ) 11: h ← max(h l , h r ) + 1 12: d ← d l ∪ [h] ∪ d r 13: c ← c l ∪ [node.label] ∪ c r 14: t ← t l ∪ t r 15: end if 16: return d, c, t, h 17: end function Definition 2.1."
                    },
                    {
                        "id": 24,
                        "string": "Let T be a parse tree that contains a set of leaves (w 0 , ..., w n )."
                    },
                    {
                        "id": 25,
                        "string": "The height of the lowest common ancestor for two leaves (w i , w j ) is noted asd i j ."
                    },
                    {
                        "id": 26,
                        "string": "The syntactic distances of T can be any vector of scalars d = (d 1 , ..., d n ) that satisfy: sign(d i − d j ) = sign(d i−1 i −d j−1 j ) (1) In other words, d induces the same ranking order as the quantitiesd j i computed between pairs of consecutive words in the sequence, i.e."
                    },
                    {
                        "id": 27,
                        "string": "(d 0 1 , ...,d n−1 n )."
                    },
                    {
                        "id": 28,
                        "string": "Note that there are n − 1 syntactic distances for a sentence of length n. Example 2.1."
                    },
                    {
                        "id": 29,
                        "string": "Consider the tree in Fig."
                    },
                    {
                        "id": 30,
                        "string": "1 for which d 0 1 = 2,d 1 2 = 1."
                    },
                    {
                        "id": 31,
                        "string": "An example of valid syntactic distances for this tree is any d = (d 1 , d 2 ) such that d 1 > d 2 ."
                    },
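                    {
                        "id": "Algorithm1-sketch",
                        "string": "A minimal Python sketch (editorial addition, not the authors' code) of Algorithm 1: converting a binary parse tree into the tuple (d, c, t) plus the height h. Node objects with is_leaf, left, right, label, and tag attributes are assumed.",
                        "code": [
                            "def distance(node):",
                            "    if node.is_leaf:                  # terminal: emit the POS tag only",
                            "        return [], [], [node.tag], 0",
                            "    d_l, c_l, t_l, h_l = distance(node.left)",
                            "    d_r, c_r, t_r, h_r = distance(node.right)",
                            "    h = max(h_l, h_r) + 1             # height of this non-terminal",
                            "    d = d_l + [h] + d_r               # split heights, in-order",
                            "    c = c_l + [node.label] + c_r      # constituent labels, in-order",
                            "    t = t_l + t_r                     # POS tags, left to right",
                            "    return d, c, t, h"
                        ]
                    },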
                    {
                        "id": 32,
                        "string": "Given this definition, the parsing model predicts a sequence of scalars, which is a more natural setting for models based on neural networks, rather than predicting a set of spans."
                    },
                    {
                        "id": 33,
                        "string": "For comparison, in most of the current neural parsing methods, the model needs to output a sequence of transitions (Cross and Huang, 2016; Chen and Manning, 2014) ."
                    },
                    {
                        "id": 34,
                        "string": "Let us first consider the case of a binary parse tree."
                    },
                    {
                        "id": 35,
                        "string": "Algorithm 1 provides a way to convert it to a tuple (d, c, t), where d contains the height of the inner nodes in the tree following a left-to-right (in order) traversal, c the constituent labels for each node in the same order and t the part-of-speech  Starting with the full sentence, we pick split point 1 (as it is assigned to the larger distance) and assign label S to span (0,5)."
                    },
                    {
                        "id": 36,
                        "string": "The left child span (0,1) is assigned with a tag PRP and a label NP, which produces an unary node and a terminal node."
                    },
                    {
                        "id": 37,
                        "string": "The right child span (1,5) is assigned the label ∅, coming from implicit binarization, which indicates that the span is not a real constituent and all of its children are instead direct children of its parent."
                    },
                    {
                        "id": 38,
                        "string": "For the span (1,5), the split point 4 is selected."
                    },
                    {
                        "id": 39,
                        "string": "The recursion of splitting and labeling continues until the process reaches a terminal node."
                    },
                    {
                        "id": 40,
                        "string": "Algorithm 2 Distance to Binary Parse Tree 1: function TREE(d,c,t) 2: if d = [] then 3: node ← Leaf(t) 4: else 5: i ← arg max i (d) 6: child l ← Tree(d <i , c <i , t <i ) 7: child r ← Tree(d >i , c >i , t ≥i ) 8: node ← Node(child l , child r , c i ) 9: end if 10: return node 11: end function (POS) tags of each word in the left-to-right order."
                    },
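                    {
                        "id": "Algorithm2-sketch",
                        "string": "A minimal Python sketch (editorial addition, not the authors' code) of Algorithm 2: rebuilding a binary tree from (d, c, t) by always splitting at the largest remaining distance. Leaf and Node are assumed constructors; with n words, d and c have n-1 entries and t has n.",
                        "code": [
                            "def tree(d, c, t):",
                            "    if not d:                         # no split left: terminal node",
                            "        return Leaf(t[0])",
                            "    i = max(range(len(d)), key=lambda k: d[k])",
                            "    left = tree(d[:i], c[:i], t[:i + 1])",
                            "    right = tree(d[i + 1:], c[i + 1:], t[i + 1:])",
                            "    return Node(left, right, c[i])"
                        ]
                    },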
                    {
                        "id": 41,
                        "string": "d is a valid vector of syntactic distances satisfying Definition 2.1."
                    },
                    {
                        "id": 42,
                        "string": "Once a model has learned to predict these variables, Algorithm 2 can reconstruct a unique binary tree from the output of the model (d,ĉ,t)."
                    },
                    {
                        "id": 43,
                        "string": "The idea in Algorithm 2 is similar to the top-down parsing method proposed by Stern et al."
                    },
                    {
                        "id": 44,
                        "string": "(2017a) , but differs in one key aspect: at each recursive call, there is no need to estimate the confidence for every split point."
                    },
                    {
                        "id": 45,
                        "string": "The algorithm simply chooses the split point i with the maximumd i , and assigns to the span the predicted labelĉ i ."
                    },
                    {
                        "id": 46,
                        "string": "This makes the running time of our algorithm to be in O(n log n), compared to the O(n 2 ) of the greedy top-down algorithm by (Stern et al., 2017a) ."
                    },
                    {
                        "id": 47,
                        "string": "Figure 2 shows an example of the reconstruction of parse tree."
                    },
                    {
                        "id": 48,
                        "string": "Alternatively, the tree reconstruction process can also be done in a bottom-up manner, which requires the recursive composition of adjacent spans according to the ranking induced by their syntactic distance, a process akin to agglomerative clustering."
                    },
                    {
                        "id": 49,
                        "string": "One potential issue is the existence of unary and n-ary nodes."
                    },
                    {
                        "id": 50,
                        "string": "We follow the method proposed by Stern et al."
                    },
                    {
                        "id": 51,
                        "string": "(2017a) and add a special empty label ∅ to spans that are not themselves full constituents but simply arise during the course of implicit binarization."
                    },
                    {
                        "id": 52,
                        "string": "For the unary nodes that contains one nonterminal node, we take the common approach of treating these as additional atomic labels alongside all elementary nonterminals (Stern et al., 2017a) ."
                    },
                    {
                        "id": 53,
                        "string": "For all terminal nodes, we determine whether it belongs to a unary chain or not by predicting an additional label."
                    },
                    {
                        "id": 54,
                        "string": "If it is predicted with a label different from the empty label, we conclude that it is a direct child of a unary constituent with that label."
                    },
                    {
                        "id": 55,
                        "string": "Otherwise if it is predicted to have an empty label, we conclude that it is a child of a bigger constituent which has other constituents or words as its siblings."
                    },
                    {
                        "id": 56,
                        "string": "An n-ary node can arbitrarily be split into binary nodes."
                    },
                    {
                        "id": 57,
                        "string": "We choose to use the leftmost split point."
                    },
                    {
                        "id": 58,
                        "string": "The split point may also be chosen based on model prediction during training."
                    },
                    {
                        "id": 59,
                        "string": "Recovering an n-ary parse tree from the predicted binary tree simply requires removing the empty nodes and split combined labels corresponding to unary chains."
                    },
                    {
                        "id": 60,
                        "string": "Algorithm 2 is a divide-and-conquer algorithm."
                    },
                    {
                        "id": 61,
                        "string": "The running time of this procedure is O(n log n)."
                    },
                    {
                        "id": 62,
                        "string": "However, the algorithm is naturally adapted for execution in a parallel environment, which can further reduce its running time to O(log n)."
                    },
                    {
                        "id": 63,
                        "string": "Learning Syntactic Distances We use neural networks to estimate the vector of syntactic distances for a given sentence."
                    },
                    {
                        "id": 64,
                        "string": "We use a modified hinge loss, where the target distances are generated by the tree-to-distance conversion given by Algorithm 1."
                    },
                    {
                        "id": 65,
                        "string": "Section 3.1 will describe in detail the model architecture, and Section 3.2 describes the loss we use in this setting."
                    },
                    {
                        "id": 66,
                        "string": "Model Architecture Given input words w = (w 0 , w 1 , ..., w n ), we predict the tuple (d, c, t)."
                    },
                    {
                        "id": 67,
                        "string": "The POS tags t are given by an external Part-Of-Speech (POS) tagger."
                    },
                    {
                        "id": 68,
                        "string": "The syntactic distances d and constituent labels c are predicted using a neural network architecture that stacks recurrent (LSTM (Hochreiter and Schmidhuber, 1997) ) and convolutional layers."
                    },
                    {
                        "id": 69,
                        "string": "Words and tags are first mapped to sequences of embeddings e w 0 , ..., e w n and e t 0 , ..., e t n ."
                    },
                    {
                        "id": 70,
                        "string": "Then the word embeddings and the tag embeddings are concatenated together as inputs for a stack of bidirectional LSTM layers: h w 0 , ..., h w n = BiLSTM w ([e w 0 , e t 0 ], ..., [e w n , e t n ]) (2) where BiLSTM w (·) is the word-level bidirectional layer, which gives the model enough capacity to capture long-term syntactical relations between words."
                    },
                    {
                        "id": 71,
                        "string": "To predict the constituent labels for each word, we pass the hidden state representations h^w_0, ..., h^w_n through a 2-layer network FF^w_c with softmax output:"
                    },
                    {
                        "id": 72,
                        "string": "p(c^w_i | w) = softmax(FF^w_c(h^w_i)) (3)"
                    },
                    {
                        "id": 73,
                        "string": "To compose the necessary information for inferring the syntactic distances and the constituency label information, we perform an additional convolution:"
                    },
                    {
                        "id": 74,
                        "string": "g^s_1, ..., g^s_n = CONV(h^w_0, ..., h^w_n) (4) where g^s_i can be seen as a draft representation for each split position in Algorithm 2."
                    },
                    {
                        "id": 75,
                        "string": "Note that the subscripts of g s i s start with 1, since we have n − 1 positions as non-terminal constituents."
                    },
                    {
                        "id": 76,
                        "string": "Then, we stack a bidirectional LSTM layer on top of g^s_i:"
                    },
                    {
                        "id": 77,
                        "string": "h^s_1, ..., h^s_n = BiLSTM_s(g^s_1, ..., g^s_n) (5)"
                    },
                    {
                        "id": 78,
                        "string": "where BiLSTM_s fine-tunes the representation by conditioning on other split position representations."
                    },
                    {
                        "id": 80,
                        "string": "Interleaving between LSTM and convolution layers turned out empirically to be the best choice over multiple variations of the model, including using self-attention (Vaswani et al., 2017) instead of LSTM."
                    },
                    {
                        "id": 81,
                        "string": "To calculate the syntactic distances for each position, the vectors h^s_1, ..., h^s_n are transformed through a 2-layer feed-forward network FF_d with a single output unit (this can be done in parallel with 1x1 convolutions), with no activation function at the output layer:"
                    },
                    {
                        "id": 82,
                        "string": "d_i = FF_d(h^s_i) (6)"
                    },
                    {
                        "id": 83,
                        "string": "For predicting the constituent labels, we pass the same representations h^s_1, ..., h^s_n through another 2-layer network FF^s_c, with softmax output."
                    },
                    {
                        "id": 88,
                        "string": "p(c s i |w) = softmax(FF s c (h s i )) (7) The overall architecture is shown in Figure 2a ."
                    },
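                    {
                        "id": "architecture-sketch",
                        "string": "A schematic PyTorch sketch (editorial addition; sizes follow Section 4.1, while dropout and the 2-layer heads are simplified) of the architecture in Eqs. (2)-(7): word/tag embeddings, a word-level BiLSTM, a width-2 convolution over split positions, a split-level BiLSTM, and feed-forward heads for distances and labels.",
                        "code": [
                            "import torch",
                            "import torch.nn as nn",
                            "",
                            "class DistanceParser(nn.Module):",
                            "    def __init__(self, n_words, n_tags, n_labels, emb=400, hid=1200):",
                            "        super().__init__()",
                            "        self.emb_w = nn.Embedding(n_words, emb)",
                            "        self.emb_t = nn.Embedding(n_tags, emb)",
                            "        self.lstm_w = nn.LSTM(2 * emb, hid, bidirectional=True, batch_first=True)",
                            "        self.conv = nn.Conv1d(2 * hid, hid, kernel_size=2)   # Eq. (4): n+1 -> n positions",
                            "        self.lstm_s = nn.LSTM(hid, hid, bidirectional=True, batch_first=True)",
                            "        self.ff_wc = nn.Linear(2 * hid, n_labels)            # Eq. (3), simplified to 1 layer",
                            "        self.ff_d = nn.Linear(2 * hid, 1)                    # Eq. (6), simplified to 1 layer",
                            "        self.ff_sc = nn.Linear(2 * hid, n_labels)            # Eq. (7), simplified to 1 layer",
                            "",
                            "    def forward(self, words, tags):",
                            "        x = torch.cat([self.emb_w(words), self.emb_t(tags)], dim=-1)",
                            "        h_w, _ = self.lstm_w(x)                               # Eq. (2)",
                            "        g = self.conv(h_w.transpose(1, 2)).transpose(1, 2)",
                            "        h_s, _ = self.lstm_s(g)                               # Eq. (5)",
                            "        return self.ff_d(h_s).squeeze(-1), self.ff_wc(h_w), self.ff_sc(h_s)"
                        ]
                    },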
                    {
                        "id": 89,
                        "string": "Since the output (d, c, t) can be unambiguously transfered to a unique parse tree, the model implicitly makes all parsing decisions inside the recurrent and convolutional layers."
                    },
                    {
                        "id": 90,
                        "string": "Objective Given a set of training examples D = { d k , c k , t k , w k } K k=1 , the training objective is the sum of the prediction losses of syntactic distances d k and constituent labels c k ."
                    },
                    {
                        "id": 91,
                        "string": "Due to the categorical nature of variable c, we use a standard softmax classifier with a crossentropy loss L label for constituent labels, using the estimated probabilities obtained in Eq."
                    },
                    {
                        "id": 92,
                        "string": "3 and 7."
                    },
                    {
                        "id": 93,
                        "string": "A naïve loss function for estimating syntactic distances is the mean-squared error (MSE): The MSE loss forces the model to regress on the exact value of the true distances."
                    },
                    {
                        "id": 94,
                        "string": "Given that only the ranking induced by the ground-truth distances in d is important, as opposed to the absolute values themselves, using an MSE loss over-penalizes the model by ignoring ranking equivalence between different predictions."
                    },
                    {
                        "id": 95,
                        "string": "Therefore, we propose to minimize a pair-wise learning-to-rank loss, similar to those proposed in (Burges et al., 2005) ."
                    },
                    {
                        "id": 96,
                        "string": "We define our loss as a variant of the hinge loss as: L mse dist = i (d i −d i ) 2 (8) L rank dist = i,j>i [1 − sign(d i − d j )(d i −d j )] + , (9) where [x] + is defined as max(0, x)."
                    },
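                    {
                        "id": "rankloss-sketch",
                        "string": "A minimal PyTorch sketch (editorial addition, not the released code) of the pairwise rank loss in Eq. (9); the paper's total loss would add the label cross-entropy, L = L_label + L^rank_dist.",
                        "code": [
                            "import torch",
                            "",
                            "def rank_loss(d_hat, d):",
                            "    # Hinge on every pair i < j, signed by the ground-truth ordering.",
                            "    i, j = torch.triu_indices(len(d), len(d), offset=1)",
                            "    sign = torch.sign(d[i] - d[j])",
                            "    return torch.clamp(1 - sign * (d_hat[i] - d_hat[j]), min=0).sum()"
                        ]
                    },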
                    {
                        "id": 97,
                        "string": "This loss encourages the model to reproduce the full ranking order induced by the ground-truth distances."
                    },
                    {
                        "id": 98,
                        "string": "The final loss for the overall model is just the sum of individual losses L = L label + L rank dist ."
                    },
                    {
                        "id": 99,
                        "string": "Experiments We evaluate our model described above on 2 different datasets, the standard Wall Street Journal (WSJ) part of the Penn Treebank (PTB) dataset, and the Chinese Treebank (CTB) dataset."
                    },
                    {
                        "id": 100,
                        "string": "For evaluating the F1 score, we use the standard evalb 1 tool."
                    },
                    {
                        "id": 101,
                        "string": "We provide both labeled and unlabeled F1 score, where the former takes into consideration the constituent label for each predicted 1 http://nlp.cs.nyu.edu/evalb/ constituent, while the latter only considers the position of the constituents."
                    },
                    {
                        "id": 102,
                        "string": "In the tables below, we report the labeled F1 scores for comparison with previous work, as this is the standard metric usually reported in the relevant literature."
                    },
                    {
                        "id": 103,
                        "string": "Penn Treebank For the PTB experiments, we follow the standard train/valid/test separation and use sections 2-21 for training, section 22 for development and section 23 for test set."
                    },
                    {
                        "id": 104,
                        "string": "Following this split, the dataset has 45K training sentences and 1700, 2416 sentences for valid/test respectively."
                    },
                    {
                        "id": 105,
                        "string": "The placeholders with the -NONE-tag are stripped from the dataset during preprocessing."
                    },
                    {
                        "id": 106,
                        "string": "The POS tags are predicted with the Stanford Tagger (Toutanova et al., 2003) ."
                    },
                    {
                        "id": 107,
                        "string": "We use a hidden size of 1200 for each direction on all LSTMs, with 0.3 dropout in all the feedforward connections, and 0.2 recurrent connection dropout (Merity et al., 2017) ."
                    },
                    {
                        "id": 108,
                        "string": "The convolutional filter size is 2."
                    },
                    {
                        "id": 109,
                        "string": "The number of convolutional channels is 1200."
                    },
                    {
                        "id": 110,
                        "string": "As a common practice for neural network based NLP models, the embedding layer that maps word indexes to word embeddings is randomly initialized."
                    },
                    {
                        "id": 111,
                        "string": "The word embeddings are sized 400."
                    },
                    {
                        "id": 112,
                        "string": "Following (Merity et al., 2017) , we randomly swap an input word embedding during training with the zero vector with probability of 0.1."
                    },
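                    {
                        "id": "worddrop-sketch",
                        "string": "A minimal PyTorch sketch (editorial addition) of this word-level embedding swap: each word's embedding vector is zeroed out with probability p, independently per word.",
                        "code": [
                            "import torch",
                            "",
                            "def word_dropout(emb, p=0.1):",
                            "    # emb: (batch, seq_len, dim); zero whole word vectors with prob p",
                            "    keep = (torch.rand(emb.shape[:2], device=emb.device) >= p).float()",
                            "    return emb * keep.unsqueeze(-1)"
                        ]
                    },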
                    {
                        "id": 113,
                        "string": "We found this helped the model to generalize better."
                    },
                    {
                        "id": 114,
                        "string": "Training is conducted with Adam algorithm with l2 regularization decay 1 × 10 −6 ."
                    },
                    {
                        "id": 115,
                        "string": "We pick the result obtaining the highest labeled F1 Table 3 ."
                    },
                    {
                        "id": 116,
                        "string": "Our model performs achieves good performance for single-model constituency parsing trained without external data."
                    },
                    {
                        "id": 117,
                        "string": "The best result from (Stern et al., 2017b) is obtained by a generative model."
                    },
                    {
                        "id": 118,
                        "string": "Very recently, we came to knowledge of Gaddy et al."
                    },
                    {
                        "id": 119,
                        "string": "(2018) , which uses character-level LSTM features coupled with chart-based parsing to improve performance."
                    },
                    {
                        "id": 120,
                        "string": "Similar sub-word features can be also used in our model."
                    },
                    {
                        "id": 121,
                        "string": "We leave this investigation for future works."
                    },
                    {
                        "id": 122,
                        "string": "For comparison, other models obtaining better scores either use ensembles, benefit from semi-supervised learning, or recur to re-ranking of a set of candidates."
                    },
                    {
                        "id": 123,
                        "string": "Chinese Treebank We use the Chinese Treebank 5.1 dataset, with articles 001-270 and 440-1151 for training, articles  for test set."
                    },
                    {
                        "id": 124,
                        "string": "This is a standard split in the literature (Liu and Zhang, 2017b) ."
                    },
                    {
                        "id": 125,
                        "string": "The -NONE-tags are stripped as well."
                    },
                    {
                        "id": 126,
                        "string": "The hidden size for the LSTM networks is set to 1200."
                    },
                    {
                        "id": 127,
                        "string": "We use a dropout rate of 0.4 on the feed-forward connections, and 0.1 recurrent connection dropout."
                    },
                    {
                        "id": 128,
                        "string": "The convolutional layer has 1200 channels, with a filter size of 2."
                    },
                    {
                        "id": 129,
                        "string": "We use 400 dimensional word embeddings."
                    },
                    {
                        "id": 130,
                        "string": "During training, input word embeddings are randomly swapped with the zero vector with probability of 0.1."
                    },
                    {
                        "id": 131,
                        "string": "We also apply a l2 regularization weighted by 1×10 −6 on the parameters of the network."
                    },
                    {
                        "id": 132,
                        "string": "Table 2 reports our results compared to other benchmarks."
                    },
                    {
                        "id": 133,
                        "string": "To the best of our knowledge, we set a new stateof-the-art for single-model parsing achieving 86.5 F1 on the test set."
                    },
                    {
                        "id": 134,
                        "string": "The detailed statistics are shown in Table 3 ."
                    },
                    {
                        "id": 135,
                        "string": "Ablation Study We perform an ablation study by removing components from a network trained with the best set of hyperparameters, and re-train the ablated version from scratch."
                    },
                    {
                        "id": 136,
                        "string": "This gives an idea of the relative contributions of each of the components in the model."
                    },
                    {
                        "id": 137,
                        "string": "Results are reported in   imented by using 300D GloVe (Pennington et al., 2014) embedding for the input layer but this didn't yield improvements over the model's best performance."
                    },
                    {
                        "id": 138,
                        "string": "Unsurprisingly, the model trained with MSE loss underperforms considerably a model trained with the rank loss."
                    },
                    {
                        "id": 139,
                        "string": "Parsing Speed The prediction of syntactic distances can be batched in modern GPU architectures."
                    },
                    {
                        "id": 140,
                        "string": "The distance to tree conversion is a O(n log n) (n stand for the number of words in the input sentence) divide-and-conquer algorithm."
                    },
                    {
                        "id": 141,
                        "string": "We compare the parsing speed of our parser with other state-ofthe-art neural parsers in Table 5 ."
                    },
                    {
                        "id": 142,
                        "string": "As the syntactic distance computation can be performed in parallel within a GPU, we first compute the distances in a batch, then we iteratively decode the tree with Algorithm 2."
                    },
                    {
                        "id": 143,
                        "string": "It is worth to note that this comparison may be unfair since some of the reported results may use very different hardware settings."
                    },
                    {
                        "id": 144,
                        "string": "We couldn't find the source code to re-run them on our hardware, to give a fair enough comparison."
                    },
                    {
                        "id": 145,
                        "string": "In our setting, we use an NVIDIA TITAN Xp graphics card for running the neural network part, and the distance to tree inference is run on an Intel Core i7-6850K CPU, with 3.60GHz clock speed."
                    },
                    {
                        "id": 146,
                        "string": "[Table 5, parsing speed (# sents/sec): Petrov and Klein (2007): 6.2; Zhu et al. (2013): 89.5; Liu and Zhang (2017b): 79.2]"
                    },
                    {
                        "id": 147,
                        "string": "[Table 5, cont.: Stern et al. (2017a): 75.5; Our model: 111.1; Our model w/o tree inference: 351]"
                    },
                    {
                        "id": 148,
                        "string": "Related Work Parsing natural language with neural network models has recently received growing attention."
                    },
                    {
                        "id": 149,
                        "string": "These models have attained state-of-the-art results for dependency parsing (Chen and Manning, 2014) and constituency parsing (Dyer et al., 2016; Cross and Huang, 2016; Coavoux and Crabbé, 2016) ."
                    },
                    {
                        "id": 150,
                        "string": "Early work in neural network based parsing directly use a feed-forward neural network to predict parse trees (Chen and Manning, 2014) ."
                    },
                    {
                        "id": 151,
                        "string": "Vinyals et al."
                    },
                    {
                        "id": 152,
                        "string": "(2015) use a sequence-tosequence framework where the decoder outputs a linearized version of the parse tree given an input sentence."
                    },
                    {
                        "id": 153,
                        "string": "Generally, in these models, the correctness of the output tree is not strictly ensured (although empirically observed)."
                    },
                    {
                        "id": 154,
                        "string": "Other parsing methods ensure structural consistency by operating in a transition-based setting (Chen and Manning, 2014) by parsing either in the top-down direction (Dyer et al., 2016; Liu and Zhang, 2017b) , bottom-up (Zhu et al., 2013; Watanabe and Sumita, 2015; Cross and Huang, 2016) and recently in-order (Liu and Zhang, 2017a) ."
                    },
                    {
                        "id": 155,
                        "string": "Transition-based methods generally suffer from compounding errors due to exposure bias: during testing, the model is exposed to a very different regime (i.e."
                    },
                    {
                        "id": 156,
                        "string": "decisions sampled from the model itself) than what was encountered during training (i.e."
                    },
                    {
                        "id": 157,
                        "string": "the ground-truth decisions) (Daumé et al., 2009; Goldberg and Nivre, 2012) ."
                    },
                    {
                        "id": 158,
                        "string": "This can have catastrophic effects on test performance but can be mitigated to a certain extent by using beamsearch instead of greedy decoding."
                    },
                    {
                        "id": 159,
                        "string": "(Stern et al., 2017b) proposes an effective inference method for generative parsing, which enables direct decoding in those models."
                    },
                    {
                        "id": 160,
                        "string": "More complex training methods have been devised in order to alleviate this problem (Goldberg and Nivre, 2012; Cross and Huang, 2016) ."
                    },
                    {
                        "id": 161,
                        "string": "Other efforts have been put into neural chart-based parsing (Durrett and Klein, 2015; Stern et al., 2017a) which ensure structural consistency and offer exact inference with CYK algorithm."
                    },
                    {
                        "id": 162,
                        "string": "(Gaddy et al., 2018) includes a simplified CYK-style inference, but the complexity still remains in O(n 3 )."
                    },
                    {
                        "id": 163,
                        "string": "In this work, our model learns to produce a particular representation of a tree in parallel."
                    },
                    {
                        "id": 164,
                        "string": "Representations can be computed in parallel, and the conversion from representation to a full tree can efficiently be done with a divide-and-conquer algorithm."
                    },
                    {
                        "id": 165,
                        "string": "As our model outputs decisions in parallel, our model doesn't suffer from the exposure bias."
                    },
                    {
                        "id": 166,
                        "string": "Interestingly, a series of recent works, both in machine translation (Gu et al., 2018) and speech synthesis (Oord et al., 2017) , considered the sequence of output variables conditionally independent given the inputs."
                    },
                    {
                        "id": 167,
                        "string": "Conclusion We presented a novel constituency parsing scheme based on predicting real-valued scalars, named syntactic distances, whose ordering identify the sequence of top-down split decisions."
                    },
                    {
                        "id": 168,
                        "string": "We employ a neural network model that predicts the distances d and the constituent labels c. Given the algorithms presented in Section 2, we can build an unambiguous mapping between each (d, c, t) and a parse tree."
                    },
                    {
                        "id": 169,
                        "string": "One peculiar aspect of our model is that it predicts split decisions in parallel."
                    },
                    {
                        "id": 170,
                        "string": "Our experiments show that our model can achieve strong performance compare to previous models, while being significantly more efficient."
                    },
                    {
                        "id": 171,
                        "string": "Since the architecture of model is no more than a stack of standard recurrent and convolution layers, which are essential components in most academic and industrial deep learning frameworks, the deployment of this method would be straightforward."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 18
                    },
                    {
                        "section": "Syntactic Distances of a Parse Tree",
                        "n": "2",
                        "start": 19,
                        "end": 62
                    },
                    {
                        "section": "Learning Syntactic Distances",
                        "n": "3",
                        "start": 63,
                        "end": 65
                    },
                    {
                        "section": "Model Architecture",
                        "n": "3.1",
                        "start": 66,
                        "end": 98
                    },
                    {
                        "section": "Experiments",
                        "n": "4",
                        "start": 99,
                        "end": 102
                    },
                    {
                        "section": "Penn Treebank",
                        "n": "4.1",
                        "start": 103,
                        "end": 122
                    },
                    {
                        "section": "Chinese Treebank",
                        "n": "4.2",
                        "start": 123,
                        "end": 134
                    },
                    {
                        "section": "Ablation Study",
                        "n": "4.3",
                        "start": 135,
                        "end": 138
                    },
                    {
                        "section": "Parsing Speed",
                        "n": "4.4",
                        "start": 139,
                        "end": 147
                    },
                    {
                        "section": "Related Work",
                        "n": "5",
                        "start": 148,
                        "end": 166
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 167,
                        "end": 171
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1305-Figure1-1.png",
                        "caption": "Figure 1: An example of how syntactic distances (d1 and d2) describe the structure of a parse tree: consecutive words with larger predicted distance are split earlier than those with smaller distances, in a process akin to divisive clustering.",
                        "page": 0,
                        "bbox": {
                            "x1": 355.68,
                            "x2": 477.12,
                            "y1": 221.76,
                            "y2": 336.96
                        }
                    },
                    {
                        "filename": "../figure/image/1305-Table1-1.png",
                        "caption": "Table 1: Results on the PTB dataset WSJ test set, Section 23. LP, LR represents labeled precision and recall respectively.",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 291.36,
                            "y1": 62.879999999999995,
                            "y2": 381.12
                        }
                    },
                    {
                        "filename": "../figure/image/1305-Table2-1.png",
                        "caption": "Table 2: Test set performance comparison on the CTB dataset",
                        "page": 5,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 62.879999999999995,
                            "y2": 299.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1305-Table4-1.png",
                        "caption": "Table 4: Ablation test on the PTB dataset. “w/o top LSTM” is the full model without the top LSTM layer. “w. embedding” stands for the full model using the pretrained word embeddings. “w. MSE loss” stands for the full model trained with MSE loss.",
                        "page": 6,
                        "bbox": {
                            "x1": 103.67999999999999,
                            "x2": 259.2,
                            "y1": 174.72,
                            "y2": 247.2
                        }
                    },
                    {
                        "filename": "../figure/image/1305-Table3-1.png",
                        "caption": "Table 3: Detailed experimental results on PTB and CTB datasets",
                        "page": 6,
                        "bbox": {
                            "x1": 148.79999999999998,
                            "x2": 448.32,
                            "y1": 62.879999999999995,
                            "y2": 132.96
                        }
                    },
                    {
                        "filename": "../figure/image/1305-Table5-1.png",
                        "caption": "Table 5: Parsing speed in sentences per second on the PTB dataset.",
                        "page": 6,
                        "bbox": {
                            "x1": 319.68,
                            "x2": 513.12,
                            "y1": 174.72,
                            "y2": 274.08
                        }
                    },
                    {
                        "filename": "../figure/image/1305-Figure2-1.png",
                        "caption": "Figure 2: Inferring the parse tree with Algorithm 2 given distances, constituent labels, and POS tags. Starting with the full sentence, we pick split point 1 (as it is assigned to the larger distance) and assign label S to span (0,5). The left child span (0,1) is assigned with a tag PRP and a label NP, which produces an unary node and a terminal node. The right child span (1,5) is assigned the label ∅, coming from implicit binarization, which indicates that the span is not a real constituent and all of its children are instead direct children of its parent. For the span (1,5), the split point 4 is selected. The recursion of splitting and labeling continues until the process reaches a terminal node.",
                        "page": 2,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 516.48,
                            "y1": 61.44,
                            "y2": 271.68
                        }
                    },
                    {
                        "filename": "../figure/image/1305-Figure3-1.png",
                        "caption": "Figure 3: The overall visualization of our model. Circles represent hidden states, triangles represent convolution layers, block arrows represent feed-forward layers, arrows represent recurrent connections. The bottom part of the model predicts unary labels for each input word. The ∅ is treated as a special label together with other labels. The top part of the model predicts the syntactic distances and the constituent labels. The inputs of model are the word embeddings concatenated with the POS tag embeddings. The tags are given by an external Part-Of-Speech tagger.",
                        "page": 4,
                        "bbox": {
                            "x1": 116.64,
                            "x2": 481.44,
                            "y1": 61.44,
                            "y2": 216.95999999999998
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-70"
        },
        {
            "slides": {
                "1": {
                    "title": "Sentiment to Sentiment Translation",
                    "text": [
                        "The movie is amazing! - The movie is boring!",
                        "2) I went to this restaurant last weak, the staff was friendly, and I were so happy to have a great meal! - I went to this restaurant last weak, the staff was rude, and I were so angry to have a terrible meal!"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "2": {
                    "title": "Applications Dialogue Systems",
                    "text": [
                        "I am sad about the failure of the badminton player A.",
                        "The badminton player B defeats A. Congratulations!",
                        "Refined Answer: Im sorry to see that the badminton player B defeats A.",
                        "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach 5 of 34"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "3": {
                    "title": "Applications Personalized News Writing",
                    "text": [
                        "Sentiment-to-sentiment translation can save a lot of human labor!",
                        "The visiting team defeated the home team",
                        "News for fans of the visiting team: The players of the home team performed badly, and lost this game.",
                        "News for fans of the home team: Although the players of the home team have tried their best, they lost this game regretfully.",
                        "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach 6 of 34"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Challenge Can a sentiment dictionary handle this task",
                    "text": [
                        "The simple replacement of emotional words causes low-quality sentences.",
                        "The food is terrible like rock The food is delicious like rock",
                        "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach 7 of 34",
                        "For some emotional words, word sense disambiguation is necessary.",
                        "For example, good has three antonyms: evil, bad, and ill in WordNet. Choosing which word needs to be decided by the semantic meaning of good based on the given content.",
                        "Some common emotional words do not have antonyms.",
                        "For example, we find that WordNet does not annotate the antonym of delicious."
                    ],
                    "page_nums": [
                        6,
                        7,
                        8
                    ],
                    "images": []
                },
                "6": {
                    "title": "Background State of the Art Methods",
                    "text": [
                        "They first separate the non-emotional information from the emotional information in a hidden vector.",
                        "They combine the non-emotional context and the inverse sentiment to generate a sentence.",
                        "Advantage: The models can automatically generate appropriate emotional antonyms based on the non- emotional context.",
                        "Drawback: Due to the lack of supervised data, most existing models only change the underlying sentiment and fail in keeping the semantic content.",
                        "The food is delicious What a bad movie",
                        "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach 11 of 34"
                    ],
                    "page_nums": [
                        10,
                        11
                    ],
                    "images": []
                },
                "14": {
                    "title": "Dataset",
                    "text": [
                        "Provided by McAuley and Leskovec (2013). It consists of amounts of food",
                        "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach 23 of 34"
                    ],
                    "page_nums": [
                        22
                    ],
                    "images": []
                },
                "17": {
                    "title": "Results",
                    "text": [
                        "Yelp ACC BLEU G-score",
                        "Amazon ACC BLEU G-score",
                        "Automatic evaluations of the proposed method and baselines.",
                        "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach 27 of 34",
                        "Yelp Sentiment Semantic G-score"
                    ],
                    "page_nums": [
                        26,
                        27
                    ],
                    "images": []
                },
                "18": {
                    "title": "Generated Examples",
                    "text": [
                        "Input: I would strongly advise against",
                        "CAAE: I love this place for a great",
                        "MDAL: I have been a great place was",
                        "Proposed Method: I would love using",
                        "Input: Worst cleaning job ever!",
                        "CAAE: Great food and great service!",
                        "MDAL: Great food, food!",
                        "Proposed Method: Excellent outstanding job ever!",
                        "Input: Most boring show Ive ever been.",
                        "CAAE: Great place is the best place in town.",
                        "MDAL: Great place Ive ever ever had.",
                        "Proposed Method: Most amazing show Ive ever been.",
                        "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach 29 of 34"
                    ],
                    "page_nums": [
                        28
                    ],
                    "images": []
                },
                "22": {
                    "title": "Conclusion",
                    "text": [
                        "A. Enable training with unpaired data.",
                        "B. Tackle the bottleneck of keeping semantic.",
                        "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach 33 of 34"
                    ],
                    "page_nums": [
                        32
                    ],
                    "images": []
                }
            },
            "paper_title": "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach",
            "paper_id": "1309",
            "paper": {
                "title": "Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach",
                "abstract": "The goal of sentiment-to-sentiment \"translation\" is to change the underlying sentiment of a sentence while keeping its content. The main challenge is the lack of parallel data. To solve this problem, we propose a cycled reinforcement learning method that enables training on unpaired data by collaboration between a neutralization module and an emotionalization module. We evaluate our approach on two review datasets, Yelp and Amazon. Experimental results show that our approach significantly outperforms the state-of-the-art systems. Especially, the proposed method substantially improves the content preservation performance. The BLEU score is improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets, respectively. 1",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Sentiment-to-sentiment \"translation\" requires the system to change the underlying sentiment of a sentence while preserving its non-emotional semantic content as much as possible."
                    },
                    {
                        "id": 1,
                        "string": "It can be regarded as a special style transfer task that is important in Natural Language Processing (NLP) (Hu et al., 2017; Shen et al., 2017; Fu et al., 2018) ."
                    },
                    {
                        "id": 2,
                        "string": "It has broad applications, including review sentiment transformation, news rewriting, etc."
                    },
                    {
                        "id": 3,
                        "string": "Yet the lack of parallel training data poses a great obstacle to a satisfactory performance."
                    },
                    {
                        "id": 4,
                        "string": "Recently, several related studies for language style transfer (Hu et al., 2017; Shen et al., 2017) have been proposed."
                    },
                    {
                        "id": 5,
                        "string": "However, when applied * Equal Contribution."
                    },
                    {
                        "id": 6,
                        "string": "1 The released code can be found in https://github.com/lancopku/unpaired-sentiment-translation to the sentiment-to-sentiment \"translation\" task, most existing studies only change the underlying sentiment and fail in keeping the semantic content."
                    },
                    {
                        "id": 7,
                        "string": "For example, given \"The food is delicious\" as the source input, the model generates \"What a bad movie\" as the output."
                    },
                    {
                        "id": 8,
                        "string": "Although the sentiment is successfully transformed from positive to negative, the output text focuses on a different topic."
                    },
                    {
                        "id": 9,
                        "string": "The reason is that these methods attempt to implicitly separate the emotional information from the semantic information in the same dense hidden vector, where all information is mixed together in an uninterpretable way."
                    },
                    {
                        "id": 10,
                        "string": "Due to the lack of supervised parallel data, it is hard to only modify the underlying sentiment without any loss of the nonemotional semantic information."
                    },
                    {
                        "id": 11,
                        "string": "To tackle the problem of lacking parallel data, we propose a cycled reinforcement learning approach that contains two parts: a neutralization module and an emotionalization module."
                    },
                    {
                        "id": 12,
                        "string": "The neutralization module is responsible for extracting non-emotional semantic information by explicitly filtering out emotional words."
                    },
                    {
                        "id": 13,
                        "string": "The advantage is that only emotional words are removed, which does not affect the preservation of non-emotional words."
                    },
                    {
                        "id": 14,
                        "string": "The emotionalization module is responsible for adding sentiment to the neutralized semantic content for sentiment-to-sentiment translation."
                    },
                    {
                        "id": 15,
                        "string": "In cycled training, given an emotional sentence with sentiment s, we first neutralize it to the nonemotional semantic content, and then force the emotionalization module to reconstruct the original sentence by adding the sentiment s. Therefore, the emotionalization module is taught to add sentiment to the semantic context in a supervised way."
                    },
                    {
                        "id": 16,
                        "string": "By adding opposite sentiment, we can achieve the goal of sentiment-to-sentiment translation."
                    },
                    {
                        "id": 17,
                        "string": "Because of the discrete choice of neutral words, the gradient is no longer differentiable over the neutralization module."
                    },
                    {
                        "id": 18,
                        "string": "Thus, we use policy gradient, one of the reinforcement learning methods, to reward the output of the neutralization module based on the feedback from the emotionalization module."
                    },
                    {
                        "id": 19,
                        "string": "We add different sentiment to the semantic content and use the quality of the generated text as reward."
                    },
                    {
                        "id": 20,
                        "string": "The quality is evaluated by two useful metrics: one for identifying whether the generated text matches the target sentiment; one for evaluating the content preservation performance."
                    },
                    {
                        "id": 21,
                        "string": "The reward guides the neutralization module to better identify non-emotional words."
                    },
                    {
                        "id": 22,
                        "string": "In return, the improved neutralization module further enhances the emotionalization module."
                    },
                    {
                        "id": 23,
                        "string": "Our contributions are concluded as follows: • For sentiment-to-sentiment translation, we propose a cycled reinforcement learning approach."
                    },
                    {
                        "id": 24,
                        "string": "It enables training with unpaired data, in which only reviews and sentiment labels are available."
                    },
                    {
                        "id": 25,
                        "string": "• Our approach tackles the bottleneck of keeping semantic information by explicitly separating sentiment information from semantic content."
                    },
                    {
                        "id": 26,
                        "string": "• Experimental results show that our approach significantly outperforms the state-of-the-art systems, especially in content preservation."
                    },
                    {
                        "id": 27,
                        "string": "Related Work Style transfer in computer vision has been studied (Johnson et al., 2016; Gatys et al., 2016; Liao et al., 2017; Li et al., 2017; Zhu et al., 2017) ."
                    },
                    {
                        "id": 28,
                        "string": "The main idea is to learn the mapping between two image domains by capturing shared representations or correspondences of higher-level structures."
                    },
                    {
                        "id": 29,
                        "string": "There have been some studies on unpaired language style transfer recently."
                    },
                    {
                        "id": 30,
                        "string": "Hu et al."
                    },
                    {
                        "id": 31,
                        "string": "(2017) propose a new neural generative model that combines variational auto-encoders (VAEs) and holistic attribute discriminators for effective imposition of style semantic structures."
                    },
                    {
                        "id": 32,
                        "string": "Fu et al."
                    },
                    {
                        "id": 33,
                        "string": "(2018) propose to use an adversarial network to make sure that the input content does not have style information."
                    },
                    {
                        "id": 34,
                        "string": "Shen et al."
                    },
                    {
                        "id": 35,
                        "string": "(2017) focus on separating the underlying content from style information."
                    },
                    {
                        "id": 36,
                        "string": "They learn an encoder that maps the original sentence to style-independent content and a style-dependent decoder for rendering."
                    },
                    {
                        "id": 37,
                        "string": "However, their evaluations only consider the transferred style accuracy."
                    },
                    {
                        "id": 38,
                        "string": "We argue that content preservation is also an indispensable evaluation metric."
                    },
                    {
                        "id": 39,
                        "string": "However, when applied to the sentiment-to-sentiment translation task, the previously mentioned models share the same problem."
                    },
                    {
                        "id": 40,
                        "string": "They have the poor preservation of non-emotional semantic content."
                    },
                    {
                        "id": 41,
                        "string": "In this paper, we propose a cycled reinforcement learning method to improve sentiment-tosentiment translation in the absence of parallel data."
                    },
                    {
                        "id": 42,
                        "string": "The key idea is to build supervised training pairs by reconstructing the original sentence."
                    },
                    {
                        "id": 43,
                        "string": "A related study is \"back reconstruction\" in machine translation (He et al., 2016; Tu et al., 2017) ."
                    },
                    {
                        "id": 44,
                        "string": "They couple two inverse tasks: one is for translating a sentence in language A to a sentence in language B; the other is for translating a sentence in language B to a sentence in language A."
                    },
                    {
                        "id": 45,
                        "string": "Different from the previous work, we do not introduce the inverse task, but use collaboration between the neutralization module and the emotionalization module."
                    },
                    {
                        "id": 46,
                        "string": "Sentiment analysis is also related to our work (Socher et al., 2011; Pontiki et al., 2015; Rosenthal et al., 2017; Chen et al., 2017; Ma et al., 2017 Ma et al., , 2018b ."
                    },
                    {
                        "id": 47,
                        "string": "The task usually involves detecting whether a piece of text expresses positive, negative, or neutral sentiment."
                    },
                    {
                        "id": 48,
                        "string": "The sentiment can be general or about a specific topic."
                    },
                    {
                        "id": 49,
                        "string": "Cycled Reinforcement Learning for Unpaired Sentiment-to-Sentiment Translation In this section, we introduce our proposed method."
                    },
                    {
                        "id": 50,
                        "string": "An overview is presented in Section 3.1."
                    },
                    {
                        "id": 51,
                        "string": "The details of the neutralization module and the emotionalization module are shown in Section 3.2 and Section 3.3."
                    },
                    {
                        "id": 52,
                        "string": "The cycled reinforcement learning mechanism is introduced in Section 3.4."
                    },
                    {
                        "id": 53,
                        "string": "Overview The proposed approach contains two modules: a neutralization module and an emotionalization module, as shown in Figure 1 ."
                    },
                    {
                        "id": 54,
                        "string": "The neutralization module first extracts non-emotional semantic content, and then the emotionalization module attaches sentiment to the semantic content."
                    },
                    {
                        "id": 55,
                        "string": "Two modules are trained by the proposed cycled reinforcement learning method."
                    },
                    {
                        "id": 56,
                        "string": "The proposed method requires the two modules to have initial learning ability."
                    },
                    {
                        "id": 57,
                        "string": "Therefore, we propose a novel pre-training method, which uses a self-attention based sentiment classifier (SASC)."
                    },
                    {
                        "id": 58,
                        "string": "A sketch of cycled reinforcement learning is shown in Algorithm 1."
                    },
                    {
                        "id": 59,
                        "string": "The Neutralization Module Emotionalization Module The food is very * The food is very delicious details are introduced as follows."
                    },
                    {
                        "id": 60,
                        "string": "Neutralization Module The neutralization module N θ is used for explicitly filtering out emotional information."
                    },
                    {
                        "id": 61,
                        "string": "In this paper, we consider this process as an extraction problem."
                    },
                    {
                        "id": 62,
                        "string": "The neutralization module first identifies non-emotional words and then feeds them into the emotionalization module."
                    },
                    {
                        "id": 63,
                        "string": "We use a single Longshort Term Memory Network (LSTM) to generate the probability of being neutral or being polar for every word in a sentence."
                    },
                    {
                        "id": 64,
                        "string": "Given an emotional input sequence x = (x_1, x_2, ..., x_T) of T words from Γ, the vocabulary of words, this module is responsible for producing a neutralized sequence."
                    },
                    {
                        "id": 68,
                        "string": "Since cycled reinforcement learning requires the modules with initial learning ability, we propose a novel pre-training method to teach the neutralization module to identify non-emotional words."
                    },
                    {
                        "id": 69,
                        "string": "We construct a self-attention based sentiment classifier and use the learned attention weight as the supervisory signal."
                    },
                    {
                        "id": 70,
                        "string": "The motivation comes from the fact that, in a well-trained sentiment classification model, the attention weight reflects the sentiment contribution of each word to some extent."
                    },
                    {
                        "id": 71,
                        "string": "Algorithm 1 (the cycled reinforcement learning method for training the neutralization module N_θ and the emotionalization module E_φ): 1: Initialize N_θ and E_φ with random weights θ, φ. 2: Pre-train N_θ using MLE based on Eq. 6. 3: Pre-train E_φ using MLE based on Eq. 7. 4: for each iteration i = 1, 2, ..., M do 5: Sample a sequence x with sentiment s from X. 6: Generate a neutralized sequence x̂ based on N_θ. 7: Given x̂ and s, generate an output based on E_φ. 8: Compute the gradient of E_φ based on Eq. 8. 9: Compute the reward R_1 based on Eq. 11. 10: Set s̄ to the opposite sentiment. 11: Given x̂ and s̄, generate an output based on E_φ. 12: Compute the reward R_2 based on Eq. 11. 13: Compute the combined reward R_c based on Eq. 10. 14: Compute the gradient of N_θ based on Eq. 9. 15: Update model parameters θ, φ. 16: end for."
                    },
                    {
                        "id": 79,
                        "string": "Emotional words tend to get higher attention weights while neutral words usually get lower weights."
                    },
                    {
                        "id": 80,
                        "string": "The details of sentiment classifier are described as follows."
                    },
                    {
                        "id": 81,
                        "string": "Given an input sequence x, a sentiment label y is produced as y = sof tmax(W · c) (1) where W is a parameter."
                    },
                    {
                        "id": 82,
                        "string": "The term c is computed as a weighted sum of hidden vectors: c = T i=0 α i h i (2) where α i is the weight of h i ."
                    },
                    {
                        "id": 83,
                        "string": "The term h i is the output of LSTM at the i-th word."
                    },
                    {
                        "id": 84,
                        "string": "The term α i is computed as α i = exp(e i ) T i=0 exp(e i ) (3) where e i = f (h i , h T ) is an alignment model."
                    },
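                    {
                        "id": 84.1,
                        "string": "A minimal numpy sketch of Eqs. 1-3 (illustrative only; the alignment model f(h_i, h_T) is not specified above, so a plain dot product is assumed here):\nimport numpy as np\n\ndef self_attention_classifier(h, W):\n    # h: T x d matrix of LSTM hidden states; W: 2 x d parameter matrix.\n    e = h @ h[-1]  # e_i = f(h_i, h_T), assumed to be a dot product\n    a = np.exp(e) / np.exp(e).sum()  # Eq. 3: attention weights α_i\n    c = a @ h  # Eq. 2: weighted sum of hidden vectors\n    logits = W @ c\n    y = np.exp(logits) / np.exp(logits).sum()  # Eq. 1: y = softmax(W · c)\n    return y, a  # sentiment distribution and per-word attention weights"
                    },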
                    {
                        "id": 85,
                        "string": "We consider the last hidden state h T as the context vector, which contains all information of an input sequence."
                    },
                    {
                        "id": 86,
                        "string": "The term e i evaluates the contribution of each word for sentiment classification."
                    },
                    {
                        "id": 87,
                        "string": "Our experimental results show that the proposed sentiment classifier achieves the accuracy of 89% and 90% on two datasets."
                    },
                    {
                        "id": 88,
                        "string": "With high classification accuracy, the attention weight produced by the classifier is considered to adequately capture the sentiment information of each word."
                    },
                    {
                        "id": 89,
                        "string": "To extract non-emotional words based on continuous attention weights, we map attention weights to discrete values, 0 and 1."
                    },
                    {
                        "id": 90,
                        "string": "Since the discrete method is not the key part is this paper, we only use the following method for simplification."
                    },
                    {
                        "id": 91,
                        "string": "We first calculate the averaged attention value in a sentence asᾱ = 1 T T i=0 α i (4) whereᾱ is used as the threshold to distinguish non-emotional words from emotional words."
                    },
                    {
                        "id": 92,
                        "string": "The discrete attention weight is calculated aŝ α i = 1, if α i ≤ᾱ 0, if α i >ᾱ (5) whereα i is treated as the identifier."
                    },
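                    {
                        "id": 92.1,
                        "string": "Eqs. 4-5 amount to a simple thresholding rule; a small sketch with an assumed list input:\ndef discretize(alpha):\n    # Eq. 4: the average attention weight serves as the threshold.\n    mean = sum(alpha) / len(alpha)\n    # Eq. 5: neutral (low-weight) words get identifier 1, emotional words 0.\n    return [1 if a <= mean else 0 for a in alpha]"
                    },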
                    {
                        "id": 93,
                        "string": "For pre-training the neutralization module, we build the training pair of input text x and a discrete attention weight sequenceα."
                    },
                    {
                        "id": 94,
                        "string": "The cross entropy loss is computed as L θ = − T i=1 P N θ (α i |x i ) (6) Emotionalization Module The emotionalization module E φ is responsible for adding sentiment to the neutralized semantic content."
                    },
                    {
                        "id": 95,
                        "string": "In our work, we use a bi-decoder based encoder-decoder framework, which contains one encoder and two decoders."
                    },
                    {
                        "id": 96,
                        "string": "One decoder adds the positive sentiment and the other adds the negative sentiment."
                    },
                    {
                        "id": 97,
                        "string": "The input sentiment signal determines which decoder to use."
                    },
                    {
                        "id": 98,
                        "string": "Specifically, we use the seq2seq model (Sutskever et al., 2014) for implementation."
                    },
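                    {
                        "id": 98.1,
                        "string": "A schematic sketch of the bi-decoder design (the Emotionalizer class and its interfaces are assumptions for illustration, not the released code):\nimport torch.nn as nn\n\nclass Emotionalizer(nn.Module):\n    def __init__(self, encoder, pos_decoder, neg_decoder):\n        super().__init__()\n        self.encoder = encoder  # compresses semantic content into a dense vector\n        self.decoders = nn.ModuleDict({'positive': pos_decoder, 'negative': neg_decoder})\n\n    def forward(self, neutral_tokens, sentiment):\n        state = self.encoder(neutral_tokens)\n        # The input sentiment signal determines which decoder to use.\n        return self.decoders[sentiment](state)"
                    },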
                    {
                        "id": 99,
                        "string": "Both the encoder and decoder are LSTM networks."
                    },
                    {
                        "id": 100,
                        "string": "The encoder learns to compress the semantic content into a dense vector."
                    },
                    {
                        "id": 101,
                        "string": "The decoder learns to add sentiment based on the dense vector."
                    },
                    {
                        "id": 102,
                        "string": "Given the neutralized semantic content and the target sentiment, this module is responsible for producing an emotional sequence."
                    },
                    {
                        "id": 103,
                        "string": "For pre-training the emotionalization module, we first generate a neutralized input sequencex by removing emotional words identified by the proposed sentiment classifier."
                    },
                    {
                        "id": 104,
                        "string": "Given the training pair of a neutralized sequencex and an original sentence x with sentiment s, the cross entropy loss is computed as L φ = − T i=1 P E φ (x i |x i , s) (7) where a positive example goes through the positive decoder and a negative example goes through the negative decoder."
                    },
                    {
                        "id": 105,
                        "string": "We also explore a simpler method for pretraining the emotionalization module, which uses the product between a continuous vector 1 − α and a word embedding sequence as the neutralized content where α represents an attention weight sequence."
                    },
                    {
                        "id": 106,
                        "string": "Experimental results show that this method achieves much lower results than explicitly removing emotional words based on discrete attention weights."
                    },
                    {
                        "id": 107,
                        "string": "Thus, we do not choose this method in our work."
                    },
                    {
                        "id": 108,
                        "string": "Cycled Reinforcement Learning Two modules are trained by the proposed cycled method."
                    },
                    {
                        "id": 109,
                        "string": "The neutralization module first neutralizes an emotional input to semantic content and then the emotionalization module is forced to reconstruct the original sentence based on the source sentiment and the semantic content."
                    },
                    {
                        "id": 110,
                        "string": "Therefore, the emotionalization module is taught to add sentiment to the semantic content in a supervised way."
                    },
                    {
                        "id": 111,
                        "string": "Because of the discrete choice of neutral words, the loss is no longer differentiable over the neutralization module."
                    },
                    {
                        "id": 112,
                        "string": "Therefore, we formulate it as a reinforcement learning problem and use policy gradient to train the neutralization module."
                    },
                    {
                        "id": 113,
                        "string": "The detailed training process is shown as follows."
                    },
                    {
                        "id": 114,
                        "string": "We refer the neutralization module N θ as the first agent and the emotionalization module E φ as the second one."
                    },
                    {
                        "id": 115,
                        "string": "Given a sentence x associated with sentiment s, the termx represents the middle neutralized context extracted byα, which is generated by P N θ (α|x)."
                    },
                    {
                        "id": 116,
                        "string": "In cycled training, the original sentence can be viewed as the supervision for training the second agent."
                    },
                    {
                        "id": 117,
                        "string": "Thus, the gradient for the second agent is ∇ φ J(φ) = ∇ φ log(P E φ (x|x, s)) (8) We denotex as the output generated by P E φ (x|x, s)."
                    },
                    {
                        "id": 118,
                        "string": "We also denote y as the output generated by P E φ (y|x,s) wheres represents the opposite sentiment."
                    },
                    {
                        "id": 119,
                        "string": "Givenx and y, we first calculate rewards for training the neutralized module, R 1 and R 2 ."
                    },
                    {
                        "id": 120,
                        "string": "The details of calculation process will be introduced in Section 3.4.1."
                    },
                    {
                        "id": 121,
                        "string": "Then, we optimize parameters through policy gradient by maximizing the expected reward to train the neutralization module."
                    },
                    {
                        "id": 122,
                        "string": "It guides the neutralization module to identify non-emotional words better."
                    },
                    {
                        "id": 123,
                        "string": "In return, the improved neutralization module further enhances the emotionalization module."
                    },
                    {
                        "id": 124,
                        "string": "According to the policy gradient theorem (Williams, 1992) , the gradient for the first agent is ∇_θ J(θ) = E[R_c · ∇_θ log(P_{N_θ}(α̂ | x))] (9), where R_c is calculated as R_c = R_1 + R_2 (10)."
                    },
                    {
                        "id": 125,
                        "string": "Based on Eq. 8 and Eq. 9, we use the sampling approach to estimate the expected reward."
                    },
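                    {
                        "id": 126.1,
                        "string": "A conceptual sketch of Eqs. 9-10 as a REINFORCE-style surrogate loss (variable names are assumptions):\ndef neutralizer_loss(log_prob_mask, r1, r2):\n    # log_prob_mask: summed log-probability of the sampled mask under N_θ.\n    rc = r1 + r2  # Eq. 10: combined reward\n    # Minimizing -R_c · log P performs gradient ascent on the expectation in Eq. 9.\n    return -(rc * log_prob_mask)"
                    },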
                    {
                        "id": 127,
                        "string": "This cycled process is repeated until converge."
                    },
                    {
                        "id": 128,
                        "string": "Reward The reward consists of two parts, sentiment confidence and BLEU."
                    },
                    {
                        "id": 129,
                        "string": "Sentiment confidence evaluates whether the generated text matches the target sentiment."
                    },
                    {
                        "id": 130,
                        "string": "We use a pre-trained classifier to make the judgment."
                    },
                    {
                        "id": 131,
                        "string": "Specially, we use the proposed selfattention based sentiment classifier for implementation."
                    },
                    {
                        "id": 132,
                        "string": "The BLEU (Papineni et al., 2002) score is used to measure the content preservation performance."
                    },
                    {
                        "id": 133,
                        "string": "Considering that the reward should encourage the model to improve both metrics, we use the harmonic mean of sentiment confidence and BLEU as reward, which is formulated as R = (1 + β 2 ) 2 · BLEU · Conf id (β 2 · BLEU ) + Conf id (11) where β is a harmonic weight."
                    },
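                    {
                        "id": 133.1,
                        "string": "Eq. 11 transcribes directly into code ('confid' abbreviates sentiment confidence; β = 0.5 as set in the training details below):\ndef reward(bleu, confid, beta=0.5):\n    # Harmonic (F-beta style) mean of BLEU and sentiment confidence.\n    return (1 + beta ** 2) * bleu * confid / (beta ** 2 * bleu + confid)"
                    },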
                    {
                        "id": 134,
                        "string": "Experiment In this section, we evaluate our method on two review datasets."
                    },
                    {
                        "id": 135,
                        "string": "We first introduce the datasets, the training details, the baselines, and the evaluation metrics."
                    },
                    {
                        "id": 136,
                        "string": "Then, we compare our approach with the state-of-the-art systems."
                    },
                    {
                        "id": 137,
                        "string": "Finally, we show the experimental results and provide the detailed analysis of the key components."
                    },
                    {
                        "id": 138,
                        "string": "Unpaired Datasets We conduct experiments on two review datasets that contain user ratings associated with each review."
                    },
                    {
                        "id": 139,
                        "string": "Following previous work (Shen et al., 2017) , we consider reviews with rating above three as positive reviews and reviews below three as negative reviews."
                    },
                    {
                        "id": 140,
                        "string": "The positive and negative reviews are not paired."
                    },
                    {
                        "id": 141,
                        "string": "Since our approach focuses on sentence-level sentiment-to-sentiment translation where sentiment annotations are provided at the document level, we process the two datasets with the following steps."
                    },
                    {
                        "id": 142,
                        "string": "First, following previous work (Shen et al., 2017) , we filter out the reviews that exceed 20 words."
                    },
                    {
                        "id": 143,
                        "string": "Second, we construct textsentiment pairs by extracting the first sentence in a review associated with its sentiment label, because the first sentence usually expresses the core idea."
                    },
                    {
                        "id": 144,
                        "string": "Finally, we train a sentiment classifier and filter out the text-sentiment pairs with the classifier confidence below 0.8."
                    },
                    {
                        "id": 145,
                        "string": "Specially, we use the proposed self-attention based sentiment classifier for implementation."
                    },
                    {
                        "id": 146,
                        "string": "Leskovec (2013) ."
                    },
                    {
                        "id": 147,
                        "string": "It consists of amounts of food reviews from Amazon."
                    },
                    {
                        "id": 148,
                        "string": "3 The processed Amazon dataset contains 230K, 10K, and 3K pairs for training, validation, and testing, respectively."
                    },
                    {
                        "id": 149,
                        "string": "Training Details We tune hyper-parameters based on the performance on the validation sets."
                    },
                    {
                        "id": 150,
                        "string": "The self-attention based sentiment classifier is trained for 10 epochs on two datasets."
                    },
                    {
                        "id": 151,
                        "string": "We set β for calculating reward to 0.5, hidden size to 256, embedding size to 128, vocabulary size to 50K, learning rate to 0.6, and batch size to 64."
                    },
                    {
                        "id": 152,
                        "string": "We use the Adagrad (Duchi et al., 2011) optimizer."
                    },
                    {
                        "id": 153,
                        "string": "All of the gradients are clipped when the norm exceeds 2."
                    },
                    {
                        "id": 154,
                        "string": "Before cycled training, the neutralization module and the emotionalization module are pre-trained for 1 and 4 epochs on the yelp dataset, for 3 and 5 epochs on the Amazon dataset."
                    },
                    {
                        "id": 155,
                        "string": "Baselines We compare our proposed method with the following state-of-the-art systems."
                    },
                    {
                        "id": 156,
                        "string": "Cross-Alignment Auto-Encoder (CAAE): This method is proposed by Shen et al."
                    },
                    {
                        "id": 157,
                        "string": "(2017) ."
                    },
                    {
                        "id": 158,
                        "string": "They propose a method that uses refined alignment of latent representations in hidden layers to perform style transfer."
                    },
                    {
                        "id": 159,
                        "string": "We treat this model as a baseline and adapt it by using the released code."
                    },
                    {
                        "id": 160,
                        "string": "Multi-Decoder with Adversarial Learning (MDAL): This method is proposed by Fu et al."
                    },
                    {
                        "id": 161,
                        "string": "(2018) ."
                    },
                    {
                        "id": 162,
                        "string": "They use a multi-decoder model with adversarial learning to separate style representations and content representations in hidden layers."
                    },
                    {
                        "id": 163,
                        "string": "We adapt this model by using the released code."
                    },
                    {
                        "id": 164,
                        "string": "Evaluation Metrics We conduct two evaluations in this work, including an automatic evaluation and a human evaluation."
                    },
                    {
                        "id": 165,
                        "string": "The details of evaluation metrics are shown as follows."
                    },
                    {
                        "id": 166,
                        "string": "Automatic Evaluation We quantitatively measure sentiment transformation by evaluating the accuracy of generating designated sentiment."
                    },
                    {
                        "id": 167,
                        "string": "For a fair comparison, we do not use the proposed sentiment classification model."
                    },
                    {
                        "id": 168,
                        "string": "Following previous work (Shen et al., 2017; Hu et al., 2017) , we instead use a stateof-the-art sentiment classifier (Vieira and Moura, 2017) , called TextCNN, to automatically evaluate the transferred sentiment accuracy."
                    },
                    {
                        "id": 169,
                        "string": "TextCNN achieves the accuracy of 89% and 88% on two datasets."
                    },
                    {
                        "id": 170,
                        "string": "Specifically, we generate sentences given sentiment s, and use the pre-trained sentiment classifier to assign sentiment labels to the generated sentences."
                    },
                    {
                        "id": 171,
                        "string": "The accuracy is calculated as the percentage of the predictions that match the sentiment s. To evaluate the content preservation performance, we use the BLEU score (Papineni et al., 2002) between the transferred sentence and the source sentence as an evaluation metric."
                    },
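A small sketch of the ACC computation just described; `textcnn.predict` is an assumed interface for the pre-trained evaluator:

def transfer_accuracy(generated_sentences, target_sentiments, textcnn):
    # re-classify each generated sentence and count matches with the target sentiment s
    predictions = [textcnn.predict(sentence) for sentence in generated_sentences]
    matches = sum(p == s for p, s in zip(predictions, target_sentiments))
    return matches / len(generated_sentences)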
                    {
                        "id": 172,
                        "string": "BLEU is a widely used metric for text generation tasks, such as machine translation, summarization, etc."
                    },
                    {
                        "id": 173,
                        "string": "The metric compares the automatically produced text with the reference text by computing overlapping lexical n-gram units."
                    },
                    {
                        "id": 174,
                        "string": "To evaluate the overall performance, we use the geometric mean of ACC and BLEU as an evaluation metric."
                    },
                    {
                        "id": 175,
                        "string": "The G-score is one of the most commonly used \"single number\" measures in Information Retrieval, Natural Language Processing, and Machine Learning."
                    },
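Since the G-score is just the geometric mean of ACC and BLEU, the Table 1 numbers can be checked directly; a short worked example:

import math

def g_score(acc, bleu):
    return math.sqrt(acc * bleu)

print(round(g_score(80.00, 22.46), 2))   # 42.39; Table 1 reports 42.38 (rounding differs slightly)
print(round(g_score(70.37, 14.06), 2))   # 31.45, matching the Amazon row of Table 1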
                    {
                        "id": 176,
                        "string": "Human Evaluation While the quantitative evaluation provides indication of sentiment transfer quality, it can not evaluate the quality of transferred text accurately."
                    },
                    {
                        "id": 177,
                        "string": "Yelp ACC BLEU G-score CAAE (Shen et al., 2017) 93.22 1.17 10.44 MDAL (Fu et al., 2018) 85.65 1.64 11.85 Proposed Method 80.00 22.46 42.38 Amazon ACC BLEU G-score CAAE (Shen et al., 2017) 84.19 0.56 6.87 MDAL (Fu et al., 2018) 70.50 0.27 4.36 Proposed Method 70.37 14.06 31.45 Table 1 : Automatic evaluations of the proposed method and baselines."
                    },
                    {
                        "id": 178,
                        "string": "ACC evaluates sentiment transformation."
                    },
                    {
                        "id": 179,
                        "string": "BLEU evaluates content preservation."
                    },
                    {
                        "id": 180,
                        "string": "G-score is the geometric mean of ACC and BLEU."
                    },
                    {
                        "id": 181,
                        "string": "Therefore, we also perform a human evaluation on the test set."
                    },
                    {
                        "id": 182,
                        "string": "We randomly choose 200 items for the human evaluation."
                    },
                    {
                        "id": 183,
                        "string": "Each item contains the transformed sentences generated by different systems given the same source sentence."
                    },
                    {
                        "id": 184,
                        "string": "The items are distributed to annotators who have no knowledge about which system the sentence is from."
                    },
                    {
                        "id": 185,
                        "string": "They are asked to score the transformed text in terms of sentiment and semantic similarity."
                    },
                    {
                        "id": 186,
                        "string": "Sentiment represents whether the sentiment of the source text is transferred correctly."
                    },
                    {
                        "id": 187,
                        "string": "Semantic similarity evaluates the context preservation performance."
                    },
                    {
                        "id": 188,
                        "string": "The score ranges from 1 to 10 (1 is very bad and 10 is very good)."
                    },
                    {
                        "id": 189,
                        "string": "Experimental Results Automatic evaluation results are shown in Table 1 ."
                    },
                    {
                        "id": 190,
                        "string": "ACC evaluates sentiment transformation."
                    },
                    {
                        "id": 191,
                        "string": "BLEU evaluates semantic content preservation."
                    },
                    {
                        "id": 192,
                        "string": "G-score represents the geometric mean of ACC and BLEU."
                    },
                    {
                        "id": 193,
                        "string": "CAAE and MDAL achieve much lower BLEU scores, 1.17 and 1.64 on the Yelp dataset, 0.56 and 0.27 on the Amazon dataset."
                    },
                    {
                        "id": 194,
                        "string": "The low BLEU scores indicate the worrying content preservation performance to some extent."
                    },
                    {
                        "id": 195,
                        "string": "Even with the desired sentiment, the irrelevant generated text leads to worse overall performance."
                    },
                    {
                        "id": 196,
                        "string": "In general, these two systems work more like sentiment-aware language models that generate text only based on the target sentiment and neglect the source input."
                    },
                    {
                        "id": 197,
                        "string": "The main reason is that these two systems attempt to separate emotional information from non-emotional content in a hidden layer, where all information is complicatedly mixed together."
                    },
                    {
                        "id": 198,
                        "string": "It is difficult to only modify emotional information without any loss of non-emotional semantic content."
                    },
                    {
                        "id": 199,
                        "string": "In comparison, our proposed method achieves the best overall performance on the two datasets, Yelp Sentiment Semantic G-score CAAE (Shen et al., 2017) 7.67 3.87 5.45 MDAL (Fu et al., 2018) 7.12 3.68 5.12 Proposed Method 6.99 5.08 5.96 Amazon Sentiment Semantic G-score CAAE (Shen et al., 2017) 8.61 3.15 5.21 MDAL (Fu et al., 2018) 7.93 3.22 5.05 Proposed Method 7.92 4.67 6.08 Table 2 : Human evaluations of the proposed method and baselines."
                    },
                    {
                        "id": 200,
                        "string": "Sentiment evaluates sentiment transformation."
                    },
                    {
                        "id": 201,
                        "string": "Semantic evaluates content preservation."
                    },
                    {
                        "id": 202,
                        "string": "demonstrating the ability of learning knowledge from unpaired data."
                    },
                    {
                        "id": 203,
                        "string": "This result is attributed to the improved BLEU score."
                    },
                    {
                        "id": 204,
                        "string": "The BLEU score is largely improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets."
                    },
                    {
                        "id": 205,
                        "string": "The score improvements mainly come from the fact that we separate emotional information from semantic content by explicitly filtering out emotional words."
                    },
                    {
                        "id": 206,
                        "string": "The extracted content is preserved and fed into the emotionalization module."
                    },
                    {
                        "id": 207,
                        "string": "Given the overall quality of transferred text as the reward, the neutralization module is taught to extract non-emotional semantic content better."
                    },
                    {
                        "id": 208,
                        "string": "Table 2 shows the human evaluation results."
                    },
                    {
                        "id": 209,
                        "string": "It can be clearly seen that the proposed method obviously improves semantic preservation."
                    },
                    {
                        "id": 210,
                        "string": "The semantic score is increased from 3.87 to 5.08 on the Yelp dataset, and from 3.22 to 4.67 on the Amazon dataset."
                    },
                    {
                        "id": 211,
                        "string": "In general, our proposed model achieves the best overall performance."
                    },
                    {
                        "id": 212,
                        "string": "Furthermore, it also needs to be noticed that with the large improvement in content preservation, the sentiment accuracy of the proposed method is lower than that of CAAE on the two datasets."
                    },
                    {
                        "id": 213,
                        "string": "It shows that simultaneously promoting sentiment transformation and content preservation remains to be studied further."
                    },
                    {
                        "id": 214,
                        "string": "By comparing two evaluation results, we find that there is an agreement between the human evaluation and the automatic evaluation."
                    },
                    {
                        "id": 215,
                        "string": "It indicates the usefulness of automatic evaluation metrics."
                    },
                    {
                        "id": 216,
                        "string": "However, we also notice that the human evaluation has a smaller performance gap between the baselines and the proposed method than the automatic evaluation."
                    },
                    {
                        "id": 217,
                        "string": "It shows the limitation of automatic metrics for giving accurate results."
                    },
                    {
                        "id": 218,
                        "string": "For evaluating sentiment transformation, even with a high accuracy, the sentiment classifier sometimes generates noisy results, especially for those neutral sentences (e.g., \"I ate a cheese sandwich\")."
                    },
                    {
                        "id": 219,
                        "string": "For evaluating content preservation, the BLEU score Input: I would strongly advise against using this company."
                    },
                    {
                        "id": 220,
                        "string": "CAAE: I love this place for a great experience here."
                    },
                    {
                        "id": 221,
                        "string": "MDAL: I have been a great place was great."
                    },
                    {
                        "id": 222,
                        "string": "Proposed Method: I would love using this company."
                    },
                    {
                        "id": 223,
                        "string": "Input: The service was nearly non-existent and extremely rude."
                    },
                    {
                        "id": 224,
                        "string": "CAAE: The best place in the best area in vegas."
                    },
                    {
                        "id": 225,
                        "string": "MDAL: The food is very friendly and very good."
                    },
                    {
                        "id": 226,
                        "string": "Proposed Method: The service was served and completely fresh."
                    },
                    {
                        "id": 227,
                        "string": "Input: Asked for the roast beef and mushroom sub, only received roast beef."
                    },
                    {
                        "id": 228,
                        "string": "CAAE: We had a great experience with."
                    },
                    {
                        "id": 229,
                        "string": "MDAL: This place for a great place for a great food and best."
                    },
                    {
                        "id": 230,
                        "string": "Proposed Method: Thanks for the beef and spring bbq."
                    },
                    {
                        "id": 231,
                        "string": "Input: Worst cleaning job ever!"
                    },
                    {
                        "id": 232,
                        "string": "CAAE: Great food and great service!"
                    },
                    {
                        "id": 233,
                        "string": "MDAL: Great food, food!"
                    },
                    {
                        "id": 234,
                        "string": "Proposed Method: Excellent outstanding job ever!"
                    },
                    {
                        "id": 235,
                        "string": "Input: Most boring show I've ever been."
                    },
                    {
                        "id": 236,
                        "string": "CAAE: Great place is the best place in town."
                    },
                    {
                        "id": 237,
                        "string": "MDAL: Great place I've ever ever had."
                    },
                    {
                        "id": 238,
                        "string": "Proposed Method: Most amazing show I've ever been."
                    },
                    {
                        "id": 239,
                        "string": "Input: Place is very clean and the food is delicious."
                    },
                    {
                        "id": 240,
                        "string": "CAAE: Don't go to this place."
                    },
                    {
                        "id": 241,
                        "string": "MDAL: This place wasn't worth the worst place is horrible."
                    },
                    {
                        "id": 242,
                        "string": "Proposed Method: Place is very small and the food is terrible."
                    },
                    {
                        "id": 243,
                        "string": "Input: Really satisfied with experience buying clothes."
                    },
                    {
                        "id": 244,
                        "string": "CAAE: Don't go to this place."
                    },
                    {
                        "id": 245,
                        "string": "MDAL: Do not impressed with this place."
                    },
                    {
                        "id": 246,
                        "string": "Proposed Method: Really bad experience."
                    },
                    {
                        "id": 247,
                        "string": "Table 3 : Examples generated by the proposed approach and baselines on the Yelp dataset."
                    },
                    {
                        "id": 248,
                        "string": "The two baselines change not only the polarity of examples, but also the semantic content."
                    },
                    {
                        "id": 249,
                        "string": "In comparison, our approach changes the sentiment of sentences with higher semantic similarity."
                    },
                    {
                        "id": 250,
                        "string": "is computed based on the percentage of overlapping n-grams between the generated text and the reference text."
                    },
                    {
                        "id": 251,
                        "string": "However, the overlapping n-grams contain not only content words but also function words, bringing the noisy results."
                    },
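A toy illustration of this caveat: unigram overlap between a transferred sentence and its source is inflated by function words. The stopword list is abbreviated for the example:

STOPWORDS = {"the", "is", "and", "a", "was"}

def unigram_overlap(candidate, reference, drop_stopwords=False):
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    if drop_stopwords:
        cand, ref = cand - STOPWORDS, ref - STOPWORDS
    return len(cand & ref) / max(len(cand), 1)

src = "the service was nearly non-existent and extremely rude"
out = "the service was served and completely fresh"
print(unigram_overlap(out, src))                       # ~0.57, inflated by 'the', 'was', 'and'
print(unigram_overlap(out, src, drop_stopwords=True))  # 0.25, only 'service' remains shared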
                    {
                        "id": 252,
                        "string": "In general, accurate automatic evaluation metrics are expected in future work."
                    },
                    {
                        "id": 253,
                        "string": "Table 3 presents the examples generated by different systems on the Yelp dataset."
                    },
                    {
                        "id": 254,
                        "string": "The two baselines change not only the polarity of examples, but also the semantic content."
                    },
                    {
                        "id": 255,
                        "string": "In comparison, our method precisely changes the sentiment of sentences (and paraphrases slightly to ensure fluency), while keeping the semantic content unchanged."
                    },
                    {
                        "id": 256,
                        "string": "Table 4 : Performance of key components in the proposed approach."
                    },
                    {
                        "id": 257,
                        "string": "\"NM\" denotes the neutralization module."
                    },
                    {
                        "id": 258,
                        "string": "\"Cycled RL\" represents cycled reinforcement learning."
                    },
                    {
                        "id": 259,
                        "string": "Incremental Analysis In this section, we conduct a series of experiments to evaluate the contributions of our key components."
                    },
                    {
                        "id": 260,
                        "string": "The results are shown in Table 4 ."
                    },
                    {
                        "id": 261,
                        "string": "We treat the emotionalization module as a baseline where the input is the original emotional sentence."
                    },
                    {
                        "id": 262,
                        "string": "The emotionalization module achieves the highest BLEU score but with much lower sentiment transformation accuracy."
                    },
                    {
                        "id": 263,
                        "string": "The encoding of the original sentiment leads to the emotional hidden vector that influences the decoding process and results in worse sentiment transformation performance."
                    },
                    {
                        "id": 264,
                        "string": "It can be seen that the method with all components achieves the best performance."
                    },
                    {
                        "id": 265,
                        "string": "First, we find that the method that only uses cycled reinforcement learning performs badly because it is hard to guide two randomly initialized modules to teach each other."
                    },
                    {
                        "id": 266,
                        "string": "Second, the pre-training method brings a slight improvement in overall performance."
                    },
                    {
                        "id": 267,
                        "string": "The G-score is improved from 32.77 to 34.66 and from 26.46 to 27.87 on the two datasets."
                    },
                    {
                        "id": 268,
                        "string": "The bottleneck of this method is the noisy attention weight because of the limited sentiment classification accuracy."
                    },
                    {
                        "id": 269,
                        "string": "Third, the method that combines cycled reinforcement learning and pre-training achieves the better performance than using one of them."
                    },
                    {
                        "id": 270,
                        "string": "Pre-training gives the two modules initial learning ability."
                    },
                    {
                        "id": 271,
                        "string": "Cycled training teaches the two modules to improve each other based on the feedback signals."
                    },
                    {
                        "id": 272,
                        "string": "Specially, the G-score is improved from 34.66 to 42.38 and from 27.87 to 31.45 on the two datasets."
                    },
                    {
                        "id": 273,
                        "string": "Finally, by comparing the methods with and without the neutralization module, we find that the neutralization mechanism improves a lot in sentiment transformation with a slight reduction on content preservation."
                    },
                    {
                        "id": 274,
                        "string": "It proves the effectiveness of explic-Michael is absolutely wonderful."
                    },
                    {
                        "id": 275,
                        "string": "I would strongly advise against using this company."
                    },
                    {
                        "id": 276,
                        "string": "Horrible experience!"
                    },
                    {
                        "id": 277,
                        "string": "Worst cleaning job ever!"
                    },
                    {
                        "id": 278,
                        "string": "Most boring show i 've ever been."
                    },
                    {
                        "id": 279,
                        "string": "Hainan chicken was really good."
                    },
                    {
                        "id": 280,
                        "string": "I really don't understand all the negative reviews for this dentist."
                    },
                    {
                        "id": 281,
                        "string": "Smells so weird in there."
                    },
                    {
                        "id": 282,
                        "string": "The service was nearly non-existent and extremely rude."
                    },
                    {
                        "id": 283,
                        "string": "itly separating sentiment information from semantic content."
                    },
                    {
                        "id": 284,
                        "string": "Furthermore, to analyze the neutralization ability in the proposed method, we randomly sample several examples, as shown in Table 5 ."
                    },
                    {
                        "id": 285,
                        "string": "It can be clearly seen that emotional words are removed accurately almost without loss of non-emotional information."
                    },
                    {
                        "id": 286,
                        "string": "Error Analysis Although the proposed method outperforms the state-of-the-art systems, we also observe several failure cases, such as sentiment-conflicted sentences (e.g., \"Outstanding and bad service\"), neutral sentences (e.g., \"Our first time here\")."
                    },
                    {
                        "id": 287,
                        "string": "Sentiment-conflicted sentences indicate that the original sentiment is not removed completely."
                    },
                    {
                        "id": 288,
                        "string": "This problem occurs when the input contains emotional words that are unseen in the training data, or the sentiment is implicitly expressed."
                    },
                    {
                        "id": 289,
                        "string": "Handling complex sentiment expressions is an important problem for future work."
                    },
                    {
                        "id": 290,
                        "string": "Neutral sentences demonstrate that the decoder sometimes fails in adding the target sentiment and only generates text based on the semantic content."
                    },
                    {
                        "id": 291,
                        "string": "A better sentimentaware decoder is expected to be explored in future work."
                    },
                    {
                        "id": 292,
                        "string": "Conclusions and Future Work In this paper, we focus on unpaired sentimentto-sentiment translation and propose a cycled reinforcement learning approach that enables training in the absence of parallel training data."
                    },
                    {
                        "id": 293,
                        "string": "We conduct experiments on two review datasets."
                    },
                    {
                        "id": 294,
                        "string": "Experimental results show that our method substantially outperforms the state-of-the-art systems, especially in terms of semantic preservation."
                    },
                    {
                        "id": 295,
                        "string": "For future work, we would like to explore a fine-grained version of sentiment-to-sentiment translation that not only reverses sentiment, but also changes the strength of sentiment."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 26
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 27,
                        "end": 48
                    },
                    {
                        "section": "Cycled Reinforcement Learning for",
                        "n": "3",
                        "start": 49,
                        "end": 52
                    },
                    {
                        "section": "Overview",
                        "n": "3.1",
                        "start": 53,
                        "end": 59
                    },
                    {
                        "section": "Neutralization Module",
                        "n": "3.2",
                        "start": 60,
                        "end": 93
                    },
                    {
                        "section": "Emotionalization Module",
                        "n": "3.3",
                        "start": 94,
                        "end": 107
                    },
                    {
                        "section": "Cycled Reinforcement Learning",
                        "n": "3.4",
                        "start": 108,
                        "end": 127
                    },
                    {
                        "section": "Reward",
                        "n": "3.4.1",
                        "start": 128,
                        "end": 133
                    },
                    {
                        "section": "Experiment",
                        "n": "4",
                        "start": 134,
                        "end": 137
                    },
                    {
                        "section": "Unpaired Datasets",
                        "n": "4.1",
                        "start": 138,
                        "end": 148
                    },
                    {
                        "section": "Training Details",
                        "n": "4.2",
                        "start": 149,
                        "end": 154
                    },
                    {
                        "section": "Baselines",
                        "n": "4.3",
                        "start": 155,
                        "end": 162
                    },
                    {
                        "section": "Evaluation Metrics",
                        "n": "4.4",
                        "start": 163,
                        "end": 165
                    },
                    {
                        "section": "Automatic Evaluation",
                        "n": "4.4.1",
                        "start": 166,
                        "end": 175
                    },
                    {
                        "section": "Human Evaluation",
                        "n": "4.4.2",
                        "start": 176,
                        "end": 188
                    },
                    {
                        "section": "Experimental Results",
                        "n": "4.5",
                        "start": 189,
                        "end": 258
                    },
                    {
                        "section": "Incremental Analysis",
                        "n": "4.6",
                        "start": 259,
                        "end": 285
                    },
                    {
                        "section": "Error Analysis",
                        "n": "4.7",
                        "start": 286,
                        "end": 291
                    },
                    {
                        "section": "Conclusions and Future Work",
                        "n": "5",
                        "start": 292,
                        "end": 295
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1309-Figure1-1.png",
                        "caption": "Figure 1: An illustration of the two modules. Lower: The neutralization module removes emotional words and extracts non-emotional semantic information. Upper: The emotionalization module adds sentiment to the semantic content. The proposed self-attention based sentiment classifier is used to guide the pre-training.",
                        "page": 2,
                        "bbox": {
                            "x1": 80.64,
                            "x2": 278.4,
                            "y1": 62.879999999999995,
                            "y2": 299.03999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1309-Table1-1.png",
                        "caption": "Table 1: Automatic evaluations of the proposed method and baselines. ACC evaluates sentiment transformation. BLEU evaluates content preservation. G-score is the geometric mean of ACC and BLEU.",
                        "page": 5,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 522.24,
                            "y1": 62.879999999999995,
                            "y2": 147.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1309-Table4-1.png",
                        "caption": "Table 4: Performance of key components in the proposed approach. “NM” denotes the neutralization module. “Cycled RL” represents cycled reinforcement learning.",
                        "page": 7,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 284.15999999999997,
                            "y1": 62.879999999999995,
                            "y2": 167.04
                        }
                    },
                    {
                        "filename": "../figure/image/1309-Table5-1.png",
                        "caption": "Table 5: Analysis of the neutralization module. Words in red are removed by the neutralization module.",
                        "page": 7,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 527.04,
                            "y1": 62.879999999999995,
                            "y2": 164.16
                        }
                    },
                    {
                        "filename": "../figure/image/1309-Table2-1.png",
                        "caption": "Table 2: Human evaluations of the proposed method and baselines. Sentiment evaluates sentiment transformation. Semantic evaluates content preservation.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 288.0,
                            "y1": 62.879999999999995,
                            "y2": 147.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1309-Table3-1.png",
                        "caption": "Table 3: Examples generated by the proposed approach and baselines on the Yelp dataset. The two baselines change not only the polarity of examples, but also the semantic content. In comparison, our approach changes the sentiment of sentences with higher semantic similarity.",
                        "page": 6,
                        "bbox": {
                            "x1": 310.56,
                            "x2": 520.3199999999999,
                            "y1": 62.879999999999995,
                            "y2": 435.35999999999996
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-71"
        },
        {
            "slides": {
                "0": {
                    "title": "Motivation",
                    "text": [
                        "- Getting manually labeled data in each domain for sentiment analysis is always an expensive and a time consuming task, cross-domain sentiment analysis provides a solution.",
                        "- However, polarity orientation (positive or negative) and the significance of a word to express an opinion often differ from one domain to another.",
                        "Changing Significance: Entertaining, boring, one-note, etc. are classification in the movie domain. significant for",
                        "Changing Polarity: Unpredictable plot of a movie //Positive sentiment",
                        "Unpredictable behaviour of a machine //Negative sentiment"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Problem Definition",
                    "text": [
                        "Significant Consistent Polarity (SCP) words represent the transferable (usable) information across domains.",
                        "We present an approach based on test and cosine-similarity between context vector of words to identify polarity preserving significant words across domains.",
                        "Furthermore, we show that a weighted ensemble of the classifiers enhances the cross-domain classification performance."
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Technique Find SCP",
                    "text": [
                        "Significant Consistent Polarity (SCP): S T",
                        "//Transferable information from the source (S) to the target (T) for cross-domain SA.",
                        "S: Significant words with their polarity orientation in the labeled source domain: 2 test",
                        "H0 : unpredictable has equal distribution in the positive and negative corpora",
                        "Ha : unpredictable has significantly different count in either positive or negative corpus",
                        "If X2 score is greater than",
                        "Probability of the observed value given null hypothesis is true is less than",
                        "=> Reject the Null hypothesis",
                        "=> unpredictable has occurred significantly more often in one of the class with a 2 score of",
                        "CwP > CwN , hence unpredictable is po sitiveraksha.sharma1@tcs.com"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Technique Find SCP 2",
                    "text": [
                        "T: Significant words with their polarity orientation in the unlabeled target domain:",
                        "Significance: NormalizedCountt(Significants(w)) > Significantt(w)",
                        "Note: We construct a 100 dimensional vector for each candidate word from the unlabeled target domain data.",
                        "Significant Consistent Polarity (SCP): S T",
                        "//Transferable information from the source to the target for cross-domain SA."
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Example Inferred polarity orientation in the Target Domain",
                    "text": [
                        "Cosine-similarity score with the Pos-pivot (great) and Neg-pivot (bad), and inferred polarity orientation of words in the movie domain."
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
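A minimal sketch of this pivot-based inference, assuming `vectors` maps words to the 100-dimensional context vectors mentioned on the previous slide:

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def infer_polarity(word, vectors):
    # a target-domain word inherits the polarity of the closer pivot
    sim_pos = cosine(vectors[word], vectors["great"])  # Pos-pivot
    sim_neg = cosine(vectors[word], vectors["bad"])    # Neg-pivot
    return "positive" if sim_pos > sim_neg else "negative"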
                "5": {
                    "title": "F score for SCP words identification task",
                    "text": [
                        "Available at: http://www.cs.jhu.edu/~mdredze/datasets/sentiment/ind ex2.html",
                        "Gold standard SCP words: Application of test in both the domains considering target domain is also labeled gives us gold standard SCP words from the corpus. No manual annotation.",
                        "SCL: Structured Correspondence Learning (Bhatt et al., 2015)"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "6": {
                    "title": "Domain Adaptation Algorithm",
                    "text": [
                        "Cs(exampleDoc) = -0.07 (wrong prediction, negative)",
                        "Ct(exampleDoc) = 0.33 (correct prediction, positive)"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "7": {
                    "title": "Cross domain Results",
                    "text": [
                        "Sys1 Sys2 Sys3 Sys4 Sys5 Sys6",
                        "System Name: Transferred Info",
                        "System-4: System-1 + iterations",
                        "We obtained a strong positive",
                        "cross-domain between and accuracy"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                }
            },
            "paper_title": "Identifying Transferable Information Across Domains for Cross-domain Sentiment Classification",
            "paper_id": "1310",
            "paper": {
                "title": "Identifying Transferable Information Across Domains for Cross-domain Sentiment Classification",
                "abstract": "Getting manually labeled data in each domain is always an expensive and a time consuming task. Cross-domain sentiment analysis has emerged as a demanding concept where a labeled source domain facilitates a sentiment classifier for an unlabeled target domain. However, polarity orientation (positive or negative) and the significance of a word to express an opinion often differ from one domain to another domain. Owing to these differences, crossdomain sentiment classification is still a challenging task. In this paper, we propose that words that do not change their polarity and significance represent the transferable (usable) information across domains for cross-domain sentiment classification. We present a novel approach based on χ 2 test and cosine-similarity between context vector of words to identify polarity preserving significant words across domains. Furthermore, we show that a weighted ensemble of the classifiers enhances the cross-domain classification performance.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The choice of the words to express an opinion depends on the domain as users often use domainspecific words (Qiu et al., 2009; ."
                    },
                    {
                        "id": 1,
                        "string": "For example, entertaining and boring are frequently used in the movie domain to express an opinion; however, finding these words in the electronics domain is rare."
                    },
                    {
                        "id": 2,
                        "string": "Moreover, there are words which are likely to be used across domains in the same proportion, but may change their polarity orientation from one domain to another (Choi et al., 2009) ."
                    },
                    {
                        "id": 3,
                        "string": "For example, a word like unpredictable is positive in the movie domain (un-predictable plot), but negative in the automobile domain (unpredictable steering)."
                    },
                    {
                        "id": 4,
                        "string": "Such a polarity changing word should be assigned positive orientation in the movie domain and negative orientation in the automobile domain."
                    },
                    {
                        "id": 5,
                        "string": "1 Due to these differences across domains, a supervised algorithm trained on a labeled source domain, does not generalize well on an unlabeled target domain and the cross-domain performance degrades."
                    },
                    {
                        "id": 6,
                        "string": "Generally, supervised learning algorithms have to be re-trained from scratch on every new domain using the manually annotated review corpus (Pang et al., 2002; Kanayama and Nasukawa, 2006; Pang and Lee, 2008; Esuli and Sebastiani, 2005; Breck et al., 2007; Li et al., 2009; Prabowo and Thelwall, 2009; Taboada et al., 2011; Rosenthal et al., 2014) ."
                    },
                    {
                        "id": 7,
                        "string": "This is not practical as there are numerous domains and getting manually annotated data for every new domain is an expensive and time consuming task (Bhattacharyya, 2015) ."
                    },
                    {
                        "id": 8,
                        "string": "On the other hand, domain adaptation techniques work in contrast to traditional supervised techniques on the principle of transferring learned knowledge across domains (Blitzer et al., 2007; Pan et al., 2010; Bhatt et al., 2015) ."
                    },
                    {
                        "id": 9,
                        "string": "The existing transfer learning based domain adaptation algorithms for cross-domain classification have generally been proven useful in reducing the labeled data requirement, but they do not consider words like unpredictable that change polarity orientation across domains."
                    },
                    {
                        "id": 10,
                        "string": "Transfer (reuse) of changing polarity words affects the cross-domain performance negatively."
                    },
                    {
                        "id": 11,
                        "string": "Therefore, one cannot use transfer learning as the proverbial hammer, rather one needs to gauge what to transfer from the source domain to the target domain."
                    },
                    {
                        "id": 12,
                        "string": "In this paper, we propose that the words which are equally significant with a consistent polarity across domains represent the usable information for cross-domain sentiment analysis."
                    },
                    {
                        "id": 13,
                        "string": "χ 2 is a popularly used and reliable statistical test to identify significance and polarity of a word in an annotated corpus (Oakes et al., 2001; Al-Harbi et al., 2008; Cheng and Zhulyn, 2012; Sharma and Bhattacharyya, 2013) ."
                    },
                    {
                        "id": 14,
                        "string": "However, for an unlabeled corpus no such statistical technique is applicable."
                    },
                    {
                        "id": 15,
                        "string": "Therefore, identification of words which are significant with a consistent polarity across domains is a non-trivial task."
                    },
                    {
                        "id": 16,
                        "string": "In this paper, we present a novel technique based on χ 2 test and cosine-similarity between context vector of words to identify Significant Consistent Polarity (SCP) words across domains."
                    },
                    {
                        "id": 17,
                        "string": "2 The major contribution of this research is as follows."
                    },
                    {
                        "id": 18,
                        "string": "1."
                    },
                    {
                        "id": 19,
                        "string": "Extracting significant consistent polarity words across domains: A technique which exploits cosine-similarity between context vector of words and χ 2 test is used to identify SCP words across labeled source and unlabeled target domains."
                    },
                    {
                        "id": 20,
                        "string": "2."
                    },
                    {
                        "id": 21,
                        "string": "An ensemble-based adaptation algorithm: A classifier (C s ) trained on SCP words in the labeled source domain acts as a seed to initiate a classifier (C t ) on the target specific features."
                    },
                    {
                        "id": 22,
                        "string": "These classifiers are then combined in a weighted ensemble to further enhance the cross-domain classification performance."
                    },
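A hedged sketch of the weighted ensemble idea; the signed scores reuse the slide example earlier in this file (C_s = -0.07, C_t = 0.33), but the equal weighting is illustrative, not the paper's exact rule:

def ensemble_score(c_s_score, c_t_score, w_s=0.5, w_t=0.5):
    # positive combined score -> positive label, negative -> negative label
    return w_s * c_s_score + w_t * c_t_score

print(ensemble_score(-0.07, 0.33))   # 0.13 > 0, so the ensemble recovers the correct positive label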
                    {
                        "id": 23,
                        "string": "Our results show that our approach gives a statistically significant improvement over Structured Correspondence Learning (SCL) (Bhatt et al., 2015) and common unigrams in identification of transferable words, which eventually facilitates a more accurate sentiment classifier in the target domain."
                    },
                    {
                        "id": 24,
                        "string": "The road-map for rest of the paper is as follows."
                    },
                    {
                        "id": 25,
                        "string": "Section 2 describes the related work."
                    },
                    {
                        "id": 26,
                        "string": "Section 3 describes the extraction of the SCP and the ensemble-based adaptation algorithm."
                    },
                    {
                        "id": 27,
                        "string": "Section 4 elaborates the dataset and the experimental protocol."
                    },
                    {
                        "id": 28,
                        "string": "Section 5 presents the results and section 6 reports the error analysis."
                    },
                    {
                        "id": 29,
                        "string": "Section 7 concludes the paper."
                    },
                    {
                        "id": 30,
                        "string": "3 Related Work The most significant efforts in the learning of transferable knowledge for cross-domain text classification are Structured Correspondence Learning (SCL) (Blitzer et al., 2007) and Structured Feature Alignment (SFA) (Pan et al., 2010) ."
                    },
                    {
                        "id": 31,
                        "string": "SCL aims to learn the co-occurrence between features from the two domains."
                    },
                    {
                        "id": 32,
                        "string": "It starts with learning pivot features that occur frequently in both the domains."
                    },
                    {
                        "id": 33,
                        "string": "It models correlation between pivots and all other features by training linear predictors to predict presence of pivot features in the unlabeled target domain data."
                    },
                    {
                        "id": 34,
                        "string": "SCL has shown significant improvement over a baseline (shift-unaware) model."
                    },
                    {
                        "id": 35,
                        "string": "SFA uses some domain-independent words as a bridge to construct a bipartite graph to model the co-occurrence relationship between domainspecific words and domain-independent words."
                    },
                    {
                        "id": 36,
                        "string": "Our approach also exploits the concept of cooccurrence (Pan et al., 2010) , but we measure the co-occurrence in terms of similarity between context vector of words, unlike SCL and SFA, which literally look for the co-occurrence of words in the corpus."
                    },
                    {
                        "id": 37,
                        "string": "The use of context vector of words in place of words helps to overcome the data sparsity problem ."
                    },
                    {
                        "id": 38,
                        "string": "Domain adaptation for sentiment classification has been explored by many researchers (Jiang and Zhai, 2007; Ji et al., 2011; Saha et al., 2011; Glorot et al., 2011; Zhou et al., 2014; Bhatt et al., 2015) ."
                    },
                    {
                        "id": 39,
                        "string": "Most of the works have focused on learning a shared low dimensional representation of features that can be generalized across different domains."
                    },
                    {
                        "id": 40,
                        "string": "However, none of the approaches explicitly analyses significance and polarity of words across domains."
                    },
                    {
                        "id": 41,
                        "string": "On the other hand, Glorot et al., (2011) proposed a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion."
                    },
                    {
                        "id": 42,
                        "string": "Zhou et al., (2014) also proposed a deep learning approach to learn a feature mapping between cross-domain heterogeneous features as well as a better feature representation for mapped data to reduce the bias issue caused by the crossdomain correspondences."
                    },
                    {
                        "id": 43,
                        "string": "Though deep learning based approaches perform reasonably good, they don't perform explicit identification and visualization of transferable features across domains unlike SFA and SCL, which output a set of words as transferable (reusable) features."
                    },
                    {
                        "id": 44,
                        "string": "Our approach explicitly determines the words which are equally significant with a consistent polarity across source and target domains."
                    },
                    {
                        "id": 45,
                        "string": "Our results show that the use of SCP words as features identified by our approach leads to a more accurate cross-domain sentiment classifier in the unlabeled target domain."
                    },
                    {
                        "id": 46,
                        "string": "Approach: Cross-domain Sentiment Classification The proposed approach identifies words which are equally significant for sentiment classification with a consistent polarity across source and target domains."
                    },
                    {
                        "id": 47,
                        "string": "These Significant Consistent Polarity (SCP) words make a set of transferable knowledge from the labeled source domain to the unlabeled target domain for cross-domain sentiment analysis."
                    },
                    {
                        "id": 48,
                        "string": "The algorithm further adapts to the unlabeled target domain by learning target domain specific features."
                    },
                    {
                        "id": 49,
                        "string": "The following sections elaborate SCP features extraction (3.1) and the ensemblebased cross-domain adaptation algorithm (3.2)."
                    },
                    {
                        "id": 50,
                        "string": "Extracting SCP Features The words which are not significant for classification in the labeled source domain, do not transfer useful knowledge to the target domain through a supervised classifier trained in the source domain."
                    },
                    {
                        "id": 51,
                        "string": "Moreover, words that are significant in both the domains, but have different polarity orientation transfer the wrong information to the target domain through a supervised classifier trained in the labeled source domain, which also downgrade the cross-domain performance."
                    },
                    {
                        "id": 52,
                        "string": "Our algorithm identifies the significance and the polarity of all the words individually in their respective domains."
                    },
                    {
                        "id": 53,
                        "string": "Then the words which are significant in both the domains with the consistent polarity orientation are used to initiate the crossdomain adaptation algorithm."
                    },
                    {
                        "id": 54,
                        "string": "The following sections elaborate how the significance and the polarity of the words are obtained in the labeled source and the unlabeled target domains."
                    },
                    {
                        "id": 55,
                        "string": "Extracting Significant Words with the Polarity Orientation from the Labeled Source Domain Since we have a polarity annotated dataset in the source domain, a statistical test like χ 2 test can be applied to find the significance of a word in the corpus for sentiment classification (Cheng and Zhulyn, 2012; Zheng et al., 2004) ."
                    },
                    {
                        "id": 56,
                        "string": "We have used goodness of fit chi 2 test with equal number of reviews in positive and negative corpora."
                    },
                    {
                        "id": 57,
                        "string": "This test is generally used to determine whether sample data is consistent with a null hypothesis."
                    },
                    {
                        "id": 58,
                        "string": "4 Here, the null hypothesis is that the word is equally used in the positive and the negative corpora."
                    },
                    {
                        "id": 59,
                        "string": "The χ 2 test is formulated as follows: χ 2 (w) = ((c w p − µ w ) 2 + (c w n − µ w ) 2 )/µ w (1) Where, c w p is the observed count of a word w in the positive documents and c w n is the observed count in the negative documents."
                    },
                    {
                        "id": 60,
                        "string": "µ w represents an average of the word's count in the positive and the negative documents."
                    },
                    {
                        "id": 61,
                        "string": "Here, µ w is the expected count or the value of the null-hypothesis."
                    },
                    {
                        "id": 62,
                        "string": "There is an inverse relation between χ 2 value and the p-value which is probability of the data given null hypothesis is true."
                    },
                    {
                        "id": 63,
                        "string": "In such a case where a word results in a pvalue smaller than the critical p-value (0.05), we reject the null-hypothesis."
                    },
                    {
                        "id": 64,
                        "string": "Consequently, we assume that the word w belongs to a particular class (positive or negative) in the data, hence it is a significant word for classification (Sharma and Bhattacharyya, 2013) ."
                    },
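                    {
                        "id": "64a",
                        "string": "A minimal Python sketch (assuming scipy) of the goodness-of-fit chi-square test described above; the function and variable names are illustrative assumptions, not the authors' code:\n\nfrom scipy.stats import chi2\n\ndef chi_square_significance(c_pos, c_neg, alpha=0.05):\n    # Expected count under the null hypothesis that the word is used\n    # equally in the positive and the negative corpora.\n    mu = (c_pos + c_neg) / 2.0\n    stat = ((c_pos - mu) ** 2 + (c_neg - mu) ** 2) / mu  # Equation (1)\n    p_value = chi2.sf(stat, df=1)  # one degree of freedom\n    return stat, p_value, p_value < alpha  # significant if p < 0.05"
                    },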
                    {
                        "id": 65,
                        "string": "Polarity of Words in the Labeled Source Domain: Chi-square test substantiates the statistically significant association of a word with a class label."
                    },
                    {
                        "id": 66,
                        "string": "Based on this association we assign a polarity orientation to a word in the domain."
                    },
                    {
                        "id": 67,
                        "string": "In other words, if a word is found significant by χ 2 test, then the exact class of the word is determined by comparing c w p and c w n ."
                    },
                    {
                        "id": 68,
                        "string": "For instance, if c w p is higher than c w n , then the word is positive, else negative."
                    },
                    {
                        "id": 69,
                        "string": "Extracting Significant Words with the Polarity Orientation from the Unlabeled Target Domain Target domain data is unlabeled and hence, χ 2 test cannot be used to find significance of the words."
                    },
                    {
                        "id": 70,
                        "string": "However, to obtain SCP words across domains, we take advantage of the fact that we have to identify significance of only those words in the target domain which are already proven to be significant in the source domain."
                    },
                    {
                        "id": 71,
                        "string": "We presume that a word which is significant in the source domain as per χ 2 test and occurs with a frequency greater than a certain threshold (θ) in the target domain is significant in the target domain also."
                    },
                    {
                        "id": 72,
                        "string": "count t (signif icant s (w)) > θ ⇒ signif icant t (w) (2) Equation (2) formulates the significance test in the unlabeled target (t) domain."
                    },
                    {
                        "id": 73,
                        "string": "Here, function signif icant s assures the significance of the word w in the labeled source (s) domain and count t gives the normalized count of the w in t. 5 χ 2 test has one key assumption that the expected value of an observed variable should not be less than 5 to be significant."
                    },
                    {
                        "id": 74,
                        "string": "Considering this assumption as a base, we fix the value of θ as 10."
                    },
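                    {
                        "id": "74a",
                        "string": "A minimal sketch of the significance check in Equation (2); the significant_source set and the normalized-count input are hypothetical names:\n\ndef significant_in_target(word, significant_source, count_t, theta=10):\n    # Equation (2): count_t(significant_s(w)) > theta => significant_t(w).\n    # count_t is assumed to be the normalized count of w in the target\n    # corpus; theta = 10 follows the paper's choice.\n    return word in significant_source and count_t > theta"
                    },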
                    {
                        "id": 75,
                        "string": "6 Polarity of Words in the Unlabeled Target Domain: Generally, in a polar corpus, a positive word occurs more frequently in context of other positive words, while a negative word occurs in context of other negative words ."
                    },
                    {
                        "id": 76,
                        "string": "7 Based on this hypothesis, we explore the contextual information of a word that is captured well by its context vector to assign polarity to words in the target domain (Rill et al., 2012; Rong, 2014) ."
                    },
                    {
                        "id": 77,
                        "string": "Mikolov et al., (2013) showed that similarity between context vector of words in vicinity such as 'go' and 'to' is higher compared to distant words or words that are not in the neighborhood of each other."
                    },
                    {
                        "id": 78,
                        "string": "Here, the observed concept is that if a word is positive, then its context vector learned from the polar review corpus will give higher cosine-similarity with a known positive polarity word in comparison to a known negative polarity word or vice versa."
                    },
                    {
                        "id": 79,
                        "string": "Therefore, based on the cosine-similarity scores we can assign the label of the known polarity word to the unknown polarity word."
                    },
                    {
                        "id": 80,
                        "string": "We term known polarity words as Positivepivot and Negative-pivot."
                    },
                    {
                        "id": 81,
                        "string": "Context Vector Generation: To compute context vector (conV ec) of a word (w), we have used publicly available word2vec toolkit with the skip-gram model (Mikolov et al., 2013) ."
                    },
                    {
                        "id": 82,
                        "string": "8 In this model, each word's Huffman code is used as an input to a log-linear classifier with a continuous projection layer and words within a given window are predicted (Faruqui et al., 2014) ."
                    },
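                    {
                        "id": "82a",
                        "string": "A minimal sketch (assuming gensim) of context-vector generation with the skip-gram model; the corpus variable and all parameters other than the 100 dimensions are illustrative assumptions (older gensim versions use size instead of vector_size):\n\nfrom gensim.models import Word2Vec\n\n# Hypothetical tokenized reviews from the unlabeled target domain.\nsentences = [review.split() for review in target_domain_reviews]\nmodel = Word2Vec(sentences, vector_size=100, sg=1)  # sg=1 selects skip-gram\nconVec = lambda w: model.wv[w]  # 100-dimensional context vector of a word"
                    },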
                    {
                        "id": 83,
                        "string": "We construct a 100 dimensional vector for each can-5 Normalized count of w in t shows the proportion of occurrences of w in t. 6 We tried with smaller values of theta also, but they were not found as effective as theta value of 10 for significant words identification."
                    },
                    {
                        "id": 84,
                        "string": "7 For example, 'excellent' will be used more often in positive reviews in comparison to negative reviews, hence, it would have more positive words in its context."
                    },
                    {
                        "id": 85,
                        "string": "Likewise, 'terrible' will be used more frequently in negative reviews in comparison to positive reviews, hence, it would have more negative words in its context."
                    },
                    {
                        "id": 86,
                        "string": "8 Available at: https://radimrehurek.com/ gensim/models/word2vec.html didate word from the unlabeled target domain data."
                    },
                    {
                        "id": 87,
                        "string": "The decision method given in Equation 3 defines the polarity assignment to the unknown polarity words of the target domain."
                    },
                    {
                        "id": 88,
                        "string": "If a word w gives a higher cosine-similarity with the PosPivot (Positive-pivot) than the NegPivot (Negative-pivot), the decision method assigns the positive polarity to the word w, else negative polarity to the word w. If(cosine(conV ec(w), conV ec(PosPivot)) > cosine(conV ec(w), conV ec(NegPivot))) ⇒ Positive If(cosine(conV ec(w), conV ec(PosPivot)) < cosine(conV ec(w), conV ec(NegPivot))) ⇒ Negative (3) Pivot Selection Method: We empirically observed that a polar word which has the highest frequency in the corpus gives more coverage to estimate the polarity orientation of other words while using context vector."
                    },
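                    {
                        "id": "88a",
                        "string": "A minimal sketch of the decision method in Equation (3); conVec is the context-vector lookup sketched above, and the pivots 'great' and 'poor' follow the Table 1 caption:\n\nimport numpy as np\n\ndef cosine(u, v):\n    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef polarity(w, pos_pivot='great', neg_pivot='poor'):\n    # Equation (3): assign the label of the pivot with the higher similarity.\n    if cosine(conVec(w), conVec(pos_pivot)) > cosine(conVec(w), conVec(neg_pivot)):\n        return 'Positive'\n    return 'Negative'"
                    },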
                    {
                        "id": 89,
                        "string": "Essentially, the frequent occurrence of the word in the corpus allows it to be in context of other words frequently."
                    },
                    {
                        "id": 90,
                        "string": "Therefore a polar word having the highest frequency in the target domain is observed to be more accurate as pivot for identification of polarity of input words."
                    },
                    {
                        "id": 91,
                        "string": "9 Table 1 shows the examples of a few words in the electronics domain whose polarity orientation is derived based on the similarity scores obtained with PosPivot and NegPivot words in the electronics domain."
                    },
                    {
                        "id": 92,
                        "string": "Transferable Knowledge: The proposed algorithm uses the above mentioned techniques to identify the significance and the polarity of words in the labeled source data (cf."
                    },
                    {
                        "id": 93,
                        "string": "Section 3.1.1) and the unlabeled target data (cf."
                    },
                    {
                        "id": 94,
                        "string": "Section 3.1.2)."
                    },
                    {
                        "id": 95,
                        "string": "The words which are found significant in both the domains with the same polarity orientation form a set of SCP features for cross-domain sentiment classification."
                    },
                    {
                        "id": 96,
                        "string": "The weights learned for the SCP features in the labeled source domain by the classification algorithm can be reused for sentiment classification in the unlabeled target domain as SCP features have consistent impacts in both the domains."
                    },
                    {
                        "id": 97,
                        "string": "Word Great  Apart from the transferable SCP words (Obtained in Section 3.1), each domain has specific discriminating words which can be discovered only from that domain data."
                    },
                    {
                        "id": 98,
                        "string": "The proposed cross-domain adaptation approach (Algorithm 1) attempts to learn such domain specific features from the target domain using a classifier trained on SCP words in the source domain."
                    },
                    {
                        "id": 99,
                        "string": "An ensemble of the classifiers trained on the SCP features (transferred from the source) and domain specific features (learned within the target) further enhances the cross-domain performance."
                    },
                    {
                        "id": 100,
                        "string": "Table 2 lists the notations used in the algorithm."
                    },
                    {
                        "id": 101,
                        "string": "The working of the cross-domain adaptation algorithm is as follows: Poor Polarity 1."
                    },
                    {
                        "id": 102,
                        "string": "Identify SCP features from the labeled source and the unlabeled target domain data."
                    },
                    {
                        "id": 103,
                        "string": "2."
                    },
                    {
                        "id": 104,
                        "string": "A SVM based classifier is trained on SCP words as features using labeled source domain data, named as C s ."
                    },
                    {
                        "id": 105,
                        "string": "3."
                    },
                    {
                        "id": 106,
                        "string": "The classifier C s is used to predict the labels for the unlabeled target domain instances D u t , and the confidently predicted instances of D u t form a set of pseudo labeled instances R n t ."
                    },
                    {
                        "id": 107,
                        "string": "4."
                    },
                    {
                        "id": 108,
                        "string": "A SVM based classifier is trained on the pseudo labeled target domain instances R n t , using unigrams in R n t as features to include the target specific words, this classifier is named as C t ."
                    },
                    {
                        "id": 109,
                        "string": "Finally, a Weighted Sum Model (WSM) of C s and C t gives a classifier in the target domain."
                    },
                    {
                        "id": 110,
                        "string": "The confidence in the prediction of D u t is measured in terms of the classification-score of the document, i.e., the distance of the input document from the separating hyper-plane given by the SVM classifier (Hsu et al., 2003) ."
                    },
                    {
                        "id": 111,
                        "string": "The top n confidently predicted pseudo labeled instances (R n t ) are used to train classifier C t , where n depends on a threshold that is empirically set to | ± 0.2|."
                    },
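                    {
                        "id": "111a",
                        "string": "A minimal sketch (assuming scikit-learn) of steps 2-4: train C_s on SCP features, then keep only confidently predicted target instances as pseudo-labels; all matrix and array names are hypothetical placeholders:\n\nimport numpy as np\nfrom sklearn.svm import LinearSVC\n\nC_s = LinearSVC().fit(X_source_scp, y_source)   # step 2: classifier on SCP features\nscores = C_s.decision_function(X_target_scp)    # signed distance to the hyperplane\nconfident = np.abs(scores) > 0.2                # empirical threshold |±0.2|\npseudo_labels = (scores[confident] > 0).astype(int)  # step 3: pseudo-labeled set R_t^n\nC_t = LinearSVC().fit(X_target_unigrams[confident], pseudo_labels)  # step 4"
                    },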
                    {
                        "id": 112,
                        "string": "10 The classifier C s trained on the SCP features (transferred knowledge) from the source domain and the classifier C t trained on self-discovered target specific features from the pseudo labeled target domain instances bring in complementary information from the two domains."
                    },
                    {
                        "id": 113,
                        "string": "Therefore, combining C s and C t in a weighted ensemble (WSM) further enhances the cross-domain performance."
                    },
                    {
                        "id": 114,
                        "string": "Algorithm 1 gives the pseudo code of the proposed adaptation approach."
                    },
                    {
                        "id": 115,
                        "string": "Input: D l s = {r 1 s , r 2 s , r 3 s , ....r j s }, , (2013) have shown that 350 to 400 labeled documents are required to get a high accuracy classifier in a domain using supervised classification techniques, but beyond 400 labeled documents there is not much improvement in the classification accuracy."
                    },
                    {
                        "id": 116,
                        "string": "Hence, threshold on classification score is set such that it can give a sufficient number of documents for supervised classification."
                    },
                    {
                        "id": 117,
                        "string": "Threshold |±0.2| gives documents between 350 to 400. rors produced by the individual classifier."
                    },
                    {
                        "id": 118,
                        "string": "The formulation of WSM is given in step-6 of the Algorithm 1."
                    },
                    {
                        "id": 119,
                        "string": "If C s has wrongly predicted a document at boundary point and C t has predicted the same document confidently, then weighted sum of C s and C t predicts the document correctly or vice versa."
                    },
                    {
                        "id": 120,
                        "string": "For example, a document is classified by C s as negative (wrong prediction) with a classification-score of −0.07, while the same document is classified by C t as positive (correct prediction) with a classification-score of 0.33, the WSM of C s and C t will classify the document as positive with a classification-score of 0.12 (Equation 4)."
                    },
                    {
                        "id": 121,
                        "string": "(4) Here 0.765 and 0.712 are the weights W s and W t to the classifiers C s and C t respectively."
                    },
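                    {
                        "id": "121a",
                        "string": "A minimal sketch of the weighted sum in Equation (4), assuming the sum is normalized by W_s + W_t; with that assumption it reproduces the worked example above, (0.765 × −0.07 + 0.712 × 0.33) / (0.765 + 0.712) ≈ 0.12:\n\ndef wsm_score(score_s, score_t, W_s=0.765, W_t=0.712):\n    # Weighted sum of the two classifiers' classification scores;\n    # a positive result means the WSM predicts the positive class.\n    return (W_s * score_s + W_t * score_t) / (W_s + W_t)\n\nprint(wsm_score(-0.07, 0.33))  # ~0.12 -> positive"
                    },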
                    {
                        "id": 122,
                        "string": "Weights to the Classifiers in WSM: The weights W s and W t are the classification accuracies obtained by C s and C t respectively on the crossvalidation data from the target domain."
                    },
                    {
                        "id": 123,
                        "string": "The weights W s and W t allow C s and C t to participate in the WSM in proportion of their accuracy on the cross-validation data."
                    },
                    {
                        "id": 124,
                        "string": "This restriction facilitates the domination of the classifier which is more accurate."
                    },
                    {
                        "id": 125,
                        "string": "D u t = {r 1 t , r 2 t , r 3 t , ....r k t }, Vs = {w 1 s , w 2 s , w 3 s , ....w p s }, Vt = {w 1 t , w 2 t , w 3 t , ....w q t } Output: Sentiment Dataset & Experimental Protocol In this paper, we show comparison between SCPbased domain adaptation (our approach) and SCLbased domain adaptation approach proposed by Bhatt el al."
                    },
                    {
                        "id": 126,
                        "string": "(2015) using four domains, viz., Electronics (E), Kitchen (K), Books (B), and DVD."
                    },
                    {
                        "id": 127,
                        "string": "11 We use SVM algorithm with linear kernel (Tong and Koller, 2002) to train a classifier in all the mentioned classification systems in the paper."
                    },
                    {
                        "id": 128,
                        "string": "To implement SVM algorithm, we have used the publicly available Python based Scikit-learn package (Pedregosa et al., 2011) ."
                    },
                    {
                        "id": 129,
                        "string": "12 Data in each domain is divided into three parts, viz., train (60%), validation (20%) and test (20%)."
                    },
                    {
                        "id": 130,
                        "string": "The SCP words are extracted from the training data."
                    },
                    {
                        "id": 131,
                        "string": "The weights W S and W t for the source and target classifiers are essentially accuracies obtained by C s and C t respectively on validation dataset from the target domain."
                    },
                    {
                        "id": 132,
                        "string": "We report the accuracy for all the systems on the test data."
                    },
                    {
                        "id": 133,
                        "string": "Table 3 shows the statistics of the dataset."
                    },
                    {
                        "id": 134,
                        "string": "(2015) is state-of-the-art for cross-domain sentiment analysis."
                    },
                    {
                        "id": 135,
                        "string": "On the other hand, common unigrams of the source and target are the most visible transferable information."
                    },
                    {
                        "id": 136,
                        "string": "13 Gold standard SCP words: Chi-square test gives us significance and polarity of the word in the corpus by taking into account the polarity labels of the reviews."
                    },
                    {
                        "id": 137,
                        "string": "Application of chi-square test in both the domains, considering that the target domain is also labeled, gives us gold standard SCP words."
                    },
                    {
                        "id": 138,
                        "string": "There is no manual annotation involved."
                    },
                    {
                        "id": 139,
                        "string": "F-score for SCP Words Identification Task: The set of SCP words represent the usable information across domains for cross-domain classification, hence we compare the F-score for the SCP words identification task obtained with our approach, SCL and common-unigrams in Figure  1 ."
                    },
                    {
                        "id": 140,
                        "string": "It demonstrates that our approach gives a huge improvement in the F-score over SCL and common unigrams for all the 12 pairs of the source and target domains."
                    },
                    {
                        "id": 141,
                        "string": "To measure the statistical significance of this improvement, we applied t-test on the F-score distribution obtained with our approach, SCL and common unigrams."
                    },
                    {
                        "id": 142,
                        "string": "t-test is a statistical significance test."
                    },
                    {
                        "id": 143,
                        "string": "It is used to determine whether two sets of data are significantly different or not."
                    },
                    {
                        "id": 144,
                        "string": "14 Our approach performs significantly better than SCL and common unigrams, while SCL performs better than common unigrams as per ttest."
                    },
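                    {
                        "id": "144a",
                        "string": "A minimal sketch (assuming scipy) of the t-test over the per-pair F-scores; a paired test is one reasonable choice since the two systems share the same 12 source-target pairs, and the arrays are placeholders:\n\nfrom scipy.stats import ttest_rel\n\n# F-scores of the two systems on the same 12 source-target pairs (placeholders).\nt_stat, p_value = ttest_rel(fscores_ours, fscores_scl)\nsignificantly_different = p_value < 0.05  # alpha = 0.05"
                    },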
                    {
                        "id": 145,
                        "string": "Comparison among C s , C t and WSM: Table 4 shows the comparison among classifiers obtained in the target domain using SCP given by our approach, SCL, common-unigrams, and gold standard SCP for electronics as the source and movie as the target domains."
                    },
                    {
                        "id": 146,
                        "string": "Since electronics and movie are two very dissimilar domains in terms of domain specific words, unlike, books and movie, getting a high accuracy classifier in the movie domain from the electronics domain is a challenging task (Pang et al., 2002) ."
                    },
                    {
                        "id": 147,
                        "string": "Therefore, in Table 4 results are reported with electronics as the source domain and movie as the target domain."
                    },
                    {
                        "id": 148,
                        "string": "15 In all four cases, there is difference in the transferred information from the source to the target, but the ensemblebased classification algorithm (Section 3.2) is the same."
                    },
                    {
                        "id": 149,
                        "string": "Table 4 depicts sentiment classification accuracy obtained with C s , C t and WSM."
                    },
                    {
                        "id": 150,
                        "string": "The weights W s and W t in WSM are normalized accuracies by C s and C t respectively on the validation set from the target domain."
                    },
                    {
                        "id": 151,
                        "string": "The fourth column (size) represents the feature set size."
                    },
                    {
                        "id": 152,
                        "string": "We observed that WSM gives the highest accuracy, which validates our assumption that a weighted sum of two classifiers is better than the performance of individual classifiers."
                    },
                    {
                        "id": 153,
                        "string": "The WSM accuracy obtained with SCP words given by our approach is comparable to the accuracy obtained with gold standard SCP words."
                    },
                    {
                        "id": 154,
                        "string": "The motivation of this research is to learn shared representation cognizant of significant and polarity changing words across domains."
                    },
                    {
                        "id": 155,
                        "string": "Hence, we report cross-domain classification accuracy obtained with three different types of shared representations (transferable knowledge), viz., common-unigrams, SCL and our approach."
                    },
                    {
                        "id": 156,
                        "string": "16 System-1, system-2 and system-3 in Table 5 show the final cross-domain sentiment classification accuracy obtained with WSM in the target domain 14 The detail about the test is available at: http://www."
                    },
                    {
                        "id": 157,
                        "string": "socialresearchmethods.net/kb/stat_t.php."
                    },
                    {
                        "id": 158,
                        "string": "15 The movie review dataset is a balanced corpus of 2000 reviews."
                    },
                    {
                        "id": 159,
                        "string": "Available at: http://www.cs.cornell."
                    },
                    {
                        "id": 160,
                        "string": "edu/people/pabo/movie-review-data/ 16 The reported accuracy is the ratio of correctly predicted documents to that of the total number of documents in the test dataset."
                    },
                    {
                        "id": 161,
                        "string": "Table 4 : Classification accuracy in % given by C s , C t and WSM with different feature sets for electronics as source and movie as target."
                    },
                    {
                        "id": 162,
                        "string": "for 12 pairs of source and target using commonunigrams, SCL and our approach respectively."
                    },
                    {
                        "id": 163,
                        "string": "System-1: This system considers commonunigrams of both the domains as shared representation."
                    },
                    {
                        "id": 164,
                        "string": "System-2: It differs from system-1 in the shared representation, which is learned using Structured Correspondence Learning (SCL) (Bhatt et al., 2015) to initiate the process."
                    },
                    {
                        "id": 165,
                        "string": "System-3: This system implements the proposed domain adaptation algorithm."
                    },
                    {
                        "id": 166,
                        "string": "Here, the shared representation is the SCP words and the ensemble-based domain adaptation algorithm (Section 3.2) gives the final classifier in the target domain."
                    },
                    {
                        "id": 167,
                        "string": "Table 5 depicts that the system-3 is better than system-1 and system-2 for all pairs, except K to B and B to D. For these two pairs, system-2 performs better than system-3, though the difference in accuracy is very low (below 1%)."
                    },
                    {
                        "id": 168,
                        "string": "To enhance the final accuracy in the target domain, Bhatt et al., (2015) performed iterations over the pseudo labeled target domain instances (R n t )."
                    },
                    {
                        "id": 169,
                        "string": "In each iteration, they obtained a new C t trained on increased number of pseudo labeled documents."
                    },
                    {
                        "id": 170,
                        "string": "This process is repeated till all the training instances of the target domain are considered."
                    },
                    {
                        "id": 171,
                        "string": "The C t obtained in the last iteration makes WSM with C s which is trained on the transferable features given by SCL."
                    },
                    {
                        "id": 172,
                        "string": "Bhatt et al., (2015) have shown that iteration-based domain adaptation technique is more effective than one-shot Table 6 : In-domain sentiment classification accuracy using significant words and unigrams."
                    },
                    {
                        "id": 173,
                        "string": "adaptation approaches."
                    },
                    {
                        "id": 174,
                        "string": "System-4, system-5, and system-6 in Table 5 incorporate the iterative process into system-1, system-2, and system-3 respectively."
                    },
                    {
                        "id": 175,
                        "string": "We observed the same trend after the inclusion of the iterative process also, as the SCPbased system-6 performed the best in all 12 cases."
                    },
                    {
                        "id": 176,
                        "string": "On the other hand, SCL-based system-5 performs better than the common-unigrams based system-4."
                    },
                    {
                        "id": 177,
                        "string": "Table 7 shows the results of significance test (ttest) performed on the accuracy distributions produced by the six different systems."
                    },
                    {
                        "id": 178,
                        "string": "The notice-able point is that the iterations over SCL (system-5) and our approach (system-6) narrow down the difference in the accuracy between system-2 and system-3 as system-2 and system-3 have a statistically significant difference in accuracy with the p-value of 0.039 (Row-4 of Table 7 ), but the difference between system-5 and system-6 is not statistically significant."
                    },
                    {
                        "id": 179,
                        "string": "Essentially, system-3 does not give much improvement with iterations, unlike system-2."
                    },
                    {
                        "id": 180,
                        "string": "In other words, addition of the iterative process with the shared representation given by SCL overcomes the errors introduced by SCL."
                    },
                    {
                        "id": 181,
                        "string": "On the other hand, SCP given by our approach were able to produce a less erroneous system in oneshot."
                    },
                    {
                        "id": 182,
                        "string": "Table 6 shows the in-domain sentiment classification accuracy obtained with unigrams and significant words as features considering labeled data in the domain."
                    },
                    {
                        "id": 183,
                        "string": "System-6 tries to equalize the in-domain accuracy obtained with unigrams."
                    },
                    {
                        "id": 184,
                        "string": "Table 7 : t-test (α = 0.05) results on the difference in accuracy produced by various systems (cf."
                    },
                    {
                        "id": 185,
                        "string": "Table 5)."
                    },
                    {
                        "id": 186,
                        "string": "To validate our assertion that polarity preserving significant words (SCP) across source and target domains make a less erroneous set of transferable knowledge from the source domain to the target domain, we computed Pearson productmoment correlation between F-score obtained for our approach (cf."
                    },
                    {
                        "id": 187,
                        "string": "Figure 1) and cross-domain accuracy obtained with SCP (System-3, cf."
                    },
                    {
                        "id": 188,
                        "string": "Table  5 )."
                    },
                    {
                        "id": 189,
                        "string": "We observed a strong positive correlation (r) of 0.78 between F-score and cross-domain accuracy."
                    },
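                    {
                        "id": "189a",
                        "string": "A minimal sketch (assuming scipy) of the Pearson product-moment correlation between the 12 F-scores (Figure 1) and the 12 system-3 accuracies (Table 5); the arrays are placeholders, and the paper reports r = 0.78:\n\nfrom scipy.stats import pearsonr\n\n# One F-score and one system-3 accuracy per source-target pair (placeholders).\nr, p = pearsonr(fscores_scp, accuracies_system3)  # r ~ 0.78 per the paper"
                    },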
                    {
                        "id": 190,
                        "string": "Essentially, an accurate set of SCP words positively stimulates an improved classifier in the unlabeled target domain."
                    },
                    {
                        "id": 191,
                        "string": "Error Analysis The pairs of domains which share a greater number of domain-specific words, result in a higher accuracy cross-domain classifier."
                    },
                    {
                        "id": 192,
                        "string": "For example, Electronics (E) and Kitchen (K) domains share many domain-specific words, hence pairing of such similar domains as the source and the target results into a higher accuracy classifier in the target domain."
                    },
                    {
                        "id": 193,
                        "string": "Table 5 shows that K→E outperforms B→E and D→E, and E→K outperforms B→K and D→K."
                    },
                    {
                        "id": 194,
                        "string": "On the other hand, DVD (D) and electronics are two very different domains unlike electronics and Kitchen, or DVD and books."
                    },
                    {
                        "id": 195,
                        "string": "The DVD dataset contains reviews about the music albums."
                    },
                    {
                        "id": 196,
                        "string": "This difference in types of reviews makes them to share less number of words."
                    },
                    {
                        "id": 197,
                        "string": "Table 8 shows the percent (%) of common words among the 4 domains."
                    },
                    {
                        "id": 198,
                        "string": "The percent of common unique words are common unique words divided by the summation of unique words in the domains individually."
                    },
                    {
                        "id": 199,
                        "string": "B  15  22  17  14  22  17   Table 8 : Common unique words between the domains in percent (%)."
                    },
                    {
                        "id": 200,
                        "string": "E -D E -K E -B D -K D -B K - Conclusion In this paper, we proposed that the Significant Consistent Polarity (SCP) words represent the transferable information from the labeled source domain to the unlabeled target domain for crossdomain sentiment classification."
                    },
                    {
                        "id": 201,
                        "string": "We showed a strong positive correlation of 0.78 between the SCP words identified by our approach and the sentiment classification accuracy achieved in the unlabeled target domain."
                    },
                    {
                        "id": 202,
                        "string": "Essentially, a set of less erroneous transferable features leads to a more accurate classification system in the unlabeled target domain."
                    },
                    {
                        "id": 203,
                        "string": "We have presented a technique based on χ 2 test and cosine-similarity between context vector of words to identify SCP words."
                    },
                    {
                        "id": 204,
                        "string": "Results show that the SCP words given by our approach represent more accurate transferable information in comparison to the Structured Correspondence Learning (SCL) algorithm and common-unigrams."
                    },
                    {
                        "id": 205,
                        "string": "Furthermore, we show that an ensemble of the classifiers trained on the SCP features and target specific features overcomes the errors of the individual classifiers."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 29
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 30,
                        "end": 45
                    },
                    {
                        "section": "Approach: Cross-domain Sentiment Classification",
                        "n": "3",
                        "start": 46,
                        "end": 49
                    },
                    {
                        "section": "Extracting SCP Features",
                        "n": "3.1",
                        "start": 50,
                        "end": 54
                    },
                    {
                        "section": "Extracting Significant Words with the Polarity Orientation from the Labeled Source Domain",
                        "n": "3.1.1",
                        "start": 55,
                        "end": 68
                    },
                    {
                        "section": "Extracting Significant Words with the Polarity Orientation from the Unlabeled Target Domain",
                        "n": "3.1.2",
                        "start": 69,
                        "end": 108
                    },
                    {
                        "section": "Finally, a Weighted Sum Model (WSM) of",
                        "n": "5.",
                        "start": 109,
                        "end": 124
                    },
                    {
                        "section": "Dataset & Experimental Protocol",
                        "n": "4",
                        "start": 125,
                        "end": 190
                    },
                    {
                        "section": "Error Analysis",
                        "n": "6",
                        "start": 191,
                        "end": 199
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 200,
                        "end": 205
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1310-Table3-1.png",
                        "caption": "Table 3: Dataset statistics",
                        "page": 5,
                        "bbox": {
                            "x1": 320.64,
                            "x2": 512.16,
                            "y1": 125.75999999999999,
                            "y2": 177.12
                        }
                    },
                    {
                        "filename": "../figure/image/1310-Table4-1.png",
                        "caption": "Table 4: Classification accuracy in % given by Cs, Ct and WSM with different feature sets for electronics as source and movie as target.",
                        "page": 6,
                        "bbox": {
                            "x1": 310.56,
                            "x2": 522.24,
                            "y1": 62.879999999999995,
                            "y2": 276.96
                        }
                    },
                    {
                        "filename": "../figure/image/1310-Table6-1.png",
                        "caption": "Table 6: In-domain sentiment classification accuracy using significant words and unigrams.",
                        "page": 7,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 280.32,
                            "y1": 500.64,
                            "y2": 553.92
                        }
                    },
                    {
                        "filename": "../figure/image/1310-Figure1-1.png",
                        "caption": "Figure 1: F-score for SCP words identification task (Source→ Target) with respect to gold standard SCP words.",
                        "page": 7,
                        "bbox": {
                            "x1": 85.92,
                            "x2": 512.16,
                            "y1": 61.44,
                            "y2": 270.24
                        }
                    },
                    {
                        "filename": "../figure/image/1310-Table5-1.png",
                        "caption": "Table 5: Cross-domain sentiment classification accuracy in the target domain (Source (S)→ Target (T)).",
                        "page": 7,
                        "bbox": {
                            "x1": 139.68,
                            "x2": 457.44,
                            "y1": 317.76,
                            "y2": 453.12
                        }
                    },
                    {
                        "filename": "../figure/image/1310-Table7-1.png",
                        "caption": "Table 7: t-test (α = 0.05) results on the difference in accuracy produced by various systems (cf. Table 5).",
                        "page": 8,
                        "bbox": {
                            "x1": 103.67999999999999,
                            "x2": 259.2,
                            "y1": 62.879999999999995,
                            "y2": 228.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1310-Table8-1.png",
                        "caption": "Table 8: Common unique words between the domains in percent (%).",
                        "page": 8,
                        "bbox": {
                            "x1": 320.64,
                            "x2": 512.16,
                            "y1": 87.84,
                            "y2": 109.92
                        }
                    },
                    {
                        "filename": "../figure/image/1310-Table1-1.png",
                        "caption": "Table 1: Cosine-similarity scores with PosPivot (great) and NegPivot (poor), and inferred polarity orientation of the words.",
                        "page": 4,
                        "bbox": {
                            "x1": 91.67999999999999,
                            "x2": 270.24,
                            "y1": 62.879999999999995,
                            "y2": 186.23999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1310-Table2-1.png",
                        "caption": "Table 2: Notations used in the paper",
                        "page": 4,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 287.03999999999996,
                            "y1": 252.48,
                            "y2": 377.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-72"
        },
        {
            "slides": {
                "1": {
                    "title": "Classification vs Structured Prediction",
                    "text": [
                        "I like this book Classifier",
                        "Predictor I like this book"
                    ],
                    "page_nums": [
                        4,
                        5
                    ],
                    "images": []
                },
                "2": {
                    "title": "Search based Structured Prediction",
                    "text": [
                        "I like this book"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Problems of the Generic Learning Algorithm",
                    "text": [
                        "Ambiguities in training data Training and test discrepancy both this and the seems reasonable What if I made wrong decision?",
                        "I like this book"
                    ],
                    "page_nums": [
                        9,
                        10
                    ],
                    "images": []
                },
                "10": {
                    "title": "KD on Supervised reference Data",
                    "text": [
                        "book I like love the this book I like love the this",
                        "I like this book"
                    ],
                    "page_nums": [
                        15
                    ],
                    "images": []
                },
                "11": {
                    "title": "KD on Explored Data",
                    "text": [
                        "book I like love the this",
                        "I like book this the",
                        "Training and test discrepancy Search Space",
                        "Explore (Ross and Bagnell, 2010)",
                        "We use teacher q to explore the search space & learn from KD on the explored data"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                }
            },
            "paper_title": "Distilling Knowledge for Search-based Structured Prediction",
            "paper_id": "1315",
            "paper": {
                "title": "Distilling Knowledge for Search-based Structured Prediction",
                "abstract": "Many natural language processing tasks can be modeled into structured prediction and solved as a search problem. In this paper, we distill an ensemble of multiple models trained with different initialization into a single model. In addition to learning to match the ensemble's probability output on the reference states, we also use the ensemble to explore the search space and learn from the encountered states in the exploration. Experimental results on two typical search-based structured prediction tasks -transition-based dependency parsing and neural machine translation show that distillation can effectively improve the single model's performance and the final model achieves improvements of 1.32 in LAS and 2.65 in BLEU score on these two tasks respectively over strong baselines and it outperforms the greedy structured prediction models in previous literatures.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Search-based structured prediction models the generation of natural language structure (part-ofspeech tags, syntax tree, translations, semantic graphs, etc.)"
                    },
                    {
                        "id": 1,
                        "string": "as a search problem (Collins and Roark, 2004; Liang et al., 2006; Zhang and Clark, 2008; Huang et al., 2012; Sutskever et al., 2014; Goodman et al., 2016) ."
                    },
                    {
                        "id": 2,
                        "string": "It has drawn a lot of research attention in recent years thanks to its competitive performance on both accuracy and running time."
                    },
                    {
                        "id": 3,
                        "string": "A stochastic policy that controls the whole search process is usually learned by imitating a reference policy."
                    },
                    {
                        "id": 4,
                        "string": "The imitation is usually addressed as training a classifier to predict the ref- erence policy's search action on the encountered states when performing the reference policy."
                    },
                    {
                        "id": 5,
                        "string": "Such imitation process can sometimes be problematic."
                    },
                    {
                        "id": 6,
                        "string": "One problem is the ambiguities of the reference policy, in which multiple actions lead to the optimal structure but usually, only one is chosen as training instance (Goldberg and Nivre, 2012 )."
                    },
                    {
                        "id": 7,
                        "string": "Another problem is the discrepancy between training and testing, in which during the test phase, the learned policy enters non-optimal states whose search action is never learned (Ross and Bagnell, 2010; Ross et al., 2011) ."
                    },
                    {
                        "id": 8,
                        "string": "All these problems harm the generalization ability of search-based structured prediction and lead to poor performance."
                    },
                    {
                        "id": 9,
                        "string": "Previous works tackle these problems from two directions."
                    },
                    {
                        "id": 10,
                        "string": "To overcome the ambiguities in data, techniques like ensemble are often adopted (Di-Dependency parsing Neural machine translation st (σ, β, A), where σ is a stack, β is a buffer, and A is the partially generated tree ($, y1, y2, ..., yt) , where $ is the start symbol."
                    },
                    {
                        "id": 11,
                        "string": "A {SHIFT, LEFT, RIGHT} pick one word w from the target side vocabulary W. S0 {([ ], [1, .., n] , ∅)} {($)} ST { ([ROOT] , [ ], A)} {($, y1, y2, ..., ym)} T (s, a) • SHIFT: (σ, j|β) → (σ|j, β) ($, y1, y2, ..., yt) → ($, y1, y2, ..., yt, yt+1 = w) • LEFT: (σ|i j, β) → (σ|j, β) A ← A ∪ {i ← j} • RIGHT: (σ|i j, β) → (σ|i, β) A ← A ∪ {i → j} Table 1 : The search-based structured prediction view of transition-based dependency parsing (Nivre, 2008) and neural machine translation (Sutskever et al., 2014 (Sutskever et al., )."
                    },
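                    {
                        "id": "11a",
                        "string": "An illustrative Python encoding of the three parsing transitions in Table 1; the state layout (stack, buffer, arcs as a set of (head, dependent) pairs) and the function names are assumptions:\n\ndef shift(stack, buffer, arcs):\n    # SHIFT: (sigma, j|beta) -> (sigma|j, beta)\n    return stack + [buffer[0]], buffer[1:], arcs\n\ndef left(stack, buffer, arcs):\n    # LEFT: (sigma|i j, beta) -> (sigma|j, beta), A <- A + {i <- j}\n    i, j = stack[-2], stack[-1]\n    return stack[:-2] + [j], buffer, arcs | {(j, i)}\n\ndef right(stack, buffer, arcs):\n    # RIGHT: (sigma|i j, beta) -> (sigma|i, beta), A <- A + {i -> j}\n    i, j = stack[-2], stack[-1]\n    return stack[:-2] + [i], buffer, arcs | {(i, j)}"
                    },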
                    {
                        "id": 12,
                        "string": "etterich, 2000 ."
                    },
                    {
                        "id": 13,
                        "string": "To mitigate the discrepancy, exploration is encouraged during the training process (Ross and Bagnell, 2010; Ross et al., 2011; Goldberg and Nivre, 2012; Bengio et al., 2015; Goodman et al., 2016) ."
                    },
                    {
                        "id": 14,
                        "string": "In this paper, we propose to consider these two problems in an integrated knowledge distillation manner (Hinton et al., 2015) ."
                    },
                    {
                        "id": 15,
                        "string": "We distill a single model from the ensemble of several baselines trained with different initialization by matching the ensemble's output distribution on the reference states."
                    },
                    {
                        "id": 16,
                        "string": "We also let the ensemble randomly explore the search space and learn the single model to mimic ensemble's distribution on the encountered exploration states."
                    },
                    {
                        "id": 17,
                        "string": "Combing the distillation from reference and exploration further improves our single model's performance."
                    },
                    {
                        "id": 18,
                        "string": "The workflow of our method is shown in Figure 1 ."
                    },
                    {
                        "id": 19,
                        "string": "We conduct experiments on two typical searchbased structured prediction tasks: transition-based dependency parsing and neural machine translation."
                    },
                    {
                        "id": 20,
                        "string": "The results of both these two experiments show the effectiveness of our knowledge distillation method by outperforming strong baselines."
                    },
                    {
                        "id": 21,
                        "string": "In the parsing experiments, an improvement of 1.32 in LAS is achieved and in the machine translation experiments, such improvement is 2.65 in BLEU."
                    },
                    {
                        "id": 22,
                        "string": "Our model also outperforms the greedy models in previous works."
                    },
                    {
                        "id": 23,
                        "string": "Major contributions of this paper include: • We study the knowledge distillation in search-based structured prediction and propose to distill the knowledge of an ensemble into a single model by learning to match its distribution on both the reference states ( §3.2) and exploration states encountered when using the ensemble to explore the search space ( §3.3)."
                    },
                    {
                        "id": 24,
                        "string": "A further combination of these two methods is also proposed to improve the performance ( §3.4)."
                    },
                    {
                        "id": 25,
                        "string": "• We conduct experiments on two search-based structured prediction problems: transitionbased dependency parsing and neural machine translation."
                    },
                    {
                        "id": 26,
                        "string": "In both these two problems, the distilled model significantly improves over strong baselines and outperforms other greedy structured prediction ( §4.2)."
                    },
                    {
                        "id": 27,
                        "string": "Comprehensive analysis empirically shows the feasibility of our distillation method ( §4.3)."
                    },
                    {
                        "id": 28,
                        "string": "Background Search-based Structured Prediction Structured prediction maps an input x = (x 1 , x 2 , ..., x n ) to its structural output y = (y 1 , y 2 , ..., y m ), where each component of y has some internal dependencies."
                    },
                    {
                        "id": 29,
                        "string": "Search-based structured prediction (Collins and Roark, 2004; Daumé III et al., 2005; Daumé III et al., 2009; Ross and Bagnell, 2010; Ross et al., 2011; Doppa et al., 2014; Vlachos and Clark, 2014; Chang et al., 2015) models the generation of the structure as a search problem and it can be formalized as a tuple (S, A, T (s, a), S 0 , S T ), in which S is a set of states, A is a set of actions, T is a function that maps S × A → S, S 0 is a set of initial states, and S T is a set of terminal states."
                    },
                    {
                        "id": 30,
                        "string": "Starting from an initial state s 0 ∈ S 0 , the structured prediction model repeatably chooses an action a t ∈ A by following a policy π(s) and applies a t to s t and enter a new state s t+1 as s t+1 ← T (s t , a t ), until a final state s T ∈ S T is achieved."
                    },
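                    {
                        "id": "30a",
                        "string": "A minimal Python sketch of this search loop under the greedy policy; initial_state, transition, and is_terminal are assumed helpers (not from the paper), and p(s) is assumed to return a dict mapping actions to probabilities:\ndef greedy_search(x, p, initial_state, transition, is_terminal):\n    # Follow pi(s) = argmax_a p(a | s) until a terminal state is reached.\n    s = initial_state(x)\n    while not is_terminal(s):\n        probs = p(s)                   # action distribution at state s\n        a = max(probs, key=probs.get)  # greedy action\n        s = transition(s, a)           # s_{t+1} <- T(s_t, a_t)\n    return s"
                    },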
                    {
                        "id": 31,
                        "string": "Several natural language structured prediction problems can be modeled under the search-based framework including dependency parsing (Nivre, 2008) and neural machine translation (Liang et al., 2006; Sutskever et al., 2014) ."
                    },
                    {
                        "id": 32,
                        "string": "Table 1 shows the search-based structured prediction view of these two problems."
                    },
                    {
                        "id": 33,
                        "string": "In the data-driven settings, π(s) controls the whole search process and is usually parameterized by a classifier p(a | s) which outputs the proba-Algorithm 1: Generic learning algorithm for search-based structured prediction."
                    },
                    {
                        "id": 34,
                        "string": "Input: training data: {x (n) , y (n) } N n=1 ; the reference policy: π R (s, y)."
                    },
                    {
                        "id": 35,
                        "string": "Output: classifier p(a|s)."
                    },
                    {
                        "id": 36,
                        "string": "1 D ← ∅; 2 for n ← 1...N do 3 t ← 0; 4 s t ← s 0 (x (n) ); 5 while s t / ∈ S T do 6 a t ← π R (s t , y (n) ); 7 D ← D ∪ {s t }; 8 s t+1 ← T (s t , a t ); 9 t ← t + 1; 10 end 11 end 12 optimize L N LL ; bility of choosing an action a on the given state s. The commonly adopted greedy policy can be formalized as choosing the most probable action with π(s) = argmax a p(a | s) at test stage."
                    },
                    {
                        "id": 37,
                        "string": "To learn an optimal classifier, search-based structured prediction requires constructing a reference policy π R (s, y), which takes an input state s, gold structure y and outputs its reference action a, and training p(a | s) to imitate the reference policy."
                    },
                    {
                        "id": 38,
                        "string": "Algorithm 1 shows the common practices in training p(a | s), which involves: first, using π R (s, y) to generate a sequence of reference states and actions on the training data (line 1 to line 11 in Algorithm 1); second, using the states and actions on the reference sequences as examples to train p(a | s) with negative log-likelihood (NLL) loss (line 12 in Algorithm 1), L N LL = s∈D a −1{a = π R } · log p(a | s) where D is a set of training data."
                    },
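                    {
                        "id": "38a",
                        "string": "To make Algorithm 1 concrete, here is a minimal Python sketch (ours, not the authors' code); reference_policy, initial_state, transition, and is_terminal are assumed helpers, and p(s) is assumed to return a dict of action probabilities:\nimport math\n\ndef collect_reference_states(data, reference_policy, initial_state, transition, is_terminal):\n    # Lines 1-11 of Algorithm 1: run pi_R on each training pair, record (state, action).\n    D = []\n    for x, y in data:\n        s = initial_state(x)\n        while not is_terminal(s):\n            a = reference_policy(s, y)\n            D.append((s, a))\n            s = transition(s, a)\n    return D\n\ndef nll_loss(p, D):\n    # Line 12: L_NLL sums -log p(a | s) over the reference (state, action) pairs.\n    return sum(-math.log(p(s)[a]) for s, a in D)"
                    },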
                    {
                        "id": 39,
                        "string": "The reference policy is sometimes sub-optimal and ambiguous which means on one state, there can be more than one action that leads to the optimal prediction."
                    },
                    {
                        "id": 40,
                        "string": "In transition-based dependency parsing, Goldberg and Nivre (2012) showed that one dependency tree can be reached by several search sequences using Nivre (2008)'s arcstandard algorithm."
                    },
                    {
                        "id": 41,
                        "string": "In machine translation, the ambiguity problem also exists because one source language sentence usually has multiple semantically correct translations but only one reference translation is presented."
                    },
                    {
                        "id": 42,
                        "string": "Similar problems have also been observed in semantic parsing (Goodman et al., 2016) ."
                    },
                    {
                        "id": 43,
                        "string": "According to Frénay and Verleysen (2014) , the widely used NLL loss is vulnerable to ambiguous data which make it worse for searchbased structured prediction."
                    },
                    {
                        "id": 44,
                        "string": "Besides the ambiguity problem, training and testing discrepancy is another problem that lags the search-based structured prediction performance."
                    },
                    {
                        "id": 45,
                        "string": "Since the training process imitates the reference policy, all the states in the training data are optimal which means they are guaranteed to reach the optimal structure."
                    },
                    {
                        "id": 46,
                        "string": "But during the test phase, the model can predict non-optimal states whose search action is never learned."
                    },
                    {
                        "id": 47,
                        "string": "The greedy search which is prone to error propagation also worsens this problem."
                    },
                    {
                        "id": 48,
                        "string": "Knowledge Distillation A cumbersome model, which could be an ensemble of several models or a single model with larger number of parameters, usually provides better generalization ability."
                    },
                    {
                        "id": 49,
                        "string": "Knowledge distillation (Buciluǎ et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015) is a class of methods for transferring the generalization ability of the cumbersome teacher model into a small student model."
                    },
                    {
                        "id": 50,
                        "string": "Instead of optimizing NLL loss, knowledge distillation uses the distribution q(y | x) outputted by the teacher model as \"soft target\" and optimizes the knowledge distillation loss, L KD = x∈D y −q(y | x) · log p(y | x)."
                    },
                    {
                        "id": 51,
                        "string": "In search-based structured prediction scenario, x corresponds to the state s and y corresponds to the action a."
                    },
                    {
                        "id": 52,
                        "string": "Through optimizing the distillation loss, knowledge of the teacher model is learned by the student model p(y | x)."
                    },
                    {
                        "id": 53,
                        "string": "When correct label is presented, NLL loss can be combined with the distillation loss via simple interpolation as L = αL KD + (1 − α)L N LL (1) 3 Knowledge Distillation for Search-based Structured Prediction Ensemble As Hinton et al."
                    },
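                    {
                        "id": "53a",
                        "string": "A small Python sketch of the two losses (illustrative only; q_soft and p_probs are assumed dicts mapping actions to teacher and student probabilities):\nimport math\n\ndef kd_loss(q_soft, p_probs):\n    # L_KD: cross entropy of the student against the teacher's soft target.\n    return -sum(q_soft[a] * math.log(p_probs[a]) for a in q_soft)\n\ndef combined_loss(q_soft, p_probs, gold_action, alpha):\n    # Equation 1: L = alpha * L_KD + (1 - alpha) * L_NLL.\n    l_nll = -math.log(p_probs[gold_action])\n    return alpha * kd_loss(q_soft, p_probs) + (1 - alpha) * l_nll"
                    },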
                    {
                        "id": 54,
                        "string": "(2015) pointed out, although the real objective of a machine learning algorithm is to generalize well to new data, models are usually trained to optimize the performance on training data, which bias the model to the training data."
                    },
                    {
                        "id": 55,
                        "string": "In search-based structured prediction, such biases can result from either the ambiguities in the training data or the discrepancy between training and testing."
                    },
                    {
                        "id": 56,
                        "string": "It would be more problematic to train p(a | s) using the loss which is in-robust to ambiguities and only considering the optimal states."
                    },
                    {
                        "id": 57,
                        "string": "The effect of ensemble on ambiguous data has been studied in Dietterich (2000) ."
                    },
                    {
                        "id": 58,
                        "string": "They empirically showed that ensemble can overcome the ambiguities in the training data."
                    },
                    {
                        "id": 59,
                        "string": "Daumé III et al."
                    },
                    {
                        "id": 60,
                        "string": "(2005) also use weighted ensemble of parameters from different iterations as their final structure prediction model."
                    },
                    {
                        "id": 61,
                        "string": "In this paper, we consider to use ensemble technique to improve the generalization ability of our search-based structured prediction model following these works."
                    },
                    {
                        "id": 62,
                        "string": "In practice, we train M search-based structured prediction models with different initialized weights and ensemble them by the average of their output distribution as q(a | s) = 1 M m q m (a | s)."
                    },
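                    {
                        "id": "62a",
                        "string": "A minimal sketch of the ensemble average (ours; each model is assumed to be a callable returning a dict of action probabilities for a state):\ndef ensemble_distribution(models, s):\n    # q(a | s) = (1/M) * sum_m q_m(a | s): average the M output distributions.\n    M = len(models)\n    dists = [m(s) for m in models]\n    return {a: sum(d[a] for d in dists) / M for a in dists[0]}"
                    },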
                    {
                        "id": 63,
                        "string": "In Section 4.3.1, we empirically show that the ensemble has the ability to choose a good search action in the optimal-yetambiguous states and the non-optimal states."
                    },
                    {
                        "id": 64,
                        "string": "Distillation from Reference As we can see in Section 4, ensemble indeed improves the performance of baseline models."
                    },
                    {
                        "id": 65,
                        "string": "However, real world deployment is usually constrained by computation and memory resources."
                    },
                    {
                        "id": 66,
                        "string": "Ensemble requires running the structured prediction models for multiple times, and that makes it less applicable in real-world problem."
                    },
                    {
                        "id": 67,
                        "string": "To take the advantage of the ensemble model while avoid running the models multiple times, we use the knowledge distillation technique to distill a single model from the ensemble."
                    },
                    {
                        "id": 68,
                        "string": "We started from changing the NLL learning objective in Algorithm 1 into the distillation loss (Equation 1) as shown in Algorithm 2."
                    },
                    {
                        "id": 69,
                        "string": "Since such method learns the model on the states produced by the reference policy, we name it as distillation from reference."
                    },
                    {
                        "id": 70,
                        "string": "Blocks connected by in dashed red lines in Figure 1 show the workflow of our distillation from reference."
                    },
                    {
                        "id": 71,
                        "string": "Distillation from Exploration In the scenario of search-based structured prediction, transferring the teacher model's generalization ability into a student model not only includes matching the teacher model's soft targets on the reference search sequence, but also imitating the search decisions made by the teacher model."
                    },
                    {
                        "id": 72,
                        "string": "One way to accomplish the imitation can be sampling Algorithm 2: Knowledge distillation for search-based structured prediction."
                    },
                    {
                        "id": 73,
                        "string": "Input: training data: {x (n) , y (n) } N n=1 ; the reference policy: π R (s, y); the exploration policy: π E (s) which samples an action from the annealed ensemble q(a | s) 1 T Output: classifier p(a | s)."
                    },
                    {
                        "id": 74,
                        "string": "1 D ← ∅; 2 for n ← 1...N do 3 t ← 0; 4 s t ← s 0 (x (n) ); 5 while s t / ∈ S T do 6 if distilling from reference then 7 a t ← π R (s t , y (n) ); 8 else 9 a t ← π E (s t ); search sequence from the ensemble and learn from the soft target on the sampled states."
                    },
                    {
                        "id": 75,
                        "string": "More concretely, we change π R (s, y) into a policy π E (s) which samples an action a from q(a | s) 1 T , where T is the temperature that controls the sharpness of the distribution (Hinton et al., 2015) ."
                    },
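                    {
                        "id": "75a",
                        "string": "A sketch of such an exploration policy (ours, not the authors' code; q_dist is an assumed dict of ensemble action probabilities): it samples from q(a | s)^(1/T), where a small T sharpens the distribution.\nimport random\n\ndef exploration_policy(q_dist, T=1.0):\n    # Anneal the ensemble distribution, renormalize, and sample one action.\n    annealed = {a: p ** (1.0 / T) for a, p in q_dist.items()}\n    Z = sum(annealed.values())\n    actions = list(annealed)\n    weights = [annealed[a] / Z for a in actions]\n    return random.choices(actions, weights=weights)[0]"
                    },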
                    {
                        "id": 76,
                        "string": "The algorithm is shown in Algorithm 2."
                    },
                    {
                        "id": 77,
                        "string": "Since such distillation generate training instances from exploration, we name it as distillation from exploration."
                    },
                    {
                        "id": 78,
                        "string": "Blocks connected by in solid blue lines in Figure 1 show the workflow of our distillation from exploration."
                    },
                    {
                        "id": 79,
                        "string": "On the sampled states, reference decision from π R is usually non-trivial to achieve, which makes learning from NLL loss infeasible."
                    },
                    {
                        "id": 80,
                        "string": "In Section 4, we empirically show that fully distilling from the soft target, i.e."
                    },
                    {
                        "id": 81,
                        "string": "setting α = 1 in Equation 1, achieves comparable performance with that both from distillation and NLL."
                    },
                    {
                        "id": 82,
                        "string": "Distillation from Both Distillation from reference can encourage the model to predict the action made by the reference policy and distillation from exploration learns the model on arbitrary states."
                    },
                    {
                        "id": 83,
                        "string": "They transfer the generalization ability of the ensemble from different aspects."
                    },
                    {
                        "id": 84,
                        "string": "Hopefully combining them can further improve the performance."
                    },
                    {
                        "id": 85,
                        "string": "In this paper, we combine distillation from reference and exploration with the following manner: we use π R and π E to generate a set of training states."
                    },
                    {
                        "id": 86,
                        "string": "Then, we learn p(a | s) on the generated states."
                    },
                    {
                        "id": 87,
                        "string": "If one state was generated by the reference policy, we minimize the interpretation of distillation and NLL loss."
                    },
                    {
                        "id": 88,
                        "string": "Otherwise, we minimize the distillation loss only."
                    },
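                    {
                        "id": "88a",
                        "string": "A sketch of the combined training signal (ours, reusing the kd_loss and combined_loss sketches above; origin marks whether a state came from pi_R or pi_E):\ndef per_state_loss(origin, q_soft, p_probs, gold_action, alpha):\n    # Reference states: interpolated loss (Equation 1); exploration states:\n    # distillation loss only, since no reference action is available there.\n    if origin == 'reference':\n        return combined_loss(q_soft, p_probs, gold_action, alpha)\n    return kd_loss(q_soft, p_probs)"
                    },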
                    {
                        "id": 89,
                        "string": "Experiments We perform experiments on two tasks: transitionbased dependency parsing and neural machine translation."
                    },
                    {
                        "id": 90,
                        "string": "Both these two tasks are converted to search-based structured prediction as Section 2.1."
                    },
                    {
                        "id": 91,
                        "string": "For the transition-based parsing, we use the stack-lstm parsing model proposed by Dyer et al."
                    },
                    {
                        "id": 92,
                        "string": "(2015) to parameterize the classifier."
                    },
                    {
                        "id": 93,
                        "string": "1 For the neural machine translation, we parameterize the classifier as an LSTM encoder-decoder model by following Luong et al."
                    },
                    {
                        "id": 94,
                        "string": "(2015) ."
                    },
                    {
                        "id": 95,
                        "string": "2 We encourage the reader of this paper to refer corresponding papers for more details."
                    },
                    {
                        "id": 96,
                        "string": "Settings Transition-based Dependency Parsing We perform experiments on Penn Treebank (PTB) dataset with standard data split (Section 2-21 for training, Section 22 for development, and Section 23 for testing)."
                    },
                    {
                        "id": 97,
                        "string": "Stanford dependencies are converted from the original constituent trees using Stanford CoreNLP 3.3.0 3 by following Dyer et al."
                    },
                    {
                        "id": 98,
                        "string": "(2015) ."
                    },
                    {
                        "id": 99,
                        "string": "Automatic part-of-speech tags are assigned by 10-way jackknifing whose accuracy is 97.5%."
                    },
                    {
                        "id": 100,
                        "string": "Labeled attachment score (LAS) excluding punctuation are used in evaluation."
                    },
                    {
                        "id": 101,
                        "string": "For the other hyper-parameters, we use the same settings as Dyer et al."
                    },
                    {
                        "id": 102,
                        "string": "(2015) ."
                    },
                    {
                        "id": 103,
                        "string": "The best iteration and α is determined on the development set."
                    },
                    {
                        "id": 104,
                        "string": "BLEU score on dev."
                    },
                    {
                        "id": 106,
                        "string": "Reimers and Gurevych (2017) and others have pointed out that neural network training is nondeterministic and depends on the seed for the random number generator."
                    },
                    {
                        "id": 107,
                        "string": "To control for this effect, they suggest to report the average of M differentlyseeded runs."
                    },
                    {
                        "id": 108,
                        "string": "In all our dependency parsing, we set n = 20."
                    },
                    {
                        "id": 109,
                        "string": "Neural Machine Translation We conduct our experiments on a small machine translation dataset, which is the Germanto-English portion of the IWSLT 2014 machine translation evaluation campaign."
                    },
                    {
                        "id": 110,
                        "string": "The dataset contains around 153K training sentence pairs, 7K development sentence pairs, and 7K testing sentence pairs."
                    },
                    {
                        "id": 111,
                        "string": "We use the same preprocessing as Ranzato et al."
                    },
                    {
                        "id": 112,
                        "string": "(2015) , which leads to a German vocabulary of about 30K entries and an English vocabulary of 25K entries."
                    },
                    {
                        "id": 113,
                        "string": "One-layer LSTM for both encoder and decoder with 256 hidden units are used by following Wiseman and Rush (2016) ."
                    },
                    {
                        "id": 114,
                        "string": "BLEU (Papineni et al., 2002) was used to evaluate the translator's performance."
                    },
                    {
                        "id": 115,
                        "string": "4 Like in the dependency parsing experiments, we run M = 10 differentlyseeded runs and report the averaged score."
                    },
                    {
                        "id": 116,
                        "string": "Optimizing the distillation loss in Equation 1 requires enumerating over the action space."
                    },
                    {
                        "id": 117,
                        "string": "It is expensive for machine translation since the size of the action space (vocabulary) is considerably large (25K in our experiments)."
                    },
                    {
                        "id": 118,
                        "string": "In this paper, we use the K-most probable actions (translations on target side) on one state to approximate the whole probability distribution of q(a | s) as a q(a | s) · log p(a | s) ≈ K k q(â k | s) · log p(â k | s), whereâ k is the k-th probable action."
                    },
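                    {
                        "id": "118a",
                        "string": "A sketch of the top-K approximation (ours; q_dist and p_probs are assumed dicts over the target vocabulary):\nimport heapq\nimport math\n\ndef topk_kd_loss(q_dist, p_probs, K):\n    # Approximate the full sum over the vocabulary with the K most probable\n    # teacher actions, avoiding a pass over all 25K entries.\n    top_actions = heapq.nlargest(K, q_dist, key=q_dist.get)\n    return -sum(q_dist[a] * math.log(p_probs[a]) for a in top_actions)"
                    },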
                    {
                        "id": 119,
                        "string": "We fix α to Dozat and Manning (2016) 94.08 Kuncoro et al."
                    },
                    {
                        "id": 120,
                        "string": "(2016) 92.06 Kuncoro et al."
                    },
                    {
                        "id": 121,
                        "string": "(2017) 94.60 (Nilsson and Nivre, 2008) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01."
                    },
                    {
                        "id": 123,
                        "string": "These results are shown in Figure  2 where there is no significant difference between different Ks and in speed consideration, we set K to 1 in the following experiments."
                    },
                    {
                        "id": 124,
                        "string": "We tune the temperature T during exploration and the results are shown in Figure 3 ."
                    },
                    {
                        "id": 125,
                        "string": "Sharpen the distribution during the sampling process generally performs better on development set."
                    },
                    {
                        "id": 126,
                        "string": "Our distillation from exploration model gets almost the same performance as that from reference, but simply combing these two sets of data outperform both models by achieving an LAS of 92.14."
                    },
                    {
                        "id": 127,
                        "string": "Results Transition-based Dependency Parsing We also compare our parser with the other parsers in Table 2 ."
                    },
                    {
                        "id": 128,
                        "string": "The second group shows the greedy transition-based parsers in previous literatures."
                    },
                    {
                        "id": 129,
                        "string": "Andor et al."
                    },
                    {
                        "id": 130,
                        "string": "(2016) presented an alternative state representation and explored both greedy and beam search decoding."
                    },
                    {
                        "id": 131,
                        "string": "explores training the greedy parser with dynamic oracle."
                    },
                    {
                        "id": 132,
                        "string": "Our distillation parser outperforms all these greedy counterparts."
                    },
                    {
                        "id": 133,
                        "string": "The third group shows   parsers trained on different techniques including decoding with beam search (Buckman et al., 2016; Andor et al., 2016) , training transitionbased parser with beam search (Andor et al., 2016) , graph-based parsing (Dozat and Manning, 2016) , distilling a graph-based parser from the output of 20 parsers (Kuncoro et al., 2016) , and converting constituent parsing results to dependencies (Kuncoro et al., 2017) ."
                    },
                    {
                        "id": 134,
                        "string": "Our distillation parser still outperforms its transition-based counterparts but lags the others."
                    },
                    {
                        "id": 135,
                        "string": "We attribute the gap between our parser with the other parsers to the difference in parsing algorithms."
                    },
                    {
                        "id": 136,
                        "string": "Table 3 shows the experimental results on IWSLT 2014 dataset."
                    },
                    {
                        "id": 137,
                        "string": "Similar to the PTB parsing results, the ensemble 10 translators outperforms the baseline translator by 3.47 in BLEU score."
                    },
                    {
                        "id": 138,
                        "string": "Distilling from the ensemble by following the reference leads to a single translator of 24.76 BLEU score."
                    },
                    {
                        "id": 139,
                        "string": "Like in the parsing experiments, sharpen the distribution when exploring the search space is more helpful to the model's performance but the differences when T ≤ 0.2 is not significant as shown in Figure 3 ."
                    },
                    {
                        "id": 140,
                        "string": "We set T = 0.1 in our distillation from exploration experiments since it achieves the best development score."
                    },
                    {
                        "id": 141,
                        "string": "Table 3 shows the exploration result of a BLEU score of 24.64 and it slightly lags the best reference model."
                    },
                    {
                        "id": 142,
                        "string": "Distilling from both the reference and exploration improves the single model's performance by a large margin and achieves a BLEU score of 25.44."
                    },
                    {
                        "id": 143,
                        "string": "Neural Machine Translation We also compare our model with other translation models including the one trained with reinforcement learning (Ranzato et al., 2015) and that using beam search in training (Wiseman and Rush, 2016) ."
                    },
                    {
                        "id": 144,
                        "string": "Our distillation translator outperforms these models."
                    },
                    {
                        "id": 145,
                        "string": "Both the parsing and machine translation experiments confirm that it's feasible to distill a reasonable search-based structured prediction model by just exploring the search space."
                    },
                    {
                        "id": 146,
                        "string": "Combining the reference and exploration further improves the model's performance and outperforms its greedy structured prediction counterparts."
                    },
                    {
                        "id": 147,
                        "string": "Analysis In Section 4.2, improvements from distilling the ensemble have been witnessed in both the transition-based dependency parsing and neural machine translation experiments."
                    },
                    {
                        "id": 148,
                        "string": "However, questions like \"Why the ensemble works better?"
                    },
                    {
                        "id": 149,
                        "string": "Is it feasible to fully learn from the distillation loss without NLL?"
                    },
                    {
                        "id": 150,
                        "string": "Is learning from distillation loss stable?\""
                    },
                    {
                        "id": 151,
                        "string": "are yet to be answered."
                    },
                    {
                        "id": 152,
                        "string": "In this section, we first study the ensemble's behavior on \"problematic\" states to show its generalization ability."
                    },
                    {
                        "id": 153,
                        "string": "Then, we empirically study the feasibility of fully learning from the distillation loss by studying the effect of α in the distillation from reference setting."
                    },
                    {
                        "id": 154,
                        "string": "Finally, we show that learning from distillation loss is less sensitive to initialization and achieves a more stable model."
                    },
                    {
                        "id": 155,
                        "string": "Table 4 : The ranking performance of parsers' output distributions evaluated in MAP on \"problematic\" states."
                    },
                    {
                        "id": 156,
                        "string": "Ensemble on \"Problematic\" States As mentioned in previous sections, \"problematic\" states which is either ambiguous or non-optimal harm structured prediciton's performance."
                    },
                    {
                        "id": 157,
                        "string": "Ensemble shows to improve the performance in Section 4.2, which indicates it does better on these states."
                    },
                    {
                        "id": 158,
                        "string": "To empirically testify this, we use dependency parsing as a testbed and study the ensemble's output distribution using the dynamic oracle."
                    },
                    {
                        "id": 159,
                        "string": "The dynamic oracle (Goldberg and Nivre, 2012; Goldberg et al., 2014) can be used to efficiently determine, given any state s, which transition action leads to the best achievable parse from s; if some errors may have already made, what is the best the parser can do, going forward?"
                    },
                    {
                        "id": 160,
                        "string": "This allows us to analyze the accuracy of each parser's individual decisions, in the \"problematic\" states."
                    },
                    {
                        "id": 161,
                        "string": "In this paper, we evaluate the output distributions of the baseline and ensemble parser against the reference actions suggested by the dynamic oracle."
                    },
                    {
                        "id": 162,
                        "string": "Since dynamic oracle yields more than one reference actions due to ambiguities and previous mistakes and the output distribution can be treated as their scoring, we evaluate them as a ranking problem."
                    },
                    {
                        "id": 163,
                        "string": "Intuitively, when multiple reference actions exist, a good parser should push probability mass to these actions."
                    },
                    {
                        "id": 164,
                        "string": "We draw problematic states by sampling from our baseline parser."
                    },
                    {
                        "id": 165,
                        "string": "The comparison in Table 4 shows that the ensemble model significantly outperforms the baseline on ambiguous and non-optimal states."
                    },
                    {
                        "id": 166,
                        "string": "This observation indicates the ensemble's output distribution is more \"informative\", thus generalizes well on problematic states and achieves better performance."
                    },
                    {
                        "id": 167,
                        "string": "We also observe that the distillation model perform better than both the baseline and ensemble."
                    },
                    {
                        "id": 168,
                        "string": "We attribute this to the fact that the distillation model is learned from exploration."
                    },
                    {
                        "id": 169,
                        "string": "Effect of α Over our distillation from reference model, we study the effect of α in Equation 1."
                    },
                    {
                        "id": 170,
                        "string": "We vary α from 0 to 1 by a step of 0.1 in both the transitionbased dependency parsing and neural machine translation experiments and plot the model's performance on development sets in Figure 4 ."
                    },
                    {
                        "id": 171,
                        "string": "Similar trends are witnessed in both these two experiments that model that's configured with larger α generally performs better than that with smaller α."
                    },
                    {
                        "id": 172,
                        "string": "For the dependency parsing problem, the best development performance is achieved when we set α = 1, and for the machine translation, the best α is 0.8."
                    },
                    {
                        "id": 173,
                        "string": "There is only 0.2 point of difference between the best α model and the one with α equals to 1."
                    },
                    {
                        "id": 174,
                        "string": "Such observation indicates that when distilling from the reference policy paying more attention to the distillation loss rather than the NLL is more beneficial."
                    },
                    {
                        "id": 175,
                        "string": "It also indicates that fully learning from the distillation loss outputted by the ensemble is reasonable because models configured with α = 1 generally achieves good performance."
                    },
                    {
                        "id": 176,
                        "string": "Learning Stability Besides the improved performance, knowledge distillation also leads to more stable learning."
                    },
                    {
                        "id": 177,
                        "string": "The performance score distributions of differentlyseed runs are depicted as violin plot in Figure 5 ."
                    },
                    {
                        "id": 178,
                        "string": "Table 5 also reveals the smaller standard derivations are achieved by our distillation methods."
                    },
                    {
                        "id": 179,
                        "string": "As Keskar et al."
                    },
                    {
                        "id": 180,
                        "string": "(2016) pointed out that the general-   ization gap is not due to overfit, but due to the network converge to sharp minimizer which generalizes worse, we attribute the more stable training from our distillation model as the distillation loss presents less sharp minimizers."
                    },
                    {
                        "id": 181,
                        "string": "Related Work Several works have been proposed to applying knowledge distillation to NLP problems."
                    },
                    {
                        "id": 182,
                        "string": "Kim and Rush (2016) presented a distillation model which focus on distilling the structured loss from a large model into a small one which works on sequencelevel."
                    },
                    {
                        "id": 183,
                        "string": "In contrast to their work, we pay more attention to action-level distillation and propose to do better action-level distillation by both from reference and exploration."
                    },
                    {
                        "id": 184,
                        "string": "Freitag et al."
                    },
                    {
                        "id": 185,
                        "string": "(2017) used an ensemble of 6translators to generate training reference."
                    },
                    {
                        "id": 186,
                        "string": "Exploration was tried in their work with beam-search."
                    },
                    {
                        "id": 187,
                        "string": "We differ their work by training the single model to match the distribution of the ensemble."
                    },
                    {
                        "id": 188,
                        "string": "Using ensemble in exploration was also studied in reinforcement learning community (Osband et al., 2016) ."
                    },
                    {
                        "id": 189,
                        "string": "In addition to distilling the ensemble on the labeled training data, a line of semisupervised learning works show that it's effective to transfer knowledge of cumbersome model into a simple one on the unlabeled data (Liang et al., 2008; Li et al., 2014) ."
                    },
                    {
                        "id": 190,
                        "string": "Their extensions to knowledge distillation call for further study."
                    },
                    {
                        "id": 191,
                        "string": "Kuncoro et al."
                    },
                    {
                        "id": 192,
                        "string": "(2016) proposed to compile the knowledge from an ensemble of 20 transitionbased parsers into a voting and distill the knowledge by introducing the voting results as a regularizer in learning a graph-based parser."
                    },
                    {
                        "id": 193,
                        "string": "Different from their work, we directly do the distillation on the classifier of the transition-based parser."
                    },
                    {
                        "id": 194,
                        "string": "Besides the attempts for directly using the knowledge distillation technique, Stahlberg and Byrne (2017) propose to first build the ensemble of several machine translators into one network by unfolding and then use SVD to shrink its parameters, which can be treated as another kind of knowledge distillation."
                    },
                    {
                        "id": 195,
                        "string": "Conclusion In this paper, we study knowledge distillation for search-based structured prediction and propose to distill an ensemble into a single model both from reference and exploration states."
                    },
                    {
                        "id": 196,
                        "string": "Experiments on transition-based dependency parsing and machine translation show that our distillation method significantly improves the single model's performance."
                    },
                    {
                        "id": 197,
                        "string": "Comparison analysis gives empirically guarantee for our distillation method."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 27
                    },
                    {
                        "section": "Search-based Structured Prediction",
                        "n": "2.1",
                        "start": 28,
                        "end": 47
                    },
                    {
                        "section": "Knowledge Distillation",
                        "n": "2.2",
                        "start": 48,
                        "end": 53
                    },
                    {
                        "section": "Ensemble",
                        "n": "3.1",
                        "start": 54,
                        "end": 63
                    },
                    {
                        "section": "Distillation from Reference",
                        "n": "3.2",
                        "start": 64,
                        "end": 70
                    },
                    {
                        "section": "Distillation from Exploration",
                        "n": "3.3",
                        "start": 71,
                        "end": 81
                    },
                    {
                        "section": "Distillation from Both",
                        "n": "3.4",
                        "start": 82,
                        "end": 88
                    },
                    {
                        "section": "Experiments",
                        "n": "4",
                        "start": 89,
                        "end": 95
                    },
                    {
                        "section": "Transition-based Dependency Parsing",
                        "n": "4.1.1",
                        "start": 96,
                        "end": 108
                    },
                    {
                        "section": "Neural Machine Translation",
                        "n": "4.1.2",
                        "start": 109,
                        "end": 126
                    },
                    {
                        "section": "Transition-based Dependency Parsing",
                        "n": "4.2.1",
                        "start": 127,
                        "end": 142
                    },
                    {
                        "section": "Neural Machine Translation",
                        "n": "4.2.2",
                        "start": 143,
                        "end": 146
                    },
                    {
                        "section": "Analysis",
                        "n": "4.3",
                        "start": 147,
                        "end": 155
                    },
                    {
                        "section": "Ensemble on \"Problematic\" States",
                        "n": "4.3.1",
                        "start": 156,
                        "end": 168
                    },
                    {
                        "section": "Effect of α",
                        "n": "4.3.2",
                        "start": 169,
                        "end": 175
                    },
                    {
                        "section": "Learning Stability",
                        "n": "4.3.3",
                        "start": 176,
                        "end": 180
                    },
                    {
                        "section": "Related Work",
                        "n": "5",
                        "start": 181,
                        "end": 193
                    },
                    {
                        "section": "Conclusion",
                        "n": "6",
                        "start": 194,
                        "end": 197
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1315-Figure1-1.png",
                        "caption": "Figure 1: Workflow of our knowledge distillation for search-based structured prediction. The yellow bracket represents the ensemble of multiple models trained with different initialization. The dashed red line shows our distillation from reference (§3.2). The solid blue line shows our distillation from exploration (§3.3).",
                        "page": 0,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 525.12,
                            "y1": 221.76,
                            "y2": 396.0
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Table2-1.png",
                        "caption": "Table 2: The dependency parsing results. Significance test (Nilsson and Nivre, 2008) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01.",
                        "page": 5,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 288.0,
                            "y1": 62.879999999999995,
                            "y2": 255.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Figure3-1.png",
                        "caption": "Figure 3: The effect of T on PTB (above) and IWSLT 2014 (below) development set.",
                        "page": 5,
                        "bbox": {
                            "x1": 310.08,
                            "x2": 523.1999999999999,
                            "y1": 304.8,
                            "y2": 508.79999999999995
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Table3-1.png",
                        "caption": "Table 3: The machine translation results. MIXER denotes that of Ranzato et al. (2015), BSO denotes that of Wiseman and Rush (2016). Significance test (Koehn, 2004) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01.",
                        "page": 5,
                        "bbox": {
                            "x1": 329.76,
                            "x2": 503.03999999999996,
                            "y1": 62.879999999999995,
                            "y2": 187.2
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Table1-1.png",
                        "caption": "Table 1: The search-based structured prediction view of transition-based dependency parsing (Nivre, 2008) and neural machine translation (Sutskever et al., 2014).",
                        "page": 1,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 63.839999999999996,
                            "y2": 159.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Table4-1.png",
                        "caption": "Table 4: The ranking performance of parsers’ output distributions evaluated in MAP on “problematic” states.",
                        "page": 6,
                        "bbox": {
                            "x1": 314.88,
                            "x2": 518.4,
                            "y1": 62.879999999999995,
                            "y2": 132.0
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Figure4-1.png",
                        "caption": "Figure 4: The effect of α on PTB (above) and IWSLT 2014 (below) development set.",
                        "page": 7,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 287.03999999999996,
                            "y1": 65.75999999999999,
                            "y2": 269.76
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Figure5-1.png",
                        "caption": "Figure 5: The distributions of scores for the baseline model and our distillation from both on PTB test (left) and IWSLT 2014 test (right) on differently-seeded runs.",
                        "page": 7,
                        "bbox": {
                            "x1": 319.68,
                            "x2": 514.0799999999999,
                            "y1": 68.64,
                            "y2": 237.12
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Table5-1.png",
                        "caption": "Table 5: The minimal, maximum, and standard derivation values on differently-seeded runs.",
                        "page": 7,
                        "bbox": {
                            "x1": 312.0,
                            "x2": 521.28,
                            "y1": 327.84,
                            "y2": 427.2
                        }
                    },
                    {
                        "filename": "../figure/image/1315-Figure2-1.png",
                        "caption": "Figure 2: The effect of using different Ks when approximating distillation loss with K-most probable actions in the machine translation experiments.",
                        "page": 4,
                        "bbox": {
                            "x1": 310.56,
                            "x2": 523.1999999999999,
                            "y1": 65.75999999999999,
                            "y2": 160.79999999999998
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-73"
        },
        {
            "slides": {
                "0": {
                    "title": "Machine learning can help you",
                    "text": [
                        "***If you have enough training data"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Traditional Labeling",
                    "text": [
                        "Tom Brady was spotted in New York City on Monday with his wife Gisele Bundchen amid rumors of",
                        "Bradys alleged role in Deflategate.",
                        "Is person 1 married to person 2?"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Higher Bandwidth Supervision",
                    "text": [
                        "Tom Brady was spotted in New York City on Monday with his wife Gisele Bundchen amid rumors of",
                        "Bradys alleged role in Deflategate.",
                        "Is person 1 married to person 2?",
                        "Why do you think so?",
                        "Because the words his wife are right before person 2."
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": [
                        "figure/image/1319-Figure1-1.png"
                    ]
                },
                "4": {
                    "title": "Babble Labble Framework",
                    "text": [
                        "INPUT SEMANTIC PARSER FILTER BANK LABEL AGGREGATOR DISC. MODEL",
                        "y x y e1",
                        "Unlabeled Examples + Explanations Labeling Functions Filters Label Matrix",
                        "Label whether person 1 is married to person 2",
                        "x1 Tom Brady and his wife Gisele Bundchen were",
                        "spotted in New York City on Monday amid rumors",
                        "of Bradys alleged role in Deflategate.",
                        "return if his wife in left(x.person2, dist==1) else Correct x1 x2 x3 x4",
                        "def LF_1b(x): return if his wife in right(x.person2) else",
                        "(inconsistent) True, because the words his wife are right before person 2. def LF_2a(x): return if x.person1 in x.sentence and x.person2 in x.sentence else x2 None of us knows what happened at Kanes home Aug. 2, but it is telling that the NHL has not suspended Kane.",
                        "Pragmatic Filter LF4c (always true)",
                        "def LF_2b(x): return if x.person1 x.person2) else Correct False, because person 1 and person in the sentence are identical. y",
                        "Noisy Labels Classifier x3 Dr. Michael Richards and real estate and insurance businessman Gary Kirke did not attend the event. Correct",
                        "False, because the last word of person 1 is different than the last word of person 2.",
                        "Pragmatic Filter (duplicate of LF_3a)",
                        "y x y PRAGMATIC",
                        "True, because LF1A x1 x2 x3",
                        "x1 x2 x3 y False, because",
                        "EXPLANATIONS LABELING FUNCTIONS LABEL MATRIX PROBABILISTIC LABELS TRAINED MODEL",
                        "IMPORTANT: No Babble Labble components require no labeled training data!"
                    ],
                    "page_nums": [
                        6,
                        11,
                        15,
                        18,
                        33,
                        34
                    ],
                    "images": [
                        "figure/image/1319-Figure2-1.png"
                    ]
                },
                "10": {
                    "title": "Semantic Filter",
                    "text": [
                        "Example x1: Tom Brady was spotted in New York City on",
                        "Monday with his wife Gisele Bundchen amid rumors of Bradys alleged role in Deflategate.",
                        "Explanation True, because the words his wife",
                        "are right before person 2.",
                        "right before = to the right of right before = immediately before",
                        "def LF_1b(x): return if his wife in right(x.person2) else def LF_1a(x): return if his wife in left(x.person2, dist==1) else",
                        "(his wife is not to the right of person 2) (his wife is, in fact, 1 word to the left of person 2)"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "11": {
                    "title": "Pragmatic Filters",
                    "text": [
                        "How does the LF label our unlabeled data?",
                        "Uniform labeling signature xN x1"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "13": {
                    "title": "Discriminative Classifier",
                    "text": [
                        "Labeling functions generate noisy, conflicting votes",
                        "Resolve conflicts, re-weight & combine",
                        "Generalize beyond the labeling functions"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "14": {
                    "title": "Generalization",
                    "text": [
                        "Task: identify disease-causing chemicals",
                        "Keywords mentioned in LFs:",
                        "treats, causes, induces, prevents,",
                        "Highly relevant features learned by discriminative model:",
                        "could produce a, support diagnosis of,",
                        "Training a discriminative model that can take advantage of additional useful features not specified in labeling functions boosted performance by 4.3 F1 points on average (10%)."
                    ],
                    "page_nums": [
                        20
                    ],
                    "images": []
                },
                "16": {
                    "title": "Results",
                    "text": [
                        "Classifiers trained with Babble Labble and explanations achieved the same F1 score as ones trained with traditional labels while requiring 5100x fewer user inputs"
                    ],
                    "page_nums": [
                        22
                    ],
                    "images": []
                },
                "17": {
                    "title": "Utilizing Unlabeled Data",
                    "text": [
                        "With labeling functions, training set size (and often performance) scales with the amount of unlabeled data we have."
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": [
                        "figure/image/1319-Figure6-1.png"
                    ]
                },
                "19": {
                    "title": "Perfect Parsers Need Not Apply",
                    "text": [
                        "Task Babble Labble Babble Labble",
                        "Using perfect parses yielded negligible improvements. In this framework, for this task, a naive semantic parser is good enough!"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "20": {
                    "title": "Limitations",
                    "text": [
                        "Alice beat Bob in the annual office pool tournament.",
                        "No, because it sounds like theyre just co-workers. Prefers",
                        "(e.g., it says so) (e.g., keywords, word distance, capitalization, etc.)",
                        "Users reasons for labeling are sometimes high-level concepts that are hard to parse."
                    ],
                    "page_nums": [
                        26
                    ],
                    "images": []
                },
                "21": {
                    "title": "Related Work Data Programming",
                    "text": [
                        "Use weak supervision (e.g., labeling functions) to generate training sets",
                        "Flagship platform for dataset creation from weak supervision",
                        "Structure Learning (Bach et al., ICML 2017)",
                        "Learning dependencies between correlated labeling functions",
                        "Reef (Varma and Re, In Submission)",
                        "Auto-generating labeling functions from a small labeled set"
                    ],
                    "page_nums": [
                        27
                    ],
                    "images": []
                },
                "23": {
                    "title": "Related Work Highlighting",
                    "text": [
                        "Highlight key phrases in text:",
                        "Mark key regions in images:",
                        "Label key features directly:",
                        "Tom Brady was spotted in New York City on Monday with his wife Gisele Bundchen amid rumors of Bradys alleged role in Deflategate.",
                        "Benefits of natural language approach: more options: e.g., X is not in the sentence, X or Y is in the sentence more direct credit assignment (compared to highlighting) no feature set required a priori"
                    ],
                    "page_nums": [
                        29
                    ],
                    "images": []
                },
                "24": {
                    "title": "Summary",
                    "text": [
                        "We need more efficient ways to collect supervision",
                        "We can collect labeling heuristics instead of labels",
                        "Using this approach, training set size grows with the amount of unlabeled data we have"
                    ],
                    "page_nums": [
                        30
                    ],
                    "images": [
                        "figure/image/1319-Figure6-1.png"
                    ]
                },
                "27": {
                    "title": "Babble Labble",
                    "text": [
                        "Tom Brady was spotted in New York City on Monday with his wife Gisele Bundchen amid rumors of",
                        "Bradys alleged role in Deflategate.",
                        "LF3 Is person 1 married to person 2?",
                        "Why do you think so? Aggregated Labels",
                        "Because the words his wife are right before person 2. y",
                        "x y def LF1(x): return if his wife in left(x.person2, dist==1) else"
                    ],
                    "page_nums": [
                        35
                    ],
                    "images": [
                        "figure/image/1319-Figure1-1.png"
                    ]
                }
            },
            "paper_title": "Training Classifiers with Natural Language Explanations",
            "paper_id": "1319",
            "paper": {
                "title": "Training Classifiers with Natural Language Explanations",
                "abstract": "Training accurate classifiers requires many labels, but each label provides only limited information (one bit for binary classification). In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. A semantic parser converts these explanations into programmatic labeling functions that generate noisy labels for an arbitrary amount of unlabeled data, which is used to train a classifier. On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores from 5-100 faster by providing explanations instead of just labels. Furthermore, given the inherent imperfection of labeling functions, we find that a simple rule-based semantic parser suffices.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The standard protocol for obtaining a labeled dataset is to have a human annotator view each example, assess its relevance, and provide a label (e.g., positive or negative for binary classification)."
                    },
                    {
                        "id": 1,
                        "string": "However, this only provides one bit of information per example."
                    },
                    {
                        "id": 2,
                        "string": "This invites the question: how can we get more information per example, given that the annotator has already spent the effort reading and understanding an example?"
                    },
                    {
                        "id": 3,
                        "string": "Previous works have relied on identifying relevant parts of the input such as labeling features (Druck et al., 2009; Raghavan et al., 2005; Liang et al., 2009) , highlighting rationale phrases in Both cohorts showed signs of optic nerve toxicity due to ethambutol."
                    },
                    {
                        "id": 4,
                        "string": "Example Label Explanation Because the words \"due to\" occur between the chemical and the disease."
                    },
                    {
                        "id": 5,
                        "string": "Does this chemical cause this disease?"
                    },
                    {
                        "id": 6,
                        "string": "Why do you think so?"
                    },
                    {
                        "id": 7,
                        "string": "Labeling Function def lf(x): return (1 if \"due to\" in between(x.chemical, x.disease) else 0) Figure 1 : In BabbleLabble, the user provides a natural language explanation for each labeling decision."
                    },
                    {
                        "id": 8,
                        "string": "These explanations are parsed into labeling functions that convert unlabeled data into a large labeled dataset for training a classifier."
                    },
                    {
                        "id": 9,
                        "string": "text (Zaidan and Eisner, 2008; Arora and Nyberg, 2009 ), or marking relevant regions in images (Ahn et al., 2006) ."
                    },
                    {
                        "id": 10,
                        "string": "But there are certain types of information which cannot be easily reduced to annotating a portion of the input, such as the absence of a certain word, or the presence of at least two words."
                    },
                    {
                        "id": 11,
                        "string": "In this work, we tap into the power of natural language and allow annotators to provide supervision to a classifier via natural language explanations."
                    },
                    {
                        "id": 12,
                        "string": "Specifically, we propose a framework in which annotators provide a natural language explanation for each label they assign to an example (see Figure 1) ."
                    },
                    {
                        "id": 13,
                        "string": "These explanations are parsed into logical forms representing labeling functions (LFs), functions that heuristically map examples to labels (Ratner et al., 2016) ."
                    },
                    {
                        "id": 14,
                        "string": "The labeling functions are Unlabeled Examples + Explanations Label whether person 1 is married to person 2 Labeling Functions Filters Label Matrix None of us knows what happened at Kane's home Aug. 2, but it is telling that the NHL has not suspended Kane."
                    },
                    {
                        "id": 15,
                        "string": "False, because person 1 and person 2 in the sentence are identical."
                    },
                    {
                        "id": 16,
                        "string": "Dr. Michael Richards and real estate and insurance businessman Gary Kirke did not attend the event."
                    },
                    {
                        "id": 17,
                        "string": "False, because the last word of person 1 is different than the last word of person 2. x 1 x 2 then executed on many unlabeled examples, resulting in a large, weakly-supervised training set that is then used to train a classifier."
                    },
                    {
                        "id": 18,
                        "string": "Semantic parsing of natural language into logical forms is recognized as a challenging problem and has been studied extensively (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang et al., 2011; Liang, 2016) ."
                    },
                    {
                        "id": 19,
                        "string": "One of our major findings is that in our setting, even a simple rule-based semantic parser suffices for three reasons: First, we find that the majority of incorrect LFs can be automatically filtered out either semantically (e.g., is it consistent with the associated example?)"
                    },
                    {
                        "id": 20,
                        "string": "or pragmatically (e.g., does it avoid assigning the same label to the entire training set?)."
                    },
                    {
                        "id": 21,
                        "string": "Second, LFs near the gold LF in the space of logical forms are often just as accurate (and sometimes even more accurate)."
                    },
                    {
                        "id": 22,
                        "string": "Third, techniques for combining weak supervision sources are built to tolerate some noise (Alfonseca et al., 2012; Takamatsu et al., 2012; Ratner et al., 2018) ."
                    },
                    {
                        "id": 23,
                        "string": "The significance of this is that we can deploy the same semantic parser across tasks without task-specific training."
                    },
                    {
                        "id": 24,
                        "string": "We show how we can tackle a real-world biomedical application with the same semantic parser used to extract instances of spouses."
                    },
                    {
                        "id": 25,
                        "string": "Our work is most similar to that of Srivastava et al."
                    },
                    {
                        "id": 26,
                        "string": "(2017) , who also use natural language explanations to train a classifier, but with two important differences."
                    },
                    {
                        "id": 27,
                        "string": "First, they jointly train a task-specific semantic parser and classifier, whereas we use a simple rule-based parser."
                    },
                    {
                        "id": 28,
                        "string": "In Section 4, we find that in our weak supervision framework, the rule-based semantic parser and the perfect parser yield nearly identical downstream performance."
                    },
                    {
                        "id": 29,
                        "string": "Second, while they use the logical forms of explanations to produce features that are fed directly to a classifier, we use them as functions for labeling a much larger training set."
                    },
                    {
                        "id": 30,
                        "string": "In Section 4, we show that using functions yields a 9.5 F1 improvement (26% relative improvement) over features, and that the F1 score scales with the amount of available unlabeled data."
                    },
                    {
                        "id": 31,
                        "string": "We validate our approach on two existing datasets from the literature (extracting spouses from news articles and disease-causing chemicals from biomedical abstracts) and one real-world use case with our biomedical collaborators at Oc-camzRazor to extract protein-kinase interactions related to Parkinson's disease from text."
                    },
                    {
                        "id": 32,
                        "string": "We find empirically that users are able to train classifiers with comparable F1 scores up to two orders of magnitude faster when they provide natural language explanations instead of individual labels."
                    },
                    {
                        "id": 33,
                        "string": "Our code and data can be found at https:// github.com/HazyResearch/babble."
                    },
                    {
                        "id": 34,
                        "string": "The BabbleLabble Framework The BabbleLabble framework converts natural language explanations and unlabeled data into a noisily-labeled training set (see Figure 2 )."
                    },
                    {
                        "id": 35,
                        "string": "There are three key components: a semantic parser, a filter bank, and a label aggregator."
                    },
                    {
                        "id": 36,
                        "string": "The semantic Figure 3 : Valid parses are found by iterating over increasingly large subspans of the input looking for matches among the right hand sides of the rules in the grammar."
                    },
                    {
                        "id": 37,
                        "string": "Rules are either lexical (converting tokens into symbols), unary (converting one symbol into another symbol), or compositional (combining many symbols into a single higher-order symbol)."
                    },
                    {
                        "id": 38,
                        "string": "A rule may optionally ignore unrecognized tokens in a span (denoted here with a dashed line)."
                    },
                    {
                        "id": 39,
                        "string": "parser converts natural language explanations into a set of logical forms representing labeling functions (LFs)."
                    },
                    {
                        "id": 40,
                        "string": "The filter bank removes as many incorrect LFs as possible without requiring ground truth labels."
                    },
                    {
                        "id": 41,
                        "string": "The remaining LFs are applied to unlabeled examples to produce a matrix of labels."
                    },
                    {
                        "id": 42,
                        "string": "This label matrix is passed into the label aggregator, which combines these potentially conflicting and overlapping labels into one label for each example."
                    },
                    {
                        "id": 43,
                        "string": "The resulting labeled examples are then used to train an arbitrary discriminative model."
                    },
                    {
                        "id": 44,
                        "string": "Explanations To create the input explanations, the user views a subset S of an unlabeled dataset D (where |S| |D|) and provides for each input x i ∈ S a label y i and a natural language explanation e i , a sentence explaining why the example should receive that label."
                    },
                    {
                        "id": 45,
                        "string": "The explanation e i generally refers to specific aspects of the example (e.g., in Figure 2 , the location of a specific string \"his wife\")."
                    },
                    {
                        "id": 46,
                        "string": "Semantic Parser The semantic parser takes a natural language explanation e i and returns a set of LFs (logical forms or labeling functions) {f 1 , ."
                    },
                    {
                        "id": 47,
                        "string": "."
                    },
                    {
                        "id": 48,
                        "string": "."
                    },
                    {
                        "id": 49,
                        "string": ", f k } of the form f i : X → {−1, 0, 1} in a binary classification setting, with 0 representing abstention."
                    },
                    {
                        "id": 50,
                        "string": "We emphasize that the goal of this semantic parser is not to generate the single correct parse, but rather to have coverage over many potentially useful LFs."
                    },
                    {
                        "id": 51,
                        "string": "1 1 Indeed, we find empirically that an incorrect LF nearby the correct one in the space of logical forms actually has higher end-task accuracy 57% of the time (see Section 4.2)."
                    },
                    {
                        "id": 52,
                        "string": "We choose a simple rule-based semantic parser that can be used without any training."
                    },
                    {
                        "id": 53,
                        "string": "Formally, the parser uses a set of rules of the form α → β, where α can be replaced by the token(s) in β (see Figure 3 for example rules)."
                    },
                    {
                        "id": 54,
                        "string": "To identify candidate LFs, we recursively construct a set of valid parses for each span of the explanation, based on the substitutions defined by the grammar rules."
                    },
                    {
                        "id": 55,
                        "string": "At the end, the parser returns all valid parses (LFs in our case) corresponding to the entire explanation."
                    },
                    {
                        "id": 56,
                        "string": "We also allow an arbitrary number of tokens in a given span to be ignored when looking for a matching rule."
                    },
                    {
                        "id": 57,
                        "string": "This improves the ability of the parser to handle unexpected input, such as unknown words or typos, since the portions of the input that are parseable can still result in a valid parse."
                    },
                    {
                        "id": 58,
                        "string": "For example, in Figure 3 , the word \"person\" is ignored."
                    },
                    {
                        "id": 59,
                        "string": "All predicates included in our grammar (summarized in Table 1 ) are provided to annotators, with minimal examples of each in use (Appendix A)."
                    },
                    {
                        "id": 60,
                        "string": "Importantly, all rules are domain independent (e.g., all three relation extraction tasks that we tested used the same grammar), making the semantic parser easily transferrable to new domains."
                    },
                    {
                        "id": 61,
                        "string": "Additionally, while this paper focuses on the task of relation extraction, in principle the BabbleLabble framework can be applied to other tasks or settings by extending the grammar with the necessary primitives (e.g., adding primitives for rows and columns to enable explanations about the alignments of words in tables)."
                    },
                    {
                        "id": 62,
                        "string": "To guide the construction of the grammar, we collected 500 explanations for the Spouse domain from workers Apply a functional primitive to each member of list/set to transform or filter the elements word distance, character distance Return the distance between two strings by words or characters left, right, between, within Return as a string the text that is left/right/within some distance of a string or between two designated strings on Amazon Mechanical Turk and added support for the most commonly used predicates."
                    },
                    {
                        "id": 63,
                        "string": "These were added before the experiments described in Section 4."
                    },
                    {
                        "id": 64,
                        "string": "Altogether the grammar contains 200 rule templates."
                    },
                    {
                        "id": 65,
                        "string": "Filter Bank The input to the filter bank is a set of candidate LFs produced by the semantic parser."
                    },
                    {
                        "id": 66,
                        "string": "The purpose of the filter bank is to discard as many incorrect LFs as possible without requiring additional labels."
                    },
                    {
                        "id": 67,
                        "string": "It consists of two classes of filters: semantic and pragmatic."
                    },
                    {
                        "id": 68,
                        "string": "Recall that each explanation e i is collected in the context of a specific labeled example (x i , y i )."
                    },
                    {
                        "id": 69,
                        "string": "The semantic filter checks for LFs that are inconsistent with their corresponding example; formally, any LF f for which f (x i ) = y i is discarded."
                    },
                    {
                        "id": 70,
                        "string": "For example, in the first explanation in Figure 2 , the word \"right\" can be interpreted as either \"immediately\" (as in \"right before\") or simply \"to the right.\""
                    },
                    {
                        "id": 71,
                        "string": "The latter interpretation results in a function that is inconsistent with the associated example (since \"his wife\" is actually to the left of person 2), so it can be safely removed."
                    },
                    {
                        "id": 72,
                        "string": "The pragmatic filters removes LFs that are constant, redundant, or correlated."
                    },
                    {
                        "id": 73,
                        "string": "For example, in Figure 2 , LF 2a is constant, as it labels every example positively (since all examples contain two people from the same sentence)."
                    },
                    {
                        "id": 74,
                        "string": "LF 3b is redundant, since even though it has a different syntax tree from LF 3a, it labels the training set identically and therefore provides no new signal."
                    },
                    {
                        "id": 75,
                        "string": "Finally, out of all LFs from the same explanation that pass all the other filters, we keep only the most specific (lowest coverage) LF."
                    },
                    {
                        "id": 76,
                        "string": "This prevents multiple correlated LFs from a single example from dominating."
                    },
                    {
                        "id": 77,
                        "string": "As we show in Section 4, over three tasks, the filter bank removes 86% of incorrect parses, and the incorrect ones that remain have average endtask accuracy within 2.5% of the corresponding correct parses."
                    },
                    {
                        "id": 78,
                        "string": "Label Aggregator The label aggregator combines multiple (potentially conflicting) suggested labels from the LFs and combines them into a single probabilistic label per example."
                    },
                    {
                        "id": 79,
                        "string": "Concretely, if m LFs pass the filter bank and are applied to n examples, the label aggregator implements a function f : {−1, 0, 1} m×n → [0, 1] n ."
                    },
                    {
                        "id": 80,
                        "string": "A naive solution would be to use a simple majority vote, but this fails to account for the fact that LFs can vary widely in accuracy and coverage."
                    },
                    {
                        "id": 81,
                        "string": "Instead, we use data programming (Ratner et al., 2016) , which models the relationship between the true labels and the output of the labeling functions as a factor graph."
                    },
                    {
                        "id": 82,
                        "string": "More specifically, given the true labels Y ∈ {−1, 1} n (latent) and label matrix Λ ∈ {−1, 0, 1} m×n (observed) where Λ i,j = LF i (x j ), we define two types of factors representing labeling propensity and accuracy: φ Lab i,j (Λ, Y ) = 1{Λ i,j = 0} (1) φ Acc i,j (Λ, Y ) = 1{Λ i,j = y j }."
                    },
                    {
                        "id": 83,
                        "string": "(2) Denoting the vector of factors pertaining to a given data point x j as φ j (Λ, Y ) ∈ R m , define the model: p w (Λ, Y ) = Z −1 w exp n j=1 w · φ j (Λ, Y ) , (3) They include Joan Ridsdale, a 62-year-old payroll administrator from County Durham who was hit with a €16,000 tax bill when her husband Gordon died."
                    },
                    {
                        "id": 84,
                        "string": "Spouse Disease Protein Example Explanation True, because the phrase \"her husband\" is within three words of person 2."
                    },
                    {
                        "id": 85,
                        "string": "Example Explanation Young women on replacement estrogens for ovarian failure after cancer therapy may also have increased risk of endometrial carcinoma and should be examined periodically."
                    },
                    {
                        "id": 86,
                        "string": "(person 1, person 2) (chemical, disease) (protein, kinase) True, because \"risk of\" comes before the disease."
                    },
                    {
                        "id": 87,
                        "string": "Here we show that c-Jun N-terminal kinases JNK1, JNK2 and JNK3 phosphorylate tau at many serine/threonine-prolines, as assessed by the generation of the epitopes of phosphorylation-dependent anti-tau antibodies."
                    },
                    {
                        "id": 88,
                        "string": "Example Explanation True, because at least one of the words 'phosphorylation', 'phosphorylate', 'phosphorylated', 'phosphorylates' is found in the sentence and the number of words between the protein and kinase is smaller than 8.\""
                    },
                    {
                        "id": 89,
                        "string": "Figure 4 : An example and explanation for each of the three datasets."
                    },
                    {
                        "id": 90,
                        "string": "where w ∈ R 2m is the weight vector and Z w is the normalization constant."
                    },
                    {
                        "id": 91,
                        "string": "To learn this model without knowing the true labels Y , we minimize the negative log marginal likelihood given the observed labels Λ: w = arg min w − log Y p w (Λ, Y ) (4) using SGD and Gibbs sampling for inference, and then use the marginals pŵ(Y | Λ) as probabilistic training labels."
                    },
                    {
                        "id": 92,
                        "string": "Intuitively, we infer accuracies of the LFs based on the way they overlap and conflict with one another."
                    },
                    {
                        "id": 93,
                        "string": "Since noisier LFs are more likely to have high conflict rates with others, their corresponding accuracy weights in w will be smaller, reducing their influence on the aggregated labels."
                    },
                    {
                        "id": 94,
                        "string": "Discriminative Model The noisily-labeled training set that the label aggregator outputs is used to train an arbitrary discriminative model."
                    },
                    {
                        "id": 95,
                        "string": "One advantage of training a discriminative model on the task instead of using the label aggregator as a classifier directly is that the label aggregator only takes into account those signals included in the LFs."
                    },
                    {
                        "id": 96,
                        "string": "A discriminative model, on the other hand, can incorporate features that were not identified by the user but are nevertheless informative."
                    },
                    {
                        "id": 97,
                        "string": "2 Consequently, even examples for which all LFs abstained can still be classified correctly."
                    },
                    {
                        "id": 98,
                        "string": "On the three tasks we evaluate, using the discriminative model averages 4.3 F1 points higher than using the label aggregator directly."
                    },
                    {
                        "id": 99,
                        "string": "For the results reported in this paper, our discriminative model is a simple logistic regression classifier with generic features defined over dependency paths."
                    },
                    {
                        "id": 100,
                        "string": "3 bigrams, and trigrams of lemmas, dependency labels, and part of speech tags found in the siblings, parents, and nodes between the entities in the dependency parse of the sentence."
                    },
                    {
                        "id": 101,
                        "string": "We found this to perform better on average than a biLSTM, particularly for the traditional supervision baselines with small training set sizes; it also provided easily interpretable features for analysis."
                    },
                    {
                        "id": 102,
                        "string": "Experimental Setup We evaluate the accuracy of BabbleLabble on three relation extraction tasks, which we refer to as Spouse, Disease, and Protein."
                    },
                    {
                        "id": 103,
                        "string": "The goal of each task is to train a classifier for predicting whether the two entities in an example are participating in the relationship of interest, as described below."
                    },
                    {
                        "id": 104,
                        "string": "Datasets Statistics for each dataset are reported in Table 2, with one example and one explanation for each given in Figure 4 and additional explanations shown in Appendix B."
                    },
                    {
                        "id": 105,
                        "string": "In the Spouse task, annotators were shown a sentence with two highlighted names and asked to label whether the sentence suggests that the two people are spouses."
                    },
                    {
                        "id": 106,
                        "string": "Sentences were pulled from the Signal Media dataset of news articles (Corney , 2016) ."
                    },
                    {
                        "id": 107,
                        "string": "Ground truth data was collected from Amazon Mechanical Turk workers, accepting the majority label over three annotations."
                    },
                    {
                        "id": 108,
                        "string": "The 30 explanations we report on were sampled randomly from a pool of 200 that were generated by 10 graduate students unfamiliar with BabbleLabble."
                    },
                    {
                        "id": 109,
                        "string": "In the Disease task, annotators were shown a sentence with highlighted names of a chemical and a disease and asked to label whether the sentence suggests that the chemical causes the disease."
                    },
                    {
                        "id": 110,
                        "string": "Sentences and ground truth labels came from a portion of the 2015 BioCreative chemical-disease relation dataset (Wei et al., 2015) , which contains abstracts from PubMed."
                    },
                    {
                        "id": 111,
                        "string": "Because this task requires specialized domain expertise, we obtained explanations by having someone unfamiliar with BabbleLabble translate from Python to natural language labeling functions from an existing publication that explored applying weak supervision to this task (Ratner et al., 2018) ."
                    },
                    {
                        "id": 112,
                        "string": "The Protein task was completed in conjunction with OccamzRazor, a neuroscience company targeting biological pathways of Parkinson's disease."
                    },
                    {
                        "id": 113,
                        "string": "For this task, annotators were shown a sentence from the relevant biomedical literature with highlighted names of a protein and a kinase and asked to label whether or not the kinase influences the protein in terms of a physical interaction or phosphorylation."
                    },
                    {
                        "id": 114,
                        "string": "The annotators had domain expertise but minimal programming experience, making BabbleLabble a natural fit for their use case."
                    },
                    {
                        "id": 115,
                        "string": "Experimental Settings Text documents are tokenized with spaCy."
                    },
                    {
                        "id": 116,
                        "string": "4 The semantic parser is built on top of the Python-based 4 https://github.com/explosion/spaCy implementation SippyCup."
                    },
                    {
                        "id": 117,
                        "string": "5 On a single core, parsing 360 explanations takes approximately two seconds."
                    },
                    {
                        "id": 118,
                        "string": "We use existing implementations of the label aggregator, feature library, and discriminative classifier described in Sections 2.4-2.5 provided by the open-source project Snorkel (Ratner et al., 2018) ."
                    },
                    {
                        "id": 119,
                        "string": "Hyperparameters for all methods we report were selected via random search over thirty configurations on the same held-out development set."
                    },
                    {
                        "id": 120,
                        "string": "We searched over learning rate, batch size, L 2 regularization, and the subsampling rate (for improving balance between classes)."
                    },
                    {
                        "id": 121,
                        "string": "6 All reported F1 scores are the average value of 40 runs with random seeds and otherwise identical settings."
                    },
                    {
                        "id": 122,
                        "string": "Experimental Results We evaluate the performance of BabbleLabble with respect to its rate of improvement by number of user inputs, its dependence on correctly parsed logical forms, and the mechanism by which it utilizes logical forms."
                    },
                    {
                        "id": 123,
                        "string": "High Bandwidth Supervision In Table 3 we report the average F1 score of a classifier trained with BabbleLabble using 30 explanations or traditional supervision with the indicated number of labels."
                    },
                    {
                        "id": 124,
                        "string": "On average, it took the same amount of time to collect 30 explanations as 60 labels."
                    },
                    {
                        "id": 125,
                        "string": "7 We observe that in all three tasks, BabbleLabble achieves a given F1 score with far fewer user inputs than traditional supervision, by Table 4 : The number of LFs generated from 30 explanations (pre-filters), discarded by the filter bank, and remaining (post-filters), along with the percentage of LFs that were correctly parsed from their corresponding explanations."
                    },
                    {
                        "id": 126,
                        "string": "as much as 100 times in the case of the Spouse task."
                    },
                    {
                        "id": 127,
                        "string": "Because explanations are applied to many unlabeled examples, each individual input from the user can implicitly contribute many (noisy) labels to the learning algorithm."
                    },
                    {
                        "id": 128,
                        "string": "We also observe, however, that once the number of labeled examples is sufficiently large, traditional supervision once again dominates, since ground truth labels are preferable to noisy ones generated by labeling functions."
                    },
                    {
                        "id": 129,
                        "string": "However, in domains where there is much more unlabeled data available than labeled data (which in our experience is most domains), we can gain in supervision efficiency from using BabbleLabble."
                    },
                    {
                        "id": 130,
                        "string": "Of those explanations that did not produce a correct LF, 4% were caused by the explanation referring to unsupported concepts (e.g., one explanation referred to \"the subject of the sentence,\" which our simple parser doesn't support)."
                    },
                    {
                        "id": 131,
                        "string": "Another 2% were caused by human errors (the correct LF for the explanation was inconsistent with the example)."
                    },
                    {
                        "id": 132,
                        "string": "The remainder were due to unrecognized paraphrases (e.g., the explanation said \"the order of appearance is X, Y\" instead of a supported phrasing like \"X comes before Y\")."
                    },
                    {
                        "id": 133,
                        "string": "Utility of Incorrect Parses In Table 4 , we report LF summary statistics before and after filtering."
                    },
                    {
                        "id": 134,
                        "string": "LF correctness is based on exact match with a manually generated parse for each explanation."
                    },
                    {
                        "id": 135,
                        "string": "Surprisingly, the simple heuristic-based filter bank successfully removes over 95% of incorrect LFs in all three tasks, resulting in final LF sets that are 86% correct on average."
                    },
                    {
                        "id": 136,
                        "string": "Furthermore, among those LFs that pass through the filter bank, we found that the average difference in end-task accuracy between correct and incorrect parses is less than 2.5%."
                    },
                    {
                        "id": 137,
                        "string": "Intuitively, the filters are effective because it is quite difficult for an LF to be parsed from the explana-  tion, label its own example correctly (passing the semantic filter), and not label all examples in the training set with the same label or identically to another LF (passing the pragmatic filter)."
                    },
                    {
                        "id": 138,
                        "string": "We went one step further: using the LFs that would be produced by a perfect semantic parser as starting points, we searched for \"nearby\" LFs (LFs differing by only one predicate) with higher endtask accuracy on the test set and succeeded 57% of the time (see Figure 5 for an example)."
                    },
                    {
                        "id": 139,
                        "string": "In other words, when users provide explanations, the signals they describe provide good starting points, but they are actually unlikely to be optimal."
                    },
                    {
                        "id": 140,
                        "string": "This observation is further supported by Table 5 , which shows that the filter bank is necessary to remove clearly irrelevant LFs, but with that in place, the simple rule-based semantic parser and a perfect parser have nearly identical average F1 scores."
                    },
                    {
                        "id": 141,
                        "string": "Using LFs as Functions or Features Once we have relevant logical forms from userprovided explanations, we have multiple options for how to use them."
                    },
                    {
                        "id": 142,
                        "string": "Srivastava et al."
                    },
                    {
                        "id": 143,
                        "string": "(2017) propose using these logical forms as features in a linear classifier."
                    },
                    {
                        "id": 144,
                        "string": "We choose instead to use them as functions for weakly supervising the creation of a larger training set via data programming (Ratner et al., 2016) ."
                    },
                    {
                        "id": 145,
                        "string": "In Table 6 , we compare the two approaches directly, finding that the the data programming approach outperforms a feature-based one by 9.5 F1 points with the rule-based parser, and by 4.5 points with a perfect parser."
                    },
                    {
                        "id": 146,
                        "string": "We attribute this difference primarily to the ability of data programming to utilize unlabeled data."
                    },
                    {
                        "id": 147,
                        "string": "In Figure 6 , we show how the data programming approach improves with the number of unlabeled examples, even as the number of LFs remains constant."
                    },
                    {
                        "id": 148,
                        "string": "We also observe qualitatively that data programming exposes the classifier to additional patterns that are correlated with our explanations but not mentioned directly."
                    },
                    {
                        "id": 149,
                        "string": "For example, in the Disease task, two of the features weighted most def LF_1a(x): return (-1 if any(w.startswith(\"improv\") for w in left(x.person2)) else 0) Correct False, because a word starting with \"improve\" appears before the chemical."
                    },
                    {
                        "id": 150,
                        "string": "Incorrect Explanation Labeling Function Correctness Accuracy Figure 6 : When logical forms of natural language explanations are used as functions for data programming (as they are in BabbleLabble), performance can improve with the addition of unlabeled data, whereas using them as features does not benefit from unlabeled data."
                    },
                    {
                        "id": 151,
                        "string": "highly by the discriminative model were the presence of the trigrams \"could produce a\" or \"support diagnosis of\" between the chemical and disease, despite none of these words occurring in the explanations for that task."
                    },
                    {
                        "id": 152,
                        "string": "In Table 6 we see a 4.3 F1 point improvement (10%) when we use the discriminative model that can take advantage of these features rather than applying the LFs directly to the test set and making predictions based on the output of the label aggregator."
                    },
                    {
                        "id": 153,
                        "string": "Related Work and Discussion Our work has two themes: modeling natural language explanations/instructions and learning from weak supervision."
                    },
                    {
                        "id": 154,
                        "string": "The closest body of work is on \"learning from natural language.\""
                    },
                    {
                        "id": 155,
                        "string": "As mentioned earlier, Srivastava et al."
                    },
                    {
                        "id": 156,
                        "string": "(2017) convert natural language explanations into classifier features (whereas we convert them into labeling functions)."
                    },
                    {
                        "id": 157,
                        "string": "Goldwasser and Roth (2011) Table 6 : F1 scores obtained using explanations as functions for data programming (BL) or features (Feat), optionally with no discriminative model (-DM) or using a perfect parser (+PP)."
                    },
                    {
                        "id": 158,
                        "string": "guage into concepts (e.g., the rules of a card game)."
                    },
                    {
                        "id": 159,
                        "string": "Ling and Fidler (2017) use natural language explanations to assist in supervising an image captioning model."
                    },
                    {
                        "id": 160,
                        "string": "Weston (2016) ; Li et al."
                    },
                    {
                        "id": 161,
                        "string": "(2016) learn from natural language feedback in a dialogue."
                    },
                    {
                        "id": 162,
                        "string": "Wang et al."
                    },
                    {
                        "id": 163,
                        "string": "(2017) convert natural language definitions to rules in a semantic parser to build up progressively higher-level concepts."
                    },
                    {
                        "id": 164,
                        "string": "We lean on the formalism of semantic parsing (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang, 2016) ."
                    },
                    {
                        "id": 165,
                        "string": "One notable trend is to learn semantic parsers from weak supervision (Clarke et al., 2010; Liang et al., 2011) , whereas our goal is to obtain weak supervision signal from semantic parsers."
                    },
                    {
                        "id": 166,
                        "string": "The broader topic of weak supervision has received much attention; we mention some works most related to relation extraction."
                    },
                    {
                        "id": 167,
                        "string": "In distant supervision (Craven et al., 1999; Mintz et al., 2009) and multi-instance learning (Riedel et al., 2010; Hoffmann et al., 2011) , an existing knowledge base is used to (probabilistically) impute a training set."
                    },
                    {
                        "id": 168,
                        "string": "Various extensions have focused on aggregating a variety of supervision sources by learning generative models from noisy labels (Alfonseca et al., 2012; Takamatsu et al., 2012; Roth and Klakow, 2013; Ratner et al., 2016; Varma et al., 2017) ."
                    },
                    {
                        "id": 169,
                        "string": "Finally, while we have used natural language explanations as input to train models, they can also be output to interpret models (Krening et al., 2017; Lei et al., 2016) ."
                    },
                    {
                        "id": 170,
                        "string": "More generally, from a machine learning perspective, labels are the primary asset, but they are a low bandwidth signal between annotators and the learning algorithm."
                    },
                    {
                        "id": 171,
                        "string": "Natural language opens up a much higher-bandwidth communication channel."
                    },
                    {
                        "id": 172,
                        "string": "We have shown promising results in relation extraction (where one explanation can be \"worth\" 100 labels), and it would be interesting to extend our framework to other tasks and more interactive settings."
                    },
                    {
                        "id": 173,
                        "string": "Reproducibility The code, data, and experiments for this paper are available on the CodaLab platform at https: //worksheets.codalab.org/worksheets/ 0x900e7e41deaa4ec5b2fe41dc50594548/."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 33
                    },
                    {
                        "section": "The BabbleLabble Framework",
                        "n": "2",
                        "start": 34,
                        "end": 42
                    },
                    {
                        "section": "Explanations",
                        "n": "2.1",
                        "start": 43,
                        "end": 45
                    },
                    {
                        "section": "Semantic Parser",
                        "n": "2.2",
                        "start": 46,
                        "end": 64
                    },
                    {
                        "section": "Filter Bank",
                        "n": "2.3",
                        "start": 65,
                        "end": 77
                    },
                    {
                        "section": "Label Aggregator",
                        "n": "2.4",
                        "start": 78,
                        "end": 93
                    },
                    {
                        "section": "Discriminative Model",
                        "n": "2.5",
                        "start": 94,
                        "end": 101
                    },
                    {
                        "section": "Experimental Setup",
                        "n": "3",
                        "start": 102,
                        "end": 103
                    },
                    {
                        "section": "Datasets",
                        "n": "3.1",
                        "start": 104,
                        "end": 114
                    },
                    {
                        "section": "Experimental Settings",
                        "n": "3.2",
                        "start": 115,
                        "end": 119
                    },
                    {
                        "section": "Experimental Results",
                        "n": "4",
                        "start": 120,
                        "end": 122
                    },
                    {
                        "section": "High Bandwidth Supervision",
                        "n": "4.1",
                        "start": 123,
                        "end": 132
                    },
                    {
                        "section": "Utility of Incorrect Parses",
                        "n": "4.2",
                        "start": 133,
                        "end": 140
                    },
                    {
                        "section": "Using LFs as Functions or Features",
                        "n": "4.3",
                        "start": 141,
                        "end": 152
                    },
                    {
                        "section": "Related Work and Discussion",
                        "n": "5",
                        "start": 153,
                        "end": 173
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1319-Figure1-1.png",
                        "caption": "Figure 1: In BabbleLabble, the user provides a natural language explanation for each labeling decision. These explanations are parsed into labeling functions that convert unlabeled data into a large labeled dataset for training a classifier.",
                        "page": 0,
                        "bbox": {
                            "x1": 310.56,
                            "x2": 523.1999999999999,
                            "y1": 246.72,
                            "y2": 445.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Table3-1.png",
                        "caption": "Table 3: F1 scores obtained by a classifier trained with BabbleLabble (BL) using 30 explanations or with traditional supervision (TS) using the specified number of individually labeled examples. BabbleLabble achieves the same F1 score as traditional supervision while using fewer user inputs by a factor of over 5 (Protein) to over 100 (Spouse).",
                        "page": 5,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 68.64,
                            "y2": 167.04
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Figure2-1.png",
                        "caption": "Figure 2: Natural language explanations are parsed into candidate labeling functions (LFs). Many incorrect LFs are filtered out automatically by the filter bank. The remaining functions provide heuristic labels over the unlabeled dataset, which are aggregated into one noisy label per example, yielding a large, noisily-labeled training set for a classifier.",
                        "page": 1,
                        "bbox": {
                            "x1": 84.96,
                            "x2": 511.2,
                            "y1": 66.72,
                            "y2": 256.32
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Table4-1.png",
                        "caption": "Table 4: The number of LFs generated from 30 explanations (pre-filters), discarded by the filter bank, and remaining (post-filters), along with the percentage of LFs that were correctly parsed from their corresponding explanations.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 289.44,
                            "y1": 62.4,
                            "y2": 130.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Table5-1.png",
                        "caption": "Table 5: F1 scores obtained using BabbleLabble with no filter bank (BL-FB), as normal (BL), and with a perfect parser (BL+PP) simulated by hand.",
                        "page": 6,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 528.0,
                            "y1": 62.4,
                            "y2": 130.07999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Figure3-1.png",
                        "caption": "Figure 3: Valid parses are found by iterating over increasingly large subspans of the input looking for matches among the right hand sides of the rules in the grammar. Rules are either lexical (converting tokens into symbols), unary (converting one symbol into another symbol), or compositional (combining many symbols into a single higher-order symbol). A rule may optionally ignore unrecognized tokens in a span (denoted here with a dashed line).",
                        "page": 2,
                        "bbox": {
                            "x1": 121.92,
                            "x2": 477.12,
                            "y1": 66.72,
                            "y2": 204.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Table6-1.png",
                        "caption": "Table 6: F1 scores obtained using explanations as functions for data programming (BL) or features (Feat), optionally with no discriminative model (-DM) or using a perfect parser (+PP).",
                        "page": 7,
                        "bbox": {
                            "x1": 308.64,
                            "x2": 523.1999999999999,
                            "y1": 227.51999999999998,
                            "y2": 295.2
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Figure5-1.png",
                        "caption": "Figure 5: Incorrect LFs often still provide useful signal. On top is an incorrect LF produced for the Disease task that had the same accuracy as the correct LF. On bottom is a correct LF from the Spouse task and a more accurate incorrect LF discovered by randomly perturbing one predicate at a time as described in Section 4.2. (Person 2 is always the second person in the sentence).",
                        "page": 7,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 524.16,
                            "y1": 65.75999999999999,
                            "y2": 148.32
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Figure6-1.png",
                        "caption": "Figure 6: When logical forms of natural language explanations are used as functions for data programming (as they are in BabbleLabble), performance can improve with the addition of unlabeled data, whereas using them as features does not benefit from unlabeled data.",
                        "page": 7,
                        "bbox": {
                            "x1": 76.32,
                            "x2": 267.36,
                            "y1": 229.44,
                            "y2": 372.47999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Table1-1.png",
                        "caption": "Table 1: Predicates in the grammar supported by BabbleLabble’s rule-based semantic parser.",
                        "page": 3,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 295.2,
                            "y1": 62.4,
                            "y2": 420.0
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Table2-1.png",
                        "caption": "Table 2: The total number of unlabeled training examples (a pair of annotated entities in a sentence), labeled development examples (for hyperparameter tuning), labeled test examples (for assessment), and the fraction of positive labels in the test split.",
                        "page": 4,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 525.12,
                            "y1": 204.48,
                            "y2": 271.2
                        }
                    },
                    {
                        "filename": "../figure/image/1319-Figure4-1.png",
                        "caption": "Figure 4: An example and explanation for each of the three datasets.",
                        "page": 4,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 524.16,
                            "y1": 64.32,
                            "y2": 164.16
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-74"
        },
        {
            "slides": {
                "0": {
                    "title": "Background Dialog",
                    "text": [
                        "Personal assistant, helps people complete specific tasks",
                        "Combination of rules and statistical components",
                        "No specific goal, attempts to produce natural responses",
                        "Using variants of seq2seq model"
                    ],
                    "page_nums": [
                        1
                    ],
                    "images": []
                },
                "1": {
                    "title": "Background Neural Model",
                    "text": [
                        "utterance-response: n-to-1 relationship e.g., the response Must support! Cheer! is used for 1216 different input utterances",
                        "My friends and I are shocked! pre-defined a set of topics",
                        "from an external corpus rely on external corpus",
                        "treat all the utterance-response pairs uniformly employ a single model to learn the mapping between utterance and response",
                        "introduce latent responding factors to model multiple responding mechanisms lack of interpretation",
                        "favor such general responses with high frequency"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": [
                        "figure/image/1328-Figure1-1.png"
                    ]
                },
                "2": {
                    "title": "How to capture different utterance response relationships",
                    "text": [
                        "Our motivation comes from"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Human Conversation Process",
                    "text": [
                        "Do you know a good eating place for Australian special food?",
                        "knowledge state dialogue partner",
                        "Good Australian eating places include steak, seafood, cake, etc. What do you want to choose?"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "4": {
                    "title": "Model Architecture",
                    "text": [
                        "introduce an explicit specificity control variable s to represent the response purpose",
                        "s summarizes many latent factors into one variable s has explicit meaning on specificity actively controls the generation of the response",
                        "knowledge state dialogue partner"
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "7": {
                    "title": "Model Training",
                    "text": [
                        "decides how specific we should reply",
                        "the specificity control variable interacts with the usage representation of words through the layer let the word usage representation regress to the variable through certain mapping function (sigmoid) , )U",
                        "specificity control variable 2U exp(",
                        "0 denotes the most general response",
                        "1 denotes the most specific response variance usage representation"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": [
                        "figure/image/1328-Figure2-1.png"
                    ]
                }
            },
            "paper_title": "Learning to Control the Specificity in Neural Response Generation",
            "paper_id": "1328",
            "paper": {
                "title": "Learning to Control the Specificity in Neural Response Generation",
                "abstract": "In conversation, a general response (e.g., \"I don't know\") could correspond to a large variety of input utterances. Previous generative conversational models usually employ a single model to learn the relationship between different utteranceresponse pairs, thus tend to favor general and trivial responses which appear frequently. To address this problem, we propose a novel controlled response generation mechanism to handle different utterance-response relationships in terms of specificity. Specifically, we introduce an explicit specificity control variable into a sequence-to-sequence model, which interacts with the usage representation of words through a Gaussian Kernel layer, to guide the model to generate responses at different specificity levels. We describe two ways to acquire distant labels for the specificity control variable in learning. Empirical studies show that our model can significantly outperform the state-of-theart response generation models under both automatic and human evaluations.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Human-computer conversation is a critical and challenging task in AI and NLP."
                    },
                    {
                        "id": 1,
                        "string": "There have been two major streams of research in this direction, namely task oriented dialog and general purpose dialog (i.e., chit-chat)."
                    },
                    {
                        "id": 2,
                        "string": "Task oriented dialog aims to help people complete specific tasks such as buying tickets or shopping, while general purpose dialog attempts to produce natural and meaningful conversations with people regarding a wide range of topics in open domains (Perez-Marin, 2011; ."
                    },
                    {
                        "id": 3,
                        "string": "In recent years, the latter has attracted much attention in both academia and industry as a way to explore the possibility of developing a general purpose AI system in language (e.g., chatbots)."
                    },
                    {
                        "id": 8,
                        "string": "Figure 1: Rank-frequency distribution of the responses in the chit-chat corpus, with x and y axes being lg(rank order) and lg(frequency) respectively."
                    },
                    {
                        "id": 10,
                        "string": "A widely adopted approach to general purpose dialog is learning a generative conversational model from large scale social conversation data."
                    },
                    {
                        "id": 11,
                        "string": "Most methods in this line are constructed within the statistical machine translation (SMT) framework, where a sequence-to-sequence (Seq2Seq) model is learned to \"translate\" an input utterance into a response."
                    },
                    {
                        "id": 12,
                        "string": "However, general purpose dialog is intrinsically different from machine translation."
                    },
                    {
                        "id": 13,
                        "string": "In machine translation, since every sentence and its translation are semantically equivalent, there exists a 1-to-1 relationship between them."
                    },
                    {
                        "id": 14,
                        "string": "However, in general purpose dialog, a general response (e.g., \"I don't know\") could correspond to a large variety of input utterances."
                    },
                    {
                        "id": 15,
                        "string": "For example, in the chit-chat corpus used in this study (as shown in Figure 1), the top three most frequent responses are \"Must support! Cheer!\", \"Support! It's good.\", and \"My friends and I are shocked!\", where the response \"Must support! Cheer!\" is used for 1216 different input utterances."
                    },
                    {
                        "id": 23,
                        "string": "Previous Seq2Seq models, which treat all the utteranceresponse pairs uniformly and employ a single model to learn the relationship between them, will inevitably favor such general responses with high frequency."
                    },
                    {
                        "id": 24,
                        "string": "Although these responses are safe for replying different utterances, they are boring and trivial since they carry little information, and may quickly lead to an end of the conversation."
                    },
                    {
                        "id": 25,
                        "string": "There have been a few efforts attempting to address this issue in literature."
                    },
                    {
                        "id": 26,
                        "string": "Li et al. (2016a) proposed to use the Maximum Mutual Information (MMI) as the objective to penalize general responses."
                    },
                    {
                        "id": 28,
                        "string": "It could be viewed as a post-processing approach which did not solve the generation of trivial responses fundamentally."
                    },
                    {
                        "id": 29,
                        "string": "Xing et al. (2017) pre-defined a set of topics from an external corpus to guide the generation of the Seq2Seq model."
                    },
                    {
                        "id": 31,
                        "string": "However, it is difficult to ensure that the topics learned from the external corpus are consistent with that in the conversation corpus, leading to the introduction of additional noises."
                    },
                    {
                        "id": 32,
                        "string": "introduced latent responding factors to model multiple responding mechanisms."
                    },
                    {
                        "id": 33,
                        "string": "However, these latent factors are usually difficult in interpretation and it is hard to decide the number of the latent factors."
                    },
                    {
                        "id": 34,
                        "string": "In our work, we propose a novel controlled response generation mechanism to handle different utterance-response relationships in terms of specificity."
                    },
                    {
                        "id": 35,
                        "string": "The key idea is inspired by our observation on everyday conversation between humans."
                    },
                    {
                        "id": 36,
                        "string": "In human-human conversation, people often actively control the specificity of responses depending on their own response purpose (which might be affected by a variety of underlying factors like their current mood, knowledge state and so on)."
                    },
                    {
                        "id": 37,
                        "string": "For example, they may provide some interesting and specific responses if they like the conversation, or some general responses if they want to end it."
                    },
                    {
                        "id": 38,
                        "string": "They may provide very detailed responses if they are familiar with the topic, or just \"I don't know\" otherwise."
                    },
                    {
                        "id": 39,
                        "string": "Therefore, we propose to simulate the way people actively control the specificity of the response."
                    },
                    {
                        "id": 40,
                        "string": "We employ a Seq2Seq framework and further introduce an explicit specificity control variable to represent the response purpose of the agent."
                    },
                    {
                        "id": 41,
                        "string": "Meanwhile, we assume that each word, beyond the semantic representation which relates to its meaning, also has another representation which relates to the usage preference under different response purpose."
                    },
                    {
                        "id": 42,
                        "string": "We name this representation as the usage representation of words."
                    },
                    {
                        "id": 43,
                        "string": "The specificity control variable then interacts with the usage representation of words through a Gaussian Kernel layer, and guides the Seq2Seq model to generate responses at different specificity levels."
                    },
                    {
                        "id": 44,
                        "string": "We refer to our model as Specificity Controlled Seq2Seq model (SC-Seq2Seq)."
                    },
                    {
                        "id": 45,
                        "string": "Note that unlike the work by (Xing et al., 2017) , we do not rely on any external corpus to learn our model."
                    },
                    {
                        "id": 46,
                        "string": "All the model parameters are learned on the same conversation corpus in an end-to-end way."
                    },
                    {
                        "id": 47,
                        "string": "We employ distant supervision to train our SC-Seq2Seq model since the specificity control variable is unknown in the raw data."
                    },
                    {
                        "id": 48,
                        "string": "We describe two ways to acquire distant labels for the specificity control variable, namely Normalized Inverse Response Frequency (NIRF) and Normalized Inverse Word Frequency (NIWF)."
                    },
                    {
                        "id": 49,
                        "string": "By using normalized values, we restrict the specificity control variable to be within a pre-defined continuous value range with each end has very clear meaning on the specificity."
                    },
                    {
                        "id": 50,
                        "string": "This is significantly different from the discrete latent factors in  which are difficult in interpretation."
                    },
                    {
                        "id": 51,
                        "string": "We conduct an empirical study on a large public dataset, and compare our model with several state-of-the-art response generation methods."
                    },
                    {
                        "id": 52,
                        "string": "Empirical results show that our model can generate either general or specific responses, and significantly outperform existing methods under both automatic and human evaluations."
                    },
                    {
                        "id": 53,
                        "string": "Related Work In this section, we briefly review the related work on conversational models and response specificity."
                    },
                    {
                        "id": 54,
                        "string": "Conversational Models Automatic conversation has attracted increasing attention over the past few years."
                    },
                    {
                        "id": 55,
                        "string": "At the very beginning, people started the research using handcrafted rules and templates (Walker et al., 2001; Williams et al., 2013; Henderson et al., 2014) ."
                    },
                    {
                        "id": 56,
                        "string": "These approaches required little data for training but huge manual effort to build the model, which is very time-consuming."
                    },
                    {
                        "id": 57,
                        "string": "For now, conversational models fall into two major categories: retrieval-based and generation-based."
                    },
                    {
                        "id": 58,
                        "string": "Retrievalbased conversational models search the most suitable response from candidate responses using different schemas (Kearns, 2000; Wang et al., 2013; ."
                    },
                    {
                        "id": 59,
                        "string": "These methods rely on preexisting responses, thus are difficult to be exten-ded to open domains ."
                    },
                    {
                        "id": 60,
                        "string": "With the large amount of conversation data available on the Internet, generation-based conversational models developed within a SMT framework (Ritter et al., 2011; Cho et al., 2014; Bahdanau et al., 2015) show promising results."
                    },
                    {
                        "id": 61,
                        "string": "Shang et al. (2015) generated replies for short-text conversation by an encoder-decoder-based neural network with local and global attentions."
                    },
                    {
                        "id": 63,
                        "string": "Serban et al. (2016) built an end-to-end dialogue system using a generative hierarchical neural network."
                    },
                    {
                        "id": 65,
                        "string": "Gu et al. (2016) introduced CopyNet to simulate the repeating behavior of humans in conversation."
                    },
                    {
                        "id": 67,
                        "string": "Similarly, our model is also based on the encoder-decoder framework."
                    },
                    {
                        "id": 68,
                        "string": "Response Specificity Some recent studies began to focus on generating more specific or informative responses in conversation."
                    },
                    {
                        "id": 69,
                        "string": "It is also called a diversity problem since if each response is more specific, it would be more diverse between responses of different utterances."
                    },
                    {
                        "id": 70,
                        "string": "As an early work, Li et al. (2016a) used Maximum Mutual Information (MMI) as the objective to penalize general responses."
                    },
                    {
                        "id": 72,
                        "string": "Later,  proposed a data distillation method, which trains a series of generative models at different levels of specificity and uses a reinforcement learning model to choose the model best suited for decoding depending on the conversation context."
                    },
                    {
                        "id": 73,
                        "string": "These methods circumvented the general response issue by using either a post-processing approach or a data selection approach."
                    },
                    {
                        "id": 74,
                        "string": "Besides, Li et al. (2016b) tried to build a personalized conversation engine by adding extra personal information."
                    },
                    {
                        "id": 76,
                        "string": "Xing et al. (2017) incorporated the topic information from an external corpus into the Seq2Seq framework to guide the generation."
                    },
                    {
                        "id": 78,
                        "string": "However, external dataset may not be always available or consistent with the conversation dataset in topics."
                    },
                    {
                        "id": 79,
                        "string": "introduced latent responding factors to the Seq2Seq model to avoid generating safe responses."
                    },
                    {
                        "id": 80,
                        "string": "However, these latent factors are usually difficult in interpretation and hard to decide the number."
                    },
                    {
                        "id": 81,
                        "string": "Moreover, Mou et al. (2016) proposed a content-introducing approach to generate a response based on a predicted keyword."
                    },
                    {
                        "id": 83,
                        "string": "Yao et al. (2016) attempted to improve the specificity with the reinforcement learning framework by using the averaged IDF score of the words in the response as a reward."
                    },
                    {
                        "id": 85,
                        "string": "Shen et al. (2017) presented a conditional variational framework for generating specific responses based on specific attributes."
                    },
                    {
                        "id": 87,
                        "string": "Unlike these existing methods, we introduce an explicit specificity control variable into a Seq2Seq model to handle different utterance-response relationships in terms of specificity."
                    },
                    {
                        "id": 88,
                        "string": "Specificity Controlled Seq2Seq Model In this section, we present the Specificity Controlled Seq2Seq model (SC-Seq2Seq), a novel Seq2Seq model designed for actively controlling the generated responses in terms of specificity."
                    },
                    {
                        "id": 89,
                        "string": "Model Overview The basic idea of a generative conversational model is to learn the mapping from an input utterance to its response, typically using an encoderdecoder framework."
                    },
                    {
                        "id": 90,
                        "string": "Formally, given an input utterance sequence X = (x_1, x_2, ..., x_T) and a target response sequence Y = (y_1, y_2, ..., y_T), a neural Seq2Seq model is employed to learn p(Y|X) based on the training corpus D = {(X, Y) | Y is the response of X}."
                    },
                    {
                        "id": 97,
                        "string": "By maximizing the likelihood of all the utterance-response pairs with a single mapping mechanism, the learned Seq2Seq model will inevitably favor those general responses that can correspond to a large variety of input utterances."
                    },
                    {
                        "id": 98,
                        "string": "To address this issue, we assume that there are different mapping mechanisms between utteranceresponse pairs with respect to their specificity relation."
                    },
                    {
                        "id": 99,
                        "string": "Rather than involving some latent factors, we propose to introduce an explicit variable s into a Seq2Seq model to handle different utteranceresponse mappings in terms of specificity."
                    },
                    {
                        "id": 100,
                        "string": "By doing so, we hope that (1) s would have explicit meaning on specificity, and (2) s could not only interpret but also actively control the generation of the response Y given the input utterance X."
                    },
                    {
                        "id": 101,
                        "string": "The goal of our model becomes to learn p(Y|X, s) over the corpus D, where we acquire distant labels for s from the same corpus for learning."
                    },
                    {
                        "id": 102,
                        "string": "The overall architecture of SC-Seq2Seq is depicted in Figure  2 , and we will detail our model as follows."
                    },
                    {
                        "id": 103,
                        "string": "Encoder The encoder is to map the input utterance X into a compact vector that can capture its essential topics."
                    },
                    {
                        "id": 104,
                        "string": "Specifically, we use a bi-directional GRU (Cho et al., 2014) as the utterance encoder, and each word x i is firstly represented by its semantic representation e i mapped by semantic embedding matrix E as the input of the encoder."
                    },
                    {
                        "id": 105,
                        "string": "Then, the encoder represents the utterance X as a series of hidden vectors {h t } T t=1 modeling the sequence from both forward and backward directions."
                    },
                    {
                        "id": 106,
                        "string": "Finally, we use the final backward hidden state as the initial hidden state of the decoder."
                    },
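As a concrete illustration of the encoder just described (a bi-directional GRU over semantic word embeddings, whose final backward hidden state initializes the decoder), here is a minimal PyTorch sketch; the class name, dimensions, and batch-first layout are our own assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Minimal sketch of the bi-GRU utterance encoder (assumed shapes)."""
    def __init__(self, vocab_size, emb_dim=300, hidden=300):
        super().__init__()
        self.hidden = hidden
        self.emb = nn.Embedding(vocab_size, emb_dim)  # semantic embedding matrix E
        self.gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, x):                 # x: (batch, T) word ids
        h, _ = self.gru(self.emb(x))      # h: (batch, T, 2 * hidden)
        # The backward direction's final state sits at position t = 0.
        decoder_init = h[:, 0, self.hidden:]
        return h, decoder_init
```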
                    {
                        "id": 107,
                        "string": "Decoder The decoder is to generate a response Y given the hidden representations of the input utterance X under some specificity level denoted by the control variable s. Specifically, at step t, we define the probability of generating any target word y t by a \"mixture\" of probabilities: p(y t ) = βp M (y t ) + γp S (y t ), (1) where p M (y t ) denotes the semantic-based generation probability, p S (y t ) denotes the specificitybased generation probability, β and γ are the coefficients."
                    },
                    {
                        "id": 108,
                        "string": "Specifically, p M (y t ) is defined the same as that in traditional Seq2Seq model (Sutskever et al., 2014) : p M (y t = w) = w T (W h M · h yt + W e M · e t−1 + b M ), (2) where w is a one-hot indicator vector of the word w and e t−1 is the semantic representation of the t − 1-th generated word in decoder."
                    },
                    {
                        "id": 109,
                        "string": "W h M , W e M and b M are parameters."
                    },
                    {
                        "id": 110,
                        "string": "h yt is the t-th hidden state in the decoder which is computed by: h yt = f (y t−1 , h y t−1 , c t ), (3) where f is a GRU unit and c t is the context vector to allow the decoder to pay different attention to different parts of input at different steps (Bahdanau et al., 2015) ."
                    },
                    {
                        "id": 111,
                        "string": "p S (y t ) denotes the generation probability of the target word given the specificity control variable s. Here we introduce a Gaussian Kernel layer to define this probability."
                    },
                    {
                        "id": 112,
                        "string": "Specifically, we assume that each word, beyond its semantic representation e, also has a usage representation u mapped by usage embedding matrix U."
                    },
                    {
                        "id": 113,
                        "string": "The usage representation of a word denotes its usage preference under different specificity."
                    },
                    {
                        "id": 114,
                        "string": "The specificity control variable s then interacts with the usage representations through the Gaussian Kernel layer to produce the specificity-based generation probability p S (y t ): p S (y t = w) = 1 √ 2πσ exp(− (Ψ S (U, w) − s) 2 2σ 2 ), Ψ S (U, w) = σ(w T (U · W U + b U )), (4) where σ 2 is the variance, and Ψ S (·) maps the word usage representation into a real value with the specificity control variable s as the mean of the Gaussian distribution."
                    },
                    {
                        "id": 115,
                        "string": "W U and b U are parameters to be learned."
                    },
                    {
                        "id": 116,
                        "string": "Note here in general we can use any realvalue function to define Ψ S (U, w)."
                    },
                    {
                        "id": 117,
                        "string": "In this work, we use the sigmoid function σ(·) for Ψ S (U, w) since we want to define s within the range [0,1] so that each end has very clear meaning on the specificity, i.e., 0 denotes the most general response while 1 denotes the most specific response."
                    },
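To make Eqs. (1) and (4) concrete, the following NumPy sketch computes Ψ_S over the whole vocabulary, the Gaussian-kernel probability p_S, and the mixture with p_M; the array shapes and the final renormalization step are our own assumptions, not the paper's specification.

```python
import numpy as np

def specificity_probs(U, W_U, b_U, s, sigma2=1.0):
    """p_S(y_t = w) from Eq. (4). U: (V, d_u) usage embeddings;
    W_U: (d_u,) and b_U: scalar map each usage vector to one real value."""
    psi = 1.0 / (1.0 + np.exp(-(U @ W_U + b_U)))  # Psi_S in [0, 1], one per word
    return np.exp(-(psi - s) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

def generation_probs(p_m, p_s, beta=0.5, gamma=0.5):
    """Mixture from Eq. (1), renormalized so it sums to 1 (our assumption)."""
    p = beta * p_m + gamma * p_s
    return p / p.sum()
```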
                    {
                        "id": 118,
                        "string": "In the next section, we will also keep this property when we define the distant label for the control variable."
                    },
                    {
                        "id": 119,
                        "string": "Distant Supervision We train our SC-Seq2Seq model by maximizing the log likelihood of generating responses over the training set D: L = (X,Y)∈D log P (Y|X, s; θ)."
                    },
                    {
                        "id": 120,
                        "string": "(5) where θ denotes all the model parameters."
                    },
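As a worked form of the objective in Eq. (5), here is a small sketch of the corpus-level log likelihood; the helper token_probs, which returns the per-token probabilities p(y_t) for a triple (X, Y, s), is hypothetical.

```python
import math

def log_likelihood(corpus, token_probs):
    """L = sum over (X, Y, s) in D of log P(Y|X, s; theta) (Eq. 5).
    `token_probs(X, Y, s)` is a hypothetical helper yielding p(y_t) per step."""
    return sum(sum(math.log(p) for p in token_probs(X, Y, s))
               for X, Y, s in corpus)
```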
                    {
                        "id": 121,
                        "string": "Note here since s is an explicit control variable in our model, we need the triples (X, Y, s) for training."
                    },
                    {
                        "id": 122,
                        "string": "However, s is not directly available in the raw conversation corpus, thus we acquire distant labels for s to learn our model."
                    },
                    {
                        "id": 123,
                        "string": "We introduce two ways of distant supervision on the specificity control variable s, namely Normalized Inverse Response Frequency (NIRF) and Normalized Inverse Word Frequency (NIWF)."
                    },
                    {
                        "id": 124,
                        "string": "Normalized Inverse Response Frequency Normalized Inverse Response Frequency (NIRF) is based on the assumption that a response is more general if it corresponds to more input utterances in the corpus."
                    },
                    {
                        "id": 125,
                        "string": "Therefore, we use the inverse frequency of a response in a conversation corpus to indicate its specificity level."
                    },
                    {
                        "id": 126,
                        "string": "Specifically, we first build the response collection R by extracting all the responses from D. For a response Y ∈ R, let f Y denote its corpus frequency in R, we compute its Inverse Response Frequency (IRF) as: IRF Y = log(1 + |R|)/f Y , (6) where |R| denotes the size of the response collection R. Next, we use the min-max normalization method (Jain et al., 2005) to obtain the NIRF value."
                    },
                    {
                        "id": 127,
                        "string": "Namely, NIRF Y = IRF Y − min Y ∈R (IRF Y ) max Y ∈R (IRF Y ) − min Y ∈R (IRF Y ) ."
                    },
                    {
                        "id": 128,
                        "string": "(7) where max(IRF R ) and min(IRF R ) denotes the maximal and minimum IRF value in R respectively."
                    },
                    {
                        "id": 129,
                        "string": "The NIRF value is then used as the distant label of s in training."
                    },
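A minimal sketch of the NIRF distant labels (Eqs. 6-7), assuming responses are given as raw strings; the degenerate-corpus guard is our own addition.

```python
import math
from collections import Counter

def nirf_labels(responses):
    """Map each response string to its NIRF label in [0, 1] (Eqs. 6-7)."""
    freq = Counter(responses)          # f_Y: corpus frequency of each response
    n = len(responses)                 # |R|
    irf = {r: math.log(1 + n) / f for r, f in freq.items()}
    lo, hi = min(irf.values()), max(irf.values())
    span = (hi - lo) or 1.0            # guard against all-equal IRF values
    return {r: (v - lo) / span for r, v in irf.items()}
```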
                    {
                        "id": 130,
                        "string": "Note here by using normalized values, we aim to constrain the specificity control variable s to be within the pre-defined continuous value range [0,1]."
                    },
                    {
                        "id": 131,
                        "string": "Normalized Inverse Word Frequency Normalized Inverse Word Frequency (NIWF) is based on the assumption that the specificity level of a response depends on the collection of words it contains, and the sentence is more specific if it contains more specific words."
                    },
                    {
                        "id": 132,
                        "string": "Hence, we can use the inverse corpus frequency of the words to indicate the specificity level of a response."
                    },
                    {
                        "id": 133,
                        "string": "Specifically, for a word y in the response Y, we first obtain its Inverse Word Frequency (IWF) by: IWF y = log(1 + |R|)/f y , (8) where f y denotes the number of responses in R containing the word y."
                    },
                    {
                        "id": 134,
                        "string": "Since a response usually contains a collection of words, there would be multiple ways to define the response-level IWF value, e.g., sum, average, minimum or maximum of the IWF values of all the words."
                    },
                    {
                        "id": 135,
                        "string": "In our work, we find that the best performance can be achieved by using the maximum of the IWF of all the words in Y to represent the response-level IWF by IWF Y = max y∈Y (IWF y )."
                    },
                    {
                        "id": 136,
                        "string": "(9) This is reasonable since a response is specific as long as it contains some specific words."
                    },
                    {
                        "id": 137,
                        "string": "We do not require all the words in a response to be specific, thus sum, average, and minimum would not be appropriate operators for computing the responselevel IWF."
                    },
                    {
                        "id": 138,
                        "string": "Again, we use min-max normalization to obtain the NIWF value for the response Y. Specificity Controlled Response Generation Given a new input utterance, we can employ the learned SC-Seq2Seq model to generate responses at different specificity levels by varying the control variable s. In this way, we can simulate human conversations where one can actively control the response specificity depending on his/her own mind."
                    },
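Correspondingly, a sketch of the NIWF labels (Eqs. 8-9), assuming whitespace-tokenized, non-empty responses and taking f_y as the number of responses containing word y:

```python
import math
from collections import Counter

def niwf_labels(responses):
    """Response-level NIWF: max word IWF, min-max normalized to [0, 1]."""
    n = len(responses)                                          # |R|
    df = Counter(w for r in responses for w in set(r.split()))  # f_y
    iwf = {w: math.log(1 + n) / f for w, f in df.items()}       # Eq. (8)
    raw = [max(iwf[w] for w in r.split()) for r in responses]   # Eq. (9)
    lo, hi = min(raw), max(raw)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in raw]
```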
                    {
                        "id": 139,
                        "string": "When we apply our model to a chatbot, there might be different ways to use the control variable for conversation in practice."
                    },
                    {
                        "id": 140,
                        "string": "If we want the agent to always generate informative responses, we can set s to 1 or some values close to 1."
                    },
                    {
                        "id": 141,
                        "string": "If we want the agent to be more dynamic, we can sample s within the range [0,1] to enrich the styles in the response."
                    },
                    {
                        "id": 142,
                        "string": "We may further employ some reinforcement learning technique to learn to adjust the control variable depending on users' feedbacks."
                    },
                    {
                        "id": 143,
                        "string": "This would make the agent even more vivid, and we leave this as our future work."
                    },
                    {
                        "id": 144,
                        "string": "Experiment In this section, we conduct experiments to verify the effectiveness of our proposed model."
                    },
                    {
                        "id": 145,
                        "string": "Dataset Description We conduct our experiments on the public Short Text Conversation (STC) dataset 1 released in NTCIR-13."
                    },
                    {
                        "id": 146,
                        "string": "STC maintains a large repository of post-comment pairs from the Sina Weibo which is one of the popular Chinese social sites."
                    },
                    {
                        "id": 147,
                        "string": "STC dataset contains roughly 3.8 million postcomment pairs, which could be used to simulate the utterance-response pairs in conversation."
                    },
                    {
                        "id": 148,
                        "string": "We employ the Jieba Chinese word segmenter 2 to tokenize the utterances and responses into sequences of Chinese words, and the detailed dataset statistics are shown in Table 1 ."
                    },
                    {
                        "id": 149,
                        "string": "We randomly selected two subsets as the development and test dataset, each containing 10k pairs."
                    },
                    {
                        "id": 150,
                        "string": "The left pairs are used for training."
                    },
                    {
                        "id": 151,
                        "string": "Baselines Methods We compare our proposed SC-Seq2Seq model against several state-of-the-art baselines: (1) Seq2Seq-att: the standard Seq2Seq model with the attention mechanism (Bahdanau et al., 2015) ; (2) MMI-bidi: the Seq2Seq model using Maximum Mutual Information (MMI) as the objective function to reorder the generated responses (Li et al., 2016a) ; (3) MARM: the Seq2Seq model with a probabilistic framework to model the latent responding mechanisms ; (4) Seq2Seq+IDF: an extension of Seq2Seq-att by optimizing specificity under the reinforcement learning framework, where the reward is calculated as the sentence level IDF score of the generated response (Yao et al., 2016) ."
                    },
                    {
                        "id": 152,
                        "string": "We refer to our model trained using NIRF and NIWF as SC-Seq2Seq NIRF and SC-Seq2Seq NIWF respectively."
                    },
                    {
                        "id": 153,
                        "string": "Implementation Details As suggested in (Shang et al., 2015) , we construct two separate vocabularies for utterances and responses by using 40,000 most frequent words on each side in the training data, covering 97.7% words in utterances and 96.1% words in responses respectively."
                    },
                    {
                        "id": 154,
                        "string": "All the remaining words are replaced by a special token <UNK> symbol."
                    },
                    {
                        "id": 155,
                        "string": "We implemented our model in Tensorflow 3 ."
                    },
                    {
                        "id": 156,
                        "string": "We tuned the hyper-parameters via the development set."
                    },
                    {
                        "id": 157,
                        "string": "Specifically, we use one layer of bi-directional GRU for encoder and another uni-directional GRU for decoder, with the GRU hidden unit size set as 300 in both the encoder and decoder."
                    },
                    {
                        "id": 158,
                        "string": "The dimension of semantic word embeddings in both utterances and responses is 300, while the dimension of usage word embeddings in responses is 50."
                    },
                    {
                        "id": 159,
                        "string": "We apply the Adam algorithm (Kingma and Ba, 2015) for optimization, where the parameters of Adam are set as in (Kingma and Ba, 2015) ."
                    },
                    {
                        "id": 160,
                        "string": "The variance σ 2 of the Gaussian Kernel layer is set as 1, and all other trainable parameters are randomly initialized by uniform distribution within [-0.08,0.08]."
                    },
                    {
                        "id": 161,
                        "string": "The mini-batch size for the update is set as 128."
                    },
                    {
                        "id": 162,
                        "string": "We clip the gradient when its norm exceeds 5."
                    },
                    {
                        "id": 163,
                        "string": "Our model is trained on a Tesla K80 GPU card, and we run the training for up to 12 epochs, which takes approximately five days."
                    },
                    {
                        "id": 164,
                        "string": "We select the model that achieves the lowest perplexity on the development dataset, and we report results on the test dataset."
                    },
                    {
                        "id": 165,
                        "string": "Evaluation Methodologies For evaluation, we follow the existing work and employ both automatic and human evaluations: (1) distinct-1 & distinct-2 (Li et al., 2016a) : we count numbers of distinct unigrams and bigrams in the generated responses, and divide the numbers by total number of generated unigrams and bigrams."
                    },
                    {
                        "id": 166,
                        "string": "Distinct metrics (both the numbers and the ratios) can be used to evaluate the specificity/diversity of the responses."
                    },
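                    {
                        "string": "A minimal sketch of distinct-n as described above (our illustration): count distinct n-grams over all generated responses and divide by the total number of generated n-grams.\ndef distinct_n(responses, n):\n    # responses: list of token lists; returns (number, ratio) of distinct n-grams\n    ngrams = [tuple(r[i:i + n]) for r in responses for i in range(len(r) - n + 1)]\n    distinct = set(ngrams)\n    return len(distinct), len(distinct) / max(len(ngrams), 1)"
                    },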
                    {
                        "id": 167,
                        "string": "(2) BLEU (Papineni et al., 2002) : BLEU has been proved strongly correlated with human evaluations."
                    },
                    {
                        "id": 168,
                        "string": "BLEU-n measures the average n-gram precision on a set of reference sentences."
                    },
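                    {
                        "string": "For reference, BLEU-n can be computed with NLTK (a standard implementation; not necessarily the exact one used in the paper).\nfrom nltk.translate.bleu_score import sentence_bleu\n\n# BLEU-2: uniform weights over unigrams and bigrams\nreference = [['she', 'enjoyed', 'his', 'novels']]\nhypothesis = ['she', 'liked', 'his', 'novels']\nscore = sentence_bleu(reference, hypothesis, weights=(0.5, 0.5))"
                    },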
                    {
                        "id": 169,
                        "string": "(3) Average & Extrema (Serban et al., 2017): Average and Extrema projects the generated response and the ground truth response into two separate vectors by taking the mean over the word embeddings or taking the extremum of each dimension respectively, and then computes the cosine similarity between them."
                    },
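                    {
                        "string": "A NumPy sketch of the Average and Extrema metrics as described (our illustration; emb maps a token to its embedding vector): pool word vectors by the mean or by the dimension-wise value of largest magnitude, then compare by cosine similarity.\nimport numpy as np\n\ndef average_pool(tokens, emb):\n    return np.mean([emb[t] for t in tokens], axis=0)\n\ndef extrema_pool(tokens, emb):\n    vecs = np.stack([emb[t] for t in tokens])\n    hi, lo = vecs.max(axis=0), vecs.min(axis=0)\n    # per dimension, keep whichever extremum has the larger magnitude\n    return np.where(hi > -lo, hi, lo)\n\ndef cosine(a, b):\n    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))"
                    },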
                    {
                        "id": 170,
                        "string": "(4) Human evaluation: Three labelers with rich Weibo experience were recruited to conduct evaluation."
                    },
                    {
                        "id": 171,
                        "string": "Responses from different models are randomly mixed for labeling."
                    },
                    {
                        "id": 172,
                        "string": "Labelers refer to 300 random sampled test utterances and score the quality of the responses with the following criteria: 1) +2: the response is not only semantically relevant and grammatical, but also informat-   ive and interesting; 2) +1: the response is grammatical and can be used as a response to the utterance, but is too trivial (e.g., \"I don't know\"); 3) +0: the response is semantically irrelevant or ungrammatical (e.g., grammatical errors or UNK)."
                    },
                    {
                        "id": 173,
                        "string": "Agreements to measure inter-rater consistency among three labelers are calculated with the Fleiss' kappa (Fleiss and Cohen, 1973) ."
                    },
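                    {
                        "string": "Fleiss' kappa can be computed from an items-by-categories count matrix; a standard formulation is sketched below for clarity (our illustration).\nimport numpy as np\n\ndef fleiss_kappa(counts):\n    # counts: (N items, k categories); each row sums to the number of raters n\n    counts = np.asarray(counts, dtype=float)\n    n = counts.sum(axis=1)[0]\n    p_j = counts.sum(axis=0) / counts.sum()                # category proportions\n    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement\n    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()\n    return (P_bar - P_e) / (1 - P_e)"
                    },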
                    {
                        "id": 174,
                        "string": "Evaluation Results Model Analysis: We first analyze our models trained with different distant supervision information."
                    },
                    {
                        "id": 175,
                        "string": "For each model, given a test utterance, we vary the control variable s by setting it to five different values (i.e., 0, 0.2, 0.5, 0.8, 1) to check whether the learned model can actually achieve different specificity levels."
                    },
                    {
                        "id": 176,
                        "string": "As shown in Table 2 , we find that: (1) The SC-Seq2Seq model trained with NIRF cannot work well."
                    },
                    {
                        "id": 177,
                        "string": "The test performances are almost the same with different s value."
                    },
                    {
                        "id": 178,
                        "string": "This is surprising since the NIRF definition seems to be directly corresponding to the specificity of a response."
                    },
                    {
                        "id": 179,
                        "string": "By conducting further analysis, we find that even though the conversation dataset is large, it is still limited and a general response could appear very few times in this corpus."
                    },
                    {
                        "id": 180,
                        "string": "In other words, the inverse frequency of a response is very weakly correlated with its response spe-cificity."
                    },
                    {
                        "id": 181,
                        "string": "(2) The SC-Seq2Seq model trained with NIWF can achieve our purpose."
                    },
                    {
                        "id": 182,
                        "string": "By varying the control variable s from 0 to 1, the generated responses turn from general to specific as measured by the distinct metrics."
                    },
                    {
                        "id": 183,
                        "string": "The results indicate that the max inverse word frequency in a response is a good distant label for the response specificity."
                    },
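                    {
                        "string": "The exact NIWF definition is given in Section 3.2.2; as a rough illustration only (the normalization below is our assumption, not the paper's formula), the distant label takes the maximum inverse word frequency over the words of a response, scaled into [0, 1].\nimport math\n\ndef niwf(response, word_freq, total_words, norm):\n    # word_freq: corpus word counts; norm: a normalizer so the label falls in [0, 1]\n    iwf = [math.log(total_words / (1 + word_freq.get(w, 0))) for w in response]\n    return min(max(iwf) / norm, 1.0)"
                    },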
                    {
                        "id": 184,
                        "string": "(3) When we compare the generated responses against ground truth data, we find the SC-Seq2Seq NIWF model with the control variable s set to 0.5 can achieve the best performances."
                    },
                    {
                        "id": 185,
                        "string": "The results indicate that there are diverse responses in real data in terms of specificity, and it is necessary to take a balanced setting if we want to fit the ground truth."
                    },
                    {
                        "id": 186,
                        "string": "Baseline Comparison: The performance comparisons between our model and the baselines are shown in Table 3 ."
                    },
                    {
                        "id": 187,
                        "string": "We have the following observations: (1) By using MMI as the objective, MMI-bidi can improve the specificity (in terms of distinct ratios) over the traditional Seq2Seq-att model."
                    },
                    {
                        "id": 188,
                        "string": "(2) MARM can achieve the best distinct ratios among the baseline methods, but the worst in terms of the distinct numbers."
                    },
                    {
                        "id": 189,
                        "string": "The results indicate that MARM tends to generate specific but very short responses."
                    },
                    {
                        "id": 190,
                        "string": "Meanwhile, its low BLEU scores also show that the responses generated by MARM deviate from the ground truth significantly."
                    },
                    {
                        "id": 191,
                        "string": "(3) By using the IDF information as the reward to train  the Seq2Seq model, the Seq2Seq+IDF does not show much advantages, but only achieves comparable results as MMI-bidi."
                    },
                    {
                        "id": 192,
                        "string": "(4) By setting the control variable s to 1, our SC-Seq2Seq NIWF model can achieve the best specificity performance as evaluated by the distinct metrics."
                    },
                    {
                        "id": 193,
                        "string": "By setting the control variable s to 0.5, our SC-Seq2Seq NIWF model can best fit the ground truth data as evaluated by the BLEU scores, Average and Extrema."
                    },
                    {
                        "id": 194,
                        "string": "All the improvements over the baseline models are statistically significant (p-value < 0.01)."
                    },
                    {
                        "id": 195,
                        "string": "These results demonstrate the effectiveness as well as the flexibility of our controlled generation model."
                    },
                    {
                        "id": 196,
                        "string": "Table 4 shows the human evaluation results."
                    },
                    {
                        "id": 197,
                        "string": "We can observe that: (1) SC-Seq2Seq NIWF,s=1 generates the most informative responses and interesting (labeled as \"+2\") and the least general responses than all the baseline models."
                    },
                    {
                        "id": 198,
                        "string": "Meanwhile, SC-Seq2Seq NIWF,s=0 generates the most general responses (labeled as \"+1\"); (2) MARM generates the most bad responses (labeled as \"+0\"), which indicates the drawbacks of the unknown latent responding mechanisms; (3) The kappa values of our models are all larger than 0.4, considered as \"moderate agreement\" regarding quality of responses."
                    },
                    {
                        "id": 199,
                        "string": "The largest kappa value is achieved by SC-Seq2Seq NIWF,s=0 , which seems reasonable since it is easy to reach an agreement on general responses."
                    },
                    {
                        "id": 200,
                        "string": "Sign tests demonstrate the improvements of SC-Seq2Seq NIWF,s=1 to the baseline models are statistically significant (p-value < 0.01)."
                    },
                    {
                        "id": 201,
                        "string": "All the human judgement results again demonstrate the effectiveness of our controlled generation mechanism."
                    },
                    {
                        "id": 202,
                        "string": "Case Study To better understand how different models perform, we conduct some case studies."
                    },
                    {
                        "id": 203,
                        "string": "We randomly sample three utterances from the test dataset, and show the responses generated by different models."
                    },
                    {
                        "id": 204,
                        "string": "Table 5 , we can find that: (1) The responses generated by the four baselines are often quite general and short, which may quickly lead to an end of the conversation."
                    },
                    {
                        "id": 205,
                        "string": "(2) SC-Seq2Seq NIWF with large control variable values (i.e., s > 0.5) can generate very long and specific responses."
                    },
                    {
                        "id": 206,
                        "string": "In these responses, we can find many informative words."
                    },
                    {
                        "id": 207,
                        "string": "For example, in case 2 with s as 1 and 0.8, we can find words like \"眼妆(eye make-up)\", \"气 质(temperament)\" and \"雪亮(bright)\" which are quite specific and strongly related to the conversation topic of \"beauty\"."
                    },
                    {
                        "id": 208,
                        "string": "(3) When we decrease the control variable value, the generated responses become more and more general and shorter from our SC-Seq2Seq NIWF model."
                    },
                    {
                        "id": 209,
                        "string": "As shown in Analysis on Usage Representations We also conduct some analysis to understand the usage representations of words introduced in our model."
                    },
                    {
                        "id": 210,
                        "string": "We randomly sample 500 words from our SC-Seq2Seq NIWF and apply t-SNE (Maaten and Hinton, 2008) to visualize both usage and semantic embeddings."
                    },
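                    {
                        "string": "A minimal sketch of this visualization step with scikit-learn (our illustration; the random matrices stand in for the 500 sampled usage and semantic embeddings).\nimport numpy as np\nfrom sklearn.manifold import TSNE\n\nusage_vecs = np.random.randn(500, 50)      # placeholder usage embeddings (dim 50)\nsemantic_vecs = np.random.randn(500, 300)  # placeholder semantic embeddings (dim 300)\nusage_2d = TSNE(n_components=2).fit_transform(usage_vecs)\nsemantic_2d = TSNE(n_components=2).fit_transform(semantic_vecs)"
                    },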
                    {
                        "id": 211,
                        "string": "As shown in Figure 3 , we can see that the two distributions are significantly different."
                    },
                    {
                        "id": 212,
                        "string": "In the usage space, words like \"脂 肪 肝(fatty liver)\" and \"久 坐(outsit)\" lie closely which are both specific words, and both are far from the general words like \"胖(fat)\"."
                    },
                    {
                        "id": 213,
                        "string": "On the contrary, in the semantic space, \"脂 肪 肝(fatty liver)\" is close to \"胖(fat)\" since they are semantically related, and both are far from the word \"久坐(outsit)\"."
                    },
                    {
                        "id": 214,
                        "string": "Furthermore, given some sampled target words, we also show the top-5 similar words based on cosine similarity under both representations in Table 6 ."
                    },
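                    {
                        "string": "A sketch of the top-5 neighbor lookup by cosine similarity (our illustration; E is an embedding matrix aligned with the word list vocab).\nimport numpy as np\n\ndef top5(word, vocab, E):\n    i = vocab.index(word)\n    En = E / np.linalg.norm(E, axis=1, keepdims=True)  # row-normalize once\n    sims = En @ En[i]                                  # cosine = dot of unit vectors\n    order = [j for j in np.argsort(-sims) if j != i]\n    return [vocab[j] for j in order[:5]]"
                    },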
                    {
                        "id": 215,
                        "string": "Again, we can see that the nearest neighbors of a same word are quite different under two representations."
                    },
                    {
                        "id": 216,
                        "string": "Neighbors based on semantic representations are semantically related, while neighbors based on usage representations are not so related but with similar specificity levels."
                    },
                    {
                        "id": 217,
                        "string": "Conclusion We propose a novel controlled response generation mechanism to handle different utteranceresponse relationships in terms of specificity."
                    },
                    {
                        "id": 218,
                        "string": "We introduce an explicit specificity control variable into the Seq2Seq model, which interacts with the usage representation of words to generate responses at different specificity levels."
                    },
                    {
                        "id": 219,
                        "string": "Empirical results showed that our model can generate either general or specific responses, and significantly outperform state-of-the-art generation methods."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 51
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 52,
                        "end": 53
                    },
                    {
                        "section": "Conversational Models",
                        "n": "2.1",
                        "start": 54,
                        "end": 67
                    },
                    {
                        "section": "Response Specificity",
                        "n": "2.2",
                        "start": 68,
                        "end": 87
                    },
                    {
                        "section": "Specificity Controlled Seq2Seq Model",
                        "n": "3",
                        "start": 88,
                        "end": 88
                    },
                    {
                        "section": "Model Overview",
                        "n": "3.1",
                        "start": 89,
                        "end": 102
                    },
                    {
                        "section": "Encoder",
                        "n": "3.1.1",
                        "start": 103,
                        "end": 106
                    },
                    {
                        "section": "Decoder",
                        "n": "3.1.2",
                        "start": 107,
                        "end": 118
                    },
                    {
                        "section": "Distant Supervision",
                        "n": "3.2",
                        "start": 119,
                        "end": 123
                    },
                    {
                        "section": "Normalized Inverse Response Frequency",
                        "n": "3.2.1",
                        "start": 124,
                        "end": 130
                    },
                    {
                        "section": "Normalized Inverse Word Frequency",
                        "n": "3.2.2",
                        "start": 131,
                        "end": 137
                    },
                    {
                        "section": "Specificity Controlled Response Generation",
                        "n": "3.3",
                        "start": 138,
                        "end": 141
                    },
                    {
                        "section": "Experiment",
                        "n": "4",
                        "start": 142,
                        "end": 144
                    },
                    {
                        "section": "Dataset Description",
                        "n": "4.1",
                        "start": 145,
                        "end": 149
                    },
                    {
                        "section": "Baselines Methods",
                        "n": "4.2",
                        "start": 150,
                        "end": 152
                    },
                    {
                        "section": "Implementation Details",
                        "n": "4.3",
                        "start": 153,
                        "end": 164
                    },
                    {
                        "section": "Evaluation Methodologies",
                        "n": "4.4",
                        "start": 165,
                        "end": 173
                    },
                    {
                        "section": "Evaluation Results",
                        "n": "4.5",
                        "start": 174,
                        "end": 201
                    },
                    {
                        "section": "Case Study",
                        "n": "4.6",
                        "start": 202,
                        "end": 208
                    },
                    {
                        "section": "Analysis on Usage Representations",
                        "n": "4.7",
                        "start": 209,
                        "end": 215
                    },
                    {
                        "section": "Conclusion",
                        "n": "5",
                        "start": 216,
                        "end": 219
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1328-Figure1-1.png",
                        "caption": "Figure 1: Rank-frequency distribution of the responses in the chit-chat corpus, with x and y axes being lg(rank order) and lg(frequency) respectively.",
                        "page": 0,
                        "bbox": {
                            "x1": 328.8,
                            "x2": 503.03999999999996,
                            "y1": 221.76,
                            "y2": 334.08
                        }
                    },
                    {
                        "filename": "../figure/image/1328-Table1-1.png",
                        "caption": "Table 1: Short Text Conversation (STC) data statistics: #w denotes the number of Chinese words.",
                        "page": 5,
                        "bbox": {
                            "x1": 96.96,
                            "x2": 265.44,
                            "y1": 62.4,
                            "y2": 156.0
                        }
                    },
                    {
                        "filename": "../figure/image/1328-Table2-1.png",
                        "caption": "Table 2: Model analysis of our SC-Seq2Seq under the automatic evaluation.",
                        "page": 6,
                        "bbox": {
                            "x1": 98.88,
                            "x2": 498.24,
                            "y1": 62.4,
                            "y2": 212.16
                        }
                    },
                    {
                        "filename": "../figure/image/1328-Table3-1.png",
                        "caption": "Table 3: Comparisons between our SC-Seq2Seq and the baselines under the automatic evaluation.",
                        "page": 6,
                        "bbox": {
                            "x1": 99.84,
                            "x2": 497.28,
                            "y1": 242.88,
                            "y2": 341.28
                        }
                    },
                    {
                        "filename": "../figure/image/1328-Table4-1.png",
                        "caption": "Table 4: Results on the human evaluation.",
                        "page": 7,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 289.44,
                            "y1": 62.4,
                            "y2": 180.95999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1328-Figure2-1.png",
                        "caption": "Figure 2: The overall architecture of SC-Seq2Seq model.",
                        "page": 3,
                        "bbox": {
                            "x1": 106.56,
                            "x2": 490.56,
                            "y1": 62.879999999999995,
                            "y2": 222.72
                        }
                    },
                    {
                        "filename": "../figure/image/1328-Table6-1.png",
                        "caption": "Table 6: Target words and their top-5 similar words under usage and semantic representations respectively.",
                        "page": 8,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 62.4,
                            "y2": 161.28
                        }
                    },
                    {
                        "filename": "../figure/image/1328-Figure3-1.png",
                        "caption": "Figure 3: t-SNE embeddings of usage and semantic vectors.",
                        "page": 8,
                        "bbox": {
                            "x1": 75.84,
                            "x2": 286.08,
                            "y1": 199.68,
                            "y2": 289.44
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-75"
        },
        {
            "slides": {
                "0": {
                    "title": "Novelty",
                    "text": [
                        "1. Identify and paraphrase metaphors in",
                        "whole sentences from unrestricted",
                        "2. Using word embedding input and output",
                        "vectors to model a word and its context",
                        "Translation. bi | ING",
                        "3. Metaphor processing for Machine G."
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "1": {
                    "title": "The definition of metaphor",
                    "text": [
                        "Linguistically, metaphor is defined as a language expression that uses one or several words to represent another concept, rather than taking their literal meanings of the given words in the context (Lagerwerf"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "2": {
                    "title": "Metaphors are widespread in natural language",
                    "text": [
                        "One third of sentences in typical corpora contain metaphors."
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "3": {
                    "title": "Contexts help to find anomalies and identify metaphors",
                    "text": [
                        "She devoured his sandwiches.",
                        "She devoured his novels.",
                        "devoured means enjoyed avidly. Interpretation",
                        "itently and enjoyed are different concepts identification"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "4": {
                    "title": "Motivation",
                    "text": [
                        "Many previous metaphor processing methods are domain dependent",
                        "Many works simply use input vectors",
                        "Metaphor processing has rarely been applied to a real-world NLP task, instead mostly reporting accuracy on metaphor identification or interpretation."
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "5": {
                    "title": "Contribution",
                    "text": [
                        "1. Metaphor detection and interpretation in sentences from",
                        "2. Investigate the effectiveness of input and output vectors of word embedding.",
                        "3. Apply metaphor detection and G. Google",
                        "interpretation to improve Machine Translate"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "6": {
                    "title": "1 Metaphor detection and interpretation in whole sentence from unrestricted domains",
                    "text": [
                        "She devoured his novels.",
                        "Sentence level Phrase level",
                        "This young man knows how",
                        "to climb the social ladder. T ladder"
                    ],
                    "page_nums": [
                        9,
                        10
                    ],
                    "images": []
                },
                "7": {
                    "title": "2 Investigate the effectiveness of input and output vectors of word embeddings",
                    "text": [
                        "Output vector Input vector of of enjoyed enjoyed"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "8": {
                    "title": "3 Apply metaphor detection and interpretation to improve Machine Translation",
                    "text": [
                        "Chinese English Spanish English -detected ~",
                        "She devoured his novels.",
                        "Sb eSMET thet) Vi.",
                        "Chinese (Simplified) English Spanish - RT",
                        "xoo< ile te de xidoshud.",
                        "* | ennese Smpites ) English",
                        "eo =n | ( ) Translator",
                        "* SOEERR AT iE",
                        "SRE BE She enjoyed his novels.",
                        "4 o wrOoo< sin te de xi"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "9": {
                    "title": "One of novelties of our work is to model co occurrence between words with input and output vectors",
                    "text": [
                        "CBOW word2vec Input Hidden Output",
                        "Context words c, O T Target words",
                        "O ff Ban Output vector",
                        "OB ME Abandoned ii",
                        "Sona aaa (e.g., gensim word2vec",
                        "Input vec Output vec"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "10": {
                    "title": "The interaction between input and output vectors represents the co occurrence of words and contexts",
                    "text": [
                        "Input vec Output vec apple",
                        "500 iterations on wevi https://ronxin.github.io/wevi/",
                        "orange drink input vec"
                    ],
                    "page_nums": [
                        15,
                        16,
                        17,
                        18
                    ],
                    "images": []
                },
                "11": {
                    "title": "Summary",
                    "text": [
                        "Input vectors can better model the similarity between words with similar semantics and syntax;",
                        "Output vector can better model the co-occurrence between words with different Part-of-Speech"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": []
                },
                "12": {
                    "title": "The co occurrence between a target word and its context is measured by",
                    "text": [
                        "SCOTEcooccur = cos( vp, Veontext )",
                        "Veontext mL , Yen"
                    ],
                    "page_nums": [
                        20
                    ],
                    "images": []
                },
                "13": {
                    "title": "Hypotheses",
                    "text": [
                        "H11. Literal sense is more common that metaphorical.",
                        "One third of sentences in typical corpora contain metaphors.",
                        "H2. A metaphorical word can be identified, if the sense the Identify a word takes within its context and its literal sense come from metaphor"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                },
                "14": {
                    "title": "Framework",
                    "text": [
                        "cos( w;, context) cos( s;, context) argmax COS( Sz, context)! 5 w* ew",
                        "cos(h;, context) | Best fit word",
                        "literal, if S > threshold 4 . S = cos(w*, w;) metaphoric, otherwise"
                    ],
                    "page_nums": [
                        22
                    ],
                    "images": [
                        "figure/image/1334-Figure2-1.png"
                    ]
                },
                "15": {
                    "title": "Step 1 training word embedding models on Wikipedia so that we can model the common expressions",
                    "text": [
                        "W, Ox ZO W,",
                        "Train W2 O PS > LZ 0 Wa",
                        "Word Embedding XK OK",
                        "Wikipedias language could be more literal.",
                        "We model the literal so that we can identify the anomalies in metaphor in next steps. (H1)"
                    ],
                    "page_nums": [
                        23
                    ],
                    "images": []
                },
                "16": {
                    "title": "Step 2 look up WordNet to list all possible senses of a target word",
                    "text": [
                        "Candidate word set W",
                        "Separate context words and a target word.",
                        "Acandidate word set consists of hypernyms and synonyms of the target word, which represents all possible senses of the target word."
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "17": {
                    "title": "Step 3 identify the most likely sense from the candidate set",
                    "text": [
                        "Candidate word set W",
                        "argmax cos(s,,context) | w*EWw",
                        "cos( h;, context ) Best fit word",
                        "* Compute the most likely word appearing in the context.",
                        "The best fit word is interpreted as the sense that metaphor"
                    ],
                    "page_nums": [
                        25
                    ],
                    "images": []
                },
                "18": {
                    "title": "Step 4 identify the metaphoricity of a target word",
                    "text": [
                        "literal, if S > threshold metaphoric, otherwise",
                        "A metaphor could be identified as the real sense and its literal sense come from different domains. (H2)"
                    ],
                    "page_nums": [
                        26
                    ],
                    "images": []
                },
                "19": {
                    "title": "An example in Step 2",
                    "text": [
                        "She devoured his novels.",
                        "Context words: {She, his, novels}",
                        "{devour, devoured, devours, devouring}"
                    ],
                    "page_nums": [
                        27
                    ],
                    "images": []
                },
                "20": {
                    "title": "An example in Step 3",
                    "text": [
                        "Veontext = m Ven 3 (Vsne + Vhis + Vnovels)",
                        "cos(Vgestroyed Veontext ) = 0.04",
                        "arg max O i _",
                        "Best fit word = enjoyed"
                    ],
                    "page_nums": [
                        28
                    ],
                    "images": []
                },
                "21": {
                    "title": "An example in Step 4",
                    "text": [
                        ": literal, if S > threshold",
                        "S= COS(Ven joy Vaevour) . .",
                        "Best fit word Target word"
                    ],
                    "page_nums": [
                        29
                    ],
                    "images": []
                },
                "23": {
                    "title": "Examine on Machine Translation before paraphrasing",
                    "text": [
                        "Translate Tum off instant translation @",
                        "Chinese English Spanish English - detected ~ 4 Chinese (Simplified) English Spanish ~ Ea",
                        "She devoured his novels. * thaSIti-7-AhAO/) i. * She (physically) swallowed his novels.",
                        "TA tinshile ta de xidoshud.",
                        "HE Microsoft b 2",
                        "She voraciously wrote novels.",
                        "1 ling ton ha yan 0 8 xo stud,",
                        "She devoured his novels. HORA Alte S 1) ik. : )"
                    ],
                    "page_nums": [
                        31
                    ],
                    "images": []
                },
                "24": {
                    "title": "Examine on Machine Translation after paraphrasing",
                    "text": [
                        "Translate Tum off instant translation |",
                        "cm i ha - (C ae",
                        "She enjoyed his novels. <p EEPR HAO) iE. ) * She enjoyed his novels.",
                        "Ta thud t de aidoshus",
                        "Wx hun t Ge xido shud,",
                        "She enjoyed his novels. - PhS RAJ ii , )"
                    ],
                    "page_nums": [
                        32
                    ],
                    "images": []
                },
                "25": {
                    "title": "Experiment setup",
                    "text": [
                        "Metaphor identification: e Sentence level: inputs are original sentences e Phrase level: inputs are parsed phrases"
                    ],
                    "page_nums": [
                        34
                    ],
                    "images": []
                },
                "26": {
                    "title": "Dataset and baselines",
                    "text": [
                        "Shutova et al. (2016) used Skip-gram input vectors to model the similarity between two component",
                        "* Rei et al. (2017) used sigmoid function, projecting",
                        "Skip-gram input vectors into another space, then",
                        "Sentence Phrase training a deep neural network based classifier."
                    ],
                    "page_nums": [
                        35
                    ],
                    "images": []
                },
                "28": {
                    "title": "Evaluation with different thresholds",
                    "text": [
                        "P R Fl Fl gim-csow, o Filsim-se, O",
                        "Table 2: Model performance vs. different threshold (7) settings. | NB: the sentence level results are based on"
                    ],
                    "page_nums": [
                        37
                    ],
                    "images": [
                        "figure/image/1334-Table2-1.png"
                    ]
                },
                "29": {
                    "title": "Experiment design for Machine Translation evaluation",
                    "text": [
                        "The ex-boxer's job is to bounce people who want to enter this private club. bounce: eject from the premises Good / Bad",
                        "BB ARENT EE BRA EAN HEA EL ERB APB EFAS TEE LAR BEE A EL ER BA SB AB RAST ea HEA BEA AL ER PB A AS eGR eT BI AE A ER A BPR RNS TET A fea EEA XL ERB. BB EASE eT aa BE AEE A LER A,",
                        "Google translation on the original sentence.",
                        "Bing Translation on the original sentence.",
                        "Google translation on our model paraphrased sentence.",
                        "Google translation on Context2Vec paraphrased sentence. Bing Translation on Context2Vec paraphrased sentence."
                    ],
                    "page_nums": [
                        38
                    ],
                    "images": [
                        "figure/image/1334-Figure6-1.png"
                    ]
                },
                "30": {
                    "title": "Metaphor interpretation results",
                    "text": [
                        "L) Paraphrased by our model",
                        "A Paraphrased by the baseline (Melamud et al. 2016)",
                        "Literal Metaphoric Overall Literal Metaphoric Overall"
                    ],
                    "page_nums": [
                        39
                    ],
                    "images": [
                        "figure/image/1334-Figure5-1.png"
                    ]
                },
                "31": {
                    "title": "Takeaway",
                    "text": [
                        "A novel model for metaphor identification and interpretation on sentence level.",
                        "A metaphor could be identified by its interpretation.",
                        "Input and output vectors could better model the",
                        "co-occurrence between two words.",
                        "Effective paraphrasing of metaphors could improve"
                    ],
                    "page_nums": [
                        41
                    ],
                    "images": []
                }
            },
            "paper_title": "Word Embedding and WordNet Based Metaphor Identification and Interpretation",
            "paper_id": "1334",
            "paper": {
                "title": "Word Embedding and WordNet Based Metaphor Identification and Interpretation",
                "abstract": "Metaphoric expressions are widespread in natural language, posing a significant challenge for various natural language processing tasks such as Machine Translation. Current word embedding based metaphor identification models cannot identify the exact metaphorical words within a sentence. In this paper, we propose an unsupervised learning method that identifies and interprets metaphors at word-level without any preprocessing, outperforming strong baselines in the metaphor identification task. Our model extends to interpret the identified metaphors, paraphrasing them into their literal counterparts, so that they can be better translated by machines. We evaluated this with two popular translation systems for English to Chinese, showing that our model improved the systems significantly.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Metaphor enriches language, playing a significant role in communication, cognition, and decision making."
                    },
                    {
                        "id": 1,
                        "string": "Relevant statistics illustrate that about one third of sentences in typical corpora contain metaphor expressions (Cameron, 2003; Martin, 2006; Steen et al., 2010; Shutova, 2016) ."
                    },
                    {
                        "id": 2,
                        "string": "Linguistically, metaphor is defined as a language expression that uses one or several words to represent another concept, rather than taking their literal meanings of the given words in the context (Lagerwerf and Meijers, 2008) ."
                    },
                    {
                        "id": 3,
                        "string": "Computational metaphor processing refers to modelling non-literal expressions (e.g., metaphor, metonymy, and personification) and is useful for improving many NLP tasks such as Machine Translation (MT) and Sentiment Analysis (Rentoumi et al., 2012) ."
                    },
                    {
                        "id": 4,
                        "string": "For instance, Google Translate failed in translating devour within a sentence, \"She devoured his novels.\""
                    },
                    {
                        "id": 5,
                        "string": "(Mohammad et al., 2016) , into Chinese."
                    },
                    {
                        "id": 6,
                        "string": "The term was translated into 吞噬, which takes the literal sense of swallow and is not understandable in Chinese."
                    },
                    {
                        "id": 7,
                        "string": "Interpreting metaphors allows us to paraphrase them into literal expressions which maintain the intended meaning and are easier to translate."
                    },
                    {
                        "id": 8,
                        "string": "Metaphor identification approaches based on word embeddings have become popular (Tsvetkov et al., 2014; Rei et al., 2017) as they do not rely on hand-crafted knowledge for training."
                    },
                    {
                        "id": 9,
                        "string": "These models follow a similar paradigm in which input sentences are first parsed into phrases and then the metaphoricity of the phrases is identified; they do not tackle word-level metaphor."
                    },
                    {
                        "id": 10,
                        "string": "E.g., given the former sentence \"She devoured his novels."
                    },
                    {
                        "id": 11,
                        "string": "\", the aforementioned methods will first parse the sentence into a verb-direct object phrase devour novel, and then detect the clash between devour and novel, flagging this phrase as a likely metaphor."
                    },
                    {
                        "id": 12,
                        "string": "However, which component word is metaphorical cannot be identified, as important contextual words in the sentence were excluded while processing these phrases."
                    },
                    {
                        "id": 13,
                        "string": "Discarding contextual information also leads to a failure to identify a metaphor when both words in the phrase are metaphorical, but taken out of context they appear literal."
                    },
                    {
                        "id": 14,
                        "string": "E.g., \"This young man knows how to climb the social ladder.\""
                    },
                    {
                        "id": 15,
                        "string": "(Mohammad et al., 2016 ) is a metaphorical expression."
                    },
                    {
                        "id": 16,
                        "string": "However, when the sentence is parsed into a verbdirect object phrase, climb ladder, it appears literal."
                    },
                    {
                        "id": 17,
                        "string": "In this paper, we propose an unsupervised metaphor processing model which can identify and interpret linguistic metaphors at the wordlevel."
                    },
                    {
                        "id": 18,
                        "string": "Specifically, our model is built upon word embedding methods (Mikolov et al., 2013) and uses WordNet (Fellbaum, 1998) for lexical re-lation acquisition."
                    },
                    {
                        "id": 19,
                        "string": "Our model is distinguished from existing methods in two aspects."
                    },
                    {
                        "id": 20,
                        "string": "First, our model is generic which does not constrain the source domain of metaphor."
                    },
                    {
                        "id": 21,
                        "string": "Second, the developed model does not rely on any labelled data for model training, but rather captures metaphor in an unsupervised, data-driven manner."
                    },
                    {
                        "id": 22,
                        "string": "Linguistic metaphors are identified by modelling the distance (in vector space) between the target word's literal and metaphorical senses."
                    },
                    {
                        "id": 23,
                        "string": "The metaphorical sense within a sentence is identified by its surrounding context within the sentence, using word embedding representations and WordNet."
                    },
                    {
                        "id": 24,
                        "string": "This novel approach allows our model to operate at the sentence level without any preprocessing, e.g., dependency parsing."
                    },
                    {
                        "id": 25,
                        "string": "Taking contexts into account also addresses the issue that a two-word phrase appears literal, but it is metaphoric within a sentence (e.g., the climb ladder example)."
                    },
                    {
                        "id": 26,
                        "string": "We evaluate our model against three strong baselines (Melamud et al., 2016; Rei et al., 2017) on the task of metaphor identification."
                    },
                    {
                        "id": 27,
                        "string": "Extensive experimentation conducted on a publicly available dataset (Mohammad et al., 2016) shows that our model significantly outperforms the unsupervised learning baselines (Melamud et al., 2016; on both phrase and sentence evaluation, and achieves equivalent performance to the state-ofthe-art deep learning baseline (Rei et al., 2017) on phrase-level evaluation."
                    },
                    {
                        "id": 28,
                        "string": "In addition, while most of the existing works on metaphor processing solely evaluate the model performance in terms of metaphor classification accuracy, we further conducted another set of experiments to evaluate how metaphor processing can be used for supporting the task of MT."
                    },
                    {
                        "id": 29,
                        "string": "Human evaluation shows that our model improves the metaphoric translation significantly, by testing on two prominent translation systems, namely, Google Translate 1 and Bing Translator 2 ."
                    },
                    {
                        "id": 30,
                        "string": "To our best knowledge, this is the first metaphor processing model that is evaluated on MT."
                    },
                    {
                        "id": 31,
                        "string": "To summarise, the contributions of this paper are two-fold: (1) we proposed a novel framework for metaphor identification which does not require any preprocessing or annotated corpora for training; (2) we conducted, to our knowledge, the first metaphor interpretation study of apply-ing metaphor processing for supporting MT."
                    },
                    {
                        "id": 32,
                        "string": "We describe related work in §2, followed by our labelling method in §4, experimental design in §5, results in §6 and conclusions in §7."
                    },
                    {
                        "id": 33,
                        "string": "Related Work A wide range of methods have been applied for computational metaphor processing."
                    },
                    {
                        "id": 34,
                        "string": "Turney et al."
                    },
                    {
                        "id": 35,
                        "string": "(2011); ;  and Tsvetkov et al."
                    },
                    {
                        "id": 36,
                        "string": "(2014) identified metaphors by modelling the abstractness and concreteness of metaphors and non-metaphors, using a machine usable dictionary called MRC Psycholinguistic Database (Coltheart, 1981) ."
                    },
                    {
                        "id": 37,
                        "string": "They believed that metaphorical words would be more abstract than literal ones."
                    },
                    {
                        "id": 38,
                        "string": "Some researchers used topic models to identify metaphors."
                    },
                    {
                        "id": 39,
                        "string": "For instance, Heintz et al."
                    },
                    {
                        "id": 40,
                        "string": "(2013) used Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to model source and target domains, and assumed that sentences containing words from both domains are metaphorical."
                    },
                    {
                        "id": 41,
                        "string": "Strzalkowski et al."
                    },
                    {
                        "id": 42,
                        "string": "(2013) assumed that metaphorical terms occur out of the topic chain, where a topic chain is constructed by topical words that reveal the core discussion of the text."
                    },
                    {
                        "id": 43,
                        "string": "performed metaphorical concept mappings between the source and target domains in multi-languages using both unsupervised and semi-supervised learning approaches."
                    },
                    {
                        "id": 44,
                        "string": "The source and target domains are represented by semantic clusters, which are derived through the distribution of the co-occurrences of words."
                    },
                    {
                        "id": 45,
                        "string": "They also assumed that when contextual vocabularies are from different domains then there is likely to be a metaphor."
                    },
                    {
                        "id": 46,
                        "string": "There is another line of approaches based on word embeddings."
                    },
                    {
                        "id": 47,
                        "string": "Generally, these works are not limited by conceptual domains and hand-crafted knowledge."
                    },
                    {
                        "id": 48,
                        "string": "proposed a model that identified metaphors by employing word and image embeddings."
                    },
                    {
                        "id": 49,
                        "string": "The model first parses sentences into phrases which contain target words."
                    },
                    {
                        "id": 50,
                        "string": "In their word embedding based approach, the metaphoricity of a phrase was identified by measuring the cosine similarity of two component words in the phrase, based on their input vectors from Skip-gram word embeddings."
                    },
                    {
                        "id": 51,
                        "string": "If the cosine similarity is higher than a threshold, the phrase is identified as literal; otherwise metaphorical."
                    },
                    {
                        "id": 52,
                        "string": "Rei et al."
                    },
                    {
                        "id": 53,
                        "string": "(2017) identified metaphors by introducing a deep learning architecture."
                    },
                    {
                        "id": 54,
                        "string": "Instead of using word input vectors directly, they filtered out noisy in- T .. C 1 … C n … C m .."
                    },
                    {
                        "id": 55,
                        "string": "Input Hidden Output CBOW W i W o C 1 … C n … C m .. T .."
                    },
                    {
                        "id": 56,
                        "string": "Input Hidden Output Skip-gram formation in the vector of one word in a phrase, projecting the word vector into another space via a sigmoid activation function."
                    },
                    {
                        "id": 57,
                        "string": "The metaphoricity of the phrases was learnt via training a supervised deep neural network."
                    },
                    {
                        "id": 58,
                        "string": "The above word embedding based models, while demonstrating some success in metaphor identification, only explored using input vectors, which might hinder their performance."
                    },
                    {
                        "id": 59,
                        "string": "In addition, metaphor identification is highly dependent on its context."
                    },
                    {
                        "id": 60,
                        "string": "Therefore, phrase-level models (e.g., Tsvetkov et al."
                    },
                    {
                        "id": 61,
                        "string": "(2014) ; ; Rei et al."
                    },
                    {
                        "id": 62,
                        "string": "(2017) ) are likely to fail in the metaphor identification task if important contexts are excluded."
                    },
                    {
                        "id": 63,
                        "string": "In contrast, our model can operate at the sentence level which takes into account rich context and hence can improve the performance of metaphor identification."
                    },
                    {
                        "id": 64,
                        "string": "Preliminary: CBOW and Skip-gram Our metaphor identification framework is built upon word embedding, which is based on Continuous Bag of Words (CBOW) and Skip-gram (Mikolov et al., 2013) ."
                    },
                    {
                        "id": 65,
                        "string": "In CBOW (see Figure 1 ), the input and output layers are context (C) and centre word (T) one-hot encodings, respectively."
                    },
                    {
                        "id": 66,
                        "string": "The model is trained by maximizing the probability of predicting a centre word, given its context (Rong, 2014) : arg max p(t|c 1 , ..., c n , ..., c m ) (1) where t is a centre word, c n is the nth context word of t within a sentence, totally m context words."
                    },
                    {
                        "id": 67,
                        "string": "CBOW's hidden layer is defined as: H CBOW = 1 m × W i × m n=1 C n = 1 m × m n=1 v i c,n (2) where C n is the one-hot encoding of the nth context word, v i c,n is the nth context word row vector (input vector) in W i which is a weight matrix between input and hidden layers."
                    },
                    {
                        "id": 68,
                        "string": "Thus, the hidden layer is the transpose of the average of input vectors of context words."
                    },
                    {
                        "id": 69,
                        "string": "The probability of predicting a centre word in its context is given by a softmax function below: u t = W o t × H CBOW = v o t × H CBOW (3) p(t|c 1 , ..., c n , ..., c m ) = exp(u t ) V j=1 exp(u j ) (4) where W o t is equivalent to the output vector v o t which is essentially a column vector in a weight matrix W o that is between hidden and output layers, aligning with the centre word t. V is the size of vocabulary in the corpus."
                    },
                    {
                        "id": 70,
                        "string": "The output is a one-hot encoding of the centre word."
                    },
                    {
                        "id": 71,
                        "string": "W i and W o are updated via back propagation of errors."
                    },
                    {
                        "id": 72,
                        "string": "Therefore, only the value of the position that represents the centre word's probability, i.e., p(t|c 1 , ..., c n , ..., c m ), will get close to the value of 1."
                    },
                    {
                        "id": 73,
                        "string": "In contrast, the probability of the rest of the words in the vocabulary will be close to 0 in every centre word training."
                    },
                    {
                        "id": 74,
                        "string": "W i embeds context words."
                    },
                    {
                        "id": 75,
                        "string": "Vectors within W i can be viewed as context word embeddings."
                    },
                    {
                        "id": 76,
                        "string": "W o embeds centre words, vectors in W o can be viewed as centre word embeddings."
                    },
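                    {
                        "id": "76a",
                        "string": "To make Eqs. 1-4 concrete, the following toy numpy sketch (our illustration, not code from the paper; the vocabulary size, dimension, and word indices are made up) computes the CBOW hidden layer as the average of the context words' input vectors and then the softmax over centre words:"
                    },
                    {
                        "id": "76b",
                        "string": "import numpy as np\n\nV, d = 10, 4                       # toy vocabulary size and embedding dimension\nrng = np.random.default_rng(0)\nW_i = rng.normal(size=(V, d))      # input vectors: one row per word\nW_o = rng.normal(size=(d, V))      # output vectors: one column per word\n\ncontext_ids = [2, 5, 7]            # indices of the m context words\nh = W_i[context_ids].mean(axis=0)  # Eq. 2: average of context input vectors\nu = W_o.T @ h                      # Eq. 3: a score u_t for every centre word t\np = np.exp(u) / np.exp(u).sum()    # Eq. 4: softmax over the vocabulary\nprint(p.argmax(), p.max())         # most probable centre word under the toy model"
                    },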
                    {
                        "id": 77,
                        "string": "Skip-gram is the reverse of CBOW (see Figure 1) ."
                    },
                    {
                        "id": 78,
                        "string": "The input and output layers are centre word and context word one-hot encodings, respectively."
                    },
                    {
                        "id": 79,
                        "string": "The target is to maximize the probability of predicting each context word, given a centre word: arg max p(c 1 , ..., c n , ..., c m |t) Skip-gram's hidden layer is defined as: H SG = W i × T = v i t (6) where T is the one-hot encoding of the centre word t. Skip-gram's hidden layer is equal to the transpose of a centre word's input vector v t , as only the tth row are kept by the operation."
                    },
                    {
                        "id": 80,
                        "string": "The probability of a context word is:  where c, n is the nth context word, given a centre word."
                    },
                    {
                        "id": 81,
                        "string": "In Skip-gram, W i aligns to centre words, while W o aligns to context words."
                    },
                    {
                        "id": 82,
                        "string": "Because the names of centre word and context word embeddings are reversed in CBOW and Skip-gram, we will uniformly call vectors in W i input vectors v i , and vectors in W o output vectors v o in the remaining sections."
                    },
                    {
                        "id": 83,
                        "string": "Word embeddings represent both input and output vectors."
                    },
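                    {
                        "id": "83a",
                        "string": "Analogously, a toy sketch of Eqs. 6-8 (ours; the same made-up shapes as above): multiplying W_i by the centre word's one-hot encoding keeps only one row, so the Skip-gram hidden layer is just a row lookup."
                    },
                    {
                        "id": "83b",
                        "string": "import numpy as np\n\nV, d = 10, 4\nrng = np.random.default_rng(0)\nW_i = rng.normal(size=(V, d))            # input vectors (one row per word)\nW_o = rng.normal(size=(d, V))            # output vectors (one column per word)\n\nt = 3                                    # index of the centre word\nh_sg = W_i[t]                            # Eq. 6: W_i x one-hot(t) keeps row t only\nu = W_o.T @ h_sg                         # Eq. 7: a score per candidate context word\np_context = np.exp(u) / np.exp(u).sum()  # Eq. 8: softmax over the vocabulary"
                    },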
                    {
                        "id": 84,
                        "string": "u c,n = W o c,n × H SG = v o c,n × H SG (7) p(c n |t) = exp(u c,n ) V j=1 exp(u j ) (8) w t s 1 s 2 … h j .. w c1 w c2 w c3 … w cm .. .. cos( w t , context ) cos( s 1 , context ) cos( s 2 , context ) … cos( h j , context ) agrmax w* ∈W Best fit word Methodology In this section, we present the technical details of our metaphor processing framework, built upon two hypotheses."
                    },
                    {
                        "id": 85,
                        "string": "Our first hypothesis (H1) is that a metaphorical word can be identified, if the sense the word takes within its context and its literal sense come from different domains."
                    },
                    {
                        "id": 86,
                        "string": "Such a hypothesis is based on the theory of Selectional Preference Violation (Wilks, 1975 (Wilks, , 1978 ) that a metaphorical item can be found in a violation of selectional restrictions, where a word does not satisfy its semantic constrains within a context."
                    },
                    {
                        "id": 87,
                        "string": "Our second hypothesis (H2) is that the literal senses of words occur more commonly in corpora than their metaphoric senses (Cameron, 2003; Martin, 2006; Steen et al., 2010; Shutova, 2016) ."
                    },
                    {
                        "id": 88,
                        "string": "Figure 2 depicts an overview of our metaphor identification framework."
                    },
                    {
                        "id": 89,
                        "string": "The workflow of our framework is as follows."
                    },
                    {
                        "id": 90,
                        "string": "Step (1) involves training word embeddings based on a Wikipedia dump 3 for obtaining input and output vectors of words."
                    },
                    {
                        "id": 91,
                        "string": "3 https://dumps.wikimedia.org/enwiki/20170920/"
                    },
                    {
                        "id": 92,
                        "string": "[Figure 3: For the sentence \"She devoured his novels.\", the WordNet senses of devour with their synonyms and hypernyms (e.g., destroy, enjoy, eat up, raven) and their inflections; given CBOW trained vectors, cos(v^o_devoured, v^i_context) = -0.01 while cos(v^o_enjoyed, v^i_context) = 0.02.]"
                    },
                    {
                        "id": 93,
                        "string": "In Step (2) , given an input sentence, the target word (i.e., the word in the original text whose metaphoricity is to be determined) and its context words (i.e., all other words in the sentence excluding the target word) are separated."
                    },
                    {
                        "id": 94,
                        "string": "We construct a candidate word set W which represents all the possible senses of the target word."
                    },
                    {
                        "id": 95,
                        "string": "This is achieved by first extracting the synonyms and direct hypernyms of the target word from WordNet, and then augmenting the set with the inflections of the extracted synonyms and hypernyms, as well as the target word and its inflections."
                    },
                    {
                        "id": 96,
                        "string": "Auxiliary verbs are excluded from this set, as these words frequently appear in most sentences with little lexical meaning."
                    },
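                    {
                        "id": "96a",
                        "string": "As a rough sketch of this candidate-set construction (our code, using NLTK's WordNet interface; the inflect_forms helper is a hypothetical stand-in for a real inflection generator, and the auxiliary list is abbreviated):"
                    },
                    {
                        "id": "96b",
                        "string": "from nltk.corpus import wordnet as wn\n\nAUXILIARIES = {'be', 'do', 'have'}  # abbreviated; the real exclusion list is longer\n\ndef inflect_forms(lemma):\n    # hypothetical helper: crude stand-in for a proper verb inflection generator\n    return {lemma, lemma + 's', lemma + 'ed', lemma + 'ing'}\n\ndef candidate_set(target_lemma):\n    candidates = {target_lemma}                                  # the target word itself\n    for synset in wn.synsets(target_lemma, pos=wn.VERB):\n        candidates |= {l.name() for l in synset.lemmas()}        # synonyms\n        for hyper in synset.hypernyms():                         # direct hypernyms\n            candidates |= {l.name() for l in hyper.lemmas()}\n    candidates -= AUXILIARIES\n    return set().union(*(inflect_forms(c) for c in candidates))  # add inflections\n\nprint(sorted(candidate_set('devour'))[:10])"
                    },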
                    {
                        "id": 97,
                        "string": "In Step (3) , we identify the best fit word, which is defined as the word that represents the literal sense that the target word is most likely taking given its context."
                    },
                    {
                        "id": 98,
                        "string": "Finally, in Step (4) , we compute the cosine similarity between the target word and the best fit word."
                    },
                    {
                        "id": 99,
                        "string": "If the similarity is above a threshold, the target word will be identified as literal, otherwise metaphoric (i.e., based on H1)."
                    },
                    {
                        "id": 100,
                        "string": "We will discuss in detail Step (3) and Step (4) in §4.1."
                    },
                    {
                        "id": 101,
                        "string": "Metaphor identification Step (3): One of the key steps of our metaphor identification framework is to identify the best fit word for a target word given its surrounding context."
                    },
                    {
                        "id": 102,
                        "string": "The intuition is that the best fit word will represent the literal sense that the target word is most likely taking."
                    },
                    {
                        "id": 103,
                        "string": "E.g., for the sentence \"She devoured his novels.\""
                    },
                    {
                        "id": 104,
                        "string": "and the corresponding target word devoured, the best fit word is enjoyed, as shown in Figure 3 ."
                    },
                    {
                        "id": 105,
                        "string": "Also note that the best fit word could be the target word itself if the target word is used literally."
                    },
                    {
                        "id": 106,
                        "string": "Given a sentence s, let w t be the target word of the sentence, w * ∈ W the best fit word for w t , and w context the surrounding context for w t , i.e., all the words in s excluding w t ."
                    },
                    {
                        "id": 107,
                        "string": "We compute the context embedding v i context by averaging out the input vectors of each context word of w context , based on Eq."
                    },
                    {
                        "id": 108,
                        "string": "2."
                    },
                    {
                        "id": 109,
                        "string": "Next, we rank each candidate word k ∈ W by measuring its similarity to the context input vector v i context in the vector space."
                    },
                    {
                        "id": 110,
                        "string": "The candidate word with the highest similarity to the context is then selected as the best fit word."
                    },
                    {
                        "id": 111,
                        "string": "w * = arg max k SIM(v k , v context ) (9) where v k is the vector of a candidate word k ∈ W. In contrast to existing word embedding based methods for metaphor identification which only make use of input vectors Rei et al., 2017) , we explore using both input and output vectors of CBOW and Skip-gram embeddings when measuring the similarity between a candidate word and the context."
                    },
                    {
                        "id": 112,
                        "string": "We expect that using a combination of input and output vectors might work better."
                    },
                    {
                        "id": 113,
                        "string": "Specifically, we have experimented with four different model variants as shown below."
                    },
                    {
                        "id": 114,
                        "string": "SIM-CBOW I = cos(v i k,cbow , v i context,cbow ) (10) SIM-CBOW I+O = cos(v o k,cbow , v i context,cbow ) (11) SIM-SG I = cos(v i k,sg , v i context,sg ) (12) SIM-SG I+O = cos(v o k,sg , v i context,sg ) (13) Here, cos(·) is cosine similarity, cbow is CBOW word embeddings, sg is Skip-gram word embeddings."
                    },
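                    {
                        "id": "114a",
                        "string": "A minimal sketch (ours, not the authors' released implementation) of Eq. 9 with the I and I+O variants above; it assumes a gensim-style model where input vectors live in model.wv and, when negative sampling is used, output vectors are stored in the syn1neg array (attribute names differ across gensim versions):"
                    },
                    {
                        "id": "114b",
                        "string": "import numpy as np\n\ndef cos(a, b):\n    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef best_fit_word(model, context_words, candidates, use_output=True):\n    # Eq. 2: context embedding = average of the context words' input vectors\n    ctx = [model.wv[w] for w in context_words if w in model.wv]\n    v_context = np.mean(ctx, axis=0)\n    out = getattr(model, 'syn1neg', None)   # output vectors, if available\n    def cand_vec(w):\n        if use_output and out is not None:  # SIM-*_{I+O}: candidate output vector\n            return out[model.wv.key_to_index[w]]\n        return model.wv[w]                  # SIM-*_{I}: candidate input vector\n    # Eq. 9: pick the candidate whose vector best matches the context\n    scored = [(cos(cand_vec(w), v_context), w) for w in candidates if w in model.wv]\n    return max(scored)[1] if scored else None"
                    },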
                    {
                        "id": 115,
                        "string": "We have also tried other model variants using output vectors for v context ."
                    },
                    {
                        "id": 116,
                        "string": "However, we found that the models using output vectors for v context (both CBOW and Skip-gram embeddings) do not improve our framework performance."
                    },
                    {
                        "id": 117,
                        "string": "Due to the page limit we omitted the results of those models in this paper."
                    },
                    {
                        "id": 118,
                        "string": "Step (4) : Given a predicted best fit word w * identified in Step (3) , we then compute the cosine similarity between the lemmatizations of w * and the target word w t using their input vectors."
                    },
                    {
                        "id": 119,
                        "string": "SIM(w * , w t ) = cos(v i w * , v i wt ) (14) We give a detailed discussion in §4.2 of our rationale for using input vectors for Eq."
                    },
                    {
                        "id": 120,
                        "string": "14."
                    },
                    {
                        "id": 121,
                        "string": "If the similarity is higher than a threshold (τ ) the target word is considered as literal, otherwise, metaphorical (based on H1)."
                    },
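                    {
                        "id": "121a",
                        "string": "Step (4) then reduces to a single comparison; a short sketch under the same assumptions (it reuses cos from the sketch above, lemmatize is a hypothetical helper, and tau = 0.6 follows the development-set value reported in §5):"
                    },
                    {
                        "id": "121b",
                        "string": "def identify(model, target, best_fit, lemmatize, tau=0.6):\n    # Eq. 14: compare input vectors of the lemmatized target and best fit word\n    sim = cos(model.wv[lemmatize(target)], model.wv[lemmatize(best_fit)])\n    if sim > tau:\n        return 'literal', target         # H1: senses come from the same domain\n    return 'metaphorical', best_fit      # paraphrase with the literal counterpart"
                    },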
                    {
                        "id": 122,
                        "string": "One benefit of our approach is that it allows one to paraphrase the identified metaphorical target word into the best fit word, representing its literal sense in the context."
                    },
                    {
                        "id": 123,
                        "string": "Such a feature is useful for supporting other NLP tasks such as Machine Translation, which we will explore in §6."
                    },
                    {
                        "id": 124,
                        "string": "In terms of the value of threshold (τ ), it is empirically determined based on a development set."
                    },
                    {
                        "id": 125,
                        "string": "Please refer to §5 for details."
                    },
                    {
                        "id": 126,
                        "string": "To better explain the workflow of our framework, we now go through an example as illustrated in Figure 3 ."
                    },
                    {
                        "id": 127,
                        "string": "The target word of the input sentence, \"She devoured his novels.\""
                    },
                    {
                        "id": 128,
                        "string": "is devoured, and its the lemmatised form devour has four verbal senses in WordNet, i.e., destroy completely, enjoy avidly, eat up completely with great appetite, and eat greedily."
                    },
                    {
                        "id": 129,
                        "string": "Each of these senses has a set of corresponding synonyms and hypernyms."
                    },
                    {
                        "id": 130,
                        "string": "E.g., Sense 3 (eat up completely with great appetite) has synonyms demolish, down, consume, and hypernyms go through, eat up, finish, and polish off."
                    },
                    {
                        "id": 131,
                        "string": "We then construct a candidate word set W by including the synonyms and direct hypernyms of the target word from WordNet, and then augmenting the set with the inflections of the extracted synonyms and hypernyms, as well as the target word devour and its inflections."
                    },
                    {
                        "id": 132,
                        "string": "We then identify the best fit word given the context she [ ] his novels based on Eq."
                    },
                    {
                        "id": 133,
                        "string": "9."
                    },
                    {
                        "id": 134,
                        "string": "Based on H2, literal expressions are more common than metaphoric ones in corpora."
                    },
                    {
                        "id": 135,
                        "string": "Therefore, the best fit word is expected to frequently appear within the given context, and thus represents the most likely sense of the target word."
                    },
                    {
                        "id": 136,
                        "string": "For example, the similarity between enjoy (i.e., the best fit word) and the the context is higher than that of devour (i.e., the target word), as shown in Figure 3 ."
                    },
                    {
                        "id": 137,
                        "string": "Word embedding: output vectors vs. input vectors Typically, input vectors are used after training CBOW and Skip-gram, with output vectors being abandoned by practical models, e.g., original word2vec model (Mikolov et al., 2013) and Gensim toolkit (Řehůřek and Sojka, 2010), as these models are designed for modelling similarities in semantics."
                    },
                    {
                        "id": 138,
                        "string": "However, we found that using input vectors to measure cosine similarity between two words with different POS types in a phrase is sub-  optimal, as words with different POS normally have different semantics."
                    },
                    {
                        "id": 139,
                        "string": "They tend to be distant from each other in the input vector space."
                    },
                    {
                        "id": 140,
                        "string": "Taking Skip-gram for example, empirically, input vectors of words with the same POS, occurring within the same contexts tend to be close in the vector space (Mikolov et al., 2013) , as they are frequently updated by back propagating the errors from the same context words."
                    },
                    {
                        "id": 141,
                        "string": "In contrast, input vectors of words with different POS, playing different semantic and syntactic roles tend to be distant from each other, as they seldom occur within the same contexts, resulting in their input vectors rarely being updated equally."
                    },
                    {
                        "id": 142,
                        "string": "Our observation is also in line with Nalisnick et al."
                    },
                    {
                        "id": 143,
                        "string": "(2016) , who examine IN-IN, OUT-OUT and IN-OUT vectors to measure similarity between two words."
                    },
                    {
                        "id": 144,
                        "string": "Nalisnick et al."
                    },
                    {
                        "id": 145,
                        "string": "discovered that two words which are similar by function or type have higher cosine similarity with IN-IN or OUT-OUT vectors, while using input and output vectors for two words (IN-OUT) that frequently co-occur in the same context (e.g., a sentence) can obtain a higher similarity score."
                    },
                    {
                        "id": 146,
                        "string": "For illustrative purpose, we visualize the CBOW and Skip-gram updates between 4dimensional input and output vectors by Wevi 4 (Rong, 2014) , using a two-sentence corpus, \"Drink apple juice.\""
                    },
                    {
                        "id": 147,
                        "string": "and \"Drink orange juice.\"."
                    },
                    {
                        "id": 148,
                        "string": "We feed these two sentences to CBOW and Skipgram with 500 iterations."
                    },
                    {
                        "id": 149,
                        "string": "As seen Figure 4 , the input vectors of apple and orange are similar in both CBOW and Skip-gram, which are different from the input vectors of their context words (drink and juice)."
                    },
                    {
                        "id": 150,
                        "string": "However, the output vectors of apple and orange are similar to the input vectors of drink and juice."
                    },
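                    {
                        "id": "150a",
                        "string": "The same effect can be reproduced with a toy script (ours, not the paper's Wevi setup; gensim 4.x attribute names assumed), training Skip-gram on the two sentences and comparing IN-IN against IN-OUT similarities:"
                    },
                    {
                        "id": "150b",
                        "string": "import numpy as np\nfrom gensim.models import Word2Vec\n\ncorpus = [['drink', 'apple', 'juice'], ['drink', 'orange', 'juice']]\nm = Word2Vec(corpus, vector_size=4, window=2, min_count=1, sg=1,\n             negative=5, epochs=500, seed=1)\n\ndef cos(a, b):\n    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef i(w):\n    return m.wv[w]                           # input vector\n\ndef o(w):\n    return m.syn1neg[m.wv.key_to_index[w]]   # output vector\n\nprint(cos(i('apple'), i('orange')))  # IN-IN: expected high (same slot, same POS role)\nprint(cos(o('apple'), i('drink')))   # IN-OUT: expected high (frequent co-occurrence)"
                    },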
                    {
                        "id": 151,
                        "string": "To summarise, using input vectors to compare similarity between the best fit word and the target word is more appropriate (cf."
                    },
                    {
                        "id": 152,
                        "string": "Eq.14), as they tend to have the same types of POS."
                    },
                    {
                        "id": 153,
                        "string": "When measuring the similarity between candidate words and the context, using output vectors for the former and input vectors for the latter seems to better predict the best fit word."
                    },
                    {
                        "id": 154,
                        "string": "Experimental settings Baselines."
                    },
                    {
                        "id": 155,
                        "string": "We compare the performance of our framework for metaphor identification against three strong baselines, namely, an unsupervised word embedding based model by , a supervised deep learning model by Rei et al."
                    },
                    {
                        "id": 156,
                        "string": "(2017) , and the Context2Vec model 5 (Melamud et al., 2016) which achieves the best performance on Microsoft Sentence Completion Challenge (Zweig and Burges, 2011) ."
                    },
                    {
                        "id": 157,
                        "string": "Context2Vec was not designed for processing metaphors, in order to use it for this we plug it into a very similar framework to that described in Figure 2 ."
                    },
                    {
                        "id": 158,
                        "string": "We use Context2Vec to predict the best fit word from the candidate set, as it similarly uses context to predict the most likely centre word but with bidirectional LSTM based context embedding method."
                    },
                    {
                        "id": 159,
                        "string": "After locating the best fit word with Context2Vec, we identify the metaphoricity of a target word with the same method (see Step (4) in §4), so that we can also apply it for metaphor interpretation."
                    },
                    {
                        "id": 160,
                        "string": "Note that while Shutova et al."
                    },
                    {
                        "id": 161,
                        "string": "and Rei et al."
                    },
                    {
                        "id": 162,
                        "string": "detect Mohammad et al."
                    },
                    {
                        "id": 163,
                        "string": "(2016) ."
                    },
                    {
                        "id": 164,
                        "string": "This dataset 6 , containing 1,230 literal and 409 metaphor sentences, has been widely used for metaphor identification related research Rei et al., 2017) ."
                    },
                    {
                        "id": 165,
                        "string": "There is a verbal target word annotated by 10 annotators in each sentence."
                    },
                    {
                        "id": 166,
                        "string": "We use two subsets of the Mohammad et al."
                    },
                    {
                        "id": 167,
                        "string": "set, one for phrase evaluation and one for sentence evaluation."
                    },
                    {
                        "id": 168,
                        "string": "The phrase evaluation dataset was kindly provided by Shutova, which consists of 316 metaphorical and 331 literal phrases (subject-verb and verb-direct object word pairs), parsed from Mohammad et al."
                    },
                    {
                        "id": 169,
                        "string": "'s dataset."
                    },
                    {
                        "id": 170,
                        "string": "Similar to , we use 40 metaphoric and 40 literal phrases as a development set and the rest as a test set."
                    },
                    {
                        "id": 171,
                        "string": "For sentence evaluation, we select 212 metaphorical sentences whose target words are annotated with at least 70% agreement."
                    },
                    {
                        "id": 172,
                        "string": "We also add 212 literal sentences with the highest agreement."
                    },
                    {
                        "id": 173,
                        "string": "Among the 424 sentences, we form our development set with 12 randomly selected metaphoric and 12 literal instances to identify the threshold for detecting metaphors."
                    },
                    {
                        "id": 174,
                        "string": "The remaining 400 sentences are our testing set."
                    },
                    {
                        "id": 175,
                        "string": "Word embedding training."
                    },
                    {
                        "id": 176,
                        "string": "We train CBOW and Skip-gram models on a Wikipedia dump with the same settings as  and Rei et al."
                    },
                    {
                        "id": 177,
                        "string": "(2017) ."
                    },
                    {
                        "id": 178,
                        "string": "That is, CBOW and Skip-gram models are trained iteratively 3 times on Wikipedia with a context window of 5 to learn 100-dimensional word input and output vectors."
                    },
                    {
                        "id": 179,
                        "string": "We exclude words with total frequency less than 100."
                    },
                    {
                        "id": 180,
                        "string": "10 negative samples are randomly selected for each centre word training."
                    },
                    {
                        "id": 181,
                        "string": "The word down-sampling rate is 10 -5 ."
                    },
                    {
                        "id": 182,
                        "string": "We use Stanford CoreNLP (Manning et al., 2014) lemmatized Wikipedia to train word embeddings for phrase level evaluation, which is in line with ."
                    },
                    {
                        "id": 183,
                        "string": "In sentence evaluation, we use the original Wikipedia for training word embeddings."
                    },
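                    {
                        "id": "183a",
                        "string": "Assuming gensim's word2vec implementation (a plausible choice; the paper does not state which toolkit it trains with), the settings above map to roughly the following, where wiki_sentences is a placeholder for an iterable of tokenized Wikipedia sentences:"
                    },
                    {
                        "id": "183b",
                        "string": "from gensim.models import Word2Vec\n\n# 100-dim vectors, window 5, min frequency 100, 10 negative samples,\n# down-sampling rate 1e-5, 3 passes over the corpus (gensim 4.x parameter names)\ncbow = Word2Vec(wiki_sentences, sg=0, vector_size=100, window=5,\n                min_count=100, negative=10, sample=1e-5, epochs=3)\nskipgram = Word2Vec(wiki_sentences, sg=1, vector_size=100, window=5,\n                    min_count=100, negative=10, sample=1e-5, epochs=3)"
                    },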
                    {
                        "id": 184,
                        "string": "6 Experimental Results Table 1 shows the performance of our model and the baselines on the task of metaphor identification."
                    },
                    {
                        "id": 185,
                        "string": "All the results for our models are based on a threshold of 0.6, which is empirically determined based on the developing set."
                    },
                    {
                        "id": 186,
                        "string": "For sentence level metaphor identification, it can be observed that all our models outperform the baseline (Melamud et al., 2016) , with SIM-CBOW I+O giving the highest F1 score of 75% which is a 6% gain over the baseline."
                    },
                    {
                        "id": 187,
                        "string": "We also see that mod-els based on both input and output vectors (i.e., SIM-CBOW I+O and SIM-SG I+O ) yield better performance than the models based on input vectors only (i.e., SIM-CBOW I and SIM-SG I )."
                    },
                    {
                        "id": 188,
                        "string": "Such an observation supports our assumption that using input and output vectors can better model similarity between words that have different types of POS, than simply using input vectors."
                    },
                    {
                        "id": 189,
                        "string": "When comparing CBOW and Skip-gram based models, we see that CBOW based models generally achieve better performance in precision whereas Skip-gram based models perform better in recall."
                    },
                    {
                        "id": 190,
                        "string": "Metaphor identification In terms of phrase level metaphor identification, we compare our best performing models (i.e., SIM-CBOW I+O and SIM-SG I+O ) against the approaches of  and Rei et al."
                    },
                    {
                        "id": 191,
                        "string": "(2017) ."
                    },
                    {
                        "id": 192,
                        "string": "In contrast to the sentence level evaluation in which SIM-CBOW I+O gives the best performance, SIM-SG I+O performs best for the phrase level evaluation."
                    },
                    {
                        "id": 193,
                        "string": "This is likely due to the fact that Skip-gram is trained by using a centre word to maximise the probability of each context word, whereas CBOW uses the average of context word input vectors to maximise the probability of the centre word."
                    },
                    {
                        "id": 194,
                        "string": "Thus, Skip-gram performs better in modelling one-word context, while CBOW has better performance in modelling multi-context words."
                    },
                    {
                        "id": 195,
                        "string": "When comparing to the baselines, our model SIM-SG I+O significantly outperforms the word embedding based approach by , and gives the same performance as the deep supervised method (Rei et al., 2017) which requires a large amount of labelled data for training and cost in training time."
                    },
                    {
                        "id": 196,
                        "string": "SIM-CBOW I+O and SIM-SG I+O are also evaluated with different thresholds for both phrase and sentence level metaphor identification."
                    },
                    {
                        "id": 197,
                        "string": "As can be seen from Table 2 , the results are fairly stable when the threshold is set between 0.5 and 0.9 in terms of F1."
                    },
                    {
                        "id": 198,
                        "string": "Metaphor processing for MT We believe that one of the key purposes of metaphor processing is to support other NLP tasks."
                    },
                    {
                        "id": 199,
                        "string": "Therefore, we conducted another set of experiments to evaluate how metaphor processing can be used to support English-Chinese machine translation."
                    },
                    {
                        "id": 200,
                        "string": "The evaluation task was designed as follows."
                    },
                    {
                        "id": 201,
                        "string": "From the test set for sentence-level metaphor identification which contains 200 metaphoric and   200 literal sentences, we randomly selected 50 metaphoric and 50 literal sentences to construct a set S M for the Machine Translation (MT) evaluation task."
                    },
                    {
                        "id": 202,
                        "string": "For each sentence in S M , if it is predicted as literal by our model, the sentence is kept unchanged; otherwise, the target word of the sentence is paraphrased with the best fit word (refer to §4.1 for details)."
                    },
                    {
                        "id": 203,
                        "string": "The metaphor identification step resulted in 42 True Positive (TP) instances where the ground truth label is metaphoric and 19 False Positive (FP) instances where the ground truth label is literal, resulting in a total of 61 instances predicted as metaphorical by our model."
                    },
                    {
                        "id": 204,
                        "string": "We also run one of our baseline models, Context2Vec, on the 61 sentences to predict the best fit words for comparison."
                    },
                    {
                        "id": 205,
                        "string": "Our hypothesis is that by paraphrasing the metaphorically used target word with the best fit word which expresses the target word's real meaning, the performance of translation engines can be improved."
                    },
                    {
                        "id": 206,
                        "string": "We test our hypothesis on two popular English-Chinese MT systems, i.e., the Google and Bing Translators."
                    },
                    {
                        "id": 207,
                        "string": "We recruited from a UK university 5 Computing Science postgraduate students who are Chinese native speakers to participate the English-Chinese MT evaluation task."
                    },
                    {
                        "id": 208,
                        "string": "During the evaluation, subjects were presented with a questionnaire (see Figure 6 )."
                    },
                    {
                        "id": 209,
                        "string": "An example of the evaluation task is shown in Figure 6 , in which \"The ex-boxer's job is to bounce people who want to enter this private club.\""
                    },
                    {
                        "id": 210,
                        "string": "is the original sentence, followed by an WordNet explanation of the target word of the sentence (i.e., bounce: eject from the premises)."
                    },
                    {
                        "id": 211,
                        "string": "There are 6 translations."
                    },
                    {
                        "id": 212,
                        "string": "No."
                    },
                    {
                        "id": 213,
                        "string": "1-2 are the original sentence translations, translated by Google Translate (GT) and Bing Translator (BT)."
                    },
                    {
                        "id": 214,
                        "string": "The target word, bounce, is translated, taking the sense of (1) physically rebounding like a ball (反 弹), (2) jumping (弹跳)."
                    },
                    {
                        "id": 215,
                        "string": "No."
                    },
                    {
                        "id": 216,
                        "string": "3-4 are SIM-CBOW I+O paraphrased sentences, translated by GT and BT, respectively, taking the sense of refusing (拒绝)."
                    },
                    {
                        "id": 217,
                        "string": "No."
                    },
                    {
                        "id": 218,
                        "string": "5-6 are Context2Vec paraphrased sentences, translated by GT and BT, respectively, taking the sense of hitting (5.打; 6.打击)."
                    },
                    {
                        "id": 219,
                        "string": "Subjects were instructed to determine if the translation of a target word can correctly represent its sense within the translated sentence, matching its context (cohesion) in Chinese."
                    },
                    {
                        "id": 220,
                        "string": "Note that we evaluate the translation of the target word, therefore, errors in context word translations are ignored by the subjects."
                    },
                    {
                        "id": 221,
                        "string": "Finally, a label is taken agreed by more than half annotators."
                    },
                    {
                        "id": 222,
                        "string": "Noticeably, based on our observation, there is always a Chinese word corresponding to an English target word in MT, as the annotated target word normally represents important information in the sentence in the applied dataset."
                    },
                    {
                        "id": 223,
                        "string": "We use translation accuracy as a measure to evaluate the improvement on MT systems after metaphor processing."
                    },
                    {
                        "id": 224,
                        "string": "The accuracy is calculated by dividing the number of correctly translated instances by the total number of instances."
                    },
                    {
                        "id": 225,
                        "string": "As can be seen in Figure 5 and Table 3 , after paraphrasing the metaphorical sentences with the SIM-CBOW I+O model, the translation improvement for the metaphorical class is dramatic for both MT systems, i.e., 26% improvement for Google Translate and 24% for Bing Translate."
                    },
                    {
                        "id": 226,
                        "string": "In terms of the literal class, there is some small drop (i.e., 4-6%) in accuracy."
                    },
                    {
                        "id": 227,
                        "string": "This is due to the fact that some literals were wrongly identified as metaphors and hence error was introduced during paraphrasing."
                    },
                    {
                        "id": 228,
                        "string": "Nevertheless, with our model, the overall translation performance of both Google and Bing Translate are significantly improved by 11% and 9%, respectively."
                    },
                    {
                        "id": 229,
                        "string": "Our baseline model Context2Vec also improves the translation accuracy, but is 2-4 % lower than our model in terms of overall accuracy."
                    },
                    {
                        "id": 230,
                        "string": "In summary, the experimental results show the effectiveness of applying metaphor processing for supporting Machine Translation."
                    },
                    {
                        "id": 231,
                        "string": "Conclusion We proposed a framework that identifies and interprets metaphors at word-level with an unsupervised learning approach."
                    },
                    {
                        "id": 232,
                        "string": "Our model outperforms the unsupervised baselines in both sentence and phrase evaluations."
                    },
                    {
                        "id": 233,
                        "string": "The interpretation of the identified metaphorical words given by our model also contributes to Google and Bing translation systems with 11% and 9% accuracy improvements."
                    },
                    {
                        "id": 234,
                        "string": "The experiments show that using words' hypernyms and synonyms in WordNet can paraphrase metaphors into their literal counterparts, so that the metaphors can be correctly identified and translated."
                    },
                    {
                        "id": 235,
                        "string": "To our knowledge, this is the first study that evaluates a metaphor processing method on Machine Translation."
                    },
                    {
                        "id": 236,
                        "string": "We believe that compared with simply identifying metaphors, metaphor processing applied in practical tasks, can be more valuable in the real world."
                    },
                    {
                        "id": 237,
                        "string": "Additionally, our experiments demonstrate that using a candidate word output vector instead of its input vector to model the similarity between the candidate word and its context yields better results in the best fit word (the literal counterpart of the metaphor) identification."
                    },
                    {
                        "id": 238,
                        "string": "CBOW and Skip-gram do not consider the distance between a context word and a centre word in a sentence, i.e., context word contributes to predict the centre word equally."
                    },
                    {
                        "id": 239,
                        "string": "Future work will introduce weighted CBOW and Skip-gram to learn positional information within sentences."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 32
                    },
                    {
                        "section": "Related Work",
                        "n": "2",
                        "start": 33,
                        "end": 63
                    },
                    {
                        "section": "Preliminary: CBOW and Skip-gram",
                        "n": "3",
                        "start": 64,
                        "end": 84
                    },
                    {
                        "section": "Methodology",
                        "n": "4",
                        "start": 85,
                        "end": 100
                    },
                    {
                        "section": "Metaphor identification",
                        "n": "4.1",
                        "start": 101,
                        "end": 136
                    },
                    {
                        "section": "Word embedding: output vectors vs. input vectors",
                        "n": "4.2",
                        "start": 137,
                        "end": 153
                    },
                    {
                        "section": "Experimental settings",
                        "n": "5",
                        "start": 154,
                        "end": 189
                    },
                    {
                        "section": "Metaphor identification",
                        "n": "6.1",
                        "start": 190,
                        "end": 197
                    },
                    {
                        "section": "Metaphor processing for MT",
                        "n": "6.2",
                        "start": 198,
                        "end": 230
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 231,
                        "end": 239
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1334-Figure4-1.png",
                        "caption": "Figure 4: Input and output vector visualization. The bluer, the more negative. The redder, the more positive.",
                        "page": 5,
                        "bbox": {
                            "x1": 83.03999999999999,
                            "x2": 280.32,
                            "y1": 67.2,
                            "y2": 150.23999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1334-Table1-1.png",
                        "caption": "Table 1: Metaphor identification results. NB: * denotes that our model outperforms the baseline significantly, based on two-tailed paired t-test with p < 0.001.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 289.44,
                            "y1": 62.879999999999995,
                            "y2": 164.16
                        }
                    },
                    {
                        "filename": "../figure/image/1334-Figure1-1.png",
                        "caption": "Figure 1: CBOW and Skip-gram framework.",
                        "page": 2,
                        "bbox": {
                            "x1": 79.2,
                            "x2": 288.96,
                            "y1": 68.16,
                            "y2": 149.28
                        }
                    },
                    {
                        "filename": "../figure/image/1334-Table2-1.png",
                        "caption": "Table 2: Model performance vs. different threshold (τ ) settings. NB: the sentence level results are based on SIM-CBOWI+O .",
                        "page": 7,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 290.4,
                            "y1": 62.879999999999995,
                            "y2": 164.16
                        }
                    },
                    {
                        "filename": "../figure/image/1334-Figure5-1.png",
                        "caption": "Figure 5: Accuracy of metaphor interpretation, evaluated on Google and Bing Translation.",
                        "page": 7,
                        "bbox": {
                            "x1": 74.39999999999999,
                            "x2": 285.12,
                            "y1": 228.48,
                            "y2": 337.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1334-Table3-1.png",
                        "caption": "Table 3: Accuracy of metaphor interpretation, evaluated on Google and Bing Translation.",
                        "page": 7,
                        "bbox": {
                            "x1": 307.68,
                            "x2": 524.16,
                            "y1": 187.68,
                            "y2": 260.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1334-Figure6-1.png",
                        "caption": "Figure 6: MT-based metaphor interpretation questionnaire.",
                        "page": 7,
                        "bbox": {
                            "x1": 309.59999999999997,
                            "x2": 523.1999999999999,
                            "y1": 63.839999999999996,
                            "y2": 155.04
                        }
                    },
                    {
                        "filename": "../figure/image/1334-Figure3-1.png",
                        "caption": "Figure 3: Given CBOW trained input and output vectors, a target word of devoured, and a context of She [ ] his novels, cos(vodevoured, v i context) = −0.01, cos(voenjoyed, v i context) = 0.02.",
                        "page": 3,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 525.12,
                            "y1": 67.67999999999999,
                            "y2": 239.04
                        }
                    },
                    {
                        "filename": "../figure/image/1334-Figure2-1.png",
                        "caption": "Figure 2: Metaphor identification framework. NB: w∗ = best fit word, wt = target word.",
                        "page": 3,
                        "bbox": {
                            "x1": 74.88,
                            "x2": 292.32,
                            "y1": 63.839999999999996,
                            "y2": 245.28
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-76"
        },
        {
            "slides": {
                "0": {
                    "title": "Utility to site moderators and administrators",
                    "text": [
                        "Controversy (as we have defined it) is not necessarily a bad thing.",
                        "Monitoring for bad controversy can prevent harm to the group",
                        "Bringing productive controversy to the communitys attention can help the group solve problems"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "1": {
                    "title": "Observation controversy is community specific",
                    "text": [
                        "break up: controversial in the Reddit group on relationships, but not in the group for posing questions to women",
                        "my parents: controversial for personal-finance group",
                        "(example: live with my parents) but not in the relationships group"
                    ],
                    "page_nums": [
                        4
                    ],
                    "images": []
                },
                "2": {
                    "title": "Observation we can also use early reactions",
                    "text": [
                        "Early opinions can greatly affect subsequent opinion dynamics",
                        "(Salganik et al. MusicLab experiment, Science 2006, inter alia)",
                        "Both the content and structure of the early discussion tree may prove helpful."
                    ],
                    "page_nums": [
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Data selection",
                    "text": [
                        "All posts with %- upvoted Filtered Posts no edits, stable %-upvoted",
                        "Label validation steps (details in paper): 1) high-precision overlap (>88 F-measure) with reddits low-recall rank-by-controversy 2) we ensure popularity prediction != controversy prediction"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "5": {
                    "title": "Labeled Dataset Statistics",
                    "text": [
                        "Balanced, binary classification with controversial/non-controversial labeling"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": [
                        "figure/image/1337-Table1-1.png"
                    ]
                },
                "6": {
                    "title": "Some posting time text only results",
                    "text": [
                        "(this, plus timestamp, is our baseline)",
                        "o Rather than passing BERT vectors to a bi-LSTM, it",
                        "works about as well and faster to mean-pool, dimension-reduce, and feed to a linear classifier",
                        "o Our hand-crafted features + word2vec match BERT- based algorithms on 3 of 6 subreddits"
                    ],
                    "page_nums": [
                        13,
                        14
                    ],
                    "images": [
                        "figure/image/1337-Table2-1.png"
                    ]
                },
                "8": {
                    "title": "Does the shape of the tree predict controversy",
                    "text": [
                        "Usually yes, even after controlling for the rate of incoming comments.",
                        "max depth/total comment ratio proportion of comments that were top-level (i.e., made in direct reply to the original post) average node depth average branching factor proportion of top-level comments replied to Gini coefficient of replies to top-level comments (to measure how clustered the total discussion is) Wiener Index of virality (average pairwise pathlength between all pairs of nodes)",
                        "total number of comments logged time between OP and the first reply average logged parent-child reply time (over all pairs of comments)",
                        "[binary logistic regression, LL-Ratio test p<.05 in 5/6 communities]"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "9": {
                    "title": "Prediction results incorporating comment features",
                    "text": [
                        "4 comments, on average"
                    ],
                    "page_nums": [
                        17,
                        18
                    ],
                    "images": []
                },
                "11": {
                    "title": "Takeaways modulo caveats see paper",
                    "text": [
                        "We advocate an early-detection, community-specific approach to controversial-post prediction",
                        "We can use features of the content and structure of the early discussion tree",
                        "Early detection outperforms posting-time-only features in 5 of 6",
                        "Reddit communities tested, even for quite small early-time windows",
                        "Early content is most effective, but tree-shape and rate features transfer across domains better"
                    ],
                    "page_nums": [
                        21
                    ],
                    "images": []
                }
            },
            "paper_title": "Something's Brewing! Early Prediction of Controversy-causing Posts from Discussion Features",
            "paper_id": "1337",
            "paper": {
                "title": "Something's Brewing! Early Prediction of Controversy-causing Posts from Discussion Features",
                "abstract": "Controversial posts are those that split the preferences of a community, receiving both significant positive and significant negative feedback. Our inclusion of the word \"community\" here is deliberate: what is controversial to some audiences may not be so to others. Using data from several different communities on reddit.com, we predict the ultimate controversiality of posts, leveraging features drawn from both the textual content and the tree structure of the early comments that initiate the discussion. We find that even when only a handful of comments are available, e.g., the first 5 comments made within 15 minutes of the original post, discussion features often add predictive capacity to strong content-andrate only baselines. Additional experiments on domain transfer suggest that conversationstructure features often generalize to other communities better than conversation-content features do.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Controversial content -that which attracts both positive and negative feedback -is not necessarily a bad thing; for instance, bringing up a point that warrants spirited debate can improve community health."
                    },
                    {
                        "id": 1,
                        "string": "1 But regardless of the nature of the controversy, detecting potentially controversial content can be useful for both community members and community moderators."
                    },
                    {
                        "id": 2,
                        "string": "Ordinary users, and in particular new users, might appreciate being warned that they need to add more nuance or qualification to their earlier posts."
                    },
                    {
                        "id": 3,
                        "string": "2 Moderators could be alerted that the discussion ensuing from some content might need monitoring."
                    },
                    {
                        "id": 4,
                        "string": "Alternately, they could draw community attention to issues possibly needing resolution: indeed, some sites already provide explicit sorting by controversy."
                    },
                    {
                        "id": 5,
                        "string": "We consider the controversiality of a piece of content in the context of the community in which it is shared, because what is controversial to some audiences may not be so to others (Chen and Berger, 2013; Jang et al., 2017; Basile et al., 2017) ."
                    },
                    {
                        "id": 6,
                        "string": "For example, we identify \"break up\" as a controversial concept in the relationships subreddit (a subreddit is a subcommunity hosted on the Reddit discussion site), but the same topic is associated with a lack of controversy in the AskWomen subreddit (where questions are posed for women to answer)."
                    },
                    {
                        "id": 7,
                        "string": "Similarly, topics that are controversial in one community may simply not be discussed in another: our analysis identifies \"crossfit\", a type of workout, as one of the most controversial concepts in the subreddit Fitness."
                    },
                    {
                        "id": 8,
                        "string": "However, while controversial topics may be community-specific, community moderators still may not be able to determine a priori which posts will attract controversy."
                    },
                    {
                        "id": 9,
                        "string": "Many factors cannot be known ahead of time, e.g., a fixed set of topics may not be dynamic enough to handle a sudden current event, or the specific set of users that happen to be online at a given time may react in unpredictable ways."
                    },
                    {
                        "id": 10,
                        "string": "Indeed, experiments have shown that, to a certain extent, the influence of early opinions on subsequent opinion dynamics can override the influence of an item's actual content (Salganik et al., 2006; Wu and Huberman, 2008; Muchnik et al., 2013; Weninger et al., 2015) ."
                    },
                    {
                        "id": 11,
                        "string": "Hence, we propose an early-detection approach that uses not just the content of the initiating post, but also the content and structure of the initial responding comments."
                    },
                    {
                        "id": 12,
                        "string": "In doing so, we unite streams of heretofore mostly disjoint research programs: see Figure 1 ."
                    },
                    {
                        "id": 13,
                        "string": "Working with over 15,000 discus-Is the task to determine whether a textual item will provoke controversy?"
                    },
                    {
                        "id": 14,
                        "string": "No, whether a topic (or entity/hashtag/word) has been controversial [a distinction also made by Addawood et al. (2017)] (Popescu and Pennacchiotti, 2010; Choi et al., 2010; Cao et al., 2015; Lourentzou et al., 2015; Addawood et al., 2017; Al-Ayyoub et al., 2017; Garimella et al., 2018)."
                    },
                    {
                        "id": 15,
                        "string": "No, whether a conversation contained disagreement (Mishne and Glance, 2006; Yin et al., 2012; Allen et al., 2014; Wang and Cardie, 2014), or mapping the disagreements (Awadallah et al., 2012; Marres, 2015; Borra et al., 2015; Liu et al., 2018). No, the task is, for the given textual item, to predict antisocial behavior in the ensuing discussion (Zhang et al., 2018b,a), or subsequent comment volume/popularity/structure (Szabo and Huberman, 2010; Kim et al., 2011; Tatar et al., 2011; Backstrom et al., 2013; He et al., 2014; Zhang et al., 2018b), or eventual post/article score (Rangwala and Jamali, 2010; Szabo and Huberman, 2010); but all where, like us, the paradigm is early detection. No, only info available at the item's creation (Dori-Hacohen and Allan, 2013; Mejova et al., 2014; Klenner et al., 2014; Addawood et al., 2017; Timmermans et al., 2017; Rethmeier et al., 2018; Kaplun et al., 2018), or the entire ensuing revision/discussion history (Rad and Barbosa, 2012; ...)."
                    },
                    {
                        "id": 16,
                        "string": "N.B.: for Wikipedia articles, often controversy = non-vandalism reverts (Yasseri et al., 2012), although some, like us, treat controversy as domain-specific (Jang et al., 2017) and test domain transfer (Basile et al., 2017)."
                    },
                    {
                        "id": 17,
                        "string": "...using early reactions, which, recall, Salganik et al. (2006) observe to be sometimes crucial?"
                    },
                    {
                        "id": 19,
                        "string": "... and testing how well text/earlyconversation-structure features transfer across communities?"
                    },
                    {
                        "id": 20,
                        "string": "This is our work."
                    },
                    {
                        "id": 21,
                        "string": "No, early reversions (Sumi et al., 2011) aren't conversations as usually construed Figure 1 : How our research relates to prior work."
                    },
                    {
                        "id": 22,
                        "string": "sion trees across six subreddits, we find that incorporating structural and textual features of budding comment trees improves predictive performance relatively quickly; for example, in one of the communities we consider, adding features taken from just the first 15 minutes of discussion significantly increases prediction performance, even though the average thread only contains 4 comments by that time (∼4% of all eventual comments)."
                    },
                    {
                        "id": 23,
                        "string": "Additionally, we study feature transferability across domains (in our case, communities), training on one subreddit and testing on another."
                    },
                    {
                        "id": 24,
                        "string": "While text features of comments carry the greatest predictive capacity in-domain, we find that discussion-tree and -rate features are less brittle, transferring better between communities."
                    },
                    {
                        "id": 25,
                        "string": "Our results not only suggest the potential usefulness of granting controversy-prediction algorithms a small observation window to gauge community feedback, but also demonstrate the utility of our expressive feature set for early discussions."
                    },
                    {
                        "id": 26,
                        "string": "Datasets Given our interest in community-specific controversiality, we draw data from reddit.com, which hosts several thousand discussion subcom-munities (subreddits) covering a variety of interests."
                    },
                    {
                        "id": 27,
                        "string": "Our dataset, which attempts to cover all public posts and comments from Reddit's inception in 2007 until Feb. 2014, is derived from a combination of Jason Baumgartner's posts and comments sets and our own scraping efforts to fill in dataset gaps."
                    },
                    {
                        "id": 28,
                        "string": "The result is a mostly-complete set of posts alongside associated comment trees."
                    },
                    {
                        "id": 29,
                        "string": "3 We focus on six text-based 4 subreddits ranging over a variety of styles and topics: two Q&A subreddits: AskMen (AM) and AskWomen (AW); a specialinterest community, Fitness (FT); and three advice communities: LifeProTips (LT), personalfinance (PF), and relationships (RL)."
                    },
                    {
                        "id": 30,
                        "string": "Each comprises tens of thousands of posts and hundreds of thousands to millions of comments."
                    },
                    {
                        "id": 31,
                        "string": "In Reddit (similarly to other sites allowing explicit negative feedback, such as YouTube, imgur, 9gag, etc."
                    },
                    {
                        "id": 32,
                        "string": "[Figure 2 shows, for each of /r/LifeProTips, /r/Fitness, and /r/personalfinance, two controversial example posts (72%/72%, 71%/63%, and 57%/62% upvoted, respectively) and one non-controversial example (93%, 90%, and 97% upvoted), each with its first reply.]"
                    },
                    {
                        "id": 52,
                        "string": "Figure 2: Examples of two controversial and one non-controversial post from three communities."
                    },
                    {
                        "id": 53,
                        "string": "Also shown are the text of the first reply, the number of comments the post received, and its percent-upvoted."
                    },
                    {
                        "id": 55,
                        "string": "5 While the semantics of up/down votes may vary based on community (and, indeed, each user may have their own views on what content should be upvoted and what downvoted), in aggregate, posts that split community reaction fundamentally differ from those that produce agreement."
                    },
                    {
                        "id": 56,
                        "string": "Thus, in principle, posts that have unambiguously received both many upvotes and many downvotes should be deemed the most controversial."
                    },
                    {
                        "id": 57,
                        "string": "Percent Upvoted on Reddit."
                    },
                    {
                        "id": 58,
                        "string": "We quantify the relative proportion of upvotes and downvotes on a post using percent-upvoted, a measure provided by Reddit that gives an estimate of the percent of all votes on a post that are upvotes."
                    },
                    {
                        "id": 59,
                        "string": "In practice, exact values of percent-upvoted are not directly available; the site adds \"vote fuzzing\" to fight vote manipulation."
                    },
                    {
                        "id": 60,
                        "string": "6 To begin with, we first discard posts with fewer than 30 comments."
                    },
                    {
                        "id": 61,
                        "string": "7 Then, we query for the noisy percent-upvoted from each post ten times using the Reddit API, and take a mean to produce a final estimate."
                    },
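                    {
                        "id": "61a",
                        "string": "A minimal sketch of this estimation step, assuming the PRAW client (the paper says it queried the Reddit API but does not name a library; credentials and post ids below are placeholders):\n\nimport statistics\nimport praw\n\nreddit = praw.Reddit(client_id='...', client_secret='...', user_agent='controversy-study')\n\ndef mean_percent_upvoted(post_id, n_queries=10):\n    # Each praw.Reddit.submission() call builds a fresh lazy object, so\n    # reading .upvote_ratio re-fetches the (vote-fuzzed) value from the API.\n    samples = [reddit.submission(id=post_id).upvote_ratio for _ in range(n_queries)]\n    return statistics.mean(samples), max(samples) - min(samples)\n\nThe spread returned alongside the mean supports the stability filter described below, under which posts with observed variability above 5% are discarded."
                    },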
                    {
                        "id": 62,
                        "string": "Post Outcomes."
                    },
                    {
                        "id": 63,
                        "string": "To better understand the interplay between upvotes and downvotes, we first explore the outcomes for posts both in terms of percent-upvoted and the number of comments; do-5 Vote timestamps are not publicly available."
                    },
                    {
                        "id": 64,
                        "string": "6 Prior to Dec. 2016, vote information was fuzzed according to a different algorithm; however, vote statistics for all posts were recomputed according to a new algorithm that, according to a reddit moderator, can \"actually be trusted;\" https://goo.gl/yHWeJp 7 The intent is to only consider posts receiving enough community attention for us to reliably compare upvote counts with downvotes."
                    },
                    {
                        "id": 65,
                        "string": "We use number of comments as a proxy for aggregate attention because Reddit does not surface the true number of votes."
                    },
                    {
                        "id": 66,
                        "string": "/r/Fitness (FT) /r/personalfinance (PF) ing so on a per-community basis has the potential to surface any subreddit-specific effects."
                    },
                    {
                        "id": 67,
                        "string": "In addition, we compute the median number of comments for posts falling into each bin of the histogram."
                    },
                    {
                        "id": 68,
                        "string": "The resulting plots are given in Figure 3 ."
                    },
                    {
                        "id": 69,
                        "string": "In general, posts receive mostly positive feedback in aggregate, though the mean percentupvoted varies between communities (Table 1) ."
                    },
                    {
                        "id": 70,
                        "string": "There is also a positive correlation between a post's percent-upvoted and the number of comments it receives."
                    },
                    {
                        "id": 71,
                        "string": "This relationship is unsurprising, given that Reddit displays higher rated posts to more users."
                    },
                    {
                        "id": 72,
                        "string": "A null hypothesis, which we compare to empirically in our prediction experiments, is that popularity and percent-upvoted simply carry the same information."
                    },
                    {
                        "id": 73,
                        "string": "However, we have reason to doubt this null hypothesis, as quite a few posts receive significant attention despite having a low percentupvoted ( Figure 2 )."
                    },
                    {
                        "id": 74,
                        "string": "Assigning Controversy Labels To Posts."
                    },
                    {
                        "id": 75,
                        "string": "We assign binary controversy labels (i.e., relatively controversial vs. relatively non-controversial) to posts according to the following process: first, we discard posts where the observed variability across 10 API queries for percent-upvoted exceeds 5%; in these cases, we assume that there are too few total votes for a stable estimate."
                    },
                    {
                        "id": 76,
                        "string": "Next, we discard posts where neither the observed upvote ratio nor the observed score 8 vary at all; in these cases, we cannot be sure that the upvote ratio is insensitive to the vote fuzzing function."
                    },
                    {
                        "id": 77,
                        "string": "9 Fi-  nally, we sort each community's surviving posts by upvote percentage, and discard the small number of posts with percent-upvoted below 50%."
                    },
                    {
                        "id": 78,
                        "string": "10 The top quartile of posts according to this ranking (i.e., posts with mostly only upvotes) are labeled \"non-controversial.\""
                    },
                    {
                        "id": 79,
                        "string": "The bottom quartile of posts, where the number of downvotes cannot exceed but may approach the number of upvotes, are labeled as \"controversial.\""
                    },
                    {
                        "id": 80,
                        "string": "For each community, this process yields a balanced, labeled set of controversial/non-controversial posts."
                    },
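                    {
                        "id": "80a",
                        "string": "A minimal sketch of this labeling process with pandas, assuming one row per post with hypothetical columns pct_upvoted (the mean of ten queries) and pct_spread (observed variability across queries):\n\nimport pandas as pd\n\ndef label_controversy(posts: pd.DataFrame) -> pd.DataFrame:\n    # Stability and floor filters (the 'score/ratio never vary' filter is omitted here).\n    kept = posts[(posts.pct_spread <= 0.05) & (posts.pct_upvoted >= 0.50)].copy()\n    kept['label'] = None\n    lo, hi = kept.pct_upvoted.quantile([0.25, 0.75])\n    kept.loc[kept.pct_upvoted <= lo, 'label'] = 'controversial'\n    kept.loc[kept.pct_upvoted >= hi, 'label'] = 'non-controversial'\n    # Keeping only the top and bottom quartiles yields a balanced label set.\n    return kept.dropna(subset=['label'])"
                    },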
                    {
                        "id": 81,
                        "string": "Table 1 contains the number of posts/comments for each community after the above filtration process, and the percent-upvoted for the controversial/noncontroversial sets."
                    },
                    {
                        "id": 82,
                        "string": "Quantitative Validation of Labels Reddit provides a sort-by-controversy function, and we wanted to ensure that our controversy labeling method aligned with this ranking."
                    },
                    {
                        "id": 83,
                        "string": "11 We contacted Reddit itself, but they were unable to provide details."
                    },
                    {
                        "id": 84,
                        "string": "Hence, we scraped the 1K most controversial posts according to Reddit (1K is the max that Reddit provides) for each community over the past year (as of October 2018)."
                    },
                    {
                        "id": 85,
                        "string": "Next, we sampled posts that did not appear on Reddit's controversial list in the year prior to October 2018 to create a 1:k ratio sample of Reddit-controversial posts and non-Reddit-controversial posts for k ∈ {1, 2, 3}, k = 3 being the most difficult setting."
                    },
                    {
                        "id": 86,
                        "string": "Then, we applied the filtering/labeling method described above, and measured how well our process matched Reddit's ranking scheme, i.e., the \"controversy\" label applied by our method matched the \"controversy\" label assigned by Reddit."
                    },
                    {
                        "id": 87,
                        "string": "Our labeling method achieves high precision in identifying controversial/non-controversial posts."
                    },
                    {
                        "id": 88,
                        "string": "While a large proportion of posts are discarded, the labels assigned to surviving posts match those assigned by Reddit with the following F-measures at k = 3 (the results for k = 1, 2 are higher): 12 AM AW FT LT PF RL In all cases, the precision for the non-controversial label is perfect, i.e., our filtration method never labeled a Reddit-controversial post as noncontroversial."
                    },
                    {
                        "id": 89,
                        "string": "The precision of the controversy label was also high, but imperfect; errors could be a result of, e.g., Reddit's controversy ranking being limited to 1K posts, or using internal data, etc."
                    },
                    {
                        "id": 90,
                        "string": "Figure 2 gives examples of controversial and noncontroversial posts from three of the communities we consider, alongside the text of the first comment made in response to those posts."
                    },
                    {
                        "id": 91,
                        "string": "Topical differences."
                    },
                    {
                        "id": 92,
                        "string": "A priori, we expect that the topical content of posts may be related to how controversial they become (see prior work in Fig. 1)."
                    },
                    {
                        "id": 94,
                        "string": "We ran LDA (Blei et al., 2003) with 10 topics on posts from each community independently, and compared the differences in mean topic frequency between controversial and non-controversial posts."
                    },
                    {
                        "id": 95,
                        "string": "We observe communityspecific patterns, e.g., in relationships, posts about family (top words in topic: \"family parents mom dad\") are less controversial than those associated with romantic relationships (top words: \"relationship, love, time, life\"); in AskWomen, a gender topic (\"women men woman male\") tends to be associated with more controversy than an advice-seeking topic (\"im dont feel ive\") Wording differences."
                    },
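                    {
                        "id": "95a",
                        "string": "A minimal sketch of this per-community topic comparison; the paper cites Blei et al. (2003) for LDA but does not name an implementation, so sklearn is assumed and all names below are hypothetical:\n\nimport numpy as np\nfrom sklearn.decomposition import LatentDirichletAllocation\nfrom sklearn.feature_extraction.text import CountVectorizer\n\ndef topic_gaps(texts, labels, n_topics=10):\n    # labels: 1 = controversial, 0 = non-controversial, for one community.\n    counts = CountVectorizer(min_df=5, stop_words='english').fit_transform(texts)\n    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)\n    theta = lda.fit_transform(counts)  # per-post topic mixtures\n    labels = np.asarray(labels)\n    # Positive entries: topics relatively over-represented in controversial posts.\n    return theta[labels == 1].mean(axis=0) - theta[labels == 0].mean(axis=0)"
                    },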
                    {
                        "id": 96,
                        "string": "Wording differences. We utilize Monroe et al.'s (2008) algorithm for comparing language usage in two bodies of text; the method places a Dirichlet prior over n-grams (n = 1, 2, 3) and estimates Z-scores on the difference in rate-usage between controversial and non-controversial posts."
                    },
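                    {
                        "id": "96a",
                        "string": "A minimal sketch of that log-odds computation with an (assumed symmetric) Dirichlet prior, following the formulas in Monroe et al. (2008); the alpha value is illustrative:\n\nimport numpy as np\n\ndef fightin_words_z(counts_c, counts_nc, alpha=0.01):\n    # counts_c / counts_nc: aligned arrays of n-gram counts in controversial\n    # vs. non-controversial posts.\n    a0 = alpha * len(counts_c)\n    n_c, n_nc = counts_c.sum(), counts_nc.sum()\n    log_odds_c = np.log((counts_c + alpha) / (n_c + a0 - counts_c - alpha))\n    log_odds_nc = np.log((counts_nc + alpha) / (n_nc + a0 - counts_nc - alpha))\n    variance = 1.0 / (counts_c + alpha) + 1.0 / (counts_nc + alpha)\n    # Large positive Z-scores mark n-grams associated with controversy.\n    return (log_odds_c - log_odds_nc) / np.sqrt(variance)"
                    },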
                    {
                        "id": 98,
                        "string": "This analysis reveals many community-specific patterns, e.g., phrases associated with controversy include \"crossfit\" in Fitness, \"cheated on my\" in relationships, etc."
                    },
                    {
                        "id": 99,
                        "string": "What's controversial in one community may be non-controversial in another, e.g., \"my parents\" is associated with controversy in personalfinance (e.g., \"live with my parents\") but strongly associated with lack of controversy in relationships (e.g., \"my parents got divorced\")."
                    },
                    {
                        "id": 100,
                        "string": "We also observe that some communities share commonalities in phrasing, e.g., \"do you think\" is associated with controversy in both AskMen and AskWomen, whereas \"what are some\" is associated with a lack of controversy in both."
                    },
                    {
                        "id": 101,
                        "string": "Qualitative Validation of Labels Early Discussion Threads We now analyze comments posted in early discussion threads for controversial vs. noncontroversial posts."
                    },
                    {
                        "id": 102,
                        "string": "In this section, we focus on comments posted within one hour of the original submission, although we consider a wider range of times in later experiments."
                    },
                    {
                        "id": 103,
                        "string": "Comment Text."
                    },
                    {
                        "id": 104,
                        "string": "We mirrored the n-gram analysis conducted in the previous section, but, rather than the text of the original post, focused on the text of comments."
                    },
                    {
                        "id": 105,
                        "string": "Many patterns persist, but the conversational framing changes, e.g., \"I cheated\" in the posts of relationships is mirrored by \"you cheated\" in the comments."
                    },
                    {
                        "id": 106,
                        "string": "Community differences again appear: e.g., \"birth control\" indicated controversy when it appears in the comments for relationships, but not for AskWomen."
                    },
                    {
                        "id": 107,
                        "string": "Comment Tree Structure."
                    },
                    {
                        "id": 108,
                        "string": "While prior work in early prediction mostly focuses on measuring rate of early responses, we postulate that more expressive, structural features of conversation trees may also carry predictive capacity."
                    },
                    {
                        "id": 109,
                        "string": "Figure 4 gives samples of conversation trees that developed on Reddit posts within one hour of the original post being made."
                    },
                    {
                        "id": 110,
                        "string": "There is significant diversity among tree size and shape."
                    },
                    {
                        "id": 111,
                        "string": "To quantify these differences, we introduce two sets of features: C-RATE features, which encode the rate of commenting/number of comments; 13 and C-TREE features, which encode structural aspects of discussion trees."
                    },
                    {
                        "id": 112,
                        "string": "14 We then examine whether or not tree features correlate with controversy after controlling for popularity."
                    },
                    {
                        "id": 113,
                        "string": "Using binary logistic regression, after controlling for C-RATE, C-TREE features extracted from comments made within one hour of the original post improve model fit in all cases except for personalfinance (p < .05, LL-Ratio test)."
                    },
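                    {
                        "id": "113a",
                        "string": "A minimal sketch of this nested-model comparison, assuming statsmodels for the logistic fits (feature-matrix names are hypothetical):\n\nimport statsmodels.api as sm\nfrom scipy.stats import chi2\n\ndef ll_ratio_p(X_rate, X_rate_tree, y):\n    # Fit nested binary logistic regressions (C-RATE vs. C-RATE + C-TREE).\n    reduced = sm.Logit(y, sm.add_constant(X_rate)).fit(disp=0)\n    full = sm.Logit(y, sm.add_constant(X_rate_tree)).fit(disp=0)\n    stat = 2 * (full.llf - reduced.llf)\n    df = X_rate_tree.shape[1] - X_rate.shape[1]\n    return chi2.sf(stat, df)  # p < .05: C-TREE features improve model fit"
                    },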
                    {
                        "id": 114,
                        "string": "We repeated the experiment, but also controlled for eventual popularity 15 in addition to C-RATE, and observed the same result."
                    },
                    {
                        "id": 115,
                        "string": "This provides evidence that structural features of conversation trees are predictive, though which tree feature is most important according to these experiments is community-specific."
                    },
                    {
                        "id": 116,
                        "string": "For example, for the models without eventual popularity information, the C-TREE feature with largest coefficient in AskWomen and AskMen was the max-depth ratio, but it was the Wiener index in Fitness."
                    },
                    {
                        "id": 117,
                        "string": "Early Prediction of Controversy We shift our focus to the task of predicting controversy on Reddit."
                    },
                    {
                        "id": 118,
                        "string": "In general, tools that predict controversy are most useful if they only require information available at the time of submission or as soon as possible thereafter."
                    },
                    {
                        "id": 119,
                        "string": "We note that while the causal relationship between vote totals and comment threads is not entirely clear (e.g., perhaps the comment threads cause more up/down votes on the post), predicting the ultimate outcome of posts is still useful for community moderators."
                    },
                    {
                        "id": 120,
                        "string": "Experimental protocols."
                    },
                    {
                        "id": 121,
                        "string": "All classifiers are bi-13 Specifically: total number of comments, the logged time between OP and the first reply, and the average logged parentchild reply time over pairs of comments."
                    },
                    {
                        "id": 122,
                        "string": "14 Specifically: max depth/total comment ratio, proportion of comments that were top-level (i.e., made in direct reply to the original post), average node depth, average branching factor, proportion of top-level comments replied to, Gini coefficient of replies to top-level comments (to measure how \"clustered\" the total discussion is), and Wiener Index of virality (which measures the average pairwise path-length between all nodes in the conversation tree (Wiener, 1947; Goel et al., 2015) )."
                    },
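                    {
                        "id": "122a",
                        "string": "A minimal sketch of these C-TREE computations with networkx, assuming the observed comment tree is given as (parent, child) edges plus the root post id (all names hypothetical):\n\nimport networkx as nx\nimport numpy as np\n\ndef gini(xs):\n    # Gini coefficient of reply counts; 0 means replies are evenly spread.\n    xs = np.sort(np.asarray(xs, dtype=float))\n    n = len(xs)\n    if n == 0 or xs.sum() == 0:\n        return 0.0\n    return float((2 * np.arange(1, n + 1) - n - 1).dot(xs) / (n * xs.sum()))\n\ndef c_tree_features(edges, root):\n    g = nx.DiGraph(edges)\n    depth = nx.shortest_path_length(g, source=root)\n    comments = [v for v in g if v != root]\n    top = list(g.successors(root))\n    n_nodes = g.number_of_nodes()\n    return {\n        'max_depth_ratio': max(depth.values()) / len(comments),\n        'frac_top_level': len(top) / len(comments),\n        'avg_node_depth': float(np.mean([depth[c] for c in comments])),\n        'avg_branching': float(np.mean([g.out_degree(v) for v in g])),\n        'frac_top_replied': float(np.mean([g.out_degree(c) > 0 for c in top])),\n        'gini_top_replies': gini([g.out_degree(c) for c in top]),\n        # nx.wiener_index sums pairwise distances; divide to get the average.\n        'wiener_index': nx.wiener_index(g.to_undirected()) / (n_nodes * (n_nodes - 1) / 2),\n    }"
                    },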
                    {
                        "id": 123,
                        "string": "15 We added in the logged number of eventual comments, and also whether or not the post received an above-median number of comments."
                    },
                    {
                        "id": 124,
                        "string": "nary (i.e., controversial vs. non-controversial) and, because the classes are in 50/50 balance, we compare algorithms according to their accuracy."
                    },
                    {
                        "id": 125,
                        "string": "Experiments are conducted as 15-fold cross validation with random 60/20/20 train/dev/test splits, where the splits are drawn to preserve the 50/50 label distribution."
                    },
                    {
                        "id": 126,
                        "string": "For non-neural, feature-based classifiers, we use linear models."
                    },
                    {
                        "id": 127,
                        "string": "16 For BiLSTM models, 17 we use Tensorflow (Abadi et al., 2015) ."
                    },
                    {
                        "id": 128,
                        "string": "Whenever a feature is ill-defined (e.g., if it is a comment text feature, but there are no comments at time t) the column mean of the training set for each cross-validation split is substituted."
                    },
                    {
                        "id": 129,
                        "string": "Similarly, if a comment's body is deleted, it is ignored by text processing algorithms."
                    },
                    {
                        "id": 130,
                        "string": "We perform both Wilcoxon signed-rank tests (Demšar, 2006) and two-sided corrected resampled t-tests (Nadeau and Bengio, 2000) to estimate statistical significance, taking the maximum of the two resulting p-values to err on the conservative side and reduce the chance of Type I error."
                    },
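                    {
                        "id": "130a",
                        "string": "A minimal sketch of this testing procedure, assuming per-split accuracy arrays for two systems; the corrected resampled t-test follows Nadeau and Bengio (2000), and the n_test/n_train ratio reflects the 60/20/20 splits used here:\n\nimport numpy as np\nfrom scipy import stats\n\ndef corrected_resampled_t_p(diffs, ratio=20.0 / 60.0):\n    # Accuracies from overlapping random splits are correlated, so the\n    # variance of the mean difference is inflated by n_test / n_train.\n    diffs = np.asarray(diffs, dtype=float)\n    k = len(diffs)\n    t = diffs.mean() / np.sqrt((1.0 / k + ratio) * diffs.var(ddof=1))\n    return 2 * stats.t.sf(abs(t), df=k - 1)\n\ndef conservative_p(acc_a, acc_b):\n    diffs = np.asarray(acc_a) - np.asarray(acc_b)\n    # Take the max of the two p-values to err on the conservative side.\n    return max(stats.wilcoxon(diffs).pvalue, corrected_resampled_t_p(diffs))"
                    },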
                    {
                        "id": 131,
                        "string": "Comparing Text Models The goal of this section is to compare text-only models for classifying controversial vs. noncontroversial posts."
                    },
                    {
                        "id": 132,
                        "string": "Algorithms are given access to the full post titles and bodies, unless stated otherwise."
                    },
                    {
                        "id": 133,
                        "string": "HAND."
                    },
                    {
                        "id": 134,
                        "string": "We consider a number of hand-designed features related to the textual content of posts, inspired by Tan et al. (2016). 18"
                    },
                    {
                        "id": 136,
                        "string": "18 TFIDF."
                    },
                    {
                        "id": 137,
                        "string": "We encode posts according to tfidf feature vectors."
                    },
                    {
                        "id": 138,
                        "string": "Words are included in the vocabulary if they appear more than 5 times in the corresponding cross-validation split."
                    },
                    {
                        "id": 139,
                        "string": "16 We cross-validate regularization strength 10ˆ(-100,-5,-4,-3,-2,-1,0,1), model type (SVM vs. Logistic L1 vs. Logistic L2 vs. Logistic L1/L2), and whether or not to apply feature standardization for each feature set and cross-validation split separately."
                    },
                    {
                        "id": 140,
                        "string": "These are trained using lightning (http: //contrib.scikit-learn.org/lightning/)."
                    },
                    {
                        "id": 141,
                        "string": "17 We optimize using Adam (Kingma and Ba, 2014) with LR=.001 for 20 epochs, apply dropout with p = .2, select the model checkpoint that performs best over the validation set, and cross-validate the model's dimension (128 vs. 256) and the number of layers (1 vs. 2) separately for each crossvalidation split."
                    },
                    {
                        "id": 142,
                        "string": "18 Specifically: for the title and text body separately, length, type-token ratio, rate of first-person pronouns, rate of secondperson pronouns, rate of question-marks, rate of capitalization, and Vader sentiment (Hutto and Gilbert, 2014) ."
                    },
                    {
                        "id": 143,
                        "string": "Combining the post title and post body: number of links, number of Reddit links, number of imgur links, number of sentences, Flesch-Kincaid readability score, rate of italics, rate of boldface, presence of a list, and the rate of word use from 25 Empath wordlists (Fast et al., 2016) , which include various categories, such as politeness, swearing, sadness, etc."
                    },
                    {
                        "id": 144,
                        "string": "W2V."
                    },
                    {
                        "id": 145,
                        "string": "We consider a mean, 300D word2vec (Mikolov et al., 2013) embedding representation, computed from a GoogleNews corpus."
                    },
                    {
                        "id": 146,
                        "string": "ARORA."
                    },
                    {
                        "id": 147,
                        "string": "A slight modification of W2V, proposed by Arora et al. (2017), serves as a \"tough to beat\" baseline for sentence representations."
                    },
                    {
                        "id": 149,
                        "string": "LSTM."
                    },
                    {
                        "id": 150,
                        "string": "We train a Bi-LSTM (Graves and Schmidhuber, 2005 ) over the first 128 tokens of titles + post text, followed by a mean pooling layer, and then a logistic regression layer."
                    },
                    {
                        "id": 151,
                        "string": "The LSTM's embedding layer is initialized with the same word2vec embeddings used in W2V."
                    },
                    {
                        "id": 152,
                        "string": "Markdown formatting artifacts are discarded."
                    },
                    {
                        "id": 153,
                        "string": "BERT-LSTM."
                    },
                    {
                        "id": 154,
                        "string": "Recently, features extracted from fixed, pretrained, neural language models have resulted in high performance on a range of language tasks."
                    },
                    {
                        "id": 155,
                        "string": "Following the recommendations of §5.4 of Devlin et al. (2019), we consider representing posts by extracting BERT-Large embeddings computed for the first 128 tokens of titles + post text; we average the final 4 layers of the 24-layer, pretrained Transformer encoder network (Vaswani et al., 2017)."
                    },
                    {
                        "id": 157,
                        "string": "These token-specific vectors are then passed to a Bi-LSTM, a mean pooling layer, and a logistic classification layer."
                    },
                    {
                        "id": 158,
                        "string": "We keep markdown formatting artifacts because BERT's token vocabulary are WordPiece subtokens (Wu et al., 2016) , which are able to incorporate arbitrary punctuation without modification."
                    },
                    {
                        "id": 159,
                        "string": "BERT-MP."
                    },
                    {
                        "id": 160,
                        "string": "Instead of training a Bi-LSTM over BERT features, we mean pool over the first 128 tokens, apply L2 normalization to the resulting representations, reduce to 100 dimensions using PCA, 19 and train a linear classifier on top."
                    },
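                    {
                        "id": "160a",
                        "string": "A minimal sketch of BERT-MP, assuming the HuggingFace transformers and scikit-learn libraries (the paper does not specify its tooling; the final classifier here is a plain logistic regression stand-in for the cross-validated linear models of footnote 16):\n\nimport torch\nfrom transformers import AutoModel, AutoTokenizer\nfrom sklearn.decomposition import PCA\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import Normalizer\n\ntok = AutoTokenizer.from_pretrained('bert-large-uncased')\nbert = AutoModel.from_pretrained('bert-large-uncased', output_hidden_states=True).eval()\n\n@torch.no_grad()\ndef bert_mp(texts, max_len=128):  # 512 for BERT-MP-512\n    batch = tok(texts, padding=True, truncation=True, max_length=max_len, return_tensors='pt')\n    layers = torch.stack(bert(**batch).hidden_states[-4:]).mean(0)  # avg of final 4 layers\n    mask = batch['attention_mask'].unsqueeze(-1)\n    return ((layers * mask).sum(1) / mask.sum(1)).numpy()  # mean pool over tokens\n\nclf = make_pipeline(Normalizer(), PCA(n_components=100), LogisticRegression(max_iter=1000))"
                    },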
                    {
                        "id": 161,
                        "string": "BERT-MP-512."
                    },
                    {
                        "id": 162,
                        "string": "The same as BERT-MP, except the algorithm is given access to 512 tokens (the maximum allowed by BERT-Large) instead of 128."
                    },
                    {
                        "id": 163,
                        "string": "Results: Table 2 gives the performance of each text classifier for each community."
                    },
                    {
                        "id": 164,
                        "string": "In general, the best performing models are based on the BERT features, though HAND+W2V performs well, too."
                    },
                    {
                        "id": 165,
                        "string": "However, no performance gain is achieved when adding hand designed features to BERT."
                    },
                    {
                        "id": 166,
                        "string": "This may be because BERT's subtokenization scheme incorporates punctuation, link urls, etc., which are similar to the features captured by HAND."
                    },
                    {
                        "id": 167,
                        "string": "Adding an LSTM over BERT features is comparable to mean pooling over the sequence; similarly, considering 128 tokens vs. 512 tokens results in comparable  performance."
                    },
                    {
                        "id": 168,
                        "string": "Based on the results of this experiment, we adopt BERT-MP-512 to represent text in experiments for the rest of this work."
                    },
                    {
                        "id": 169,
                        "string": "Post-time Metadata. Many non-content factors can influence community reception of posts; e.g., Hessel et al. (2017) find that the time at which a post is made on Reddit can significantly influence its eventual popularity."
                    },
                    {
                        "id": 171,
                        "string": "TIME."
                    },
                    {
                        "id": 172,
                        "string": "These features encode when a post was created."
                    },
                    {
                        "id": 173,
                        "string": "These include indicator variables for year, month, day-of-week, and hour-of-day."
                    },
                    {
                        "id": 174,
                        "string": "AUTHOR."
                    },
                    {
                        "id": 175,
                        "string": "We add an indicator variable for each user that appears at least 3 times in the training set, encoding the hypothesis that some users may simply have a greater propensity to post controversial content."
                    },
                    {
                        "id": 176,
                        "string": "The results of incorporating the metadata features on top of TEXT are given in Table 3 ."
                    },
                    {
                        "id": 177,
                        "string": "While incorporating TIME features on top of TEXT results in consistent improvements across all communities, incorporating author features on top of TIME+TEXT does not."
                    },
                    {
                        "id": 178,
                        "string": "We adopt our highest performing models, TEXT+TIME, as a strong posttime baseline."
                    },
                    {
                        "id": 179,
                        "string": "Early Discussion Features Basic statistics of early comments."
                    },
                    {
                        "id": 180,
                        "string": "We augment the post-time features with early-discussion feature sets by giving our algorithms access to comments from increasing observation periods."
                    },
                    {
                        "id": 181,
                        "string": "Specifically, we train linear classifiers by combining our best post-time feature set (TEXT+TIME) with features derived from comment trees available after t minutes, and sweep t from t = 15 to t = 180 minutes in 15 minute intervals."
                    },
                    {
                        "id": 182,
                        "string": "Figure 6 plots the median number of comments available per thread at different t values for each community."
                    },
                    {
                        "id": 183,
                        "string": "The amount of data available for the early-prediction algorithms to consider varies significantly, e.g., while AskMen threads have a median 10 comments available at 45 minutes, Life-ProTips posts do not reach that threshold even after 3 hours, and we thus expect that it will be a harder setting for early prediction."
                    },
                    {
                        "id": 184,
                        "string": "We see, too, that even our maximal 3 hour window is still early in a post's lifecycle, i.e., posts tend to receive significant attention afterwards: only 15% (LT) to 32% (AW) of all eventual comments are available per thread at this time, on average."
                    },
                    {
                        "id": 185,
                        "string": "Figure 7 gives the distribution of the number of comments available for controversial/non-controversial posts on AskWomen at t = 60 minutes."
                    },
                    {
                        "id": 186,
                        "string": "As with the other communities we consider, the distribution of number of available posts is not overly-skewed, i.e., most posts in our set (we filtered out posts with less than 30 comments) get at least some early comments."
                    },
                    {
                        "id": 187,
                        "string": "We explore a number of feature sets based on early comment trees (comment feature sets are prefixed with \"C-\"): C-RATE and C-TREE."
                    },
                    {
                        "id": 188,
                        "string": "We described these in §3."
                    },
                    {
                        "id": 189,
                        "string": "C-TEXT."
                    },
                    {
                        "id": 190,
                        "string": "For each comment available at a given observation period, we extract the BERT-MP-512 embedding."
                    },
                    {
                        "id": 191,
                        "string": "Then, for each conversation thread, we take a simple mean over all comment representations."
                    },
                    {
                        "id": 192,
                        "string": "While we tried several more expressive means of encoding the text of posts in comment trees, this simple method proved surprisingly effective."
                    },
                    {
                        "id": 193,
                        "string": "20 Sweeping over time."
                    },
                    {
                        "id": 194,
                        "string": "Figure 5 gives the performance of the post-time baseline combined with comment features while sweeping t from 15 to 180 minutes."
                    },
                    {
                        "id": 195,
                        "string": "For five of the six communities we consider, the performance of the comment feature classifier significantly (p < .05) ex-  ceeds the performance of the post-time baseline in less than three hours of observation, e.g., in the case of AskMen and AskWomen, significance is achieved within 15 and 45 minutes, respectively."
                    },
                    {
                        "id": 196,
                        "string": "In general, C-RATE improves only slightly over post only, even though rate features have proven useful in predicting popularity in prior work (He et al., 2014) ."
                    },
                    {
                        "id": 197,
                        "string": "While adding C-TREE also improves performance, comment textual content is the biggest source of predictive gain."
                    },
                    {
                        "id": 198,
                        "string": "These results demonstrate i) that incorporating a variety of early conversation features, e.g., structural features of trees, can improve performance of contro-versy prediction over strong post-time baselines, and ii) the text content of comments contains significant complementary information to post text."
                    },
                    {
                        "id": 199,
                        "string": "Controversy prediction = popularity prediction."
                    },
                    {
                        "id": 200,
                        "string": "We return to a null hypothesis introduced in §2: that the controversy prediction models we consider here are merely learning the same patterns that a popularity prediction algorithm would learn."
                    },
                    {
                        "id": 201,
                        "string": "We train popularity prediction algorithms, and then attempt to use them at test-time to predict controversy; under the null hypothesis, we would expect little to no performance degradation when training on these alternate labels."
                    },
                    {
                        "id": 202,
                        "string": "We 1) train binary popularity predictors using post text/time + comment rate/tree/text features available at t = 180, 21 and use them to predict controversy at test-time; and 2) consider an oracle that predicts the true popularity label at test-time; this oracle is quite strong, as prior work suggests that perfectly predicting popularity is impossible (Salganik et al., 2006) ."
                    },
                    {
                        "id": 203,
                        "string": "In all cases, the best popularity predictor does not achieve performance comparable to even the post-only baseline."
                    },
                    {
                        "id": 204,
                        "string": "For 3 of 6 communities, even the popularity oracle does not beat post time baseline, and in all cases, the mean performance of the controversy predictor exceeds the oracle by t = 180."
                    },
                    {
                        "id": 205,
                        "string": "Thus, in our setting, controversy predictors and popularity predictors learn disjoint patterns."
                    },
                    {
                        "id": 206,
                        "string": "Domain Transfer We conduct experiments where we train models on one subreddit and test them on another."
                    },
                    {
                        "id": 207,
                        "string": "For these experiments, we discard all posting time features, and compare C-(TEXT+TREE+RATE) to C-(TREE+RATE); the goal is to empirically examine the hypothesis in §1: that controversial text is community-specific."
                    },
                    {
                        "id": 208,
                        "string": "To measure performance differences in the domain transfer setting, we compute the percentage accuracy drop relative to a constant prediction baseline when switching the training subreddit from the matching subreddit to a different one."
                    },
                    {
                        "id": 209,
                        "string": "For example, at t = 60, we observe that raw accuracy drops from 65.6 → 55.8 when training on AskWomen and testing on AskMen when considering text, rate, and tree features together; given that the constant prediction baseline achieves 50% accuracy, we compute the percent drop in accuracy as: (55.8 − 50)/(65.6 − 50) − 1 = −63%."
                    },
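As a sanity check on the arithmetic, here is a minimal sketch of the relative-drop computation described above (the function name and the 50% default baseline are ours):

```python
def relative_accuracy_drop(in_domain_acc, transfer_acc, baseline_acc=50.0):
    """Percent drop in accuracy above a constant-prediction baseline when
    switching from in-domain training to domain transfer."""
    return (transfer_acc - baseline_acc) / (in_domain_acc - baseline_acc) - 1.0

# AskWomen -> AskMen at t = 60, using the values quoted in the text:
print(relative_accuracy_drop(65.6, 55.8))  # ~= -0.63, i.e. a -63% drop
```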
                    {
                        "id": 210,
                        "string": "The results of this experiment (Figure 8 ) suggest that while text features are quite strong indomain, they are brittle and community specific."
                    },
                    {
                        "id": 211,
                        "string": "Conversely, while rate and structural comment tree features do not carry as much in-domain predictive capacity on their own, they generally transfer better between communities, e.g., for RATE+TREE, there is very little performance drop-off when training/testing on AskMen/AskWomen (this holds for all timing cutoffs we considered)."
                    },
                    {
                        "id": 212,
                        "string": "Similarly, in the case of training on Fitness and testing on PersonalFinance, we sometimes observe a performance increase when switching domains (e.g., at t = 60); we suspect that this could be an effect of dataset size, as our Fitness dataset has the most posts of any subreddit we consider, and PersonalFinance has the least."
                    },
                    {
                        "id": 213,
                        "string": "Figure 8 : Average cross-validated performance degradation for transfer learning setting at t = 180 and t = 60; the y-axis is the training subreddit and the xaxis is testing."
                    },
                    {
                        "id": 214,
                        "string": "For a fixed test subreddit, each column gives the percent accuracy drop when switching from the matching training set to a domain transfer setting."
                    },
                    {
                        "id": 215,
                        "string": "In general, while incorporating comment text features results in higher accuracy overall, comment rate + tree features transfer between communities with less performance degradation."
                    },
                    {
                        "id": 216,
                        "string": "Conclusion We demonstrated that early discussion features are predictive of eventual controversiality in several reddit communities."
                    },
                    {
                        "id": 217,
                        "string": "This finding was dependent upon considering an expressive feature set of early discussions; to our knowledge, this type of feature set (consisting of text, trees, etc.)"
                    },
                    {
                        "id": 218,
                        "string": "hadn't been thoroughly explored in prior early prediction work."
                    },
                    {
                        "id": 219,
                        "string": "One promising avenue for future work is to examine higher-quality textual representations for conversation trees."
                    },
                    {
                        "id": 220,
                        "string": "While our mean-pooling method did produce high performance, the resulting classifiers do not transfer between domains effectively."
                    },
                    {
                        "id": 221,
                        "string": "Developing a more expressive algorithm (e.g., one that incorporates reply-structure relationships) could boost predictive performance, and enable textual features to be less brittle."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 25
                    },
                    {
                        "section": "Datasets",
                        "n": "2",
                        "start": 26,
                        "end": 33
                    },
                    {
                        "section": "comments, 72% upvoted",
                        "n": "62",
                        "start": 34,
                        "end": 36
                    },
                    {
                        "section": "comments, 93% upvoted",
                        "n": "115",
                        "start": 37,
                        "end": 40
                    },
                    {
                        "section": "comments, 63% upvoted",
                        "n": "66",
                        "start": 41,
                        "end": 42
                    },
                    {
                        "section": "comments, 90% upvoted",
                        "n": "394",
                        "start": 43,
                        "end": 44
                    },
                    {
                        "section": "comments, 57% upvoted",
                        "n": "61",
                        "start": 45,
                        "end": 45
                    },
                    {
                        "section": "comments, 62% upvoted",
                        "n": "125",
                        "start": 46,
                        "end": 48
                    },
                    {
                        "section": "comments, 97% upvoted",
                        "n": "110",
                        "start": 49,
                        "end": 81
                    },
                    {
                        "section": "Quantitative Validation of Labels",
                        "n": "2.1",
                        "start": 82,
                        "end": 100
                    },
                    {
                        "section": "Early Discussion Threads",
                        "n": "3",
                        "start": 101,
                        "end": 116
                    },
                    {
                        "section": "Early Prediction of Controversy",
                        "n": "4",
                        "start": 117,
                        "end": 130
                    },
                    {
                        "section": "Comparing Text Models",
                        "n": "4.1",
                        "start": 131,
                        "end": 168
                    },
                    {
                        "section": "Post-time Metadata",
                        "n": "4.2",
                        "start": 169,
                        "end": 178
                    },
                    {
                        "section": "Early Discussion Features",
                        "n": "4.3",
                        "start": 179,
                        "end": 205
                    },
                    {
                        "section": "Domain Transfer",
                        "n": "4.3.1",
                        "start": 206,
                        "end": 215
                    },
                    {
                        "section": "Conclusion",
                        "n": "5",
                        "start": 216,
                        "end": 221
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1337-Figure1-1.png",
                        "caption": "Figure 1: How our research relates to prior work.",
                        "page": 1,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 524.16,
                            "y1": 66.24,
                            "y2": 364.32
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Table2-1.png",
                        "caption": "Table 2: Average accuracy for each post-time, textonly predictor for each dataset, averaged over 15 crossvalidation splits; standard errors are ±.6, on average (and never exceed ±1.03). Bold is best in column; underlined are statistically indistinguishable from best in column (p < .01)",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 292.32,
                            "y1": 62.4,
                            "y2": 174.23999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Table3-1.png",
                        "caption": "Table 3: Post-time only results: the effect of incorporating timing and author identity features.",
                        "page": 6,
                        "bbox": {
                            "x1": 84.96,
                            "x2": 276.0,
                            "y1": 268.8,
                            "y2": 312.96
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Figure3-1.png",
                        "caption": "Figure 3: For each community, a histogram of percent-upvoted and the median number of comments per bin.",
                        "page": 2,
                        "bbox": {
                            "x1": 385.91999999999996,
                            "x2": 516.48,
                            "y1": 62.4,
                            "y2": 222.72
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Figure2-1.png",
                        "caption": "Figure 2: Examples of two controversial and one non-controversial post from three communities. Also shown are the text of the first reply, the number of comments the post received, and its percent-upvoted.",
                        "page": 2,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 372.0,
                            "y1": 77.75999999999999,
                            "y2": 209.28
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Figure5-1.png",
                        "caption": "Figure 5: Classifier accuracy for increasing periods of observation; the “+” in the legend indicates that a feature set is combined with the feature sets below. ts, the time the full feature set first achieves statistical significance over the post-time only baseline, is given for each community (if significance is achieved).",
                        "page": 7,
                        "bbox": {
                            "x1": 86.88,
                            "x2": 516.0,
                            "y1": 62.4,
                            "y2": 337.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Figure6-1.png",
                        "caption": "Figure 6: Observation period versus median number of comments available.",
                        "page": 7,
                        "bbox": {
                            "x1": 84.96,
                            "x2": 165.6,
                            "y1": 407.03999999999996,
                            "y2": 495.35999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Figure7-1.png",
                        "caption": "Figure 7: Histogram of the number of comments available per thread at t = 60 minutes in AskWomen.",
                        "page": 7,
                        "bbox": {
                            "x1": 196.79999999999998,
                            "x2": 280.32,
                            "y1": 403.68,
                            "y2": 488.15999999999997
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Table1-1.png",
                        "caption": "Table 1: Dataset statistics: number of posts, number of comments, mean percent-upvoted for the controversial and non-controversial classes.",
                        "page": 3,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 283.2,
                            "y1": 62.4,
                            "y2": 144.96
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Figure8-1.png",
                        "caption": "Figure 8: Average cross-validated performance degradation for transfer learning setting at t = 180 and t = 60; the y-axis is the training subreddit and the xaxis is testing. For a fixed test subreddit, each column gives the percent accuracy drop when switching from the matching training set to a domain transfer setting. In general, while incorporating comment text features results in higher accuracy overall, comment rate + tree features transfer between communities with less performance degradation.",
                        "page": 8,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 62.4,
                            "y2": 257.28
                        }
                    },
                    {
                        "filename": "../figure/image/1337-Figure4-1.png",
                        "caption": "Figure 4: Early conversation trees from AskMen; nodes are comments and edges indicate reply structure. The original post is the black node, and as node colors lighten, comment timing increases from zero minutes to sixty minutes.",
                        "page": 4,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 286.08,
                            "y1": 62.879999999999995,
                            "y2": 228.0
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-77"
        },
        {
            "slides": {
                "1": {
                    "title": "Trending of Social Media",
                    "text": [
                        "Facebook YouTube Instagram Twitter Snapchat Reddit Pinterest Tumblr Linkedin",
                        "Number of active users (millions)",
                        "ON a ts 200 croc"
                    ],
                    "page_nums": [
                        2
                    ],
                    "images": []
                },
                "2": {
                    "title": "Name Tagging",
                    "text": [
                        "[ORG France] defeated [ORG Croatia] in [MISC",
                        "World Cup] final at [LOC Luzhniki Stadium].",
                        "Provide inputs to downstream applications"
                    ],
                    "page_nums": [
                        3
                    ],
                    "images": []
                },
                "3": {
                    "title": "Challenges of Name Tagging in Social Media",
                    "text": [
                        "Real Madrid midfielder Toni Kroos has revealed why he snubbed Cristiano Ronaldo's birthday party, following their humiliating derby defeat to Atletico Madrid. W. W",
                        "Read: Khedira Doesn't Regret Attending CR7's Party Ronaldo received a lot of criticism for hosting his birthday party just hours after his side lost 4-0 to Atletico, and although Kroos understands it was difficult to cancel the party, he feels the tim- ing wasn't right. was invited to Cristiano Ronaldo's party. | didn't go because I knew what could happen he told German TV station ZDF. It wasn't the moment to have a party after losing 4-0 against Atletico. It's also true that many people had been invited and cancelling it wouldn't have been easy.\" R 7 Oo r T K8",
                        "The 25-year-old, who won the World Cup with Germany in Brazil, also insisted that recent media reports of a Real Madrid crisis' were thrown out of proportion. \"We should take a step back and look at the whole picture in the face of what is being said. We have only lost the one game, Kroos added. e Limited Textual Context I think that many teams would love to suffer a crisis like ours. Of course we should be criticised if we play a bad game, as we did that day, without doubt.\" e Performs much worse on Read: Barca In Trouble For Drunk Ronaldo Chants? : . social media data Kroos joined Los Blancos for 30 million last summer and started all but two games for Real, as- sisting 12 goals and scoring one. Do you think Real Madrid will return to their form from before Christmas? Have your say in the comments section below.",
                        "Social Media eLanguage Variations",
                        "Alison wonderlandxDiploxjuaz B2B ayee",
                        "Within word white spaces"
                    ],
                    "page_nums": [
                        4,
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "Utilization of Vision",
                    "text": [
                        "Karl-Anthony Towns named unanimous intimate surprise set at Shea 2015-2016 NBA Rookie of the Year",
                        "Difficult cases based on text only"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": [
                        "figure/image/1345-Figure1-1.png"
                    ]
                },
                "5": {
                    "title": "Task Definition",
                    "text": [
                        "Multimedia Input: image-sentence pair",
                        "Colts Have 4th Best QB Situation in NFL with Andrew Luck #ColtStrong",
                        "[ORG Colts] Have 4th Best QB Situation in [ORG",
                        "NFL] with [PER Andrew Luck] #ColtStrong",
                        "Output: tagging results on sentence"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "6": {
                    "title": "Our work",
                    "text": [
                        "State-of-the-art for news articles (",
                        "Visual attention model (Bahdanau et al.,",
                        "Extract visual features from image regions that are most related to accompanying sentence",
                        "Modulation Gate before CRFs",
                        "Combine word representation with visual features based on their relatedness"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "8": {
                    "title": "Overall Framework",
                    "text": [
                        "Multimodal Input : B-PER |-PER I- t I-PER",
                        "Florence and the Machine ~ text",
                        "surprises ill teen with _ LSTM",
                        "private concert Hl 4 CRE",
                        "/ a dk -p/ /isual - > L od Modulation",
                        "f } G j m\\ \\ Gate | Gate \\ Gate ate / Gate",
                        "ceo I Forward Attention Model ! LSTM I",
                        "Gee fF Aw we = = = LI 114 Ss word SQ embedding ; ie char e representations \\ Florence and the Machine",
                        "_ Visual Attention Model :"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": [
                        "figure/image/1345-Figure2-1.png",
                        "figure/image/1345-Figure3-1.png"
                    ]
                },
                "9": {
                    "title": "Sequence LabelingBLSTM CRE Lample et al 2016",
                    "text": [
                        "and a re the input, memory and hidden state at time t respectively. and are weight matrices. is the element-wise product functions and is the element-wise sigmoid function"
                    ],
                    "page_nums": [
                        12
                    ],
                    "images": []
                },
                "10": {
                    "title": "Attention Model for Text Related Visual Features Localization",
                    "text": [
                        "V= CNN(I) Outputs from convolutional layer",
                        "Ss Florence and the Machine fe surprises ill teen with private concert QU",
                        "e,= W pa; + by Attention",
                        "I Input image C= S a,V; Context Vector"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": [
                        "figure/image/1345-Figure3-1.png"
                    ]
                },
                "11": {
                    "title": "Modulation Gate",
                    "text": [
                        "UV C visual context",
                        "() Multiplication word _",
                        "* representations ( a ) activation function",
                        "f_ ; / \\ (tanh } activation function { tanh } (tanh)",
                        "visual gate visual context word representations",
                        "By o(Wyh; + Uyve + by) Uc Visual context",
                        "Bw = o(Wwh; + UO wve + by) h; Word representation",
                        "Wm bw . h; + By -M Wm, _ Visually tuned word representation"
                    ],
                    "page_nums": [
                        14
                    ],
                    "images": []
                },
                "13": {
                    "title": "Dataset",
                    "text": [
                        "Topics: Sports, concerts and other social events",
                        "Named Entity Types: Person, Organization, Location and MISC",
                        "Size of the dataset in numbers of sentences and tokens"
                    ],
                    "page_nums": [
                        16
                    ],
                    "images": []
                },
                "15": {
                    "title": "Attention Visualization",
                    "text": [
                        "(a) [PER Klay Thompson], [ORG Warriors] overwhelm [ORG ...]",
                        "(b) [PER Radiohead] offers old and new at first concert in four years",
                        "(c) [MISC Cannes] just became the [PER Blake Lively] show",
                        "(d) #iPhoneAt10: How [PER Steve Jobs] and [ORG Apple] changed modern society",
                        "(e) [PER Florence and the Machine] surprises ill teen with private concert",
                        "(f) [ORG Warriorette] Basketball Campers ready for Day 2",
                        "(g) Is defending champ [PER Sandeul] able to win for the third time on [MISC Duet Song Festival]?",
                        "(h) Shirts at the ready for our hometown game today #[ORG Leicester] #pgautomotive #[ORG premierleague]",
                        "(i) ARMY put up a huge ad in [LOC Times Square] for [PER BTS] 4th anniversary!"
                    ],
                    "page_nums": [
                        18
                    ],
                    "images": [
                        "figure/image/1345-Figure5-1.png",
                        "figure/image/1345-Figure3-1.png"
                    ]
                }
            },
            "paper_title": "Visual Attention Model for Name Tagging in Multimodal Social Media",
            "paper_id": "1345",
            "paper": {
                "title": "Visual Attention Model for Name Tagging in Multimodal Social Media",
                "abstract": "Everyday billions of multimodal posts containing both images and text are shared in social media sites such as Snapchat, Twitter or Instagram. This combination of image and text in a single message allows for more creative and expressive forms of communication, and has become increasingly common in such sites. This new paradigm brings new challenges for natural language understanding, as the textual component tends to be shorter, more informal, and often is only understood if combined with the visual context. In this paper, we explore the task of name tagging in multimodal social media posts. We start by creating two new multimodal datasets: one based on Twitter posts 1 and the other based on Snapchat captions (exclusively submitted to public and crowdsourced stories). We then propose a novel model based on Visual Attention that not only provides deeper visual understanding on the decisions of the model, but also significantly outperforms other state-of-theart baseline methods for this task. 2 * * This work was mostly done during the first author's internship at Snap Research. 1  The Twitter data and associated images presented in this paper were downloaded from https://archive.org/ details/twitterstream 2 We will make the annotations on Twitter data available for research purpose upon request.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Social platforms, like Snapchat, Twitter, Instagram and Pinterest, have become part of our lives and play an important role in making communication easier and accessible."
                    },
                    {
                        "id": 1,
                        "string": "Once textcentric, social media platforms are becoming in-creasingly multimodal, with users combining images, videos, audios, and texts for better expressiveness."
                    },
                    {
                        "id": 2,
                        "string": "As social media posts become more multimodal, the natural language understanding of the textual components of these messages becomes increasingly challenging."
                    },
                    {
                        "id": 3,
                        "string": "In fact, it is often the case that the textual component can only be understood in combination with the visual context of the message."
                    },
                    {
                        "id": 4,
                        "string": "In this context, here we study the task of Name Tagging for social media containing both image and textual contents."
                    },
                    {
                        "id": 5,
                        "string": "Name tagging is a key task for language understanding, and provides input to several other tasks such as Question Answering, Summarization, Searching and Recommendation."
                    },
                    {
                        "id": 6,
                        "string": "Despite its importance, most of the research in name tagging has focused on news articles and longer text documents, and not as much in multimodal social media data (Baldwin et al., 2015) ."
                    },
                    {
                        "id": 7,
                        "string": "However, multimodality is not the only challenge to perform name tagging on such data."
                    },
                    {
                        "id": 8,
                        "string": "The textual components of these messages are often very short, which limits context around names."
                    },
                    {
                        "id": 9,
                        "string": "Moreover, there linguistic variations, slangs, typos and colloquial language are extremely common, such as using 'looooove' for 'love', 'LosAngeles' for 'Los Angeles', and '#Chicago #Bull' for 'Chicago Bulls'."
                    },
                    {
                        "id": 10,
                        "string": "These characteristics of social media data clearly illustrate the higher difficulty of this task, if compared to traditional newswire name tagging."
                    },
                    {
                        "id": 11,
                        "string": "In this work, we modify and extend the current state-of-the-art model (Lample et al., 2016; Ma and Hovy, 2016) in name tagging to incorporate the visual information of social media posts using an Attention mechanism."
                    },
                    {
                        "id": 12,
                        "string": "Although the usually short textual components of social media posts provide limited contextual information, the accompanying images often provide rich information that can be useful for name tagging."
                    },
                    {
                        "id": 13,
                        "string": "For ex- ample, as shown in Figure 1 , both captions include the phrase 'Modern Baseball'."
                    },
                    {
                        "id": 14,
                        "string": "It is not easy to tell if each Modern Baseball refers to a name or not from the textual evidence only."
                    },
                    {
                        "id": 15,
                        "string": "However using the associated images as reference, we can easily infer that Modern Baseball in the first sentence should be the name of a band because of the implicit features from the objects like instruments and stage, and the Modern Baseball in the second sentence refers to the sport of baseball because of the pitcher in the image."
                    },
                    {
                        "id": 16,
                        "string": "In this paper, given an image-sentence pair as input, we explore a new approach to leverage visual context for name tagging in text."
                    },
                    {
                        "id": 17,
                        "string": "First, we propose an attention-based model to extract visual features from the regions in the image that are most related to the text."
                    },
                    {
                        "id": 18,
                        "string": "It can ignore irrelevant visual information."
                    },
                    {
                        "id": 19,
                        "string": "Secondly, we propose to use a gate to combine textual features extracted by a Bidirectional Long Short Term Memory (BLSTM) and extracted visual features, before feed them into a Conditional Random Fields(CRF) layer for tag predication."
                    },
                    {
                        "id": 20,
                        "string": "The proposed gate architecture plays the role to modulate word-level multimodal features."
                    },
                    {
                        "id": 21,
                        "string": "We evaluate our model on two labeled datasets collected from Snapchat and Twitter respectively."
                    },
                    {
                        "id": 22,
                        "string": "Our experimental results show that the proposed model outperforms state-for-the-art name tagger in multimodal social media."
                    },
                    {
                        "id": 23,
                        "string": "The main contributions of this work are as follows: • We create two new datasets for name tagging in multimedia data, one using Twitter and the other using crowd-sourced Snapchat posts."
                    },
                    {
                        "id": 24,
                        "string": "These new datasets effectively constitute new benchmarks for the task."
                    },
                    {
                        "id": 25,
                        "string": "• We propose a visual attention model specifically for name tagging in multimodal social media data."
                    },
                    {
                        "id": 26,
                        "string": "The proposed end-to-end model only uses image-sentence pairs as input without any human designed features, and a Visual Attention component that helps understand the decision making of the model."
                    },
                    {
                        "id": 27,
                        "string": "Figure 2 shows the overall architecture of our model."
                    },
                    {
                        "id": 28,
                        "string": "We describe three main components of our model in this section: BLSTM-CRF sequence labeling model (Section 2.1), Visual Attention Model (Section 2.3) and Modulation Gate (Section 2.4)."
                    },
                    {
                        "id": 29,
                        "string": "Given a pair of sentence and image as input, the Visual Attention Model extracts regional visual features from the image and computes the weighted sum of the regional visual features as the visual context vector, based on their relatedness with the sentence."
                    },
                    {
                        "id": 30,
                        "string": "The BLSTM-CRF sequence labeling model predicts the label for each word in the sentence based on both the visual context vector and the textual information of the words."
                    },
                    {
                        "id": 31,
                        "string": "The modulation gate controls the combination of the visual context vector and the word representations for each word before the CRF layer."
                    },
                    {
                        "id": 32,
                        "string": "Model BLSTM-CRF Sequence Labeling We model name tagging as a sequence labeling problem."
                    },
                    {
                        "id": 33,
                        "string": "Given a sequence of words: S = {s 1 , s 2 , ..., s n }, we aim to predict a sequence of labels: L = {l 1 , l 2 , ..., l n }, where l i ∈ L and L is a pre-defined label set."
                    },
                    {
                        "id": 34,
                        "string": "Bidirectional LSTM."
                    },
                    {
                        "id": 35,
                        "string": "Long Short-term Memory Networks (LSTMs) (Hochreiter and Schmidhuber, 1997) are variants of Recurrent Neural Networks (RNNs) designed to capture long-range dependencies of input."
                    },
                    {
                        "id": 36,
                        "string": "The equations of a LSTM cell are as follows: i t = σ(W xi x t + W hi h t−1 + b i ) f t = σ(W xf x t + W hf h t−1 + b f ) c t = tanh(W xc x t + W hc h t−1 + b c ) c t = f t c t−1 + i t c t o t = σ(W xo x t + W ho h t−1 + b o ) h t = o t tanh(c t ) where x t , c t and h t are the input, memory and hidden state at time t respectively."
                    },
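A compact NumPy sketch of these update equations; stacking the four gates into a single weight matrix W and bias b is our packing choice, not necessarily the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    # W: [4d, dx + d] maps the concatenated input [x_t; h_prev] to the
    # stacked pre-activations of the gates (i, f, candidate c, o).
    z = W @ np.concatenate([x_t, h_prev]) + b
    d = h_prev.shape[0]
    i_t = sigmoid(z[:d])            # input gate
    f_t = sigmoid(z[d:2*d])         # forget gate
    g_t = np.tanh(z[2*d:3*d])       # candidate memory
    o_t = sigmoid(z[3*d:])          # output gate
    c_t = f_t * c_prev + i_t * g_t  # element-wise products, as in the text
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```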
                    {
                        "id": 37,
                        "string": "W xi , W hi , W xf , W hf , W xc , W hc , W xo , and W ho are weight matrices."
                    },
                    {
                        "id": 38,
                        "string": "is the element-wise product function and σ is the element-wise sigmoid function."
                    },
                    {
                        "id": 39,
                        "string": "Name Tagging benefits from both of the past (left) and the future (right) contexts, thus we implement the Bidirectional LSTM (Graves et al., 2013; Dyer et al., 2015) by concatenating the left and right context representations, h t = [ − → h t , ← − h t ], for each word."
                    },
                    {
                        "id": 40,
                        "string": "Character-level Representation."
                    },
                    {
                        "id": 41,
                        "string": "Following (Lample et al., 2016) , we generate the character-level representation for each word using another BLSTM."
                    },
                    {
                        "id": 42,
                        "string": "It receives character embeddings as input and generates representations combining implicit prefix, suffix and spelling information."
                    },
                    {
                        "id": 43,
                        "string": "The final word representation x i is the concatenation of word embedding e i and character-level representation c i ."
                    },
                    {
                        "id": 44,
                        "string": "c i = BLST M char (s i ) s i ∈ S x i = [e i , c i ] Conditional random fields (CRFs)."
                    },
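A minimal PyTorch sketch of the word representation defined above, assuming (as the text describes) a character-level BLSTM whose final forward/backward states are concatenated with the word embedding; class and dimension names are illustrative:

```python
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    def __init__(self, n_chars, d_char, n_words, d_word):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char)
        self.char_lstm = nn.LSTM(d_char, d_char, bidirectional=True, batch_first=True)
        self.word_emb = nn.Embedding(n_words, d_word)

    def forward(self, char_ids, word_id):
        # char_ids: [1, word_length]; word_id: [1]
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        c_i = torch.cat([h[0], h[1]], dim=-1)                # c_i = BLSTM_char(s_i)
        return torch.cat([self.word_emb(word_id), c_i], -1)  # x_i = [e_i, c_i]
```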
                    {
                        "id": 45,
                        "string": "For name tagging, it is important to consider the constraints of the labels in neighborhood (e.g., I-LOC must follow B-LOC)."
                    },
                    {
                        "id": 46,
                        "string": "CRFs (Lafferty et al., 2001 ) are effective to learn those constraints and jointly predict the best chain of labels."
                    },
                    {
                        "id": 47,
                        "string": "We follow the implementation of CRFs in (Ma and Hovy, 2016) ."
                    },
                    {
                        "id": 48,
                        "string": "Visual Feature Representation We use Convolutional Neural Networks (CNNs) (LeCun et al., 1989) to obtain the representations of images."
                    },
                    {
                        "id": 49,
                        "string": "Particularly, we use Residual Net (ResNet) (He et al., 2016) , which (Lin et al., 2014) detection, and COCO segmentation tasks."
                    },
                    {
                        "id": 50,
                        "string": "Given an input pair (S, I), where S represents the word sequence and I represents the image rescaled to 224x224 pixels, we use ResNet to extract visual features for regional areas as well as for the whole image ( Fig 3) : V g = ResN et g (I) V r = ResN et r (I) where the global visual vector V g , which represents the whole image, is the output before the last fully connected layer 3 ."
                    },
                    {
                        "id": 51,
                        "string": "The dimension of V g is 1,024."
                    },
                    {
                        "id": 52,
                        "string": "V r are the visual representations for regional areas and they are extracted from the last convolutional layer of ResNet, and the dimension is 1,024x7x7 as shown in Figure 3 ."
                    },
                    {
                        "id": 53,
                        "string": "7x7 is the number of regions in the image and 1,024 is the dimension of the feature vector."
                    },
                    {
                        "id": 54,
                        "string": "Thus each feature vector of V r corresponds to a 32x32 pixel region of the rescaled input image."
                    },
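A sketch of this feature extraction with an off-the-shelf torchvision ResNet-152; note the stock model yields 2,048-channel 7×7 maps, so the 1,024-dim features reported above presumably involve an additional projection, which we omit here:

```python
import torch
import torchvision.models as models

resnet = models.resnet152(pretrained=True).eval()
# Everything up to (and including) the last conv block; drops avgpool and fc.
conv_body = torch.nn.Sequential(*list(resnet.children())[:-2])

img = torch.randn(1, 3, 224, 224)      # stand-in for a rescaled input image
with torch.no_grad():
    V_r = conv_body(img)               # [1, 2048, 7, 7]: one vector per 32x32 region
    V_g = V_r.mean(dim=(2, 3))         # pooled global vector (the pre-fc output)
```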
                    {
                        "id": 55,
                        "string": "The global visual representation is a reasonable representation of the whole input image, but not the best."
                    },
                    {
                        "id": 56,
                        "string": "Sometimes only parts of the image are related to the associated sentence."
                    },
                    {
                        "id": 57,
                        "string": "For example, the visual features from the right part of the image in Figure 4 cannot contribute to inferring the information in the associated sentence 'I have just bought Jeremy Pied.'"
                    },
                    {
                        "id": 58,
                        "string": "In this work we utilize visual attention mechanism to combat the problem, which has been proven effective for vision-language related tasks such as Image Captioning  and Visual Question Answering (Yang et al., 2016b; Lu et al., 2016) , by enforcing the model to focus on the regions in images that are mostly related to context textual information while ignoring irrelevant regions."
                    },
                    {
                        "id": 59,
                        "string": "Also the visualization of attention can also help us to understand the decision making of the model."
                    },
                    {
                        "id": 60,
                        "string": "Attention mechanism is mapping a query and a set of key-value pairs to an output."
                    },
                    {
                        "id": 61,
                        "string": "The output is a weighted sum of the values and the assigned weight for each value is computed by a function of the query and corresponding key."
                    },
                    {
                        "id": 62,
                        "string": "We encode the sentence into a query vector using an LSTM, and use regional visual representations V r as both keys and values."
                    },
                    {
                        "id": 63,
                        "string": "Text Query Vector."
                    },
                    {
                        "id": 64,
                        "string": "We use an LSTM to encode the sentence into a query vector, in which the inputs of the LSTM are the concatenations of word embeddings and character-level word representations."
                    },
                    {
                        "id": 65,
                        "string": "Different from the LSTM model used for sequence labeling in Section 2.1, the LSTM here aims to get the semantic information of the sen-tence and it is unidirectional: Visual Attention Model Q = LST M query (S) (1) Attention Implementation."
                    },
                    {
                        "id": 66,
                        "string": "There are many implementations of visual attention mechanism such as Multi-layer Perceptron (Bahdanau et al., 2014) , Bilinear (Luong et al., 2015) , dot product (Luong et al., 2015) , Scaled Dot Product (Vaswani et al., 2017) , and linear projection after summation (Yang et al., 2016b) ."
                    },
                    {
                        "id": 67,
                        "string": "Based on our experimental results, dot product implementations usually result in more concentrated attentions and linear projection after summation results in more dispersed attentions."
                    },
                    {
                        "id": 68,
                        "string": "In the context of name tagging, we choose the implementation of linear projection after summation because it is beneficial for the model to utilize as many related visual features as possible, and concentrated attentions may make the model bias."
                    },
                    {
                        "id": 69,
                        "string": "For implementation, we first project the text query vector Q and regional visual features V r into the same dimensions: P t = tanh(W t Q) P v = tanh(W v V r ) then we sum up the projected query vector with each projected regional visual vector respectively: A = P t ⊕ P v the weights of the regional visual vectors: E = sof tmax(W a A + b a ) where W a is weights matrix."
                    },
                    {
                        "id": 70,
                        "string": "The weighted sum of the regional visual features is: v c = α i v i α i ∈ E, v i ∈ V r We use v c as the visual context vector to initialize the BLSTM sequence labeling model in Section 2.1."
                    },
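A PyTorch sketch of this linear-projection-after-summation attention; the module name and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SumAttention(nn.Module):
    def __init__(self, d_query, d_visual, d_proj):
        super().__init__()
        self.W_t = nn.Linear(d_query, d_proj, bias=False)
        self.W_v = nn.Linear(d_visual, d_proj, bias=False)
        self.W_a = nn.Linear(d_proj, 1)  # includes the bias b_a

    def forward(self, Q, V_r):
        # Q: [B, d_query]; V_r: [B, R, d_visual] with R = 7*7 regions
        P_t = torch.tanh(self.W_t(Q)).unsqueeze(1)     # [B, 1, d_proj]
        P_v = torch.tanh(self.W_v(V_r))                # [B, R, d_proj]
        A = P_t + P_v                                  # summation, broadcast over regions
        E = F.softmax(self.W_a(A).squeeze(-1), dim=1)  # region weights
        v_c = (E.unsqueeze(-1) * V_r).sum(dim=1)       # visual context vector
        return v_c, E
```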
                    {
                        "id": 71,
                        "string": "We compare the performances of the models using global visual vector V g and attention based visual context vector V c for initialization in Section 4."
                    },
                    {
                        "id": 72,
                        "string": "Visual Modulation Gate The BLSTM-CRF sequence labeling model benefits from using the visual context vector to initialize the LSTM cell."
                    },
                    {
                        "id": 73,
                        "string": "However, the better way to utilize visual features for sequence labeling is to incorporate the features at word level individually."
                    },
                    {
                        "id": 74,
                        "string": "However visual features contribute quite differently when they are used to infer the tags of different words."
                    },
                    {
                        "id": 75,
                        "string": "For example, we can easily find matched visual patterns from associated images for verbs such as 'sing', 'run', and 'play'."
                    },
                    {
                        "id": 76,
                        "string": "Words/Phrases such as names of basketball players, artists, and buildings are often well-aligned with objects in images."
                    },
                    {
                        "id": 77,
                        "string": "However it is difficult to align function words such as 'the', 'of ' and 'well' with visual features."
                    },
                    {
                        "id": 78,
                        "string": "Fortunately, most of the challenging cases in name tagging involve nouns and verbs, the disambiguation of which can benefit more from visual features."
                    },
                    {
                        "id": 79,
                        "string": "We propose to use a visual modulation gate, similar to (Miyamoto and Cho, 2016; Yang et al., 2016a) , to dynamically control the combination of visual features and word representation generated by BLSTM at word-level, before feed them into the CRF layer for tag prediction."
                    },
                    {
                        "id": 80,
                        "string": "The equations for the implementation of modulation gate are as follows: β v = σ(W v h i + U v v c + b v ) β w = σ(W w h i + U w v c + b w ) m = tanh(W m h i + U m v c + b m ) w m = β w · h i + β v · m where h i is the word representation generated by BLSTM, v c is the computed visual context vector, W v , W w , W m , U v , U w and U m are weight matrices, σ is the element-wise sigmoid function, and w m is the modulated word representations fed into the CRF layer in Section 2.1."
                    },
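A PyTorch sketch of the modulation gate; folding each W·h_i + U·v_c pair into a single linear layer over the concatenation [h_i; v_c] is our packing choice:

```python
import torch
import torch.nn as nn

class ModulationGate(nn.Module):
    def __init__(self, d_word, d_visual):
        super().__init__()
        self.beta_v = nn.Linear(d_word + d_visual, d_word)
        self.beta_w = nn.Linear(d_word + d_visual, d_word)
        self.m = nn.Linear(d_word + d_visual, d_word)

    def forward(self, h_i, v_c):
        hv = torch.cat([h_i, v_c], dim=-1)    # equivalent to W·h_i + U·v_c (+ b)
        b_v = torch.sigmoid(self.beta_v(hv))  # visual gate
        b_w = torch.sigmoid(self.beta_w(hv))  # word gate
        m = torch.tanh(self.m(hv))
        return b_w * h_i + b_v * m            # visually modulated word representation
```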
                    {
                        "id": 81,
                        "string": "We conduct experiments to evaluate the impact of modulation gate in Section 4."
                    },
                    {
                        "id": 82,
                        "string": "Datasets We evaluate our model on two multimodal datasets, which are collected from Twitter and Snapchat respectively."
                    },
                    {
                        "id": 83,
                        "string": "Table 1 summarizes the data statistics."
                    },
                    {
                        "id": 84,
                        "string": "Both datasets contain four types of named entities: Location, Person, Organization and Miscellaneous."
                    },
                    {
                        "id": 85,
                        "string": "Each data instance contains a pair of sentence and image, and the names in sentences are manually tagged by three expert labelers."
                    },
                    {
                        "id": 86,
                        "string": "Twitter name tagging."
                    },
                    {
                        "id": 87,
                        "string": "The Twitter name tagging dataset contains pairs of tweets and their associated images extracted from May 2016, January 2017 and June 2017."
                    },
                    {
                        "id": 88,
                        "string": "We use sports and social event related key words, such as concert, festival, soccer, basketball, as queries."
                    },
                    {
                        "id": 89,
                        "string": "We don't take into consideration messages without images for this experiment."
                    },
                    {
                        "id": 90,
                        "string": "If a tweet has more than one image associated to it, we randomly select one of the images."
                    },
                    {
                        "id": 91,
                        "string": "Snap name tagging."
                    },
                    {
                        "id": 92,
                        "string": "The Snap name tagging dataset consists of caption and image pairs exclusively extracted from snaps submitted to public and live stories."
                    },
                    {
                        "id": 93,
                        "string": "They were collected between May and July of 2017."
                    },
                    {
                        "id": 94,
                        "string": "The data contains captions submitted to multiple community curated stories like the Electric Daisy Carnival (EDC) music festival and the Golden State Warrior's NBA parade."
                    },
                    {
                        "id": 95,
                        "string": "Both Twitter and Snapchat are social media with plenty of multimodal posts, but they have obvious differences with sentence length and image styles."
                    },
                    {
                        "id": 96,
                        "string": "In Twitter, text plays a more important role, and the sentences in the Twitter dataset are much longer than those in the Snap dataset (16.0 tokens vs 8.1 tokens)."
                    },
                    {
                        "id": 97,
                        "string": "The image is often more related to the content of the text and added with the purpose of illustrating or giving more context."
                    },
                    {
                        "id": 98,
                        "string": "On the other hand, as users of Snapchat use cameras to communicate, the roles of text and image are switched."
                    },
                    {
                        "id": 99,
                        "string": "Captions are often added to complement what is being portrayed by the snap."
                    },
                    {
                        "id": 100,
                        "string": "On our experiment section we will show that our proposed model outperforms baseline on both datasets."
                    },
                    {
                        "id": 101,
                        "string": "We believe the Twitter dataset can be an important step towards more research in multimodal name tagging and we plan to provide it as a benchmark upon request."
                    },
                    {
                        "id": 102,
                        "string": "Experiment Training Tokenization."
                    },
                    {
                        "id": 103,
                        "string": "To tokenize the sentences, we use the same rules as (Owoputi et al., 2013) , except we separate the hashtag '#' with the words after."
                    },
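One plausible way to implement the hashtag rule above (the regex and function name are ours):

```python
import re

def split_hashtags(text):
    # Separate '#' from the word that follows it, e.g. "#ColtStrong" -> "# ColtStrong".
    return re.sub(r"#(\w)", r"# \1", text)

print(split_hashtags("Colts Have 4th Best QB Situation in NFL #ColtStrong"))
```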
                    {
                        "id": 104,
                        "string": "Labeling Schema."
                    },
                    {
                        "id": 105,
                        "string": "We use the standard BIO schema (Sang and Veenstra, 1999), because we see little difference when we switch to BIOES schema (Ratinov and Roth, 2009) ."
                    },
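For illustration, the two schemas side by side on a made-up caption (tokens vs. tags):

```python
tokens = ["Andrew", "Luck",  "joined", "the", "Colts"]
bio    = ["B-PER",  "I-PER", "O",      "O",   "B-ORG"]  # BIO: Begin/Inside/Outside
bioes  = ["B-PER",  "E-PER", "O",      "O",   "S-ORG"]  # BIOES adds End and Single
```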
                    {
                        "id": 106,
                        "string": "Word embeddings."
                    },
                    {
                        "id": 107,
                        "string": "We use the 100-dimensional GloVe 4 (Pennington et al., 2014) embeddings trained on 2 billions tweets to initialize the lookup table and do fine-tuning during training."
                    },
                    {
                        "id": 108,
                        "string": "Character embeddings."
                    },
                    {
                        "id": 109,
                        "string": "As in (Lample et al., 2016) , we randomly initialize the character embeddings with uniform samples."
                    },
                    {
                        "id": 110,
                        "string": "Based on experimental results, the size of the character embeddings affects little, and we set it as 50."
                    },
                    {
                        "id": 111,
                        "string": "Pretrained CNNs."
                    },
                    {
                        "id": 112,
                        "string": "We use the pretrained ResNet-152 (He et al., 2016) from Pytorch."
                    },
                    {
                        "id": 113,
                        "string": "Early Stopping."
                    },
                    {
                        "id": 114,
                        "string": "We use early stopping (Caruana et al., 2001; Graves et al., 2013) with a patience of 15 to prevent the model from over-fitting."
                    },
                    {
                        "id": 115,
                        "string": "Fine Tuning."
                    },
                    {
                        "id": 116,
                        "string": "The models are optimized with finetuning on both the word-embeddings and the pretrained ResNet."
                    },
                    {
                        "id": 117,
                        "string": "Optimization."
                    },
                    {
                        "id": 118,
                        "string": "The models achieve the best performance by using mini-batch stochastic gradient descent (SGD) with batch size 20 and momentum 0.9 on both datasets."
                    },
                    {
                        "id": 119,
                        "string": "We set an initial learning rate of η 0 = 0.03 with decay rate of ρ = 0.01."
                    },
                    {
                        "id": 120,
                        "string": "We use a gradient clipping of 5.0 to reduce the effects of gradient exploding."
                    },
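Putting the reported settings together, a training-loop sketch; `model` and `loader` are assumed to exist, and the exact form of the learning-rate decay (inverse-time here) is our assumption:

```python
import torch

def train(model, loader, num_epochs, eta0=0.03, rho=0.01):
    # SGD with batch size 20 (set in the loader), momentum 0.9,
    # gradient clipping at 5.0, and a decaying learning rate.
    opt = torch.optim.SGD(model.parameters(), lr=eta0, momentum=0.9)
    for epoch in range(num_epochs):
        for batch in loader:
            opt.zero_grad()
            loss = model(batch)  # assumed to return the CRF loss
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
            opt.step()
        for g in opt.param_groups:  # eta_t = eta0 / (1 + rho * t)
            g["lr"] = eta0 / (1 + rho * (epoch + 1))
```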
                    {
                        "id": 121,
                        "string": "Hyper-parameters."
                    },
                    {
                        "id": 122,
                        "string": "We summarize the hyperparameters in Table 2 ."
                    },
                    {
                        "id": 123,
                        "string": "Hyper-parameter Value LSTM hidden state size 300 Char LSTM hidden state size 50 visual vector size 100 dropout rate 0.5 Table 2 : Hyper-parameters of the networks."
                    },
                    {
                        "id": 124,
                        "string": "Table 3 shows the performance of the baseline, which is BLSTM-CRF with sentences as input only, and our proposed models on both datasets."
                    },
                    {
                        "id": 125,
                        "string": "BLSTM-CRF + Global Image Vector: use global image vector to initialize the BLSTM-CRF."
                    },
                    {
                        "id": 126,
                        "string": "BLSTM-CRF + Visual attention: use attention based visual context vector to initialize the BLSTM-CRF."
                    },
                    {
                        "id": 127,
                        "string": "BLSTM-CRF + Visual attention + Gate: modulate word representations with visual vector."
                    },
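A minimal PyTorch sketch of a visual modulation gate of the kind described above: a learned gate decides, per word, how much of the visual context vector to mix into the word representation. The class name and the exact gating formula are assumptions; the dimensions (300 for word representations, 100 for the visual vector) follow Table 2.

```python
import torch
import torch.nn as nn

class VisualModulationGate(nn.Module):
    """Hypothetical sketch: gate the attention-based visual context vector
    into the BLSTM word representation based on their relatedness."""
    def __init__(self, word_dim=300, visual_dim=100):
        super().__init__()
        self.proj = nn.Linear(visual_dim, word_dim)           # map visual vector to word space
        self.gate = nn.Linear(word_dim + visual_dim, word_dim)

    def forward(self, h, v):
        # h: (batch, word_dim) BLSTM word representation
        # v: (batch, visual_dim) attention-based visual context vector
        g = torch.sigmoid(self.gate(torch.cat([h, v], dim=-1)))  # per-dimension gate in [0, 1]
        return h + g * self.proj(v)                               # modulated word representation
```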
                    {
                        "id": 128,
                        "string": "Our final model BLSTM-CRF + VISUAL AT-TENTION + GATE, which has visual attention component and modulation gate, obtains the best F1 scores on both datasets."
                    },
                    {
                        "id": 129,
                        "string": "Visual features successfully play a role of validating entity types."
                    },
                    {
                        "id": 130,
                        "string": "For example, when there is a person in the image, it is more likely to include a person name in the associated sentence, but when there is a soccer field in the image, it is more likely to include a sports team name."
                    },
                    {
                        "id": 131,
                        "string": "Results All the models get better scores on Twitter dataset than on Snap dataset, because the average length of the sentences in Snap dataset (8.1 tokens) is much smaller than that of Twitter dataset (16.0 tokens), which means there is much less contextual information in Snap dataset."
                    },
                    {
                        "id": 132,
                        "string": "Also comparing the gains from visual features on different datasets, we find that the model benefits more from visual features on Twitter dataset, considering the much higher baseline scores on Twitter dataset."
                    },
                    {
                        "id": 133,
                        "string": "Based on our observation, users of Snapchat often post selfies with captions, which means some of the images are not strongly related to their associated captions."
                    },
                    {
                        "id": 134,
                        "string": "In contrast, users of Twitter prefer to post images to illustrate texts 4.3 Attention Visualization Figure 5 shows some good examples of the attention visualization and their corresponding name tagging results."
                    },
                    {
                        "id": 135,
                        "string": "The model can successfully focus on appropriate regions when the images are well aligned with the associated sentences."
                    },
                    {
                        "id": 136,
                        "string": "Based on our observation, the multimodal contexts in posts related to sports, concerts or festival are usually better aligned with each other, therefore the visual features easily contribute to these cases."
                    },
                    {
                        "id": 137,
                        "string": "For example, the ball and shoot action in example (a) in Figure 5 indicates that the context should be related to basketball, thus the 'Warriors' should be the name of a sports team."
                    },
                    {
                        "id": 138,
                        "string": "A singing person with a microphone in example (b) indicates that the name of an artist or a band ('Radiohead') may appear in the sentence."
                    },
                    {
                        "id": 139,
                        "string": "The second and the third rows in Figure 5 show some more challenging cases whose tagging results benefit from visual features."
                    },
                    {
                        "id": 140,
                        "string": "In example (d), the model pays attention to the big Apple logo, thus tags the 'Apple' in the sentence as an Organization name."
                    },
                    {
                        "id": 141,
                        "string": "In example (e) and (i), a small Figure 6 shows some failed examples that are categorized into three types: (1) bad alignments between visual and textual information; Error Analysis (2) blur images; (3) wrong attention made by the model."
                    },
                    {
                        "id": 142,
                        "string": "Name tagging greatly benefits from visual fea-tures when the sentences are well aligned with the associated image as we show in Section 4.3."
                    },
                    {
                        "id": 143,
                        "string": "But it is not always the case in social media."
                    },
                    {
                        "id": 144,
                        "string": "The example (a) in Figure 6 shows a failed example resulted from poor alignment between sentences and images."
                    },
                    {
                        "id": 145,
                        "string": "In this image, there are two bins standing in front of a wall, but the sentence talks about basketball players."
                    },
                    {
                        "id": 146,
                        "string": "The unrelated visual information makes the model tag 'Cleveland' as a Location, however it refers to the basketball team 'Cleveland Cavaliers'."
                    },
                    {
                        "id": 147,
                        "string": "The image in example (b) is blur, so the extracted visual information extracted actually introduces noise instead of additional information."
                    },
                    {
                        "id": 148,
                        "string": "The    image in example (c) is about a baseball pitcher, but our model pays attention to the top right corner of the image."
                    },
                    {
                        "id": 149,
                        "string": "The visual context feature computed by our model is not related to the sentence, and results in missed tagging of 'SBU', which is an organization name."
                    },
                    {
                        "id": 150,
                        "string": "Related Work In this section, we summarize relevant background on previous work on name tagging and visual attention."
                    },
                    {
                        "id": 151,
                        "string": "Name Tagging."
                    },
                    {
                        "id": 152,
                        "string": "In recent years, (Chiu and Nichols, 2015; Lample et al., 2016; Ma and Hovy, 2016) proposed several neural network architectures for named tagging that outperform traditional explicit features based methods (Chieu and Ng, 2002; Florian et al., 2003; Ando and Zhang, 2005; Ratinov and Roth, 2009; Lin and Wu, 2009; Passos et al., 2014; Luo et al., 2015) ."
                    },
                    {
                        "id": 153,
                        "string": "They all use Bidirectional LSTM (BLSTM) to extract features from a sequence of words."
                    },
                    {
                        "id": 154,
                        "string": "For characterlevel representations, (Lample et al., 2016) proposed to use another BLSTM to capture prefix and suffix information of words, and (Chiu and Nichols, 2015; Ma and Hovy, 2016) used CNN to extract position-independent character features."
                    },
                    {
                        "id": 155,
                        "string": "On top of BLSTM, (Chiu and Nichols, 2015) used a softmax layer to predict the label for each word, and (Lample et al., 2016; Ma and Hovy, 2016) used a CRF layer for joint prediction."
                    },
                    {
                        "id": 156,
                        "string": "Compared with traditional approaches, neural networks based approaches do not require hand-crafted features and achieved state-of-the-art performance on name tagging (Ma and Hovy, 2016) ."
                    },
                    {
                        "id": 157,
                        "string": "However, these methods were mainly developed for newswire and paid little attention to social media."
                    },
                    {
                        "id": 158,
                        "string": "For name tagging in social media, (Ritter et al., 2011) leveraged a large amount of unlabeled data and many dictionaries into a pipeline model."
                    },
                    {
                        "id": 159,
                        "string": "(Limsopatham and Collier, 2016) adapted the BLSTM-CRF model with additional word shape information, and (Aguilar et al., 2017) utilized an effective multi-task approach."
                    },
                    {
                        "id": 160,
                        "string": "Among these methods, our model is most similar to (Lample et al., 2016) , but we designed a new visual attention component and a modulation control gate."
                    },
                    {
                        "id": 161,
                        "string": "Visual Attention."
                    },
                    {
                        "id": 162,
                        "string": "Since the attention mechanism was proposed by (Bahdanau et al., 2014) , it has been widely adopted to language and vision related tasks, such as Image Captioning and Visual Question Answering (VQA), by retrieving the visual features most related to text context (Zhu et al., 2016; Anderson et al., 2017; Xu and Saenko, 2016; Chen et al., 2015) ."
                    },
                    {
                        "id": 163,
                        "string": "proposed to predict a word based on the visual patch that is most related to the last predicted word for image captioning."
                    },
                    {
                        "id": 164,
                        "string": "(Yang et al., 2016b; Lu et al., 2016) applied attention mechanism for VQA, to find the regions in images that are most related to the questions."
                    },
                    {
                        "id": 165,
                        "string": "(Yu et al., 2016) applied the visual attention mechanism on video captioning."
                    },
                    {
                        "id": 166,
                        "string": "Our attention implementation approach in this work is similar to those used for VQA."
                    },
                    {
                        "id": 167,
                        "string": "The model finds the regions in images that are most related to the accompanying sentences, and then feed the visual features into an BLSTM-CRF sequence labeling model."
                    },
                    {
                        "id": 168,
                        "string": "The differences are: (1) we add visual context feature at each step of sequence labeling; and (2) we propose to use a gate to control the combination of the visual information and textual information based on their relatedness."
                    },
                    {
                        "id": 169,
                        "string": "2 Conclusions and Future Work We propose a gated Visual Attention for name tagging in multimodal social media."
                    },
                    {
                        "id": 170,
                        "string": "We construct two multimodal datasets from Twitter and Snapchat."
                    },
                    {
                        "id": 171,
                        "string": "Experiments show an absolute 3%-4% F-score gain."
                    },
                    {
                        "id": 172,
                        "string": "We hope this work will encourage more research on multimodal social media in the future and we plan on making our benchmark available upon request."
                    },
                    {
                        "id": 173,
                        "string": "Name Tagging for more fine-grained types (e.g."
                    },
                    {
                        "id": 174,
                        "string": "soccer team, basketball team, politician, artist) can benefit more from visual features."
                    },
                    {
                        "id": 175,
                        "string": "For example, an image including a pitcher indicates that the 'Giants' in context should refer to the baseball team 'San Francisco Giants'."
                    },
                    {
                        "id": 176,
                        "string": "We plan to expand our model to tasks such as fine-grained Name Tagging or Entity Liking in the future."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 31
                    },
                    {
                        "section": "BLSTM-CRF Sequence Labeling",
                        "n": "2.1",
                        "start": 32,
                        "end": 47
                    },
                    {
                        "section": "Visual Feature Representation",
                        "n": "2.2",
                        "start": 48,
                        "end": 64
                    },
                    {
                        "section": "Visual Attention Model",
                        "n": "2.3",
                        "start": 65,
                        "end": 71
                    },
                    {
                        "section": "Visual Modulation Gate",
                        "n": "2.4",
                        "start": 72,
                        "end": 81
                    },
                    {
                        "section": "Datasets",
                        "n": "3",
                        "start": 82,
                        "end": 101
                    },
                    {
                        "section": "Training",
                        "n": "4.1",
                        "start": 102,
                        "end": 130
                    },
                    {
                        "section": "Results",
                        "n": "4.2",
                        "start": 131,
                        "end": 140
                    },
                    {
                        "section": "Error Analysis",
                        "n": "4.4",
                        "start": 141,
                        "end": 149
                    },
                    {
                        "section": "Related Work",
                        "n": "5",
                        "start": 150,
                        "end": 168
                    },
                    {
                        "section": "Conclusions and Future Work",
                        "n": "6",
                        "start": 169,
                        "end": 176
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1345-Table1-1.png",
                        "caption": "Table 1: Sizes of the datasets in numbers of sentence and token.",
                        "page": 5,
                        "bbox": {
                            "x1": 156.96,
                            "x2": 441.12,
                            "y1": 62.879999999999995,
                            "y2": 132.96
                        }
                    },
                    {
                        "filename": "../figure/image/1345-Table2-1.png",
                        "caption": "Table 2: Hyper-parameters of the networks.",
                        "page": 5,
                        "bbox": {
                            "x1": 90.72,
                            "x2": 271.2,
                            "y1": 417.59999999999997,
                            "y2": 487.2
                        }
                    },
                    {
                        "filename": "../figure/image/1345-Figure1-1.png",
                        "caption": "Figure 1: Examples of Modern Baseball associated with different images.",
                        "page": 1,
                        "bbox": {
                            "x1": 89.75999999999999,
                            "x2": 280.32,
                            "y1": 61.44,
                            "y2": 135.35999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1345-Figure5-1.png",
                        "caption": "Figure 5: Examples of visual attentions and NER outputs.",
                        "page": 6,
                        "bbox": {
                            "x1": 73.92,
                            "x2": 524.16,
                            "y1": 397.44,
                            "y2": 739.1999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1345-Table3-1.png",
                        "caption": "Table 3: Results of our models on noisy social media data.",
                        "page": 6,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 528.0,
                            "y1": 62.879999999999995,
                            "y2": 145.92
                        }
                    },
                    {
                        "filename": "../figure/image/1345-Figure3-1.png",
                        "caption": "Figure 3: CNN for visual features extraction.",
                        "page": 2,
                        "bbox": {
                            "x1": 317.76,
                            "x2": 511.2,
                            "y1": 332.64,
                            "y2": 411.35999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1345-Figure2-1.png",
                        "caption": "Figure 2: Overall Architecture of the Visual Attention Name Tagging Model.",
                        "page": 2,
                        "bbox": {
                            "x1": 93.6,
                            "x2": 503.03999999999996,
                            "y1": 61.44,
                            "y2": 289.92
                        }
                    },
                    {
                        "filename": "../figure/image/1345-Figure6-1.png",
                        "caption": "Figure 6: Examples of Failed Visual Attention.",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 526.0799999999999,
                            "y1": 61.44,
                            "y2": 156.48
                        }
                    },
                    {
                        "filename": "../figure/image/1345-Figure4-1.png",
                        "caption": "Figure 4: Example of partially related image and sentence. (‘I have just bought Jeremy Pied.’)",
                        "page": 3,
                        "bbox": {
                            "x1": 121.92,
                            "x2": 241.44,
                            "y1": 145.44,
                            "y2": 264.96
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-78"
        },
        {
            "slides": {
                "1": {
                    "title": "Approach",
                    "text": [
                        "f ine-tuning is evaluated in a batch setting",
                        "Corpus BLEU or isolated sentence-wise metrics are often used",
                        "These do not necessarily express how fast a system adapts",
                        "As we will show this is not good enough",
                        "We seek to measure perceived, immediate adaptation performance",
                        "Calculate recall on the set of all words that are not stopwords, ignoring",
                        "1In each of the data sets considered in this work, the average number of occurrences of content",
                        "words ranges between 1.01 and 1.11 per sentence",
                        "Since the task is online adaptation - specifically focus on few-shot learning:",
                        "Consider only first and second occurrences of words!"
                    ],
                    "page_nums": [
                        11,
                        12,
                        13,
                        14
                    ],
                    "images": []
                },
                "2": {
                    "title": "One Shot Recall R1",
                    "text": [
                        "After seeing a word exactly once before in a reference/confirmed translation, is it correctly produced the second time around?",
                        "Hi Content words in the hypothesis i th example",
                        "R1,i Content inthe reference words for whose",
                        "i th example second occurrence is"
                    ],
                    "page_nums": [
                        15,
                        16
                    ],
                    "images": []
                },
                "3": {
                    "title": "One Shot Recall R1 Example",
                    "text": [
                        "Source #1: Der Terrier beit die Frau",
                        "Hypothesis #1: The dog bites the lady",
                        "The terrier bites the woman",
                        "Source #2: Der Mann beit den Terrier",
                        "The man bites1 the terrier1"
                    ],
                    "page_nums": [
                        17,
                        18,
                        19,
                        20,
                        21,
                        22,
                        23,
                        24,
                        25
                    ],
                    "images": []
                },
                "4": {
                    "title": "Zero Shot Recall R0",
                    "text": [
                        "Not having seen a word before, is it still correctly produced? Is the system adapting",
                        "to the domain at hand?",
                        "Hi Content words in the hypothesis for i th example",
                        "R0,i Content thereference words for",
                        "i th that example occur for the first time in"
                    ],
                    "page_nums": [
                        26,
                        27
                    ],
                    "images": []
                },
                "5": {
                    "title": "Zero and One Shot Recall R01",
                    "text": [
                        "Hi Content words in the hypothesis for i th example",
                        "R0,i R1,i secondtime Content in the words reference that occur for",
                        "i for th the example first or"
                    ],
                    "page_nums": [
                        28
                    ],
                    "images": []
                },
                "6": {
                    "title": "Corpus Level Metric",
                    "text": [
                        "G: Corpus of |G| source, reference/confirmed seg-"
                    ],
                    "page_nums": [
                        29
                    ],
                    "images": []
                },
                "7": {
                    "title": "Complete Example",
                    "text": [
                        "Der Terrier beit die Frau",
                        "The dog bites the lady",
                        "The terrier0 bites0 the woman0",
                        "Source #2: Der Mann beit den Terrier",
                        "The terrier bites the man",
                        "The man0 bites1 the terrier1"
                    ],
                    "page_nums": [
                        30,
                        31,
                        32,
                        33,
                        34,
                        35,
                        36,
                        37,
                        38,
                        39,
                        40,
                        41
                    ],
                    "images": []
                },
                "8": {
                    "title": "Evaluation Adaptation Methods",
                    "text": [
                        "The task is online adaptation to the Autodesk data set [Zhechev, 2012]. The background model is an English-to-German Transformer, trained on about 100M segments.",
                        "Four methods for comparison: bias Add an additional bias to the output projection [Michel and Neubig, 2018] full Fine-tuning of all weights top Adapt top encoder/decoder layers only lasso Dynamic selection of adapted tensors with group lasso regularization [Wuebker"
                    ],
                    "page_nums": [
                        42,
                        43
                    ],
                    "images": []
                },
                "11": {
                    "title": "Conclusion",
                    "text": [
                        "Immediate adaptation performance is important for adaptive MT in CAT",
                        "We proposed three metrics for measuring immediate and possibly perceived adaptation performance",
                        "R1 for one-shot recall, quantifying pick up of new vocabulary",
                        "R0 for zero-shot recall, quantifying general domain adaptation performance",
                        "The combined metric R0+1",
                        "These metrics give a different signal than the MT metrics that are traditionally used",
                        "Zero-shot recall R0 suffers from unregularized adaptation!",
                        "Careful regularization can mitigate this effect, while retaining most of the one-shot recall R1"
                    ],
                    "page_nums": [
                        46,
                        47
                    ],
                    "images": []
                },
                "12": {
                    "title": "Bibliography I",
                    "text": [
                        "N. Bertoldi, P. Simianer, M. Cettolo, K. Waschle, M. Federico, and S. Riezler. Online adaptation to post-edits for phrase-based statistical machine translation. Machine",
                        "S. S. R. Kothur, R. Knowles, and P. Koehn. Document-level adaptation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine",
                        "P. Michel and G. Neubig. Extreme adaptation for personalized neural machine",
                        "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311318. Association for",
                        "A. Peris, L. Cebrian, and F. Casacuberta. Online learning for neural machine"
                    ],
                    "page_nums": [
                        48
                    ],
                    "images": []
                },
                "13": {
                    "title": "Bibliography II",
                    "text": [
                        "M. Turchi, M. Negri, M. A. Farajian, and M. Federico. Continuous learning from human post-edits for neural machine translation. The Prague Bulletin of",
                        "J. Wuebker, P. Simianer, and J. DeNero. Compact personalized models for neural machine translation. In Proceedings of the 2018 Conference on Empirical",
                        "Methods in Natural Language Processing, 2018.",
                        "V. Zhechev. Machine translation infrastructure and post-editing performance at autodesk. In AMTA 2012 workshop on post-editing technology and practice"
                    ],
                    "page_nums": [
                        49
                    ],
                    "images": []
                }
            },
            "paper_title": "Measuring Immediate Adaptation Performance for Neural Machine Translation",
            "paper_id": "1350",
            "paper": {
                "title": "Measuring Immediate Adaptation Performance for Neural Machine Translation",
                "abstract": "Incremental domain adaptation, in which a system learns from the correct output for each input immediately after making its prediction for that input, can dramatically improve system performance for interactive machine translation. Users of interactive systems are sensitive to the speed of adaptation and how often a system repeats mistakes, despite being corrected. Adaptation is most commonly assessed using corpus-level BLEU-or TERderived metrics that do not explicitly take adaptation speed into account. We find that these metrics often do not capture immediate adaptation effects, such as zero-shot and oneshot learning of domain-specific lexical items. To this end, we propose new metrics that directly evaluate immediate adaptation performance for machine translation. We use these metrics to choose the most suitable adaptation method from a range of different adaptation techniques for neural machine translation systems.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Incremental domain adaptation, or online adaptation, has been shown to improve statistical machine translation and especially neural machine translation (NMT) systems significantly (Turchi et al., 2017; Karimova et al., 2018) (inter-alia)."
                    },
                    {
                        "id": 1,
                        "string": "The natural use case is a computeraided translation (CAT) scenario, where a user and a machine translation system collaborate to translate a document."
                    },
                    {
                        "id": 2,
                        "string": "Each user translation is immediately used as a new training example to adapt the machine translation system to the specific document."
                    },
                    {
                        "id": 3,
                        "string": "Adaptation techniques for MT are typically evaluated by their corpus translation quality, but such evaluations may not capture prominent aspects of the user experience in a collaborative translation scenario."
                    },
                    {
                        "id": 4,
                        "string": "This paper focuses on directly measuring the speed of lexical acquisition for in-domain vocabulary."
                    },
                    {
                        "id": 5,
                        "string": "To that end, we propose three related metrics that are designed to reflect the responsiveness of adaptation."
                    },
                    {
                        "id": 6,
                        "string": "An ideal system would immediately acquire indomain lexical items upon observing their translations."
                    },
                    {
                        "id": 7,
                        "string": "Moreover, one might expect a neural system to generalize from one corrected translation to related terms."
                    },
                    {
                        "id": 8,
                        "string": "Once a user translates \"bank\" to German \"Bank\" (institution) instead of \"Ufer\" (shore) in a document, the system should also correctly translate \"banks\" to \"Banken\" instead of \"Ufer\" (the plural is identical to the singular in German) in future sentences."
                    },
                    {
                        "id": 9,
                        "string": "We measure both one-shot vocabulary acquisition for terms that have appeared once in a previous target sentence, as well as zeroshot vocabulary acquisition for terms that have not previously appeared."
                    },
                    {
                        "id": 10,
                        "string": "Our experimental evaluation shows some surprising results."
                    },
                    {
                        "id": 11,
                        "string": "Methods that appear to have comparable performance using corpus quality metrics such as BLEU can differ substantially in zero-shot and one-shot vocabulary acquisition."
                    },
                    {
                        "id": 12,
                        "string": "In addition, we find that fine-tuning a neural model tends to improve one-shot vocabulary recall while degrading zero-shot vocabulary recall."
                    },
                    {
                        "id": 13,
                        "string": "We evaluate several adaptation techniques on a range of online adaptation datasets."
                    },
                    {
                        "id": 14,
                        "string": "Fine tuning applied to all parameters in the NMT model maximizes one-shot acquisition, but shows a worrisome degradation in zero-shot recall."
                    },
                    {
                        "id": 15,
                        "string": "By contrast, fine tuning with group lasso regularization, a technique recently proposed to improve the space efficiency of adapted models (Wuebker et al., 2018) , achieves an appealing balance of zero-shot and one-shot vocabulary acquisition as well as high corpus-level translation quality."
                    },
                    {
                        "id": 16,
                        "string": "Measuring Immediate Adaptation Motivation For interactive, adaptive machine translation systems, perceived adaptation performance is a crucial property: An error in the machine translation output which needs to be corrected multiple times can cause frustration, and thus may compromise acceptance of the MT system by human users."
                    },
                    {
                        "id": 17,
                        "string": "A class of errors that are particularly salient are lexical choice errors for domain-specific lexical items."
                    },
                    {
                        "id": 18,
                        "string": "In the extreme, NMT systems using subword modeling (Sennrich et al., 2015) can generate \"hallucinated\" words-words that do not exist in the target language-which are especially irritating for users (Lee et al., 2018; Koehn and Knowles, 2017) ."
                    },
                    {
                        "id": 19,
                        "string": "Users of adaptive MT have a reasonable expectation that in-domain vocabulary will be translated correctly after the translation of a term or some related term has been corrected manually."
                    },
                    {
                        "id": 20,
                        "string": "Arguably, more subtle errors, referring to syntax, word order or more general semantics are less of a focus for immediate adaptation, as these types of errors are also harder to pinpoint and thus to evaluate 1 (Bentivogli et al., 2016) ."
                    },
                    {
                        "id": 21,
                        "string": "Traditional metrics for evaluating machine translation outputs, e.g."
                    },
                    {
                        "id": 22,
                        "string": "BLEU and TER, in essence try to measure the similarity of a hypothesized translation to one or more reference translations, taking the full string into account."
                    },
                    {
                        "id": 23,
                        "string": "Due to significant improvements in MT quality with neural models (Bentivogli et al., 2016 ) (interalia), more specialized metrics, evaluating certain desired behaviors of systems become more useful for specific tasks."
                    },
                    {
                        "id": 24,
                        "string": "For example, Wuebker et al."
                    },
                    {
                        "id": 25,
                        "string": "(2016) show, that NMT models, while being better in most respects, still fall short in the handling of content words in comparison with phrase-based MT."
                    },
                    {
                        "id": 26,
                        "string": "This observation is also supported by Bentivogli et al."
                    },
                    {
                        "id": 27,
                        "string": "(2016) , who show smaller gains for NMT for translation of nouns, an important category of content words."
                    },
                    {
                        "id": 28,
                        "string": "Another reason to isolate vocabulary acquisition as an evaluation criterion is that interactive translation often employs local adaptation via prefix-decoding (Knowles and Koehn, 2016; Wuebker et al., 2016) , which can allow the system to recover syntactic structure or resolve local am-biguities when given a prefix, but may still suffer from poor handling of unknown or domainspecific vocabulary."
                    },
                    {
                        "id": 29,
                        "string": "In this work, we therefore focus on translation performance with respect to content words, setting word order and other aspects aside."
                    },
                    {
                        "id": 30,
                        "string": "Metrics We propose three metrics: one to directly measure one-shot vocabulary acquisition, one to measure zero-shot vocabulary acquisition, and one to measure both."
                    },
                    {
                        "id": 31,
                        "string": "In all three, we measure the recall of target-language content words so that the metrics can be computed automatically by comparing translation hypotheses to reference translations without the use of models or word alignments 2 ."
                    },
                    {
                        "id": 32,
                        "string": "We define content words as those words that are not included in a fixed stopword list, as used for example in query simplification for information retrieval."
                    },
                    {
                        "id": 33,
                        "string": "Such lists are typically compiled manually and are available for many languages."
                    },
                    {
                        "id": 34,
                        "string": "3 For western languages, content words are mostly nouns, main verbs, adjectives or adverbs."
                    },
                    {
                        "id": 35,
                        "string": "For the i-th pair of source sentence and reference translation, i = 1, ..., |G|, of an ordered test corpus G, we define two sets R_{0,i} and R_{1,i} that are subsets of the whole set of unique content words (i.e."
                    },
                    {
                        "id": 39,
                        "string": "types) of the reference translation for i. R 0,i includes a word if its first occurrence in the test set is in the i-th reference of G, and R 1,i if its second occurrence in the test set is in the i-th reference of G. The union R 0,i ∪ R 1,i includes content words occurring for either the first or second time."
                    },
                    {
                        "id": 40,
                        "string": "To measure zero-shot adaptation in a given hypothesis H i , also represented as a set of its content words, we propose to evaluate the number of word types that were immediately translated correctly: R0 = |H i ∩ R 0,i | |R 0,i | ."
                    },
                    {
                        "id": 41,
                        "string": "To measure one-shot adaptation, where the system correctly produces a content word after ob-2 In each of the data sets considered in this work, the average number of occurrences of content words ranges between 1.01 and 1.11 per sentence."
                    },
                    {
                        "id": 42,
                        "string": "We find this sufficiently close to 1 to evaluate in a bag-of-words fashion and not consider alignments."
                    },
                    {
                        "id": 43,
                        "string": "3 For German we used the list available here: https://github.com/stopwords-iso."
                    },
                    {
                        "id": 44,
                        "string": "4 All proposed metrics operate on the set-level, without clipping (Papineni et al., 2002) or alignment (Banerjee and Lavie, 2005; Kothur et al., 2018) , as we have found this simplification effective."
                    },
                    {
                        "id": 45,
                        "string": "1/1 2/2 3/3 Total 2/4 2/2 4/6 Figure 1 : Example for calculating R0, R1, and R0+1 on a corpus of two sentences."
                    },
                    {
                        "id": 46,
                        "string": "Content words are written in brackets, the corpus-level score is given below the per-segment scores."
                    },
                    {
                        "id": 47,
                        "string": "In the example, the denominator for R1 is 2 due to the two repeated words dog and bites in the reference."
                    },
                    {
                        "id": 48,
                        "string": "serving it exactly once, we propose: R1 = |H i ∩ R 1,i | |R 1,i | ."
                    },
                    {
                        "id": 49,
                        "string": "This principle can be extended to define metrics Rk, k > 1 to allow more \"slack\" in the adaptation, but we leave that investigation to future work."
                    },
                    {
                        "id": 50,
                        "string": "Finally, we define a metric that measures both zero-and one-shot adaptation: R0+1 = |H i ∩ [R 0,i ∪ R 1,i ] | |R 0,i ∪ R 1,i | ."
                    },
                    {
                        "id": 51,
                        "string": "All metrics can either be calculated for single sentences as described above, or for a full test corpus by summing over all sentences, e.g."
                    },
                    {
                        "id": 52,
                        "string": "for R0: Figure 1 gives an example calculation of all three metrics across a two-sentence corpus."
                    },
                    {
                        "id": 53,
                        "string": "|G| i=1 |H i ∩ R 0,i | |G| i=1 |R 0,i | ."
                    },
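A minimal re-implementation of the corpus-level R0, R1, and R0+1 metrics from the definitions above. Function names are my own; the stopword set is passed in by the caller (the paper uses a fixed list such as stopwords-iso), and sentences are assumed to be pre-tokenized.

```python
def content_words(tokens, stopwords):
    """Unique content words (types) of a tokenized sentence."""
    return {t.lower() for t in tokens} - stopwords

def recall_metrics(hypotheses, references, stopwords):
    """hypotheses, references: ordered, parallel lists of token lists."""
    seen = {}  # content word -> number of earlier references containing it
    num = {"R0": 0, "R1": 0, "R0+1": 0}
    den = {"R0": 0, "R1": 0, "R0+1": 0}
    for hyp, ref in zip(hypotheses, references):
        H = content_words(hyp, stopwords)
        R = content_words(ref, stopwords)
        R0 = {w for w in R if seen.get(w, 0) == 0}  # first occurrence is in this reference
        R1 = {w for w in R if seen.get(w, 0) == 1}  # second occurrence is in this reference
        for name, S in (("R0", R0), ("R1", R1), ("R0+1", R0 | R1)):
            num[name] += len(H & S)
            den[name] += len(S)
        for w in R:  # update occurrence counts only after scoring this segment
            seen[w] = seen.get(w, 0) + 1
    return {k: (num[k] / den[k] if den[k] else 0.0) for k in num}
```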
                    {
                        "id": 54,
                        "string": "Related Work An important line of related work is concerned with estimating the potential adaptability of a system given a source text only, the so-called repetition rate (Cettolo et al., 2014) ."
                    },
                    {
                        "id": 55,
                        "string": "The metric is inspired by BLEU, and uses a sliding window over the source text to count singleton N -grams."
                    },
                    {
                        "id": 56,
                        "string": "The modus operandi for our metrics is most similar to HTER (Snover et al., 2006) , since we are also assuming a single, targeted reference translation 5 for evaluation."
                    },
                    {
                        "id": 57,
                        "string": "The introduction of NMT brought more aspects of translation quality evaluation into focus, such as discourse-level evaluation (Bawden et al., 2017) , or very fine-grained evaluation of specific aspects of the translations (Bentivogli et al., 2016) , highlighting the differences between phrase-based and NMT systems."
                    },
                    {
                        "id": 58,
                        "string": "Online adaptation for (neural) machine translation has been thoroughly explored using BLEU (Turchi et al., 2017) , simulated keystroke and mouse action ratio (Barrachina et al., 2009 ) for effort estimation (Peris and Casacuberta, 2018) , word prediction accuracy (Wuebker et al., 2016) , and user studies (Denkowski et al., 2014; Karimova et al., 2018 ) (all inter-alia)."
                    },
                    {
                        "id": 59,
                        "string": "In (Simianer et al., 2016) immediate adaptation for hierarchical phrase-based MT is specifically investigated, but they also evaluate their systems using humantargeted BLEU and TER."
                    },
                    {
                        "id": 60,
                        "string": "Regularization for segment-wise continued training in NMT has been explored by  by means of knowledge distillation, and with the group lasso by Wuebker et al."
                    },
                    {
                        "id": 61,
                        "string": "(2018) , as used in this paper."
                    },
                    {
                        "id": 62,
                        "string": "Most relevant to our work, in the context of document-level adaptation, Kothur et al."
                    },
                    {
                        "id": 63,
                        "string": "(2018) calculate accuracy for novel words based on an automatic word alignment."
                    },
                    {
                        "id": 64,
                        "string": "However, they do not focus on zero-and one-shot matches, but instead aggregate counts over the full corpus."
                    },
                    {
                        "id": 65,
                        "string": "Online Adaptation NMT systems can be readily adapted by finetuning (also called continued training) with the same cross-entropy loss (L) as used for training the parameters of the baseline system, which also serves as the starting point for adaptation (Luong and Manning, 2015) ."
                    },
                    {
                        "id": 66,
                        "string": "Following Turchi et al."
                    },
                    {
                        "id": 67,
                        "string": "(2017) , we perform learning from each example i using (stochastic) gradient descent, using the current source x i and reference translation y i as a batch of size 1: θ i ← θ i−1 − γ∇L(θ i−1 , x i , y i )."
                    },
                    {
                        "id": 68,
                        "string": "(1) Evaluation is carried out using simulated postediting (Hardt and Elming, 2010) , first translating the source using the model with parameters θ i−1 , before performing the update described above with the now revealed reference translation."
                    },
                    {
                        "id": 69,
                        "string": "The machine translation system effectively only trains for a single iteration for any given data set."
                    },
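A minimal sketch of the simulated post-editing loop of equation (1), assuming helper functions `translate` and `nll_loss` (illustrative names, not a real library API); the learning rate default follows the γ = 10^-2 reported later in the text.

```python
import torch

def online_adapt(model, corpus, translate, nll_loss, gamma=1e-2):
    """Simulated post-editing: translate each source with the current
    parameters theta_{i-1}, then take one SGD step on the revealed
    reference, i.e. theta_i = theta_{i-1} - gamma * grad L(theta_{i-1}, x_i, y_i)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=gamma)
    hypotheses = []
    for x, y in corpus:                            # stream of (source, reference) pairs
        hypotheses.append(translate(model, x))     # hypothesis produced before seeing y
        optimizer.zero_grad()
        nll_loss(model, x, y).backward()           # cross-entropy on the single pair
        optimizer.step()                           # batch size 1, one pass over the data
    return hypotheses
```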
                    {
                        "id": 70,
                        "string": "The naïve approach, updating all parameters θ of the NMT model, while being effective, can be infeasible in certain settings 6 , since tens of millions of parameters are updated depending on the respective model."
                    },
                    {
                        "id": 71,
                        "string": "While some areas of a typical NMT model can be stored in a sparse fashion without loss (source-and target embeddings), large parts of the model cannot."
                    },
                    {
                        "id": 72,
                        "string": "We denote this type of adaptation as full."
                    },
                    {
                        "id": 73,
                        "string": "A light-weight alternative to adaptation of the full parameter set is to introduce a second bias term in the final output layer of the NMT model, which is trained in isolation, freezing the rest of the model (Michel and Neubig, 2018) ."
                    },
                    {
                        "id": 74,
                        "string": "This merely introduces a vector in the size of the output vocabulary."
                    },
                    {
                        "id": 75,
                        "string": "This method is referred to as bias."
                    },
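A PyTorch sketch of the bias method just described: the whole model is frozen and only an extra bias over the output vocabulary is trained. The wrapper name is hypothetical, and the wrapped model is assumed to return logits of shape (..., vocab_size).

```python
import torch
import torch.nn as nn

class OutputBiasAdapter(nn.Module):
    """Freeze the NMT model and learn only a second bias term in the
    final output layer, added to the logits (after Michel and Neubig, 2018)."""
    def __init__(self, nmt_model, vocab_size):
        super().__init__()
        self.nmt = nmt_model
        for p in self.nmt.parameters():
            p.requires_grad_(False)              # freeze all original parameters
        self.extra_bias = nn.Parameter(torch.zeros(vocab_size))  # the only adapted tensor

    def forward(self, *args, **kwargs):
        logits = self.nmt(*args, **kwargs)       # assumed to return (..., vocab_size) logits
        return logits + self.extra_bias
```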
                    {
                        "id": 76,
                        "string": "Another alternative is freezing parts of the model , for example determining a subset of parameters by performance on a held-out set (Wuebker et al., 2018) ."
                    },
                    {
                        "id": 77,
                        "string": "In our experiments we use two systems using this method, fixed and top, the former being a pre-determined fixed selection of parameters, and the latter being the topmost encoder and decoder layers in the Transformer NMT model (Vaswani et al., 2017) ."
                    },
                    {
                        "id": 78,
                        "string": "Finally, a data-driven alternative to the fixed freezing method was introduced to NMT by Wuebker et al."
                    },
                    {
                        "id": 79,
                        "string": "(2018) , implementing tensor-wise 1 / 2 group lasso regularization, allowing the learning procedure to select a fixed number of parameters after each update."
                    },
                    {
                        "id": 80,
                        "string": "This setup is referred to as lasso."
                    },
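A simplified sketch of a tensor-wise ℓ1/ℓ2 group lasso penalty on the adaptation update: each parameter tensor forms one group, and the penalty sums (ℓ1) the ℓ2 norms of the per-tensor differences from the unadapted baseline, so whole tensors are encouraged to stay exactly at their baseline values. This is a plain regularizer form for illustration; the proximal-step and parameter-budget machinery of Wuebker et al. (2018) is omitted.

```python
import torch

def group_lasso_penalty(model, baseline, lam=1e-3):
    """Sum of per-tensor l2 norms of the update (group = tensor).

    baseline: dict mapping parameter names to the unadapted tensors.
    """
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + torch.norm(p - baseline[name])  # l2 norm of this group's update
    return lam * penalty  # add to the cross-entropy loss before backward()
```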
                    {
                        "id": 81,
                        "string": "Experiments Neural Machine Translation Systems We adapt an English→German NMT system based on the Transformer architecture trained with an in-house NMT framework on about 100M bilingual sentence pairs."
                    },
                    {
                        "id": 82,
                        "string": "The model has six layers in the encoder, three layers in the decoder, each with eight attention heads with dimensionality 256, distinct input and output embeddings, and vocabulary sizes of around 40,000."
                    },
                    {
                        "id": 83,
                        "string": "The vocabularies are generated with byte-pair encoding (Sennrich et al., 2015) ."
                    },
                    {
                        "id": 84,
                        "string": "For adaptation we use a learning rate γ of 10 −2 (for the bias adaptation a learn- ing rate of 1.0 is used), no dropout, and no labelsmoothing."
                    },
                    {
                        "id": 85,
                        "string": "We use a tensor-wise 2 normalization to 1.0 for all gradients (gradient clipping)."
                    },
                    {
                        "id": 86,
                        "string": "Updates for a sentence pair are repeated until the perplexity on that sentence pair is ≤ 2.0, for a maximum of three repetitions."
                    },
                    {
                        "id": 87,
                        "string": "The fixed adaptation scheme, which involves selecting a subset of parameters on held-out data following Wuebker et al."
                    },
                    {
                        "id": 88,
                        "string": "(2018) , uses about two million parameters excluding all embedding matrices, in addition to potentially the full source embeddings, but in practice this is limited to about 1M parameters."
                    },
                    {
                        "id": 89,
                        "string": "The top scheme only adapts the top layers for both encoder and decoder."
                    },
                    {
                        "id": 90,
                        "string": "For the lasso adaptation, we allow 1M parameters excluding the embeddings, for which we allow 1M parameters in total selected from all embedding matrices."
                    },
                    {
                        "id": 91,
                        "string": "This scheme also always includes the previously described second bias term in the final output layer."
                    },
                    {
                        "id": 92,
                        "string": "Since the proposed metrics operate on words, the machine translation outputs are first converted to full-form words using sentencepiece (Kudo and Richardson, 2018) , then tokenized and truecased with the tokenizer and truecaser distributed with the Moses toolkit (Koehn et al., 2007) ."
                    },
                    {
                        "id": 93,
                        "string": "Results Tables 1 and 2 show the performance of different adaptation techniques on the Autodesk dataset (Zhechev, 2012) , a public post-editing software domain dataset for which incremental adaptation is known to provide large gains for corpus-level metrics."
                    },
                    {
                        "id": 94,
                        "string": "BLEU, sentence BLEU, and TER scores (Table 1) are similar for full adaptation, sparse adaptation with group lasso, and adaptation of a fixed subset of parameters."
                    },
                    {
                        "id": 95,
                        "string": "However (in Table 2 lasso substantially outperforms the other methods in zero-shot (R0), and combined zero-and oneshot recall of content words (R0+1)."
                    },
                    {
                        "id": 96,
                        "string": "Zero-shot recall is considerably degraded relative to the non-adapted baseline for both full and adaptation of a fixed subset of tensors (fixed and top)."
                    },
                    {
                        "id": 97,
                        "string": "That is, terms never observed before during online training are translated correctly less often than they would be with an unadapted system, despite the data set's consistent domain."
                    },
                    {
                        "id": 98,
                        "string": "These approaches trade off long-term gains in BLEU and high one-shot recall for low zero-shot recall, which could be frustrating for users who may perceive the degradation in quality for terms appearing for the first time in a document."
                    },
                    {
                        "id": 99,
                        "string": "The lasso technique is the only one that shows an improvement in R0 over the baseline."
                    },
                    {
                        "id": 100,
                        "string": "However, lasso has considerably lower one-shot recall compared to the other adaptation methods, implying that it often must observe a translated term more than once to acquire it."
                    },
                    {
                        "id": 101,
                        "string": "Appendix A shows similar experiments for several other datasets."
                    },
                    {
                        "id": 102,
                        "string": "Analysis For a better understanding of the results described in the previous section, we conduct an analysis varying the units of the proposed metrics, while focusing on full and lasso adaptation."
                    },
                    {
                        "id": 103,
                        "string": "For the first variant, only truly novel words are taken into account, i.e."
                    },
                    {
                        "id": 104,
                        "string": "words in the test set that do not appear in the training data."
                    },
                    {
                        "id": 105,
                        "string": "Results for these experiments are depicted in Table 3 ."
                    },
                    {
                        "id": 106,
                        "string": "It is apparent that the findings of Table 2 are confirmed, and that relative differences are amplified."
                    },
                    {
                        "id": 107,
                        "string": "This can be explained by the reduced number of total occurrences considered, which is only 310 words in this data set."
                    },
                    {
                        "id": 108,
                        "string": "It is also important to note that all of these   words are made up of known subwords 7 , since our NMT system does not include a copying mechanism and is thus constrained to the target vocabulary."
                    },
                    {
                        "id": 109,
                        "string": "Further results using the raw subword output 8 of the MT systems are depicted in Table 4 : R0 for the lasso method is degraded only slightly below the baseline (-1%, compared to +2% for the regular metric), the findings for R1 and R0+1 remain the same as observed before."
                    },
                    {
                        "id": 110,
                        "string": "Compared to the results for novel words this indicates that the improvement in terms of R0 for lasso mostly come from learning new combinations of subwords."
                    },
                    {
                        "id": 111,
                        "string": "A discussion of the adaptation behavior over time, with exemplified differences between the metrics, can be found in Appendix B."
                    },
                    {
                        "id": 112,
                        "string": "Conclusions To summarize: In some cases, the strong gains in corpus-level translation quality achieved by fine tuning an NMT model come at the expense of zero-shot recall of content words."
                    },
                    {
                        "id": 113,
                        "string": "This concerning impact of adaptation could affect practical user experience."
                    },
                    {
                        "id": 114,
                        "string": "Existing regularization methods mitigate this effect to some degree, but there may be more effective techniques for immediate adaptation that have yet to be developed."
                    },
                    {
                        "id": 115,
                        "string": "The proposed metrics R0, R1, and R0+1 are useful for measuring immediate adaptation performance, which is crucial in adaptive CAT systems."
                    },
                    {
                        "id": 116,
                        "string": "Table 5 contains results for additional English→German datasets, namely patents (Wäschle and Riezler, 2012) (Patent), transcribed public speeches (Cettolo et al., 2012) (TED), and two proprietary user data sets, one from the financial domain (User 1) and the other being technical documentation (User 2)."
                    },
                    {
                        "id": 117,
                        "string": "The same pattern is observed in almost all cases: lasso outperforms the other adaptation techniques in zero-shot recall (R0) and combined recall (R0+1), while full has the highest one-shot recall (R1) on two out of five test sets, being close runner-up to lasso on all others."
                    },
                    {
                        "id": 118,
                        "string": "Overall however, we observe that zero-shot recall R0 is degraded by adaptation, while one-shot recall is improved."
                    },
                    {
                        "id": 119,
                        "string": "We also find that adaptation with the light-weight bias method often does not deviate much from the baseline."
                    },
                    {
                        "id": 120,
                        "string": "In contrast, the results for the traditional MT metrics are predominantly positive."
                    },
                    {
                        "id": 121,
                        "string": "For adaptation, the lasso method provides the best tradeoff in terms of performance throughout the considered metrics."
                    },
                    {
                        "id": 122,
                        "string": "A Additional Results B Learning Curves We are also interested in the behavior of the adaptation methods over time."
                    },
                    {
                        "id": 123,
                        "string": "To this end, in Figure 2 , we plot the difference in cumulative scores 9 of two adapted systems (full and lasso) to the baseline for the proposed metrics as well as the BLEU score."
                    },
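The learning curves in Figure 2 are differences of cumulative per-segment scores. A minimal matplotlib sketch (not the original plotting code):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_cumulative_difference(adapted_scores, baseline_scores, label):
    """Plot the running difference in cumulative per-segment scores
    between an adapted system and the baseline, as in Figure 2."""
    diff = np.cumsum(adapted_scores) - np.cumsum(baseline_scores)
    plt.plot(np.arange(1, len(diff) + 1), diff, label=label)
    plt.xlabel("segment")
    plt.ylabel("cumulative score difference vs. baseline")
    plt.legend()
```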
                    {
                        "id": 124,
                        "string": "As evident from comparing the curves for BLEU and R0, the BLEU score and the proposed metric give disparate signals for this data."
                    },
                    {
                        "id": 125,
                        "string": "Specifically, there are two distinct dips in the curves for R0 (as well as R0+1) and BLEU: 1."
                    },
                    {
                        "id": 126,
                        "string": "The degradation in R0 around segment 800 is due to significant noise in segment 774, which has a strong impact on the adapted systems, while the baseline system is not affected."
                    },
                    {
                        "id": 127,
                        "string": "The full system's score drops by about 50% at segment 775 (i.e."
                    },
                    {
                        "id": 128,
                        "string": "after adaptation) relative to the cumulative score difference at the previous segment and never recovers after that."
                    },
                    {
                        "id": 129,
                        "string": "2."
                    },
                    {
                        "id": 130,
                        "string": "The dip in the BLEU score at segment 752, observable for both adapted systems, depicting a relative degradation of about 10%, is due to a pathological repetition of a single character in the output of the adapted MT systems for this segment, which has a large impact on the score."
                    },
                    {
                        "id": 131,
                        "string": "The dip observed with R0 is also noticeable in BLEU, but much less pronounced at 4% relative for full and 2% relative for lasso."
                    },
                    {
                        "id": 132,
                        "string": "The dip in BLEU on the other hand is not noticeable with R0, R1, or R0+1."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 15
                    },
                    {
                        "section": "Motivation",
                        "n": "2.1",
                        "start": 16,
                        "end": 29
                    },
                    {
                        "section": "Metrics",
                        "n": "2.2",
                        "start": 30,
                        "end": 53
                    },
                    {
                        "section": "Related Work",
                        "n": "3",
                        "start": 54,
                        "end": 64
                    },
                    {
                        "section": "Online Adaptation",
                        "n": "4",
                        "start": 65,
                        "end": 80
                    },
                    {
                        "section": "Neural Machine Translation Systems",
                        "n": "5.1",
                        "start": 81,
                        "end": 92
                    },
                    {
                        "section": "Results",
                        "n": "5.2",
                        "start": 93,
                        "end": 101
                    },
                    {
                        "section": "Analysis",
                        "n": "5.3",
                        "start": 102,
                        "end": 111
                    },
                    {
                        "section": "Conclusions",
                        "n": "6",
                        "start": 112,
                        "end": 132
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1350-Figure1-1.png",
                        "caption": "Figure 1: Example for calculating R0, R1, and R0+1 on a corpus of two sentences. Content words are written in brackets, the corpus-level score is given below the per-segment scores. In the example, the denominator for R1 is 2 due to the two repeated words dog and bites in the reference.",
                        "page": 2,
                        "bbox": {
                            "x1": 110.88,
                            "x2": 486.24,
                            "y1": 61.44,
                            "y2": 124.32
                        }
                    },
                    {
                        "filename": "../figure/image/1350-Table5-1.png",
                        "caption": "Table 5: BLEU, sentence-wise BLEU, TER, R0+1, R0, and R1 metrics for a number of data sets, comparing different adaptation methods as described in Section 4. Baseline results are given as absolute scores, results for adaptation are given as relative differences. Best viewed in color.",
                        "page": 7,
                        "bbox": {
                            "x1": 160.79999999999998,
                            "x2": 436.32,
                            "y1": 121.44,
                            "y2": 653.28
                        }
                    },
                    {
                        "filename": "../figure/image/1350-Table1-1.png",
                        "caption": "Table 1: Results on the Autodesk test set for traditional MT quality metrics. SBLEU refers to an average of sentence-wise BLEU scores as described by Nakov et al. (2012). The best result in each column is denoted with bold font.",
                        "page": 3,
                        "bbox": {
                            "x1": 331.68,
                            "x2": 501.12,
                            "y1": 62.4,
                            "y2": 175.2
                        }
                    },
                    {
                        "filename": "../figure/image/1350-Figure2-1.png",
                        "caption": "Figure 2: Differences in cumulative scores for R0 (top left), R1 (top right), R0+1 (bottom left), and BLEU (bottom right) to the baseline system on the Autodesk test set for full and lasso adaptation. The peculiarities discussed in the running text are marked by solid vertical lines (at x = 751 and x = 774).",
                        "page": 8,
                        "bbox": {
                            "x1": 95.52,
                            "x2": 502.08,
                            "y1": 198.72,
                            "y2": 578.88
                        }
                    },
                    {
                        "filename": "../figure/image/1350-Table4-1.png",
                        "caption": "Table 4: Results on Autodesk data calculating the metrics with subwords.",
                        "page": 4,
                        "bbox": {
                            "x1": 342.71999999999997,
                            "x2": 490.08,
                            "y1": 191.51999999999998,
                            "y2": 264.0
                        }
                    },
                    {
                        "filename": "../figure/image/1350-Table2-1.png",
                        "caption": "Table 2: Results on the Autodesk test set for the proposed metrics R0, R1, and R0+1.",
                        "page": 4,
                        "bbox": {
                            "x1": 106.56,
                            "x2": 255.35999999999999,
                            "y1": 62.4,
                            "y2": 175.2
                        }
                    },
                    {
                        "filename": "../figure/image/1350-Table3-1.png",
                        "caption": "Table 3: Results on Autodesk data calculating the metrics only for truly novel content words, i.e. ones that do not occur in the training data.",
                        "page": 4,
                        "bbox": {
                            "x1": 342.71999999999997,
                            "x2": 490.08,
                            "y1": 62.4,
                            "y2": 135.35999999999999
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-79"
        },
        {
            "slides": {
                "0": {
                    "title": "Current systems",
                    "text": [
                        "Spanish text ola mi nombre es hodor",
                        "English text: hi my name is hodor Machine"
                    ],
                    "page_nums": [
                        1,
                        2,
                        3
                    ],
                    "images": []
                },
                "1": {
                    "title": "Unwritten languages",
                    "text": [
                        "Bantu language, Republic of Congo, ~160K speakers",
                        "~3000 languages with no writing system",
                        "Mboshi text: not available Recognition",
                        "paired with French translations (Godard et al. 2018)",
                        "Efforts to collect speech and translations using mobile apps"
                    ],
                    "page_nums": [
                        5,
                        6
                    ],
                    "images": []
                },
                "2": {
                    "title": "Haiti Earthquake 2010",
                    "text": [
                        "Survivors sent text messages to helpline",
                        "International rescue teams face language barrier",
                        "No automated tools available",
                        "Volunteers from global Haitian diaspora help create parallel text corpora in short time"
                    ],
                    "page_nums": [
                        7
                    ],
                    "images": []
                },
                "3": {
                    "title": "Are we better prepared in 2019",
                    "text": [
                        "Moun kwense nan Sakre",
                        "People trapped in Sacred"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "4": {
                    "title": "Can we build a speech to text translation ST system",
                    "text": [
                        "given as training data:",
                        "Tens of hours of speech paired with text translations",
                        "No source text available"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "5": {
                    "title": "Neural models",
                    "text": [
                        "Sequence-to-Sequence Weiss et al. (2017)",
                        "English text: hi my name is hodor"
                    ],
                    "page_nums": [
                        10
                    ],
                    "images": []
                },
                "6": {
                    "title": "Spanish speech to English text",
                    "text": [
                        "Encoder telephone speech (unscripted) realistic noise conditions multiple speakers and dialects crowdsourced English text translations",
                        "Closer to real-world conditions"
                    ],
                    "page_nums": [
                        11,
                        12
                    ],
                    "images": []
                },
                "7": {
                    "title": "But",
                    "text": [
                        "Poor performance in low-resource settings",
                        "# hours of training data (log scale)"
                    ],
                    "page_nums": [
                        13
                    ],
                    "images": []
                },
                "8": {
                    "title": "Why Spanish English",
                    "text": [
                        "simulate low-resource settings and test our method",
                        "Later: results on truly low-resource language ---"
                    ],
                    "page_nums": [
                        20,
                        21,
                        22
                    ],
                    "images": []
                },
                "10": {
                    "title": "Pretrain on high resource",
                    "text": [
                        "300 hours of English audio and text",
                        "Attention *train until convergence"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "11": {
                    "title": "Fine tune on low resource",
                    "text": [
                        "English audio Spanish audio",
                        "transfer from English ASR",
                        "English text English text",
                        "*train until convergence Attention"
                    ],
                    "page_nums": [
                        25,
                        26
                    ],
                    "images": []
                },
                "16": {
                    "title": "Ablation model parameters",
                    "text": [
                        "Spanish to English, N = 20 hours",
                        "+English ASR: encoder English text English text",
                        "+English ASR: decoder Decoder Decoder",
                        "transferring encoder only parameters works well!",
                        "can pretrain on a language different from both source and target in ST pair"
                    ],
                    "page_nums": [
                        35,
                        36,
                        37,
                        38,
                        39
                    ],
                    "images": []
                },
                "17": {
                    "title": "Pretraining on French",
                    "text": [
                        "Spanish to English, N = 20 hours",
                        "+English ASR: encoder Decoder Decoder",
                        "+French ASR: encoder French text English text",
                        "*only 20 hours of French ASR",
                        "French ASR helps Spanish-English ST"
                    ],
                    "page_nums": [
                        40,
                        41
                    ],
                    "images": []
                },
                "18": {
                    "title": "Takeaways",
                    "text": [
                        "Pretraining on a different language helps",
                        "transfer all model parameters for best gains",
                        "encoder parameters account for most of these",
                        "useful when target vocabulary is different"
                    ],
                    "page_nums": [
                        42
                    ],
                    "images": []
                },
                "19": {
                    "title": "Mboshi French ST",
                    "text": [
                        "ST data by Godard et al. 2018",
                        "~4 hours of speech, paired with French translations",
                        "Bantu language, Republic of Congo"
                    ],
                    "page_nums": [
                        43,
                        44
                    ],
                    "images": []
                },
                "20": {
                    "title": "Mboshi French Results",
                    "text": [
                        "Mboshi to French, N = 4 hours",
                        "*outperformed by a naive baseline"
                    ],
                    "page_nums": [
                        45,
                        46
                    ],
                    "images": []
                },
                "21": {
                    "title": "Pretraining on French ASR",
                    "text": [
                        "Mboshi to French, N = 4 hours",
                        "French text French text",
                        "French ASR helps Mboshi-French ST"
                    ],
                    "page_nums": [
                        47,
                        48,
                        49
                    ],
                    "images": []
                },
                "22": {
                    "title": "Pretraining on English ASR",
                    "text": [
                        "Mboshi to French, N = 4 hours",
                        "+English ASR: encoder Decoder Decoder",
                        "English text French text",
                        "using encoder trained on a lot more data",
                        "English ASR helps Mboshi-French ST",
                        "baseline Encoder From English ASR",
                        "+French ASR: all Attention",
                        "+French ASR: remaining French text",
                        "combining gives the best gains",
                        "BLEU score is still low but above naive baseline"
                    ],
                    "page_nums": [
                        50,
                        51,
                        57,
                        58,
                        59
                    ],
                    "images": []
                },
                "23": {
                    "title": "Pretraining on French and English ASR",
                    "text": [
                        "French text French text English text"
                    ],
                    "page_nums": [
                        54,
                        55,
                        56
                    ],
                    "images": []
                },
                "27": {
                    "title": "Why does pretraining help",
                    "text": [
                        "ASR data contains audio from 100s of speakers",
                        "Learning to factor out background noise (?)",
                        "BLEU Baseline +English ASR"
                    ],
                    "page_nums": [
                        64
                    ],
                    "images": []
                },
                "28": {
                    "title": "Spanish English ST",
                    "text": [
                        "*results on Fisher test set ...",
                        "Spanish to English, N = 20 hours",
                        "+En ASR: 20h English text"
                    ],
                    "page_nums": [
                        65,
                        66,
                        67
                    ],
                    "images": []
                },
                "29": {
                    "title": "Neural model",
                    "text": [
                        "yo vive en bronx",
                        "bi-LSTM 1 LSTM 2",
                        "bi-LSTM 2 LSTM 3"
                    ],
                    "page_nums": [
                        68,
                        69
                    ],
                    "images": []
                }
            },
            "paper_title": "Pre-training on High-Resource Speech Recognition Improves Low-Resource Speech-to-Text Translation",
            "paper_id": "1360",
            "paper": {
                "title": "Pre-training on High-Resource Speech Recognition Improves Low-Resource Speech-to-Text Translation",
                "abstract": "We present a simple approach to improve direct speech-to-text translation (ST) when the source language is low-resource: we pre-train the model on a high-resource automatic speech recognition (ASR) task, and then fine-tune its parameters for ST. We demonstrate that our approach is effective by pre-training on 300 hours of English ASR data to improve Spanish-English ST from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data are available. Through an ablation study, we find that the pre-trained encoder (acoustic model) accounts for most of the improvement, despite the fact that the shared language in these tasks is the target language text, not the source language audio. Applying this insight, we show that pre-training on ASR helps ST even when the ASR language differs from both source and target ST languages: pre-training on French ASR also improves Spanish-English ST. Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction Speech-to-text Translation (ST) has many potential applications for low-resource languages: for example in language documentation, where the source language is often unwritten or endangered (Besacier et al., 2006; Martin et al., 2015; Adams et al., 2016a,b; Anastasopoulos and Chiang, 2017) ; or in crisis relief, where emergency workers might need to respond to calls or requests in a foreign language (Munro, 2010) ."
                    },
                    {
                        "id": 1,
                        "string": "Traditional ST is a pipeline of automatic speech recognition (ASR) and machine translation (MT), and thus requires transcribed source audio to train ASR and parallel text to train MT."
                    },
                    {
                        "id": 2,
                        "string": "These resources are often unavailable for low-resource languages, but for our potential applications, there may be some source language audio paired with target language text translations."
                    },
                    {
                        "id": 3,
                        "string": "In these scenarios, end-to-end ST is appealing."
                    },
                    {
                        "id": 4,
                        "string": "Recently, Weiss et al."
                    },
                    {
                        "id": 5,
                        "string": "(2017) showed that endto-end ST can be very effective, achieving an impressive BLEU score of 47.3 on Spanish-English ST."
                    },
                    {
                        "id": 6,
                        "string": "But this result required over 150 hours of translated audio for training, still a substantial resource requirement."
                    },
                    {
                        "id": 7,
                        "string": "By comparison, a similar system trained on only 20 hours of data for the same task achieved a BLEU score of 5.3 (Bansal et al., 2018) ."
                    },
                    {
                        "id": 8,
                        "string": "Other low-resource systems have similarly low accuracies (Anastasopoulos and Chiang, 2018; Bérard et al., 2018) ."
                    },
                    {
                        "id": 9,
                        "string": "To improve end-to-end ST in low-resource settings, we can try to leverage other data resources."
                    },
                    {
                        "id": 10,
                        "string": "For example, if we have transcribed audio in the source language, we can use multi-task learning to improve ST (Anastasopoulos and Chiang, 2018; Weiss et al., 2017; Bérard et al., 2018) ."
                    },
                    {
                        "id": 11,
                        "string": "But source language transcriptions are unlikely to be available in our scenarios of interest."
                    },
                    {
                        "id": 12,
                        "string": "Could we improve low-resource ST by leveraging data from a high-resource language?"
                    },
                    {
                        "id": 13,
                        "string": "For ASR, training a single model on multiple languages can be effective for all of them (Toshniwal et al., 2018b; Deng et al., 2013) ."
                    },
                    {
                        "id": 14,
                        "string": "For MT, transfer learning (Thrun, 1995) has been very effective: pretraining a model for a high-resource language pair and transferring its parameters to a low-resource language pair when the target language is shared (Zoph et al., 2016; Johnson et al., 2017) ."
                    },
                    {
                        "id": 15,
                        "string": "Inspired by these successes, we show that low-resource ST can leverage transcribed audio in a high-resource target language, or even a different language altogether, simply by pre-training a model for the high-resource ASR task, and then transferring and fine-tuning some or all of the model's parameters for low-resource ST. We first test our approach using Spanish as the source language and English as the target."
                    },
                    {
                        "id": 16,
                        "string": "After training an ASR system on 300 hours of English, fine-tuning on 20 hours of Spanish-English yields a BLEU score of 20.2, compared to only 10.8 for an ST model without ASR pre-training."
                    },
                    {
                        "id": 17,
                        "string": "Analyzing this result, we discover that the main benefit of pre-training arises from the transfer of the encoder parameters, which model the input acoustic signal."
                    },
                    {
                        "id": 18,
                        "string": "In fact, this effect is so strong that we also obtain improvements by pre-training on a language that differs from both the source and the target: pre-training on French and fine-tuning on Spanish-English."
                    },
                    {
                        "id": 19,
                        "string": "We hypothesize that pre-training the encoder parameters, even on a different language, allows the model to better learn about linguistically meaningful phonetic variation while normalizing over acoustic variability such as speaker and channel differences."
                    },
                    {
                        "id": 20,
                        "string": "We conclude that the acousticphonetic learning problem, rather than translation itself, is one of the main difficulties in low-resource ST. A final set of experiments confirm that ASR pretraining also helps on another language pair where the input is truly low-resource: Mboshi-French."
                    },
                    {
                        "id": 21,
                        "string": "Method For both ASR and ST, we use an encoder-decoder model with attention adapted from Weiss et al."
                    },
                    {
                        "id": 22,
                        "string": "(2017), Bérard et al."
                    },
                    {
                        "id": 23,
                        "string": "(2018) and Bansal et al."
                    },
                    {
                        "id": 24,
                        "string": "(2018) , as shown in Figure 1 ."
                    },
                    {
                        "id": 25,
                        "string": "We use the same model architecture for all our models, allowing us to conveniently transfer parameters between them."
                    },
                    {
                        "id": 26,
                        "string": "We also constrain the hyper-parameter search to fit a model into a single Titan X GPU, allowing us to maximize available compute resources."
                    },
                    {
                        "id": 27,
                        "string": "We use a pre-trained English ASR model to initialize training of Spanish-English ST models, and a pre-trained French ASR model to initialize training of Mboshi-French ST models."
                    },
                    {
                        "id": 28,
                        "string": "During ST training, all model parameters are updated."
                    },
                    {
                        "id": 29,
                        "string": "In these configurations, the decoder shares the same vocabulary across the ASR and ST tasks."
                    },
                    {
                        "id": 30,
                        "string": "This is practical for settings where the target text language is highresource with ASR data available."
                    },
                    {
                        "id": 31,
                        "string": "In settings where both ST languages are lowresource, ASR data may only be available in a third language."
                    },
                    {
                        "id": 32,
                        "string": "To test whether transfer learning will help in this setting, we use a pre-trained French ASR model to train Spanish-English ST models; and English ASR for Mboshi-French models."
                    },
                    {
                        "id": 33,
                        "string": "In these cases, the ST languages are different from the ASR language, so we can only transfer the encoder parameters of the ASR model, since the dimensions of the decoder's output softmax layer are indexed by the vocabulary, which is not shared."
                    },
                    {
                        "id": 34,
                        "string": "1 Sharing only the speech encoder parameters is much easier, since the speech input can be preprocessed in the same manner for all languages."
                    },
                    {
                        "id": 35,
                        "string": "This form of transfer learning is more flexible, as there are no constraints on the ASR language used."
                    },
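A minimal PyTorch sketch of this encoder-only transfer (the paper's implementation is in Chainer, and the `encoder.` key prefix is an assumption about the checkpoint layout):

```python
import torch

def transfer_encoder(st_model, asr_checkpoint):
    """Initialize an ST model with the encoder (acoustic) parameters
    of a pre-trained ASR model, leaving decoder and attention at
    their random initialization. All parameters are then updated
    during ST training."""
    asr_state = torch.load(asr_checkpoint, map_location="cpu")
    encoder_state = {k: v for k, v in asr_state.items()
                     if k.startswith("encoder.")}
    # strict=False: decoder/attention keys are absent by design.
    st_model.load_state_dict(encoder_state, strict=False)
    return st_model
```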
                    {
                        "id": 36,
                        "string": "3 Experimental Setup 3.1 Data sets English ASR."
                    },
                    {
                        "id": 37,
                        "string": "We use the Switchboard Telephone speech corpus (Godfrey and Holliman, 1993) , which consists of around 300 hours of English speech and transcripts, split into 260k utterances."
                    },
                    {
                        "id": 38,
                        "string": "The development set consists of 5 hours that we removed from the training set, split into 4k utterances."
                    },
                    {
                        "id": 39,
                        "string": "French ASR."
                    },
                    {
                        "id": 40,
                        "string": "We use the French speech corpus from the GlobalPhone collection (Schultz, 2002) , which consists of around 20 hours of high quality read speech and transcripts, split into 9k utterances."
                    },
                    {
                        "id": 41,
                        "string": "The development set consists of 2 hours, split into 800 utterances."
                    },
                    {
                        "id": 42,
                        "string": "Spanish-English ST. We use the Fisher Spanish speech corpus (Graff et al., 2010) , which consists of 160 hours of telephone speech in a variety of Spanish dialects, split into 140K utterances."
                    },
                    {
                        "id": 43,
                        "string": "To simulate low-resource conditions, we construct smaller train-ing corpora consisting of 50, 20, 10, 5, or 2.5 hours of data, selected at random from the full training data."
                    },
                    {
                        "id": 44,
                        "string": "The development and test sets each consist of around 4.5 hours of speech, split into 4K utterances."
                    },
                    {
                        "id": 45,
                        "string": "We do not use the corresponding Spanish transcripts; our target text consists of English translations that were collected through crowdsourcing (Post et al., 2013 (Post et al., , 2014 ."
                    },
                    {
                        "id": 46,
                        "string": "Mboshi-French ST. Mboshi is a Bantu language spoken in the Republic of Congo, with around 160,000 speakers."
                    },
                    {
                        "id": 47,
                        "string": "2 We use the Mboshi-French parallel corpus (Godard et al., 2018) , which consists of around 4 hours of Mboshi speech, split into a training set of 5K utterances and a development set of 500 utterances."
                    },
                    {
                        "id": 48,
                        "string": "Since this corpus does not include a designated test set, we randomly sampled and removed 200 utterances from training to use as a development set, and use the designated development data as a test set."
                    },
                    {
                        "id": 49,
                        "string": "Preprocessing Speech."
                    },
                    {
                        "id": 50,
                        "string": "We convert raw speech input to 13dimensional MFCCs using Kaldi (Povey et al., 2011) ."
                    },
                    {
                        "id": 51,
                        "string": "3 We also perform speaker-level mean and variance normalization."
                    },
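Speaker-level mean and variance normalization amounts to standardizing each speaker's pooled MFCC frames. A numpy sketch (the paper computes the features with Kaldi, which provides this natively):

```python
import numpy as np

def speaker_cmvn(utts_by_speaker):
    """utts_by_speaker: dict speaker_id -> list of (num_frames, 13)
    MFCC arrays. Returns the same structure, normalized so that each
    speaker's pooled frames have zero mean and unit variance."""
    out = {}
    for spk, utts in utts_by_speaker.items():
        pooled = np.concatenate(utts, axis=0)
        mean, std = pooled.mean(axis=0), pooled.std(axis=0) + 1e-8
        out[spk] = [(u - mean) / std for u in utts]
    return out
```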
                    {
                        "id": 52,
                        "string": "Text."
                    },
                    {
                        "id": 53,
                        "string": "The target text of the Spanish-English data set contains 1.5M word tokens and 17K word types."
                    },
                    {
                        "id": 54,
                        "string": "If we model text as sequences of words, our model cannot produce any of the unseen word types in the test data and is penalized for this, but it can be trained very quickly (Bansal et al., 2018) ."
                    },
                    {
                        "id": 55,
                        "string": "If we instead model text as sequences of characters as done by Weiss et al."
                    },
                    {
                        "id": 56,
                        "string": "(2017) , we would have 7M tokens and 100 types, resulting in a model that is open-vocabulary, but very slow to train (Bansal et al., 2018) ."
                    },
                    {
                        "id": 57,
                        "string": "As an effective middle ground, we use byte pair encoding (BPE; Sennrich et al., 2016) to segment each word into subwords, each of which is a character or a high-frequency sequence of characters-we use 1000 of these high-frequency sequences."
                    },
                    {
                        "id": 58,
                        "string": "Since the set of subwords includes the full set of characters, the model is still open vocabulary; but it results in a text with only 1.9M tokens and just over 1K types, which can be trained almost as fast as the word-level model."
                    },
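As an illustration of this middle ground, the following sketch trains a roughly 1K-unit BPE segmentation with the sentencepiece library; the paper follows Sennrich et al. (2016), so this library choice and the file names are stand-ins:

```python
import sentencepiece as spm

# Train a ~1K-unit BPE model on the target-side training text
# ("train.en" is a hypothetical file name).
spm.SentencePieceTrainer.train(
    input="train.en", model_prefix="bpe_en",
    model_type="bpe", vocab_size=1000,
    character_coverage=1.0)  # keep all characters: open vocabulary

sp = spm.SentencePieceProcessor(model_file="bpe_en.model")
print(sp.encode_as_pieces("these resources are often unavailable"))
```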
                    {
                        "id": 59,
                        "string": "The vocabulary for BPE depends on the fre-quency of character sequences, so it must be computed with respect to a specific corpus."
                    },
                    {
                        "id": 60,
                        "string": "For English, we use the full 160-hour Spanish-English ST target training text."
                    },
                    {
                        "id": 61,
                        "string": "For French, we use the Mboshi-French ST target training text."
                    },
                    {
                        "id": 62,
                        "string": "Model architecture for ASR and ST Speech encoder."
                    },
                    {
                        "id": 63,
                        "string": "As shown schematically in Figure 1, MFCC feature vectors, extracted using a window size of 25 ms and a step size of 10ms, are fed into a stack of two CNN layers, with 128 and 512 filters with a filter width of 9 frames each."
                    },
                    {
                        "id": 64,
                        "string": "In each CNN layer we stride with a factor of 2 along time, apply a ReLU activation (Nair and Hinton, 2010) , and apply batch normalization (Ioffe and Szegedy, 2015) ."
                    },
                    {
                        "id": 65,
                        "string": "The output of the CNN layers is fed into a three-layer bi-directional long short term memory network (LSTM; Hochreiter and Schmidhuber, 1997); each hidden layer has 512 dimensions."
                    },
                    {
                        "id": 66,
                        "string": "Text decoder."
                    },
                    {
                        "id": 67,
                        "string": "At each time step, the decoder chooses the most probable token from the output of a softmax layer produced by a fully-connected layer, which in turn receives the current state of a recurrent layer computed from previous time steps and an attention vector computed over the input."
                    },
                    {
                        "id": 68,
                        "string": "Attention is computed using the global attentional model with general score function and inputfeeding, as described in Luong et al."
                    },
                    {
                        "id": 69,
                        "string": "(2015) ."
                    },
                    {
                        "id": 70,
                        "string": "The predicted token is then fed into a 128-dimensional embedding layer followed by a three-layer LSTM to update the recurrent state; each hidden state has 256 dimensions."
                    },
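The "general" score function from Luong et al. (2015) used here is score(h_t, h_s) = h_t^T W h_s. A self-contained sketch (input feeding omitted for brevity):

```python
import torch
import torch.nn as nn

class GeneralAttention(nn.Module):
    """Global attention with the 'general' score function: scores
    over all encoder states are softmax-normalized into attention
    weights; the context vector is the weighted sum of encoder states."""
    def __init__(self, dec_dim, enc_dim):
        super().__init__()
        self.W = nn.Linear(enc_dim, dec_dim, bias=False)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (batch, dec_dim); enc_outputs: (batch, time, enc_dim)
        scores = torch.bmm(self.W(enc_outputs),
                           dec_state.unsqueeze(2)).squeeze(2)
        weights = torch.softmax(scores, dim=1)
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        return context, weights
```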
                    {
                        "id": 71,
                        "string": "While training, we use the predicted token 20% of the time as input to the next decoder step and the training token for the remaining 80% of the time (Williams and Zipser, 1989) ."
                    },
                    {
                        "id": 72,
                        "string": "At test time we use beam decoding with a beam size of 5 and length normalization (Wu et al., 2016) with a weight of 0.6."
                    },
                    {
                        "id": 73,
                        "string": "Training and implementation."
                    },
                    {
                        "id": 74,
                        "string": "Parameters for the CNN and RNN layers are initialized using the scheme from (He et al., 2015) ."
                    },
                    {
                        "id": 75,
                        "string": "For the embedding and fully-connected layers, we use Chainer's (Tokui et al., 2015) default initialition."
                    },
                    {
                        "id": 76,
                        "string": "We regularize using dropout (Srivastava et al., 2014) , with a ratio of 0.3 over the embedding and LSTM layers (Gal, 2016) , and a weight decay rate of 0.0001."
                    },
                    {
                        "id": 77,
                        "string": "The parameters are optimized using Adam (Kingma and Ba, 2015) , with a starting alpha of 0.001."
                    },
                    {
                        "id": 78,
                        "string": "Following some preliminary experimentation on our development set, we add Gaussian noise with standard deviation of 0.25 to the MFCC features during training, and drop frames with a probability of 0.10."
                    },
                    {
                        "id": 79,
                        "string": "After 20 epochs, we corrupt the true decoder labels by sampling a random output label with a probability of 0.3."
                    },
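The input corruption described in the last two sentences is easy to reproduce. A numpy sketch under one reading of "drop frames" (frames removed; zeroing them out is an equally plausible reading):

```python
import numpy as np

def corrupt_mfcc(mfcc, noise_std=0.25, drop_prob=0.10, rng=np.random):
    """Add Gaussian noise (std 0.25) to MFCC features and drop
    whole frames with probability 0.10, at training time only."""
    noisy = mfcc + rng.normal(0.0, noise_std, size=mfcc.shape)
    keep = rng.random(len(noisy)) >= drop_prob   # frames to retain
    return noisy[keep]
```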
                    {
                        "id": 80,
                        "string": "Our code is implemented in Chainer (Tokui et al., 2015) and is freely available."
                    },
                    {
                        "id": 81,
                        "string": "4 Evaluation Metrics."
                    },
                    {
                        "id": 82,
                        "string": "We report BLEU (Papineni et al., 2002) for all our models."
                    },
                    {
                        "id": 83,
                        "string": "5 In low-resource settings, BLEU scores tend to be low, difficult to interpret, and poorly correlated with model performance."
                    },
                    {
                        "id": 84,
                        "string": "This is because BLEU requires exact four-gram matches only, but low four-gram accuracy may obscure a high unigram accuracy and inexact translations that partially capture the semantics of an utterance, and these can still be very useful in situations like language documentation and crisis response."
                    },
                    {
                        "id": 85,
                        "string": "Therefore, we also report word-level unigram precision and recall, taking into account stem, synonym, and paraphrase matches."
                    },
                    {
                        "id": 86,
                        "string": "To compute these scores, we use METEOR (Lavie and Agarwal, 2007) with default settings for English and French."
                    },
                    {
                        "id": 87,
                        "string": "6 For example, METEOR assigns \"eat\" a recall of 1 against reference \"eat\" and a recall of 0.8 against reference \"feed\", which it considers a synonym match."
                    },
                    {
                        "id": 88,
                        "string": "Naive baselines."
                    },
                    {
                        "id": 89,
                        "string": "We also include evaluation scores for a naive baseline model that predicts the K most frequent words of the training set as a bag of words for each test utterance."
                    },
                    {
                        "id": 90,
                        "string": "We set K to be the value at which precision/recall are most similar, which is always between 5 and 20 words."
                    },
                    {
                        "id": 91,
                        "string": "This provides an empirical lower bound on precision and recall, since we would expect any usable model to outperform a system that does not even depend on the input utterance."
                    },
                    {
                        "id": 92,
                        "string": "We do not compute BLEU for these baselines, since they do not predict sequences, only bags of words."
                    },
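The naive baseline and its clipped unigram precision/recall can be sketched as follows (the paper scores with METEOR, which additionally credits stems, synonyms, and paraphrases; plain clipped matching is a simplification). K is then chosen as the value at which precision and recall are most similar:

```python
from collections import Counter

def naive_baseline(train_refs, k):
    """Predict the K most frequent training words as a bag of
    words for every test utterance."""
    counts = Counter(w for ref in train_refs for w in ref)
    return [w for w, _ in counts.most_common(k)]

def bag_precision_recall(pred_bag, ref_tokens):
    """Clipped unigram precision/recall of the fixed bag against
    one reference."""
    pred, ref = Counter(pred_bag), Counter(ref_tokens)
    overlap = sum((pred & ref).values())   # clipped matches
    precision = overlap / max(sum(pred.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    return precision, recall
```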
                    {
                        "id": 93,
                        "string": "ment data in Table 1 ."
                    },
                    {
                        "id": 94,
                        "string": "7 We denote each ASR model by L-Nh, where L is a language code and N is the size of the training set in hours."
                    },
                    {
                        "id": 95,
                        "string": "For example, en-300h denotes an English ASR model trained on 300 hours of data."
                    },
                    {
                        "id": 96,
                        "string": "Training ASR models for state-of-the-art performance requires substantial hyper-parameter tuning and long training times."
                    },
                    {
                        "id": 97,
                        "string": "Since our goal is simply to see whether pre-training is useful, we stopped pretraining our models after around 30 epochs (3 days) to focus on transfer experiments."
                    },
                    {
                        "id": 98,
                        "string": "As a consequence, our ASR results are far from state-of-the-art: current end-to-end Kaldi systems obtain 16% WER on Switchboard train-dev, and 22.7% WER on the French Globalphone dev set."
                    },
                    {
                        "id": 99,
                        "string": "8 We believe that better ASR pre-training may produce better ST results, but we leave this for future work."
                    },
                    {
                        "id": 100,
                        "string": "Spanish-English ST In the following, we denote an ST model by S-T-Nh, where S and T are source and target language codes, and N is the size of the training set in hours."
                    },
                    {
                        "id": 101,
                        "string": "For example, sp-en-20h denotes a Spanish-English ST model trained using 20 hours of data."
                    },
                    {
                        "id": 102,
                        "string": "We use the code mb for Mboshi and fr for French."
                    },
                    {
                        "id": 103,
                        "string": "Figure 2 shows the BLEU and unigram precision/recall scores on the development set for baseline Spanish-English ST models and those trained after initializing with the en-300h model."
                    },
                    {
                        "id": 104,
                        "string": "Corresponding results on the test set (Table 2) previous results (Bansal et al., 2018) using the same train/test splits, primarily due to better regularization and modeling of subwords rather than words."
                    },
                    {
                        "id": 105,
                        "string": "Yet transfer learning still substantially improves over these strong baselines."
                    },
                    {
                        "id": 106,
                        "string": "For sp-en-20h, transfer learning improves dev set BLEU from 10.8 to 19.9, precision from 41% to 51%, and recall from 38% to 49%."
                    },
                    {
                        "id": 107,
                        "string": "For sp-en-50h, transfer learning improves BLEU from 23.3 to 27.8, precision from 54% to 58%, and recall from 51% to 56%."
                    },
                    {
                        "id": 108,
                        "string": "Using English ASR to improve ST Very low-resource: 10 hours or less of ST training data."
                    },
                    {
                        "id": 109,
                        "string": "Figure 2 shows that without transfer learning, ST models trained on less than 10 hours of data struggle to learn, with precision/recall scores close to or below that of the naive baseline."
                    },
                    {
                        "id": 110,
                        "string": "But with transfer learning, we see gains in precision and recall of between 10 and 20 points."
                    },
                    {
                        "id": 111,
                        "string": "We also see that with transfer learning, a model trained on only 5 hours of ST data achieves a BLEU of 9.1, nearly as good as the 10.8 of a model trained on 20 hours of ST data without transfer learning."
                    },
                    {
                        "id": 112,
                        "string": "In other words, fine-tuning an English ASR modelwhich is relatively easy to obtain-produces similar results to training an ST model on four times as N = 0 2.5 5 10 20 50 base 0 2.1 1.8 2.1 10.8 22.7 +asr 0.5 5.7 9.1 14.5 20.2 28.2  much data, which may be difficult to obtain."
                    },
                    {
                        "id": 113,
                        "string": "We even find that in the very low-resource setting of just 2.5 hours of ST data, with transfer learning the model achieves a precision/recall of around 30% and improves by more than 10 points over the naive baseline."
                    },
                    {
                        "id": 114,
                        "string": "In very low-resource scenarios with time constraints-such as in disaster relief-it is possible that even this level of performance may be useful, since it can be used to spot keywords in speech and can be trained in just three hours."
                    },
                    {
                        "id": 115,
                        "string": "Sample translations."
                    },
                    {
                        "id": 116,
                        "string": "Table 3 shows example translations for models sp-en-20h and sp-en-50h with and without transfer learning using en-300h."
                    },
                    {
                        "id": 117,
                        "string": "Figure 3 shows the attention weights for the last sample utterance in Table 3 ."
                    },
                    {
                        "id": 118,
                        "string": "For this utterance, the Spanish and English text have a different word order: mucho tiempo occurs in the middle of the speech utterance, and its translation, long time, is at the end of the English reference."
                    },
                    {
                        "id": 119,
                        "string": "Similarly, vive aquí occurs at the end of the speech utterance, while the translation, living here, is in the middle of the English reference."
                    },
                    {
                        "id": 120,
                        "string": "The baseline sp-en-50h model translates the words correctly but doesn't get the English word order right."
                    },
                    {
                        "id": 121,
                        "string": "[Figure 3 caption: Attention plots for the final example in Table 3, using 50h models with and without pre-training.]"
                    },
                    {
                        "id": 122,
                        "string": "[Figure 3 caption: The x-axis shows the reference Spanish word positions in the input; the y-axis shows the predicted English subwords.]"
                    },
                    {
                        "id": 123,
                        "string": "[Figure 3 caption: In the reference, mucho tiempo is translated to long time, and vive aquí to living here, but their order is reversed, and this is reflected in (b).]"
                    },
                    {
                        "id": 124,
                        "string": "With transfer learning, the model produces a shorter but still accurate translation in the correct word order."
                    },
                    {
                        "id": 125,
                        "string": "Analysis To understand the source of these improvements, we carried out a set of ablation experiments."
                    },
                    {
                        "id": 126,
                        "string": "For most of these experiments, we focus on Spanish-English ST with 20 hours of training data, with and without transfer learning."
                    },
                    {
                        "id": 127,
                        "string": "Transfer learning with selected parameters."
                    },
                    {
                        "id": 128,
                        "string": "In our first set of experiments, we transferred all parameters of the en-300h model, including the speech encoder CNN and LSTM; the text decoder embedding, LSTM and output layer parameters; and attention parameters."
                    },
                    {
                        "id": 129,
                        "string": "To see which set of parameters has the most impact, we train the sp-en-20h model by transferring only selected parameters from en-300h, and randomly initializing the rest."
                    },
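                    {
                        "id": 129.1,
                        "string": "A minimal PyTorch-style sketch of selective parameter transfer (module names such as 'encoder.' are assumptions for illustration, not the authors' code):\nimport torch\n\ndef transfer_selected(st_model, asr_ckpt_path, prefixes=('encoder.',)):\n    # load only parameters whose names match the chosen prefixes;\n    # strict=False leaves the remaining ST parameters randomly initialized\n    asr_state = torch.load(asr_ckpt_path, map_location='cpu')\n    selected = {name: w for name, w in asr_state.items()\n                if name.startswith(prefixes)}\n    st_model.load_state_dict(selected, strict=False)\n    return st_model"
                    },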
                    {
                        "id": 130,
                        "string": "The results (Figure 4) show that transferring all parameters is most effective, and that the speech encoder parameters account for most of the gains."
                    },
                    {
                        "id": 131,
                        "string": "We hypothesize that the encoder learns transferable low-level acoustic features that normalize across variability like speaker and channel differences to better capture meaningful phonetic differences, and that much of this learning is language-independent."
                    },
                    {
                        "id": 132,
                        "string": "This hypothesis is supported by other work showing the benefits of cross-lingual and multilingual training for speech technology in low-resource target languages (Carlin et al., 2011; Jansen et al., 2010; Deng et al., 2013; Vu et al., 2012; Thomas et al., 2012; Cui et al., 2015; Alumäe et al., 2016; Renshaw et al., 2015; Hermann and Goldwater, 2018) ."
                    },
                    {
                        "id": 133,
                        "string": "By contrast, transferring only decoder parameters does not improve accuracy."
                    },
                    {
                        "id": 134,
                        "string": "Since decoder parameters help when used in tandem with encoder parameters, we suspect that the dependency in parameter training order might explain this: the transferred decoder parameters have been trained to expect particular input representations from the encoder, so transferring only the decoder parameters without the encoder might not be useful."
                    },
                    {
                        "id": 135,
                        "string": "Figure 4 also suggests that models make strong gains early on in the training when using transfer learning."
                    },
                    {
                        "id": 136,
                        "string": "The sp-en-20h model initialized with all model parameters (+asr:all) from en-300h reaches a higher BLEU score after just 5 epochs (2 hours) of training than the model without transfer learning trained for 60 epochs/20 hours."
                    },
                    {
                        "id": 137,
                        "string": "This again can be useful in disaster-recovery scenarios, where the time to deploy a working system must be minimized."
                    },
                    {
                        "id": 138,
                        "string": "Amount of ASR data required."
                    },
                    {
                        "id": 139,
                        "string": "Figure 5 shows the impact of increasing the amount of English ASR data used on Spanish-English ST performance for two models: sp-en-20h and sp-en-50h."
                    },
                    {
                        "id": 140,
                        "string": "For sp-en-20h, we see that using en-100h improves performance by almost 6 BLEU points."
                    },
                    {
                        "id": 141,
                        "string": "By using more English ASR training data (en-300h) model, the BLEU score increases by almost 9 points."
                    },
                    {
                        "id": 142,
                        "string": "However, for sp-en-50h, we only see improvements when using en-300h."
                    },
                    {
                        "id": 143,
                        "string": "This implies that transfer learning is most useful when only a few tens of hours of training data are available for ST. As the amount of ST training data increases, the benefits of transfer learning tail off, although it's possible that using even more monolingual data, or improving the training at the ASR step, could extend the benefits to larger ST data sets."
                    },
                    {
                        "id": 144,
                        "string": "Impact of code-switching."
                    },
                    {
                        "id": 145,
                        "string": "We also tried using the en-300h ASR model without any fine-tuning to translate Spanish audio to English text."
                    },
                    {
                        "id": 146,
                        "string": "This model achieved a BLEU score of 1.1, with a precision of 15 and recall of 21."
                    },
                    {
                        "id": 147,
                        "string": "The non-zero BLEU score indicates that the model is matching some 4-grams in the reference."
                    },
                    {
                        "id": 148,
                        "string": "This seems to be due to code-switching in the Fisher-Spanish speech data set."
                    },
                    {
                        "id": 149,
                        "string": "Looking at the dev set utterances, we find several examples where the Spanish transcriptions match the English translations, indicating that the speaker switched into English."
                    },
                    {
                        "id": 150,
                        "string": "For example, there is an utterance whose Spanish transcription and English translation are both \"right yeah\", and this English expression is indeed present in the source audio."
                    },
                    {
                        "id": 151,
                        "string": "The English ASR model correctly translates this utterance, which is unsurprising since the phrase \"right yeah\" occurs nearly 500 times in Switchboard."
                    },
                    {
                        "id": 152,
                        "string": "Overall, we find that in nearly 500 of the 4,000 development set utterances (14%), the Spanish transcription and English translations share more than half of their tokens, indicating likely codeswitching."
                    },
                    {
                        "id": 153,
                        "string": "This suggests that transfer learning from English ASR models might help more than from other languages."
                    },
                    {
                        "id": 154,
                        "string": "To isolate this effect from transfer learning of language-independent speech features, we carried out a further experiment."
                    },
                    {
                        "id": 155,
                        "string": "Using French ASR to improve Spanish-English ST In this experiment, we pre-train using French ASR data for a Spanish-English translation task."
                    },
                    {
                        "id": 156,
                        "string": "Here, we can only transfer the speech encoder parameters, and there should be little if any benefit due to codeswitching."
                    },
                    {
                        "id": 157,
                        "string": "Because our French data set (20 hours) is much smaller than our English one (300 hours), for a fair comparison we used a 20 hour subset of the English data for pre-training in this experiment."
                    },
                    {
                        "id": 158,
                        "string": "For both the English and French models, we transferred only the encoder parameters."
                    },
                    {
                        "id": 159,
                        "string": "Table 4 shows that both the English and French 20-hour pre-trained models improve performance on Spanish-English ST."
                    },
                    {
                        "id": 160,
                        "string": "The English model works slightly better, as would be predicted given our discussion of code-switching, but the French model is also useful, improving BLEU from 10.8 to 12.5."
                    },
                    {
                        "id": 161,
                        "string": "This result strengthens the claim that ASR pretraining on a completely distinct third language can help low-resource ST."
                    },
                    {
                        "id": 162,
                        "string": "Presumably benefits would be much greater if we used a larger ASR data set, as we did with English above."
                    },
                    {
                        "id": 163,
                        "string": "In this experiment, the French pre-trained model used a French BPE output vocabulary, distinct from the English BPE vocabulary used in the ST system."
                    },
                    {
                        "id": 164,
                        "string": "In the future it would be interesting to try combining the French and English text to create a combined output vocabulary, which would allow transferring both the encoder and decoder parameters, and may be useful for translating names or cognates."
                    },
                    {
                        "id": 165,
                        "string": "More generally, it would also be possible to pre-train on multiple languages simultaneously using a shared BPE vocabulary."
                    },
                    {
                        "id": 166,
                        "string": "There is evidence that speech features trained on multiple languages transfer better than those trained on the same amount of data from a single language (Hermann and Goldwater, 2018), so multilingual pretraining for ST could improve results."
                    },
                    {
                        "id": 167,
                        "string": "baseline +fr-20h +en-20h sp-en-20h 10.8 12.5 13.2 Table 5 shows the ST model scores for Mboshi-French with and without using transfer learning."
                    },
                    {
                        "id": 168,
                        "string": "The first two rows fr-top-8w, fr-top-10w, show precision and recall scores for the naive baselines where we predict the top 8 or 10 most frequent French words in the Mboshi-French training set."
                    },
                    {
                        "id": 169,
                        "string": "These show that a precision/recall in the low 20s is easy to achieve, although with no n-gram matches (0 BLEU)."
                    },
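                    {
                        "id": 169.1,
                        "string": "A sketch of how such a naive baseline is scored (unigram precision/recall with clipped counts; helper names are assumptions):\nfrom collections import Counter\n\ndef naive_baseline_pr(train_refs, test_refs, k=10):\n    # predict the k most frequent training words for every test utterance\n    freq = Counter(w for ref in train_refs for w in ref.split())\n    pred = [w for w, _ in freq.most_common(k)]\n    tp = fp = fn = 0\n    for ref in test_refs:\n        ref_counts = Counter(ref.split())\n        for w in pred:\n            if ref_counts[w] > 0:\n                tp += 1\n                ref_counts[w] -= 1\n            else:\n                fp += 1\n        fn += sum(ref_counts.values())\n    precision = tp / (tp + fp) if tp + fp else 0.0\n    recall = tp / (tp + fn) if tp + fn else 0.0\n    return precision, recall"
                    },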
                    {
                        "id": 170,
                        "string": "The pre-trained ASR models by themselves (next two lines) are much worse."
                    },
                    {
                        "id": 171,
                        "string": "The baseline model trained only on ST data actually has lower precision/recall than the naive baseline, although its non-zero BLEU score indicates that it is able to correctly predict some n-grams."
                    },
                    {
                        "id": 172,
                        "string": "We see comparable precision/recall to the naive baseline with improvements in BLEU by transferring either French ASR parameters (both encoder and decoder, fr-20h) or English ASR parameters (encoder only, en-300h)."
                    },
                    {
                        "id": 173,
                        "string": "Finally, to achieve the benefits of both the larger training set size for the encoder and the matching language of the decoder, we tried transferring the encoding parameters from the en-300h model and the decoding parameters from the fr-20h model."
                    },
                    {
                        "id": 174,
                        "string": "This configuration (en+fr) gives us the best evaluation scores on all metrics, and highlights the flexibility of our framework."
                    },
                    {
                        "id": 175,
                        "string": "Nevertheless, the 4-hour scenario is clearly a very challenging one."
                    },
                    {
                        "id": 176,
                        "string": "Conclusion This paper introduced the idea of pre-training an end-to-end speech translation system involving a low-resource language using ASR training data from a higher-resource language."
                    },
                    {
                        "id": 177,
                        "string": "We showed that large gains are possible: for example, we achieved an improvement of 9 BLEU points for a Spanish-English ST model with 20 hours of parallel data and 300 hours of English ASR data."
                    },
                    {
                        "id": 178,
                        "string": "Moreover, the pre-trained model trains faster than the baseline, achieving higher BLEU in only a couple of hours, while the baseline trains for more than a day."
                    },
                    {
                        "id": 179,
                        "string": "We also showed that these methods can be used effectively on a real low-resource language, Mboshi, with only 4 hours of parallel data."
                    },
                    {
                        "id": 180,
                        "string": "The very small size of the data set makes the task challenging, but by combining parameters from an English encoder and French decoder, we outperformed baseline models to obtain a BLEU score of 7.1 and precision/recall of about 25%."
                    },
                    {
                        "id": 181,
                        "string": "We believe ours is the first paper to report word-level BLEU scores on this data set."
                    },
                    {
                        "id": 182,
                        "string": "Our analysis indicates that, other things being equal, transferring both encoder and decoder parameters works better than just transferring one or the other."
                    },
                    {
                        "id": 183,
                        "string": "However, transferring the encoder parameters is where most of the benefit comes from."
                    },
                    {
                        "id": 184,
                        "string": "Pre-training using a large ASR corpus from a mismatched language will therefore probably work better than using a smaller ASR corpus that matches the output language."
                    },
                    {
                        "id": 185,
                        "string": "Our analysis suggests several avenues for further exploration."
                    },
                    {
                        "id": 186,
                        "string": "On the speech side, it might be even more effective to use multilingual training; or to replace the MFCC input features with pre-trained multilingual features, or features that are targeted to low-resource multispeaker settings (Kamper et al., , 2017 Thomas et al., 2012; Cui et al., 2015; Renshaw et al., 2015) ."
                    },
                    {
                        "id": 187,
                        "string": "On the language modeling side, simply transferring decoder parameters from an ASR model did not work; it might work better to use pre-trained decoder parameters from a language model, as proposed by Ramachandran et al."
                    },
                    {
                        "id": 188,
                        "string": "(2017) , or shallow fusion (Gülçehre et al., 2015; Toshniwal et al., 2018a) , which interpolates a pre-trained language model during beam search."
                    },
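                    {
                        "id": 188.1,
                        "string": "A minimal sketch of shallow-fusion scoring during beam search (the interpolation weight lam is a hypothetical value, not from the paper):\ndef fused_score(log_p_task, log_p_lm, lam=0.3):\n    # interpolate the task model with a pre-trained LM at each decoding step\n    return log_p_task + lam * log_p_lm\n\ndef best_next_token(task_logprobs, lm_logprobs, lam=0.3):\n    # task_logprobs, lm_logprobs: dicts mapping token -> log-probability\n    return max(task_logprobs,\n               key=lambda t: fused_score(task_logprobs[t], lm_logprobs[t], lam))"
                    },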
                    {
                        "id": 189,
                        "string": "In these methods, the decoder parameters are independent, and can therefore be used on their own."
                    },
                    {
                        "id": 190,
                        "string": "We plan to explore these strategies in future work."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 20
                    },
                    {
                        "section": "Method",
                        "n": "2",
                        "start": 21,
                        "end": 48
                    },
                    {
                        "section": "Preprocessing",
                        "n": "3.2",
                        "start": 49,
                        "end": 61
                    },
                    {
                        "section": "Model architecture for ASR and ST",
                        "n": "3.3",
                        "start": 62,
                        "end": 80
                    },
                    {
                        "section": "Evaluation",
                        "n": "3.4",
                        "start": 81,
                        "end": 99
                    },
                    {
                        "section": "Spanish-English ST",
                        "n": "5",
                        "start": 100,
                        "end": 107
                    },
                    {
                        "section": "Using English ASR to improve ST",
                        "n": "5.1",
                        "start": 108,
                        "end": 124
                    },
                    {
                        "section": "Analysis",
                        "n": "5.2",
                        "start": 125,
                        "end": 154
                    },
                    {
                        "section": "Using French ASR to improve",
                        "n": "5.3",
                        "start": 155,
                        "end": 175
                    },
                    {
                        "section": "Conclusion",
                        "n": "7",
                        "start": 176,
                        "end": 190
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1360-Figure4-1.png",
                        "caption": "Figure 4: Fisher development set training curves (reported using BLEU) for sp-en-20h using selected parameters from en-300h: none (base); encoder CNN only (+asr:cnn); encoder CNN and LSTM only (+asr:enc); decoder only (+asr:dec); and all: encoder, attention, and decoder (+asr:all). These scores do not use beam search and are therefore lower than the best scores reported in Figure 2.",
                        "page": 5,
                        "bbox": {
                            "x1": 320.15999999999997,
                            "x2": 512.64,
                            "y1": 64.8,
                            "y2": 191.51999999999998
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Figure3-1.png",
                        "caption": "Figure 3: Attention plots for the final example in Table 3, using 50h models with and without pre-training. The x-axis shows the reference Spanish word positions in the input; the y-axis shows the predicted English subwords. In the reference, mucho tiempo is translated to long time, and vive aquı́ to living here, but their order is reversed, and this is reflected in (b).",
                        "page": 5,
                        "bbox": {
                            "x1": 89.75999999999999,
                            "x2": 272.15999999999997,
                            "y1": 61.44,
                            "y2": 369.12
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Figure1-1.png",
                        "caption": "Figure 1: Encoder-decoder with attention model architecture for both ASR and ST. The encoder input is the Spanish speech utterance claro, translated as clearly, represented as BPE (subword) units.",
                        "page": 1,
                        "bbox": {
                            "x1": 317.76,
                            "x2": 509.28,
                            "y1": 69.6,
                            "y2": 221.28
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Figure5-1.png",
                        "caption": "Figure 5: Spanish-to-English BLEU scores on Fisher dev set, with 0h (no transfer learning), 100h and 300h of English ASR data used.",
                        "page": 6,
                        "bbox": {
                            "x1": 86.39999999999999,
                            "x2": 275.03999999999996,
                            "y1": 65.75999999999999,
                            "y2": 157.92
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Table4-1.png",
                        "caption": "Table 4: Fisher dev set BLEU scores for sp-en-20h. baseline: model without transfer learning. Last two columns: Using encoder parameters from French ASR (+fr-20h), and English ASR (+en-20h).",
                        "page": 7,
                        "bbox": {
                            "x1": 76.8,
                            "x2": 285.12,
                            "y1": 62.4,
                            "y2": 102.24
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Table5-1.png",
                        "caption": "Table 5: Mboshi-to-French translation scores, with and without ASR pre-training. Pr. is the precision, and Rec. the recall score. fr-top-8w and fr-top-10w are naive baselines that, respectively, predict the 8 or 10 most frequent training words. For en + fr, we use encoder parameters from en-300h and attention+decoder parameters from fr-20h",
                        "page": 7,
                        "bbox": {
                            "x1": 72.0,
                            "x2": 290.4,
                            "y1": 179.04,
                            "y2": 319.2
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Table1-1.png",
                        "caption": "Table 1: Word Error Rate (WER, in %) for the ASR models used as pretraining, computed on Switchboard train-dev for English and Globalphone dev for French.",
                        "page": 3,
                        "bbox": {
                            "x1": 317.76,
                            "x2": 515.04,
                            "y1": 62.4,
                            "y2": 102.24
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Table2-1.png",
                        "caption": "Table 2: BLEU scores for Spanish-English ST on the Fisher test set, usingN hours of training data. base: no transfer learning. +asr: using model parameters from English ASR (en-300h).",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.0799999999999,
                            "y1": 62.4,
                            "y2": 116.16
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Table3-1.png",
                        "caption": "Table 3: Example translations on selected sentences from the Fisher development set, with stem-level ngram matches to the reference sentence underlined. 20h and 50h are Spanish-English models without pretraining; 20h+asr and 50h+asr are pre-trained on 300 hours of English ASR.",
                        "page": 4,
                        "bbox": {
                            "x1": 306.71999999999997,
                            "x2": 526.56,
                            "y1": 190.56,
                            "y2": 337.91999999999996
                        }
                    },
                    {
                        "filename": "../figure/image/1360-Figure2-1.png",
                        "caption": "Figure 2: (top) BLEU and (bottom) Unigram precision/recall for Spanish-English ST models computed on Fisher dev set. base indicates no transfer learning; +asr are models trained by fine-tuning en-300h model parameters. naive baseline indicates the score when we predict the 15 most frequent English words in the training set.",
                        "page": 4,
                        "bbox": {
                            "x1": 82.56,
                            "x2": 280.32,
                            "y1": 65.75999999999999,
                            "y2": 315.36
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-80"
        },
        {
            "slides": {
                "0": {
                    "title": "Semantic Parsing",
                    "text": [
                        "h h h ?",
                        "Introduction Semantic parser Abstract examples Results Conclusions"
                    ],
                    "page_nums": [
                        1,
                        7,
                        10
                    ],
                    "images": []
                },
                "3": {
                    "title": "Problems with Weak Supervision",
                    "text": [
                        "Introduction Semantic parser Abstract examples Results Conclusions",
                        "Spurious programs (Pasupat and Liang, 2016; Guu et al.,"
                    ],
                    "page_nums": [
                        4,
                        5
                    ],
                    "images": []
                },
                "4": {
                    "title": "CNLVR Cuhr et al 2017",
                    "text": [
                        "xz :There is a small yellow item not touching any wall",
                        "Introduction Semantic parser Abstract examples > Results Conclusions"
                    ],
                    "page_nums": [
                        6
                    ],
                    "images": []
                },
                "5": {
                    "title": "Insight",
                    "text": [
                        "Introduction Semantic parser Abstract examples Results Conclusions"
                    ],
                    "page_nums": [
                        8
                    ],
                    "images": []
                },
                "6": {
                    "title": "Contributions",
                    "text": [
                        "Data augmentation Abstract cache",
                        "helps search tackles spuriousness",
                        "Introduction Semantic parser Abstract examples Results Conclusions"
                    ],
                    "page_nums": [
                        9
                    ],
                    "images": []
                },
                "7": {
                    "title": "Logical Program",
                    "text": [
                        "Introduction Semantic parser Abstract examples Results Conclusions"
                    ],
                    "page_nums": [
                        11
                    ],
                    "images": []
                },
                "10": {
                    "title": "Abstraction",
                    "text": [
                        "Introduction Semantic parser Abstract examples Results Conclusions"
                    ],
                    "page_nums": [
                        14,
                        16
                    ],
                    "images": [
                        "figure/image/1363-Table3-1.png"
                    ]
                },
                "14": {
                    "title": "Abstract Cache",
                    "text": [
                        "Introduction Semantic parser Abstract examples Results Conclusions"
                    ],
                    "page_nums": [
                        19
                    ],
                    "images": [
                        "figure/image/1363-Figure3-1.png"
                    ]
                },
                "15": {
                    "title": "Reward Tying",
                    "text": [
                        "size: 20}, . xz: There is a oi yellow item not touching any wall",
                        "50% Spurious amp Y :True",
                        "Introduction Semantic parser Abstract examples Results Conclusions 21",
                        "a :There is a small yellow item * [Hytoe: . a Black, ty) oA"
                    ],
                    "page_nums": [
                        20,
                        21
                    ],
                    "images": []
                },
                "18": {
                    "title": "Results Public test set",
                    "text": [
                        "Test-P Accuracy Test-P Consistency",
                        "Majority MaxEnt Sup. Sup.+Rerank W.Sup. W.Sup.+Rerank",
                        "Introduction Semantic parser Abstract examples Results Conclusions"
                    ],
                    "page_nums": [
                        24
                    ],
                    "images": []
                },
                "19": {
                    "title": "Ablations",
                    "text": [
                        "Abstract weakly supervised parser",
                        "Introduction Semantic parser Abstract examples Results Conclusions",
                        "Dev Accuracy Dev Consistency",
                        "-Abstraction -Data augment. -Beam cache W.Sup.+Rerank"
                    ],
                    "page_nums": [
                        25,
                        26
                    ],
                    "images": []
                },
                "20": {
                    "title": "Conclusions",
                    "text": [
                        "Similar ideas in: Dong and Lapata (2018) and Zhang et al.",
                        "Automation would be useful"
                    ],
                    "page_nums": [
                        27,
                        28
                    ],
                    "images": []
                }
            },
            "paper_title": "Weakly Supervised Semantic Parsing with Abstract Examples",
            "paper_id": "1363",
            "paper": {
                "title": "Weakly Supervised Semantic Parsing with Abstract Examples",
                "abstract": "Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways. First, a large search space of potential programs needs to be explored at training time to find a correct program. Second, spurious programs that accidentally lead to a correct denotation add noise to training. In this work we propose that in closed worlds with clear semantic types, one can substantially alleviate these problems by utilizing an abstract representation, where tokens in both the language utterance and program are lifted to an abstract form. We show that these abstractions can be defined with a handful of lexical rules and that they result in sharing between different examples that alleviates the difficulties in training. To test our approach, we develop the first semantic parser for CNLVR, a challenging visual reasoning dataset, where the search space is large and overcoming spuriousness is critical, because denotations are either TRUE or FALSE, and thus random programs are likely to lead to a correct denotation. Our method substantially improves performance, and reaches 82.5% accuracy, a 14.7% absolute accuracy improvement compared to the best reported accuracy so far.",
                "text": [
                    {
                        "id": 0,
                        "string": "Introduction The goal of semantic parsing is to map language utterances to executable programs."
                    },
                    {
                        "id": 1,
                        "string": "[Footnote: Authors contributed equally to this work.]"
                    },
                    {
                        "id": 2,
                        "string": "[Figure 1 caption: Overview of our visual reasoning setup for the CNLVR dataset.]"
                    },
                    {
                        "id": 3,
                        "string": "[Figure 1 caption: Given an image rendered from a KB k and an utterance x, our goal is to parse x to a program z that results in the correct denotation y.]"
                    },
                    {
                        "id": 4,
                        "string": "[Figure 1 caption: Our training data includes (x, k, y) triplets.]"
                    },
                    {
                        "id": 5,
                        "string": "Early work on statistical learning of semantic parsers utilized supervised learning, where training examples included pairs of language utterances and programs (Zelle and Mooney, 1996; Kate et al., 2005; Zettlemoyer and Collins, 2005, 2007)."
                    },
                    {
                        "id": 6,
                        "string": "However, collecting such training examples at scale has quickly turned out to be difficult, because expert annotators who are familiar with formal languages are required."
                    },
                    {
                        "id": 7,
                        "string": "This has led to a body of work on weaklysupervised semantic parsing (Clarke et al., 2010; Liang et al., 2011; Krishnamurthy and Mitchell, 2012; Kwiatkowski et al., 2013; Berant et al., 2013; Cai and Yates, 2013; ."
                    },
                    {
                        "id": 8,
                        "string": "In this setup, training examples correspond to utterance-denotation pairs, where a denotation is the result of executing a program against the environment (see Fig."
                    },
                    {
                        "id": 9,
                        "string": "1 )."
                    },
                    {
                        "id": 10,
                        "string": "Naturally, collecting denotations is much easier, because it can be performed by non-experts."
                    },
                    {
                        "id": 11,
                        "string": "Training semantic parsers from denotations rather than programs complicates training in two ways: (a) Search: The algorithm must learn to search through the huge space of programs at training time, in order to find the correct program."
                    },
                    {
                        "id": 12,
                        "string": "This is a difficult search problem due to the combinatorial nature of the search space."
                    },
                    {
                        "id": 13,
                        "string": "(b) Spurious-ness: Incorrect programs can lead to correct denotations, and thus the learner can go astray based on these programs."
                    },
                    {
                        "id": 14,
                        "string": "Of the two mentioned problems, spuriousness has attracted relatively less attention (Pasupat and Liang, 2016; Guu et al., 2017) ."
                    },
                    {
                        "id": 15,
                        "string": "Recently, the Cornell Natural Language for Visual Reasoning corpus (CNLVR) was released (Suhr et al., 2017) , and has presented an opportunity to better investigate the problem of spuriousness."
                    },
                    {
                        "id": 16,
                        "string": "In this task, an image with boxes that contains objects of various shapes, colors and sizes is shown."
                    },
                    {
                        "id": 17,
                        "string": "Each image is paired with a complex natural language statement, and the goal is to determine whether the statement is true or false (Fig."
                    },
                    {
                        "id": 18,
                        "string": "1) ."
                    },
                    {
                        "id": 19,
                        "string": "The task comes in two flavors, where in one the input is the image (pixels), and in the other it is the knowledge-base (KB) from which the image was synthesized."
                    },
                    {
                        "id": 20,
                        "string": "Given the KB, it is easy to view CNLVR as a semantic parsing problem: our goal is to translate language utterances into programs that will be executed against the KB to determine their correctness (Johnson et al., 2017b; Hu et al., 2017) ."
                    },
                    {
                        "id": 21,
                        "string": "Because there are only two return values, it is easy to generate programs that execute to the right denotation, and thus spuriousness is a major problem compared to previous datasets."
                    },
                    {
                        "id": 22,
                        "string": "In this paper, we present the first semantic parser for CNLVR."
                    },
                    {
                        "id": 23,
                        "string": "Semantic parsing can be coarsely divided into a lexical task (i.e., mapping words and phrases to program constants), and a structural task (i.e., mapping language composition to program composition operators)."
                    },
                    {
                        "id": 24,
                        "string": "Our core insight is that in closed worlds with clear semantic types, like spatial and visual reasoning, we can manually construct a small lexicon that clusters language tokens and program constants, and create a partially abstract representation for utterances and programs (Table 1) in which the lexical problem is substantially reduced."
                    },
                    {
                        "id": 25,
                        "string": "This scenario is ubiquitous in many semantic parsing applications such as calendar, restaurant reservation systems, housing applications, etc: the formal language has a compact semantic schema and a well-defined typing system, and there are canonical ways to express many program constants."
                    },
                    {
                        "id": 26,
                        "string": "We show that with abstract representations we can share information across examples and better tackle the search and spuriousness challenges."
                    },
                    {
                        "id": 27,
                        "string": "By pulling together different examples that share the same abstract representation, we can identify programs that obtain high reward across multiple examples, thus reducing the problem of spuriousness."
                    },
                    {
                        "id": 28,
                        "string": "This can also be done at search time, by augmenting the search state with partial programs that have been shown to be useful in earlier iterations."
                    },
                    {
                        "id": 29,
                        "string": "Moreover, we can annotate a small number of abstract utterance-program pairs, and automatically generate training examples, that will be used to warm-start our model to an initialization point in which search is able to find correct programs."
                    },
                    {
                        "id": 30,
                        "string": "We develop a formal language for visual reasoning, inspired by Johnson et al."
                    },
                    {
                        "id": 31,
                        "string": "(2017b) , and train a semantic parser over that language from weak supervision, showing that abstract examples substantially improve parser accuracy."
                    },
                    {
                        "id": 32,
                        "string": "Our parser obtains an accuracy of 82.5%, a 14.7% absolute accuracy improvement compared to stateof-the-art."
                    },
                    {
                        "id": 33,
                        "string": "All our code is publicly available at https://github.com/udiNaveh/ nlvr_tau_nlp_final_proj."
                    },
                    {
                        "id": 34,
                        "string": "Setup Problem Statement Given a training set of N examples {(x i , k i , y i )} N i=1 , where x i is an utterance, k i is a KB describing objects in an image and y i ∈ {TRUE, FALSE} denotes whether the utterance is true or false in the KB, our goal is to learn a semantic parser that maps a new utterance x to a program z such that when z is executed against the corresponding KB k, it yields the correct denotation y (see Fig."
                    },
                    {
                        "id": 35,
                        "string": "1 )."
                    },
                    {
                        "id": 36,
                        "string": "Programming language The original KBs in CNLVR describe an image as a set of objects, where each object has a color, shape, size and location in absolute coordinates."
                    },
                    {
                        "id": 37,
                        "string": "We define a programming language over the KB that is more amenable to spatial reasoning, inspired by work on the CLEVR dataset (Johnson et al., 2017b) ."
                    },
                    {
                        "id": 38,
                        "string": "This programming language provides access to functions that allow us to check the size, shape, and color of an object, to check whether it is touching a wall, to obtain sets of items that are above and below a certain set of items, etc."
                    },
                    {
                        "id": 39,
                        "string": "1 More formally, a program is a sequence of tokens describing a possibly recursive sequence of function applications in prefix notation."
                    },
                    {
                        "id": 40,
                        "string": "Each token is either a function with fixed arity (all functions have either one or two arguments), a constant, a variable or a λ term used to define Boolean functions."
                    },
                    {
                        "id": 41,
                        "string": "Functions, constants and variables have one of the following atomic types: Int, Bool, Item, Size, Shape, Color, Side (sides of a box in the image); or a composite type Set(?) or Func(?,?)."
                    },
                    {
                        "id": 42,
                        "string": "[Table 1: An example of an utterance-program pair (x, z) and its abstract counterpart.]"
                    },
                    {
                        "id": 43,
                        "string": "[Table 1:] x: \"There are exactly 3 yellow squares touching the wall.\""
                    },
                    {
                        "id": 44,
                        "string": "[Table 1:] z: Equal(3, Count(Filter(ALL ITEMS, λx.And(And(IsYellow(x), IsSquare(x), IsTouchingWall(x))))))"
                    },
                    {
                        "id": 45,
                        "string": "[Table 1:] abstract x: \"There are C-QuantMod C-Num C-Color C-Shape touching the wall.\""
                    },
                    {
                        "id": 46,
                        "string": "[Table 1:] abstract z: C-QuantMod(C-Num, Count(Filter(ALL ITEMS, λx.And(And(IsC-Color(x), IsC-Shape(x), IsTouchingWall(x))))))"
                    },
                    {
                        "id": 47,
                        "string": "[Table 2: Examples of utterance-program pairs; commas and parentheses are provided for readability only.]"
                    },
                    {
                        "id": 48,
                        "string": "[Table 2:] x: \"There is a small yellow item not touching any wall.\""
                    },
                    {
                        "id": 49,
                        "string": "[Table 2:] z: Exist(Filter(ALL ITEMS, λx.And(And(IsYellow(x), IsSmall(x)), Not(IsTouchingWall(x, Side.Any)))))"
                    },
                    {
                        "id": 50,
                        "string": "[Table 2:] x: \"One tower has a yellow base.\" z: GreaterEqual(1, Count(Filter(ALL ITEMS, λx.And(IsYellow(x), IsBottom(x)))))"
                    },
                    {
                        "id": 51,
                        "string": "Valid programs have a return type Bool."
                    },
                    {
                        "id": 52,
                        "string": "Tables 1 and 2 provide examples for utterances and their correct programs."
                    },
                    {
                        "id": 53,
                        "string": "The supplementary material provides a full description of all program tokens, their arguments and return types."
                    },
                    {
                        "id": 54,
                        "string": "Unlike CLEVR, CNLVR requires substantial set-theoretic reasoning (utterances refer to various aspects of sets of items in one of the three boxes in the image), which required extending the language described by Johnson et al."
                    },
                    {
                        "id": 55,
                        "string": "(2017b) to include set operators and lambda abstraction."
                    },
                    {
                        "id": 56,
                        "string": "We manually sampled 100 training examples from the training data and estimate that roughly 95% of the utterances in the training data can be expressed with this programming language."
                    },
                    {
                        "id": 57,
                        "string": "Model We base our model on the semantic parser of Guu et al."
                    },
                    {
                        "id": 58,
                        "string": "(2017) ."
                    },
                    {
                        "id": 59,
                        "string": "In their work, they used an encoderdecoder architecture (Sutskever et al., 2014) to define a distribution p θ (z | x)."
                    },
                    {
                        "id": 60,
                        "string": "The utterance x is encoded using a bi-directional LSTM (Hochreiter and Schmidhuber, 1997 ) that creates a contextualized representation h i for every utterance token x i , and the decoder is a feed-forward network combined with an attention mechanism over the encoder outputs (Bahdanau et al., 2015) ."
                    },
                    {
                        "id": 61,
                        "string": "The feedforward decoder takes as input the last K tokens that were decoded."
                    },
                    {
                        "id": 62,
                        "string": "More formally, the probability of a program is the product of the probabilities of its tokens given the history: $p_\\theta(z \\mid x) = \\prod_t p_\\theta(z_t \\mid x, z_{1:t-1})$."
                    },
                    {
                        "id": 63,
                        "string": "The probability of a decoded token is computed as follows."
                    },
                    {
                        "id": 64,
                        "string": "First, a Bi-LSTM encoder converts the input sequence of utterance embeddings into a sequence of forward and backward states $h^{\\{F,B\\}}_1, \\ldots, h^{\\{F,B\\}}_{|x|}$."
                    },
                    {
                        "id": 65,
                        "string": "The utterance representation is $\\bar{x} = [h^F_{|x|}; h^B_1]$."
                    },
                    {
                        "id": 66,
                        "string": "Then decoding produces the program token-by-token:"
                    },
                    {
                        "id": 67,
                        "string": "$q_t = \\mathrm{ReLU}(W_q [\\bar{x}; v; z_{t-K-1:t-1}])$, $\\alpha_{t,i} \\propto \\exp(q_t W_\\alpha h_i)$, $c_t = \\sum_i \\alpha_{t,i} h_i$,"
                    },
                    {
                        "id": 68,
                        "string": "$p_\\theta(z_t \\mid x, z_{1:t-1}) \\propto \\exp(\\phi_{z_t} W_s [q_t; c_t])$,"
                    },
                    {
                        "id": 69,
                        "string": "where $\\phi_z$ is an embedding for program token $z$ and $v$ is a bag-of-words vector for the tokens in $x$."
                    },
                    {
                        "id": 70,
                        "string": "$z_{i:j} = (z_i, \\ldots, z_j)$ is a history vector of size $K$."
                    },
                    {
                        "id": 71,
                        "string": "The matrices $W_q$, $W_\\alpha$, $W_s$ are learned parameters (along with the LSTM parameters and embedding matrices), and ';' denotes concatenation."
                    },
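                    {
                        "id": 71.1,
                        "string": "A minimal NumPy sketch of this decoding step (parameter names and shapes are assumptions for illustration, not the authors' code):\nimport numpy as np\n\ndef relu(v):\n    return np.maximum(v, 0.0)\n\ndef softmax(v):\n    e = np.exp(v - v.max())\n    return e / e.sum()\n\ndef decode_step(x_bar, v_bow, hist, H, W_q, W_alpha, W_s, token_emb):\n    # q_t = ReLU(W_q [x_bar; v; z_{t-K-1:t-1}])\n    q_t = relu(W_q @ np.concatenate([x_bar, v_bow, hist]))\n    # alpha_{t,i} proportional to exp(q_t W_alpha h_i), over encoder states H\n    scores = np.array([q_t @ W_alpha @ h_i for h_i in H])\n    alpha = softmax(scores)\n    # context vector c_t = sum_i alpha_{t,i} h_i\n    c_t = alpha @ H\n    # p(z_t | x, z_{1:t-1}) proportional to exp(phi_z W_s [q_t; c_t])\n    logits = token_emb @ (W_s @ np.concatenate([q_t, c_t]))\n    return softmax(logits)"
                    },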
                    {
                        "id": 72,
                        "string": "Search: Searching through the large space of programs is a fundamental challenge in semantic parsing."
                    },
                    {
                        "id": 73,
                        "string": "To combat this challenge we apply several techniques."
                    },
                    {
                        "id": 74,
                        "string": "First, we use beam search at decoding time and when training from weak supervision (see Sec."
                    },
                    {
                        "id": 75,
                        "string": "4), similar to prior work Guu et al., 2017) ."
                    },
                    {
                        "id": 76,
                        "string": "At each decoding step we maintain a beam B of program prefixes of length n, expand them exhaustively to programs of length n+1 and keep the top-|B| program prefixes with highest model probability."
                    },
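                    {
                        "id": 76.1,
                        "string": "A small sketch of one beam-expansion step as described (the expand function is a hypothetical interface returning token log-probabilities under the model):\ndef beam_search_step(beam, expand, beam_size):\n    # beam: list of (prefix, log_prob) pairs of length-n program prefixes\n    candidates = []\n    for prefix, lp in beam:\n        # expand(prefix) yields (token, log p(token | x, prefix)) pairs\n        for tok, tok_lp in expand(prefix):\n            candidates.append((prefix + [tok], lp + tok_lp))\n    # keep the top-|B| prefixes of length n+1 by model probability\n    candidates.sort(key=lambda c: c[1], reverse=True)\n    return candidates[:beam_size]"
                    },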
                    {
                        "id": 77,
                        "string": "Second, we utilize the semantic typing system to only construct programs that are syntactically valid, and substantially prune the program search space (similar to type constraints in Krishnamurthy et al."
                    },
                    {
                        "id": 78,
                        "string": "(2017) 2017) )."
                    },
                    {
                        "id": 79,
                        "string": "We maintain a stack that keeps track of the expected semantic type at each decoding step."
                    },
                    {
                        "id": 80,
                        "string": "The stack is initialized with the type Bool."
                    },
                    {
                        "id": 81,
                        "string": "Then, at each decoding step, only tokens that return the semantic type at the top of the stack are allowed, the stack is popped, and if the decoded token is a function, the semantic types of its arguments are pushed to the stack."
                    },
                    {
                        "id": 82,
                        "string": "This dramatically reduces the search space and guarantees that only syntactically valid programs will be produced."
                    },
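                    {
                        "id": 82.1,
                        "string": "A minimal sketch of the type stack described above; the token signature table is a tiny hypothetical fragment of the full language:\n# Each token maps to (return_type, argument_types).\nTOKEN_TYPES = {\n    'Equal': ('Bool', ['Int', 'Int']),\n    'Count': ('Int', ['Set']),\n    'Filter': ('Set', ['Set', 'Func']),\n    'ALL_ITEMS': ('Set', []),\n    '3': ('Int', []),\n}\n\ndef valid_next_tokens(stack):\n    # only tokens whose return type matches the top of the stack are allowed\n    expected = stack[-1]\n    return [t for t, (ret, _) in TOKEN_TYPES.items() if ret == expected]\n\ndef push_token(stack, token):\n    # pop the satisfied expectation; push argument types (first argument on top)\n    _, args = TOKEN_TYPES[token]\n    return stack[:-1] + list(reversed(args))\n\nstack = ['Bool']                    # decoding starts expecting a Bool\nstack = push_token(stack, 'Equal')  # now expects two Ints\nstack = push_token(stack, '3')      # one Int satisfied; decoding ends when empty"
                    },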
                    {
                        "id": 83,
                        "string": "Fig."
                    },
                    {
                        "id": 84,
                        "string": "2 illustrates the state of the stack when decoding a program for an input utterance."
                    },
                    {
                        "id": 85,
                        "string": "Given the constrains on valid programs, our model p θ (z | x) is defined as: t p θ (z t | x, z 1:t−1 ) · 1(z t | z 1:t−1 ) z p θ (z | x, z 1:t−1 ) · 1(z | z 1:t−1 ) , where 1(z t | z 1:t−1 ) indicates whether a certain program token is valid given the program prefix."
                    },
                    {
                        "id": 86,
                        "string": "Discriminative re-ranking: The above model is a locally-normalized model that provides a distribution for every decoded token, and thus might suffer from the label bias problem (Andor et al., 2016; Lafferty et al., 2001) ."
                    },
                    {
                        "id": 87,
                        "string": "Thus, we add a globally-normalized re-ranker p ψ (z | x) that scores all |B| programs in the final beam produced by p θ (z | x)."
                    },
                    {
                        "id": 88,
                        "string": "Our globally-normalized model is: p g ψ (z | x) ∝ exp(s ψ (x, z)), and is normalized over all programs in the beam."
                    },
                    {
                        "id": 89,
                        "string": "The scoring function s ψ (x, z) is a neural network with identical architecture to the locallynormalized model, except that (a) it feeds the decoder with the candidate program z and does not generate it."
                    },
                    {
                        "id": 90,
                        "string": "(b) the last hidden state is inserted to a feed-forward network whose output is s ψ (x, z)."
                    },
                    {
                        "id": 91,
                        "string": "Our final ranking score is p θ (z|x)p g ψ (z | x)."
                    },
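                    {
                        "id": "91a",
                        "string": "A minimal sketch of this final ranking score (illustrative; function names are assumptions):\nimport math\n\ndef rerank(beam, p_theta, s_psi):\n    # beam: the |B| final programs; p_theta: program -> locally-normalized\n    # probability; s_psi: program -> re-ranker score. p^g_psi is exp(s_psi)\n    # normalized over the beam, and the final score is their product.\n    z = sum(math.exp(s_psi(p)) for p in beam)\n    return max(beam, key=lambda p: p_theta(p) * math.exp(s_psi(p)) / z)"
                    },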
                    {
                        "id": 92,
                        "string": "Training We now describe our basic method for training from weak supervision, which we extend upon in Sec."
                    },
                    {
                        "id": 93,
                        "string": "5 using abstract examples."
                    },
                    {
                        "id": 94,
                        "string": "To use weak supervision, we treat the program z as a latent variable that is approximately marginalized."
                    },
                    {
                        "id": 95,
                        "string": "To describe the objective, define R(z, k, y) ∈ {0, 1} to be one if executing program z on KB k results in denotation y, and zero otherwise."
                    },
                    {
                        "id": 96,
                        "string": "The objective is then to maximize p(y | x) given by: z∈Z p θ (z | x)p(y | z, k) = z∈Z p θ (z | x)R(z, k, y) ≈ z∈B p θ (z | x)R(z, k, y) where Z is the space of all programs and B ⊂ Z are the programs found by beam search."
                    },
                    {
                        "id": 97,
                        "string": "In most semantic parsers there will be relatively few z that generate the correct denotation y."
                    },
                    {
                        "id": 98,
                        "string": "However, in CNLVR, y is binary, and so spuriousness is a central problem."
                    },
                    {
                        "id": 99,
                        "string": "To alleviate it, we utilize a property of CNLVR: the same utterance appears 4 times with 4 different images."
                    },
                    {
                        "id": 100,
                        "string": "2 If a program is spurious it is likely that it will yield the wrong denotation in one of those 4 images."
                    },
                    {
                        "id": 101,
                        "string": "Thus, we can re-define each training example to be (x, {(k j , y j )} 4 j=1 ), where each utterance x is paired with 4 different KBs and the denotations of the utterance with respect to these KBs."
                    },
                    {
                        "id": 102,
                        "string": "Then, we maximize p({y j } 4 j=1 | x, ) by maximizing the objective above, except that R(z, {k j , y j } 4 j=1 ) = 1 iff the denotation of z is correct for all four KBs."
                    },
                    {
                        "id": 103,
                        "string": "This dramatically reduces the problem of spuriousness, as the chance of randomly obtaining a correct denotation goes down from 1 2 to 1 16 ."
                    },
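                    {
                        "id": "103a",
                        "string": "A minimal sketch of this tied reward (illustrative; the executor is a toy stand-in, not the paper's code):\ndef reward(program, kb_denotation_pairs, execute):\n    # 1 iff the program's denotation is correct on all KBs sharing the utterance;\n    # with 4 binary denotations, a spurious program is rewarded with\n    # probability 1/16 rather than 1/2.\n    return 1.0 if all(execute(program, kb) == y for kb, y in kb_denotation_pairs) else 0.0\n\n# Toy usage: a program is a callable over a KB (a list of item dicts).\nprog = lambda kb: any(item['color'] == 'yellow' for item in kb)\npairs = [([{'color': 'yellow'}], True), ([{'color': 'blue'}], False)]\nprint(reward(prog, pairs, execute=lambda p, kb: p(kb)))  # 1.0"
                    },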
                    {
                        "id": 104,
                        "string": "This is reminiscent of Pasupat and Liang (2016) , where random permutations of Wikipedia tables were shown to crowdsourcing workers to eliminate spurious programs."
                    },
                    {
                        "id": 105,
                        "string": "We train the discriminative ranker analogously by maximizing the probability of programs with correct denotation z∈B p g ψ (z | x)R(z, k, y)."
                    },
                    {
                        "id": 106,
                        "string": "This basic training method fails for CNLVR (see Sec."
                    },
                    {
                        "id": 107,
                        "string": "6), due to the difficulties of search and spuriousness."
                    },
                    {
                        "id": 108,
                        "string": "Thus, we turn to learning from abstract examples, which substantially reduce these problems."
                    },
                    {
                        "id": 109,
                        "string": "Learning from Abstract Examples The main premise of this work is that in closed, well-typed domains such as visual reasoning, the main challenge is handling language compositionality, since questions may have a complex and nested structure."
                    },
                    {
                        "id": 110,
                        "string": "Conversely, the problem of mapping lexical items to functions and constants in the programming language can be substantially alleviated by taking advantage of the compact KB schema and typing system, and utilizing a Utterance Program Cluster # \"yellow\" IsYellow C-Color 3 \"big\" IsBig C-Size 3 \"square\" IsSquare C-Shape 4 \"3\" 3 C-Num 2 \"exactly\" EqualInt C-QuantMod 5 \"top\" Side.Top C-Location 2 \"above\" GetAbove C-SpaceRel 6 Total: 25 Table 3 : Example mappings from utterance tokens to program tokens for the seven clusters used in the abstract representation."
                    },
                    {
                        "id": 111,
                        "string": "The rightmost column counts the number of mapping in each cluster, resulting in a total of 25 mappings."
                    },
                    {
                        "id": 112,
                        "string": "small lexicon that maps prevalent lexical items into typed program constants."
                    },
                    {
                        "id": 113,
                        "string": "Thus, if we abstract away from the actual utterance into a partially abstract representation, we can combat the search and spuriousness challenges as we can generalize better across examples in small datasets."
                    },
                    {
                        "id": 114,
                        "string": "Consider the utterances: 1."
                    },
                    {
                        "id": 115,
                        "string": "\"There are exactly 3 yellow squares touching the wall.\""
                    },
                    {
                        "id": 116,
                        "string": "2."
                    },
                    {
                        "id": 117,
                        "string": "\"There are at least 2 blue circles touching the wall.\""
                    },
                    {
                        "id": 118,
                        "string": "While the surface forms of these utterances are different, at an abstract level they are similar and it would be useful to leverage this similarity."
                    },
                    {
                        "id": 119,
                        "string": "We therefore define an abstract representation for utterances and logical forms that is suitable for spatial reasoning."
                    },
                    {
                        "id": 120,
                        "string": "We define seven abstract clusters (see Table 3 ) that correspond to the main semantic types in our domain."
                    },
                    {
                        "id": 121,
                        "string": "Then, we associate each cluster with a small lexicon that contains language-program token pairs associated with this cluster."
                    },
                    {
                        "id": 122,
                        "string": "These mappings represent the canonical ways in which program constants are expressed in natural language."
                    },
                    {
                        "id": 123,
                        "string": "Table 3 shows the seven clusters we use, with an example for an utterance-program token pair from the cluster, and the number of mappings in each cluster."
                    },
                    {
                        "id": 124,
                        "string": "In total, 25 mappings are used to define abstract representations."
                    },
                    {
                        "id": 125,
                        "string": "As we show next, abstract examples can be used to improve the process of training semantic parsers."
                    },
                    {
                        "id": 126,
                        "string": "Specifically, in sections 5.1-5.3, we use abstract examples in several ways, from generating new training data to improving search accuracy."
                    },
                    {
                        "id": 127,
                        "string": "The combined effect of these approaches is quite dramatic, as our evaluation demonstrates."
                    },
                    {
                        "id": 128,
                        "string": "High Coverage via Abstract Examples We begin by demonstrating that abstraction leads to rather effective coverage of the types of questions asked in a dataset."
                    },
                    {
                        "id": 129,
                        "string": "Namely, that many ques-tions in the data correspond to a small set of abstract examples."
                    },
                    {
                        "id": 130,
                        "string": "We created abstract representations for all 3,163 utterances in the training examples by mapping utterance tokens to their cluster label, and then counted how many distinct abstract utterances exist."
                    },
                    {
                        "id": 131,
                        "string": "We found that as few as 200 abstract utterances cover roughly half of the training examples in the original training set."
                    },
                    {
                        "id": 132,
                        "string": "The above suggests that knowing how to answer a small set of abstract questions may already yield a reasonable baseline."
                    },
                    {
                        "id": 133,
                        "string": "To test this baseline, we constructured a \"rule-based\" parser as follows."
                    },
                    {
                        "id": 134,
                        "string": "We manually annotated 106 abstract utterances with their corresponding abstract program (including alignment between abstract tokens in the utterance and program)."
                    },
                    {
                        "id": 135,
                        "string": "For example, Table 1 shows the abstract utterance and program for the utterance \"There are exactly 3 yellow squares touching the wall\"."
                    },
                    {
                        "id": 136,
                        "string": "Note that the utterance \"There are at least 2 blue circles touching the wall\" will be mapped to the same abstract utterance and program."
                    },
                    {
                        "id": 137,
                        "string": "Given this set of manual annotations, our rulebased semantic parser operates as follows: Given an utterance x, create its abstract representationx."
                    },
                    {
                        "id": 138,
                        "string": "If it exactly matches one of the manually annotated utterances, map it to its corresponding abstract programz."
                    },
                    {
                        "id": 139,
                        "string": "Replace the abstract program tokens with real program tokens based on the alignment with the utterance tokens, and obtain a final program z. Ifx does not match return TRUE, the majority label."
                    },
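                    {
                        "id": "139a",
                        "string": "A minimal sketch of this rule-based procedure (illustrative: the lexicon and annotated pairs are toy stand-ins, and alignment is simplified to in-order slot filling):\nLEXICON = {'yellow': ('C-Color', 'IsYellow'), 'blue': ('C-Color', 'IsBlue'), 'square': ('C-Shape', 'IsSquare'), 'circle': ('C-Shape', 'IsCircle')}\n\ndef abstract(tokens):\n    # Replace each lexicon word with its cluster label.\n    return [LEXICON[t][0] if t in LEXICON else t for t in tokens]\n\ndef parse(tokens, annotated):\n    # annotated: abstract utterance (tuple) -> abstract program with cluster slots\n    key = tuple(abstract(tokens))\n    if key not in annotated:\n        return 'TRUE'  # majority-label fallback\n    program = annotated[key]\n    for t in tokens:  # de-abstract: fill each slot with the aligned program token\n        if t in LEXICON:\n            cluster, prog_tok = LEXICON[t]\n            program = program.replace(cluster, prog_tok, 1)\n    return program\n\nannotated = {('there', 'is', 'a', 'C-Color', 'C-Shape'): 'Exist(Filter(AllItems, C-Color, C-Shape))'}\nprint(parse('there is a yellow square'.split(), annotated))\n# Exist(Filter(AllItems, IsYellow, IsSquare))"
                    },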
                    {
                        "id": 140,
                        "string": "The rule-based parser will fail for examples not covered by the manual annotation."
                    },
                    {
                        "id": 141,
                        "string": "However, it already provides a reasonable baseline (see Table 4 )."
                    },
                    {
                        "id": 142,
                        "string": "As shown next, manual annotations can also be used for generating new training data."
                    },
                    {
                        "id": 143,
                        "string": "Data Augmentation While the rule-based semantic parser has high precision and gauges the amount of structural variance in the data, it cannot generalize beyond observed examples."
                    },
                    {
                        "id": 144,
                        "string": "However, we can automatically generate non-abstract utterance-program pairs from the manually annotated abstract pairs and train a semantic parser with strong supervision that can potentially generalize better."
                    },
                    {
                        "id": 145,
                        "string": "E.g., consider the utterance \"There are exactly 3 yellow squares touching the wall\", whose abstract representation is given in Table 1 ."
                    },
                    {
                        "id": 146,
                        "string": "It is clear that we can use this abstract pair to generate a program for a new utterance \"There are exactly 3 blue squares touching the wall\"."
                    },
                    {
                        "id": 147,
                        "string": "This program will be identical Algorithm 1 Decoding with an Abstract Cache 1: procedure DECODE(x, y, C, D) 2: // C is a map where the key is an abstract utterance and the value is a pair (Z,R) of a list of abstract programs Z and their average rewardsR."
                    },
                    {
                        "id": 148,
                        "string": "D is an integer."
                    },
                    {
                        "id": 149,
                        "string": "3:x ← Abstract utterance of x 4: A ← D programs in C[x] with top reward values 5: B1 ← compute beam of programs of length 1 6: for t = 2 ."
                    },
                    {
                        "id": 150,
                        "string": "."
                    },
                    {
                        "id": 151,
                        "string": "."
                    },
                    {
                        "id": 152,
                        "string": "T do // Decode with cache 7: Bt ← construct beam from Bt−1 8: At = truncate(A, t) 9: Bt.add(de-abstract(At)) 10: for z ∈ BT do //Update cache 11: Update rewards in C[x] using (z, R(z, y)) 12: return BT ∪ de-abstract(A)."
                    },
                    {
                        "id": 153,
                        "string": "to the program of the first utterance, with IsBlue replacing IsYellow."
                    },
                    {
                        "id": 154,
                        "string": "More generally, we can sample any abstract example and instantiate the abstract clusters that appear in it by sampling pairs of utterance-program tokens for each abstract cluster."
                    },
                    {
                        "id": 155,
                        "string": "Formally, this is equivalent to a synchronous context-free grammar (Chiang, 2005) that has a rule for generating each manually-annotated abstract utteranceprogram pair, and rules for synchronously generating utterance and program tokens from the seven clusters."
                    },
                    {
                        "id": 156,
                        "string": "We generated 6,158 (x, z) examples using this method and trained a standard sequence to sequence parser by maximizing log p θ (z|x) in the model above."
                    },
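                    {
                        "id": "156a",
                        "string": "A minimal sketch of this augmentation step (illustrative; the cluster lexicons are toy stand-ins):\nimport random\n\nCLUSTERS = {'C-Color': [('yellow', 'IsYellow'), ('blue', 'IsBlue')], 'C-Shape': [('square', 'IsSquare'), ('circle', 'IsCircle')]}\n\ndef instantiate(abs_utt, abs_prog):\n    # Synchronously replace each cluster slot in the abstract utterance and\n    # program with a sampled word/token pair from that cluster's lexicon.\n    utt, prog = abs_utt, abs_prog\n    for cluster, pairs in CLUSTERS.items():\n        while cluster in utt.split():\n            word, tok = random.choice(pairs)\n            utt = utt.replace(cluster, word, 1)\n            prog = prog.replace(cluster, tok, 1)\n    return utt, prog\n\nprint(instantiate('there is a C-Color C-Shape', 'Exist(Filter(AllItems, C-Color, C-Shape))'))\n# e.g. ('there is a blue circle', 'Exist(Filter(AllItems, IsBlue, IsCircle))')"
                    },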
                    {
                        "id": 157,
                        "string": "Although these are generated from a small set of 106 abstract utterances, they can be used to learn a model with higher coverage and accuracy compared to the rule-based parser, as our evaluation demonstrates."
                    },
                    {
                        "id": 158,
                        "string": "3 The resulting parser can be used as a standalone semantic parser."
                    },
                    {
                        "id": 159,
                        "string": "However, it can also be used as an initialization point for the weakly-supervised semantic parser."
                    },
                    {
                        "id": 160,
                        "string": "As we observe in Sec."
                    },
                    {
                        "id": 161,
                        "string": "6, this results in further improvement in accuracy."
                    },
                    {
                        "id": 162,
                        "string": "Caching Abstract Examples We now describe a caching mechanism that uses abstract examples to combat search and spuriousness when training from weak supervision."
                    },
                    {
                        "id": 163,
                        "string": "As shown in Sec."
                    },
                    {
                        "id": 164,
                        "string": "5.1, many utterances are identical at the abstract level."
                    },
                    {
                        "id": 165,
                        "string": "Thus, a natural idea is to keep track at training time of abstract utteranceprogram pairs that resulted in a correct denotation, and use this information to direct the search procedure."
                    },
                    {
                        "id": 166,
                        "string": "Concretely, we construct a cache C that maps abstract utterances to all abstract programs that were decoded by the model, and tracks the average reward obtained for those programs."
                    },
                    {
                        "id": 167,
                        "string": "For every utterance x, after obtaining the final beam of programs, we add to the cache all abstract utteranceprogram pairs (x,z), and update their average reward (Alg."
                    },
                    {
                        "id": 168,
                        "string": "1, line 10)."
                    },
                    {
                        "id": 169,
                        "string": "To construct an abstract example (x,z) from an utterance-program pair (x, z) in the beam, we perform the following procedure."
                    },
                    {
                        "id": 170,
                        "string": "First, we createx by replacing utterance tokens with their cluster label, as in the rule-based semantic parser."
                    },
                    {
                        "id": 171,
                        "string": "Then, we go over every program token in z, and replace it with an abstract cluster if the utterance contains a token that is mapped to this program token according to the mappings from Table 3 ."
                    },
                    {
                        "id": 172,
                        "string": "This also provides an alignment from abstract program tokens to abstract utterance tokens that is necessary when utilizing the cache."
                    },
                    {
                        "id": 173,
                        "string": "We propose two variants for taking advantage of the cache C. Both are shown in Algorithm 1."
                    },
                    {
                        "id": 174,
                        "string": "1."
                    },
                    {
                        "id": 175,
                        "string": "Full program retrieval (Alg."
                    },
                    {
                        "id": 176,
                        "string": "1, line 12): Given utterance x, construct an abstract utterancex, retrieve the top D abstract programs A from the cache, compute the de-abstracted programs Z using alignments from program tokens to utterance tokens, and add the D programs to the final beam."
                    },
                    {
                        "id": 177,
                        "string": "2."
                    },
                    {
                        "id": 178,
                        "string": "Program prefix retrieval (Alg."
                    },
                    {
                        "id": 179,
                        "string": "1, line 9): Here, we additionally consider prefixes of abstract programs to the beam, to further guide the search process."
                    },
                    {
                        "id": 180,
                        "string": "At each step t, let B t be the beam of decoded programs at step t. For every abstract programz ∈ A add the de-abstracted prefix z 1:t to B t and expand B t+1 accordingly."
                    },
                    {
                        "id": 181,
                        "string": "This allows the parser to potentially construct new programs that are not in the cache already."
                    },
                    {
                        "id": 182,
                        "string": "This approach combats both spuriousness and the search challenge, because we add promising program prefixes to the beam that might have fallen off of it earlier."
                    },
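                    {
                        "id": "182a",
                        "string": "A minimal sketch of the cache C and its top-D retrieval (illustrative; not the paper's code):\nfrom collections import defaultdict\n\nclass AbstractCache:\n    def __init__(self):\n        # abstract utterance -> abstract program -> [total reward, count]\n        self.stats = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))\n\n    def update(self, abs_utt, abs_prog, reward):\n        s = self.stats[abs_utt][abs_prog]\n        s[0] += reward\n        s[1] += 1\n\n    def top(self, abs_utt, d):\n        # The D abstract programs with highest average reward (cf. Alg. 1, line 4).\n        progs = self.stats[abs_utt]\n        return sorted(progs, key=lambda p: progs[p][0] / progs[p][1], reverse=True)[:d]\n\ncache = AbstractCache()\ncache.update('there is a C-Color C-Shape', 'Exist(Filter(AllItems, C-Color, C-Shape))', 1.0)\ncache.update('there is a C-Color C-Shape', 'Exist(AllItems)', 0.0)\nprint(cache.top('there is a C-Color C-Shape', d=1))"
                    },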
                    {
                        "id": 183,
                        "string": "Fig."
                    },
                    {
                        "id": 184,
                        "string": "3 visualizes the caching mechanism."
                    },
                    {
                        "id": 185,
                        "string": "A high-level overview of our entire approach for utilizing abstract examples at training time for both data augmentation and model training is given in Fig."
                    },
                    {
                        "id": 186,
                        "string": "4 ."
                    },
                    {
                        "id": 187,
                        "string": "Experimental Evaluation Model and Training Parameters The Bi-LSTM state dimension is 30."
                    },
                    {
                        "id": 188,
                        "string": "The decoder has one hidden layer of dimension 50, that takes the  last 4 decoded tokens as input as well as encoder states."
                    },
                    {
                        "id": 189,
                        "string": "Token embeddings are of dimension 12, beam size is 40 and D = 10 programs are used in Algorithm 1."
                    },
                    {
                        "id": 190,
                        "string": "Word embeddings are initialized from CBOW (Mikolov et al., 2013) trained on the training data, and are then optimized end-toend."
                    },
                    {
                        "id": 191,
                        "string": "In the weakly-supervised parser we encourage exploration with meritocratic gradient updates with β = 0.5 (Guu et al., 2017) ."
                    },
                    {
                        "id": 192,
                        "string": "In the weaklysupervised parser we warm-start the parameters with the supervised parser, as mentioned above."
                    },
                    {
                        "id": 193,
                        "string": "For optimization, Adam is used (Kingma and Ba, 2014) ), with learning rate of 0.001, and mini-batch size of 8."
                    },
                    {
                        "id": 194,
                        "string": "Pre-processing Because the number of utterances is relatively small for training a neural model, we take the following steps to reduce sparsity."
                    },
                    {
                        "id": 195,
                        "string": "We lowercase all utterance tokens, and also use their lemmatized form."
                    },
                    {
                        "id": 196,
                        "string": "We also use spelling correction to replace words that contain typos."
                    },
                    {
                        "id": 197,
                        "string": "After pre-processing we replace every word that occurs less than 5 times with an UNK symbol."
                    },
                    {
                        "id": 198,
                        "string": "Evaluation We evaluate on the public development and test sets of CNLVR as well as on the hidden test set."
                    },
                    {
                        "id": 199,
                        "string": "The standard evaluation metric is accuracy, i.e., how many examples are correctly classified."
                    },
                    {
                        "id": 200,
                        "string": "In addition, we report consistency, which is the proportion of utterances for which the decoded program has the correct denotation for all 4 images/KBs."
                    },
                    {
                        "id": 201,
                        "string": "It captures whether a model consistently produces a correct answer."
                    },
                    {
                        "id": 202,
                        "string": "when taking the KB as input, which is a maximum entropy classifier (MAXENT)."
                    },
                    {
                        "id": 203,
                        "string": "For our models, we evaluate the following variants of our approach: • RULE: The rule-based parser from Sec."
                    },
                    {
                        "id": 204,
                        "string": "5.1."
                    },
                    {
                        "id": 205,
                        "string": "• SUP."
                    },
                    {
                        "id": 206,
                        "string": ": The supervised semantic parser trained on augmented data as in Sec."
                    },
                    {
                        "id": 207,
                        "string": "5.2 (5, 598 examples for training and 560 for validation)."
                    },
                    {
                        "id": 208,
                        "string": "• WEAKSUP."
                    },
                    {
                        "id": 209,
                        "string": ": Our full weakly-supervised semantic parser that uses abstract examples."
                    },
                    {
                        "id": 210,
                        "string": "• +DISC: We add a discriminative re-ranker (Sec."
                    },
                    {
                        "id": 211,
                        "string": "3) for both SUP."
                    },
                    {
                        "id": 212,
                        "string": "and WEAKSUP."
                    },
                    {
                        "id": 213,
                        "string": "Table 4 describes our main results."
                    },
                    {
                        "id": 214,
                        "string": "Our weakly-supervised semantic parser with re-ranking (W.+DISC) obtains 84.0 accuracy and 65.0 consistency on the public test set and 82.5 accuracy and 63.9 on the hidden one, improving accuracy by 14.7 points compared to state-of-theart."
                    },
                    {
                        "id": 215,
                        "string": "The accuracy of the rule-based parser (RULE) is less than 2 points below MAXENT, showing that a semantic parsing approach is very suitable for this task."
                    },
                    {
                        "id": 216,
                        "string": "The supervised parser obtains better performance (especially in consistency), and with re-ranking reaches 76.6 accuracy, showing that generalizing from generated examples is better than memorizing manually-defined patterns."
                    },
                    {
                        "id": 217,
                        "string": "Our weakly-supervised parser significantly improves over SUP., reaching an accuracy of 81.7 before reranking, and 84.0 after re-ranking (on the public test set)."
                    },
                    {
                        "id": 218,
                        "string": "Consistency results show an even crisper trend of improvement across the models."
                    },
                    {
                        "id": 219,
                        "string": "Main results Analysis We analyze our results by running multiple ablations of our best model W.+DISC on the development set."
                    },
                    {
                        "id": 220,
                        "string": "To examine the overall impact of our procedure, we trained a weakly-supervised parser from scratch without pre-training a supervised parser nor using a cache, which amounts to a re-implementation of the RANDOMER algorithm (Guu et al., 2017) ."
                    },
                    {
                        "id": 221,
                        "string": "We find that the algorithm is  unable to bootstrap in this challenging setup and obtains very low performance."
                    },
                    {
                        "id": 222,
                        "string": "Next, we examined the importance of abstract examples, by pretraining only on examples that were manually annotated (utterances that match the 106 abstract patterns), but with no data augmentation or use of a cache (−ABSTRACTION)."
                    },
                    {
                        "id": 223,
                        "string": "This results in performance that is similar to the MAJORITY baseline."
                    },
                    {
                        "id": 224,
                        "string": "To further examine the importance of abstraction, we decoupled the two contributions, training once with a cache but without data augmentation for pre-training (−DATAAUGMENTATION), and again with pre-training over the augmented data, but without the cache (−BEAMCACHE)."
                    },
                    {
                        "id": 225,
                        "string": "We found that the former improves by a few points over the MAXENT baseline, and the latter performs comparably to the supervised parser, that is, we are still unable to improve learning by training from denotations."
                    },
                    {
                        "id": 226,
                        "string": "Lastly, we use a beam cache without line 9 in Alg."
                    },
                    {
                        "id": 227,
                        "string": "1 (−EVERYSTEPBEAMCACHE)."
                    },
                    {
                        "id": 228,
                        "string": "This already results in good performance, substantially higher than SUP."
                    },
                    {
                        "id": 229,
                        "string": "but is still 3.4 points worse than our best performing model on the development set."
                    },
                    {
                        "id": 230,
                        "string": "Orthogonally, to analyze the importance of tying the reward of all four examples that share an utterance, we trained a model without this tying, where the reward is 1 iff the denotation is correct (ONEEXAMPLEREWARD)."
                    },
                    {
                        "id": 231,
                        "string": "We find that spuriousness becomes a major issue and weaklysupervised learning fails."
                    },
                    {
                        "id": 232,
                        "string": "Error Analysis We sampled 50 consistent and 50 inconsistent programs from the development set to analyze the weaknesses of our model."
                    },
                    {
                        "id": 233,
                        "string": "By and large, errors correspond to utterances that are more complex syntactically and semantically."
                    },
                    {
                        "id": 234,
                        "string": "In about half of the errors an object was described by two or more modifying clauses: \"there is a box with a yellow circle and three blue items\"; or nesting occurred: \"one of the gray boxes has exactly three objects one of which is a circle\"."
                    },
                    {
                        "id": 235,
                        "string": "In these cases the model either ignored one of the conditions, resulting in a program equivalent to \"there is a box with three blue items\" for the first case, or applied composition operators wrongly, outputting an equivalent to \"one of the gray boxes has exactly three circles\" for the second case."
                    },
                    {
                        "id": 236,
                        "string": "However, in some cases the parser succeeds on such examples and we found that 12% of the sampled utterances that were parsed correctly had a similar complex structure."
                    },
                    {
                        "id": 237,
                        "string": "Other, less frequent reasons for failure were problems with cardinality interpretation, i.e."
                    },
                    {
                        "id": 238,
                        "string": ",\"there are 2\" parsed as \"exactly 2\" instead of \"at least 2\"; applying conditions to items rather than sets, e.g., \"there are 2 boxes with a triangle closely touching a corner\" parsed as \"there are 2 triangles closely touching a corner\"; and utterances with questionable phrasing, e.g., \"there is a tower that has three the same blocks color\"."
                    },
                    {
                        "id": 239,
                        "string": "Other insights are that the algorithm tended to give higher probability to the top ranked program when it is correct (average probability 0.18), compared to cases when it is incorrect (average probability 0.08), indicating that probabilities are correlated with confidence."
                    },
                    {
                        "id": 240,
                        "string": "In addition, sentence length is not predictive for whether the model will succeed: average sentence length of an utterance is 10.9 when the model is correct, and 11.1 when it errs."
                    },
                    {
                        "id": 241,
                        "string": "We also note that the model was successful with sentences that deal with spatial relations, but struggled with sentences that refer to the size of shapes."
                    },
                    {
                        "id": 242,
                        "string": "This is due to the data distribution, which includes many examples of the former case and fewer examples of the latter."
                    },
                    {
                        "id": 243,
                        "string": "Related Work Training semantic parsers from denotations has been one of the most popular training schemes for scaling semantic parsers since the beginning of the decade."
                    },
                    {
                        "id": 244,
                        "string": "Early work focused on traditional log-linear models (Clarke et al., 2010; Liang et al., 2011; Kwiatkowski et al., 2013) , but recently denotations have been used to train neural semantic parsers Krishnamurthy et al., 2017; Rabinovich et al., 2017; Cheng et al., 2017) ."
                    },
                    {
                        "id": 245,
                        "string": "Visual reasoning has attracted considerable attention, with datasets such as VQA (Antol et al., 2015) and CLEVR (Johnson et al., 2017a) ."
                    },
                    {
                        "id": 246,
                        "string": "The advantage of CNLVR is that language utterances are both natural and compositional."
                    },
                    {
                        "id": 247,
                        "string": "Treating vi-sual reasoning as an end-to-end semantic parsing problem has been previously done on CLEVR (Hu et al., 2017; Johnson et al., 2017b) ."
                    },
                    {
                        "id": 248,
                        "string": "Our method for generating training data resembles data re-combination ideas in Jia and Liang (2016) , where examples are generated automatically by replacing entities with their categories."
                    },
                    {
                        "id": 249,
                        "string": "While spuriousness is central to semantic parsing when denotations are not very informative, there has been relatively little work on explicitly tackling it."
                    },
                    {
                        "id": 250,
                        "string": "Pasupat and Liang (2015) used manual rules to prune unlikely programs on the WIK-ITABLEQUESTIONS dataset, and then later utilized crowdsourcing (Pasupat and Liang, 2016) to eliminate spurious programs."
                    },
                    {
                        "id": 251,
                        "string": "Guu et al."
                    },
                    {
                        "id": 252,
                        "string": "(2017) proposed RANDOMER, a method for increasing exploration and handling spuriousness by adding randomness to beam search and a proposing a \"meritocratic\" weighting scheme for gradients."
                    },
                    {
                        "id": 253,
                        "string": "In our work we found that random exploration during beam search did not improve results while meritocratic updates slightly improved performance."
                    },
                    {
                        "id": 254,
                        "string": "Discussion In this work we presented the first semantic parser for the CNLVR dataset, taking structured representations as input."
                    },
                    {
                        "id": 255,
                        "string": "Our main insight is that in closed, well-typed domains we can generate abstract examples that can help combat the difficulties of training a parser from delayed supervision."
                    },
                    {
                        "id": 256,
                        "string": "First, we use abstract examples to semiautomatically generate utterance-program pairs that help warm-start our parameters, thereby reducing the difficult search challenge of finding correct programs with random parameters."
                    },
                    {
                        "id": 257,
                        "string": "Second, we focus on an abstract representation of examples, which allows us to tackle spuriousness and alleviate search, by sharing information about promising programs between different examples."
                    },
                    {
                        "id": 258,
                        "string": "Our approach dramatically improves performance on CNLVR, establishing a new state-of-the-art."
                    },
                    {
                        "id": 259,
                        "string": "In this paper, we used a manually-built highprecision lexicon to construct abstract examples."
                    },
                    {
                        "id": 260,
                        "string": "This is suitable for well-typed domains, which are ubiquitous in the virtual assistant use case."
                    },
                    {
                        "id": 261,
                        "string": "In future work we plan to extend this work and automatically learn such a lexicon."
                    },
                    {
                        "id": 262,
                        "string": "This can reduce manual effort and scale to larger domains where there is substantial variability on the language side."
                    }
                ],
                "headers": [
                    {
                        "section": "Introduction",
                        "n": "1",
                        "start": 0,
                        "end": 32
                    },
                    {
                        "section": "Setup",
                        "n": "2",
                        "start": 33,
                        "end": 56
                    },
                    {
                        "section": "Model",
                        "n": "3",
                        "start": 57,
                        "end": 91
                    },
                    {
                        "section": "Training",
                        "n": "4",
                        "start": 92,
                        "end": 108
                    },
                    {
                        "section": "Learning from Abstract Examples",
                        "n": "5",
                        "start": 109,
                        "end": 127
                    },
                    {
                        "section": "High Coverage via Abstract Examples",
                        "n": "5.1",
                        "start": 128,
                        "end": 142
                    },
                    {
                        "section": "Data Augmentation",
                        "n": "5.2",
                        "start": 143,
                        "end": 161
                    },
                    {
                        "section": "Caching Abstract Examples",
                        "n": "5.3",
                        "start": 162,
                        "end": 186
                    },
                    {
                        "section": "Experimental Evaluation",
                        "n": "6",
                        "start": 187,
                        "end": 218
                    },
                    {
                        "section": "Analysis",
                        "n": "6.1",
                        "start": 219,
                        "end": 242
                    },
                    {
                        "section": "Related Work",
                        "n": "7",
                        "start": 243,
                        "end": 253
                    },
                    {
                        "section": "Discussion",
                        "n": "8",
                        "start": 254,
                        "end": 262
                    }
                ],
                "figures": [
                    {
                        "filename": "../figure/image/1363-Figure4-1.png",
                        "caption": "Figure 4: An overview of our approach for utilizing abstract examples for data augmentation and model training.",
                        "page": 6,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 515.04,
                            "y1": 316.8,
                            "y2": 466.08
                        }
                    },
                    {
                        "filename": "../figure/image/1363-Figure3-1.png",
                        "caption": "Figure 3: A visualization of the caching mechanism. At each decoding step, prefixes of high-reward abstract programs are added to the beam from the cache.",
                        "page": 6,
                        "bbox": {
                            "x1": 112.8,
                            "x2": 484.32,
                            "y1": 61.44,
                            "y2": 271.2
                        }
                    },
                    {
                        "filename": "../figure/image/1363-Table1-1.png",
                        "caption": "Table 1: An example for an utterance-program pair (x, z) and its abstract counterpart (x̄, z̄)",
                        "page": 2,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 526.0799999999999,
                            "y1": 63.36,
                            "y2": 92.64
                        }
                    },
                    {
                        "filename": "../figure/image/1363-Table2-1.png",
                        "caption": "Table 2: Examples for utterance-program pairs. Commas and parenthesis provided for readability only.",
                        "page": 2,
                        "bbox": {
                            "x1": 72.96,
                            "x2": 526.0799999999999,
                            "y1": 126.24,
                            "y2": 158.4
                        }
                    },
                    {
                        "filename": "../figure/image/1363-Table4-1.png",
                        "caption": "Table 4: Results on the development, public test (Test-P) and hidden test (Test-H) sets. For each model, we report both accuracy and consistency.",
                        "page": 7,
                        "bbox": {
                            "x1": 81.6,
                            "x2": 281.28,
                            "y1": 61.44,
                            "y2": 136.32
                        }
                    },
                    {
                        "filename": "../figure/image/1363-Table5-1.png",
                        "caption": "Table 5: Results of ablations of our main models on the development set. Explanation for the nature of the models is in the body of the paper.",
                        "page": 7,
                        "bbox": {
                            "x1": 327.84,
                            "x2": 505.44,
                            "y1": 61.44,
                            "y2": 144.0
                        }
                    },
                    {
                        "filename": "../figure/image/1363-Figure2-1.png",
                        "caption": "Figure 2: An example for the state of the type stack s while decoding a program z for an utterance x.",
                        "page": 3,
                        "bbox": {
                            "x1": 98.39999999999999,
                            "x2": 499.2,
                            "y1": 65.75999999999999,
                            "y2": 110.39999999999999
                        }
                    },
                    {
                        "filename": "../figure/image/1363-Table3-1.png",
                        "caption": "Table 3: Example mappings from utterance tokens to program tokens for the seven clusters used in the abstract representation. The rightmost column counts the number of mapping in each cluster, resulting in a total of 25 mappings.",
                        "page": 4,
                        "bbox": {
                            "x1": 100.8,
                            "x2": 261.12,
                            "y1": 61.44,
                            "y2": 138.24
                        }
                    }
                ]
            },
            "gem_id": "GEM-SciDuet-chal-81"
        }
    ]
}