metadata (dict) | paper (dict) | review (dict) | citation_count (int64) | normalized_citation_count (int64) | cited_papers (list) | citing_papers (list)
---|---|---|---|---|---|---
{
"id": "p95H-KeMjDt",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=p95H-KeMjDt",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Tracking and Improving Information in the Service of Fairness",
"authors": [
"Sumegha Garg",
"Michael P. Kim",
"Omer Reingold"
],
"abstract": "As algorithmic prediction systems have become widespread, fears that these systems may inadvertently discriminate against members of underrepresented populations have grown. With the goal of understanding fundamental principles that underpin the growing number of approaches to mitigating algorithmic discrimination, we investigate the role of information in fair prediction. A common strategy for decision-making uses a predictor to assign individuals a risk score; then, individuals are selected or rejected on the basis of this score. In this work, we study a formal framework for measuring the information content of predictors. Central to the framework is the notion of a refinement; intuitively, a refinement of a predictor z increases the overall informativeness of the predictions without losing the information already contained in z. We show that increasing information content through refinements improves the downstream selection rules across a wide range of fairness measures (e.g. true positive rates, false positive rates, selection rates). In turn, refinements provide a simple but effective tool for reducing disparity in treatment and impact without sacrificing the utility of the predictions. Our results suggest that in many applications, the perceived \"cost of fairness\" results from an information disparity across populations, and thus, may be avoided with improved information.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Q0-RPC96q7",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200273",
"forum_link": "https://openreview.net/forum?id=Q0-RPC96q7",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Group Behavior Recognition Using Attention- and Graph-Based Neural Networks",
"authors": [
"Fangkai Yang",
"Wenjie Yin",
"Tetsunari Inamura",
"Mårten Björkman",
"Christopher E. Peters"
],
"abstract": "When a conversational group is approached by a newcomer who wishes to join it, the group may dynamically react by adjusting their positions and orientations in order to accommodate it. These reactions represent important cues to the newcomer about if and how they should plan their approach. The recognition and analysis of such socially complaint dynamic group behaviors have rarely been studied in depth and remain a challenging problem in social multi-agent systems. In this paper, we present novel group behavior recognition models, attention-based and graph-based, that consider behaviors on both the individual and group levels. The attention-based category consists of Approach Group Net (AGNet) and Approach Group Transformer (AGTransformer). They share a similar architecture and use attention mechanisms to encode both temporal and spatial information on both the individual and group levels. The graph-based models consist of Approach Group Graph Convolutional Networks (AG-GCN), which combine Multi-Spatial-Temporal Graph Convolutional Networks (MST-GCN) on the individual level and Graph Convolutional Networks (GCN) on the group level, with multi-temporal stages. The individual level learns the spatial and temporal movement patterns of each agent, while the group level captures the relations and interactions of multiple agents. In order to train and evaluate these models, we collected a full-body motion-captured dataset of multiple individuals in conversational groups. Experiments performed using our models to recognize group behaviors from the collected dataset show that AG-GCN, with additional distance and orientation information, achieves the best performance. We also present a multi-agent interaction use case in a virtual environment to show how the models can be practically applied.",
"keywords": [],
"raw_extracted_content": "Group Behavior Recognition Using\nAttention- and Graph-Based Neural Networks\nFangkai Yang1†, Wenjie Yin1†, Tetsunari Inamura2,M˚arten Bj ¨orkman1, Christopher Peters1\nAbstract. When a conversational group is approached by a new-\ncomer who wishes to join it, the group may dynamically react by\nadjusting their positions and orientations in order to accommodate it.These reactions represent important cues to the newcomer about ifand how they should plan their approach. The recognition and anal-ysis of such socially complaint dynamic group behaviors have rarelybeen studied in depth and remain a challenging problem in socialmulti-agent systems. In this paper, we present novel group behaviorrecognition models, attention-based and graph-based, that considerbehaviors on both the individual and group levels. The attention-based category consists of Approach Group Net (AGNet ) and Ap-\nproach Group Transformer (AGTransformer ). They share a similar\narchitecture and use attention mechanisms to encode both tempo-ral and spatial information on both the individual and group levels.The graph-based models consist of Approach Group Graph Convolu-tional Networks (AG-GCN ), which combine Multi-Spatial-Temporal\nGraph Convolutional Networks (MST-GCN ) on the individual level\nand Graph Convolutional Networks (GCN ) on the group level, with\nmulti-temporal stages. The individual level learns the spatial andtemporal movement patterns of each agent, while the group levelcaptures the relations and interactions of multiple agents. In orderto train and evaluate these models, we collected a full-body motion-captured dataset of multiple individuals in conversational groups. Ex-periments performed using our models to recognize group behaviorsfrom the collected dataset show that AG-GCN, with additional dis-tance and orientation information, achieves the best performance. Wealso present a multi-agent interaction use case in a virtual environ-ment to show how the models can be practically applied.\n1 Introduction\nA common pattern of multi-agent interactions arises in small groupswhere people gather and stand in an environment to converse. Thispattern is referred as free-standing conversational groups [11] which\nare ubiquitous in daily life. When humans or robots approach to jointhese groups, one vital ability is to present social compliance. Thenewcomer should adopt behaviors in a socially-acceptable mannerin order to make individuals in the group feel comfortable [20, 34].However, group dynamics are not appropriately considered in previ-ous works so that the conversational groups are assumed to be staticwhen approached by a newcomer [2, 21]. As observed in real scenar-ios such as during coffee breaks and in poster sessions [1, 13] or inhuman-robot interaction experiments [29], the group members react\n†Authors contributed equally, {fangkai, yinw} @kth.se .\n1KTH Royal Institute of Technology, Stockholm, Sweden.\n2National Institute of Informatics, Tokyo, Japan.to the newcomer as they either ignore the newcomer or adjust theirpositions and orientations to better accommodate them (Figure 1).\nDue to the lack of such a capability to recognize dynamic group\nbehaviors, recent works [18, 29] use humans to teleoperate robots toapproach groups leveraging the human perception on the dynamicgroup behaviors. Such teleoperation suffers from limitations that thecontrol needs experienced operators and it is hard to keep consistencyamong different situations. 
This motivates us to collect data that can be used to train machine learning models to recognize and understand group dynamics. It aims to support research, especially in human-agent interaction, by providing human-group interaction data to better understand and learn human behaviors in groups.

Behavior recognition methods have been widely used in real-world scenarios [12, 37], but with fewer applications to human-human/robot interaction at the group level. Recognizing group behaviors is challenging; the difficulty lies in modeling the relations among group members and in the lack of datasets for training. In this paper, we present novel machine learning models, trained on our collected dataset, to recognize group behaviors. They are categorized into attention-based and graph-based models. The attention-based models share a similar architecture but differ in the attention mechanism: AGNet uses LSTM-based soft attention and AGTransformer uses the Transformer model with self-attention. Among these two categories, the Approach Group Graph Convolutional Network (AG-GCN), which combines Multi-Spatial-Temporal Graph Convolutional Networks (MST-GCN) and Graph Convolutional Networks (GCN), achieves the best performance. It builds a spatial-temporal graph from a sequence of body markers. The movement of each agent is modeled through a multi-temporal stage graph on the individual level, and a group-level graph is combined to encode the group spatial relationships. In order to apply our trained model, we present a virtual online group interaction use case based on a cloud-based VR platform [25]. Each participant controls a virtual character through a VR device. Motion data are fed to the trained model to recognize group behaviors in real time.

The major contributions of the paper are summarized as follows:

• We propose novel machine learning models for group behavior recognition when a group is approached by a newcomer, supported by a new full-body motion-captured dataset that we collected.
• We present a multi-agent interaction use case in virtual environments to recognize group dynamics in real time using our models.

2 Related Work

2.1 Multi-agent Interactions in Groups

There have been numerous studies on multi-agent interactions in the fields of Artificial Intelligence [23], Social Science and Cognitive Science [24], with fewer focused on group interaction, specifically situations in which a newcomer approaches to join a group. In a free-standing conversational group, Kendon [16] proposed the F-formation system to define the positions and orientations of individuals within a group. F-formations and other group formation models have been studied computationally [6, 21, 28], and have been used as a basis for group joining behaviors of a mobile robot or an agent [3, 20, 26]. These works focus on navigating a robot or an agent to approach a group in a safe and socially acceptable manner. However, they rely on hand-crafted features.
Other recent works [10, 34, 35] have made use of data-driven methods to generate such group joining behaviors, but they were trained using synthetic data or prior computational models due to the lack of real-life datasets. All aforementioned works assume the newcomer will eventually approach and join a group that is not aware of the newcomer, i.e., the group members have no reaction to the newcomer but stand still in a group. However, as observed in publicly available datasets concerning cocktail parties or poster sessions [1, 13], free-standing conversational groups do interact when approached by a newcomer: they make adjustments in positions and orientations to better accommodate the newcomer, or ignore them. Our dataset contains these group behaviors with detailed 3D full-body information that can be used to learn group behaviors utilizing our models.

2.2 Behavior Recognition Methods

Analysis of multi-agent interactions benefits from human motion recognition, in that a group behavior composed of the actions of each group member is more interpretable. Human behavior recognition has been explored by many researchers, and it can be grouped into vision-based approaches and skeleton-based approaches. While vision-based approaches have been addressed in numerous works [37], complex factors such as scenario, occlusion, and pose estimation error limit their performance. On the other hand, skeleton data recorded by motion capture (MoCap) systems are stable with respect to such external factors; we thus collect our data using motion capture and focus the related work on skeleton-based approaches (see [12] for a review).

From the model perspective, behavior recognition approaches can also be categorized into machine learning algorithms with hand-crafted features and end-to-end deep learning methods [37]. Historically, machine learning algorithms with hand-crafted features were highly active in the topic of human behavior recognition; such works have used Hidden Markov Models [31] and K-means with SVMs [15] to learn behavior recognition models. However, these methods rely on hand-crafted features. With the dawn of deep learning, deep learning algorithms have achieved great success. Recurrent Neural Networks (RNNs), especially Long Short-Term Memory (LSTM) models, have shown extraordinary performance on human behavior recognition by capturing sequential information [8, 38]. In addition, attention-based models [30, 36] utilize attention mechanisms together with RNNs to focus on important information in behavior recognition. Recent graph-based methods use Graph Convolutional Networks (GCN) [17, 22] on constructed human skeleton graphs for behavior recognition. Yan et al. [33] propose Spatial-Temporal Graph Convolutional Networks (ST-GCN) to extract spatial and temporal features. Li et al. [19] use Actional-Structural Graph Convolutional Networks (AS-GCN), which combine actional and structural links into a generalized skeleton graph. Inspired by the success of attention- and graph-based methods, in this work we adopt both approaches for multi-agent interactions.

All of the above methods focus on single-person behavior recognition. As for group behavior recognition, Ibrahim et al. [14] propose a two-stage LSTM model to recognize group activity. [5] uses a set of interconnected RNNs to jointly capture the actions of individuals. However, these RNN-based methods ignore the physical relations among group members and suffer from a lack of flexibility.
We adopt graph-based models to represent the relations among agents. In [32], an actor relation graph is proposed to capture the appearance and position relations between actors, but it simplifies individual behaviors to changes in body positions without considering skeletons or body joints. In our work, the skeleton of each individual is used for training, which contains more detailed behavioral information for behavior recognition. In contrast to previous works, we develop AG-GCN, which combines Multi-Spatial-Temporal GCN and Group-GCN to address the graphical relations between body markers and group members. Meanwhile, we include body distance and head orientation for better interaction recognition performance.

3 Methods

In this section, we present a novel dataset of group behaviors collected with motion capture (Section 3.1). Then we develop novel behavior recognition models, i.e., attention-based models (Section 3.2) and graph-based models (Section 3.3), trained on our dataset.

Figure 1: Two group behaviors when the newcomer (yellow character) approaches to join the group. The red arrow indicates the movement of the newcomer. The group members stand still and purposefully ignore the newcomer (top). The group members accommodate the newcomer, with one group member (red character) moving backwards in order to make space for them (bottom). All skeletons above are reconstructed from our collected data.

3.1 Dataset Collection

To provide a scenario for group interaction behaviors, we adopted a game scene called Who's the Spy. Forty participants (27F:13M) aged between 22 and 35 years old (M=25.8, SD=3.2) were recruited from the local city and the university through public bulletins and online advertisements to participate in the motion capture sessions. Three small booklets were distributed to three group players, and each booklet
Inspired by that, we develop attention-based models forgroup behavior recognition during human-group interactions. Twomodels are presented in this section including Approach Group Net(AGNet ) and Approach Group Transformer (AGTransformer ). These\ntwo models share a similar architecture with different types of agentencoders.\nFigure 2 : The overview of the attention-based architecture. The full-\nbody markers of three group members are encoded through a sharedGroup Member Encoder and the newcomer is encoded through an-\nother Newcomer Encoder. Both encoders are instantiated from the\nAgent Encoder which encodes the importance of the spatial and tem-\nporal information. The output from the agent encoders is fed to theGroup Attention Module in order to find which group member exerts\nmore impact on the overall group behaviors. The output is then usedto classify the group behavior to be Accommodate orIgnore\nThe overview of the attention-based architecture is shown in Fig-\nure 2. There are three humans P\n1,P2,P3in a conversational group\nand a newcomer Qapproaches to join the group. The input is a se-\nquence of tensors which contains 3D positions of full-body mark-ers from each agent during a period of time. The Agent Encoder\n(dashed yellow boxes in Figure 2) is developed to extract the tempo-ral and spatial information of all the markers from both group mem-bers (Group Member Encoder ) and the newcomer (Newcomer En-\ncoder ). Note that the details of the Agent Encoder will be discussed\n1https://optitrack.com/later in separate models (see Section 3.2.1&3.2.2). The output of the\nAgent Encoder encodes the full-body markers of an agent with thefocus on important markers at important time frames. It has the formH\n⋆\nPi=[h1P\ni,h2P\ni,···,hKP\ni]TandH⋆Q, where i=1,2,3for three\ngroup members, and Kis the hidden layer dimension.\nUtilizing the Agent Encoder, the network is able to encode the\nimportant spatial and temporal information from each group mem-\nber and the newcomer. We thus need a higher-level Group Attention(GA) module to find out which group member exerts a larger impacton the overall group behaviors. The group attention score is com-puted with two fully-connected layers with tanh and softmax activa-\ntions:\nC=softmax( tanh (W\ngroup H⋆)) (1)\nwhere Wgroup is a weight matrix and H⋆=[H⋆\nP1,H⋆P\n2,H⋆P\n3,H⋆Q].\nThe group attention score is then used to modulate the output of the\nAgent Encoders:\nS⋆=C⊙H⋆(2)\nwhere S⋆is aK×4matrix which encodes the importance of both\ngroup members and the newcomer, and ⊙represents Hadamard\nproduct. 
Two attention-based models are presented below; they share the architecture shown in Figure 2 and differ in the Agent Encoder.

3.2.1 The AGNet architecture

The Agent Encoder of AGNet (see Figure 3) is implemented with two modules: a temporal attention module, separate for each marker, followed by a body attention module which learns the subset of the body markers that play an important role in the full-body behaviors.

Figure 3: The Agent Encoder of the AGNet.

Temporal Attention Module. A shared Long Short-Term Memory (LSTM) is used to extract the temporal information for each of the 37 markers independently, i.e., given a marker $M_m$ of a group member $P_i$, the output of the LSTM encoder is a $K \times T$ matrix of hidden states $H_{P_i,M_m} = [H^1_{P_i,M_m}, H^2_{P_i,M_m}, \cdots, H^T_{P_i,M_m}]$, where $T$ is the temporal length of the input data matrix and $H^t_{P_i,M_m} = [h^{t,1}_{P_i,M_m}, h^{t,2}_{P_i,M_m}, \cdots, h^{t,K}_{P_i,M_m}]^T$, where $K$ is the hidden layer dimension. These outputs are then fed into the Temporal Attention Module. Wang et al. [30] showed that a $1 \times 1$ convolutional layer can help to reduce the number of trainable parameters compared with a fully-connected layer, i.e., it limits irrelevant temporal connections. We thus use a $1 \times 1$ convolutional layer with a softmax activation to learn the temporal attention score $a_{P_i,M_m} = \mathrm{softmax}(W_{\mathrm{temp}} H_{P_i,M_m})$, where $W_{\mathrm{temp}} \in \mathbb{R}^{1 \times K}$ is a weight matrix. The temporal attention score is further used to combine the original output of the LSTM encoder over moments in time:

$$H'_{P_i,M_m} = \sum_{t=1}^{T} a^t_{P_i,M_m} H^t_{P_i,M_m} \quad (3)$$

Body Attention Module. The temporal attention module encodes the information from each marker separately. We thus need a body attention module to learn a body attention score for each marker, in order to better identify the subset of the full-body markers that play an important role in the full-body behaviors. Similar to [30], two fully-connected layers with tanh and softmax activations are used to compute the body attention score:

$$b_{P_i} = \mathrm{softmax}(\tanh(W_{\mathrm{body}} H'_{P_i})) \quad (4)$$

where $H'_{P_i} = [H'_{P_i,M_1}, H'_{P_i,M_2}, \cdots, H'_{P_i,M_m}]$ and $W_{\mathrm{body}}$ is a weight matrix. The body attention score is then used to merge the output of the temporal attention module over markers:

$$H^\star_{P_i} = \sum_{m=1}^{37} b_{P_i,M_m} H'_{P_i,M_m} \quad (5)$$

As mentioned above, each $H^\star_{P_i}$ then goes through group attention. The vector $H^\star_Q$ representing the newcomer is computed in a similar manner, but in another instance of the Agent Encoder, i.e., the Newcomer Encoder.
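The per-marker temporal attention of Eq. (3) can be sketched as follows; this is a simplified single-marker version under assumed shapes (a batch of one marker's 3D trajectory), not the paper's full 37-marker encoder.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Sketch of Eq. (3): an LSTM encodes one marker's trajectory, a 1x1
    convolution (W_temp) scores each time step, and the softmaxed scores
    pool the hidden states over time."""
    def __init__(self, in_dim: int = 3, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.score = nn.Conv1d(hidden, 1, kernel_size=1)  # 1x1 conv, W_temp

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, 3) -- 3D positions of one marker over T frames
        h, _ = self.lstm(x)                                    # (batch, T, K)
        a = torch.softmax(self.score(h.transpose(1, 2)), -1)   # (batch, 1, T)
        return (a.transpose(1, 2) * h).sum(dim=1)              # Eq. (3): (batch, K)

# Usage: a batch of 2 sequences, 60 frames each.
pooled = TemporalAttention()(torch.randn(2, 60, 3))
print(pooled.shape)  # torch.Size([2, 16])
```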
3.2.2 The AGTransformer architecture

The Transformer model [27] has proven successful in learning a better representation of each element in a sequence for machine translation tasks. This inspired us to apply Transformer layers in our AGTransformer model as a self-attention mechanism for sequential information. The Agent Encoder of the AGTransformer uses two Transformer layers, the Marker Transformer and the Body Transformer. The Marker Transformer learns a deeper representation for each body marker by capturing its self-attention across time frames, and the Body Transformer captures the self-attention of each marker in relation to the others. Besides a marker embedding layer, which embeds all input markers into a fixed-dimension vector, we have a positional embedding layer which encodes the temporal position of each marker, similar to word positions in [27].

Figure 4: The Agent Encoder of the AGTransformer.

Transformer Layer. The Transformer layer (see the right green box in Figure 4) contains a multi-head self-attention layer and a feed-forward layer, and each of these two layers has a residual connection followed by a standard normalization step. The multi-head self-attention is defined as:

$$\mathrm{MH}(H) = \mathrm{Concat}(\mathrm{head}_1, \mathrm{head}_2, \cdots, \mathrm{head}_h) W_h \quad (6)$$

where $H$ is the embedded matrix of each marker on sequential frames, $W_h \in \mathbb{R}^{h \times K}$ is a weight matrix, and $\mathrm{head}_i = \mathrm{Attention}(H W^Q_i, H W^K_i, H W^V_i)$ is a scaled dot-product attention:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^T}{\sqrt{d}}\right) V \quad (7)$$

Here $Q, K, V$ represent query, key, and value vectors of length $d$ (see [27] for details), $W^Q_i, W^K_i, W^V_i \in \mathbb{R}^{K \times d}$ are projection matrices, and $K$ is the embedding dimensionality. The output is then fed into a feed-forward layer. Note that both dropout and LeakyReLU are used in the multi-head self-attention layer and the feed-forward layer to avoid overfitting. The output is passed through multilayer perceptrons (MLPs) to reduce dimensionality; e.g., the MLP after the Marker Transformer reduces the temporal dimension.

We use two Transformer layers in our AGTransformer architecture in order to learn a representation for an agent via the self-attention of the body markers over a period of time. The final output is embedded to the same dimension as the output of the AGNet Agent Encoder through an MLP.

In summary, the attention-based models share the same architecture with differences in the Agent Encoder. On the individual level, the full-body markers of each agent are encoded through the Agent Encoder, and on the group level, the outputs from the agent encoders are fed to the Group Attention module to combine them with attention weights before being sent to a classifier.

3.3 Graph-Based Models

We introduce Approach Group Graph Convolutional Networks (AG-GCN) for group behavior recognition. Figure 5 shows an overview of AG-GCN. The input data is a skeleton graph; its construction is described in Section 3.3.1. Hierarchically, the model consists of two levels, the individual level and the group level, which we discuss in Section 3.3.2. Finally, we describe the multi-temporal model in Section 3.3.3.

3.3.1 Skeleton Graph Construction

We create a spatial-temporal graph from the sequence of marker coordinates. The format of the input data of the graph neural network differs significantly from that of the attention models. As described in Section 3.2, the attention models concatenate the features of all markers into one vector. In the graph neural network, we instead convert the data to a graph structure based on the spatial structure of the human skeleton with related markers. For example, as shown in Figure 6(a), markers (each with an ID) are connected to construct the skeleton graph.

Spatially, the skeleton graph can be represented as an undirected graph $G_S = (V_S, E_S)$, where $V_S$ is the set of nodes and $E_S$ is the set of edges. Each marker is a node, and edges exist between connected markers. For a skeleton with $N$ nodes, $V_S = \{v_i \mid i = 1, \ldots, N\}$. There also exist temporal connections that connect the same marker nodes in consecutive frames.
For a sequence with $T$ frames, $v_{t,i}$ connects to $v_{t-1,i}$ and $v_{t+1,i}$ along the temporal dimension. For the node $v_i$, the temporal connections are represented as $G_i = (V_i, E_i)$, where $V_i = \{v_{t,i} \mid t = 1, \ldots, T\}$ and $E_i$ represents the temporal edges. The whole graph sequence is composed of the spatial graph and the temporal graph.

Figure 5: Overview of the Approach Group Graph Convolutional Neural Network (AG-GCN) for group behavior analysis. The full-body markers are connected as skeleton graphs and fed into the Multi-Spatial-Temporal Graph Convolutional Network (MST-GCN), which encodes the markers' spatial relationships and movement over time. The group members (P1, P2, P3) share the same model, while the newcomer (Q) is modeled through another MST-GCN model. The output from the MST-GCN module is then fed to the Group Graph Convolutional Neural Network (G-GCN) module, which encodes the group's spatial relationships. The output is used to classify the group behavior as either Accommodate or Ignore.

Figure 6: The partition strategy: (a) Spatial configuration partition strategy. On the individual level, the nodes in a neighbor set are divided into three sets: the node itself, the nodes that are closer to the center of the graph, and the nodes that are farther away. (b) Distance partition strategy. On the group level, the nodes in a neighbor set are divided into two sets: the node itself and the neighbor nodes.

3.3.2 Spatial Graph Convolutional Neural Network

The Spatial Graph Convolutional Neural Network (S-GCN) extends convolution operations on images [7] to graphs. On graphs, we can define a sampling function on a node and its neighbor set. Unlike image convolutions, in a skeleton graph the nodes within a neighborhood do not have a fixed spatial order. To address this problem, Kipf et al. [17] proposed a strategy equivalent to calculating the inner product of the average feature vector in the set and a weight vector. Yan et al. proposed the spatial configuration partition and the distance partition [33]. In our implementation, we follow the same idea. Hierarchically, the whole model is divided into two levels, the individual level and the group level. The individual level adopts the spatial configuration partition and the group level adopts the distance partition. The spatial configuration partition divides a neighborhood into three subsets: the node itself, the nodes that are closer to the center of the graph, and the nodes that are farther away.

For example, as shown in Figure 6(a), we assume the chest (marker-6, the black node) is the center of the body. Within the neighbor set of marker-19 (within the blue dotted line), there are three subsets: marker-8 (the green node, closer to the center), marker-19 (the red node, the node itself), and marker-22 (the yellow node, farther away from the center). On the group level, the distance partition divides the nodes of the agent set into two subsets: the node itself and the neighbor nodes. When the newcomer (agent in the red circle) is the root node (to the left in Figure 6(b)), there are two subsets, the newcomer itself and all other group members (agents in the blue circles). When one of the group members is the root node (to the right in Figure 6(b)), the two subsets are this group member and the newcomer. After dividing the nodes in a neighbor set into several subsets, we can determine the spatial order based on the order of the subsets. For example, in the distance partition, the index of the root node itself is 0 and the index of the neighbor nodes is 1.
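The two partition strategies can be sketched in a few lines; the dict-based adjacency, the hop counts, and the marker IDs below are illustrative assumptions based on the Figure 6(a) example, not the paper's actual data structures.

```python
# Minimal sketch of the two neighbor-set partitions described above.
def spatial_configuration_partition(adj, hops_to_center, root):
    """Individual level: split root's neighbor set into subsets
    {0: itself, 1: closer to the body center, 2: farther away}."""
    subsets = {0: [root], 1: [], 2: []}
    for nbr in adj[root]:
        closer = hops_to_center[nbr] < hops_to_center[root]
        subsets[1 if closer else 2].append(nbr)
    return subsets

def distance_partition(agents, root):
    """Group level: subset 0 is the root itself, subset 1 its neighbors."""
    return {0: [root], 1: [a for a in agents if a != root]}

# Toy fragment around marker-19 from Figure 6(a): marker-8 is closer to
# the chest (marker-6) than marker-19 is, and marker-22 is farther away.
adj = {19: [8, 22]}
hops = {6: 0, 8: 1, 19: 2, 22: 3}
print(spatial_configuration_partition(adj, hops, 19))  # {0: [19], 1: [8], 2: [22]}
```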
Using the neighborhood subsets defined for both the individual and group levels, graph convolutions are performed by the corresponding networks, S-GCN (to the right in Figure 5) and G-GCN (at the bottom left). With $x_i$ and $y_i$ being the feature maps of node $v_i$ before and after a graph convolution operation, a graph convolution can be defined as:

$$y_i = \sum_{v_j \in S_i} \frac{x_j}{D_{v_i}(v_j)} \, w(l_{v_i}(v_j)) \quad (8)$$

where $S_i$ is the set of neighbor nodes of $v_i$, $w$ is a weight function, $l_{v_i}(v_j)$ is a mapping from $v_j$ to the index of its corresponding subset, and the normalizing term $D_{v_i}(v_j)$ is the number of nodes in this subset. Essentially, an average is computed for each subset, with the output being a weighted sum of these averages.

To improve the results, we calculate distance and angle features and concatenate them to the output of the previous level as the input of the group level. As stated in [34, 35], body distances and head orientations between two people in group interactions have been shown to be essential factors in group behavior analysis. For group members, the distance feature is the distance to the newcomer, and the angle feature is the angle between the head orientations of the member and the newcomer. For the newcomer, the values of these two features are the averages of the group members' features.

3.3.3 Multi-Temporal Convolutional Neural Network

Before delving into the multi-temporal convolutional neural network, we first review the temporal convolutional network (TCN). Instead of ordinary convolutions along the temporal dimension, TCN uses dilated convolutions (shown in the 'Stage-n' box of Figure 5) to enable larger receptive fields for higher layers of the network [4]. Given the frames of a sequence $x_{1:T} = (x_1, \ldots, x_T)$ and a filter kernel $f_k$, $k = 0, \ldots, K-1$, the dilated convolution operation on $x_t$ in the temporal domain is defined as:

$$y_t = \sum_{k=0}^{K-1} f_k \, x_{t - dk} \quad (9)$$

where $d$ is the dilation factor and $K$ is the filter size. Residual connections are further adopted to promote gradient flow, speeding up training and improving accuracy. In the residual block, the inputs are added to the outputs (orange rectangle 'Layer-n' in Figure 5). From one layer to the next, the dilation factor $d$ is doubled.

In the skeleton graph construction (Section 3.3.1), nodes are connected to the same nodes of consecutive frames in the temporal domain. Similar to TCN applied to image sequences [4], TCN on graph sequences can be extended to multiple stages. In the multi-stage TCN model [9], the input to the first stage is the original sequence, and each stage generates a refined prediction based on the previous stage:

$$Y_0 = X_{1:T}, \qquad Y_s = F(Y_{s-1}) \quad (10)$$

where $X_{1:T}$ is the original sequence, $F$ is each stage, and $Y_s$ is the output of stage $s$. We stack several stages sequentially and concatenate the predictions of each stage, as illustrated in Figure 5.

In summary, in contrast to ST-GCN [33], which has only one stage with multiple layers and no dilation, MST-GCN computes a feature vector that is a concatenation of features from a series of stages. Each stage consists of a number of residual layers, where each such layer includes a spatial GCN followed by a dilated TCN over the temporal domain, with a residual connection. AG-GCN further adds a group GCN on the output of all the MST-GCNs of the group, before the combined result is sent to a classifier.
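A minimal sketch of one dilated temporal layer from Eq. (9), with the residual connection and the doubling dilation described above; the channel count and layer depth here are illustrative assumptions, not the exact MST-GCN configuration.

```python
import torch
import torch.nn as nn

class DilatedResidualLayer(nn.Module):
    """One temporal layer per Eq. (9): a dilated 1D convolution over the
    time axis, with the inputs added to the outputs (residual block)."""
    def __init__(self, channels: int, kernel_size: int = 9, dilation: int = 1):
        super().__init__()
        pad = (kernel_size - 1) // 2 * dilation  # keep the sequence length T
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, T)
        return x + torch.relu(self.conv(x))

# Usage: stack layers with doubling dilation, as in one temporal stage.
stage = nn.Sequential(*[DilatedResidualLayer(64, dilation=2 ** i) for i in range(3)])
out = stage(torch.randn(2, 64, 60))  # shape preserved: (2, 64, 60)
```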
4 Experiment

Data Preparation. We run a sliding window over the data sequences to pre-process the data. The window length is 180 frames with an overlap ratio of 0.75. All samples are down-sampled to 60 frames. 5-fold cross-validation is further applied to make full use of the data.

Implementation Details. The data source (https://www.csc.kth.se/~chpeters/ESAL/infrastructure.html) and code (https://github.com/YIN95/Group-Behavior-Recognition) are available online. AGNet is trained with 16 embedding dimensions, and the LSTM encoder contains 3-layer LSTM networks. The dimension of the hidden state is 16 for each layer. Dropout with probability 0.5 is used after each LSTM layer. AGTransformer also has 16 embedding dimensions, and the Transformer layer has 8 heads with the query, key, and value vector sizes set to 64.

As for the model details of AG-GCN, the model is hierarchically composed of two levels. The individual level has three temporal stages, and each stage has three layers. The numbers of channels in these three temporal stages are 64, 128, and 256. A pooling layer with a stride of 2 exists between every two stages. The size of the temporal kernel is 9, and the dilation factor is doubled at each layer. An average pooling is performed after each stage, and we concatenate the features as the input of the group level. For the group level, the number of channels is 64. The GCN is followed by a fully connected layer and a softmax classifier.

4.1 Experimental Results

The classification performances of the attention-based and graph-based neural networks are presented in Table 1. The graph-based models generally perform better than the attention-based ones, with higher F1 scores, and AG-GCN with the additional features of body distance and head orientation achieves the best performance (highest F1 score).

Table 1: Confusion matrix and F1 score for group behavior classification. "GT" means ground truth, "A" means Accommodate, and "I" stands for Ignore. For the meaning of each network abbreviation, please refer to Section 4.1.

Model | GT | Predicted A | Predicted I | F1 score
---|---|---|---|---
AGNet | A | 3940 (76.27%) | 1226 (23.73%) | 0.754
AGNet | I | 1345 (38.42%) | 2156 (61.58%) |
AGTransformer | A | 4825 (93.40%) | 341 (6.60%) | 0.842
AGTransformer | I | 1473 (42.07%) | 2028 (57.93%) |
ST-GCN + Group Attention | A | 4713 (91.23%) | 453 (8.77%) | 0.892
ST-GCN + Group Attention | I | 688 (19.65%) | 2813 (80.35%) |
ST-GCN + Group GCN | A | 4763 (92.20%) | 403 (7.80%) | 0.919
ST-GCN + Group GCN | I | 438 (12.51%) | 3063 (87.49%) |
MST-GCN + Group Attention | A | 4806 (93.03%) | 360 (6.97%) | 0.926
MST-GCN + Group Attention | I | 411 (11.73%) | 3090 (88.27%) |
AG-GCN (MST-GCN + Group GCN) | A | 4822 (93.32%) | 344 (6.68%) | 0.930
AG-GCN (MST-GCN + Group GCN) | I | 389 (11.11%) | 3112 (88.89%) |
AG-GCN (dis & ori) | A | 4839 (93.67%) | 327 (6.33%) | 0.941
AG-GCN (dis & ori) | I | 276 (7.88%) | 3225 (92.12%) |

AGTransformer achieves True Positives (TP) comparable to the graph-based models. A possible reason is that the Transformer layer provides a better capability to capture the sequential information of all markers than the simpler attention module in AGNet. AGTransformer has high False Positives (FP), as it recognizes some Ignore behaviors as Accommodate. The reason might be that the markers are passed to the Body Transformer layers without ordering, which makes AGTransformer fail to attend to the markers that are representative of Accommodate behaviors.
Note that Ignore behaviors are not static and contain body motions, but these motions occur mostly within the group rather than being directed at the newcomer.

We then evaluate the graph-based networks by analyzing the effectiveness of the proposed modules in AG-GCN. We first compare the spatial-temporal graph convolutional neural network (ST-GCN) [33] with Group Attention (GA) to AGNet, letting ST-GCN replace the body attention and temporal attention modules in AGNet. As seen in Table 1, ST-GCN+GA significantly outperforms the attention-based methods. In AGNet and AGTransformer, all body markers are simply concatenated as input features without spatial ordering. In ST-GCN, by contrast, the spatial information among markers is naturally preserved by the graph structure, and the motion trajectory is expressed in the form of temporal edges. The use of graph convolutional networks enhances the association among body markers and reduces network complexity.

We also evaluate the benefit of using graph neural networks on the group level by combining ST-GCN with a group-level GCN (ST-GCN+Group GCN), in effect replacing the Group Attention module in ST-GCN+GA with a GCN. On the individual level, each agent is thus modeled by ST-GCN, while on the group level, a GCN is utilized to model the spatial relationships among group members. With these two levels, the performance improves further.

In ST-GCN+GA and ST-GCN+Group GCN, a single-stage temporal convolutional network (TCN) without dilation is adopted. To verify that the multi-temporal-stage architecture is better than a single-temporal-stage one, we train multi-temporal-stage networks that have the same number of parameters as the single-stage ones. We can observe in Table 1 that MST-GCN+GA outperforms ST-GCN+GA. Applying multiple temporal stages to the ST-GCN, AG-GCN raises the F1 score to 0.930. Even though AG-GCN already performs quite well, the recognition performance can still be improved by concatenating the features of body distance and head orientation to the output of each MST-GCN.

5 Virtual Reality Use Case

In this section, we present a use case in a virtual environment to show how the model can be applied. A common pattern of online multi-agent interactions is that small groups of people gather and stand in an environment to converse, e.g., in VRChat (https://www.vrchat.com/). The group behaviors should account for the interactions between the group and a newcomer: the group members either ignore the newcomer or react to them by adjusting their positions and orientations to better accommodate the newcomer in the group formation. The motivation for establishing such a virtual reality interaction environment is to show that the capability of perceiving group behaviors is desirable for artificially intelligent agents, in cases where the agents should be socially acceptable when approaching free-standing conversational groups. Such a capability is also essential in the pedagogical domain, where students could learn when and how to join a group politely in this virtual environment.

To generate such a virtual scenario supporting multi-agent interactions, a cloud-based VR platform, SIGVerse (http://www.sigverse.org/wiki/en/) [25], is used. A MIXAMO (https://www.mixamo.com/) 3D humanoid character with no facial features is used in order to ensure that the perception of the characters results exclusively from the body behaviors.
Similar to the aforementioned data collection scenario, four participants engage in the use case (see Figure 7, top-left), i.e., three group members are in a free-standing conversational group and one newcomer approaches the group to join the conversation.

Each of these four participants controls one virtual character through a VR device (see Figure 7, bottom); four VR devices are used in total, namely three Oculus Rift S and one Oculus Rift CV1. The VR devices track head and hand movements, and these data are simultaneously transferred to control the virtual characters. Note that the lower-body motions are resolved by the built-in Inverse Kinematics (IK) system in SIGVerse. The full-body motion data are passed to our trained AG-GCN model to determine whether the group members accommodate or ignore the newcomer in real time (see Figure 7, top-right). Note that the participants can operate and control the virtual characters from separate places since the scenario is cloud-based.

Figure 7: The virtual reality use case. The perspective view of the multi-agent interaction scenario, where a newcomer approaches to join a conversational group (top-left). The first-person view of the newcomer with the normalized probability of group behaviors represented by color bars (top-right). Each participant controls one virtual character with one VR device (bottom).

6 Conclusion

In this paper, we propose novel attention- and graph-based models to recognize dynamic group behaviors when a group is approached by a newcomer. A novel full-body motion capture dataset of conversational groups is collected to understand and learn group behaviors. The experimental results show that the graph-based models outperform the attention-based models by leveraging graph convolutional networks to learn the representations within the agent body and the human group. We further present a use case in a virtual environment to recognize group dynamics in real time using a trained AG-GCN.

In social scenarios, dynamic group behavior recognition has rarely been studied due to its complexity and the lack of data. This motivated us to apply our models to dynamic group behaviors in a ubiquitous scenario where a conversational group is approached by a newcomer. We believe our models and methods, which involve virtual environments, are suitable for extension to other group behavior scenarios with minor modifications to the architecture. In the future, we plan to generate autonomous and socially acceptable behaviors for an agent/robot to approach or coordinate with groups.

Acknowledgements

This research has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement no. 824160 (EnTimeMent). This research is also supported by the 2019 NII International Internship Program and Inamura Lab.
REFERENCES
[1] Xavier Alameda-Pineda, Jacopo Staiano, Ramanathan Subramanian, Ligia Batrinca, Elisa Ricci, Bruno Lepri, Oswald Lanz, and Nicu Sebe, 'SALSA: A novel dataset for multimodal group behavior analysis', IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(8), 1707-1720, (2015).
[2] Xavier Alameda-Pineda, Yan Yan, Elisa Ricci, Oswald Lanz, and Nicu Sebe, 'Analyzing free-standing conversational groups: A multimodal approach', in Proceedings of the 23rd ACM International Conference on Multimedia, pp. 5-14. ACM, (2015).
[3] Philipp Althaus, Hiroshi Ishiguro, Takayuki Kanda, Takahiro Miyashita, and Henrik I. Christensen, 'Navigation for human-robot interaction tasks', in IEEE International Conference on Robotics and Automation (ICRA '04), volume 2, pp. 1894-1900. IEEE, (2004).
[4] Shaojie Bai, J. Zico Kolter, and Vladlen Koltun, 'An empirical evaluation of generic convolutional and recurrent networks for sequence modeling', arXiv preprint arXiv:1803.01271, (2018).
[5] Sovan Biswas and Juergen Gall, 'Structural recurrent neural network (SRNN) for group activity analysis', in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1625-1632. IEEE, (2018).
[6] Marco Cristani, Loris Bazzani, Giulia Paggetti, Andrea Fossati, Diego Tosato, Alessio Del Bue, Gloria Menegaz, and Vittorio Murino, 'Social interaction discovery by statistical analysis of F-formations', in BMVC, volume 2, p. 4, (2011).
[7] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei, 'Deformable convolutional networks', in Proceedings of the IEEE International Conference on Computer Vision, pp. 764-773, (2017).
[8] Yong Du, Wei Wang, and Liang Wang, 'Hierarchical recurrent neural network for skeleton based action recognition', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1110-1118, (2015).
[9] Yazan Abu Farha and Jurgen Gall, 'MS-TCN: Multi-stage temporal convolutional network for action segmentation', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3575-3584, (2019).
[10] Yuan Gao, Fangkai Yang, Martin Frisk, Daniel Hernandez, Christopher Peters, and Ginevra Castellano, 'Social behavior learning with realistic reward shaping', in 2019 28th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, (2019).
[11] Erving Goffman, Encounters: Two Studies in the Sociology of Interaction, Ravenio Books, 1961.
[12] Fei Han, Brian Reily, William Hoff, and Hao Zhang, 'Space-time representation of people based on 3D skeletal data: A review', Computer Vision and Image Understanding, 158, 85-105, (2017).
[13] Hayley Hung and Ben Kröse, 'Detecting F-formations as dominant sets', in Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 231-238. ACM, (2011).
[14] Mostafa S. Ibrahim, Srikanth Muralidharan, Zhiwei Deng, Arash Vahdat, and Greg Mori, 'A hierarchical deep temporal model for group activity recognition', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1971-1980, (2016).
[15] Ioannis Kapsouras and Nikos Nikolaidis, 'Action recognition on motion capture data using a dynemes and forward differences representation', Journal of Visual Communication and Image Representation, 25(6), 1432-1445, (2014).
[16] Adam Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, volume 7, CUP Archive, 1990.
[17] Thomas N. Kipf and Max Welling, 'Semi-supervised classification with graph convolutional networks', arXiv preprint arXiv:1609.02907, (2016).
[18] Annica Kristoffersson, Kerstin Severinson Eklundh, and Amy Loutfi, 'Measuring the quality of interaction in mobile robotic telepresence: A pilot's perspective', International Journal of Social Robotics, 5(1), 89-101, (2013).
[19] Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian, 'Actional-structural graph convolutional networks for skeleton-based action recognition', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2019).
[20] Sai Krishna Pathi, 'Join the group formations using social cues in social robots', in Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, (2018).
[21] Francesco Setti, Oswald Lanz, Roberta Ferrario, Vittorio Murino, and Marco Cristani, 'Multi-scale F-formation discovery for group detection', in 2013 IEEE International Conference on Image Processing, pp. 3547-3551. IEEE, (2013).
[22] Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu, 'Skeleton-based action recognition with directed graph neural networks', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7912-7921, (2019).
[23] Peter Stone and Manuela Veloso, 'Multiagent systems: A survey from a machine learning perspective', Autonomous Robots, 8(3), 345-383, (2000).
[24] Ron Sun et al., Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation, Cambridge University Press, 2006.
[25] Jeffrey Too Chuan Tan and Tetsunari Inamura, 'SIGVerse: a cloud computing architecture simulation platform for social human-robot interaction', in 2012 IEEE International Conference on Robotics and Automation, pp. 1310-1315. IEEE, (2012).
[26] Xuan-Tung Truong and Trung-Dung Ngo, 'To approach humans?: A unified framework for approaching pose prediction and socially aware robot navigation', IEEE Transactions on Cognitive and Developmental Systems, 10(3), 557-572, (2018).
[27] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin, 'Attention is all you need', in Advances in Neural Information Processing Systems, pp. 5998-6008, (2017).
[28] Marynel Vázquez, Aaron Steinfeld, and Scott E. Hudson, 'Parallel detection of conversational groups of free-standing people and tracking of their lower-body orientation', in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3010-3017. IEEE, (2015).
[29] Jered Vroon, Michiel Joosse, Manja Lohse, Jan Kolkmeier, Jaebok Kim, Khiet Truong, Gwenn Englebienne, Dirk Heylen, and Vanessa Evers, 'Dynamics of social positioning patterns in group-robot interactions', in 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 394-399. IEEE, (2015).
[30] Chongyang Wang, Min Peng, Temitayo A. Olugbade, Nicholas D. Lane, Amanda C. De C. Williams, and Nadia Bianchi-Berthouze, 'Learning bodily and temporal attention in protective movement behavior detection', arXiv preprint arXiv:1904.10824, (2019).
[31] Di Wu and Ling Shao, 'Leveraging hierarchical parametric networks for skeletal joints based action segmentation and recognition', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 724-731, (2014).
[32] Jianchao Wu, Limin Wang, Li Wang, Jie Guo, and Gangshan Wu, 'Learning actor relation graphs for group activity recognition', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9964-9974, (2019).
[33] Sijie Yan, Yuanjun Xiong, and Dahua Lin, 'Spatial temporal graph convolutional networks for skeleton-based action recognition', in Thirty-Second AAAI Conference on Artificial Intelligence, (2018).
[34] Fangkai Yang and Christopher Peters, 'App-LSTM: Data-driven generation of socially acceptable trajectories for approaching small groups of agents', in Proceedings of the 7th International Conference on Human-Agent Interaction, pp. 144-152. ACM, (2019).
[35] Fangkai Yang and Christopher Peters, 'AppGAN: Generative adversarial networks for generating robot approach behaviors into small groups of people', in 2019 28th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, (2019).
[36] Ming Zeng, Haoxiang Gao, Tong Yu, Ole J. Mengshoel, Helge Langseth, Ian Lane, and Xiaobing Liu, 'Understanding and improving recurrent networks for human activity recognition by continuous attention', in Proceedings of the 2018 ACM International Symposium on Wearable Computers, pp. 56-63. ACM, (2018).
[37] Hong-Bo Zhang, Yi-Xiang Zhang, Bineng Zhong, Qing Lei, Lijie Yang, Ji-Xiang Du, and Duan-Sheng Chen, 'A comprehensive survey of vision-based human action recognition methods', Sensors, 19(5), 1005, (2019).
[38] Wentao Zhu, Cuiling Lan, Junliang Xing, Wenjun Zeng, Yanghao Li, Li Shen, and Xiaohui Xie, 'Co-occurrence feature learning for skeleton based action recognition using regularized deep LSTM networks', in Thirtieth AAAI Conference on Artificial Intelligence, (2016).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "oUbgMWFm-S",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200312",
"forum_link": "https://openreview.net/forum?id=oUbgMWFm-S",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Heuristics for Link Prediction in Multiplex Networks",
"authors": [
"Robert E. Tillman",
"Vamsi K. Potluru",
"Jiahao Chen",
"Prashant P. Reddy",
"Manuela Veloso"
],
"abstract": "Link prediction, or the inference of future or missing connections between entities, is a well-studied problem in network analysis. A multitude of heuristics exist for link prediction in ordinary networks with a single type of connection. However, link prediction in multiplex networks, or networks with multiple types of connections, is not a well understood problem. We propose a novel general framework and three families of heuristics for multiplex network link prediction that are simple, interpretable, and take advantage of the rich connection type correlation structure that exists in many real world networks. We further derive a theoretical threshold for determining when to use a different connection type based on the number of links that overlap with an Erdős-Rényi random graph. Through experiments with simulated and real world scientific collaboration, transportation and global trade networks, we demonstrate that the proposed heuristics show increased performance with the richness of connection type correlation structure and significantly outperform their baseline heuristics for ordinary networks with a single connection type.",
"keywords": [],
"raw_extracted_content": "Heuristics for Link Prediction in Multiplex Networks\nRobert E. Tillman and Vamsi K. Potluru and Jiahao Chen and Prashant Reddy and Manuela Veloso1\nAbstract. Link prediction, or the inference of future or missing\nconnections between entities, is a well-studied problem in network\nanalysis. A multitude of heuristics exist for link prediction in ordinary\nnetworks with a single type of connection. However, link prediction inmultiplex networks, or networks with multiple types of connections, isnot a well understood problem. We propose a novel general framework\nand three families of heuristics for multiplex network link prediction\nthat are simple, interpretable, and take advantage of the rich connec-\ntion type correlation structure that exists in many real world networks.\nWe further derive a theoretical threshold for determining when to use\na different connection type based on the number of links that overlap\nwith an Erd ˝os-R ´enyi random graph. Through experiments with simu-\nlated and real world scientific collaboration, transportation and global\ntrade networks, we demonstrate that the proposed heuristics show\nincreased performance with the richness of connection type correla-\ntion structure and significantly outperform their baseline heuristics\nfor ordinary networks with a single connection type.\n1 Introduction\nNetworks are powerful representations of interactions in complex sys-\ntems with a wide range of applications in biology, physics, sociology,\nengineering and computer science. Modeling interactions between\nentities as links between nodes in a graph allows us to leverage formal\nmethods to understand influence, community structure and other pat-\nterns, make predictions about future interactions and detect unusual\nactivity. The study of networks and their applications has thus become\na major focus of many scientific disciplines in recent decades.\nSince the advent of large-scale online social networks, the link pre-\ndiction problem [18] has received increased attention. Link prediction\nis usually defined in terms of the following two interrelated problems:\n•Given a current snapshot of a network at the present time, what\nnew connections are likely to develop in the future?\n•Given an incomplete network, what connections are likely to be\nactually present but missing from the graph?\nLink prediction has numerous applications including social network\nrecommendation systems for new friends or individuals to follow [ 23],\npredicting protein and metabolic interactions in biological networks\n[26], finding experts and predicting collaborations in scientific co-\nauthorship networks [ 18], identifying hidden interactions of criminal\norganizations [6] and predicting future routes in transit systems [19].\nMost of the existing link prediction literature focuses on ordinary\nnetworks which represent a single type of interaction between entities.\nIn many complex systems, however, we observe multiple types ofinteractions. For example, individuals may interact using multiplesocial networks and cities and transit stations may be linked via\n1JPMorgan AI Research, email: [email protected] carriers, lines or modes of transit. In order to apply standard\ntechniques, these multiple interactions must either be conflated to a\nsingle type, which is not appropriate if they are sufficiently dissimilar,\nor the analysis must be restricted to only one type of interaction. 
This is limiting since conflation restricts our ability to predict the type of future or missing interactions, while using only a single interaction type fails to leverage additional useful information gleaned from other types of interactions in the network.
Multiplex networks are graphical structures that can represent multiple types of interactions between entities [17]. In multiplex networks, connections between entities occur at a layer of the network, which represents a specific interaction type. These networks can be visualized as either a single graph with multiple edge types or a set of ordinary (single-layer or monoplex) graphs with the same nodes but different edges, each corresponding to a different layer. Figure 1 depicts a multiplex network representing 3 types of interactions among 9 entities. In this example, X and V are connected in layer 1, which might correspond to a specific social network, but are not connected in the other layers, which might correspond to other social networks.
[Figure 1: Multiplex network with 3 layers and 9 nodes; each layer contains the same nodes X, Y, Z, U, V, W, P, Q, R with different edge sets.]
While interest in multiplex networks has grown across communities and there is prior work investigating centrality and community structure [17, 15], there is limited existing work on link prediction. In contrast to the multitude of simple heuristics for link prediction in ordinary networks, which have been thoroughly investigated empirically [18] and theoretically [24], we are not aware of any general heuristics for link prediction at specific multiplex network layers.
We propose a novel general framework and three families of heuristics for link prediction in multiplex networks which take advantage of strong cross-layer correlation structure, which has been observed in many real-world complex systems [22]. We show that the performance of the proposed heuristics increases with the strength of cross-layer correlations and that they outperform their baseline heuristics in synthetically generated and real world multiplex networks.

2 Background
We represent an ordinary undirected graph as $G = \langle V, E \rangle$, where $V$ is a set of nodes and $E$ a set of edges. Distinct nodes $v, v' \in V$ are neighbors if they are connected by an edge in $E$; otherwise, they are non-neighbors. $N(v)$ represents the set of neighbors of $v \in V$. The degree of a node is the cardinality of its neighbor set. A path between $u, w \in V$ is an ordered set $\langle v_1, \ldots, v_n \rangle \subset V$ such that $u \in N(v_1)$, $w \in N(v_n)$ and, for $1 \le i < n$, $v_i \in N(v_{i+1})$. We restrict our analysis to undirected graphs in this paper.

2.1 Link Prediction in Single-Layer Networks
[18] provides the first comprehensive introduction to and analysis of the link prediction problem. Recent surveys include [19] and [20]. Link prediction is often posed as a ranking problem where pairs of non-neighbors are scored according to the predicted likelihood of a future or missing connection and the top k highest scoring pairs are selected.
It can also be posed as a binary classification problem where the class of a pair of nodes is whether or not a link exists.
The most extensively studied link prediction techniques are based on similarity heuristics, which score pairs of nodes according to topological features of the network related to coherent assumptions about their similarity [20]. Most similarity heuristics are adapted from techniques from graph theory and social network analysis [18]. We define and discuss some of the most common heuristics below. A more comprehensive list is provided in [20].
Neighbor-based heuristics are based on the idea that a link is most likely to exist between nodes $v$ and $v'$ whose sets of neighbors significantly overlap. This property has been empirically observed in real world networks [21]. The heuristic which most directly implements this concept is Common Neighbors (CN), which is simply the cardinality of the intersection of the neighbor sets [21]:
$CN(v, v') = |N(v) \cap N(v')|$
A related measure is the Jaccard Coefficient (JC), which is the ratio of this intersection to the union of the neighbor sets:
$JC(v, v') = \frac{|N(v) \cap N(v')|}{|N(v) \cup N(v')|}$
Resource Allocation (RA) and Adamic-Adar (AA) [1] score links inversely proportional to the number of neighbors of each common neighbor of the two nodes:
$RA(v, v') = \sum_{u \in N(v) \cap N(v')} \frac{1}{|N(u)|}$
$AA(v, v') = \sum_{u \in N(v) \cap N(v')} \frac{1}{\log |N(u)|}$
Preferential Attachment (PA), adapted from the Barabási-Albert network growth model [3], is the product of node degrees [4]:
$PA(v, v') = |N(v)| \times |N(v')|$
The Product of Clustering Coefficients (PCC) scores the likelihood of a link proportional to the product of the nodes' clustering coefficients, i.e. the number of links between a node's neighbors proportional to the total possible links between those neighbors:
$PCC(v, v') = \prod_{w \in \{v, v'\}} \frac{2\,|\{u, u' \in N(w) : u' \in N(u)\}|}{|N(w)|\,(|N(w)| - 1)}$
These heuristics are simple, interpretable, computationally efficient and highly parallelizable. Their primary disadvantage is that they do not consider paths between nodes without common neighbors [20].
Path-based heuristics consider all paths between nodes. The Katz Score (KS) sums over all paths between two nodes and applies exponential dampening according to path lengths for a specified $\beta$ [16]:
$KS(v, v') = \sum_{p \in paths(v, v')} \beta^{|p|}$
Smaller $\beta$ values result in a heuristic similar to neighbor-based approaches. Rooted PageRank (RPR), based on the PageRank measure for website authoritativeness [8], is defined in terms of the stationary probability that a random walk from $v$ to $v'$, with probability $1 - \alpha$ of returning to $v$ and otherwise moving to a random neighbor, reaches $v'$, represented as $[\pi_v]_{v'}$ [25]:
$RPR(v, v') = [\pi_v]_{v'} + [\pi_{v'}]_v$
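To make the neighbor-based heuristics concrete, the following is a minimal Python sketch over an edge list (function and variable names are ours, not from the paper). The guard in adamic_adar skips degree-one common neighbors, for which log |N(u)| = 0 would otherwise cause a division by zero.

import math

def neighbor_sets(edges):
    """Build the neighbor map N(v) from an iterable of undirected edges."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    return nbrs

def common_neighbors(nbrs, v, w):
    return len(nbrs.get(v, set()) & nbrs.get(w, set()))

def jaccard(nbrs, v, w):
    union = nbrs.get(v, set()) | nbrs.get(w, set())
    return common_neighbors(nbrs, v, w) / len(union) if union else 0.0

def resource_allocation(nbrs, v, w):
    return sum(1.0 / len(nbrs[u]) for u in nbrs.get(v, set()) & nbrs.get(w, set()))

def adamic_adar(nbrs, v, w):
    return sum(1.0 / math.log(len(nbrs[u]))
               for u in nbrs.get(v, set()) & nbrs.get(w, set())
               if len(nbrs[u]) > 1)  # skip degree-1 common neighbors (log 1 = 0)

def preferential_attachment(nbrs, v, w):
    return len(nbrs.get(v, set())) * len(nbrs.get(w, set()))

Each scorer runs in time proportional to the degrees of v and w, which is what makes these heuristics cheap to evaluate over all non-neighbor pairs.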
While comprehensive studies of link prediction have focused on unsupervised prediction using these heuristics, supervised and optimization-based approaches have also been considered. Most of these use similarity heuristics as features, sometimes with additional information, to train a classifier [2, 11] or learn a weighting function [7]. Empirical studies have found that simple neighbor-based heuristics often perform as well as or better than more complex methods [20, 18]. There are some theoretical justifications for their success [24].

2.2 Multiplex Networks
For decades, different disciplines have proposed systems which organize different types of connections between entities, but only recently have there been significant efforts to develop general frameworks for studying networks with multiple layers or types of connections [17]. This increased interest has resulted in disparate terminology and formulations of multiplex networks and related network representations.
One popular formulation of multiplex networks is a graph with multiple edge types which each correspond to different layers. We can represent a multiplex network as $G = \langle V, E, T \rangle$ where $T$ is a set of edge types and each edge in $E$ is between $v, v' \in V$ and of type $t \in T$. Other formulations allow for different node sets and edges which cross layers [9], sometimes referred to as heterogeneous networks. In our setup, edges are always within the same layer and node sets are common across layers. We can thus equivalently represent a multiplex network as a set of graphs with the same node set, where each graph represents a different layer in the network, e.g. $G = \langle G^1, \ldots, G^k \rangle$.

3 Multiplex Network Link Prediction Heuristics
The framework we propose for specifying heuristics for link prediction in multiplex networks is inspired by the rich connection type correlation structures that have been empirically observed in many real world complex systems [22]. We provide a general approach to defining heuristics in terms of topological features across layers of a multiplex network, weighted according to this structure. The motivation for this approach is that real world multiplex networks often contain sets of layers which are highly (positively or negatively) correlated but many pairs of layers which are not strongly correlated. When predicting links at a given layer, we would like to take advantage of structural information from other layers which are highly correlated, but ignore layers where correlations are weak.

3.1 Cross-Layer Correlation
First, we define correlation between multiplex network layers. Previous work comparing layers primarily considers layer similarity in terms of shared edges and hubs (high degree nodes) [9, 22]; however, for specific problems, it may be appropriate to consider higher-order structural features [5], e.g. shared triangles, or other contextual information. Our framework is general enough that it can be adapted to the specific needs of a particular application, allowing the specification of both the relevant features and the metrics used to define correlation.
As an initial step, we define a property matrix for a multiplex network, following [9], which specifies the relevant features in terms of which cross-layer correlation is considered.
For example, to calculate cross-layer correlation in terms of shared edges, we construct the following property matrix $P$ for the multiplex network depicted in Figure 1. Rows in $P$ represent layers, and columns represent unique node pairs; entries of 1 or 0 indicate the presence or absence of an edge, respectively:

              X-Y   X-U   X-V   ...
  Layer 1   [  1     1     1    ... ]
  Layer 2   [  1     1     0    ... ]
  Layer 3   [  1     0     0    ... ]

Similarly, to compare layers in terms of shared hubs, we make the columns represent nodes and have the entries indicate the node degree in each layer. For a property matrix $P$ we use $p^i$ to indicate the property vector for the $i$th layer and $p^i_j$ the value in the $j$th column for layer $i$. By convention, all vectors are treated as column vectors. When property matrices/vectors are defined in terms of shared edges or shared hubs, we refer to them as edge property matrices/vectors or degree property matrices/vectors, respectively.
We next construct a cross-layer correlation matrix $C$ from a $k \times x$ property matrix $P$ by setting the diagonal entries of $C$ to 1 and the off-diagonal entries $c_{i,j}$ to the value resulting from some correlation metric applied to the property vectors $p^i$ and $p^j$. For example, using Pearson correlation we get the following for the off-diagonals, where $\bar{p}^i = \frac{1}{x} \sum_{j=1}^{x} p^i_j$ denotes the mean taken with respect to property vector $i$:
$c_{i,j} = \frac{(p^i - \bar{p}^i)'(p^j - \bar{p}^j)}{\sqrt{(p^i - \bar{p}^i)'(p^i - \bar{p}^i)\,(p^j - \bar{p}^j)'(p^j - \bar{p}^j)}}$
While Pearson correlation is an appropriate metric for edge property matrices, Spearman (rank-based) correlation is more appropriate for degree property matrices since denser layers may have the same rank ordering of hubs, but with different degrees. We focus on correlation metrics as opposed to general distance metrics since they distinguish positive from negative correlation, which has been observed in real world networks and which we account for in our proposed heuristics.
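A minimal numpy sketch of this construction (our own names; it assumes each layer is given as a list of undirected edges over a common node set). Pearson correlation of the rows follows directly from np.corrcoef; for degree property matrices one would rank-transform the rows first to obtain Spearman correlation.

import numpy as np
from itertools import combinations

def edge_property_matrix(layers, nodes):
    """Rows are layers, columns are unordered node pairs; entries are 1
    when the pair is an edge at that layer and 0 otherwise."""
    pairs = list(combinations(sorted(nodes), 2))
    index = {pair: j for j, pair in enumerate(pairs)}
    P = np.zeros((len(layers), len(pairs)))
    for i, edges in enumerate(layers):
        for u, v in edges:
            P[i, index[tuple(sorted((u, v)))]] = 1.0
    return P

def cross_layer_correlation(P):
    """Pearson cross-layer correlation matrix C with unit diagonal.
    Note: a constant row (an empty or complete layer) yields NaN entries."""
    C = np.corrcoef(P)
    np.fill_diagonal(C, 1.0)
    return C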
3.2 Multiplex Network Heuristics
We now propose three multiplex network heuristics which use cross-layer correlation structure to weight features observed across layers. Each is defined in terms of a specified cross-layer correlation matrix $C$, allowing for the use of any property matrix and correlation metric. First, we define the following normalization for a layer $i$ and $C$:
$Z^i_C = \sum_{l=1}^{k} |c_{i,l}|$
The first and simplest heuristic, Count and Weight by Correlation (CWC), counts the number of layers which contain a link between two nodes and weights that count according to the cross-layer correlations.
Heuristic 1 (Count and Weight by Correlation). Let $G = \langle G^1, \ldots, G^k \rangle$ be a multiplex network with edge property vectors $e^1, \ldots, e^k$ and cross-layer correlation matrix $C$. CWC is defined for a layer $i$ and a possible edge represented by an edge property vector index $j$ as follows:
$\frac{1}{Z^i_C} \sum_{l=1}^{k} \begin{cases} e^l_j \, c_{i,l}, & c_{i,l} > 0 \\ (1 - e^l_j) \, |c_{i,l}|, & c_{i,l} < 0 \end{cases}$
For example, to score a link in the multiplex network in Figure 1 between X and V at layer 2 using CWC, we would proceed with the following calculation (assuming only positive correlations):
$\frac{1}{Z^2_C} \left( 1 \times c_{2,1} + 0 \times c_{2,3} \right)$
Only $c_{2,1}$ receives weight in the numerator since X and V are connected in layer 1 but not in layer 3.
CWC encodes the intuition that correlated layers should have similar links: the more correlated a layer which does not contain a particular link is to another layer which does contain that link, the more likely it is that the link is missing or will develop in the future. CWC also takes anti-correlation into account: a link is more likely to be predicted if it is missing from a layer which is anti-correlated. Despite its simplicity, this heuristic performs extremely well in practice.
The second heuristic, Correlation Weighted Heuristic (CWH), extends the heuristics discussed in the previous section to the multiplex domain by applying them across layers of a multiplex network and weighting them according to cross-layer correlations. While empirical studies have found that no particular monoplex heuristic consistently outperforms all others [18], there may be problem-specific reasons to prefer a particular heuristic. For example, if we know there are few long paths between nodes, a neighbor-based heuristic is likely to perform at least as well as a path-based heuristic at a lower computational cost. Taking this into consideration, CWH allows any monoplex heuristic to be extended to multiplex networks.
Heuristic 2 (Correlation Weighted Heuristic). Let $G = \langle G^1, \ldots, G^k \rangle$ be a multiplex network with cross-layer correlation matrix $C$. Let $h^l_j$ be a heuristic for monoplex networks evaluated at layer $l$ of $G$ for a possible edge represented by an edge property vector index $j$. Then, CWH is defined for a layer $i$ and possible edge index $j$ as follows:
$\frac{1}{Z^i_C} \sum_{l=1}^{k} \begin{cases} h^l_j \, c_{i,l}, & c_{i,l} > 0 \\ (1 - h^l_j) \, |c_{i,l}|, & c_{i,l} < 0 \end{cases}$
For example, to score a link in the multiplex network in Figure 1 between X and V at layer 2 using CWH with Common Neighbors as the monoplex heuristic, we would proceed with the following calculation (assuming only positive correlations):
$\frac{1}{Z^2_C} \left[ CN^1(X, V) \times c_{2,1} + CN^2(X, V) + CN^3(X, V) \times c_{2,3} \right]$
CWH is similarly based on the intuition that since existing monoplex heuristics have been shown to be predictive of missing and future links in single-layer networks, they should also be predictive in correlated layers of multiplex networks, and this predictive power should increase with the magnitude of the correlations. Like CWC, CWH takes anti-correlation into account: links are more likely to be predicted if they are not strongly predicted by a monoplex heuristic in an anti-correlated layer. In our definition, we assume the monoplex heuristic $h$ is normalized to be within 0 and 1.
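A direct transcription of CWC and CWH (a sketch under our own array conventions, not the authors' code: E is the k x x edge property matrix, H holds a [0, 1]-normalized monoplex heuristic evaluated at every layer and edge index, and C is the cross-layer correlation matrix; numpy as imported above).

def cwc(E, C, i, j):
    """Count and Weight by Correlation for layer i and edge index j.
    The l = i term is harmless for a candidate edge since E[i, j] = 0."""
    score = 0.0
    for l in range(C.shape[0]):
        if C[i, l] > 0:
            score += E[l, j] * C[i, l]
        elif C[i, l] < 0:
            score += (1.0 - E[l, j]) * abs(C[i, l])
    return score / np.abs(C[i]).sum()  # normalization Z^i_C

def cwh(H, C, i, j):
    """Correlation Weighted Heuristic; the l = i term enters with weight
    c_{i,i} = 1, matching the unweighted CN^2 term in the example above."""
    score = 0.0
    for l in range(C.shape[0]):
        if C[i, l] > 0:
            score += H[l, j] * C[i, l]
        elif C[i, l] < 0:
            score += (1.0 - H[l, j]) * abs(C[i, l])
    return score / np.abs(C[i]).sum()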
The third heuristic combines the previous two ideas. For a given monoplex heuristic, Count Correlation-Weighted Heuristics (CCWH) counts the number of layers which contain a link between two nodes and weights that count according to both the cross-layer correlations and the values resulting from evaluating the monoplex heuristic at each layer in the network.
Heuristic 3 (Count Correlation-Weighted Heuristics). Let $G = \langle G^1, \ldots, G^k \rangle$ be a multiplex network with edge property vectors $e^1, \ldots, e^k$ and cross-layer correlation matrix $C$. Let $h^l_j$ be a similarity heuristic for monoplex networks evaluated at layer $l$ of $G$ for a possible edge represented by an edge property vector index $j$. Then, CCWH is defined for a layer $i$ and possible edge index $j$ as follows:
$\frac{1}{Z^i_C} \sum_{l=1}^{k} \begin{cases} h^i_j, & i = l \\ e^l_j \, h^l_j \, c_{i,l}, & c_{i,l} > 0 \\ (1 - e^l_j)(1 - h^l_j) \, |c_{i,l}|, & c_{i,l} < 0 \end{cases}$
CCWH also accounts for negative correlation: links are more likely if they are not present in an anti-correlated layer, and the magnitudes of these predictions are inversely proportional to the values of the heuristic evaluated at that layer. We also include the heuristic evaluated at the layer being predicted so that CCWH yields informative values even when there are no layers containing the edge being predicted.
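CCWH combines the two ingredients; a sketch in the same conventions as the previous snippets (reusing the numpy import):

def ccwh(E, H, C, i, j):
    """Count Correlation-Weighted Heuristics for layer i and edge index j."""
    score = H[i, j]  # the predicted layer always contributes its own heuristic
    for l in range(C.shape[0]):
        if l == i:
            continue
        if C[i, l] > 0:
            score += E[l, j] * H[l, j] * C[i, l]
        elif C[i, l] < 0:
            score += (1.0 - E[l, j]) * (1.0 - H[l, j]) * abs(C[i, l])
    return score / np.abs(C[i]).sum()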
3.3 Expected Overlap Threshold for Layers
One potential issue with using cross-layer correlations as weights in the proposed heuristics is the sensitivity of many correlation metrics to sample size error. When layers are not related, we may still observe small correlation values which add noise. This may be particularly acute when networks have small numbers of nodes but many layers. To improve empirical performance in such cases, we propose a thresholding method to ignore layers likely to only add noise.
One possibility is to simply ignore small values of correlation, but there is no clear guideline for setting the threshold for values to ignore. Instead, we propose a threshold for excluding layers based on properties of the two graphs being compared. If two graphs are related, especially in the context of link prediction, we expect them to have edges in common. Thus, we should expect that a layer $l$ used for predicting a link at another layer $i$ has at least as many overlapping edges with $i$ as a random graph. However, graphs with many edges are more likely to have overlapping edges, so we should only consider random graphs with the same number of edges as the layer for which we are predicting links. The Erdős-Rényi $G_{n,m}$ random graph model [14], which uniformly considers all undirected graphs with $n$ nodes and $m$ edges, provides a theoretical framework for this comparison. Let $G^i$ be an observed layer with $n$ nodes and $m^i$ edges at which we would like to predict links and let $G^l$ be some other layer with $m^l$ edges. We define the expected number of overlapping edges (OE) in terms of the cosine distance between the edge property vector $p^i$ for $G^i$ and the edge property vector $p^j$ for a random graph with $m^j = m^l$ edges generated according to the Erdős-Rényi random process:
$E\big(OE(G^i, m^j)\big) = E\left( \frac{p^{i\prime} p^j}{\sqrt{p^{i\prime} p^i \; p^{j\prime} p^j}} \;\middle|\; G^i, m^j \right)$
To evaluate this quantity, we need the following lemma.
Lemma 1. Let $G = \langle V, E \rangle$ be a graph with $n$ nodes, $m$ edges and edge property vector $p$ that is generated according to an Erdős-Rényi $G_{n,m}$ random graph process. Then, for $1 \le i \le \frac{n(n-1)}{2}$,
$E(p_i \mid m) = \frac{2m}{n(n-1)}$
Proof. During the $k$th step of an Erdős-Rényi random process, the probability that a non-neighbor tuple $v, v' \in V$ is not selected is
$\frac{\frac{n(n-1)}{2} - k}{\frac{n(n-1)}{2} - k + 1}$
Therefore,
$E(p_i \mid m) = (0)\,P(p_i = 0 \mid m) + (1)\,P(p_i = 1 \mid m) = P(p_i = 1 \mid m) = 1 - P(p_i = 0 \mid m)$
$= 1 - \prod_{k=1}^{m} \frac{\frac{n(n-1)}{2} - k}{\frac{n(n-1)}{2} - k + 1} = 1 - \frac{\frac{n(n-1)}{2} - m}{\frac{n(n-1)}{2}} = \frac{2m}{n(n-1)}$
Theorem 1. Let $G^i = \langle V, E^i \rangle$ be an observed graph with $n$ nodes, $m^i$ edges and edge property vector $p^i$ and $G^j = \langle V, E^j \rangle$ a graph generated from an Erdős-Rényi $G_{n,m^j}$ random process with edge property vector $p^j$. Then,
$E\big(OE(G^i, m^j)\big) = \frac{2\sqrt{m^i m^j}}{n(n-1)}$
Proof.
$E\big(OE(G^i, m^j)\big) = E\left( \frac{p^{i\prime} p^j}{\sqrt{p^{i\prime} p^i \; p^{j\prime} p^j}} \,\middle|\, G^i, m^j \right) = E\left( \frac{p^{i\prime} p^j}{\sqrt{m^i m^j}} \,\middle|\, G^i, m^j \right)$
$= \frac{1}{\sqrt{m^i m^j}} \, E\left( \sum_{k=1}^{n(n-1)/2} p^i_k p^j_k \,\middle|\, G^i, m^j \right) = \frac{1}{\sqrt{m^i m^j}} \sum_{k=1}^{n(n-1)/2} p^i_k \, E\big(p^j_k \mid m^j\big)$
$= \frac{1}{\sqrt{m^i m^j}} \sum_{k=1}^{n(n-1)/2} p^i_k \, \frac{2m^j}{n(n-1)} = \frac{1}{\sqrt{m^i m^j}} \, m^i \, \frac{2m^j}{n(n-1)} = \frac{2\sqrt{m^i m^j}}{n(n-1)}$
The expected overlap can be calculated whenever another layer is considered when evaluating a heuristic at a layer $i$, and that layer ignored if the observed cosine distance between its edge property vector and the edge property vector for layer $i$ is less than this quantity. However, we might also wish to consider only layers that are several standard deviations from a random graph. We thus need the following lemma to evaluate the second moment.
Lemma 2. Let $G = \langle V, E \rangle$ be a graph with $n$ nodes, $m$ edges and edge property vector $p$ generated according to an Erdős-Rényi $G_{n,m}$ random graph process. Then, for $1 \le i, j \le \frac{n(n-1)}{2}$ such that $i \ne j$,
$E(p_i p_j \mid m) = \frac{4m(m-1)}{n(n-2)(n^2-1)}$
Proof. First note that
$m^2 = E(m^2) = E\left( \left[ \sum_{i=1}^{n(n-1)/2} p_i \right] \left[ \sum_{j=1}^{n(n-1)/2} p_j \right] \,\middle|\, m \right) = \sum_{i=1}^{n(n-1)/2} \sum_{j=1}^{n(n-1)/2} E(p_i p_j \mid m)$
$= \sum_{i=1}^{n(n-1)/2} E\big(p_i^2 \mid m\big) + \sum_{i \ne j} E(p_i p_j \mid m)$
$= \left( \frac{n(n-1)}{2} \right) \left( \frac{2m}{n(n-1)} \right) + \left( \frac{n(n-1)}{2} \right) \left( \frac{n(n-1)}{2} - 1 \right) E(p_i p_j \mid m)$
$= m + \frac{n^2(n-1)^2 - 2n(n-1)}{4} \, E(p_i p_j \mid m)$
Factoring yields
$E(p_i p_j \mid m) = \frac{4m(m-1)}{n(n-2)(n^2-1)}$
Theorem 2. Let $G^i = \langle V, E^i \rangle$ be an observed graph with $n$ nodes, $m^i$ edges and edge property vector $p^i$ and $G^j = \langle V, E^j \rangle$ a graph generated from an Erdős-Rényi $G_{n,m^j}$ random process with edge property vector $p^j$. Then,
$E\left( \big[ OE(G^i, m^j) \big]^2 \right) = \frac{2}{n(n-1)} + \frac{4(m^i - 1)(m^j - 1)}{n(n-2)(n^2-1)}$
Proof. Partition the indices $1, \ldots, \frac{n(n-1)}{2}$ into $\langle I^i_+, I^i_- \rangle$ such that for $1 \le k \le \frac{n(n-1)}{2}$, $k \in I^i_+$ if and only if $p^i_k = 1$ and $k \in I^i_-$ if and only if $p^i_k = 0$. Then,
$E\left( \big[ OE(G^i, m^j) \big]^2 \right) = E\left( \left[ \frac{p^{i\prime} p^j}{\sqrt{p^{i\prime} p^i \; p^{j\prime} p^j}} \right]^2 \,\middle|\, G^i, m^j \right) = \frac{1}{m^i m^j} \, E\left( \left[ \sum_{k=1}^{n(n-1)/2} p^i_k p^j_k \right]^2 \,\middle|\, G^i, m^j \right)$
$= \frac{1}{m^i m^j} \, E\left( \sum_{k \in I^i_+} \sum_{l \in I^i_+} p^j_k p^j_l \,\middle|\, m^j \right) = \frac{1}{m^i m^j} \sum_{k \in I^i_+} E\big( (p^j_k)^2 \mid m^j \big) + \frac{1}{m^i m^j} \sum_{k, l \in I^i_+,\, k \ne l} E\big( p^j_k p^j_l \mid m^j \big)$
$= \frac{1}{m^i m^j} \, m^i \, \frac{2m^j}{n(n-1)} + \frac{1}{m^i m^j} \, m^i (m^i - 1) \, \frac{4m^j(m^j - 1)}{n(n-2)(n^2-1)}$
$= \frac{2}{n(n-1)} + \frac{4(m^i - 1)(m^j - 1)}{n(n-2)(n^2-1)}$
The variance then follows as:
$\frac{2n(n-1) - 4m^i m^j}{n^2(n-1)^2} + \frac{4(m^i - 1)(m^j - 1)}{n(n-2)(n^2-1)}$
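In practice the threshold can be applied as a filter before computing the heuristics. The following is a small sketch of Theorem 1 (and the Theorem 2 variance, for stricter z-score style thresholds) used this way, with our own names; p_i and p_l are 0/1 edge property vectors over the same pair ordering:

import math

def expected_overlap(n, m_i, m_l):
    """Theorem 1: expected cosine overlap between an observed layer with
    m_i edges and an Erdos-Renyi G_{n, m_l} random graph."""
    return 2.0 * math.sqrt(m_i * m_l) / (n * (n - 1))

def overlap_variance(n, m_i, m_l):
    """Variance of the overlap, from Theorem 2 minus the squared mean."""
    second = (2.0 / (n * (n - 1))
              + 4.0 * (m_i - 1) * (m_l - 1) / (n * (n - 2) * (n**2 - 1)))
    return second - expected_overlap(n, m_i, m_l) ** 2

def keep_layer(p_i, p_l, n):
    """Use layer l when predicting at layer i only if its observed overlap
    with layer i exceeds the random-graph expectation."""
    m_i, m_l = sum(p_i), sum(p_l)
    if m_i == 0 or m_l == 0:
        return False
    observed = sum(a * b for a, b in zip(p_i, p_l)) / math.sqrt(m_i * m_l)
    return observed > expected_overlap(n, m_i, m_l)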
4 Experiments
We first evaluated each of the multiplex network heuristics proposed in the previous section on synthetically generated multiplex networks with varying numbers of nodes, layers and magnitudes of cross-layer correlation. To generate random multiplex networks, we begin by generating random graphs for each layer using the Barabási-Albert random graph generating model, which incorporates the preferential attachment and "rich get richer" properties that characterize many real world networks [3]. Then, for each node pair in each layer, we add or remove the corresponding edge according to whether it exists at a randomly chosen layer, with a specified probability calibrated to match a desired value for median cross-layer correlation (in terms of shared edges). For each random network, we then downsample the edges at each layer by 25% and evaluate each of our proposed multiplex heuristics for all node pairs at which no link exists. We predict links for the top x scoring pairs, corresponding to the number of edges removed. We do this for each layer and average over 100 random networks the percentage of correctly predicted links (predicted edges that were removed during downsampling), which we report as accuracy. We compare the three proposed heuristics to baselines where we evaluate the corresponding monoplex heuristics at the layer being predicted. We plot accuracy against median cross-layer correlation for synthetic networks with 100 nodes and 10 layers in Figure 2. We append 'e' and 'd' to the abbreviations of the multiplex heuristics to indicate the usage of edge or degree property matrices when calculating cross-layer correlations.
[Figure 2: Accuracy of the proposed multiplex heuristics (CWCe, CWCd, CWHe, CWHd, CCWHe, CCWHd) and monoplex baselines (Adamic-Adar, Common Neighbors, Jaccard Coefficient, Preferential Attachment, Product of Clustering Coefficients, Resource Allocation, Katz Score, Rooted PageRank) on synthetic networks with 100 nodes, 10 layers and median cross-layer correlations between 0.10 and 0.90. Larger values indicate higher accuracy.]
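The evaluation protocol itself is easy to reproduce. Below is a sketch under our own encoding (not the authors' code): each layer is a set of node-pair tuples with sorted endpoints, and score_fn is any of the heuristics above applied to the corrupted network.

import random
from itertools import combinations

def evaluate_layer(layers, nodes, score_fn, i, frac=0.25, seed=0):
    """Downsample a fraction of the edges at layer i, score all absent pairs,
    predict the top-x pairs (x = number removed) and return the fraction of
    removed edges recovered, i.e. the accuracy reported above."""
    rng = random.Random(seed)
    edges = sorted(layers[i])
    removed = set(rng.sample(edges, max(1, int(frac * len(edges)))))
    corrupted = [set(layer) for layer in layers]
    corrupted[i] -= removed
    candidates = [p for p in combinations(sorted(nodes), 2)
                  if p not in corrupted[i]]
    ranked = sorted(candidates, key=lambda p: score_fn(corrupted, i, p),
                    reverse=True)
    return len(set(ranked[:len(removed)]) & removed) / len(removed)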
We first note that both CWC and CCWH significantly outperform all of the baseline monoplex heuristics when cross-layer correlation structure is present, and this outperformance increases linearly with median cross-layer correlation. For the neighbor-based heuristics, CWC, the simplest heuristic, either performs comparably to or better than CCWH, while for the path-based heuristics, CWC and CCWH perform comparably. This is consistent with the finding in [18] that simpler heuristics often outperform more complex heuristics in the single-layer case. Furthermore, while CWC is the simplest of the three heuristics, it also most directly captures the richest source of information available when layers are correlated, i.e. whether the edge exists in a highly correlated layer. Thus, in this context, the outperformance of CWC is not surprising. While CWH also outperforms all of the baseline monoplex heuristics (not always significantly), the outperformance does not increase with median cross-layer correlation when neighbor-based heuristics are used. This seems to indicate that the heuristics applied at additional layers provide limited value when not combined with additional layer-specific information, even when cross-layer correlations are significant. However, when path-based heuristics are used, the performance of CWH does increase with median cross-layer correlation, indicating that the path-based heuristics do pick up on increasingly useful information as cross-layer correlations increase. We observe similar performance when we vary the number of nodes between 10 and 100 and the number of layers between 5 and 50. In general there is a slight performance increase with more layers, but the increase is minimal once median layer correlation reaches approximately 0.50.
We also evaluated the proposed heuristics on three real world multiplex networks using the same procedure where we downsample edges: a scientific collaboration network with 16 layers representing collaboration on different tasks among 514 scientists at the Pierre Auger Observatory, the largest observatory of ultra-high-energy cosmic rays [12]; an airline transportation network with 37 layers representing different European airline carriers' direct routes between 450 airports [10]; and an economic global trade network from the United Nations Food and Agriculture Organization with 364 layers representing import/export relations for a particular food item among 214 countries [13]. We show the cross-layer correlation matrices for the edge and degree property matrices in Figure 3, which indicate strong correlation structure, particularly in the case of the UN FAO trade network. Given this strong correlation structure, we should expect the multiplex network heuristics to outperform their monoplex heuristic baselines. For each network, we provide accuracy as a heat map comparing the corresponding monoplex heuristic baselines to each of the proposed multiplex heuristics as columns in Figure 4.
[Figure 3: Cross-layer correlation matrices for the Pierre Auger Physics Collaboration, European Airlines and UN FAO Global Trade multiplex networks. Darker red cells indicate stronger positive correlations whereas darker blue cells indicate stronger negative correlations.]
[Figure 4: Accuracy of the proposed multiplex heuristics and monoplex baselines on the real world scientific collaboration (Pierre Auger), transportation (European Airlines) and global trade (UN FAO) multiplex networks, shown as heat maps with one column per monoplex heuristic (AA, CN, JC, PA, PCC, RA, KS, RPR) and one row per method (Baseline, CWCe, CWCd, CWHe, CWHd, CCWHe, CCWHd). Larger values / darker cells within a specific column indicate higher accuracy.]
We first note that CWC significantly outperforms all of the monoplex heuristic baselines on all of the real-world networks, consistent with the performance seen in the simulations. CWH and CCWH also either perform better than or the same as each baseline in the airline and trade networks, while their performance is in general similar to their baselines in the collaboration network. We note, however, that in the collaboration network the baseline performance is already quite high, and this network exhibits the weakest correlation structure of these real world networks. Performance is most consistent with the simulations in the trade network, where we see strong outperformance for all of the multiplex heuristics, with the outperformance most significant for CWC and CCWH. We note that this network contains both the most layers and the richest cross-layer correlation structure, which supports our motivation and objective to develop heuristics that take advantage of this structure when present.
Finally, while our focus was to evaluate the proposed heuristics when used for unsupervised link prediction, we also investigated using them as additional features with supervised approaches. Previous supervised approaches for link prediction in multiplex networks have trained separate classifiers for each layer in the network using monoplex heuristics evaluated at each layer as features (as opposed to only using heuristics evaluated at the layer at which links are being predicted). To investigate whether adding our proposed multiplex heuristics as additional features to this set improves supervised performance, we trained Logistic Regression, Naive Bayes and Random Forest classifiers using three different feature sets: Monoplex-only, which includes all of the monoplex heuristics discussed in Section 2 evaluated at all of the layers in the network; Multiplex-only, which includes only the proposed multiplex heuristics CWC, CWH and CCWH using edge and degree cross-layer correlations, for each of the monoplex heuristics discussed in Section 2 as inputs; and All features, which includes both of these sets. To generate these feature sets, we evaluate the heuristics for all node pairs at each layer in the network and make the corresponding label 1 or 0 depending on whether an edge exists. This results in significantly fewer 1 labels, so we balance the datasets by subsampling the 0 labeled examples. We then split the datasets into 20% test data and 80% training data. We did this for the European Airlines Network and the Pierre Auger Physics Collaboration Network, excluding the UN FAO Global Trade Network (the network with the most layers) for computational reasons.
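A sketch of this supervised comparison using scikit-learn (an assumption on our part, since the paper does not name its implementation; X_mono and X_multi are per-pair feature matrices for one target layer built from the heuristics above, and y marks whether the pair is linked at that layer):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

def compare_feature_sets(X_mono, X_multi, y, seed=0):
    """Return test AUROC for each (classifier, feature set) combination."""
    feature_sets = {"Monoplex-only": X_mono,
                    "Multiplex-only": X_multi,
                    "All features": np.hstack([X_mono, X_multi])}
    classifiers = {"Logistic Reg.": LogisticRegression(max_iter=1000),
                   "Naive Bayes": GaussianNB(),
                   "Random Forest": RandomForestClassifier(random_state=seed)}
    results = {}
    for set_name, X in feature_sets.items():
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=seed)
        for clf_name, clf in classifiers.items():
            clf.fit(X_tr, y_tr)
            score = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
            results[(clf_name, set_name)] = score
    return results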
We report area under\nthe ROC curve (AUROC) on the test data averaged across the classi-\nfiers trained on each layer for each feature set and classifier in tables\n1 and 2.\nWe first we note that adding the multiplex heuristics to the mono-\nplex features improves performance in terms of both AUROC in all\ncases (for all classifiers and networks). Additionally, using only theTable 1. AUROC for European Airlines Network\nMonoplex-only Multiplex-only All features\nLogistic Reg. 0.893 0.957 0.900\nNaive Bayes 0.963 0.954 0.977\nRandom Forest 0.985 0.967 0.994\nTable 2. AUROC for Pierre Auger Physics Collaboration Network\nMonoplex-only Multiplex-only All features\nLogistic Reg. 0.995 0.999 0.996\nNaive Bayes 0.995 0.998 0.999\nRandom Forest 0.991 0.995 0.996\nmultiplex heuristics leads to greater performance than using all ofthe monoplex features from all layers when Logistic Regression is\nused for the European Airlines network and for all of the classifiers\ntrained on the Pierre Auger Physics Collaboration Network. This is\ndespite the fact that this is a much smaller feature set than using the\nmonoplex heuristics evaluated across all layers. While our focus was\nto provide simple, interpretable heuristics for unsupervised predic-\ntion, akin to the similiarity heuristics for unsupervised prediction in\nmonoplex networks, rather than to develop supervised methods, these\nresult provide evidence that our heuristics both (i) add value as addi-\ntional unique features and (ii) are efficient in that they result in similar\nor better performance than higher-dimensional feature sets resulting\nfrom applying monoplex heuristics across all network layers.\nWe also note that these AUROC scores are indicative of greater\nprediction accuracy than those reported in the unsupervised experi-\nments (for both multiplex and monoplex features). This is not simply\na consequence of using a supervised method compared to an unsu-pervised method, but also reflective of the fact that picking a top\nk\nR.E. Tillman et al. / Heuristics for Link Prediction in Multiplex Networks 1944\nranking of most likely links after a sufficient amount of the network\nhas been corrupted by removing edges is a significantly more difficult\nproblem than predicting whether an edge is present from heuristics\nwhich are calculated using a fully-uncorrupted network and provided\nfor all existing edges in a training set. The former problem is more\nreflective of real-world applications.\n5 Conclusion and Future Work\nWe proposed a general framework and three families of multiplex\nnetwork heuristics for link prediction, CWC, CWH and CCWH. While\nthese heuristics improve supervised methods, they provide a simple,\ninterpretable representation that can be used for efficient unsupervised\nprediction. Our framework is adaptive to a given problem setting and\nefficiently takes advantage of rich cross-layer correlation structure\nwhen present. Experiments using synthetic and real world networks\nconfirm these heuristics significantly outperform their baselines and\nperformance increases with the strength of correlations.\nOne line of future research is a more structure specific thresholding\nprocedure: while we find cases of multiplex networks with many\ncorrelated and uncorrelated layers where the threshold we provide\nimproves performance, in many cases performance is not affected by\nusing the threshold. If we instead used thresholds based on randomgraph models that are more specific to the observed structure of a\ngiven layer, e.g. 
One line of future research is a more structure-specific thresholding procedure: while we find cases of multiplex networks with many correlated and uncorrelated layers where the threshold we provide improves performance, in many cases performance is not affected by using the threshold. If we instead used thresholds based on random graph models that are more specific to the observed structure of a given layer, e.g. Barabási-Albert random graph models if we observe power-law node degree distributions, this might result in a more robust procedure. Deriving thresholds based on Barabási-Albert and other more complex random graph models is, however, much less straightforward. Another line of open research is developing a more robust procedure for simulating random multiplex networks with specified correlation structures. The procedure we use begins with realistic Barabási-Albert random graphs as layers, but after edges are added and removed to create correlated layers, the degree distribution guarantees of Barabási-Albert graphs are no longer valid. A more robust procedure for generating random multiplex networks would guarantee both a specified layer correlation structure and local properties at each of the layers.

Disclaimer
This paper was prepared for information purposes by the AI Research Group of JPMorgan Chase & Co and its affiliates ("J.P. Morgan"), and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no explicit or implied representation and warranty and accepts no liability for the completeness, accuracy or reliability of information, or the legal, compliance, financial, tax or accounting effects of matters contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction.

REFERENCES
[1] Lada A. Adamic and Eytan Adar, 'Friends and neighbors on the web', Social Networks, 25(3), 211–230, (2003).
[2] Mohammad Al Hasan, Vineet Chaoji, Saeed Salem, and Mohammed Zaki, 'Link prediction using supervised learning', in Proceedings of the SDM 06 Workshop on Link Analysis, Counterterrorism and Security, (2006).
[3] Albert-László Barabási and Réka Albert, 'Emergence of scaling in random networks', Science, 286, 509–512, (1999).
[4] Albert-László Barabási, Hawoong Jeong, Zoltán Néda, Erzsébet Ravasz, András Schubert, and Tamás Vicsek, 'Evolution of the social network of scientific collaborations', Physica A, 311(3), 590–614, (2002).
[5] Austin R. Benson, Rediet Abebe, Michael T. Schaub, Ali Jadbabaie, and Jon Kleinberg, 'Simplicial closure and higher-order link prediction', Proceedings of the National Academy of Sciences, 115(48), E11221–E11230, (2018).
[6] Giulia Berlusconi, Francesco Calderoni, Nicola Parolini, Marco Verani, and Carlo Piccardi, 'Link prediction in criminal networks: A tool for criminal intelligence analysis', PLoS ONE, 11(4), e0154244, (2016).
[7] Catherine A. Bliss, Morgan Frank, Christopher M. Danforth, and Peter Dodds, 'An evolutionary algorithm approach to link prediction in dynamic social networks', Journal of Computational Science, 5(5), 750–764, (2013).
[8] Sergey Brin and Lawrence Page, 'The anatomy of a large-scale hypertextual web search engine', Computer Networks and ISDN Systems, 30(1–7), 107–117, (1998).
[9] Piotr Bródka, Anna Chmiel, Matteo Magnani, and Giancarlo Ragozini, 'Quantifying layer similarity in multiplex networks: A systematic study', Royal Society Open Science, 5(8), (2017).
[10] Alessio Cardillo, Jesús Gómez-Gardeñes, Massimiliano Zanin, Miguel Romance, David Papo, Francisco del Pozo, and Stefano Boccaletti, 'Emergence of network features from multiplexity', Scientific Reports, 3, 1344, (2013).
[11] William Cukierski, Benjamin Hamner, and Bo Yang, 'Graph-based features for supervised link prediction', in Proceedings of the International Joint Conference on Neural Networks, pp. 1237–1244, (2011).
[12] Manlio De Domenico, Andrea Lancichinetti, Alex Arenas, and Martin Rosvall, 'Identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems', Physical Review X, 5(1), 11–27, (2015).
[13] Manlio De Domenico, Vincenzo Nicosia, Alexandre Arenas, and Vito Latora, 'Structural reducibility of multilayer networks', Nature Communications, 6, 6864, (2015).
[14] Paul Erdős and Alfréd Rényi, 'On random graphs I', Publicationes Mathematicae, 6, 290–297, (1959).
[15] Rushed Kanawati, 'Multiplex network mining: A brief survey', IEEE Intelligent Informatics Bulletin, 16(1), 24–27, (2015).
[16] Leo Katz, 'A new status index derived from sociometric analysis', Psychometrika, 18(1), 39–43, (1953).
[17] Mikko Kivelä, Alex Arenas, Marc Barthelemy, James P. Gleeson, Yamir Moreno, and Mason A. Porter, 'Multilayer networks', Journal of Complex Networks, 2(3), 203–271, (2014).
[18] David Liben-Nowell and Jon Kleinberg, 'The link-prediction problem for social networks', Journal of the American Society for Information Science and Technology, 58(7), 1019–1031, (2007).
[19] Linyuan Lü and Tao Zhou, 'Link prediction in complex networks: A survey', Physica A, 390(6), 1150–1179, (2011).
[20] Víctor Martínez, Fernando Berzal, and Juan-Carlos Cubero, 'A survey of link prediction in complex networks', ACM Computing Surveys, 49(4), 69, (2016).
[21] M. E. J. Newman, 'Clustering and preferential attachment in growing networks', Physical Review E, 64(2), 025102(R), (2001).
[22] Vincenzo Nicosia and Vito Latora, 'Measuring and modeling correlations in multiplex networks', Physical Review E, 92, 032805, (2015).
[23] V. S. Parvathy and T. K. Ratheesh, 'Friend recommendation system for online social networks: A survey', in Proceedings of the 2017 International Conference of Electronics, Communication and Aerospace Technology, volume 2, pp. 359–365, (2017).
[24] Purnamrita Sarkar, Deepayan Chakrabarti, and Andrew W. Moore, 'Theoretical justification of popular link prediction heuristics', in Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pp. 2722–2727, (2011).
[25] Hanghang Tong, Christos Faloutsos, and Jia-Yu Pan, 'Fast random walk with restart and its applications', in Proceedings of the Sixth International Conference on Data Mining, pp. 613–622, (2006).
[26] Liang Wang, Ke Hu, and Yi Tang, 'Robustness of link-prediction algorithm based on similarity and application to biological networks', Current Bioinformatics, 9(3), 246–252, (2013).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "sMa5PEPzLs",
"year": null,
"venue": "ECAI 2016",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-1698",
"forum_link": "https://openreview.net/forum?id=sMa5PEPzLs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Delete-Free Reachability Analysis for Temporal and Hierarchical Planning",
"authors": [
"Arthur Bit-Monnot",
"David E. Smith",
"Minh Do"
],
"abstract": "Reachability analysis is a crucial part of the heuristic computation for many state of the art classical and temporal planners. In this paper, we study the difficulty that arises in assessing the reachability of actions in planning problems containing sets of interdependent actions, notably including problems with required concurrency as well as hierarchical planning problems. We show the limitation of state-of-the-art techniques and propose a new method suitable for both temporal and hierarchical planning problems. Our proposal is evaluated on FAPE, a constraint-based temporal planner. A long version of this paper was presented at the HSDIP workshop [1].",
"keywords": [],
"raw_extracted_content": "Delete- ree Reachability Analysis\nfor Temporal and Hierarchical Planning\nArthur Bit-Monnot1and David E. Smith2and Minh Do2\nAbstract. Reachability analysis is a crucial part of the heuristic\ncomputation for many state of the art classical and temporal plan-\nners. In this paper, we study the difficulty that arises in assessing thereachability of actions in planning problems containing sets of inter-dependent actions, notably including problems with required concur-\nrency as well as hierarchical planning problems. We show the limita-\ntion of state-of-the-art techniques and propose a new method suitablefor both temporal and hierarchical planning problems. Our proposalis evaluated on FAPE, a constraint-based temporal planner.\n3\n1 Introduction\nReachability analysis is crucial in computing heuristics guiding\nmany classical and temporal planners. This is typically done by re-\nlaxing the action delete lists and constructing the reachability graph.This graph is then used as a basis to extract a relaxed plan, whichserves as a non-admissible heuristic estimate of the actual plan reach-ing the goals from the current state.\nTemporal planning poses some additional challenges for reacha-\nbility analysis as heuristics should not only estimate the total cost\nbut also the earliest time at which goals can be achieved. This can\nbe accomplished on the reachability graph by labeling: (1) proposi-tions with the minimum time of the effects that can achieve them;and (2) actions with the maximum time of the propositions they re-quire as conditions. Since the reachability graph construction processprogresses as time increases, when all start conditions are reachable,\na given action ais eligible to be added to the graph. However, there\nis the additional problem that a’s end conditions must also be reach-\nable, although they do not need to be reachable until the end time ofa. To see why this is a problem for the conventional way of buildingthe reachability graph, consider the two actions in Figure 1: actionBachieves the end condition for action A, but requires a start effect\nofAbefore it can start. Thus, Bcannot start before A,b u tA cannot\nend until after Bhas ended. This means that Ais not fully reachable\nuntilBis reachable, but Bis not reachable unless Ais reachable.\nWhether this turns out to be possible depends on whether Bfits in-\nside ofA. In this example, the reasoning is simple enough, but more\ngenerally, Bmight be a complex chain of actions.\nPlanners such as\nPOPF [2] address this problem by splitting du-\nrative actions into instantaneous start and end events, and forcing a\ntime delay between the start and end events. In our example, the start\nofAwould be reachable, leading to the start of Bbeing reachable,\nwhich leads to the end of Bbeing reachable, and finally the end\nofAbeing reachable. This approach therefore concludes that Ais\nreachable. 
Unfortunately, the same conclusion is reached even when B does not fit inside of A, because this "action-splitting" approach allows A to "stretch" beyond its actual duration.
[Figure 1: Two interdependent actions: A (duration 10) with a start effect x and an end condition y, and B (duration 7) with a start condition x and an end effect y.]
In this paper, we present an approach to reachability analysis that addresses the above limitation and show how it can be beneficial for both generative and hierarchical temporal planners.

2 Planning Model & Relaxation
Temporal Model. We consider temporal planning problems similar to those of PDDL 2.1. For ease of presentation, we consider a discrete time model and restrict ourselves to actions with fixed durations and positive conditions. (Extensions to more general models are discussed in the long version of this paper [1].) An action has a set of conditions $C_a$ and a set of effects $E_a$, all at arbitrary instants in the action envelope. A condition $c \in C_a$ has the form $\langle [t_c]\, f_c \rangle$, where $f_c$ is a fluent and $t_c$ is a positive delay from the start of the action to the moment $f_c$ is required to be true. An effect $e \in E_a$ has the form $\langle [t_e]\, f_e \rangle$ (resp. $\langle [t_e]\, \neg f_e \rangle$ for negative effects), where $t_e$ is the positive delay from the start of the action to the moment the fluent $f_e$ becomes true (resp. false).
Relaxed Model. To estimate when each fact can be achieved, our reachability analysis utilizes elementary actions, which are artificial actions created from the original temporal actions defined in the domain description. Elementary actions contain: (i) only a single 'add' effect and (ii) the minimal set of conditions required to achieve that effect. More specifically, for each positive effect $e = \langle [t_e]\, f_e \rangle$ of an action a, there will be an elementary action $a_e$ with:
• a single effect $\langle [1]\, f_e \rangle$
• for each condition $\langle [t_c]\, f_c \rangle$ of a, a condition $\langle [t_c - t_e + 1]\, f_c \rangle$
Our relaxed model is composed of those delete-free elementary actions, each giving one possible way of achieving a given fluent. For any given elementary action, we say that a condition $c = \langle [t_c]\, f_c \rangle$ is a before-condition (resp. an after-condition) if $t_c \le 0$ (resp. $t_c > 0$). An after-condition represents a condition that is required when or after the effect of the elementary action becomes active (e.g. y is an after-condition of the action A of Figure 1). In a reachability graph, such conditions would be represented by a negative edge. Those after-conditions are necessary for the presence of interdependencies such as the one shown in Figure 1 [3].
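To make the relaxation concrete, here is a minimal sketch of the split into elementary actions under our own encoding (not FAPE's internal representation): timed conditions and positive timed effects are (delay, fluent) pairs relative to the action start.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Action:
    name: str
    duration: int
    conditions: List[Tuple[int, str]]  # (delay t_c from start, fluent)
    effects: List[Tuple[int, str]]     # (delay t_e from start, fluent), positive only

@dataclass
class Elementary:
    name: str
    effect: str                         # achieved one time unit after the action
    conditions: List[Tuple[int, str]]   # delay <= 0: before-condition; > 0: after-condition

def split(action: Action) -> List[Elementary]:
    """One elementary action per positive effect, with shifted conditions."""
    elementary = []
    for t_e, f_e in action.effects:
        conds = [(t_c - t_e + 1, f_c) for t_c, f_c in action.conditions]
        elementary.append(Elementary(f"{action.name}<{f_e}>", f_e, conds))
    return elementary

# Action A from Figure 1: start effect x (t=0), end condition y (t=10).
A = Action("A", 10, conditions=[(10, "y")], effects=[(0, "x")])
print(split(A))  # the elementary action for x carries the after-condition (11, 'y')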
3 Reachability Analysis with After-conditions
In POPF, the splitting mechanism used for reachability analysis results in ignoring all after-conditions of durative actions. In order to avoid this additional relaxation, our method for reachability analysis is based on repeatedly alternating two steps: (i) we optimistically propagate achievement times while ignoring after-conditions; then (ii) we enforce all after-conditions. More specifically:
1. As a preliminary, we select a set of symbols that are assumed reachable at time 0. All fluents of the initial state are obviously part of this set. They are optimistically complemented with all elementary actions that have no before-conditions.
2. We then iteratively extend the set of assumed reachable nodes with: (i) all fluents that have an assumed reachable achiever and (ii) all actions whose every before-condition is assumed reachable. Each reachable symbol is associated with an earliest appearance time satisfying: (i) the minimal delays between an action and its before-conditions and (ii) the minimal delay between a fluent and its first achiever. This is done by a Dijkstra-like procedure that processes the nodes by increasing earliest appearance times.
3. Our optimistic assumptions are then revised recursively by incorporating the ignored after-conditions. Specifically: (i) any elementary action with an after-condition on an unreachable fluent is removed from the model, (ii) if a removed action a is the only achiever of a fluent f, f is removed together with any action depending on it, (iii) the minimal delays between an action and its after-conditions are enforced by increasing the action's earliest appearance time as much as necessary.
4. If any action was updated in the previous step, we go back to step (2) and restart the propagation of earliest appearances from the updated nodes. Otherwise, the analysis finishes with a set of reachable actions and fluents, each associated with an earliest appearance time.
Because earliest appearances could otherwise be endlessly increased towards infinity, we complement the last step with a detection of unreachable nodes. A group of nodes N is unreachable if, for any node n in N, there is a delay of at least d_max between the earliest appearance of any node n' not in N and n, d_max being the highest delay between any condition and its action or any action and its effect. The intuition is that the nodes of such a group are delaying each other due to unachievable interdependencies [1].
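A schematic rendering of the optimistic propagation in step (2), ignoring after-conditions; this is our own simplified encoding under the same (delay, fluent) convention as the earlier sketch, and it omits the revision steps (3) and (4):

import heapq

def propagate(initial_fluents, elementary):
    """Dijkstra-like propagation of earliest appearance times. `elementary`
    is a list of (name, before_conds, effect), where before_conds is a list
    of (t_c, fluent) with t_c <= 0; assumes a nonempty initial state."""
    earliest = {f: 0 for f in initial_fluents}
    frontier = [(0, f) for f in initial_fluents]
    heapq.heapify(frontier)
    while frontier:
        t, f = heapq.heappop(frontier)
        if t > earliest.get(f, float("inf")):
            continue  # stale queue entry
        for name, conds, effect in elementary:
            if all(c in earliest for _, c in conds):
                # the action may start once every before-condition is available
                start = max((earliest[c] - t_c for t_c, c in conds), default=0)
                t_eff = start + 1  # the single effect holds one step later
                if t_eff < earliest.get(effect, float("inf")):
                    earliest[effect] = t_eff
                    heapq.heappush(frontier, (t_eff, effect))
    return earliest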
4 Extension to Hierarchical Planning

While automated reachability analysis is widespread in generative planners, hierarchical planners still rely on manual annotation of methods for this purpose. Here we propose a translation of hierarchical actions that exposes hierarchical features as additional conditions and effects for the purpose of reachability analysis.

We associate each hierarchical action a with a task symbol τ_a and a set of subtasks S_a. The intuition is that a achieves the task τ_a and requires all its subtasks to be achieved by other actions. For a plan π to be a solution, it is required that:

• all initial tasks and all subtasks have been achieved. A task τ spanning a duration [st_τ, et_τ] is said to be achieved if there is an action a_τ in the plan that (i) achieves the task τ, (ii) starts at st_τ, and (iii) ends at et_τ;
• all actions in π achieve some task. This simulates HTN planners, in which all actions are inserted to achieve a pending task.

To allow reasoning on those additional requirements, we transform a hierarchical action a, with task τ_a, into a “flat” action a_flat with (a code sketch of this compilation follows at the end of this section):

• all conditions and effects of a;
• one start condition ⟨[0] required(τ_a)⟩;
• one start effect ⟨[0] started(τ_a)⟩ and one end effect ⟨[d_a] ended(τ_a)⟩, where d_a is the duration of a;
• for each subtask ⟨[d_1, d_2] τ⟩ of a:
  – two conditions ⟨[d_1] started(τ)⟩ and ⟨[d_2] ended(τ)⟩;
  – one effect ⟨[d_1] required(τ)⟩.

Actions resulting from this compilation step encompass both causal and hierarchical features of the domain and can be split into elementary actions for the reachability analysis techniques described earlier. This transformation usually exposes many interdependencies, as each action both enables and requires the presence of its subactions.
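A minimal sketch of this compilation, using the same (delay, fluent) tuple encoding as before; the required/started/ended predicates are rendered as plain strings, and the example task names are illustrative, not taken from the paper's benchmarks:

def flatten(name, duration, conditions, effects, task, subtasks):
    """Expose the task structure of a hierarchical action as ordinary
    conditions/effects; `subtasks` holds (d1, d2, tau) triples."""
    conds = list(conditions) + [(0, f"required({task})")]
    effs = list(effects) + [(0, f"started({task})"),
                            (duration, f"ended({task})")]
    for d1, d2, tau in subtasks:
        conds += [(d1, f"started({tau})"), (d2, f"ended({tau})")]
        effs.append((d1, f"required({tau})"))
    return (name, duration, tuple(conds), tuple(effs))

# A hypothetical 'transport' action of duration 9 that decomposes into
# a single 'move' subtask over the interval [2, 8]:
print(flatten("transport", 9, [], [], "t_transport", [(2, 8, "t_move")]))

The resulting flat action can then be split into elementary actions exactly as in Section 2, so hierarchical reachability reuses the machinery above unchanged.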
5 Experiments & Conclusion

Our technique has been implemented in FAPE [4], a constraint-based temporal planner for the ANML language supporting both hierarchical and generative planning. Reachability analysis is used to (i) prune the search space by detecting dead ends and (ii) disregard resolvers involving unreachable actions.

Our method is tested with different configurations: R∞ is the original method, and R5 and R1 are variants in which the number of iterations is limited to 5 and 1, respectively. R+ is the configuration in which all after-conditions are ignored, thus producing the same result as the reachability analysis of POPF. ∅ denotes the configuration in which no reachability analysis is performed. Evaluation was done on various temporal domains with and without hierarchies.

Table 1: Number of solved tasks for various domains with a 30-minute timeout. The best result per row was shown in bold in the original; the number of problem instances is given in parentheses.

Domain                      | R∞  | R5  | R1  | R+  | ∅
(IPC-8) satellite (20)      | 14  | 14  | 14  | 14  | 15
(IPC-5) rovers (40)         | 25  | 25  | 25  | 25  | 25
(IPC-2) logistics (28)      | 8   | 8   | 8   | 8   | 8
(IPC-8) satellite-hier (20) | 17  | 17  | 17  | 17  | 16
(IPC-5) rovers-hier (40)    | 22  | 22  | 22  | 22  | 22
(IPC-8) tms-hier (20)       | 7   | 7   | 7   | 7   | 7
(IPC-2) logistics-hier (28) | 28  | 28  | 28  | 6   | 9
(LAAS) handover-hier (20)   | 16  | 16  | 16  | 7   | 7
(IPC-8) hiking-hier (20)    | 20  | 17  | 16  | 15  | 17
(LAAS) docks-hier (18)      | 17  | 13  | 12  | 7   | 7
Total (254)                 | 174 | 167 | 165 | 128 | 133

As shown in Table 1, our method results in a significant performance gain on hierarchical problems. This is because those problems feature many examples of interdependent actions, for which our method is especially useful. On temporally simple problems (here, the non-hierarchical ones), our method is equivalent to the reachability analysis of POPF and does not incur any runtime penalty. Note that the use of reachability analysis is here limited to dead-end detection. While this proves extremely useful on a wide variety of problems, one could also contemplate using it as a basis for heuristic extraction.

REFERENCES
[1] Arthur Bit-Monnot, David E. Smith, and Minh Do, ‘Delete-free Reachability Analysis for Temporal and Hierarchical Planning’, in Heuristics and Search for Domain-independent Planning, pp. 93–102, (2016).
[2] Amanda Coles, Andrew Coles, Maria Fox, and Derek Long, ‘Forward-Chaining Partial-Order Planning’, in ICAPS, pp. 42–49, (2010).
[3] Martin C. Cooper, Frédéric Maris, and Pierre Régnier, ‘Managing Temporal Cycles in Planning Problems Requiring Concurrency’, Computational Intelligence, 29, 111–128, (2013).
[4] Filip Dvorak, Roman Bartak, Arthur Bit-Monnot, Felix Ingrand, and Malik Ghallab, ‘Planning and Acting with Temporal and Hierarchical Decomposition Models’, in ICTAI, pp. 115–121, (2014).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "bU8cKifxSLN",
"year": null,
"venue": "ECAI 2006",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=bU8cKifxSLN",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On Probing and Multi-Threading in Platypus",
"authors": [
"Jean Gressmann",
"Tomi Janhunen",
"Robert E. Mercer",
"Torsten Schaub",
"Sven Thiele",
"Richard Tichy"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "sfQ1vSDxWM9U",
"year": null,
"venue": "Guide to e-Science 2011",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=sfQ1vSDxWM9U",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Facilitating e-Science Discovery Using Scientific Workflows on the Grid",
"authors": [
"Jianwu Wang",
"Prakashan Korambath",
"Seonah Kim",
"Scott A. Johnson",
"Kejian Jin",
"Daniel Crawl",
"Ilkay Altintas",
"Shava Smallen",
"Bill Labate",
"Kendall N. Houk"
],
"abstract": "e-Science has been greatly enhanced from the developing capability and usability of cyberinfrastructure. This chapter explains how scientific workflow systems can facilitate e-Science discovery in Grid environments by providing features including scientific process automation, resource consolidation, parallelism, provenance tracking, fault tolerance, and workflow reuse. We first overview the core services to support e-Science discovery. To demonstrate how these services can be seamlessly assembled, an open source scientific workflow system, called Kepler, is integrated into the University of California Grid. This architecture is being applied to a computational enzyme design process, which is a formidable and collaborative problem in computational chemistry that challenges our knowledge of protein chemistry. Our implementation and experiments validate how the Kepler workflow system can make the scientific computation process automated, pipelined, efficient, extensible, stable, and easy-to-use.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "zlNSwGcUzkX",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=zlNSwGcUzkX",
"arxiv_id": null,
"doi": null
}
|
{
"title": "To Reviewer e1bo",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "NM4533MLgS6",
"year": null,
"venue": "Data Min. Knowl. Discov. 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=NM4533MLgS6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Correction to: Domain agnostic online semantic segmentation for multi-dimensional time series",
"authors": [
"Shaghayegh Gharghabi",
"Chin-Chia Michael Yeh",
"Yifei Ding",
"Wei Ding",
"Paul Hibbing",
"Samuel LaMunion",
"Andrew Kaplan",
"Scott E. Crouter",
"Eamonn J. Keogh"
],
"abstract": "The article Domain agnostic online semantic segmentation for multi-dimensional time series, written by Shaghayegh Gharghabi, Chin-Chia Michael Yeh, Yifei Ding, Wei Ding, Paul Hibbing, Samuel LaMunion, Andrew Kaplan, Scott E. Crouter, Eamonn Keogh was originally published electronically on the publisher’s internet portal (currently SpringerLink) on 25 September 2018 without open access.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "GMIoo2ovMb6",
"year": null,
"venue": "EC-TEL 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=GMIoo2ovMb6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Mass Customization in Continuing Medical Education: Automated Extraction of E-Learning Topics",
"authors": [
"Nicolae Nistor",
"Mihai Dascalu",
"Gabriel Gutu",
"Stefan Trausan-Matu",
"Sunhea Choi",
"Ashley Haberman-Lawson",
"Brigitte Angela Brands",
"Christian Körner",
"Berthold Koletzko"
],
"abstract": "To satisfy the individual learning needs of the high number of the Early Nutrition (EN) eAcademy participants, and to reduce development costs, the mass customization (MC) approach was applied. Key concepts of the learning needs, and corresponding learner subgroups with similar needs were extracted from learner-generated text using the natural language processing tool ReaderBench. Two collections of key concepts where built, which enabled EN experts to formulate topics for e-learning modules to be developed. Ongoing work will assess learner satisfaction and e-learning development costs, in order to evaluate the MC application in continuing medical education.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "tOi-bBG7es",
"year": null,
"venue": "ECAL (1) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=tOi-bBG7es",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Towards Self-reflecting Machines: Two-Minds in One Robot",
"authors": [
"Juan Cristóbal Zagal",
"Hod Lipson"
],
"abstract": "We introduce a technique that allows a robot to increase its resiliency and learning skills by exploiting a process akin to self-reflection. A robot contains two controllers: A pure reactive innate controller, and a reflective controller that can observe, model and control the innate controller. The reflective controller adapts the innate controller without access to the innate controller’s internal state or architecture; Instead, it models it and then synthesizes filters that exploit its existing capabilities for new situations. In this paper we explore a number of scenarios where the innate controller is a recurrent neural network. We demonstrate significant adaptation ability with relatively few physical trials.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "alMlI9YsA2Fk",
"year": null,
"venue": "ECAL (2) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=alMlI9YsA2Fk",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Evolution of Division of Labor",
"authors": [
"Heather Goldsby",
"David B. Knoester",
"Jeff Clune",
"Philip K. McKinley",
"Charles Ofria"
],
"abstract": "We use digital evolution to study the division of labor among heterogeneous organisms under multiple levels of selection. Although division of labor is practiced by many social organisms, the labor roles are typically associated with different individual fitness effects. This fitness variation raises the question of why an individual organism would select a less desirable role. For this study, we provide organisms with varying rewards for labor roles and impose a group-level pressure for division of labor. We demonstrate that a group selection pressure acting on a heterogeneous population is sufficient to ensure role diversity regardless of individual selection pressures, be they beneficial or detrimental.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "_-_LBJG66Ku",
"year": null,
"venue": "ECAL (1) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=_-_LBJG66Ku",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evolving for Creativity: Maximizing Complexity in a Self-organized Multi-particle System",
"authors": [
"Heiko Hamann",
"Thomas Schmickl",
"Karl Crailsheim"
],
"abstract": "We investigate an artificial self-organizing multi-particle (also multi-agent or swarm) system consisting of many (up to 103) reactive, mobile agents. The agents’ movements are governed by a few simple rules and interact indirectly via a pheromone field. The system generates a wide variety of complex patterns. For some parameter settings this system shows a notable property: seemingly never-ending, dynamic formation and reconfiguration of complex patterns. For other settings, however, the system degenerates and converges after a transient to patterns of low complexity. Therefore, we consider this model as an example of a class of self-organizing systems that show complex behavior mainly in the transient. In a first case study, we inspect the possibility of using a standard genetic algorithm to prolongate the transients. We present first promising results and investigate the evolved system.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "l8pmg7RFT5",
"year": null,
"venue": "ECAL (1) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=l8pmg7RFT5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evolving a Novel Bio-inspired Controller in Reconfigurable Robots",
"authors": [
"Jürgen Stradner",
"Heiko Hamann",
"Thomas Schmickl",
"Ronald Thenius",
"Karl Crailsheim"
],
"abstract": "Evolutionary robotics uses evolutionary computation to optimize physically embodied agents. We present here a framework for performing off-line evolution of a pluripotent robot controller that manages to form multicellular robotic organisms from a swarm of autonomously moving small robot modules. We describe our evolutionary framework, show first results and discuss the advantages and disadvantages of our off-line evolution approach. In detail, we explain the single parts of the framework and a novel homeostatic hormone-based controller, which is shaped by artificial evolution to control both, the non-aggregated single robotic modules and the joined high-level robotic organisms. As a first step we present results of this evolutionary shaped controller showing the potential for different motion behaviours.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "tIE8vEp-AF",
"year": null,
"venue": "ECAL (2) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=tIE8vEp-AF",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Embodiment of Honeybee's Thermotaxis in a Mobile Robot Swarm",
"authors": [
"Daniela Kengyel",
"Thomas Schmickl",
"Heiko Hamann",
"Ronald Thenius",
"Karl Crailsheim"
],
"abstract": "Searching an area of interest based on environmental cues is a challenging benchmark task for an autonomous robot. It gets even harder to achieve if the goal is to aggregate a whole swarm of robots at such a target site after exhaustive exploration of the whole environment. When searching gas leakages or heat sources, swarm robotic approaches have been evaluated in recent years, which were, in part, inspired by biologically motivated control algorithms. Here we present a bio-inspired control program for swarm robots, which collectively explore the environment for a heat source to aggregate. Behaviours of young honeybees were embodied on a robot by adding thermosensors in ‘virtual antennae’. This enables the robot to perform thermotaxis, which was evaluated in a comparative study of an egoistic versus a collective swarm approach.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "NriIyTF4B1",
"year": null,
"venue": "ECAL (2) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=NriIyTF4B1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Simulating Swarm Robots for Collision Avoidance Problem Based on a Dynamic Bayesian Network",
"authors": [
"Hiroshi Hirai",
"Shigeru Takano",
"Einoshin Suzuki"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "y92iihTUrA",
"year": null,
"venue": "ECAL (1)2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=y92iihTUrA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Swarm-Bots to the Rescue.",
"authors": [
"Rehan O'Grady",
"Carlo Pinciroli",
"Roderich Groß",
"Anders Lyhne Christensen",
"Francesco Mondada",
"Michael Bonani",
"Marco Dorigo"
],
"abstract": "We explore the problem of resource allocation in a system made up of autonomous agents that can either carry out tasks individually or, when necessary, cooperate by forming physical connections with each other. We consider a group transport scenario that involves transporting broken robots to a repair zone. Some broken robots can be transported by an individual ‘rescue’ robot, whereas other broken robots are heavier and therefore require the rescue robots to self-assemble into a larger and stronger composite entity. We present a distributed controller that solves this task while efficiently allocating resources. We conduct a series of real-world experiments to show that our system can i) transport separate broken robots in parallel, ii) trigger self-assembly into composite entities when necessary to overcome the physical limitations of individual agents, iii) efficiently allocate resources and iv) resolve deadlock situations.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "pXjlAAbmzkmA",
"year": null,
"venue": "ECAL (1)2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=pXjlAAbmzkmA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Towards an Autonomous Evolution of Non-biological Physical Organisms.",
"authors": [
"Roderich Groß",
"Stéphane Magnenat",
"Lorenz Küchler",
"Vasili Massaras",
"Michael Bonani",
"Francesco Mondada"
],
"abstract": "We propose an experimental study where simplistic organisms rise from inanimate matter and evolve solely through physical interactions. These organisms are composed of three types of macroscopic building blocks floating in an agitated medium. The dynamism of the medium allows the blocks to physically bind with and disband from each other. This results in the emergence of organisms and their reproduction. The process is governed solely by the building blocks’ local interactions in the absence of any blueprint or central command. We demonstrate the feasibility of our approach by realistic computer simulations and a hardware prototype. Our results suggest that an autonomous evolution of non-biological organisms can be realized in human-designed environments and, potentially, in natural environments as well.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "AonKIf22Ljj",
"year": null,
"venue": "ECAL (2) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=AonKIf22Ljj",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Impoverished Empowerment: 'Meaningful' Action Sequence Generation through Bandwidth Limitation",
"authors": [
"Tom Anthony",
"Daniel Polani",
"Chrystopher L. Nehaniv"
],
"abstract": "Empowerment is a promising concept to begin explaining how some biological organisms may assign a priori value expectations to states in taskless scenarios. Standard empowerment samples the full richness of an environment and assumes it can be fully explored. This may be too aggressive an assumption; here we explore impoverished versions achieved by limiting the bandwidth of the empowerment generating action sequences. It turns out that limited richness of actions concentrate on the “most important” ones with the additional benefit that the empowerment horizon can be extended drastically into the future. This indicates a path towards and intrinsic preselection for preferred behaviour sequences and helps to suggest more biologically plausible approaches.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "7kKEOhCdUdK",
"year": null,
"venue": "ECAL (2) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=7kKEOhCdUdK",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Hierarchical Behaviours: Getting the Most Bang for Your Bit",
"authors": [
"Sander G. van Dijk",
"Daniel Polani",
"Chrystopher L. Nehaniv"
],
"abstract": "Hierarchical structuring of behaviour is prevalent in natural and artificial agents and can be shown to be useful for learning and performing tasks. To progress systematic understanding of these benefits we study the effect of hierarchical architectures on the required information processing capability of an optimally acting agent. We show that an information-theoretical approach provides important insights into why factored and layered behaviour structures are beneficial.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HlZNcynBGy9",
"year": null,
"venue": "ECAL (2) 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=HlZNcynBGy9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Influence of Promoter Length on Network Convergence in GRN-Based Evolutionary Algorithms",
"authors": [
"Paul Tonelli",
"Jean-Baptiste Mouret",
"Stéphane Doncieux"
],
"abstract": "Genetic Regulation Networks (GRNs) are a model of the mechanisms by which a cell regulates the expression of its different genes depending on its state and the surrounding environment. These mechanisms are thought to greatly improve the capacity of the evolutionary process through the regulation loop they create. Some Evolutionary Algorithms have been designed to offer improved performance by taking advantage of the GRN mechanisms. A recent hypothesis suggests a correlation between the length of promoters for a gene and the complexity of its activation behavior in a given genome. This hypothesis is used to identify the links in in-vivo GRNs in a recent paper and is also interesting for evolutionary algorithms. In this work, we first confirm the correlation between the length of a promoter (binding site) and the complexity of the interactions involved on a simplified model. We then show that an operator modifying the length of the promoter during evolution is useful to converge on complex specific network topologies. We used the Analog Genetic Encoding (AGE) model in order to test our hypothesis.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SLsbJXWcU7j",
"year": null,
"venue": "UIST 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=SLsbJXWcU7j",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Optimizing Portrait Lighting at Capture-Time Using a 360 Camera as a Light Probe",
"authors": [
"Jane L. E",
"Ohad Fried",
"Maneesh Agrawala"
],
"abstract": "We present a capture-time tool designed to help casual photographers orient their subject to achieve a user-specified target facial appearance. The inputs to our tool are an HDR environment map of the scene captured using a 360 camera, and a target facial appearance, selected from a gallery of common studio lighting styles. Our tool computes the optimal orientation for the subject to achieve the target lighting using a computationally efficient precomputed radiance transfer-based approach. It then tells the photographer how far to rotate about the subject. Optionally, our tool can suggest how to orient a secondary external light source (e.g. a phone screen) about the subject's face to further improve the match to the target lighting. We demonstrate the effectiveness of our approach in a variety of indoor and outdoor scenes using many different subjects to achieve a variety of looks. A user evaluation suggests that our tool reduces the mental effort required by photographers to produce well-lit portraits.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Y__5pFa56WM",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Y__5pFa56WM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer e1mG",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "vVLkg8XRlz",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200323",
"forum_link": "https://openreview.net/forum?id=vVLkg8XRlz",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Behavior Based Dynamic Summarization on Product Aspects via Reinforcement Neighbour Selection",
"authors": [
"Zheng Gao",
"Lujun Zhao",
"Heng Huang",
"Hongsong Li",
"Changlong Sun",
"Luo Si",
"Xiaozhong Liu"
],
"abstract": "Dynamic summarization on product aspects, as a newly proposed topic, is an important task in E-commerce for tracking and understanding the nature of products. This can benefit both customers and sellers in different downstream tasks, such as explainable recommendations. However, most existing research works focus on analyzing product static reviews but miss dynamic sentiment changes. In this paper, we propose an innovative multi-task model to sample neighbour products whose information is simultaneously utilized to generate product summarization. In detail, a reinforcement learning approach selects neighbour products from a group of seed products by considering their pairwise similarities calculated from user behaviors. Meanwhile, a generative model helps to summarize product aspects via product descriptive phrases and selected neighbour products’ sentimental phrases. To the best of our knowledge, this is the first work that studies dynamic product summarization leveraging user behaviors instead of self-reviews. It means that the proposed approach can naturally address the cold-start scenario where few recent product reviews are available. Extensive experiments are conducted with real-world reviews plus behavior data to validate the proposed method against several strong alternatives.",
"keywords": [],
"raw_extracted_content": "Behavior Based Dynamic Summarization on Product\nAspects via Reinforcement Neighbour Selection\nZheng Gao1and Lujun Zhao2and Heng Huang3and Hongsong Li4\nand Changlong Sun5and Luo Si6and Xiaozhong Liu7\nAbstract. Dynamic summarization on product aspects, as a newly\nproposed topic, is an important task in E-commerce for tracking\nand understanding the nature of products. This can benefit both cus-tomers and sellers in different downstream tasks, such as explain-\nable recommendations. However, most existing research works fo-\ncus on analyzing product static reviews but miss dynamic sentiment\nchanges. In this paper, we propose an innovative multi-task model\nto sample neighbour products whose information is simultaneously\nutilized to generate product summarization. In detail, a reinforce-\nment learning approach selects neighbour products from a group of\nseed products by considering their pairwise similarities calculated\nfrom user behaviors. Meanwhile, a generative model helps to sum-\nmarize product aspects via product descriptive phrases and selected\nneighbour products’ sentimental phrases. To the best of our knowl-\nedge, this is the first work that studies dynamic product summariza-\ntion leveraging user behaviors instead of self-reviews. It means that\nthe proposed approach can naturally address the cold-start scenario\nwhere few recent product reviews are available. Extensive experi-\nments are conducted with real-world reviews plus behavior data to\nvalidate the proposed method against several strong alternatives.\n1 Introduction\nUnderstanding product aspect-sentiments and tracking its changes\nin a timely manner can support better decision-making for commer-cial purposes, such as enlightening online retailers to make timelysales plans[43]. Therefore, the Dynamic Summarization on Product\nAspects task is of great importance. It not only indicates products’\ndynamic aspect-sentiment changes, but also depicts the changes into\nreadable contexts for easier interpretation.\nPrior investigations on another similar topic, review summa-\nrization, mainly follow Natural Language Generation (NLG) ap-\nproaches. [31] introduces a new Recurrent Neural Network (RNN)\nvariant that uses gated connections to construct a character-level text\ngeneration model; [21] designs a multi-task model to predict rating\nand generate review summarization simultaneously using pairwise\nuser-product relationship; [36] uses a memory network for review\nsummarization generation. However, all these models are originally\ndesigned for static review summarization and take product reviews\n1Indiana University Bloomington, United States, email: [email protected]\n2Alibaba Group, China, email: [email protected]\n3Red Co.Ltd, China, email:[email protected]\n4Alibaba Group, China, email: [email protected]\n5Alibaba Group, China, email: [email protected]\n6Alibaba Group, China, email: [email protected]\n7Indiana University Bloomington, United States, email: [email protected] ExpensiveSale Event\nSummarization : The\nphone is a high-end version\nand very expensive!\nBeforerich user\nPrice Considerablefrugal user\nAfter\nSummarization : Price\nis acceptable. A worthy\nphone with cheap cost.neighbour phone \nneighbour phone \nFigure 1 : An example to illustrate how to dynamically select neigh-\nbour products (red phone →green phone) for depicting current prod-\nuct (blue phone) sentiment change before & after a sale event fromuser behaviors.\nas model input. 
They are ill-suited to depicting product sentiment changes because of (real-time) review data sparsity. For instance, after investigating 2.16 billion products sold by Taobao, a world-leading online shopping website owned by Alibaba, only 0.05% of products are able to gather more than 100 reviews within a three-day window. Thus, review-based approaches are not feasible for dynamic summarization at large scale because of the lack of instant reviews.

On the other hand, user behavior offers an alternative to address sentiment dynamics. Based on the statistics of the Taobao collection, more than 2.53% of products receive more than 100 multi-type user behaviors (e.g., 'Click' or 'Purchase') within a three-day window, a coverage 50 times greater than the review scope. On the theory side, Rational Choice Theory [5] shows that user shopping behavior rationality has a coherent relationship with product peculiarity. As Figure 1 depicts, when a sale event on a high-end phone brings frugal users' instant clicks, behavior-based algorithms can immediately consume this information and locate updated neighbour products (red phone → green phone), whose sufficient reviews help to update the product (blue phone) summarization. For review-based approaches, accumulating enough reviews to characterize this dynamic change may take a longer time.

In this paper, instead of review summarization, we aim to generate product aspect summarization in a dynamic manner. These are similar topics, but with large differences in terms of concept definition and generated summary context. Conceptually, review summarization reflects customers' subjective and personalized expressions on products, while aspect summarization contains objective descriptions only of restricted product aspects. Contextually, review summarization contains more emotional and general terms in a free format, such as 'I love its color so much', while aspect summarization generates more formal and descriptive expressions that only focus on specific aspects, such as 'price is more expensive than expected'.

Motivated by all of the above, we propose a Behavior based Dynamic Summarization (BDS) model to accommodate user behavior for dynamic product aspect summarization. [6] proves that user shopping preference is stable over a relatively long-term period, which offers the theoretical feasibility to learn product behavior representations from users' dynamic behavior and consistent shopping preferences. The learned representation supports neighbour product selection from a group of seed products with abundant instant reviews (Task 1) and meanwhile implicitly helps to generate aspect summarization from the product's own descriptive phrases and the neighbour products' filtered sentimental phrases (Task 2). As both user behavior and the seed products' instant reviews change across time, the selected neighbour products and generated summarizations change as well. The contribution of this work is threefold:

• To the best of our knowledge, this is the first effort to leverage user behavior for dynamic summarization on product aspects.
This work pioneers behavior-based summarization investigation.
• In our model, a reinforcement learning approach learns the sampling strategy on seed products with rewards from both sentiment (calculated from behavior-to-sentiment prediction) and semantic (calculated from summarization generation) viewpoints. When generating product aspect summarization, our model does not require the target product's reviews as model input, and is thus able to cope with review sparseness, even the zero-review problem.
• Experiments on a large E-commerce dataset show that our proposed model significantly outperforms the baselines from both automatic and human perspectives. Extensive studies also prove the efficacy of each model input component.

2 BDS Model

The overall model architecture is sketched in Figure 2. Our final goal is to generate dynamic product aspect summarizations using behavior data instead of product reviews. To optimize the BDS model, the involved data are categorized into two groups: training & validation data and seed product data. The training & validation data contain products with both sufficient user behavior and temporal reviews in an individual time period, while seed product data are products required to have sufficient reviews across all time periods. The training & validation data help to calculate dynamic product behavior representations (Section 2.2) and to pretrain the product sentiment prediction model (Section 2.3.1). Subsequently, a multi-task model dynamically selects neighbour products (Section 2.3.2) from the seed products, whose sentimental phrases filtered from reviews contribute to generating product aspect summarization (Section 2.4).

2.1 Data Prerequisite

Before optimizing the BDS model, the following four types of information need to be clarified and extracted as data prerequisites (a small code sketch of the sentiment-distribution computation follows this list):

Product Sentiment Distribution: AliNLP, a paid NLP service by Alibaba, is particularly developed to analyze E-commerce reviews' sentiment polarity on aspects such as 'Quality' or 'Fitness'. (Footnote 8: English tutorial: https://www.alibabacloud.com/help/product/57736.html; Chinese tutorial: https://data.aliyun.com/product/nlp) We use its sentiment analyzer to label and rate the sentiment of a review towards a specific aspect as 'None (0.5)', 'Negative (0)' or 'Positive (1)'. Each aspect sentiment score of a given product is calculated as the weighted average score of all three labels appearing in its eligible reviews (those with more than ten words). The product sentiment distribution is the vector of all aspect sentiment scores normalized by dividing by its sum, which is regarded as the ground truth for Sentiment Prediction Pretraining (Section 2.3.1). In this paper, four aspects are selected to construct the product sentiment distribution, as illustrated in the Experiments (Section 3).

Product Descriptive Phrase: Each product is associated with a description profile. After applying the AliNLP Named Entity Recognizer (NER) to these profiles, we only keep the noun phrases to characterize the related products.

Review Sentimental Phrase: The AliNLP sentiment analyzer can also extract sentimental phrases related to specific aspects from seed product reviews; e.g., 'poor quality' is a sentimental phrase of the 'Quality' aspect. Later on, these filtered sentimental phrases are concatenated with product descriptive phrases as the model input for aspect summarization (Section 2.4).

Seed Products: The top products with the most sufficient reviews across all t time periods are regarded as seed products. Their aspect sentiment distributions are pre-calculated directly from reviews in each time period and are later used to guide neighbour product selection (Section 2.3).
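As a toy rendering of the sentiment-distribution computation (ours; the real labels come from the paid AliNLP service), with label scores 0 / 0.5 / 1 for 'Negative' / 'None' / 'Positive':

import numpy as np

def sentiment_distribution(review_labels):
    """Aspect sentiment scores from per-review label scores; the input is
    a hypothetical stand-in for AliNLP output.  `review_labels[aspect]`
    lists 0 / 0.5 / 1 scores over the product's eligible reviews."""
    aspects = sorted(review_labels)
    scores = np.array([np.mean(review_labels[a]) for a in aspects])
    return aspects, scores / scores.sum()    # normalized distribution

labels = {"Quality": [1, 1, 0.5], "Fitness": [0, 0.5],
          "Material": [1, 0], "Cost-performance Ratio": [0.5, 0.5, 1]}
print(sentiment_distribution(labels))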
2.2 Dynamic Product Behavior Representation

Matrix factorization is applied to dynamic user behaviors to learn product behavior representations across all time periods. In the initial time period, we learn both user and product representations for m types of behavior via the related user behavior matrices {B_1^(0), ..., B_m^(0)}, where a behavior type can refer to 'Click' or 'Purchase', etc. The i-th type of user behavior B_i^(0) is a sparse matrix in which each row denotes a user and each column denotes a product; the data points in B_i^(0) denote users' i-th type of behaviors on products. Considering all potential methods listed in Section 3.2, we empirically select Probabilistic Matrix Factorization (PMF) to decompose B_i^(0) into a user representation matrix U_i^(0) and a product representation matrix P_i^(0) satisfying:

  B_i^(0) = U_i^(0) (P_i^(0))^T,  i = 1, ..., m   (1)

As user shopping preference is consistently stable [6], once the user representation U_i^(0) is calculated from the initial period, it remains unchanged and bridges the dynamic product behavior representations in subsequent time periods.

In the t-th time period, the matrix form of Least Squares Approximation [26] helps to calculate the i-th type of product behavior representation P_i^(t) given the related user behavior B_i^(t) and the consistent user representation U_i^(0):

  B_i^(t) = U_i^(0) (P_i^(t))^T,  i = 1, ..., m
  ⇒ (U_i^(0))^T (B_i^(t) − U_i^(0) (P_i^(t))^T) = 0
  ⇒ (U_i^(0))^T U_i^(0) (P_i^(t))^T = (U_i^(0))^T B_i^(t)
  ⇒ (P_i^(t))^T = ((U_i^(0))^T U_i^(0))^{-1} (U_i^(0))^T B_i^(t)
  ⇒ P_i^(t) = (B_i^(t))^T U_i^(0) ((U_i^(0))^T U_i^(0))^{-1}   (2)

Figure 2: The overall architecture of the BDS model in the t-th time period. Each color refers to an individual model component. Dynamic Product Behavior Representation and Sentiment Prediction Pretraining are optimized before training the main multi-task model (Neighbour Product Selection & Summarization Generation). Red dashed lines show the workflows to calculate the RL reward and, in return, to optimize the RL policy. Data Prerequisite steps are omitted for simplicity but are illustrated in detail in the main paper.
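To make the closed-form update of Eq. (2) concrete, here is a small NumPy sketch of ours (not the authors' code); a truncated SVD stands in for the PMF factorization of Eq. (1), and all sizes and matrix names are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n_users, n_products, k = 50, 80, 8

# Initial period: factorize B^(0) into U^(0) P^(0)^T (SVD stands in for PMF).
B0 = rng.poisson(0.3, size=(n_users, n_products)).astype(float)
U, s, Vt = np.linalg.svd(B0, full_matrices=False)
U0 = U[:, :k] * s[:k]          # user representations, kept fixed afterwards
P0 = Vt[:k].T                  # initial product representations

# Period t: new behavior matrix, same users.  Eq. (2) in closed form:
# P^(t) = B^(t)^T U^(0) (U^(0)^T U^(0))^{-1}
Bt = rng.poisson(0.3, size=(n_users, n_products)).astype(float)
Pt = Bt.T @ U0 @ np.linalg.inv(U0.T @ U0)

# Equivalently (and more stably), solve the least-squares system directly:
Pt_lstsq = np.linalg.lstsq(U0, Bt, rcond=None)[0].T
assert np.allclose(Pt, Pt_lstsq)
print(Pt.shape)                # (n_products, k)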
2.3 Neighbour Product Selection Task

2.3.1 Sentiment Prediction Pretraining

In the t-th time period, we combine product behavior representation and category to estimate the product aspect sentiment distribution via a hybrid CNN-MLP (Convolutional Neural Network and Multilayer Perceptron) approach. Let c be the category of a target product, stored originally as a one-hot embedding. To better represent its enriched information, we first apply a one-layer MLP with tanh(·) activation to convert it into a dense vector v_c:

  v_c = tanh(W_c c + b_c)   (3)

where W_c and b_c denote the weight matrix and bias, respectively.

For the same product, we can also obtain its multi-type behavior representations {p_1^(t), ..., p_m^(t)}, where p_m^(t) ∈ P_m^(t) denotes its m-th type behavior representation. As a CNN kernel can help to filter out the most important dimensions (local features) from a vector, a CNN layer cnn(·) with a max-pooling mechanism maxpooling(·) is applied to capture the product's i-th type local feature representation l_i^(t):

  l_i^(t) = maxpooling(tanh(cnn(p_i^(t)))),  i = 1, ..., m   (4)

Subsequently, we average all m types of local feature representations into a single vector l^(t). Similarly, we calculate the average product behavior vector p^(t) as the global feature representation of the m types of behaviors:

  l^(t) = (1/m) Σ_{i=1}^{m} l_i^(t),   p^(t) = (1/m) Σ_{i=1}^{m} p_i^(t)   (5)

In the end, a product has three types of information representation: the global feature representation p^(t), the local feature representation l^(t), and the category representation v_c. We concatenate all of them to calculate the estimated product sentiment distribution d^(t) over all pre-selected aspects via a one-layer MLP normalized by a softmax(·) function:

  d^(t) = softmax(W_d [p^(t), l^(t), v_c] + b_d)   (6)

where W_d and b_d are the related weight matrix and bias, and [,] denotes the concatenation operation.

This pretraining process is optimized by minimizing the cross entropy between the estimated product sentiment distribution d^(t) and the actual product sentiment distribution calculated from product reviews (Section 2.1). This optimization step has to be done ahead of the main multi-task model introduced in the following sections.
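A sketch of the CNN-MLP of Eqs. (3)–(6) in PyTorch; the layer sizes (64-dimensional category embedding, 16 CNN channels, kernel width 5) and the global max pooling are our assumptions, since the paper does not disclose these settings:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentimentPredictor(nn.Module):
    def __init__(self, n_categories, dim=300, n_aspects=4, channels=16):
        super().__init__()
        self.cat = nn.Linear(n_categories, 64)                  # Eq. (3)
        self.cnn = nn.Conv1d(1, channels, kernel_size=5)        # Eq. (4)
        self.head = nn.Linear(dim + channels + 64, n_aspects)   # Eq. (6)

    def forward(self, behavior, category):
        # behavior: (batch, m, dim) -- the m behavior-type vectors p_i
        b, m, dim = behavior.shape
        v_c = torch.tanh(self.cat(category))                    # Eq. (3)
        feats = torch.tanh(self.cnn(behavior.reshape(b * m, 1, dim)))
        l_i = feats.max(dim=-1).values.reshape(b, m, -1)        # Eq. (4)
        l_t = l_i.mean(dim=1)                                   # Eq. (5), local
        p_t = behavior.mean(dim=1)                              # Eq. (5), global
        logits = self.head(torch.cat([p_t, l_t, v_c], dim=-1))
        return F.softmax(logits, dim=-1)                        # Eq. (6)

# Pretraining step: cross entropy against the review-based distribution.
model = SentimentPredictor(n_categories=100)
behavior = torch.randn(8, 3, 300)        # m = 3 behavior types
category = F.one_hot(torch.randint(0, 100, (8,)), 100).float()
target = F.softmax(torch.randn(8, 4), dim=-1)
d_t = model(behavior, category)
loss = -(target * torch.log(d_t + 1e-9)).sum(dim=-1).mean()
loss.backward()
print(float(loss))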
2.3.2 Reinforcement Neighbour Selection

In the t-th time period, among h seed products with sufficient reviews, a policy gradient approach [30] learns an action to select s neighbour products A = {α_1^(t), α_2^(t), ..., α_s^(t)} out of the h candidates, where α_s^(t) denotes the s-th selected neighbour product. As we do not consider the sampling order of the neighbour products, the reinforcement approach is a one-step Markov Decision Process (MDP) with a single state S and a single action A.

Assume there are n products in total across all time periods. Given a target product, the initial observation O is the product itself, represented as a one-hot embedding ∈ R^n. The state S is learned via a two-layer Multilayer Perceptron (MLP) on the initial observation O:

  S = softmax(W_2 tanh(W_1 O + b_1))   (7)

where W_1, W_2 and b_1 denote the related weight matrices and bias, respectively.

The learned state S ∈ R^h is the selection probability distribution over the h seed products. Assuming an action is taken to sample s neighbour products with related probability weights {ω_1^(t), ω_2^(t), ..., ω_s^(t)} ∈ S, the sampling policy π_{Θ1}(A|S) can therefore be calculated as:

  π_{Θ1}(A|S) = s! Π_{i=1}^{s} ω_i^(t)   (8)

where Θ1 denotes the parameters to be learned. The factorial of s (s!) is the number of permutations of the selected neighbour products, as the neighbour products are sequence insensitive; Π_{i=1}^{s} ω_i^(t) is the generative probability of each permutation.

To assess the fitness of the s selected neighbour products for the target product, we design two dynamic rewards: a sentiment reward to measure aspect-sentiment similarity, and a semantic reward to calculate the content similarity between the s neighbour products and the target product.

Sentiment Reward: In the t-th time period, we estimate the target product's sentiment distribution d_a^(t) via the Sentiment Prediction Pretraining (Section 2.3.1). The sentiment distributions of all its selected neighbour products {d_1^(t), ..., d_s^(t)} are calculated directly from their reviews (Section 2.1). To evaluate pairwise distribution similarity, the Pearson correlation [4] yields the sentiment reward R_{sen,i}^(t) of the i-th selected neighbour product α_i^(t):

  R_{sen,i}^(t) = E[(d_a^(t) − μ(d_a^(t)))(d_i^(t) − μ(d_i^(t)))] / (σ(d_a^(t)) σ(d_i^(t))),  i = 1, ..., s   (9)

where E(·) denotes the expectation, μ(·) the mean and σ(·) the standard deviation. A larger similarity score offers a higher reward to the related neighbour product.

Semantic Reward: In the t-th time period, the semantic reward of the neighbour products is measured by the accuracy of the generated product summarization. In this paper, we use word-level Jaccard similarity as the indicator to calculate the semantic reward R_{sem}^(t) for all selected neighbour products. It contains two parts: the averaged Jaccard similarity between all neighbour products' original reviews and the real product summarization, and the Jaccard similarity between the generated product summarization and the actual product summarization:

  R_{sem}^(t) = (1/s) Σ_{i=1}^{s} |R_i^(t) ∩ Y^(t)| / |R_i^(t) ∪ Y^(t)| + |Ŷ^(t) ∩ Y^(t)| / |Ŷ^(t) ∪ Y^(t)|   (10)

where R_i^(t) denotes all original reviews of the i-th neighbour product α_i^(t), Ŷ^(t) denotes the generated summarization of the target product, and Y^(t) denotes its actual summarization. The total reward R_A^(t) for the neighbour products is the weighted sum of the sentiment reward and the semantic reward, controlled by a weighting factor γ:

  R_A^(t) = Σ_{i=1}^{s} ω_i^(t) R_{sen,i}^(t) + γ R_{sem}^(t)   (11)

Task Optimization: We use the policy gradient method to optimize the sampling policy, aiming to maximize the expected total reward for the neighbour products. The expected reward J_{sel}(Θ1) in the t-th time period is:

  J_{sel}(Θ1) = E_{A∼π_{Θ1}(A|S)} [R_A^(t)]   (12)

The gradient is then estimated using the likelihood-ratio trick [30]:

  ∇_{Θ1} J_{sel}(Θ1) = ∇_{Θ1} Σ_A π_{Θ1}(A|S) R_A^(t) ≈ (1/N) Σ_{i=1}^{N} ∇_{Θ1} log π_{Θ1}(A_i|S) R_{A_i}^(t)   (13)

where A_i denotes the i-th of N randomly sampled actions (each selecting s neighbour products from the h seed products).
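The two rewards of Eqs. (9)–(11) are easy to render directly. The sketch below is ours: it treats each neighbour's reviews as one bag of words, and the weighting factor γ = 0.5 and all toy inputs are illustrative:

import numpy as np

def sentiment_reward(d_target, d_neighbour):
    """Pearson correlation between two aspect-sentiment distributions, Eq. (9)."""
    return float(np.corrcoef(d_target, d_neighbour)[0, 1])

def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def semantic_reward(neighbour_reviews, generated, actual):
    """Word-level Jaccard reward of Eq. (10)."""
    avg = np.mean([jaccard(r, actual) for r in neighbour_reviews])
    return avg + jaccard(generated, actual)

def total_reward(weights, sen_rewards, sem, gamma=0.5):
    """Eq. (11): weighted sentiment rewards plus gamma-scaled semantic reward."""
    return float(np.dot(weights, sen_rewards)) + gamma * sem

# Toy illustration with s = 2 neighbours over 4 aspects:
d_a = np.array([0.3, 0.2, 0.3, 0.2])
sen = [sentiment_reward(d_a, np.array([0.28, 0.22, 0.30, 0.20])),
       sentiment_reward(d_a, np.array([0.10, 0.40, 0.10, 0.40]))]
sem = semantic_reward(["price is acceptable and cheap"],
                      "price is acceptable", "the price is acceptable")
print(total_reward(np.array([0.6, 0.4]), np.array(sen), sem))

Plugging such rewards into sampled actions then gives the Monte Carlo gradient estimate of Eq. (13).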
2.4 Summarization Generation Task

In the t-th time period, the filtered neighbour product sentimental phrases, together with the product's own descriptive phrases, are concatenated into a sequence X^(t) = {x_1, ..., x_w}. It is used as the input of a Neural Machine Translation (NMT) model [2] to generate the aspect summarization sequence Y^(t) = {y_1, ..., y_k}, where w and k denote the input and output sequence lengths, respectively. Filtering out most emotional and otherwise irrelevant words in advance helps map the input to aspect-oriented summarizations instead of subjective review summaries.

The input sequence X^(t) = {x_1, ..., x_w} is fed token by token into the encoder (a single-layer bidirectional LSTM), producing a sequence of encoder hidden states {e_1, ..., e_w}. In decoding step i, the decoder (a single-layer unidirectional LSTM) has a decoder hidden state h_i. Its context vector u_i is generated via an attention mechanism [33] over all encoder hidden states and the current decoder hidden state:

  a_{ij} = attention(h_i, e_j),  j = 1, ..., w
  a*_{ij} = exp(a_{ij}) / Σ_{k=1}^{w} exp(a_{ik})
  u_i = Σ_{j=1}^{w} a*_{ij} e_j   (14)

where a_{ij} denotes the attention weight of encoder hidden state e_j, and a*_{ij} is the weight normalized by the softmax function. The weighted sum of all encoder hidden states, u_i, is the context vector for the current step i, reflecting the auxiliary information from the input sequence.

A one-layer MLP is subsequently applied to the combination of the context vector u_i and the decoder hidden state h_i to generate the vocabulary probability distribution P_vocab:

  P_vocab = softmax(W_o [u_i, h_i] + b_o)   (15)

where W_o and b_o are the related weight and bias.

In decoding step i, the generation loss for the target word y_i is its negative log likelihood, −log P_vocab(y_i). The overall generation loss J_{gen}(Θ2) is the average of all k step losses, where Θ2 are the related parameters to be optimized:

  J_{gen}(Θ2) = (1/k) Σ_{i=1}^{k} −log P_vocab(y_i)   (16)

In the multi-task model, both J_{sel}(Θ1) and J_{gen}(Θ2) need to be minimized during the model training process. Each task is learned separately and alternately, after taking a certain number of training data batches in its optimization process.
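Eq. (14) leaves the exact form of attention(·) unspecified; the NumPy sketch below assumes one common choice, a bilinear score a_ij = h_i^T W_a e_j (our assumption, with illustrative dimensions):

import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def context_vector(h_i, E, Wa):
    """Eq. (14): scores over encoder states, softmax, weighted sum."""
    scores = E @ (Wa.T @ h_i)        # a_ij for j = 1..w
    weights = softmax(scores)        # a*_ij
    return weights @ E, weights      # u_i = sum_j a*_ij e_j

rng = np.random.default_rng(1)
w, d_enc, d_dec = 6, 8, 8
E = rng.normal(size=(w, d_enc))      # encoder hidden states e_1..e_w
h = rng.normal(size=d_dec)           # decoder state h_i
Wa = rng.normal(size=(d_dec, d_enc))
u, attn = context_vector(h, E, Wa)
print(u.shape, attn.round(3))        # (8,) and weights summing to 1

The context vector u would then be concatenated with h and fed through the output layer of Eq. (15).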
3 Experiments

3.1 Dataset

From Taobao, a world-leading E-commerce website owned by Alibaba, we collect user behavior and product reviews during the period from Apr/12/2018 to Jul/10/2018. The raw dataset is split into two parts: the first two weeks of data, as the initial time period t_0, serve to generate the consistent user shopping preference (Section 2.2). For the rest of the data, we empirically set fifteen days as the time window to split it evenly into five consecutive time periods (t_1 ∼ t_5). In each time period, four types of information need to be prepared and calculated in advance to support model training: multi-type user behavior, product profile, product sentiment distribution, and product ground-truth summarization.

First, multi-type user behavior is used for generating dynamic product behavior representations (Section 2.2). Three types of user behavior are considered in this paper: 'Click', 'Add Cart' and 'Purchase'.

Second, from the product profile, the product category is encoded as a one-hot embedding for sentiment distribution prediction (Section 2.3). Descriptive phrases are extracted from product description profiles to support aspect summarization generation (Section 2.4).

Third, the product sentiment distribution is calculated from reviews (Section 2.1) as the ground truth of sentiment prediction pretraining (Section 2.3). We consider four common aspects: 'Quality', 'Cost-performance Ratio', 'Fitness' and 'Material'.

Fourth, in Taobao, a user can rate reviews with a thumbs-up or thumbs-down signal. For each product in the training data, we first select its top ten reviews relevant to the four picked aspects and with the largest number of thumbs-ups. After that, using the AliNLP sentiment analyzer, we locate and filter out the aspect-related sentences from these reviews as the ground truth for aspect summarization generation (Section 2.4).

In total, the whole dataset contains 125,598 products, 51,366 users and 108,749,788 multi-type behaviors. We use 80% of the data for training, 10% for validation and 10% for testing.

3.2 Baselines and Settings

As the main contribution of this paper lies in the reinforcement neighbour product selection task, five baselines are chosen for comparison from the neighbour product selection viewpoint. 1) Title Similarity (TS): neighbour products are selected with the shortest Levenshtein distance [26] on titles. 2) Random: neighbour products are randomly selected. 3) PMF: PMF [25] first learns product embeddings via matrix factorization; neighbour products are selected with the highest cosine similarity on the product embeddings. 4) GBPR: GBPR [28] uses a Bayesian collaborative filtering method to assign user preferences to products; neighbour products are selected as those with the most similar user preferences. 5) EALS: EALS [20] is a fast matrix factorization approach that learns product embeddings; neighbour products are selected with the highest cosine similarity on the product embeddings. After the neighbour products are selected, their reviews' sentimental phrases together with the product's own descriptive phrases are utilized to generate aspect summarization via the same NMT model structure [2] as our BDS model. We intentionally apply the same generative model so as to compare the effectiveness of neighbour product selection across models.

To assess the usefulness of neighbour product information, our model is also compared with two generative models that leverage only the product's own metadata: 6) Raw Title (RT) uses the product title, and 7) Title-Review (TR) uses the concatenation of the product title and its sparse reviews, as the input of the same NMT model to generate aspect summarization.

A mini-batch (size = 20) Adam SGD optimizer is used to train our model for 100 epochs. The learning rate is 0.01. The dimension of the product & user representations is 300 (Section 2.2). s = 5 neighbour products are selected from h = 100 seed products (Section 2.3). The vocabulary size in summarization generation (Section 2.4) is 15K. Other parameter details will be offered once the paper gets published.

3.3 Evaluation Metrics

In this paper, we report model performance via the following metrics from both automatic and human perspectives. Automatic metrics evaluate the accuracy of the generated summarizations by objectively calculating the correctness of predicted words, while human metrics evaluate the semantic quality of generated summarizations by subjectively considering their sentence readability.

• ROUGE (RG): [22] evaluates text summarization quality by comparing the overlap between generated sequences and the ground truth. We report RG-1, RG-2 and RG-L in this paper.
• METEOR: [10] is the harmonic mean of the generated summarizations' unigram precision and recall. It offers stemming and synonymy matching along with standard exact word matching.
• Human Evaluation: We generate summaries of 200 random products with each of the seven baselines and our model. To compare the BDS model with each baseline, three human votes are collected from a crowd-sourcing platform for each pair of product summaries (200 × 7 × 3 = 4,200 human judgements). People are required to vote for the summary with more comprehensive information and better sentence structure. We define the winning time rate (WTR) and the winning count rate (WCR) as two human evaluation metrics. Given a pair of models A and B, the WTR of A is the ratio of winning products for A, and the WCR of A is the ratio of winning votes for A. For instance, given two products α and β, if method A gets two votes for α and one vote for β, its WTR is (1+0)/(1+1) = 0.5 and its WCR is (2+1)/(3+3) = 0.5. (A tiny code rendering of this example follows the list.)
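The WTR/WCR definitions reduce to a few lines of arithmetic; here is our rendering of the worked example above, with hypothetical vote lists:

def wtr_wcr(votes_a, votes_b):
    """Winning time rate and winning count rate for model A.
    votes_a[i] / votes_b[i]: votes for A / B on the i-th product."""
    wins = sum(a > b for a, b in zip(votes_a, votes_b))
    wtr = wins / len(votes_a)
    wcr = sum(votes_a) / (sum(votes_a) + sum(votes_b))
    return wtr, wcr

# The worked example: A gets 2 of 3 votes on product alpha, 1 of 3 on beta.
print(wtr_wcr([2, 1], [1, 2]))   # (0.5, 0.5)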
To com-pare BDS model with each baseline, three human votes are col-lected from a crowd-sourcing platform for each pair of product\nsummaries ( 200∗7∗3=4,200 human judgements). People are\nrequired to vote the one with more comprehensive information\nand better sentence structure. We define winning time rate (WTR)\nandwinning count rate (WCR) as two human evaluation metrics.\nGiven a pair of model Aand model B, the WTR of Ais the ra-\ntio of winning products for Aand the WCR of Ais the ratio of\nwinning votes for A. For instance, given two products αandβ,\nif method Agets two votes for αand one vote for β, its WTR is\n(1+0)/(1+1) = 0.5 and WCR is (2+1)/(3+3) = 0.5 .\n3.4 Results\nAs neighbour product selection (Task 1) is an unsupervised approachwhose ultimate goal is to support product summarization (Task 2),we only report the experimental results on Task 2 to demonstrate ourmodel’s superiority over the baselines across different timestamps.The efficacy of each input component is also presented here.\n3.4.1 Automatic Evaluation\nWe run our model ten times and report the average evaluation re-sults in the left part of Table 1. To verify our model’s superiority, wecalculate the performance differences between our model and each\nbaseline on each automatic metric for all the ten runs, and apply a\nt-test [11] on the ten differences to check whether the performancedifference is significant.\nRT performs the worst as the product title contains limited and\nstatic information to reveal product sentiment dynamics. From TRresults, adding sparse reviews can improve model performance, butis still worse than the rest approaches, which strongly indicates theeffectiveness of using neighbour product reviews for generating as-pect summarization. Most of neighbour selection based baselinesZ.Gao etal./Behavior Based Dynamic Summarization onProduct Aspects viaReinfor cement Neighbour Selection 2026\nhave roughly similar results except TS model, meaning that behav-\nior based is better than content based neighbour selection. Surpris-ingly, Random model can achieve a relatively satisfying result. Onepossible reason is that because most of reviews in online shopping\nwebsites are positive, randomly sampled neighbour product might\nreceive reviews containing relevant sentimental phrases. Howsoever,our BDS model outperforms all baselines significantly ( p<0.01)o n\nall metrics, demonstrating the superiority of our proposed reinforce-\nment neighbour selection.\nModelAutomatic Human\nRG-1 RG-2 RG-L METEOR WTR WCR\nRT 36.19 10.01 27.95 16.05 0.00 0.01\nTR 43.05 19.08 33.10 19.24 0.07 0.13\nTS 42.97 18.85 32.85 19.34 0.13 0.24\nRandom 44.21 19.58 33.98 19.86 0.03 0.13\nPMF 45.72 20.25 35.21 20.80 0.14 0.28\nGBPR 45.09 19.67 34.55 20.36 0.32 0.39\nEALS 44.66 19.50 34.28 20.32 0.08 0.17\nBDS 51.11* 23.55* 39.86* 22.99* 0.89 0.81\nTable 1 : Automatic & human evaluation results of our model com-\npared with baselines. Symbol ‘*’ highlights the cases where our\nmodel significantly beats all baselines with pvalue smaller than 0.01.\n3.4.2 Human Evaluation\nThe right part of Table 1 reports the human evaluation result for allthe models. Higher WTR and WCR scores indicate the related modelcan generate better structured summaries from human perspective.\nFor each baseline, WTR and WCR scores are the pairwise compari-\nson results with the BDS model. RT model performs the worst. Andcontent based methods (RT and TR) also perform worse than neigh-bour selection based methods in general. 
3.4 Results

As neighbour product selection (Task 1) is an unsupervised approach whose ultimate goal is to support product summarization (Task 2), we only report the experimental results on Task 2 to demonstrate our model's superiority over the baselines across different timestamps. The efficacy of each input component is also examined.

3.4.1 Automatic Evaluation

We run our model ten times and report the average evaluation results in the left part of Table 1. To verify our model's superiority, we calculate the performance differences between our model and each baseline on each automatic metric over the ten runs, and apply a t-test [11] on the ten differences to check whether the performance difference is significant (a sketch of this test follows Table 1).

RT performs the worst, as the product title contains only limited, static information and cannot reveal product sentiment dynamics. The TR results show that adding the sparse reviews improves performance, but TR is still worse than the remaining approaches, which strongly indicates the effectiveness of using neighbour product reviews for generating aspect summarization. Most of the neighbour-selection baselines have roughly similar results, except the TS model, meaning that behavior-based neighbour selection is better than content-based neighbour selection. Surprisingly, the Random model achieves a relatively satisfying result. One possible reason is that, because most reviews on online shopping websites are positive, even a randomly sampled neighbour product may receive reviews containing relevant sentimental phrases. Nevertheless, our BDS model significantly outperforms all baselines (p < 0.01) on all metrics, demonstrating the superiority of the proposed reinforcement neighbour selection.

Table 1: Automatic and human evaluation results of our model compared with the baselines. The left block (RG-1, RG-2, RG-L, METEOR) contains the automatic metrics and the right block (WTR, WCR) the human metrics. The symbol '*' marks the cases where our model significantly beats all baselines with p-value smaller than 0.01.

| Model | RG-1 | RG-2 | RG-L | METEOR | WTR | WCR |
|--------|-------|-------|-------|--------|------|------|
| RT | 36.19 | 10.01 | 27.95 | 16.05 | 0.00 | 0.01 |
| TR | 43.05 | 19.08 | 33.10 | 19.24 | 0.07 | 0.13 |
| TS | 42.97 | 18.85 | 32.85 | 19.34 | 0.13 | 0.24 |
| Random | 44.21 | 19.58 | 33.98 | 19.86 | 0.03 | 0.13 |
| PMF | 45.72 | 20.25 | 35.21 | 20.80 | 0.14 | 0.28 |
| GBPR | 45.09 | 19.67 | 34.55 | 20.36 | 0.32 | 0.39 |
| EALS | 44.66 | 19.50 | 34.28 | 20.32 | 0.08 | 0.17 |
| BDS | 51.11* | 23.55* | 39.86* | 22.99* | 0.89 | 0.81 |
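The significance test described above amounts to a one-sample t-test on the ten per-run score differences between BDS and a baseline. A minimal SciPy sketch is shown below; the score arrays are hypothetical stand-ins for the ten-run results, not the paper's numbers.

```python
import numpy as np
from scipy import stats

# Hypothetical ROUGE-1 scores from ten runs of BDS and one baseline.
bds_runs = np.array([51.0, 51.2, 51.3, 50.9, 51.1,
                     51.2, 51.0, 51.3, 51.1, 51.0])
baseline_runs = np.array([45.6, 45.8, 45.7, 45.5, 45.9,
                          45.7, 45.8, 45.6, 45.7, 45.9])

# t-test on the ten paired differences; H0: the mean difference is zero.
diffs = bds_runs - baseline_runs
t_stat, p_value = stats.ttest_1samp(diffs, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.01
```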
3.4.2 Human Evaluation

The right part of Table 1 reports the human evaluation results for all models. Higher WTR and WCR scores indicate that the corresponding model generates better-structured summaries from a human perspective. For each baseline, the WTR and WCR scores are the pairwise comparison results against the BDS model. The RT model performs the worst, and the content-based methods (RT and TR) generally perform worse than the neighbour-selection-based methods. GBPR performs much better than the remaining baselines and beats our model in roughly 30% of the summary pairs. The two scores reported for our model are the averages of the pairwise comparisons with all baselines, and show that our model beats the other baselines on about 90% of all summary pairs.

Both the automatic and the human evaluation results demonstrate that content-based baselines perform worse than neighbour-selection-based baselines. However, there are still some inconsistencies between the two evaluations. Although GBPR performs similarly to the other neighbour-based baselines on the automatic metrics, it unexpectedly outperforms them on the human metrics, which indicates that how sequences are organized has a large impact on the semantic quality of a summary.

3.4.3 Dynamic Performance

To better present our model's dynamic performance, we visualize the automatic evaluation results of our model and the best three baselines (PMF, EALS and GBPR) over the five consecutive time periods in Figure 3; the three best baselines are all neighbour selection approaches. Across all time periods, the performance of all four models is relatively consistent and follows a similar trend: it drops slightly in the fourth time period and recovers immediately in the next one. Among the three baselines, GBPR achieves the best overall result; it does not perform well in the beginning, but keeps improving and beats the other baselines in the later time periods. In general, the performances of the three baselines are not far from each other, especially from time period t2 to t4, where their plotted lines are basically mingled together. Our model, however, achieves a far better result: its lines are always significantly above the other three on all four evaluation metrics. Moreover, the performance curves on the four reported metrics follow a similar trend, with the three ROUGE-based metrics tracking each other even more closely than METEOR.

[Figure 3: Automatic evaluation results of our model compared with the top 3 baselines in five consecutive time periods (t1-t5); panels: (a) ROUGE-1, (b) ROUGE-2, (c) ROUGE-L, (d) METEOR.]

3.4.4 Input Component Evaluation

As mentioned above, our model requires three types of product input information: user behavior, product category, and profile descriptive phrases. To examine whether all of this information is effective, we conduct an ablation study that removes each type of information in turn while holding the rest fixed (a sketch of this protocol is given at the end of this subsection). Table 2 shows the performance difference between the models with truncated input and the original BDS model. In detail, removing user behavior (product category) means that only the product category (user behavior) is used for the sentiment prediction pretraining; removing the product profiles means that only the neighbour products' sentimental phrases contribute to summarization generation.

Table 2: Performance differences between the models with truncated input and the original BDS model. '-' means removing the corresponding input component from our model.

| Model | RG-1 | RG-2 | RG-L | METEOR |
|------------|--------|--------|--------|--------|
| - Behavior | -14.21 | -12.63 | -11.37 | -5.72 |
| - Category | -4.74 | -2.94 | -4.27 | -1.65 |
| - Profile | -5.78 | -3.77 | -5.33 | -2.34 |

From Table 2, removing any input component always decreases model performance, which indicates that all three types of input are useful in the BDS model. Compared with the product category and the descriptive phrases, user behavior clearly has the most influential impact, since removing it causes the largest drop over all four reported metrics; surprisingly, it pushes the model's performance down close to that of the worst baseline, RT. This explicitly verifies the importance of user behavior for product summarization generation. Moreover, the performance decrease from removing the profile descriptive phrases is larger than that from removing the product category embedding. The reason might be that the profile descriptive phrases are used directly for summarization generation, while the product category only contributes to the sentiment prediction pretraining, which has an implicit impact and cannot directly reflect dynamic aspect-sentiment changes.
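The ablation protocol is a simple leave-one-component-out loop. The sketch below shows its shape only; `train_and_eval` and the component names are hypothetical stand-ins for the authors' training pipeline.

```python
COMPONENTS = ["behavior", "category", "profile"]

def ablation_study(train_and_eval, metrics=("RG-1", "RG-2", "RG-L", "METEOR")):
    """Retrain with one input component removed at a time and report
    the drop relative to the full model (the rows of Table 2)."""
    full = train_and_eval(disabled=None)            # original BDS model
    deltas = {}
    for comp in COMPONENTS:
        truncated = train_and_eval(disabled=comp)   # e.g. the '- Behavior' row
        deltas[comp] = {m: truncated[m] - full[m] for m in metrics}
    return deltas
```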
3.4.5 Case Study

We conduct a case study on an actual dress (Table 3) to show how our model summarizes its aspect-sentiment changes in a timely manner. In this case we only consider sentiment changes on the product material, and show the material-related content from the full generated summaries. In the real world, the sentiment on the dress material went from positive to negative (as concluded from the summarization) due to a manufacturer counterfeit, reported by customers in time t2. Our model is able to detect this aspect-sentiment change from the updated customer shopping behaviors and dynamically locate neighbour products with similar issues (such as material problems). The generated summaries on the 'Material' aspect in Table 3 support our model's effectiveness. Moreover, the generated summaries contain only objective descriptions instead of the emotional expressions used in personal reviews, which is a more formal and aspect-concentrated style of description than summarized reviews.

Table 3: A real case showing our model's generated dynamic sentiment summarization for a dress on the product material (in the original, positive and negative words were highlighted in green and red, respectively).

| Time | Ground Truth | BDS |
|------|--------------|-----|
| t1 | Material: Positive. "The material of the dress is super nice and soft to wear. It is made of cotton and touches like a high-end dress. A perfect gift for aged women." | Material: Positive. "The dress has good material. It is made of cotton and touches very soft. A perfect gift for mums who have high-standard requirements on dress material." |
| t2 | Material: Negative. "The dress material is not as good as promised. It is not good for aged women with high requirements. It touches as cheap material and smells weird." | Material: Negative. "The material is terrible. It touches like carded yarn with much cheaper material as promised for mums. It has a weird smell when wearing it." |

4 Related Works

Behavior Analysis: Revealing the coupling between user behavior and product peculiarities has been explored for years. By applying matrix factorization techniques, the user-product behavior matrix can be decomposed into a user matrix and a product matrix in which each product is represented as a dense vector [25]. BPR [29] proposes a Bayesian approach to learn personalized user preferences on products; under this approach, a product's neighbours have user preferences similar to its own. GBPR [28] extends BPR by adding group preferences to predict the relationship between users and products. EALS [20] designs a fast matrix factorization method by weighting the missing data based on product popularity. TrustSVD [19] selects trust data from both explicit and implicit user feedback to compute user and product representations. [44] develops a localized matrix factorization method to learn product representations based on matrix block diagonal forms. Deep Matrix Factorization [40] considers both user and product behavior as input to predict the user-product pairwise relationship, and has inspired a series of follow-up works [32, 7].

Summarization Models: Existing works on summarization follow either data-to-text or text-to-text approaches [35]. Data-to-text models mostly use structured data as input to generate summaries. [42] uses product aspect-sentiments to summarize product reviews with a hierarchically structured RNN model. [12] utilizes product aspect attributes to generate the associated reviews with an encoder-decoder framework that combines user, product and rating information. [39] offers a practical guide on how to efficiently process such data and train models. [8] learns structured data embeddings and uses a copy mechanism to avoid generating repetitive content. The input of text-to-text models is usually sequential reviews. [31] first used a character-level RNN model to generate text reviews; [18] later applied Long Short-Term Memory (LSTM) networks to improve the quality of the generated summaries. [34] uses an attention mechanism so that the model can absorb information from multiple text units. [17] proposes a CNN model to learn a sequence-to-sequence mapping from reviews. CF-GCN [27] jointly performs recommendation and review generation by combining collaborative filtering with LSTM-based generative models. Similarly, [21, 36] both predict product ratings and generate summarizations via gated RNN models.

Dynamic Models: A few models consider the impact of time on products, for either review summarization [1] or sentiment analysis [15, 13]; such models play a vital role especially in e-commerce scenarios [14, 16]. ETTS [41] considers time-stamped sequences for review summarization. GCN [23] proposes a character-level RNN to generate personalized reviews capturing complex product sentiment dynamics. [24] develops a semi-supervised method to tackle the sparseness problem in dynamic rating prediction. [3] leverages collaborative filtering techniques to track temporal product aspect-sentiments via Singular Value Decomposition (SVD). [38, 37] apply an LSTM mechanism in an RNN model to track the temporal dynamics of product aspect-sentiments in an auto-regressive way. [9] designs a time-aware gated recurrent unit (T-GRU) to generate explanations for personalized recommendation.

5 Conclusion

By leveraging multi-type user behaviors instead of sparse reviews, we propose a multi-task model to solve a novel dynamic summarization task for product aspects. Extensive experiments show that our model is consistently promising and significantly outperforms the baselines. Being the first study on this newly proposed task, we aim to explore the relationship between user behavior and product summarization so as to address the cold-start problem, i.e., products without any recent reviews. As a result, the proposed model can cover a much larger scope of the e-commerce ecosystem while enabling explainable sentiment analysis on products.
As the generated summarization is sensitive for customers, we never want to fabricate 'fake reviews' to mislead them; instead, the summarization should only be provided to online sellers as auxiliary information. In our current model, both the product behavior representation and the behavior-to-sentiment pretraining are learned separately from the multi-task model; in the future, we plan to integrate all separate components into the main model to achieve joint training. Besides, as the reinforcement neighbour product selection task constitutes our main novelty, all compared baselines were intentionally chosen from models aimed at neighbour selection. As a next step, we will put more effort into exploring the influence of the summarization generation task by involving other generative models such as Pointer Networks and CopyNet.

REFERENCES

[1] Romi Akpala. Dynamic predefined product reviews, August 13 2019. US Patent 10,380,656.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, 'Neural machine translation by jointly learning to align and translate', arXiv preprint arXiv:1409.0473, (2014).
[3] Cigdem Bakir, 'Collaborative filtering with temporal dynamics with using singular value decomposition', Tehnički vjesnik, 25(1), 130–135, (2018).
[4] Christopher M Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
[5] Lawrence E Blume and David Easley, 'Rationality', The New Palgrave Dictionary of Economics, 6, 884–893, (2008).
[6] Roy Brouwer, Ivana Logar, and Oleg Sheremet, 'Choice consistency and preference stability in test-retests of discrete choice experiment and open-ended willingness to pay elicitation formats', Environmental and Resource Economics, 68(3), 729–751, (2017).
[7] Hung-Hsuan Chen, 'Behavior2vec: Generating distributed representations of users' behaviors on products for recommender systems', ACM Transactions on Knowledge Discovery from Data (TKDD), 12(4), 43, (2018).
[8] Shuang Chen, 'A general model for neural text generation from structured data', E2E NLG Challenge System Descriptions, (2018).
[9] Xu Chen, Yongfeng Zhang, and Zheng Qin, 'Dynamic explainable recommendation based on neural attentive models', in Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 53–60, (2019).
[10] Michael Denkowski and Alon Lavie, 'Meteor universal: Language specific translation evaluation for any target language', in Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 376–380, (2014).
[11] Ben Derrick, Deirdre Toher, and Paul White, 'How to compare the means of two samples that include paired observations and independent observations: A companion to Derrick, Russ, Toher and White (2017)', The Quantitative Methods in Psychology, 13(2), 120–126, (2017).
[12] Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu, 'Learning to generate product reviews from attributes', in Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pp. 623–632, (2017).
[13] Zheng Gao and Rui Bi, 'University of Pittsburgh at TREC 2014 Microblog track', Technical report, University of Pittsburgh School of Information Sciences, (2014).
[14] Zheng Gao, Lin Guo, Chi Ma, Xiao Ma, Kai Sun, Hang Xiang, Xiaoqiang Zhu, Hongsong Li, and Xiaozhong Liu, 'AMAD: adversarial multiscale anomaly detection on high-dimensional and time-evolving categorical data', in Proceedings of the 1st International Workshop on Deep Learning Practice for High-Dimensional Sparse Data, pp. 1–8, (2019).
[15] Zheng Gao and John Wolohan, 'Fast NLP-based pattern matching in real time tweet recommendation'.
[16] Zizhe Gao, Zheng Gao, Heng Huang, Zhuoren Jiang, and Yuliang Yan, 'An end-to-end model of predicting diverse ranking on heterogeneous feeds', (2018).
[17] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin, 'Convolutional sequence to sequence learning', arXiv preprint arXiv:1705.03122, (2017).
[18] Alex Graves, 'Generating sequences with recurrent neural networks', arXiv preprint arXiv:1308.0850, (2013).
[19] Guibing Guo, Jie Zhang, and Neil Yorke-Smith, 'TrustSVD: Collaborative filtering with both the explicit and implicit influence of user trust and of item ratings', in AAAI, volume 15, pp. 123–125, (2015).
[20] Xiangnan He, Hanwang Zhang, Min-Yen Kan, and Tat-Seng Chua, 'Fast matrix factorization for online recommendation with implicit feedback', in Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 549–558. ACM, (2016).
[21] Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, and Wai Lam, 'Neural rating regression with abstractive tips generation for recommendation', in Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 345–354. ACM, (2017).
[22] Chin-Yew Lin, 'ROUGE: A package for automatic evaluation of summaries', Text Summarization Branches Out, (2004).
[23] Zachary C Lipton, Sharad Vikram, and Julian McAuley, 'Generative concatenative nets jointly learn to write and classify reviews', arXiv preprint arXiv:1511.03683, (2015).
[24] Cheng Luo, Xiongcai Cai, and Nipa Chowdhury, 'Self-training temporal dynamic collaborative filtering', in Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 461–472. Springer, (2014).
[25] Andriy Mnih and Ruslan R Salakhutdinov, 'Probabilistic matrix factorization', in Advances in Neural Information Processing Systems, pp. 1257–1264, (2008).
[26] Nasser M Nasrabadi, 'Pattern recognition and machine learning', Journal of Electronic Imaging, 16(4), 049901, (2007).
[27] Jianmo Ni, Zachary C Lipton, Sharad Vikram, and Julian McAuley, 'Estimating reactions and recommending products with generative models of reviews', in Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pp. 783–791, (2017).
[28] Weike Pan and Li Chen, 'GBPR: Group preference based Bayesian personalized ranking for one-class collaborative filtering', in IJCAI, volume 13, pp. 2691–2697, (2013).
[29] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme, 'BPR: Bayesian personalized ranking from implicit feedback', in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pp. 452–461. AUAI Press, (2009).
[30] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller, 'Deterministic policy gradient algorithms', in ICML, (2014).
[31] Ilya Sutskever, James Martens, and Geoffrey E Hinton, 'Generating text with recurrent neural networks', in Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1017–1024, (2011).
[32] George Trigeorgis, Konstantinos Bousmalis, Stefanos Zafeiriou, and Björn W Schuller, 'A deep matrix factorization method for learning attribute representations', IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(3), 417–429, (2017).
[33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin, 'Attention is all you need', in Advances in Neural Information Processing Systems, pp. 5998–6008, (2017).
[34] Lu Wang and Wang Ling, 'Neural network-based abstract generation for opinions and arguments', arXiv preprint arXiv:1606.02785, (2016).
[35] Yongzhen Wang, Xiaozhong Liu, and Zheng Gao, 'Neural related work summarization with a joint context-driven attention mechanism', arXiv preprint arXiv:1901.09492, (2019).
[36] Zhongqing Wang and Yue Zhang, 'Opinion recommendation using a neural model', in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1626–1637, (2017).
[37] Chao-Yuan Wu, Amr Ahmed, Alex Beutel, and Alexander J Smola, 'Joint training of ratings and reviews with recurrent recommender networks', (2016).
[38] Chao-Yuan Wu, Amr Ahmed, Alex Beutel, Alexander J Smola, and How Jing, 'Recurrent recommender networks', in Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pp. 495–503. ACM, (2017).
[39] Ziang Xie, 'Neural text generation: A practical guide', arXiv preprint arXiv:1711.09534, (2017).
[40] Hong-Jian Xue, Xinyu Dai, Jianbing Zhang, Shujian Huang, and Jiajun Chen, 'Deep matrix factorization models for recommender systems', in IJCAI, pp. 3203–3209, (2017).
[41] Rui Yan, Liang Kong, Congrui Huang, Xiaojun Wan, Xiaoming Li, and Yan Zhang, 'Timeline generation through evolutionary trans-temporal summarization', in Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 433–443. Association for Computational Linguistics, (2011).
[42] Hongyu Zang and Xiaojun Wan, 'Towards automatic generation of product reviews from aspect-sentiment scores', in Proceedings of the 10th International Conference on Natural Language Generation, pp. 168–177, (2017).
[43] Yongfeng Zhang, 'Explainable recommendation: Theory and applications', arXiv preprint arXiv:1708.06409, (2017).
[44] Yongfeng Zhang, Min Zhang, Yiqun Liu, Shaoping Ma, and Shi Feng, 'Localized matrix factorization for recommendation based on matrix block diagonal forms', in Proceedings of the 22nd International Conference on World Wide Web, pp. 1511–1520. ACM, (2013).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "BJwAarTM07",
"year": null,
"venue": "ECIR (1) 2022",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=BJwAarTM07",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Extending CLIP for Category-to-Image Retrieval in E-Commerce",
"authors": [
"Mariya Hendriksen",
"Maurits J. R. Bleeker",
"Svitlana Vakulenko",
"Nanne van Noord",
"Ernst Kuiper",
"Maarten de Rijke"
],
"abstract": "E-commerce provides rich multimodal data that is barely leveraged in practice. One aspect of this data is a category tree that is being used in search and recommendation. However, in practice, during a user’s session there is often a mismatch between a textual and a visual representation of a given category. Motivated by the problem, we introduce the task of category-to-image retrieval in e-commerce and propose a model for the task, CLIP-ITA. The model leverages information from multiple modalities (textual, visual, and attribute modality) to create product representations. We explore how adding information from multiple modalities (textual, visual, and attribute modality) impacts the model’s performance. In particular, we observe that CLIP-ITA significantly outperforms a comparable model that leverages only the visual modality and a comparable model that leverages the visual and attribute modality.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kIA2oa3xc5f",
"year": null,
"venue": "AI Commun. 2018",
"pdf_link": "https://content.iospress.com/download/ai-communications/aic761?id=ai-communications%2Faic761",
"forum_link": "https://openreview.net/forum?id=kIA2oa3xc5f",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Hierarchical invention of theorem proving strategies",
"authors": [
"Jan Jakubuv",
"Josef Urban"
],
"abstract": "State-of-the-art automated theorem provers (ATPs) such as E and Vampire use a large number of different strategies to traverse the search space. Inventing targeted proof search strategies for specific problem sets is a difficult task. Several machine learning methods that invent strategies automatically for ATPs have been proposed previously. One of them is the Blind Strategymaker (BliStr) system for inventing strategies of the E prover. In this paper we describe BliStrTune – a hierarchical extension of BliStr. BliStrTune explores much larger space of E strategies than BliStr by interleaving search for high-level parameters with their fine-tuning. We use BliStrTune to invent new strategies based also on new clause weight functions targeted at problems from large ITP libraries. We show that the new strategies significantly improve E’s performance.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LuR175EFGm",
"year": null,
"venue": "ESA 2002",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=LuR175EFGm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "SCIL - Symbolic Constraints in Integer Linear Programming",
"authors": [
"Ernst Althaus",
"Alexander Bockmayr",
"Matthias Elf",
"Michael Jünger",
"Thomas Kasper",
"Kurt Mehlhorn"
],
"abstract": "We describe a new software system SCIL that introduces symbolic constraints into branch-and-cut-and-price algorithms for integer linear programs. Symbolic constraints are known from constraint programming and contribute signi.cantly to the expressive power, ease of use, and e.ciency of constraint programming systems.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "augtXyfpVNC",
"year": null,
"venue": "ECAI 2016",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-1565",
"forum_link": "https://openreview.net/forum?id=augtXyfpVNC",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A General Characterization of Model-Based Diagnosis",
"authors": [
"Gregory M. Provan"
],
"abstract": "The Model-Based Diagnosis (MBD) framework developed by Reiter has been a strong theoretical foundation for MBD, yet is limited to models that are described in terms of logical sentences. We propose a more general framework that covers a wide range of modelling languages, ranging from AI-based languages (e.g., logic and Bayesian networks) to FDI-based languages (e.g., linear Gaussian models). We show that a graph-theoretic basis for decomposable system models can be augmented with several languages and corresponding inference algorithms based on valuation algebras.",
"keywords": [],
"raw_extracted_content": "A General Characterization of\nModel-Based Diagnosis\nGregory Provan1\nAbstract. The Model-Based Diagnosis (MBD) framework devel-\noped by Reiter has been a strong theoretical foundation for MBD, yet\nis limited to models that are described in terms of logical sentences.We propose a more general framework that covers a wide range ofmodelling languages, ranging from AI-based languages (e.g., logicand Bayesian networks) to FDI-based languages (e.g., linear Gaus-sian models). We show that a graph-theoretic basis for decomposablesystem models can be augmented with several languages and corre-sponding inference algorithms based on valuation algebras.\n1 A GENERAL MBD FRAMEWORK\nWe propose a framework for extending the Reiter MBD approach[7] by integrating several MBD and FDI approaches within a de-composable graphical framework in which the modeling languageand inference are specified by a valuation algebra [6].\nMore formally, we define an MBD framework using a tuple\n(G,T,Γ), where Gis a factor graph [5], Tis the diagnosis task, and\nΓis a valuation algebra [6]. The factor graph Gspecifies a system\ntopology in terms of a decomposable relation Ψ, defined over a set\nVof variables, such that Ψ=⊗\niψi(Vi), whereψiis a sub-relation,\n⊗is the composition operator and Vi⊆V. This decomposition can\nbe mapped to a graph, e.g., a DAG for a Bayesian network (BN)or an undirected graph for a Markov network. The diagnosis task is\ngiven by the tuple T=(D,y,R), whereDis the task specifica-\ntion;yis the required system measurement; and residual R(Ψ,y)\nindicates a discrepancy between observation yand model-derived\nprediction ˆyusing some distance measure. The valuation algebra Γ\nspecifies (1) the underlying language for the diagnosis system, and(2) the inference necessary to compute the diagnosis for the task T,\ne.g., multiple-fault subset-minimal diagnosis or Most-Probable Ex-planation.\nThis decomposable representation can encode a wide range of di-\nagnosis models, including propositional logic models, FDI dynami-cal systems models, as well as stochastic models (Bayesian networks,HMMs, and linear Gaussian models). Previous work, e.g., [4] hasshown AI-based approaches to diagnosis [7, 2] can be described byvaluation algebras. Here, we extend this to include FDI approachesbased on ordinary differential equations (ODEs), and show the im-portance of system structure and diagnosis task in specifying the fulldiagnosis representation.\nThis framework has several outcomes. First, it enables a clear sep-\naration between models and inference (although the two are linked).Specifically, the model structure encoded as a factor graph that gov-erns inference complexity. For example, tree-structured factor graphs\n1University College Cork, Cork, Ireland, email: [email protected]. Sup-\nported by Science Foundation Ireland (SFI) Grant SFI/12/RC/2289.are all poly-time computable. Second, the factor graph encodingof models clarifies the structural difference between AI-based ap-proaches and FDI-based approaches.\n2 REPRESENTING MULTIPLE MODEL TYPES\nFigure 1. Bayesian network for controlled tank example.\nWe can represent a system (or plant) model Ψusing a factor graph,\nwhich represents the physical connectivity of Ψin terms of a struc-\ntured decomposition of Ψinto sub-relations. Consider Figure 1(a),\nwhich shows an example of a tank with a valve, where we controlthe levelxin the tank by controlling the inflow f\n1and the valve state\nV. 
There are two possible failures in the system: (1) the tank can leak, with failure mode φT, and (2) the valve can malfunction, with failure mode φV. The example has variables {f1, f2, f3, x, φT, φV} and three relations: ψ1(f1, φT, x), ψ2(x, f2), and ψ3(f2, f3, φV). ψ1 represents how the tank's fluid level x is governed by the inflow f1 and the fault (leak) φT, ψ2 represents how the outflow is governed by the fluid height x, and ψ3 represents the valve's impact on the flows f2 and f3.

Given such a decomposition, we can represent the modelling language as a valuation over ψ1 ⊗ ψ2 ⊗ ψ3. For example, if we choose a probabilistic algebra, we obtain a diagnostic BN model, for which Figure 1(b) shows the structure and the valuation P(V) = P(f1) P(φT) P(φV) P(x|f1, φT) P(f2|x) P(f3|f2, φV). Here (1) P(x|f1, φT) defines the conditional dependence of the tank level x on the inflow f1 and the tank fault-state φT, and (2) P(f3|f2, φV) the conditional dependence of the flow out of the system, f3, on the tank outflow f2 and the valve fault-state φV.

Table 1: Classification of diagnosis approaches using the generalized framework. RA and FI denote Residual Analysis and Fault Isolation, respectively (in the original, shaded rows denote stochastic methods and unshaded rows deterministic methods). BN denotes Bayesian network. For the language, Prop. and M-V denote propositional and multi-valued, Q-Relation denotes a qualitative relation, PGM a probabilistic graphical model, and ODE an ordinary differential equation. For the task, Dx corresponds to a single diagnosis, P(φ) to the probability distribution over φ, and MPE to the Most Probable Explanation.

| Class | Approach | Language | Structure | Task | Semiring | RA | FI | Complexity |
|---|---|---|---|---|---|---|---|---|
| Atemporal | Reiter | Prop. Logic | DAG | Dx | ⟨{0,1}, ∧, ∨⟩ | Ψ↓∅ | Ψ↓φ | NP-hard |
| | ATMS | M-V Logic | DAG | all Dx | ⟨2^P, ∩, ∪⟩ | Ψ↓∅ | Ψ↓φ | NP-hard |
| | Qualitative | Q-Relation | arbitrary | Dx | ⟨2^P, ⊘, ⊙⟩ | Ψ↓∅ | Ψ↓φ | NP-hard |
| | Constraint Network | Constraint | arbitrary | all Dx | ⟨2^P, ×, +⟩ | Ψ↓∅ | Ψ↓φ | NP-hard |
| | BN-Posterior | PGM | DAG | P(φ|y) | ⟨[0,1], ×, +⟩ | Ψ↓∅ | Ψ↓φ | NP-hard |
| | BN-MPE | PGM | DAG | MPE | ⟨[0,1], max, ×⟩ | Ψ↓∅ | Ψ↓φ | NP-hard |
| Temporal | DES | M-V Logic | arbitrary | all Dx | ⟨2^P, ∩, ∪⟩ | Ψ↓∅ | Ψ↓φ | NP-hard |
| | DBN | PGM | DAG | P(φ) | ⟨[0,1], ×, +⟩ | Ψ↓∅ | Ψ↓φ | NP-hard |
| | HMM | PGM | tree | P(φ) | ⟨[0,1], ×, +⟩ | Ψ↓R | Ψ↓φ | O(n) |
| | FDI | ODE | bipartite | Dx | ⟨[0,1], ×, +⟩ | Ψ↓R | Ψ↓φ | NP-hard |
| | Kalman filter | PGM | tree | MPE | ⟨[0,1], ×, +⟩ | Ψ↓R | Ψ↓φ | O(n³) |
| | Particle filter | PGM | arbitrary | MPE | ⟨[0,1], ×, +⟩ | Ψ↓R | Ψ↓φ | NP-hard |

We perform inference in Ψ by message passing and valuation updating [6]. Figure 1(c) shows how we can compute a diagnosis (i.e., evaluate φT, φV) by passing messages among the nodes, starting with the control and observation settings S. If all assignments in S are nominal (nom), the "diagnosis" is P(φT = fault) = .004 and P(φV = fault) = .004. If scenario S has the control assignment f1 = nom and both f2 and f3 observed as low, we obtain a diagnosis given by P(φT = fault) = .067 and P(φV = fault) = .009, i.e., a faulty tank is the most likely diagnosis. A brute-force version of this computation is sketched below.
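To make the message-passing example concrete, the following sketch computes the fault posterior for the two-fault tank network by enumeration. All conditional probability tables are invented placeholders (the paper does not list them), so the printed numbers will not reproduce the .004/.067 values above; only the computation pattern matches the example.

```python
from itertools import product

# Invented CPTs for the tank example; placeholders chosen only to be plausible.
P_fault = {"ok": 0.996, "fault": 0.004}                   # priors for φT and φV
P_x = {("nom", "ok"):    {"nom": 0.999, "low": 0.001},    # P(x | f1, φT)
       ("nom", "fault"): {"nom": 0.100, "low": 0.900}}
P_f2 = {"nom": {"nom": 0.999, "low": 0.001},              # P(f2 | x)
        "low": {"nom": 0.001, "low": 0.999}}
P_f3 = {("nom", "ok"):    {"nom": 0.99, "low": 0.01},     # P(f3 | f2, φV)
        ("nom", "fault"): {"nom": 0.10, "low": 0.90},
        ("low", "ok"):    {"nom": 0.01, "low": 0.99},
        ("low", "fault"): {"nom": 0.05, "low": 0.95}}

def posterior(f1="nom", f2_obs="low", f3_obs="low"):
    """P(φT, φV | f1, f2, f3): sum out the tank level x by enumeration."""
    joint = {}
    for phi_t, phi_v in product(("ok", "fault"), repeat=2):
        p = 0.0
        for x in ("nom", "low"):
            p += (P_fault[phi_t] * P_fault[phi_v]
                  * P_x[(f1, phi_t)][x]
                  * P_f2[x][f2_obs]
                  * P_f3[(f2_obs, phi_v)][f3_obs])
        joint[(phi_t, phi_v)] = p
    z = sum(joint.values())
    return {k: v / z for k, v in joint.items()}

print(posterior())  # the leaking tank dominates under these tables
```

Under these placeholder tables, observing low f2 and f3 with a nominal inflow makes the leaking tank the most probable single-fault explanation, mirroring the scenario above.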
We can represent several different tank models by keeping the same decomposable structure and changing the valuation. For example, we obtain a qualitative model by replacing (1) the conditional probability tables with qualitative relations and (2) Bayesian updating with qualitative inference, passing qualitative messages (e.g., over {+, -, 0}) rather than discrete-valued probabilities.

Table 1 summarizes the properties of several models characterized by our framework, defining the language, the model structure, the underlying semiring, and the inference complexity. The language and task can be characterized by the valuation semiring (Z, O1, O2), which consists of a set Z and two operations (O1, O2) [3]. The last column of Table 1 shows the inference complexity, for which the primary determinant is the topology [1]: any non-tree topology is likely to be NP-hard for a task requiring at least one multiple-fault diagnosis, whereas tree topologies are poly-time solvable for the majority of languages and tasks.

A valuation is a measure over the possible values of a set V of variables [3]. Each valuation ψ refers to a finite set of variables d(ψ) ⊆ V, called its domain. Given the power set P of V and a set of valuations with domains in P, we can define three key operations: (1) Labeling: ψ ↦ d(ψ), which returns the domain of a valuation; (2) Combination: (ψ1, ψ2) ↦ ψ1 ⊗ ψ2, which specifies functional composition, i.e., the aggregation of data from multiple functions; and (3) Projection: (ψ, W) ↦ ψ↓W for W ⊆ d(ψ), which specifies the computation of a query (a set of variables) of interest.

Given an observation y, we specify diagnosis within a valuation algebra as a two-step process of (1) residual analysis (RA) and (2) fault isolation (FI).

Residual analysis: This inference depends on the type of residual. AI logic-based approaches compute RA using a consistency check, denoted Ψ↓∅. FDI continuous-valued systems compute RA as R = |ŷ − y|, where ŷ is the model's prediction. Residual-specific factor-graph structure may be necessary to enable the computation of Ψ↓R.

Fault isolation: Isolating a diagnosis is equivalent to projecting the marginal onto the fault-mode variables φ, denoted Ψ↓φ = (ψ1 ⊗ ··· ⊗ ψn)↓φ. Diagnostic inference requires all three valuation operations, in particular combination and projection. The task may also change the required factor-graph structure and operations; for example, different operations are needed for computing a posterior distribution P(φ|y) than for computing the Most Probable Explanation (MPE).

Given an observation y and a prediction ŷ, the typical objective of a diagnosis process is to identify the system fault-state that minimises the residual: φ* = argmin_{φ∈Φ} R(Ψ, y). The full paper generalizes this metric to define the diagnosis task as jointly minimizing the residual-based error and the inference complexity. A toy sketch of the semiring-parameterized combination and projection operations is given below.
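To show how combination and projection generalize across modelling languages, the toy sketch below implements both operations for tabular valuations, parameterized by the semiring operations: instantiating them with (×, +) yields probabilistic marginalization (Ψ↓φ), while (∧, ∨) yields the Boolean consistency check (Ψ↓∅). It is an illustration of the valuation-algebra operations only, not the paper's implementation.

```python
import operator

def combine(psi1, psi2, mul):
    """Combination ψ1 ⊗ ψ2 of tabular valuations. A valuation is
    (vars, table), with table mapping value tuples (in `vars` order)
    to semiring elements."""
    v1, t1 = psi1
    v2, t2 = psi2
    shared = [v for v in v1 if v in v2]
    out_vars = v1 + [v for v in v2 if v not in v1]
    table = {}
    for a1, x1 in t1.items():
        for a2, x2 in t2.items():
            d1, d2 = dict(zip(v1, a1)), dict(zip(v2, a2))
            if all(d1[v] == d2[v] for v in shared):  # consistent assignments
                merged = {**d1, **d2}
                table[tuple(merged[v] for v in out_vars)] = mul(x1, x2)
    return out_vars, table

def project(psi, keep, add, unit):
    """Projection ψ↓keep: eliminate every variable not in `keep`."""
    vars_, t = psi
    idx = [vars_.index(v) for v in keep]
    table = {}
    for a, x in t.items():
        key = tuple(a[i] for i in idx)
        table[key] = add(table.get(key, unit), x)
    return list(keep), table

# Probabilistic semiring (×, +): combining two CPT-style valuations from
# the tank model and projecting onto the fault variable marginalizes x.
psi_t = (["phi_T"], {("ok",): 0.996, ("fault",): 0.004})
psi_x = (["phi_T", "x"], {("ok", "nom"): 0.999, ("ok", "low"): 0.001,
                          ("fault", "nom"): 0.1, ("fault", "low"): 0.9})
joint = combine(psi_t, psi_x, operator.mul)
print(project(joint, ["phi_T"], operator.add, 0.0))
# Swapping in (lambda a, b: a and b), (lambda a, b: a or b) with unit False
# gives the Boolean semiring used by the logic-based consistency check.
```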
3 CONCLUSION

This article has presented a general framework for MBD that integrates several approaches developed within different communities, most notably the AI and FDI communities. By characterizing MBD using the triple (G, T, Γ), we show structural similarities among MBD techniques through the underlying graph G. The valuation algebra Γ enables us to demonstrate the operations and message-passing techniques underlying the MBD approaches. As a consequence, we are able to identify similarities among MBD approaches, thereby paving the way for a more holistic approach to MBD and potential cross-pollination of MBD inference techniques.

REFERENCES

[1] Adnan Darwiche, 'Utilizing device behavior in structure-based diagnosis', in IJCAI, pp. 1096–1101, (1999).
[2] Johan de Kleer, 'An assumption-based TMS', Artificial Intelligence, 28(2), 127–162, (1986).
[3] J. Kohlas, Information Algebras: Generic Structures for Inference, Springer, Heidelberg, 2003.
[4] Jürg Kohlas, Rolf Haenni, and Serafín Moral, 'Propositional information systems', Journal of Logic and Computation, 9(5), 651–681, (1999).
[5] Hans-Andrea Loeliger, 'An introduction to factor graphs', IEEE Signal Processing Magazine, 21(1), 28–41, (2004).
[6] Marc Pouly and Jürg Kohlas, Generic Inference: A Unifying Theory for Automated Reasoning, John Wiley & Sons, 2012.
[7] Raymond Reiter, 'A theory of diagnosis from first principles', Artificial Intelligence, 32(1), 57–95, (1987).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Hyxxg0yqPH",
"year": null,
"venue": null,
"pdf_link": "https://arxiv.org/pdf/1905.07376.pdf",
"forum_link": "https://openreview.net/forum?id=Hyxxg0yqPH",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Integer Discrete Flows and Lossless Compression",
"authors": [
"E Hoogeboom",
"JWT Peters",
"R van den Berg",
"M Welling"
],
"abstract": "Lossless compression methods shorten the expected representation size of data without loss of information, using a statistical model. Flow-based models are attractive in this setting because they admit exact likelihood optimization, which is equivalent to minimizing the expected number of bits per message. However, conventional flows assume continuous data, which may lead to reconstruction errors when quantized for compression. For that reason, we introduce a flow-based generative model for ordinal discrete data called Integer Discrete Flow (IDF): a bijective integer map that can learn rich transformations on high-dimensional data. As building blocks for IDFs, we introduce a flexible transformation layer called integer discrete coupling. Our experiments show that IDFs are competitive with other flow-based generative models. Furthermore, we demonstrate that IDF based compression achieves state-of-the-art lossless compression rates on CIFAR10, ImageNet32, and ImageNet64. To the best of our knowledge, this is the first lossless compression method that uses invertible neural networks.",
"keywords": [],
"raw_extracted_content": "Integer Discrete Flows and Lossless Compression\nEmiel Hoogeboom\u0003\nUvA-Bosch Delta Lab\nUniversity of Amsterdam\nNetherlands\[email protected] W.T. Peters\u0003\nUvA-Bosch Delta Lab\nUniversity of Amsterdam\nNetherlands\[email protected]\nRianne van den Bergy\nUniversity of Amsterdam\nNetherlands\[email protected] Welling\nUvA-Bosch Delta Lab\nUniversity of Amsterdam\nNetherlands\[email protected]\nAbstract\nLossless compression methods shorten the expected representation size of data\nwithout loss of information, using a statistical model. Flow-based models are\nattractive in this setting because they admit exact likelihood optimization, which\nis equivalent to minimizing the expected number of bits per message. However,\nconventional flows assume continuous data, which may lead to reconstruction\nerrors when quantized for compression. For that reason, we introduce a flow-based\ngenerative model for ordinal discrete data called Integer Discrete Flow (IDF): a\nbijective integer map that can learn rich transformations on high-dimensional data.\nAs building blocks for IDFs, we introduce a flexible transformation layer called\ninteger discrete coupling. Our experiments show that IDFs are competitive with\nother flow-based generative models. Furthermore, we demonstrate that IDF based\ncompression achieves state-of-the-art lossless compression rates on CIFAR10,\nImageNet32, and ImageNet64. To the best of our knowledge, this is the first\nlossless compression method that uses invertible neural networks.\n1 Introduction\nEvery day, 2500 petabytes of data are generated. Clearly, there is a need for compression to enable\nefficient transmission and storage of this data. Compression algorithms aim to decrease the size\nof representations by exploiting patterns and structure in data. In particular, lossless compression\nmethods preserve information perfectly–which is essential in domains such as medical imaging,\nastronomy, photography, text and archiving. Lossless compression and likelihood maximization\nare inherently connected through Shannon’s source coding theorem [34], i.e., the expected message\nlength of an optimal entropy encoder is equal to the negative log-likelihood of the statistical model.\nIn other words, maximizing the log-likelihood (of data) is equivalent to minimizing the expected\nnumber of bits required per message.\nIn practice, data is usually high-dimensional which introduces challenges when building statistical\nmodels for compression. In other words, designing the likelihood and optimizing it for high dimen-\nsional data is often difficult. Deep generative models permit learning these complicated statistical\nmodels from data and have demonstrated their effectiveness in image, video, and audio modeling\n\u0003Equal contribution\nyNow at Google\n33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.arXiv:1905.07376v4 [cs.LG] 6 Dec 2019\n…IDFCoderFigure 1: Overview of IDF based lossless compression. An image xis transformed to a latent\nrepresentation zwith a tractable distribution pZ(\u0001). An entropy encoder takes zandpZ(\u0001)as input,\nand produces a bitstream c. To obtainx, the decoder uses pZ(\u0001)andcto reconstruct z. Subsequently,\nzis mapped to xusing the inverse of the IDF.\n[22,24,29]. 
Flow-based generative models [7, 8, 27, 22, 14, 16] are advantageous over other generative models: i) they admit exact log-likelihood optimization, in contrast with Variational AutoEncoders (VAEs) [21], and ii) drawing samples (and decoding) is comparable to inference in terms of computational cost, as opposed to PixelCNNs [41]. However, flow-based models are generally defined for continuous probability distributions, disregarding the fact that digital media is stored discretely; for example, pixels from 8-bit images have 256 distinct values. In order to utilize continuous flow models for compression, the latent space must be quantized. This produces reconstruction errors in image space, and is therefore not suited for lossless compression.

To circumvent the (de)quantization issues, we propose Integer Discrete Flows (IDFs), which are invertible transformations for ordinal discrete data such as images, video and audio. We demonstrate the effectiveness of IDFs by attaining state-of-the-art lossless compression performance on CIFAR10, ImageNet32 and ImageNet64. For a graphical illustration of the coding steps, see Figure 1. In addition, we show that IDFs achieve generative modelling results competitive with other flow-based methods. The main contributions of this paper are summarized as follows: 1) we introduce a generative flow for ordinal discrete data (Integer Discrete Flow), circumventing the problem of (de)quantization; 2) as building blocks for IDFs, we introduce a flexible transformation layer called integer discrete coupling; 3) we propose a neural network based compression method that leverages IDFs; and 4) we empirically show that our image compression method allows for progressive decoding that maintains the global structure of the encoded image. Code to reproduce the experiments is available at https://github.com/jornpeters/integer_discrete_flows.

2 Background

The continuous change of variables formula lies at the foundation of flow-based generative models. It admits exact optimization of a (data) distribution using a simple distribution and a learnable bijective map. Let f : X → Z be a bijective map, and p_Z(·) a prior distribution on Z. The model distribution p_X(·) can then be expressed as:

p_X(x) = p_Z(z) |dz/dx|,   for z = f(x).   (1)

That is, for a given observation x, the likelihood is given by p_Z(·) evaluated at f(x), normalized by the Jacobian determinant. A composition of invertible functions, which can be viewed as a repeated application of the change of variables formula, is generally referred to as a normalizing flow in the deep learning literature [5, 37, 36, 30].

2.1 Flow Layers

The design of invertible transformations is integral to the construction of normalizing flows. In this section two important layers for flow-based generative modelling are discussed.

Coupling layers are tractable bijective mappings that are extremely flexible when combined into a flow [8, 7]. Specifically, they have an analytical inverse, which is similar to a forward pass in terms of computational cost, and their Jacobian determinant is easily computed, which makes coupling layers attractive for flow models. Given an input tensor x ∈ R^d, the input to a coupling layer is partitioned into two sets such that x = [x_a, x_b]. The transformation, denoted f(·), is then defined by

z = [z_a, z_b] = f(x) = [x_a, x_b ⊙ s(x_a) + t(x_a)],   (2)

where ⊙ denotes element-wise multiplication and s and t may be modelled using neural networks. Given this, the inverse is easily computed, i.e., x_a = z_a and x_b = (z_b − t(x_a)) ⊘ s(x_a), where ⊘ denotes element-wise division. For f(·) to be invertible, s(x_a) must not be zero, and it is often constrained to have strictly positive values. A sketch of this layer follows.
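The following is a minimal numpy sketch of the coupling transformation in Eq. (2) and its analytical inverse; the scale and translation networks are stand-in closures rather than learned models.

```python
import numpy as np

def make_coupling(s_net, t_net, split):
    """Affine coupling layer: z = [x_a, x_b * s(x_a) + t(x_a)]."""
    def forward(x):
        xa, xb = x[:split], x[split:]
        return np.concatenate([xa, xb * s_net(xa) + t_net(xa)])

    def inverse(z):
        za, zb = z[:split], z[split:]
        # x_a is copied unchanged, so s and t can be re-evaluated exactly.
        return np.concatenate([za, (zb - t_net(za)) / s_net(za)])

    return forward, inverse

# Stand-ins for neural networks; exp keeps the scale strictly positive.
s_net = lambda h: np.exp(0.1 * np.tanh(h.sum()) * np.ones(2))
t_net = lambda h: 0.5 * h[::-1]

fwd, inv = make_coupling(s_net, t_net, split=2)
x = np.array([0.3, -1.2, 2.0, 0.7])
assert np.allclose(inv(fwd(x)), x)  # exact invertibility
```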
Factor-out layers allow for more efficient inference and hierarchical modelling. A general flow, following the change of variables formula, is described as a single map X → Z, which implies that a d-dimensional vector is propagated throughout the whole flow model. Alternatively, a part of the dimensions can already be factored out at regular intervals [8], such that the remainder of the flow network operates on lower-dimensional data. We give an example for two levels (L = 2), although this principle can be applied to an arbitrary number of levels:

[z_1, y_1] = f_1(x),   z_2 = f_2(y_1),   z = [z_1, z_2],   (3)

where x ∈ R^d and y_1, z_1, z_2 ∈ R^{d/2}. The likelihood of x is then given by:

p(x) = p(z_2) |∂f_2(y_1)/∂y_1| p(z_1|y_1) |∂f_1(x)/∂x|.   (4)

This approach has two clear advantages. First, it admits a factored model for z, p(z) = p(z_L) p(z_{L-1}|z_L) ··· p(z_1|z_2, ..., z_L), which allows for conditional dependence between the parts of z. This holds because the flow defines a bijective map between y_l and [z_{l+1}, ..., z_L]. Second, the lower-dimensional flows are computationally more efficient.

2.2 Entropy Encoding

Lossless compression algorithms map every input to a unique output and are designed to make probable inputs shorter and improbable inputs longer. Shannon's source coding theorem [34] states that the optimal code length for a symbol x is −log D(x), and the minimum expected code length is lower-bounded by the entropy:

E_{x∼D}[|c(x)|] ≈ E_{x∼D}[−log p_X(x)] ≥ H(D),   (5)

where c(x) denotes the encoded message, which is chosen such that |c(x)| ≈ −log p_X(x), |·| denotes length, H denotes entropy, D is the data distribution, and p_X(·) is the statistical model used by the encoder. Therefore, maximizing the model log-likelihood is equivalent to minimizing the expected number of bits required per message, when the encoder is optimal; a small numerical illustration of this bound is given at the end of this subsection.

Stream coders encode sequences of random variables with different probability distributions. They have near-optimal performance, and they can meet Shannon's entropy-based lower bound [32, 26]. In our experiments, the recently discovered and increasingly popular stream coder rANS [10] is used; it has gained popularity due to its computational and coding efficiency. See Appendix A.1 for an introduction to rANS.
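As a small sanity check of the bound in Eq. (5), the snippet below compares the entropy of an invented four-symbol distribution with the expected length of ideal integer-length codewords, ⌈−log2 p(x)⌉ bits per symbol.

```python
import math

# Hypothetical symbol distribution (not from the paper).
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

entropy = -sum(q * math.log2(q) for q in p.values())
# Ideal (integer) codeword lengths: ceil(-log2 p(x)) bits per symbol.
expected_len = sum(q * math.ceil(-math.log2(q)) for q in p.values())

print(f"H(D) = {entropy:.3f} bits, E|c(x)| = {expected_len:.3f} bits")
# For this dyadic distribution the bound is met exactly: 1.750 vs 1.750.
```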
3 Integer Discrete Flows

We introduce Integer Discrete Flows (IDFs): bijective integer maps that can represent rich transformations. IDFs can be used to learn the probability mass function of (high-dimensional) ordinal discrete data. Consider an integer-valued observation x ∈ X = Z^d, a prior distribution p_Z(·) with support on Z^d, and a bijective map f : Z^d → Z^d defined by an IDF. The model distribution p_X(·) can then be expressed as:

p_X(x) = p_Z(z),   z = f(x).   (6)

Note that, in contrast to Equation 1, there is no need for re-normalization using the Jacobian determinant. Deep IDFs are obtained by stacking multiple IDF layers {f_l}, l = 1, ..., L, which are guaranteed to be bijective if the individual maps f_l are all bijective. For an individual map to be bijective, it must be one-to-one and onto. Consider the bijective map f : Z → 2Z, x ↦ 2x. Although this map is a bijection, it requires us to keep track of the codomain of f, which is impracticable in the case of many dimensions and multiple layers. Instead, we design layers to be bijective maps from Z^d to Z^d, which ensures that the composition of layers and its inverse is closed on Z^d.

3.1 Integer Discrete Coupling

As a building block for IDFs, we introduce integer discrete coupling layers. These are invertible, and the set Z^d is closed under their transformations. Let [x_a, x_b] = x ∈ Z^d be an input of the layer. The output z = [z_a, z_b] is defined as a copy z_a = x_a and a transformation z_b = x_b + ⌊t(x_a)⌉, where ⌊·⌉ denotes a nearest-rounding operation and t is a neural network (Figure 2).

[Figure 2: Forward computation of an integer discrete coupling layer. The input is split into two parts. The output consists of a copy of the first part and a conditional transformation of the second part. The inverse of the coupling layer is computed by inverting the conditional transformation.]

Notice that the multiplication operation of standard coupling is not used in integer discrete coupling, because it does not meet our requirement that the image of the transformation equals Z. It may seem disadvantageous that our model only uses translation, also known as additive coupling; however, large-scale continuous flow models in the literature tend to use additive coupling instead of affine coupling [22].

In contrast to existing coupling layers, the input is split into 75%/25% parts for x_a and x_b, respectively. As a consequence, rounding is applied to fewer dimensions, which results in less gradient bias. In addition, the transformation is richer, because it is conditioned on more dimensions. Empirically, this results in better performance.

Backpropagation through the rounding operation. As shown in Figure 2, a coupling layer in an IDF requires a rounding operation (⌊·⌉) on the predicted translation. Since the rounding operation is effectively a step function, its gradient is zero almost everywhere; as a consequence, it is inherently incompatible with gradient-based learning methods. In order to backpropagate through the rounding operations, we make use of the Straight-Through Estimator (STE) [2]. In short, the STE ignores the rounding operation during back-propagation, which is equivalent to redefining the gradient of the rounding operation as:

∇_x ⌊x⌉ := I.   (7)

Lower triangular coupling. There exists a trade-off between the number of integer discrete coupling layers and the complexity of the layers in IDF architectures, due to the gradient bias introduced by the rounding operation (see Section 4.1). We introduce a multivariate coupling transformation called Lower Triangular Coupling, which is specifically designed such that the number of rounding operations remains unchanged. For more details, see Appendix B.

3.2 Tractable Discrete Distribution

As discussed in Section 2, a simple distribution p_Z(·) is posed on Z in flow-based models. In IDFs, the prior p_Z(·) is a factored discretized logistic distribution (DLogistic) [20, 33]. The discretized logistic captures the inductive bias that values close together are related, which is well-suited for ordinal data.

[Figure 3: The discretized logistic distribution. The shaded area shows the probability density.]

The probability mass DLogistic(z|μ, s) for an integer z ∈ Z, mean μ, and scale s is defined as the density assigned to the interval [z − 1/2, z + 1/2] by the probability density function of Logistic(μ, s) (see Figure 3). It can be computed efficiently by evaluating the cumulative distribution function twice:

DLogistic(z|μ, s) = ∫_{z−1/2}^{z+1/2} Logistic(z′|μ, s) dz′ = σ((z + 1/2 − μ)/s) − σ((z − 1/2 − μ)/s),   (8)

where σ(·) denotes the sigmoid, the cumulative distribution function of a standard Logistic. A sketch of the straight-through rounding (Eq. 7) and the discretized logistic mass (Eq. 8) is given below.
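The following is a minimal PyTorch sketch of the straight-through rounding in Eq. (7) and the discretized logistic mass in Eq. (8); it mirrors the equations rather than the authors' released implementation.

```python
import torch

def ste_round(x):
    """Nearest rounding with a straight-through gradient (Eq. 7):
    the forward pass rounds, the backward pass acts as the identity."""
    return x + (torch.round(x) - x).detach()

def dlogistic_log_prob(z, mu, log_s):
    """log DLogistic(z | mu, s): CDF difference over [z - 1/2, z + 1/2] (Eq. 8)."""
    s = torch.exp(log_s)
    cdf_plus = torch.sigmoid((z + 0.5 - mu) / s)
    cdf_minus = torch.sigmoid((z - 0.5 - mu) / s)
    return torch.log(cdf_plus - cdf_minus + 1e-12)

def integer_coupling(x, t_net, split):
    """Integer discrete coupling: z_b = x_b + round(t(x_a))."""
    xa, xb = x[..., :split], x[..., split:]
    zb = xb + ste_round(t_net(xa))
    return torch.cat([xa, zb], dim=-1)
```

The `x + (torch.round(x) - x).detach()` construction keeps the rounded value in the forward pass while gradients flow through x unchanged, exactly the redefinition in Eq. (7).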
The\ndiscretized logistic captures the inductive bias that values close\ntogether are related, which is well-suited for ordinal data.\nThe probability mass DLogistic (zj\u0016;s)for an integer z2Z,\nmean\u0016, and scalesis defined as the density assigned to the\ninterval [z\u00001\n2;z+1\n2]by the probability density function of\nLogistic (\u0016;s)(see Figure 3). This can be efficiently computed\nby evaluating the cumulative distribution function twice:\nDLogistic (zj\u0016;s) =Zz+1\n2\nz\u00001\n2Logistic (z0j\u0016;s)dz0=\u001b\u0012z+1\n2\u0000\u0016\ns\u0013\n\u0000\u001b\u0012z\u00001\n2\u0000\u0016\ns\u0013\n;(8)\nwhere\u001b(\u0001)denotes the sigmoid, the cumulative distribution function of a standard Logistic. In\nthe context of a factor-out layer, the mean \u0016and scalesare conditioned on the subset of\n4\nInteger Flow\nSqueeze\nFactor out\nInteger Flow\nSqueezeFigure 4: Example of a 2-level flow ar-\nchitecture. The squeeze layer reduces\nthe spatial dimensions by two, and in-\ncreases the number of channels by four.\nA single integer flow layer consists of a\nchannel permutation and an integer dis-\ncrete coupling layer. Each level consists\nofDflow layers.\n24 8 16 24 32\ndepth3.43.53.63.73.83.94.04.1bpdIDF\nContinuous\nFigure 5: Performance of flow models\nfor different depths (i.e. coupling lay-\ners per level). The networks in the cou-\npling layers contain 3 convolution lay-\ners. Although performance increases\nwith depth for continuous flows, this is\nnot the case for discrete flows.data that is not factored out. That is, the input\nto thelth factor-out layer is split into zlandyl.\nThe conditional distribution on zl;iis then given as\nDLogistic (zl;ij\u0016(yl)i;s(yl)i), where\u0016(\u0001)ands(\u0001)are\nparametrized as neural networks.\nDiscrete Mixture distributions The discretized logistic\ndistribution is unimodal and therefore limited in complex-\nity. With a marginal increase in computational cost, we\nincrease the flexibility of the latent prior on zLby ex-\ntending it to a mixture of Klogistic distributions [ 33]:\np(zj\u0016;s;\u0019) =KX\nk\u0019k\u0001p(zj\u0016k;sk): (9)\nNote that as K!1 , the mixture distribution can model\narbitrary univariate discrete distributions. In practice, we\nfind that a limited number of mixtures ( K= 5) is usually\nsufficient for image density modelling tasks.\n3.3 Lossless Source Compression\nLossless compression is an essential technique to limit\nthe size of representations without destroying information.\nMethods for lossless compression require i)a statistical\nmodel of the source, and ii)a mapping from source sym-\nbols to bit streams.\nIDFs are a natural statistical model for lossless com-\npression of ordinal discrete data, such as images, video\nand audio. They are capable of modelling complicated\nhigh-dimensional distributions, and they provide error-\nfree reconstructions when inverting latent representations.\nThe mapping between symbols and bit streams may be\nprovided by any entropy encoder. Specifically, stream\ncoders can get arbitrarily close to the entropy regardless\nof the symbol distributions, because they encode entire\nsequences instead of a single symbol at a time.\nIn the case of compression using an IDF, the mapping\nf:x7!zis defined by the IDF. Subsequently, zis\nencoded under the distribution pZ(z)to a bitstream cusing\nan entropy encoder. Note that, when using factor-out\nlayers,pZ(z)is also defined using the IDF. 
4 Architecture

[Figure 4: Example of a 2-level flow architecture. The squeeze layer reduces the spatial dimensions by two and increases the number of channels by four. A single integer flow layer consists of a channel permutation and an integer discrete coupling layer. Each level consists of D flow layers.]

The IDF architecture is split up into one or more levels. Each level consists of a squeeze operation [8], D integer flow layers, and a factor-out layer. Hence, each level defines a mapping from y_{l-1} to [z_l, y_l], except for the final level L, which defines a mapping y_{L-1} ↦ z_L. Each of the D integer flow layers per level consists of a permutation layer followed by an integer discrete coupling layer. A sketch of the squeeze operation is given below.
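The following is a minimal sketch of a squeeze operation consistent with the description above (space-to-depth): each 2×2 spatial block is moved into the channel dimension, and the operation is trivially invertible. The exact axis ordering is an assumption, not taken from the authors' code.

```python
import numpy as np

def squeeze(x):
    """(C, H, W) -> (4C, H/2, W/2), gathering each 2x2 spatial block
    into the channel dimension; H and W must be even."""
    c, h, w = x.shape
    x = x.reshape(c, h // 2, 2, w // 2, 2)
    x = x.transpose(0, 2, 4, 1, 3)          # (C, 2, 2, H/2, W/2)
    return x.reshape(4 * c, h // 2, w // 2)

def unsqueeze(z):
    c4, h2, w2 = z.shape
    z = z.reshape(c4 // 4, 2, 2, h2, w2)
    z = z.transpose(0, 3, 1, 4, 2)          # (C, H/2, 2, W/2, 2)
    return z.reshape(c4 // 4, 2 * h2, 2 * w2)

x = np.arange(2 * 4 * 4).reshape(2, 4, 4)
assert np.array_equal(unsqueeze(squeeze(x)), x)  # exact round trip
```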
5 Related Work

There exist several deep generative modelling frameworks. This work builds mainly upon flow-based generative models, described in [31, 7, 8]. In these works, invertible functions for continuous random variables are developed. However, quantizing a latent representation, and subsequently inverting back to image space, may lead to reconstruction errors [6, 3, 4].

Other likelihood-based models such as PixelCNNs [41] utilize a decomposition of conditional probability distributions. However, this decomposition assumes an order on pixels which may not reflect the actual generative process. Furthermore, drawing samples (and decoding) is generally computationally expensive. VAEs [21] optimize a lower bound on the log-likelihood instead of the exact likelihood. They are used for lossless compression with deterministic encoders [25] and through bits-back coding. However, the performance of this approach is bounded by the lower bound. Moreover, in bits-back coding a single data example can be inefficient to compress, and the extra bits should be random, which is not the case in practice and may also lead to coding inefficiencies [38].

Non-likelihood-based generative models tend to utilize Generative Adversarial Networks [13], and can generate high-quality images. However, since GANs do not optimize for likelihood, which is directly connected to the expected number of bits in a message, they are not suited for lossless compression.

In the lossless compression literature, numerous reversible integer-to-integer transforms have been proposed [1, 6, 3, 4]. Specifically, lossless JPEG2000 uses a reversible integer wavelet transform [11]. However, because these transformations are largely hand-designed, they are difficult to tune for real-world data, which may require complicated nonlinear transformations.

Around the time of submission, unpublished concurrent work appeared [39] that explores discrete flows. The main differences between our method and this work are: i) we propose discrete flows for ordinal discrete data (e.g. audio, video, images), whereas they focus on categorical data; ii) we provide a connection with the source coding theorem, and present a compression algorithm; iii) we present results on more large-scale image datasets.

6 Experiments

To test the compression performance of IDFs, we compare with a number of established lossless compression methods: PNG [12]; JPEG2000 [11]; FLIF [35], a recent format that uses machine learning to build decision trees for efficient coding; and Bit-Swap [23], a VAE-based lossless compression method. We show that IDFs outperform all these formats on CIFAR10, ImageNet32 and ImageNet64. In addition, we demonstrate that IDFs can be very easily tuned for specific domains, by compressing the ER + BCa histology dataset. For the exact treatment of datasets and optimization procedures, see Section D.4.

Table 1: Compression performance of IDFs on CIFAR10, ImageNet32 and ImageNet64 in bits per dimension, and compression rate (shown in parentheses). The Bit-Swap results are retrieved from [23]. The column marked IDF† denotes an IDF trained on ImageNet32 and evaluated on the other datasets.

Dataset    | IDF          | IDF†         | Bit-Swap     | FLIF [35]    | PNG          | JPEG2000
CIFAR10    | 3.34 (2.40×) | 3.60 (2.22×) | 3.82 (2.09×) | 4.37 (1.83×) | 5.89 (1.36×) | 5.20 (1.54×)
ImageNet32 | 4.18 (1.91×) | 4.18 (1.91×) | 4.50 (1.78×) | 5.09 (1.57×) | 6.42 (1.25×) | 6.48 (1.23×)
ImageNet64 | 3.90 (2.05×) | 3.94 (2.03×) | –            | 4.55 (1.76×) | 5.74 (1.39×) | 5.10 (1.56×)

Figure 6: Left: An example from the ER + BCa histology dataset. Right: 625 IDF samples of size 80×80 px.

Figure 7: 49 samples from the ImageNet 64×64 IDF.

6.1 Image Compression

The compression performance of IDFs is compared with competing methods on standard datasets, in bits per dimension and compression rate. The IDFs and Bit-Swap are trained on the train data, and the compression performance of all methods is reported on the test data in Table 1. IDFs achieve state-of-the-art lossless compression performance on all datasets.
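For 8-bit image data, the compression rates in Table 1 follow directly from the bits per dimension: each pixel channel originally costs 8 bits, so the rate is 8/bpd. A one-line check, using values from the IDF column:

```python
# Compression rate of 8-bit data from bits per dimension: rate = 8 / bpd.
for name, bpd in [("CIFAR10", 3.34), ("ImageNet32", 4.18), ("ImageNet64", 3.90)]:
    print(f"{name}: {8 / bpd:.2f}x")  # 2.40x, 1.91x, 2.05x, matching Table 1
```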
Even though one can argue that a compressor should be tuned for the source domain, the performance of IDFs is also examined on out-of-dataset examples, in order to evaluate compression generalization. We utilize the IDF trained on ImageNet32, and compress the CIFAR10 and ImageNet64 data. For the latter, a single image is split into four 32×32 patches. Surprisingly, the IDF trained on ImageNet32 (IDF†) still outperforms the competing methods, showing only a slight decrease in compression performance on CIFAR10 and ImageNet64 compared to its source-trained counterpart.
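Both the out-of-dataset evaluation above and the patch-based compression used later rely on cutting images into independent patches; below is a small NumPy sketch of the exact, reversible split of a 64×64 image into four 32×32 patches. The channel-first layout is an assumption of this example.

```python
import numpy as np

def to_patches(img, p=32):
    # (c, H, W) -> (n_patches, c, p, p), scanning patches in row-major order.
    c, H, W = img.shape
    img = img.reshape(c, H // p, p, W // p, p)
    return img.transpose(1, 3, 0, 2, 4).reshape(-1, c, p, p)

def from_patches(patches, H, W):
    # Exact inverse of to_patches.
    n, c, p, _ = patches.shape
    x = patches.reshape(H // p, W // p, c, p, p).transpose(2, 0, 3, 1, 4)
    return x.reshape(c, H, W)

img = np.arange(3 * 64 * 64).reshape(3, 64, 64)
patches = to_patches(img)
assert patches.shape == (4, 3, 32, 32)             # four 32x32 patches
assert np.array_equal(from_patches(patches, 64, 64), img)
```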
As an alternative method for lossless compression, one could quantize the distribution p_Z(·) and the latent space Z of a continuous flow. This results in reconstruction errors that need to be stored in addition to the latent representation z, such that the original data can be recovered perfectly. We show that this scheme is ineffective for lossless compression. Results are presented in Appendix C.

6.2 Tuneable Compression

Thus far, IDFs have been tested on standard machine learning datasets. In this section, IDFs are tested on a specific domain, medical images. In particular, the ER + BCa histology dataset [18] is used, which contains 141 regions of interest scanned at 40×, where each image is 2000×2000 pixels (see Figure 6, left). Since current hardware does not support training on such large images directly, the model is trained on random 80×80 px patches. See Figure 6, right, for samples from the model. Likewise, the compression is performed in a patch-based manner, i.e., each patch is compressed independently of all other patches. IDFs are again compared with FLIF and JPEG2000, and also with a modified version of JPEG2000 that has been optimized specifically for virtual microscopy, named JP2-WSI [15]. Although the IDF is at a disadvantage because it has to compress in patches, it considerably outperforms the established formats, as presented in Table 2.

Table 2: Compression performance on the ER + BCa histology dataset in bits per dimension and compression rate. JP2-WSI is a specialized format optimized for virtual microscopy.

Dataset   | IDF          | JP2-WSI      | FLIF [35]    | JPEG2000
Histology | 2.42 (3.19×) | 3.04 (2.63×) | 4.00 (2.00×) | 4.26 (1.88×)

Figure 8: Progressive display of the data stream for images taken from the test set of ImageNet64. From top to bottom row, each image uses approximately 15%, 30%, 60% and 100% of the stream, where the remaining dimensions are sampled. Best viewed electronically.

6.3 Progressive Image Rendering

In general, transferring data may take time because of slow internet connections or disk I/O. For this reason, it is desirable to progressively visualize data, i.e., to render the image with more detail as more data arrives. Several graphics formats support progressive loading. However, the encoded file size may increase by enabling this option, depending on the format [12], whereas IDFs support progressive rendering naturally. To partially render an image using IDFs, first the received variables are decoded. Next, using the hierarchical structure of the prior and ancestral sampling, the remaining dimensions are obtained. The progressive display of IDFs for ImageNet64 is presented in Figure 8, where the rows use approximately 15%, 30%, 60%, and 100% of the bitstream. The global structure is already captured by smaller fragments of the bitstream, even for fragments that contain only 15% of the stream.

6.4 Probability Mass Estimation

In addition to a statistical model for compression, IDFs can also be used for image generation and probability mass estimation. Samples are drawn from an ImageNet 32×32 IDF and presented in Figure 7. IDFs are compared with recent flow-based generative models, RealNVP [8], Glow [22], and Flow++, in analytical bits per dimension (negative log₂-likelihood). To compare architectural changes, we modify the IDFs to continuous models by dequantizing, disabling rounding, and using a continuous prior. The continuous versions of IDFs tend to perform slightly better, which may be caused by the gradient bias of the rounding operation. IDFs show competitive performance on CIFAR10, ImageNet32, and ImageNet64, as presented in Table 3. Note that, in contrast with IDFs, RealNVP uses scale transformations, Glow has 1×1 convolutions and actnorm layers for stability, and Flow++ uses the aforementioned and an additional flow for dequantization. Interestingly, IDFs have comparable performance even though the architecture is relatively simple.

Table 3: Generative modeling performance of IDFs and comparable flow-based methods in bits per dimension (negative log₂-likelihood).

Dataset    | IDF  | Continuous | RealNVP | Glow | Flow++
CIFAR10    | 3.32 | 3.31       | 3.49    | 3.35 | 3.08
ImageNet32 | 4.15 | 4.13       | 4.28    | 4.09 | 3.86
ImageNet64 | 3.90 | 3.85       | 3.98    | 3.81 | 3.69

7 Conclusion

We have introduced Integer Discrete Flows, flows for ordinal discrete data that can be used for deep generative modelling and neural lossless compression. We show that IDFs are competitive with current flow-based models, and that we achieve state-of-the-art lossless compression performance on CIFAR10, ImageNet32 and ImageNet64. To the best of our knowledge, this is the first lossless compression method that uses invertible neural networks.

References

[1] Nasir Ahmed, T. Natarajan, and Kamisetty R. Rao. Discrete cosine transform. IEEE Transactions on Computers, 100(1):90–93, 1974.
[2] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[3] A. Robert Calderbank, Ingrid Daubechies, Wim Sweldens, and Boon-Lock Yeo. Lossless image compression using integer to integer wavelet transforms. In Proceedings of the International Conference on Image Processing, volume 1, pages 596–599. IEEE, 1997.
[4] A. R. Calderbank, Ingrid Daubechies, Wim Sweldens, and Boon-Lock Yeo. Wavelet transforms that map integers to integers. Applied and Computational Harmonic Analysis, 5(3):332–369, 1998.
[5] Gustavo Deco and Wilfried Brauer. Higher order statistical decorrelation without information loss. In Advances in Neural Information Processing Systems 7, pages 247–254. MIT Press, 1995.
[6] Steven Dewitte and Jan Cornelis. Lossless integer wavelet transform. IEEE Signal Processing Letters, 4(6):158–160, 1997.
[7] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. 3rd International Conference on Learning Representations, ICLR, Workshop Track Proceedings, 2015.
[8] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. 5th International Conference on Learning Representations, ICLR, 2017.
[9] Jarek Duda. Asymmetric numeral systems. arXiv preprint arXiv:0902.0271, 2009.
[10] Jarek Duda. Asymmetric numeral systems: entropy coding combining speed of Huffman coding with compression rate of arithmetic coding. arXiv preprint arXiv:1311.2540, 2013.
[11] International Organization for Standardization. JPEG 2000 image coding system. ISO Standard No. 15444-1:2016, 2003.
[12] International Organization for Standardization. Portable Network Graphics (PNG): Functional specification. ISO Standard No. 15948:2003, 2003.
[13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[14] Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. FFJORD: Free-form continuous dynamics for scalable reversible generative models. 7th International Conference on Learning Representations, ICLR, 2019.
[15] Henrik Helin, Teemu Tolonen, Onni Ylinen, Petteri Tolonen, Juha Näpänkangas, and Jorma Isola. Optimized JPEG 2000 compression for efficient storage of histopathological whole-slide images. Journal of Pathology Informatics, 9, 2018.
[16] Emiel Hoogeboom, Rianne van den Berg, and Max Welling. Emerging convolutions for generative normalizing flows. Proceedings of the 36th International Conference on Machine Learning, 2019.
[17] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.
[18] Andrew Janowczyk, Scott Doyle, Hannah Gilmore, and Anant Madabhushi. A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 6(3):270–276, 2018.
[19] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, ICLR, 2015.
[20] Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pages 4743–4751, 2016.
[21] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations, 2014.
[22] Durk P. Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pages 10236–10245, 2018.
[23] Friso H. Kingma, Pieter Abbeel, and Jonathan Ho. Bit-Swap: Recursive bits-back coding for lossless compression with hierarchical latent variables. 36th International Conference on Machine Learning, 2019.
[24] Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, and Durk Kingma. VideoFlow: A flow-based generative model for video. arXiv preprint arXiv:1903.01434, 2019.
[25] Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, and Luc Van Gool. Practical full resolution learned lossless image compression. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pages 10629–10638, 2019.
[26] Alistair Moffat, Radford M. Neal, and Ian H. Witten. Arithmetic coding revisited. ACM Transactions on Information Systems (TOIS), 16(3):256–294, 1998.
[27] George Papamakarios, Iain Murray, and Theo Pavlakou. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pages 2338–2347, 2017.
[28] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
[29] Ryan Prenger, Rafael Valle, and Bryan Catanzaro. WaveGlow: A flow-based generative network for speech synthesis. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3617–3621. IEEE, 2019.
[30] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1530–1538. PMLR, 2015.
[31] Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv preprint arXiv:1302.5125, 2013.
[32] Jorma Rissanen and Glen G. Langdon. Arithmetic coding. IBM Journal of Research and Development, 23(2):149–162, 1979.
[33] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P. Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. 5th International Conference on Learning Representations, ICLR, 2017.
[34] Claude Elwood Shannon. A mathematical theory of communication. Bell System Technical Journal, 27(3):379–423, 1948.
[35] Jon Sneyers and Pieter Wuille. FLIF: Free lossless image format based on MANIAC compression. In 2016 IEEE International Conference on Image Processing (ICIP), pages 66–70. IEEE, 2016.
[36] E. G. Tabak and Cristina V. Turner. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.
[37] Esteban G. Tabak, Eric Vanden-Eijnden, et al. Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences, 8(1):217–233, 2010.
[38] James Townsend, Tom Bird, and David Barber. Practical lossless compression with latent variables using bits back coding. 7th International Conference on Learning Representations, ICLR, 2019.
[39] Dustin Tran, Keyon Vafa, Kumar Agrawal, Laurent Dinh, and Ben Poole. Discrete flows: Invertible generative models of discrete data. ICLR 2019 Workshop DeepGenStruct, 2019.
[40] Rianne van den Berg, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. Sylvester normalizing flows for variational inference. Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI, 2018.
[41] Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning, pages 1747–1756, 2016.

A Additional background

A.1 Asymmetric Numeral Systems

Asymmetric Numeral Systems (ANS) [9] is a recent approach to entropy coding. The range-based variant, rANS, is generally used as a faster replacement for arithmetic coding, because a state is represented by only a single number and fewer mathematical operations are required [10].

The encoding function of rANS encodes a symbol s into a code c′ given the so far existing code c:

c′(c, s) = ⌊c / l_s⌋ · m + (c mod l_s) + b_s,   (10)

where m is a large integer that functions as the quantization denominator. Integers are chosen for l_s such that p(s) ≈ l_s / m, where p(s) denotes the probability of symbol s.
Each symbol is associated with a unique interval [b_s, b_s + l_s), where b_s = Σ_{i=1}^{s−1} l_i, as depicted in Figure 9.

Figure 9: The unique sequences for each symbol.

The decoding function needs to retrieve the encoded symbol s, and the previous state c, from the new code c′. First consider the term c′ mod m, which is equal to the last two terms of the encoding function: c mod l_s + b_s. This term is guaranteed to lie in the interval [b_s, b_s + l_s). Therefore, the symbol can be retrieved by finding:

s(c′) = t  such that  b_t ≤ c′ mod m < b_{t+1}.   (11)

Consequently, with the knowledge of s, the previous state c can be obtained by computing:

c(c′, s) = l_s · ⌊c′ / m⌋ + (c′ mod m) − b_s.   (12)

In practice, m is chosen as a power of two (for example 2³²). As such, multiplication and division by m reduce to bit shifts, and modulo m reduces to a binary masking operation.
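A minimal Python sketch of the rANS encoder and decoder of Equations (10)–(12). The symbol frequencies l_s and the small m are arbitrary example values, and renormalization of the growing state is omitted.

```python
m = 256                      # quantization denominator (a large power of two in practice)
l = [100, 60, 50, 46]        # l_s with sum(l) == m, so p(s) ~= l_s / m
b = [0, 100, 160, 210, 256]  # b_s = sum of l_i for i < s; symbol s owns [b_s, b_s + l_s)

def encode(c, s):
    # Eq. (10): c' = floor(c / l_s) * m + (c mod l_s) + b_s
    return (c // l[s]) * m + (c % l[s]) + b[s]

def decode(c_prime):
    r = c_prime % m          # lies in [b_s, b_s + l_s) by construction
    s = max(t for t in range(len(l)) if b[t] <= r)  # Eq. (11)
    c = l[s] * (c_prime // m) + r - b[s]            # Eq. (12)
    return c, s

initial = m                  # start at m so every encode step strictly grows the state
state = initial
message = [0, 2, 1, 3, 0]
for s in message:
    state = encode(state, s)

decoded = []
while state != initial:
    state, s = decode(state)
    decoded.append(s)
assert decoded == message[::-1]  # rANS is stack-like: symbols come out in reverse
```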
B Lower Triangular Coupling

There exists a trade-off between the number of integer discrete coupling layers and the complexity of the layers in IDF architectures, due to the gradient bias that is introduced by the rounding operation. For this reason, it is desirable to increase the flexibility of layers without increasing the number of rounding operations. We introduce a multivariate coupling transformation called Lower Triangular Coupling, which is specifically designed such that the number of rounding operations remains unchanged. In practice, Lower Triangular Coupling does not offer significant improvements over standard coupling layers, and they both attain 4.15 bits per dimension (standard ±0.009 and lower triangular ±0.007), averaged over two runs with random weight initialization. The method is presented below for completeness.

The transformation of x_b is formed by multiplication with a strictly lower triangular matrix L which is conditioned on x_a:

z_b = x_b + ⌊t(x_a) + L(x_a) x_b⌉.   (13)

The main trick is to round the sum of all transformations, such that no additional gradient bias is introduced. This transformation is guaranteed to be invertible, and the inverse can be found with a modified version of forward substitution:

x_i^(b) = z_i^(b) − ⌊t_i + Σ_{j=1}^{i−1} L_ij · x_j^(b)⌉,   (14)

where x_i^(b) denotes the i-th element of x_b, and t and L are still conditioned on x_a; this notation is dropped for clarity. The continuous case can even be solved analytically by using the inverse x_b = (I + L)⁻¹(z_b − t).

In practice we restrict the computational cost on feature maps x, z ∈ Z^(nc×h×w) by parametrizing a local triangular matrix. That is, the transformation can be computed in parallel spatially, and is defined as z_{:,vu}^(b) = x_{:,vu}^(b) + ⌊t_{:,vu} + L_vu x_{:,vu}^(b)⌉ for all v, u, where v, u denote spatial coordinates, L_vu ∈ R^(c_b×c_b) and t are conditioned on x^(a), and c_b denotes the number of channels in x^(b). Since the dimensions of L_vu are small relative to the neural networks parametrizing them, the inverse can be found in c_b iterations using spatially parallelized matrix operations.

C Quantizing a Continuous Flow

Figure 10: Compression performance of a quantized continuous flow model using different bin sizes (inverse bin sizes 128, 256, 384). The dashed line denotes the analytical bpd of the continuous model. The total required bpd consists of both the quantized latent z and the residual errors, which are encoded separately using the FLIF format.

To test the lossless compression performance of continuous flows, the latent space is quantized into linearly spaced bins. Because the latent space is quantized, the reconstructions may contain errors. To enable lossless compression, FLIF is used to encode the errors in reconstruction. Hence, given the quantized latent variables and the reconstruction errors, the original input can be obtained.

The performance of the quantized flow is shown in Figure 10. When the bin size is large (1/128), encoding the latent representation requires relatively few bits, because the probability area is larger. However, the residuals are higher, and require more bits to be modelled. Analogously, when the bin size is small (1/512), encoding the latent representation requires more bits, but the residual can be modelled using fewer bits. Although the bits required for the residual or the quantized latents may be small individually, their sum is always large. In total, the quantized flow performs poorly on lossless compression.

D Experimental details

D.1 Networks

The coupling and factor-out layers are parametrized using neural networks. These networks are DenseNets [17]. Specifically, we use n = 512 intermediate channels and a depth d = 12. In contrast with standard DenseNets, we do not use normalization layers. A single layer in the DenseNet consists of:

Conv1×1 → ReLU → Conv3×3 → ReLU.
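A minimal PyTorch sketch of one such DenseNet layer. Only the Conv1×1 → ReLU → Conv3×3 → ReLU structure without normalization is taken from the text above; the growth rate and the DenseNet-style concatenation wiring are assumptions of this example.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One DenseNet layer: Conv1x1 -> ReLU -> Conv3x3 -> ReLU, no normalization."""
    def __init__(self, in_channels, growth):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, growth, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(growth, growth, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        # DenseNet connectivity [17]: concatenate the new features to the input.
        return torch.cat([x, self.net(x)], dim=1)

x = torch.randn(2, 16, 8, 8)
layer = DenseLayer(in_channels=16, growth=32)
assert layer(x).shape == (2, 48, 8, 8)  # 16 input + 32 new channels
```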
The model implementations are\nbased on the codebase released along with [ 40] whereas the rANS coder implementation was taken\nfrom [38]. All experiments were run using 4 Nvidia GTX 1080Ti GPUs.\n3http://andrewjanowczyk.com/wp-static/nuclei.tgz\n14",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "BkgeDp1cDr",
"year": null,
"venue": null,
"pdf_link": "http://proceedings.mlr.press/v97/hoogeboom19a/hoogeboom19a.pdf",
"forum_link": "https://openreview.net/forum?id=BkgeDp1cDr",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Emerging Convolutions for Generative Normalizing Flows",
"authors": [
"E Hoogeboom",
"R van den Berg",
"M Welling"
],
"abstract": "Generative flows are attractive because they admit\nexact likelihood optimization and efficient image\nsynthesis. Recently, Kingma & Dhariwal (2018)\ndemonstrated with Glow that generative flows are\ncapable of generating high quality images. We\ngeneralize the 1 × 1 convolutions proposed in\nGlow to invertible d × d convolutions, which are\nmore flexible since they operate on both channel and spatial axes. We propose two methods\nto produce invertible convolutions that have receptive fields identical to standard convolutions:\nEmerging convolutions are obtained by chaining\nspecific autoregressive convolutions, and periodic\nconvolutions are decoupled in the frequency domain. Our experiments show that the flexibility\nof d × d convolutions significantly improves the\nperformance of generative flow models on galaxy\nimages, CIFAR10 and ImageNet.",
"keywords": [],
"raw_extracted_content": "Emerging Convolutions for Generative Normalizing Flows\nEmiel Hoogeboom1Rianne van den Berg2Max Welling1 3\nAbstract\nGenerative flows are attractive because they admit\nexact likelihood optimization and efficient image\nsynthesis. Recently, Kingma & Dhariwal (2018)\ndemonstrated with Glow that generative flows are\ncapable of generating high quality images. We\ngeneralize the 1\u00021convolutions proposed in\nGlow to invertible d\u0002dconvolutions, which are\nmore flexible since they operate on both chan-\nnel and spatial axes. We propose two methods\nto produce invertible convolutions that have re-\nceptive fields identical to standard convolutions:\nEmerging convolutions are obtained by chaining\nspecific autoregressive convolutions, and periodic\nconvolutions are decoupled in the frequency do-\nmain. Our experiments show that the flexibility\nofd\u0002dconvolutions significantly improves the\nperformance of generative flow models on galaxy\nimages, CIFAR10 and ImageNet.\n1. Introduction\nGenerative models aim to learn a representation of the data\np(x), in contrast with discriminative models that learn a\nprobability distribution of labels given data p(yjx). Gen-\nerative modeling may be used for numerous applications\nsuch as anomaly detection, denoising, inpainting, and super-\nresolution. The task of generative modeling is challenging,\nbecause data is often very high-dimensional, which makes\noptimization and choosing a successful objective difficult.\nGenerative models based on normalizing flows (Rippel &\nAdams, 2013) have several advantages over other generative\nmodels: i)They optimize the log likelihood of a contin-\nuous distribution exactly, as opposed to Variational Auto-\nEncoders (V AEs) (Kingma & Welling, 2014; Rezende et al.,\n2014) which optimize a lower bound to the log-likelihood.\n1UvA-Bosch Delta Lab, University of Amsterdam, Netherlands\n2University of Amsterdam, Netherlands3Canadian Institute for Ad-\nvanced Research (CIFAR). Correspondence to: Emiel Hoogeboom\n<[email protected] >.\nProceedings of the 36thInternational Conference on Machine\nLearning , Long Beach, California, PMLR 97, 2019. Copyright\n2019 by the author(s).\nout 1in 1in 2==*out 2out 1in 1in 2out 2Figure 1. Illustration of a square emerging convolution. The input\nhas spatial dimensions equal to 3\u00023, and two channels, white\nsquares denote zero values. Convolutions use one-pixel-wide zero\npadding at each border. Two consecutive square autoregressive\nconvolutions with filters k2andk1have a receptive field identical\nto a standard convolution, with filter k2\u0003lk1, where\u0003ldenotes a\nconvolution layer. These operations are equivalent to the multipli-\ncation of matrices K2\u0001K1and a vectorized input signal ~ x. Since\nthe filters are learned decomposed, the Jacobian determinant and\ninverse are straightforward to compute.\nii)Drawing samples has a computational cost comparable\nto inference, in contrast with PixelCNNs (Van Oord et al.,\n2016). 
iii) Generative flows also have the potential for huge memory savings, because activations necessary in the backward pass can be obtained by computing the inverse of layers (Gomez et al., 2017; Li & Grathwohl, 2018).

The performance of flow-based generative models can be largely attributed to Masked Autoregressive Flows (MAFs) (Papamakarios et al., 2017) and the coupling layers introduced in NICE and RealNVP (Dinh et al., 2014; 2017). MAFs contain flexible autoregressive transformations, but are computationally expensive to invert, which is a disadvantage for sampling high-dimensional data. Coupling layers transform a subset of the dimensions of the data, parameterized by the remaining dimensions. The inverse of coupling layers is straightforward to compute, which makes them suitable for generative flows. However, since coupling layers can only operate on a subset of the dimensions of the data, they may be limited in flexibility.

To improve their effectiveness, coupling layers are alternated with less complex transformations that do operate on all dimensions of the data. Dinh et al. (2017) use a fixed channel permutation in RealNVP, and Kingma & Dhariwal (2018) utilize learnable 1×1 convolutions in Glow. However, 1×1 convolutions suffer from limited flexibility, and using standard convolutions is not straightforward, as they are very computationally expensive to invert. We propose two methods to obtain easily invertible and flexible convolutions: emerging and periodic convolutions. Both of these convolutions have receptive fields identical to standard convolutions, resulting in flexible transformations over both the channel and spatial axes.

The structure of an emerging convolution is depicted in Figure 1, where the top depicts the convolution filters, and the bottom shows the equivalent matrices of these convolutions. Two autoregressive convolutions are chained to obtain an emerging receptive field identical to a standard convolution. Empirically, we find that replacing 1×1 convolutions with the generalized invertible convolutions produces significantly better results on galaxy images, CIFAR10 and ImageNet, even when correcting for the increase in parameters.

In addition to invertible convolutions, we also propose a QR decomposition for 1×1 convolutions, which resolves flexibility issues of the PLU decomposition proposed by Kingma & Dhariwal (2018).

The main contributions of this paper are: 1) invertible emerging convolutions using autoregressive convolutions; 2) invertible periodic convolutions using decoupling in the frequency domain; 3) numerically stable and flexible 1×1 convolutions parameterized by a QR decomposition; 4) an accelerated inversion module for autoregressive convolutions. The code is available at: github.com/ehoogeboom/emerging.

2. Background

2.1. Change of variables formula

Consider a bijective map between variables x and z. The likelihood of the variable x can be written as the likelihood of the transformation z = f(x) evaluated by p_Z, using the change of variables formula:

p_X(x) = p_Z(z) |∂z/∂x|,  z = f(x).   (1)

The complicated probability density p_X(x) is equal to the probability density p_Z(z) multiplied by the Jacobian determinant, where p_Z is chosen to be tractable.
The function f can be learned, but the choice of f is constrained by two practical issues: firstly, the Jacobian determinant should be tractable; secondly, to draw samples from p_X, the inverse of f should be tractable.

2.1.1. Composition of functions

A sequence composed of several applications of the change of variables formula is often referred to as a normalizing flow (Deco & Brauer, 1995; Tabak et al., 2010; Tabak & Turner, 2013; Rezende & Mohamed, 2015). Let {h_l}_{l=1}^L be the intermediate representations produced by the layers of a neural network, where z = h_L and h_0 = x. The log-likelihood of x is written as the log-likelihood of z, plus the summation of the log Jacobian determinant of each layer:

log p_X(x) = log p_Z(z) + Σ_{l=1}^{L} log |∂h_l / ∂h_{l−1}|.   (2)

2.1.2. Dequantization

We will evaluate our methods with experiments on image datasets, where pixels are discrete-valued from 0 to 255. Since generative flows are continuous density models, they may trivially place infinite mass on discretized bin locations. Therefore, we use the definition of Theis et al. (2016) that defines the relation between a discrete model p̂(x̂) and a continuous model p(x) as an integration over bins: p̂(x̂) ≡ ∫_{[0,1)^d} p(x̂ + u) du, where x = x̂ + u. They further derive a lower bound to optimize this model with Jensen's inequality, resulting in additive uniform noise for the integer-valued pixels from the data distribution D:

E_{x̂∼D}[log p̂(x̂)] = E_{x̂∼D}[ log ∫_{[0,1)^d} p(x̂ + u) du ] ≥ E_{x̂∼D, u∼U[0,1)^d}[ log p(x̂ + u) ].   (3)
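A minimal PyTorch sketch of Equations (1)–(3): 8-bit values are dequantized with uniform noise and pushed through a toy elementwise rescaling flow (an arbitrary stand-in for a learned flow) with a uniform base density. Under these assumptions the model assigns exactly 8 bits per dimension, the cost of raw 8-bit storage.

```python
import math
import torch

torch.manual_seed(0)
x_int = torch.randint(0, 256, (16, 3, 8, 8)).double()
x = x_int + torch.rand_like(x_int)        # dequantization with uniform noise, Eq. (3)

# Toy flow z = x / 256, an elementwise scaling mapping values into [0, 1).
gamma = 1.0 / 256.0
z = gamma * x
log_det = x[0].numel() * math.log(gamma)  # log |dz/dx| = d * log(gamma)

# Base density: uniform on [0, 1), so log p_Z(z) = 0 inside the support.
log_pz = torch.zeros(x.shape[0], dtype=torch.double)
log_px = log_pz + log_det                 # Eq. (1) in log space
bpd = -log_px / (x[0].numel() * math.log(2.0))
print(bpd[0].item())                      # exactly 8.0 bits per dimension
```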
Input and output in\nfrequency domain are denoted with ^xand^z, with dimensions nc\u0002h\u0002w, where the last two components denote frequencies.\nGenerative Flow Function Inverse Log Determinant\nActnorm ~z=~x\f~\r+~\f ~x= (~z\u0000~\f)=~\r sum(logj~\rj)\nAffine coupling [~xa;~xb] =~x\n~za=~xa\ff(~xb) +g(~xb)\n~z= [~za;~xb][~za;~zb] =~z\n~za= (~za\u0000g(~zb))=f(~zb)\n~x= [~za;~xb]sum(logjf(~xb)j)\n1\u00021Conv 8ij:z:;ij=Wx :;ij 8ij:x:;ij=W\u00001z:;ij h\u0001w\u0001logjdetWj\nEmerging Conv k=w1\fm1\ng=w2\fm2\nz=k?l(g?lx)8t:~ yt= (~ zt\u0000P\ni=t+1Gt;i~ yi)=Gt;t\n8t:~ xt= (~ yt\u0000Pt\u00001\ni=1Kt;i~ xi)=Kt;tP\nclogjkc;c;m y;mxgc;c;m y;mxj\nPeriodic Conv 8uv:^z:;uv=^Wuv^x:;uv8uv:^x:;uv=^W\u00001\nuv^z:;uvP\nu;vlogjdet^Wuvj\nabcdefghi012345678hihghgihihghgideeeffddeeeffddeeeffdbcbabacbcbabac012345678\nFigure 2. Illustration of a standard 3 \u00023 convolution layer with\none input and output channel. The spatial input size is 3\u00023, and\nthe input values are f0;1; : : : ; 8g. The convolution uses one-pixel-\nwide zero padding at each border, and the filter has parameters\nfa; b; : : : ; ig. Left: the convolution w?x. Right: the matrix\nmultiplication W\u0001~xwhich produces the equivalent result.\n2.3. Convolutions\nGenerally, a convolution layer1with filter wand input x\nis equivalent to the multiplication of W, ah w n cout\u0002\nh w n cinmatrix, and a vectorized input ~x. An example of\na single channel convolution and its equivalent matrix is\ndepicted in Figure 2. The signals ~xand~zare indexed as\nt=i+w\u0001j, where iis the width index, jis the height\nindex, and wis the total width. Note that the matrix W\nbecomes sparser as the image dimensions grow and that the\nparameters of the filter woccur repeatedly in the matrix\nW. A two-channel convolution is visualized in Figure 3,\nwhere we have omitted parameters inside filters to avoid\nclutter. Here, ~xand~zare vectorized using indexing t=\nc+nc\u0001i+ (nc\u0001w)\u0001j, where cdenotes the channel index\nandncthe number of channels.\nUsing standard convolutions as a generative flow is ineffi-\ncient. The determinant and inverse can be obtained na ¨ıvely\n1In deep learning, convolutions are often actually cross-\ncorrelations. In equations, ?denotes a cross-correlation and \u0003\ndenotes a convolution. Moreover, a convolution layer is usually\nimplemented as an aggregation of cross-correlations, i.e. a cross-\ncorrelation layer, which is denoted as ?l. In text we may omit these\ndetails.\nout 1in 1in 2out 2Figure 3. A standard 3\u00023 convolution layer with two input and\noutput channels. The input is 3\u00023spatially, and has two chan-\nnels. The convolution uses one-pixel-wide zero padding at each\nborder. Left: the convolution filter w. Right: the matrix Wwhich\nproduces the equivalent result when multiplied with a vectorized\ninput.\nby operating directly on the corresponding matrix, but this\nwould be very expensive, corresponding to computational\ncomplexityO(h3\u0001w3\u0001n3\nc).\n2.4. Autoregressive Convolutions\nAutoregressive convolutions have been widely used in the\nfield of normalizing flows (Germain et al., 2015; Kingma\net al., 2016) because it is straightforward to compute their\nJacobian determinant. Although there exist autoregressive\nconvolutions with different input and output dimensions, we\nletncout=ncinfor invertibility. 
2.4. Autoregressive Convolutions

Autoregressive convolutions have been widely used in the field of normalizing flows (Germain et al., 2015; Kingma et al., 2016) because it is straightforward to compute their Jacobian determinant. Although there exist autoregressive convolutions with different input and output dimensions, we let nc_out = nc_in for invertibility. In this case, autoregressive convolutions can be expressed as a multiplication between a triangular weight matrix and a vectorized input.

In practice, a filter k = w ⊙ m is constructed from weights w and a binary mask m that enforces the autoregressive structure (see Figure 4). The convolution with the masked filter is autoregressive without the need to mask inputs, which allows parallel computation of the convolution layer:

z = k ⋆_l x,   (4)

where ⋆_l denotes a convolution layer¹. The matrix multiplication ~z = K~x produces the equivalent result, where ~x and ~z are the vectorized signals, and K is a sparse triangular matrix constructed from k (see Figure 4). The Jacobian is triangular by design, and its determinant can be computed in O(nc), since it only depends on the diagonal elements of the matrix K:

log |det ∂z/∂x| = h · w · Σ_c^{nc} log |k_{c,c,my,mx}|,   (5)

where index c denotes the channel and (my, mx) denotes the spatial center of the filter.

Figure 4. An autoregressive 3×3 convolution layer with two input and output channels. The input has spatial dimensions 3×3, and two channels. The convolution uses one-pixel-wide zero padding at each border. Left: the autoregressive convolution filter k. Right: the matrix K, which produces the equivalent result on a vectorized input. Note that the equivalent matrix is triangular.

The inverse of an autoregressive convolution can theoretically be computed using ~x = K⁻¹~z. In reality this matrix is large and impractical to invert. Since K is triangular, the solution for ~x can be found through forward substitution:

~x_t = (~z_t − Σ_{i=1}^{t−1} K_{t,i} · ~x_i) / K_{t,t}.   (6)

The inverse can be computed by sequentially traversing the input feature map in the imposed autoregressive order. The computational complexity of the inverse is O(h · w · nc²), and computation can be parallelized across examples in the minibatch.
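A minimal single-channel NumPy sketch of Equations (4)–(6): a masked 3×3 filter keeps only the center pixel and pixels preceding it in raster order, so the zero-padded convolution can be inverted by a raster scan. The filter values are arbitrary, and the multi-channel case additionally requires a triangular solve over channels at each pixel.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
h = w = 4
x = rng.standard_normal((h, w))

mask = np.array([[1, 1, 1],
                 [1, 1, 0],
                 [0, 0, 0]])     # keep the center and pixels above/left of it
k = rng.standard_normal((3, 3)) * mask
k[1, 1] = 1.5                    # nonzero diagonal entry k_{c,c,my,mx}
logdet = h * w * np.log(abs(k[1, 1]))   # Eq. (5) for a single channel

z = correlate2d(x, k, mode='same')      # forward pass with zero padding, Eq. (4)

# Inverse via forward substitution, Eq. (6): raster scan in autoregressive order.
x_rec = np.zeros_like(z)
for j in range(h):
    for i in range(w):
        acc = 0.0
        for dj in (-1, 0):
            for di in (-1, 0, 1):
                if (dj, di) == (0, 0):
                    continue
                jj, ii = j + dj, i + di
                if mask[dj + 1, di + 1] and 0 <= jj < h and 0 <= ii < w:
                    acc += k[dj + 1, di + 1] * x_rec[jj, ii]  # already solved pixels
        x_rec[j, i] = (z[j, i] - acc) / k[1, 1]

assert np.allclose(x, x_rec)
```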
3. Method

We present two methods to generalize 1×1 convolutions to invertible d×d convolutions, improving the flexibility of generative flow models. Emerging convolutions are obtained by chaining autoregressive convolutions (Section 3.1), and periodic convolutions are decoupled in the frequency domain (Section 3.2). In Section 3.3, we provide a stable and flexible parameterization for invertible 1×1 convolutions.

3.1. Emerging convolutions

Although autoregressive convolutions are invertible, their transformation is restricted by the imposed autoregressive order, enforced through masking of the filters (as depicted in Figure 4). To alleviate this restriction, we propose emerging convolutions, which are more flexible and nevertheless invertible. Emerging convolutions are obtained by chaining specific autoregressive convolutions, invertible via the autoregressive inverses. To some extent this resembles the combination of stacks used to resolve the blind spot problem in conditional image modeling with PixelCNNs (van den Oord et al., 2016), with the important difference that we do not constrain the resulting convolution itself to be autoregressive.

Figure 5. Achievable emerging receptive fields that consist of two distinct autoregressive convolutions. Grey areas denote the first convolution filter and orange areas denote the second convolution filter. Blue areas denote the emerging receptive field, and white areas are masked. The convolution in the bottom row is a special case, which has a receptive field identical to a standard convolution.

The emerging receptive field can be controlled by chaining autoregressive convolutions with variations in the imposed order. A collection of achievable receptive fields for emerging convolutions is depicted in Figure 5, based on commonly used autoregressive masking.

The autoregressive inverse requires the solution to a sequential problem, and as a result, it inevitably suffers some additional computational cost. In emerging convolutions we minimize this cost through the use of an accelerated parallel inversion module, implemented in Cython, and by maintaining a relatively small dimensionality in the emerging convolutions compared to the internal size of the coupling layers.

3.1.1. Square emerging convolutions

Deep learning applications tend to use square filters, and libraries are specifically optimized for these shapes. Since most of the receptive fields in Figure 5 are unusually shaped, these would require masking to fit them in rectangular arrays, leading to unnecessary computation.

However, there is a special case in which the emerging receptive field of two specific autoregressive convolutions is identical to a standard convolution. These square emerging convolutions can be obtained by combining off-center square convolutions, depicted in the bottom row of Figure 5 (also Figure 1). Our square emerging convolution filters are more efficient, since they require fewer masked values in rectangular arrays.

There are two approaches to efficiently compute square emerging convolutions during optimization and density estimation: i) a d×d emerging convolution is expressed as two smaller consecutive (d+1)/2 × (d+1)/2 convolutions. Alternatively, ii) the order of convolution can be changed: first the smaller (d+1)/2 filters (k2 and k1) are convolved to obtain a single equivalent convolution filter. Then, the output of the emerging convolution is obtained by convolving the equivalent filter, k = k2 ∗ k1, with the feature map f:

k2 ⋆ (k1 ⋆ f) = (k2 ∗ k1) ⋆ f.   (7)

This equivalence follows from the associativity of convolutions and the time reversal of real discrete signals in cross-correlations.

When d = 1, two autoregressive convolutions simplify to an LU-decomposed 1×1 convolution. To ensure that emerging convolutions are flexible, we use emerging convolutions that consist of a single 1×1 convolution and two square autoregressive convolutions with different masking, as depicted in the bottom row of Figure 1. Again, the individual convolutions may all be combined into a single emerging convolution filter using the associativity of convolutions (Equation 7).
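Equation (7) can be checked numerically. The sketch below uses fully zero-padded ('full') correlations, where the identity is exact, with arbitrary 2×2 example filters mimicking the (d+1)/2-sized factors for d = 3.

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))   # feature map
k1 = rng.standard_normal((2, 2))  # first factor
k2 = rng.standard_normal((2, 2))  # second factor

# Two chained cross-correlations...
chained = correlate2d(correlate2d(f, k1, mode='full'), k2, mode='full')
# ...equal one cross-correlation with the convolved filter k = k2 * k1, Eq. (7).
combined = correlate2d(f, convolve2d(k2, k1, mode='full'), mode='full')
assert np.allclose(chained, combined)
```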
Specifi-\ncally, the input and filter are transformed using the Discrete\nFourier Transform (DFT) and multiplied element-wise, af-\nter which the inverse DFT is taken. By considering the\ntransformation in the frequency domain, the computational\ncomplexity of the determinant Jacobian and the inverse are\nconsiderably reduced. In contrast with emerging convolu-\ntions, which are very specifically parameterized, the filters\nof periodic convolutions are completely unconstrained.\nA standard convolution layer in deep learning is convention-\nally implemented as an aggregation of cross-correlations\nfor every output channel. The convolution layer with input\nout 1in 1in 2out 2Figure 6. Visualization of a periodic 3\u00023 convolution layer in\nthefrequency domain . The input and output have height 3, width\n3 and channels 2. The shape of the filter in the frequency domain\ndetermined by the shape of the image, which is also 3 \u00023 spatially\nin this specific example. Left: the convolution filter transformed to\nthe frequency domain ^w. Right: the matrix ^Win the frequency\ndomain, which produces the equivalent result on a vectorized input.\nThe equivalent matrix in the frequency domain is partitioned .\nxand filter woutputs the feature map z=w?lx, which is\ncomputed as:\nzcout=X\ncinwcout;cin?xcin: (8)\nLetF(\u0001)denote the Fourier transform and let F\u00001(\u0001)de-\nnote the inverse Fourier transform. The Fourier trans-\nform can be moved inside the channel summation, since\nit is distributive over addition. Let ^zcout=F(zcout),\n^wcout;cin=F(w\u0003\ncout;cin)and^xcin=F(xcin), which are\nindexed by frequencies uandv. Because a convolution\ndiffers from a cross-correlation by a time reversal for real\nsignals, let w\u0003\ncout;cindenote the reflection of filter wcout;cin\nin both spatial directions. Using these definitions, each\ncross-correlation is written as an element-wise multiplica-\ntion in the frequency domain:\n^zcout=X\ncin^wcout;cin\f^xcin; (9)\nwhich can be written as a sum of products in scalar form:\n^zcout;uv=X\ncin^wcout;cin;uv\u0001^xcin;uv: (10)\nThe summation of multiplications can be reformulated as\na matrix multiplication over the channel axis by viewing\nthe output ^z:;uvat frequency u; vas a multiplication of the\nmatrix ^Wuv=^w:;:;u;vand the input vector ^x:;uv:\n^z:;uv=^Wuv^x:;uv: (11)\nThe matrix ^Wuvhas dimensions cout\u0002cin, the input ^x:;uv\nand output ^z:;uvare vectors with dimension cinandcout.\nThe output in the original domain zcoutcan simply be re-\ntrieved by taking the inverse Fourier transform, F\u00001(^zcout).\nEmerging Convolutions for Generative Normalizing Flows\nThe perspective of matrix multiplication in the frequency do-\nmain decouples the convolution transformation (see Figure\n6). Therefore, the log determinant of a periodic convolu-\ntion layer is equal to the sum of determinants of individual\nfrequency components:\nlog\f\f\f\fdet@z\n@x\f\f\f\f= log\f\f\f\fdet@^z\n@^x\f\f\f\f=X\nu;vlog\f\f\fdet^Wuv\f\f\f:(12)\nThe determinant remains unchanged by the Fourier trans-\nform and its inverse, since these are unitary transformations.\nThe inverse operation requires an inversion of the matrix\n^Wuvfor every frequency u; v:\n^x:;uv=^W\u00001\nuv^z:;uv: (13)\nThe solution of xin the original domain is obtained by the\ninverse Fourier transform, xcin=F\u00001(^xcin), for every\nchannel cin.\nIn theory, a periodic convolutions may be not invertible, if\nthe determinant of any ^Wuvis equal to zero. 
In theory, a periodic convolution may not be invertible if the determinant of any Ŵ_uv is equal to zero. In practice, the filter is initialized with a nonzero determinant. Furthermore, the absolute determinant is maximized in the likelihood objective (Equation 1), which pushes the determinant away from zero.

Recall that a standard convolution layer is equivalent to a matrix multiplication with an (h · w · nc_out) × (h · w · nc_in) matrix, where we let nc_out = nc_in for invertibility. The Fourier transform decouples the transformation of the convolution layer at each frequency, which divides the computation into h · w separate matrix multiplications with nc × nc matrices. Therefore, the computational cost of the determinant is reduced from O(h³ · w³ · nc³) to O(h · w · nc³) in the frequency domain, and the computation can be parallelized, since the matrices are independent across frequencies and independent of the data. Furthermore, the inverse matrices Ŵ⁻¹_uv only need to be computed once after the model has converged, which reduces the inverse convolution to an efficient matrix multiplication with computational complexity² O(h · w · nc²).

² The inverse also incurs some overhead due to the Fourier transform of the feature maps, which corresponds to a computational complexity of O(h · w · nc · log(hw)).

3.3. QR 1×1 convolutions

Standard 1×1 convolutions are flexible but may be numerically unstable during optimization, causing crashes in the training procedure. Kingma & Dhariwal (2018) propose to learn a PLU decomposition, but since the permutation matrix P is fixed during optimization, their flexibility is limited.

In order to resolve the stability issues while retaining the flexibility of the transformation, we propose to use a QR
In (Papamakarios et al., 2017) autore-\ngressive convolutions are also used for density estimation,\nbut both its depth and number of channels makes drawing\nsamples computationally expensive.\nNormalizing flows have also been used to perform flexi-\nble inference in variational auto-encoders (Rezende & Mo-\nhamed, 2015; Kingma et al., 2016; Tomczak & Welling,\n2016; van den Berg et al., 2018; Huang et al., 2018) and\nBayesian neural networks (Louizos & Welling, 2017). In-\nstead of designing discrete sequences of transformations,\ncontinuous-time normalizing flows can also be designed by\ndrawing a connection with ordinary differential equations\n(Chen et al., 2018; Grathwohl et al., 2018).\nOther likelihood-based methods such as PixelCNNs\n(Van Oord et al., 2016) impose a specific order on the di-\nmensions of the image, which may not reflect the actual gen-\nerative process. Furthermore, drawing samples tends to be\ncomputationally expensive. Alternatively, V AEs (Kingma\n& Welling, 2014) optimize a lower bound of the likelihood.\nThe likelihood can be evaluated via an importance sampling\nscheme, but the quality of the estimate depends on the num-\nber of samples and the quality of the proposal distribution.\nEmerging Convolutions for Generative Normalizing Flows\nPeriodic ConvCouplingActnorm\nEmerging ConvCouplingActnorm\nSplitFlowSqueezeFlow\nSqueeze\nFigure 7. Overview of the model architecture. Left and center\ndepict the flow modules we propose: containing either a periodic\nconvolution or an emerging convolution. The diagram on the right\nshows the entire model architecture, where the flow module is\nnow grouped. The squeeze module reorders pixels by reducing\nthe spatial dimensions by a half, and increasing the channel depth\nby four. A hierarchical prior is placed on part of the intermediate\nrepresentation using the split module as in (Kingma & Dhariwal,\n2018). xandzdenote input and output. The model has Llevels,\nandDflow modules per level.\nMany non likelihood-based methods that can generate high\nresolution image samples utilize Generative Adversarial Net-\nworks (GAN) (Goodfellow et al., 2014). Although GANs\ntend to generate high quality images, they do not directly\noptimize a likelihood. This makes it difficult to obtain like-\nlihoods and to measure their coverage of the dataset.\n5. Results\nThe architecture of (Kingma & Dhariwal, 2018) is the start-\ning point for the architecture in our experiments. In the\nflow module, the invertible 1\u00021convolution can simply\nbe replaced with a d\u0002dperiodic or emerging convolution.\nFor a detailed overview of the architecture see Figure 7.\nWe quantitatively evaluate models on a variety of datasets\nin bits per dimension, which is equivalent to the negative\nlog2-likelihood. We do not use inception based metrics, as\nthey do not generalize to different datasets, and they do not\nreport overfitting (Barratt & Sharma, 2018). In addition,\nwe provide image samples generated with periodic convolu-\ntions trained on galaxy images, and samples generated with\nemerging convolutions trained on CIFAR10.\n5.1. Galaxy density modeling\nSince periodic convolutions assume that image boundaries\nare connected, they are suited for data where pixels along\nthe boundaries are roughly the same, or are actually con-\nnected. An example of such data is pictures taken in space,\nas they tend to contain some scattered light sources, and\nboundaries are mostly dark. Ackermann et al. 
collected\na small classification dataset of galaxies with images of\nmerging and non-merging galaxies. On the non-merging\ngalaxy images, we compare the bits per dimension of three\nmodels, constrained by the same parameter budget: 1\u00021\nconvolutions (Glow), 3\u00023Periodic and 3\u00023EmergingTable 2. Comparison of 1\u00021, periodic and emerging convolutions\non the galaxy images dataset. Performance is measured in bits per\ndimension. Results are obtained by running 3 times with different\nrandom seeds,\u0006reports standard deviation.\nGalaxy\n1\u00021(Glow) 2.03\u00060:026\nPeriodic 3\u00023 1.98\u00060:003\nEmerging 3\u000231.98\u00060:007\nFigure 8. 100 samples from a generative flow model utilizing peri-\nodic convolutions, trained on the galaxy images dataset.\nconvolutions (see Table 2). Experiments show that both\nour periodic and emerging convolutions significantly out-\nperform 1\u00021convolutions, and their performance is less\nsensitive to initialization. Samples of the model using peri-\nodic convolutions are depicted in Figure 8.\n5.2. Emerging convolutions\nThe performance of emerging convolution is extensively\ntested on CIFAR10 (Krizhevsky & Hinton, 2009) and Im-\nageNet (Russakovsky et al., 2015), with different architec-\ntural sizes. The experiments in Table 3 use the architecture\nfrom Kingma & Dhariwal (2018), where emerging convolu-\ntions replace the 1\u00021convolutions. Emerging convolutions\nperform either on par or better than Glow3, which may be\ncaused by the overparameterization of these large models.\nSamples of the model using emerging convolutions are de-\npicted in Figure 9.\nIn some cases, it may not be feasible to run very large mod-\nels in production because of the large computational cost.\nTherefore, it is interesting to study the behavior of models\nwhen they are constrained in size. We compare 1\u00021and\nemerging convolutions with the same number of flows per\nlevel ( D), for D= 8 andD= 4. Both on CIFAR10 and\nImageNet, we observe that models using emerging convolu-\n3The CIFAR10 performance of Glow was obtained by running\nthe code from the original github repository.\nEmerging Convolutions for Generative Normalizing Flows\nFigure 9. 100 samples from a generative flow model utilizing\nemerging convolutions, trained on CIFAR10.\nTable 3. Performance of Emerging convolutions on CIFAR10, Im-\nageNet 32x32 and ImageNet 64x64 in bits per dimension (negative\nlog2-likelihood), and\u0006reports standard deviation.\nCIFAR10 ImageNet\n32x32ImageNet\n64x64\nReal NVP 3.51 4.28 3.98\nGlow 3.36 \u00060:002 4.09 3.81\nEmerging 3.34\u00060:002 4.09 3.81\ntions perform significantly better. Furthermore, for smaller\nmodels the contribution of emerging convolutions becomes\nmore important, as evidenced by the increasing performance\ngap (see Table 4).\n5.3. Modeling and sample time comparison with MAF\nRecall that the inverse of autoregressive convolutions re-\nquires solving a sequential problem, which we have ac-\ncelerated with an inversion module that uses Cython and\nparallelism across the minibatch. Considering CIFAR-10\nand the same architecture as used in Table 3, it takes 39ms\nto sample an image using our accelerated emerging inverses,\n46 times faster than the na ¨ıvely obtained inverses using ten-\nsorflow bijectors (see Table 5). As expected, sampling from\nmodels using 1\u00021convolutions remains faster and takes\n5ms.\nTable 4. 
Table 4. Performance of emerging convolutions with different architectures on CIFAR10 and ImageNet 32x32 in bits per dimension. Results are obtained by running 3 times with different random seeds; ± reports standard deviation.

             CIFAR10        ImageNet 32x32   D
1×1 (Glow)   3.46 ± 0.005   4.18 ± 0.003     8
Emerging     3.43 ± 0.004   4.16 ± 0.004     8
1×1 (Glow)   3.56 ± 0.008   4.28 ± 0.008     4
Emerging     3.51 ± 0.001   4.25 ± 0.002     4

Table 5. Comparison of 1×1, MAF and emerging convolutions on CIFAR-10. Performance is measured in bits per dimension, and the time required to sample a datapoint, when computed in minibatches of size 100. The naïve implementation uses TensorFlow bijectors; our accelerated implementation uses Cython with MPI parallelization.

             bits/dim   Naïve sample (ms)   Accelerated sample (ms)
1×1 (Glow)   3.36       5                   5
MAF & 1×1    3.33       3000                650
Emerging     3.34       1800                39

Masked Autoregressive Flows (MAFs) are a very flexible method for density estimation, and they improve performance over emerging convolutions slightly: 3.33 versus 3.34 bits per dimension. However, the width and depth of MAFs make them a poor choice for sampling, because they considerably increase the time to compute their inverse: 3000ms per sample using a naïve solution, and 650ms per sample using our inversion module. Since emerging convolutions operate on lower dimensions of the data, they are 17 times faster to invert than the MAFs.

5.4. QR 1×1 convolutions
QR 1×1 convolutions are compared with standard and PLU convolutions on the CIFAR10 dataset. The models have 3 levels and 8 flows per level. Experiments confirm that our stable QR decomposition achieves the same performance as the standard parameterization, as shown in Table 6. This is expected, since any real square matrix has a QR decomposition. Furthermore, the experiments confirm that the less flexible PLU parameterization leads to worse performance, which is caused by the fixed permutation matrix.

Table 6. Comparison of standard, PLU and QR 1×1 convolutions. Performance is measured in bits per dimension (negative log2-likelihood). Results are obtained by running 3 times with different random seeds; ± reports standard deviation.

Parametrization   CIFAR10
W                 3.46 ± 0.005
PLU               3.47 ± 0.006
QR                3.46 ± 0.004
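To make the QR idea concrete, here is a hedged NumPy sketch (an illustration under our own conventions, not necessarily the exact parametrization used in the paper): Q is built as a product of Householder reflections, so it is orthogonal by construction, and R is upper-triangular with an exp-parameterized, hence strictly positive, diagonal. Then W = QR is always invertible and its log-determinant can be read off the diagonal.

```python
import numpy as np

def householder_orthogonal(vs):
    """Orthogonal Q as a product of Householder reflections I - 2 v v^T."""
    n = vs.shape[1]
    Q = np.eye(n)
    for v in vs:
        v = v / np.linalg.norm(v)
        Q = Q @ (np.eye(n) - 2.0 * np.outer(v, v))
    return Q

def qr_weight(vs, r_raw):
    """W = Q R; diag(R) = exp(diag(r_raw)) > 0 keeps W invertible."""
    R = np.triu(r_raw, k=1) + np.diag(np.exp(np.diag(r_raw)))
    W = householder_orthogonal(vs) @ R
    logdet = np.sum(np.diag(r_raw))   # log|det W| = log det R, since |det Q| = 1
    return W, logdet

n = 4
W, logdet = qr_weight(np.random.randn(n, n), np.random.randn(n, n))
assert np.isclose(np.log(abs(np.linalg.det(W))), logdet)
```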
6. Conclusion
We have introduced three generative flows: (i) d×d emerging convolutions as invertible standard zero-padded convolutions, (ii) d×d periodic convolutions for periodic data or data with minimal boundary variation, and (iii) stable and flexible 1×1 convolutions using a QR parametrization. Our methods show consistent improvements over various datasets using the same parameter budget, especially when considering models constrained in size.

References
Ackermann, S., Schawinski, K., Zhang, C., Weigel, A. K., and Turp, M. D. Using transfer learning to detect galaxy mergers. Monthly Notices of the Royal Astronomical Society.
Barratt, S. and Sharma, R. A note on the inception score. ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.
Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pp. 6572–6583, 2018.
Deco, G. and Brauer, W. Higher Order Statistical Decorrelation without Information Loss. In Tesauro, G., Touretzky, D. S., and Leen, T. K. (eds.), Advances in Neural Information Processing Systems 7, pp. 247–254. MIT Press, 1995.
Dinh, L., Krueger, D., and Bengio, Y. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. International Conference on Learning Representations, ICLR, 2017.
Germain, M., Gregor, K., Murray, I., and Larochelle, H. MADE: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pp. 881–889, 2015.
Gomez, A. N., Ren, M., Urtasun, R., and Grosse, R. B. The reversible residual network: Backpropagation without storing activations. In Advances in Neural Information Processing Systems, pp. 2214–2224, 2017.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D. FFJORD: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018.
Huang, C.-W., Krueger, D., Lacoste, A., and Courville, A. Neural autoregressive flows. arXiv preprint arXiv:1804.00779, 2018.
Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10236–10245, 2018.
Kingma, D. P. and Welling, M. Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations, 2014.
Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pp. 4743–4751, 2016.
Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
Li, X. and Grathwohl, W. Training Glow with Constant Memory Cost. NIPS Workshop on Bayesian Deep Learning, 2018.
Louizos, C. and Welling, M. Multiplicative normalizing flows for variational Bayesian neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 2218–2227. JMLR.org, 2017.
Papamakarios, G., Murray, I., and Pavlakou, T. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338–2347, 2017.
Rezende, D. and Mohamed, S. Variational Inference with Normalizing Flows. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1530–1538. PMLR, 2015.
Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 1278–1286. PMLR, 2014.
Rippel, O. and Adams, R. P. High-dimensional probability estimation with deep density models. arXiv preprint arXiv:1302.5125, 2013.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
Tabak, E. and Turner, C. V. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.
Tabak, E. G., Vanden-Eijnden, E., et al. Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences, 8(1):217–233, 2010.
Theis, L., van den Oord, A., and Bethge, M. A note on the evaluation of generative models. In International Conference on Learning Representations, 2016.
Tomczak, J. M. and Welling, M. Improving variational auto-encoders using householder flow. arXiv preprint arXiv:1611.09630, 2016.
van den Berg, R., Hasenclever, L., Tomczak, J. M., and Welling, M. Sylvester normalizing flows for variational inference. arXiv preprint arXiv:1803.05649, 2018.
van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al. Conditional Image Generation with PixelCNN Decoders. In Advances in Neural Information Processing Systems, pp. 4790–4798, 2016.
Van Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. Pixel recurrent neural networks. In International Conference on Machine Learning, pp. 1747–1756, 2016.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "d-HwWP1QfZI",
"year": null,
"venue": "IWQoS 2014",
"pdf_link": "https://ieeexplore.ieee.org/iel7/6908148/6914291/06914297.pdf",
"forum_link": "https://openreview.net/forum?id=d-HwWP1QfZI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E3: Towards energy-efficient distributed least squares estimation in sensor networks",
"authors": [
"Wanyu Lin",
"Jiannong Cao",
"Xuefeng Liu"
],
"abstract": "Domain-specific applications, such as structural health monitoring, have been one of the main drivers that motivates the real-world deployment of wireless sensor networks. Due to their data-intensive nature, it is typical for these applications to make heavy uses of least squares estimation as a foundation for their algorithms, which is a standard approach to compute the approximate solution of sets of equations in which there are more equations than unknowns. Due to the very limited amount of energy and computation power available on the sensors, it is imperative to design new algorithms to perform least squares estimation in a distributed fashion. While we wish to conserving energy by minimizing communication with our design, constraints on communication delays will also need to be satisfied. In this paper, we propose E <sup xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">3</sup> , a new distributed algorithm specifically designed to guarantee the precision of least squares estimation in sensor networks, with the objective of minimizing the energy consumption incurred during communication, while observing constraints on application-specific communication delays. Compared to previous works, we show that E <sup xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">3</sup> maintains the same level of estimation precision while incurring much lower energy costs.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xBL-2bF9HD",
"year": null,
"venue": "IEEE/ACM Trans. Netw. 2020",
"pdf_link": "https://ieeexplore.ieee.org/iel7/90/9295473/09195166.pdf",
"forum_link": "https://openreview.net/forum?id=xBL-2bF9HD",
"arxiv_id": null,
"doi": null
}
|
{
"title": "HyCloud: Tweaking Hybrid Cloud Storage Services for Cost-Efficient Filesystem Hosting",
"authors": [
"Jinlong E",
"Yong Cui",
"Zhenhua Li",
"Mingkang Ruan",
"Ennan Zhai"
],
"abstract": "Today's cloud storage infrastructures typically provide two distinct types of services for hosting files: object storage like Amazon S3 and filesystem storage like Amazon EFS. In practice, a cloud storage user often desires the advantages of both-efficient filesystem operations with a low unit storage price. An intuitive approach to achieving this goal is to combine the two types of services, e.g., by hosting large files in S3 and small files together with directory structures in EFS. Unfortunately, our benchmark experiments indicate that the clients' download performance for large files becomes a severe system bottleneck. In this article, we attempt to address the bottleneck with little overhead by carefully tweaking the usages of S3 and EFS. Guided by two key observations, we design and implement an open-source system called HyCloud. It automatically invokes the data APIs of S3 and EFS on behalf of users, and intelligently schedules the data transfer among S3, EFS and the clients in a distributed manner. Real-world evaluations demonstrate that the unit storage price of HyCloud is close to that of S3, and the filesystem operations are executed as quickly as in EFS in most times (sometimes even more quickly than in EFS).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "dJSaFlki76C",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=dJSaFlki76C",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Budget-Feasible Mechanism Design for Non-Monotone Submodular Objectives: Offline and Online",
"authors": [
"Georgios Amanatidis",
"Pieter Kleer",
"Guido Schäfer"
],
"abstract": "The framework of budget-feasible mechanism design studies procurement auctions where the auctioneer (buyer) aims to maximize his valuation function subject to a hard budget constraint. We study the problem of designing truthful mechanisms that have good approximation guarantees and never pay the participating agents (sellers) more than the budget. We focus on the case of general (non-monotone) submodular valuation functions and derive the first truthful, budget-feasible and $O(1)$-approximation mechanisms that run in polynomial time in the value query model, for both offline and online auctions. Since the introduction of the problem by Singer \\citepSinger10, obtaining efficient mechanisms for objectives that go beyond the class of monotone submodular functions has been elusive. Prior to our work, the only $O(1)$-approximation mechanism known for non-monotone submodular objectives required an exponential number of value queries. At the heart of our approach lies a novel greedy algorithm for non-monotone submodular maximization under a knapsack constraint. Our algorithm builds two candidate solutions simultaneously (to achieve a good approximation), yet ensures that agents cannot jump from one solution to the other (to implicitly enforce truthfulness). Ours is the first mechanism for the problem where---crucially---the agents are not ordered according to their marginal value per cost. This allows us to appropriately adapt these ideas to the online setting as well. To further illustrate the applicability of our approach, we also consider the case where additional feasibility constraints are present, e.g., at most k agents can be selected. We obtain O(p)-approximation mechanisms for both monotone and non-monotone submodular objectives, when the feasible solutions are independent sets of a p-system. With the exception of additive valuation functions, no mechanisms were known for this setting prior to our work. Finally, we provide lower bounds suggesting that, when one cares about non-trivial approximation guarantees in polynomial time, our results are asymptotically best possible.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "pQDHx73miGL",
"year": null,
"venue": "EACL (2) 2017",
"pdf_link": "https://aclanthology.org/E17-2101.pdf",
"forum_link": "https://openreview.net/forum?id=pQDHx73miGL",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using Images to Improve Machine-Translating E-Commerce Product Listings",
"authors": [
"Iacer Calixto",
"Daniel Stein",
"Evgeny Matusov",
"Pintu Lohar",
"Sheila Castilho",
"Andy Way"
],
"abstract": "Iacer Calixto, Daniel Stein, Evgeny Matusov, Pintu Lohar, Sheila Castilho, Andy Way. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. 2017.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers , pages 637–643,\nValencia, Spain, April 3-7, 2017. c\r2017 Association for Computational Linguistics\nUsing Images to Improve Machine-Translating\nE-Commerce Product Listings\nIacer Calixto1, Daniel Stein2, Evgeny Matusov2,\nPintu Lohar1, Sheila Castilho1and Andy Way1\n1ADAPT Centre, School of Computing, Dublin City University, Dublin, Ireland\n2eBay Inc., Aachen, Germany\n{iacer.calixto,pintu.lohar,sheila.castilho,andy.way }@adaptcentre.ie\n{danstein,ematusov }@ebay.com\nAbstract\nIn this paper we study the impact of using\nimages to machine-translate user-generated e-\ncommerce product listings. We study how\na multi-modal Neural Machine Translation\n(NMT) model compares to two text-only ap-\nproaches: a conventional state-of-the-art atten-\ntional NMT and a Statistical Machine Trans-\nlation (SMT) model. User-generated product\nlistings often do not constitute grammatical\nor well-formed sentences. More often than\nnot, they consist of the juxtaposition of short\nphrases or keywords. We train our models\nend-to-end as well as use text-only and multi-\nmodal NMT models for re-ranking n-best lists\ngenerated by an SMT model. We qualita-\ntively evaluate our user-generated training data\nalso analyse how adding synthetic data im-\npacts the results. We evaluate our models\nquantitatively using BLEU and TER and find\nthat(i)additional synthetic data has a general\npositive impact on text-only and multi-modal\nNMT models, and that (ii)using a multi-modal\nNMT model for re-ranking n-best lists im-\nproves TER significantly across different n-\nbest list sizes.\n1 Introduction\nIn e-commerce, there is a strong requirement to make\nproducts accessible regardless of the customer’s native\nlanguage and home country, by leveraging the gains\navailable from machine translation (MT). Among the\nchallenges in automatic processing are the specialized\nlanguage and grammar for listing titles, as well as\na high percentage of user-generated content for non-\nbusiness sellers, who often are not native speakers\nthemselves.\nWe investigate the nature of user-generated auction\nlistings’ titles as listed on the eBay main site1. Prod-\nuct listings contain extremely high trigram perplexi-\nties even if trained (and applied) on in-domain data,\nwhich is a challenge not only for proper language mod-\nels but also for automatic evaluation metrics such as the\nn-gram precision-based BLEU (Papineni et al., 2002)\n1http://www.ebay.com/metric. Nevertheless, when presenting humans with\nimages of the product which come along with the auc-\ntion titles, the listings are perceived as somewhere be-\ntween “easy” and “neutral” to understand.\nImages can bring useful complementary information\nto MT (Calixto et al., 2012; Hitschler et al., 2016;\nHuang et al., 2016). Therefore, we explore the potential\nof multi-modal, multilingual MT of auction listings’ ti-\ntles and product images from English into German. To\nthat end, we compare eBay’s production system, due to\nservice-level agreements a classic phrase-based statisti-\ncal MT (PBSMT) system, with two neural MT (NMT)\nsystems. 
One of the NMT models is a text-only attentional NMT and the other is a multi-modal attentional NMT model trained using the product images as additional data.
PBSMT still outperforms both text-only and multi-modal NMT models in the translation of product listings, contrary to recent findings (Bentivogli et al., 2016). Under the hypothesis that the amount of training data could be the culprit, and since curated multilingual, multi-modal in-domain data is very expensive to obtain, we back-translate monolingual listings and incorporate them as additional synthetic training data. Utilising synthetic data leads to big gains in performance and ultimately brings NMT models closer to bridging the gap with an optimized PBSMT system. We also use multi-modal NMT models to rescore the output of a PBSMT system and show significant improvements in TER (Snover et al., 2006).
This paper is structured as follows. In §2 we describe the text-only and multi-modal MT models we evaluate and in §3 the data sets we used, also introducing and discussing interesting findings. In §4 we discuss how we structure our quantitative evaluation, and in §5 we analyse and discuss our results. In §6 we discuss some relevant related work and in §7 we draw conclusions and devise future work.

2 Model
We first briefly introduce the two text-only baselines used in this work: a PBSMT model (§2.1) and a text-only attentive NMT model (§2.2). We then discuss the doubly-attentive multi-modal NMT model that we use in our experiments (§2.3), which is comparable to the model introduced by Calixto et al. (2016).

Figure 1: Decoder RNN with attention over source sentence and image features. This decoder learns to independently attend to image patches and source-language words when generating translations.

2.1 Statistical Machine Translation (SMT)
We use a PBSMT model built with the Moses SMT Toolkit (Koehn et al., 2007). The language model (LM) is a 5-gram LM with modified Kneser-Ney smoothing (Kneser and Ney, 1995). We use minimum error rate training (Och, 2003) for tuning the model parameters for BLEU scores.

2.2 Text-only Neural Machine Translation (NMT_t)
We use the attentive NMT model introduced by Bahdanau et al. (2015) as our text-only NMT baseline. It is based on the encoder–decoder framework and it implements an attention mechanism over the source-sentence words. With $X = (x_1, x_2, \cdots, x_N)$ and $Y = (y_1, y_2, \cdots, y_M)$ one-hot representations of a sentence in a source language and its translation into a target language, respectively, the model is trained to maximise the log-likelihood of the target given the source.
The encoder is a bidirectional recurrent neural network (Schuster and Paliwal, 1997) with GRU units (Cho et al., 2014). The annotation vector for a given source word $x_i$, $i \in [1, N]$, is the concatenation of forward and backward vectors $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$ obtained with forward and backward RNNs, respectively, and $C = (h_1, h_2, \cdots, h_N)$ is the set of source annotation vectors.
The decoder is also a recurrent neural network, more specifically a neural LM (Bengio et al., 2003) conditioned upon its past predictions via its previous hidden state $s_{t-1}$ and the word emitted in the previous time step $y_{t-1}$, as well as the source sentence via an attention mechanism. The attention mechanism computes a context vector $c_t$ for each time step $t$ of the decoder, where this vector is a weighted sum of the source annotation vectors $C$:

$$e^{\mathrm{src}}_{t,i} = (v^{\mathrm{src}}_a)^T \tanh(U^{\mathrm{src}}_a s_{t-1} + W^{\mathrm{src}}_a h_i), \qquad (1)$$
$$\alpha^{\mathrm{src}}_{t,i} = \frac{\exp(e^{\mathrm{src}}_{t,i})}{\sum_{j=1}^{N} \exp(e^{\mathrm{src}}_{t,j})}, \qquad (2)$$
$$c_t = \sum_{i=1}^{N} \alpha^{\mathrm{src}}_{t,i} h_i, \qquad (3)$$

where $\alpha^{\mathrm{src}}_{t,i}$ is the normalised alignment matrix between each source annotation vector $h_i$ and the word to be emitted at time step $t$, and $v^{\mathrm{src}}_a$, $U^{\mathrm{src}}_a$ and $W^{\mathrm{src}}_a$ are model parameters.
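Equations (1)-(3) amount to a score, a softmax and a weighted average, which the following minimal NumPy sketch makes explicit (an illustration of the mechanism only, not the authors' implementation; the shapes and names are our own assumptions):

```python
import numpy as np

def attention_context(s_prev, H, U, W, v):
    """Eqs. (1)-(3): additive attention over N source annotations.

    s_prev: decoder state s_{t-1}, shape (d_s,)
    H:      source annotation vectors h_1..h_N, shape (N, d_h)
    U, W:   projection matrices, shapes (d_a, d_s) and (d_a, d_h)
    v:      scoring vector v_a, shape (d_a,)
    """
    scores = np.tanh(s_prev @ U.T + H @ W.T) @ v   # eq. (1): e_{t,i} for every i
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                           # eq. (2): softmax weights
    return alpha @ H                               # eq. (3): context vector c_t
```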
2.3 Multi-modal Neural Machine Translation (NMT_m)
We use a multi-modal NMT model similar to the one introduced by Calixto et al. (2016), illustrated in Figure 1. It can be seen as an expansion of the attentive NMT framework described in §2.2 with the addition of a visual component to incorporate visual features.
We use a publicly available pre-trained Convolutional Neural Network (CNN), namely the 50-layer Residual network (ResNet-50) of He et al. (2015), to extract convolutional image features $(a_1, \cdots, a_L)$ for all images in our dataset. These features are extracted from the res4f layer and consist of a 196 x 1024 dimensional matrix where each row (i.e., a 1024D vector) represents features from a specific area and therefore only encodes information about that specific area of the image. In our NMT experiments, the ResNet-50 network is fixed during training, and there is no fine-tuning done for the translation task.
The visual attention mechanism computes a context vector $i_t$ for each time step $t$ of the decoder, similarly to the textual attention mechanism described in §2.2:

$$e^{\mathrm{img}}_{t,l} = (v^{\mathrm{img}}_a)^T \tanh(U^{\mathrm{img}}_a s_{t-1} + W^{\mathrm{img}}_a a_l), \qquad (4)$$
$$\alpha^{\mathrm{img}}_{t,l} = \frac{\exp(e^{\mathrm{img}}_{t,l})}{\sum_{j=1}^{L} \exp(e^{\mathrm{img}}_{t,j})}, \qquad (5)$$
$$i_t = \sum_{l=1}^{L} \alpha^{\mathrm{img}}_{t,l} a_l, \qquad (6)$$

where $\alpha^{\mathrm{img}}_{t,l}$ is the normalised alignment matrix between each image annotation vector $a_l$ and the word to be emitted at time step $t$, and $v^{\mathrm{img}}_a$, $U^{\mathrm{img}}_a$ and $W^{\mathrm{img}}_a$ are model parameters.

3 Data sets
The multi-modal NMT model we evaluate uses parallel sentences and an image as input. Thus, we use the data set of product listings and images produced by eBay. They consist of 23,697 triples of products, henceforth original, containing each (i) a listing in English, (ii) its translation into German and (iii) a product image. Validation and test sets used in our experiments consist of 480 and 444 triples, respectively.
The curation of parallel product listings with an accompanying product image is costly and time-consuming, so the in-domain data is rather small. More easily accessible are monolingual German listings accompanied by the product image, where the source text input can be emulated by back-translating the target listing. For this set of experiments, we use 83,832 tuples, henceforth mono. Finally, we also use the publicly available Multi30k dataset (Elliott et al., 2016), a multilingual expansion of the original Flickr30k (Young et al., 2014) with ~30k pictures from Flickr, one description in English and one human translation of the English description into German.
Translating user-generated product listings has particular challenges; they are often ungrammatical and can be difficult to interpret in isolation even by a native speaker of the language, as can be seen in the examples in Table 1.
To further demonstrate this issue, in Table 2 we show the number of running words as well as the perplexity scores obtained with LMs trained on three sets of different German corpora: the Multi30k, eBay's in-domain data and a concatenation of the WMT 2015² Europarl (Koehn, 2005), Common Crawl and News Commentary corpora (Bojar et al., 2015).³

² We use the German side of the English–German parallel WMT 2015 corpora.
³ These are 5-gram LMs trained with KenLM (Heafield et al., 2013) using modified Kneser-Ney smoothing (Kneser and Ney, 1995) on tokenized, lowercased data.

Table 1: Examples of product listings and their accompanying image.
(en) just rewired original mission 774 fluid damped low mass tonearm, very good cond.
(de) vor kurzem neu verkabelter flüssigkeitsgedämpfter leichter original-mission 774-tonarm, sehr guter zustand
(en) mary kay cheek color mineral pick citrus bloom shy blush bold berry + more
(de) mary kay mineral cheek colour farbauswahl citrus bloom shy blush bold berry + mehr

Table 2: Perplexity on eBay and Multi30k's test sets for LMs trained on different corpora. WMT'15 is the concatenation of the Europarl, Common Crawl and News Commentary corpora (the German side of the parallel English–German corpora).

LM training corpus   #words (×1000)   Perplexity (×1000)
                                      eBay    Multi30k
WMT'15               4310.0           60.1    0.5
Multi30k             29.0             25.2    0.05
eBay                 99.0             1.8     4.2

We see that different LM perplexities on eBay's test set are high even for an LM trained on eBay in-domain data. LMs trained on mixed-domain corpora such as the WMT 2015 corpora or the Multi30k have perplexities below 500 on the Multi30k test set, which is expected. However, when applied to eBay's test data, perplexities computed can be over 60k. Conversely, an LM trained on eBay in-domain data, when applied to the Multi30k test set, also computes very high perplexity scores. These perplexity scores indicate that fluency might not be a good metric to use in our study, i.e. we should not expect a fluent machine-translated output of a model trained on poorly fluent training data.
Clearly, translating user-generated product listings is very challenging; for that reason, we decided to check with humans how they perceive that data with and without having the associated images available. We hypothesise that images bring additional understanding to their corresponding listings.
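For readers unfamiliar with the metric: perplexity is the exponentiated average negative log-probability the LM assigns to the tokens, so a listing whose tokens the model finds unlikely yields a very large value. A minimal sketch (our own illustration, with made-up per-token log-probabilities rather than actual KenLM output):

```python
def perplexity(log2_probs):
    """Perplexity = 2 ** (mean negative log2-probability per token)."""
    return 2.0 ** (-sum(log2_probs) / len(log2_probs))

# Tokens an LM finds very unlikely (log2 p around -14) already push
# perplexity towards ~2**14, i.e. the tens of thousands seen in Table 2:
print(perplexity([-14.2, -11.7, -16.9, -13.4]))   # ~17000
```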
3.1 Source (target) product title–image assessment
A human evaluator is presented with the English (German) product listing. Half of them are also shown the product image, whereas the other half is not. For the first group, we ask two questions: (i) in the context of the product image, how easy it is to understand the English (German) product listing and (ii) how well does the English (German) product listing describe the product image. For the second group, we just ask (i) how easy it is to understand the English (German) product listing. In all cases humans must select from a five-level Likert scale where in (i) answers range from 1–Very easy to 5–Very difficult and in (ii) from 1–Very well to 5–Very poorly.

Table 3: Difficulty to understand product listings with and without images, and adequacy of product listings and images. N is the number of raters.

Listing     N    Difficulty       Difficulty        Adequacy
language         (listing only)   (listing+image)   (listing+image)
English     20   2.50 ± 0.84      2.40 ± 0.84       2.45 ± 0.49
German      15   2.83 ± 0.75      2.00 ± 0.50       2.39 ± 0.78

Table 3 suggests that the intelligibility of both the English and German product listings is perceived to be somewhere between “easy” and “neutral” when images are also available. It is notable that, for German, there is a statistically significant difference between the group who had access to the image and the product listings (M=2.00, SD=.50) and the group who only viewed the listings (M=2.83, SD=.30), where F(1,13) = 6.72, p < 0.05. Furthermore, humans find that product listings describe the associated image somewhere between “well” and “neutral”, with no statistically significant differences between the adequacy of product listings and images in different languages.
Altogether, we have a strong indication that images can indeed help an MT model translate product listings, especially for translations into German.

4 Experimental setup
The PBSMT model we use as a baseline is trained on 120k in-domain parallel sentences (§2.1).
To measure how well multi-modal and text-only NMT models perform when trained on exactly the same data with and without images, respectively, we trained them only on the original and the Multi30k (Elliott et al., 2016) data sets. We also did not use any additional parallel, but out-of-domain, data that had been used to train eBay's PBSMT production system (see Section 5). Training our text-only NMT_t baseline on this large corpus would not help shed more light on how multi-modality helps MT, since it has no images available and thus cannot be used to train the multi-modal model NMT_m. Rather, we report results of re-ranking experiments using n-best lists generated by eBay's best-performing PBSMT production system.
In order to measure the impact of the training data size on MT quality, we follow Sennrich et al. (2016) and back-translate the mono German product listings using our baseline NMT_t model trained on the original 23,697 German→English corpus (- images). These additional synthetic data (including images) are added to the original's 23,697 triples and used in our translation experiments. We do not include the back-translated data set when training NMT models for re-ranking n-best lists, to be able to evaluate these two scenarios independently.
We evaluate our models quantitatively using BLEU4 (Papineni et al., 2002) and TER (Snover et al., 2006) and report statistical significance computed using approximate randomisation with the Multeval toolkit (Clark et al., 2011).
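The back-translation step described above can be summarised in a few lines. This is a hedged sketch of the data-augmentation loop only (the name `translate_de_en` is a hypothetical stand-in for the baseline NMT_t German→English model, and the tuple layout is our assumption):

```python
def backtranslate(mono_de, translate_de_en):
    """Turn monolingual German listings into synthetic (en, de, image) triples."""
    synthetic = []
    for de_listing, image in mono_de:
        en_synthetic = translate_de_en(de_listing)   # emulated source side
        synthetic.append((en_synthetic, de_listing, image))
    return synthetic

# The resulting triples are concatenated with the 23,697 original triples
# before training the text-only and multi-modal NMT models.
```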
5 Results
In Table 4 we present quantitative results obtained with the two text-only baselines SMT and NMT_t and one multi-modal model NMT_m.

Table 4: Comparative results with PBSMT, NMT_t and the multi-modal model NMT_m evaluated on eBay's test set (↑/↓ mark the change relative to each model's first row). Best PBSMT and NMT results in bold in the original.

Model   Training data         BLEU          TER
PBSMT   original + Multi30k   26.1          54.9
        + backtranslated      27.4 (↑1.3)   55.4 (↑0.5)
NMT_t   original + Multi30k   21.1          60.0
        + backtranslated      22.5 (↑1.4)   58.0 (↓2.0)
NMT_m   original + Multi30k   17.8          62.2
        + backtranslated      25.1 (↑7.3)   55.5 (↓6.7)

Improvements: NMT_m vs. NMT_t: ↑2.3 BLEU, ↓2.5 TER; NMT_m vs. SMT: ↓2.3 BLEU, ↑0.6 TER.

It is clear that the gains from adding more data are much more apparent to the multi-modal NMT_m model than to the two text-only ones. This can be attributed to the fact that this model has access to more data, i.e. image features, and consequently can learn better representations derived from them. The PBSMT model's improvements are inconsistent; its TER score even deteriorates by 0.5 with the additional data. The same does not happen with the NMT models, which both (text-only and multi-modal) benefit from the additional data. Model NMT_m's gains are more than 3× larger than those of models NMT_t and SMT, indicating that it can properly exploit the additional data. Nevertheless, even with the added back-translated data, model NMT_m still falls behind the PBSMT model both in terms of BLEU and TER, although it seems to be catching up as the data size increases.
In Table 5, we show results for re-ranking 10- and 100-best lists generated by eBay's PBSMT production system. This system was trained with additional data sampled from out-of-domain corpora and also includes extra features and optimizations. Its BLEU score on the eBay test set is 29.0. Nevertheless, we still observe improvements in rescoring of n-best lists from this system using our “weaker” NMT models. When n = 10, both models NMT_t and NMT_m significantly improve the baseline in terms of TER, with model NMT_m performing slightly better. With larger lists (n = 100), it seems that both neural models have more difficulty to re-rank. Nonetheless, in this scenario model NMT_m still significantly improves the MT quality in terms of TER, while model NMT_t shows differences in BLEU and TER which are not statistically significant (p ≤ 0.05).

Table 5: Results for re-ranking n-best lists generated for eBay's test set with text-only and multi-modal NMT models. † marks differences that are statistically significant (p ≤ 0.05); oracle scores in parentheses; ↑/↓ deltas are relative to the baseline. We also show the translation length for re-ranked n-best lists.

Model      Training data     N     BLEU (oracle)       TER (oracle)          Translation length
baseline   —                 —     29.0 (—)            53.0 (—)              13.60 ± 2.59
NMT_t      100k in-domain    10    29.3 ↑0.3 (35.4)    52.4† ↓0.6 (46.4)     13.48 ± 2.59
NMT_m      orig. + Multi30k  10    29.4 ↑0.4 (35.4)    52.1† ↓0.9 (46.4)     13.41 ± 2.58
NMT_t      100k in-domain    100   28.9 ↓0.1 (42.2)    53.6† ↑0.6 (41.0)     13.80 ± 2.67
NMT_m      orig. + Multi30k  100   28.9 ↓0.1 (42.2)    52.4† ↓0.6 (41.0)     13.50 ± 2.59

We note that model NMT_m's improvements in TER are consistent across different n-best list sizes; model NMT_t's improvements are not.
The best BLEU (= 29.4) and TER (= 52.1) scores were achieved by model NMT_m when applied to re-rank 10-best lists, although model NMT_m still improves in terms of TER when n = 100. This suggests that model NMT_m can efficiently exploit the additional multi-modal signals.
In order to check whether improvements observed in TER could be due to a preference of text-only and multi-modal NMT models for shorter sentences (Table 5), we also computed the average length of translations for n-best lists re-ranked with each of our models, and note that there is no significant difference between the length of translations for the baseline and the re-ranked models.
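Re-ranking itself is a small operation once the n-best lists exist. The sketch below shows one common formulation, rescoring each PBSMT hypothesis with an external (possibly multi-modal) NMT log-probability; the interpolation weight and the callable `nmt_logprob` are our own illustrative assumptions, as the paper does not spell out the exact combination scheme:

```python
def rerank_nbest(nbest, nmt_logprob, weight=1.0):
    """Return the hypothesis with the best combined score.

    nbest:       list of (hypothesis, smt_score) pairs from the PBSMT decoder
    nmt_logprob: callable mapping a hypothesis (and, for NMT_m, its image)
                 to a length-normalised log-probability
    """
    best_hyp, _ = max(
        ((hyp, smt + weight * nmt_logprob(hyp)) for hyp, smt in nbest),
        key=lambda pair: pair[1],
    )
    return best_hyp
```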
6 Related work
NMT has been successfully tackled by different groups using the sequence-to-sequence framework (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014). However, multi-modal MT has just recently been addressed by the MT community in a shared task (Specia et al., 2016). In NMT, Bahdanau et al. (2015) first proposed to use an attention mechanism in the decoder. Their decoder learns to attend to the relevant source-language words as it generates each word of the target sentence. Since then, many authors have proposed different ways to incorporate attention into MT (Luong et al., 2015; Firat et al., 2016; Tu et al., 2016).
In the context of image description generation (IDG), Vinyals et al. (2015) proposed an influential neural IDG model based on the sequence-to-sequence framework and trained end-to-end. Elliott et al. (2015) put forward a model to generate multilingual descriptions of images by learning and transferring features between two independent, non-attentive neural image description models. Finally, Xu et al. (2015) proposed an attention-based model where a model learns to attend to specific areas of an image representation as it generates its description in natural language with a soft-attention mechanism.
Although no purely neural multi-modal model to date has significantly improved on both text-only NMT and SMT models on the Multi30k data set (Specia et al., 2016), different research groups have proposed to include images in re-ranking n-best lists generated by an SMT system or directly in a NMT framework with some success (Caglayan et al., 2016; Calixto et al., 2016; Huang et al., 2016; Libovický et al., 2016; Shah et al., 2016).
To the best of our knowledge, we are the first to study multi-modal NMT applied to the translation of product listings, i.e. for the e-commerce domain.

7 Conclusions and Future work
In this paper, we investigate the potential impact of multi-modal NMT in the context of e-commerce product listings. With only a limited amount of multi-modal and multilingual training data available, both text-only and multi-modal NMT models still fail to outperform a productive SMT system, contrary to recent findings (Bentivogli et al., 2016). However, the introduction of back-translated data leads to substantial improvements, especially to a multi-modal NMT model. This seems to be an interesting approach that we will continue to explore in future work.
We also found that NMT models trained on small in-domain data sets can still be successfully used to rescore a standard PBSMT system, with significant improvements in TER. Since we know from our experiments with LM perplexities that these are very high for e-commerce data, i.e. fluency is quite low, it seems fitting that BLEU scores do not improve as much. In future work, we will also conduct a human evaluation of the translations generated by the various systems.

Acknowledgements
The ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University is funded under the Science Foundation Ireland Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations, ICLR 2015, San Diego, CA.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A Neural Probabilistic Language Model. J. Mach. Learn. Res., 3:1137–1155, March.
Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus Phrase-Based Machine Translation Quality: a Case Study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 257–267, Austin, Texas.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal.
Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, and Joost van de Weijer. 2016. Does multimodality help human and machine for translation and image captioning? In Proceedings of the First Conference on Machine Translation, pages 627–633, Berlin, Germany.
Iacer Calixto, Teofilo de Campos, and Lucia Specia. 2012. Images as context in Statistical Machine Translation. In The 2nd Annual Meeting of the EPSRC Network on Vision & Language (VL'12), Sheffield, UK. EPSRC Vision and Language Network.
Iacer Calixto, Desmond Elliott, and Stella Frank. 2016. DCU-UvA Multimodal MT System Report. In Proceedings of the First Conference on Machine Translation, pages 634–638, Berlin, Germany.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. In Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar.
Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT '11, pages 176–181, Portland, Oregon.
Desmond Elliott, Stella Frank, and Eva Hasler. 2015. Multi-Language Image Description with Neural Sequence Models. CoRR, abs/1510.04709.
Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30k: Multilingual english-german image descriptions. In Workshop on Vision and Language at ACL '16, Berlin, Germany.
Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866–875, San Diego, California.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385.
Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690–696, Sofia, Bulgaria.
Julian Hitschler, Shigehiko Schamoni, and Stefan Riezler. 2016. Multimodal Pivots for Image Caption Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2399–2409, Berlin, Germany.
Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based Multimodal Neural Machine Translation. In Proceedings of the First Conference on Machine Translation, pages 639–645, Berlin, Germany.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1700–1709, Seattle, USA.
Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, volume I, pages 181–184, Detroit, Michigan.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07, pages 177–180, Prague, Czech Republic.
Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand.
Jindřich Libovický, Jindřich Helcl, Marek Tlustý, Ondřej Bojar, and Pavel Pecina. 2016. CUNI System for WMT16 Automatic Post-Editing and Multimodal Translation Tasks. In Proceedings of the First Conference on Machine Translation, pages 646–654, Berlin, Germany.
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421, Lisbon, Portugal.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL '03, pages 160–167, Sapporo, Japan.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311–318, Philadelphia, Pennsylvania.
M. Schuster and K.K. Paliwal. 1997. Bidirectional Recurrent Neural Networks. IEEE Transactions on Signal Processing, 45(11):2673–2681.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany.
Kashif Shah, Josiah Wang, and Lucia Specia. 2016. SHEF-Multimodal: Grounding Machine Translation on Images. In Proceedings of the First Conference on Machine Translation, pages 660–665, Berlin, Germany.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas, pages 223–231, Cambridge, MA, USA.
Lucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A Shared Task on Multimodal Machine Translation and Crosslingual Image Description. In Proceedings of the First Conference on Machine Translation, pages 543–553, Berlin, Germany.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems, NIPS, pages 3104–3112, Montréal, Canada.
Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling Coverage for Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76–85, Berlin, Germany.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pages 3156–3164, Boston, Massachusetts.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2048–2057, Lille, France.
Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "uyi7c0gUXU4",
"year": null,
"venue": "DSN 2000",
"pdf_link": "https://ieeexplore.ieee.org/iel5/6928/18625/00857575.pdf",
"forum_link": "https://openreview.net/forum?id=uyi7c0gUXU4",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Implementing e-Transactions with Asynchronous Replication",
"authors": [
"Svend Frølund",
"Rachid Guerraoui"
],
"abstract": "An e-Transaction is one that executes exactly-once despite failures. This paper describes a distributed protocol that implements the abstraction of e-Transaction in three-tier architectures. Three-tier architectures are typically Internet-oriented architectures, where the end-user interacts with front-end clients (e.g., browsers) that invoke middle-tier application servers (e.g., web servers) to access back-end databases. We implement the e-Transaction abstraction using an asynchronous replication scheme that preserves the three-tier nature of the architecture and introduces a very acceptable overhead with respect to unreliable solutions.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "L4-4GnmsK7",
"year": null,
"venue": "EC2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=L4-4GnmsK7",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Incentives for sharing in peer-to-peer networks.",
"authors": [
"Philippe Golle",
"Kevin Leyton-Brown",
"Ilya Mironov"
],
"abstract": "We consider the free-rider problem that arises in peer-to-peer file sharing networks such as Napster: the problem that individual users are provided with no incentive for adding value to the network. We examine the design implications of the assumption that users will selfishly act to maximize their own rewards, by constructing a formal game theoretic model of the system and analyzing equilibria of user strategies under several novel payment mechanisms. We support and extend upon our theoretical predictions with experimental results from a multi-agent reinforcement learning model.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "fALNS13kDNLf",
"year": null,
"venue": "EC2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=fALNS13kDNLf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Coalitional games on graphs: core structure, substitutes and frugality.",
"authors": [
"Rahul Garg",
"Vijay Kumar",
"Atri Rudra",
"Akshat Verma"
],
"abstract": "No abstract available.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "MlPNLiyBVH8",
"year": null,
"venue": "ECAI 2016",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-1640",
"forum_link": "https://openreview.net/forum?id=MlPNLiyBVH8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Transfer of Reinforcement Learning Negotiation Policies: From Bilateral to Multilateral Scenarios",
"authors": [
"Ioannis Efstathiou",
"Oliver Lemon"
],
"abstract": "Trading and negotiation dialogue capabilities have been identified as important in a variety of AI application areas. In prior work, it was shown how Reinforcement Learning (RL) agents in bilateral negotiations can learn to use manipulation in dialogue to deceive adversaries in non-cooperative trading games. In this paper we show that such trained policies can also be used effectively for multilateral negotiations, and can even outperform those which are trained in these multilateral environments. Ultimately, it is shown that training in simple bilateral environments (e.g. a generic version of “Catan”) may suffice for complex multilateral non-cooperative trading scenarios (e.g. the full version of Catan).",
"keywords": [],
"raw_extracted_content": "Transfer of Reinforcement Learning Negotiation Policies:\nIoannis Efstathiou and Oliver Lemon1\nAbstract. Trading and negotiation dialogue capabilities have been\nidentified as important in a variety of AI application areas. In prior\nwork, it was shown how Reinforcement Learning (RL) agents inbilateral negotiations can learn to use manipulation in dialogue todeceive adversaries in non-cooperative trading games. In this paperwe show that such trained policies can also be used effectively for\nmultilateral negotiations, and can even outperform those which are\ntrained in these multilateral environments. Ultimately, it is shown\nthat training in simple bilateral environments (e.g. a generic version\nof “Catan”) may suffice for complex multilateral non-cooperativetrading scenarios (e.g. the full version of Catan).\n1 Introduction\nWork on automated conversational systems has previously been fo-cused on cooperative dialogue, where a dialogue system’s core goalis to assist humans in their tasks such as finding a restaurant [13].However, non-cooperative dialogues, where an agent may act to sat-isfy its own goals, are also of practical and theoretical interest [6]. Itmay be useful for a dialogue agent not to be fully cooperative whentrying to gather information from a human, or when trying to per-suade, or in the area of believable characters in video games andeducational simulations [6]. Another area in which non-cooperative\ndialogue behaviour is desirable is in negotiation [12]. Recently, Re-\ninforcement Learning (RL) methods have been applied in order to\noptimise cooperative dialogue management, where the decision of\nthe next dialogue move to make in a conversation is in focus, in order\nto maximise an agent’s overall long-term expected utility [13, 9, 10].Those methodologies used RL with reward functions that give pos-itive feedback to the agent only when it meets the user’s goals.This work has shown that robust and efficient dialogue managementstrategies can be learned, but until [3], has only addressed coopera-tive dialogue. Lately it has been shown [5] that when given the abilityto perform both cooperative and non-cooperative (manipulative) di-alogue moves, a dialogue agent can learn to bluff and to lie duringtrading so as to win games more often, under various conditions suchas risking penalties for being caught in deception – against a varietyof adversaries [4]. Here we transfer those learned bilateral policies tomore complex multilateral negotiations, and evaluate them.\n2 Learning in Bilateral Negotiations\nTo learn trading policies in a controlled setting we initially [5]used a 2-player version of the non-cooperative 4-player board game“Catan”. We call the 2 players the “adversary” and the “Reinforce-ment learning agent” (RLA). The goal of the RLA was to gather a\n1Interaction Lab, Heriot-Watt University, Edinburgh, email: i.efstathiou,\[email protected] number of resources via trading dialogue. Trade occurredthrough proposals that might lead to acceptance or rejection fromthe adversary. In an agent’s proposal (turn) only one ‘give 1-for-1’or ‘give 1-for-2’ trading proposal might occur, or nothing (41 ac-tions in total), e.g. 
We first investigated the case of learning trading policies against adversaries which always accepted a trading proposal. The goal-oriented RLA did not use any manipulative actions and learned to reach its goal resources as soon as possible. In the case where the goal was to build a city it learned to win 96.8% of the time [5]. We then trained the manipulative (dishonest) RLA [5], which could ask for resources that it did not really need. It could also propose trades without checking whether the offered resource was available. The manipulated adversary [5] was implemented based on the intuition that a rational adversary will act so as to hinder other players in respect of their expressed preferences. The above trained policies of both of the agents are now evaluated in JSettlers [11].

3 Evaluating in Multilateral Negotiations
The experiments here are all conducted using JSettlers [11], a research environment developed in Java that captures the full multi-player version of the game Catan, where there is trading and building. 10k games were played for each experiment. The players are:
The original STAC Robot (Bot) is based on the original expert rule-based agent of JSettlers [11] which is further modified to improve its winning performance. This agent (the Bot), which is the “benchmark” agent described in [7], uses complex heuristics to increase performance by following a dynamic building plan according to its current resource needs and the board's set-up.
Our trained RLA is in fact a Bot which has been modified to make offers based on our four learnt policies (for the development of city, road, development card, and settlement) in our version of the game “Catan” (Section 2). These policies were either the goal-oriented ones or the manipulative (dishonest) ones.
The Bayesian agent (Bayes) [8] is a Bot whose trading proposals are made based on the human corpus that was collected by Afantenos et al. [1]. The Bayesian agent was 65.7% accurate in reproducing the human moves.
The Manipulated Bot is a Bot which can be manipulated by our trained dishonest agent (i.e. the weights of the resources that they offer and ask for change according to the trained manipulative RL proposals). There are 3 types of manipulated Bots, as we will see.

3.1 Evaluation without Manipulation
Trained RLA (goal-oriented) vs. 3 Bots: Our trained RLA resulted in a performance of 32.66%², while those of the Bots were 22.9%, 22.66% and 21.78% respectively. This was interesting because it proved that our generic 2-player version of the game (Section 2) was enough to train a successful policy for the multi-player version of the game, by effectively treating all three opponents as one. Hence our RLA proposed only public trades. Furthermore the 32.66% performance of our RLA was around 7% better than that of [8], who trained it in the real multilateral negotiations environment (JSettlers).

² The baseline performance in a four-player game is 25%.
Trained RLA (goal-oriented) vs. 3 Bayes: In this experiment our trained agent scored a performance of 36.32%, which is much higher than those of the three Bayes agents. Their performances were 21.43%, 21.02% and 21.23% respectively.

3.2 Evaluation with Manipulation
Here we evaluated our previously trained dishonest RL policies against the 3 types of Manipulated Bots and the Bayes agents.
Trained Dishonest RLA vs. 3 Manipulated Bots: In this experiment the 3 manipulated Bots' win rates were 21.44%, 20.79% and 21.42% respectively. Our trained Dishonest RLA won by 36.35%.
Trained Dishonest RLA vs. 3 Manipulated Bots (Weights based on Building Plan): The Bot's probabilities are adjusted further according to the building plan (BP) in this case. That means that the Bots are initially biased towards specific resources, as the BP indicates the next piece to build (e.g. city). The results of this experiment were still satisfying: the 3 manipulated Bots won by 22.53%, 21.47% and 21.8% respectively. Our trained Dishonest RLA won by 34.2%.
Trained Dishonest RLA vs. 3 Manipulated Bots (Weights based on Building Plan and Resource Quantity): This case is identical to the above but the trade probabilities are additionally adjusted according to the goal resource quantity. The results of this experiment for the trained Dishonest RL policies were as good as the above: the 3 manipulated Bots' win rates were 21.72%, 21.5% and 22.47% respectively. Our trained Dishonest RLA won by 34.33%. This result, along with the two above, suggested that the RLA's dishonest manipulative policies were very effective against the Bots of the multi-player version of the game, showing that our transition from a bilateral negotiation environment to a multilateral one was successful.
Trained Dishonest RLA vs. 3 Bayes: We hypothesised in this case that the human players might have been affected by their opponents' manipulation (if any occurred in the data collection [1]), and we wanted to test that by using our Dishonest policy. The results supported our hypothesis: the 3 Bayes agents won by 21.97%, 20.58% and 21.64% respectively. Our trained Dishonest RLA won by 35.81%. This was evidence that the Bayes agents were indeed affected by manipulation, now by the Dishonest RLA's manipulative policy too, and its success resulted in almost 14% more winning games.
4 Conclusion
We showed that our trained bilateral RL policies from our generic version of "Catan" were able to outperform (by at least 10%) the agents of the JSettlers [11] environment and even managed to successfully manipulate them. That demonstrated how successful trained policies from bilateral negotiations can be when evaluated in more complex multilateral ones, even compared to those which are trained in these multilateral negotiations. Hence training RL policies in complex multilateral negotiations may be unnecessary in some cases. Furthermore, by considering all of the opponents as one player, and by proposing public trades for all players, we bypass complexities that arise from personalizing the agent's trading proposals for each distinct opponent. Our findings show that an explicit model of each adversary is not required for successful RL policies to be learned in this case. Ultimately, this suggests that an implicit model of a complex trading scenario may be enough for effective RL, provided that efficient selection of the state representation and of the actions has been made.
Further work explores Deep Reinforcement Learning approaches to trading dialogue [2].

Acknowledgements
This work is funded by the ERC, grant no. 269427 (STAC project).

REFERENCES
[1] Stergos Afantenos, Nicholas Asher, Farah Benamara, Anais Cadilhac, Cedric Degremont, Pascal Denis, Markus Guhe, Simon Keizer, Alex Lascarides, Oliver Lemon, Philippe Muller, Soumya Paul, Verena Rieser, and Laure Vieu, 'Developing a corpus of strategic conversation in The Settlers of Catan', in Proceedings of SemDial 2012, (2012).
[2] Heriberto Cuayahuitl, Simon Keizer, and Oliver Lemon, 'Strategic Dialogue Management via Deep Reinforcement Learning', in NIPS Workshop on Deep Reinforcement Learning, (2015).
[3] Ioannis Efstathiou and Oliver Lemon, 'Learning non-cooperative dialogue behaviours', in Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pp. 60–68, Philadelphia, PA, U.S.A., (2014).
[4] Ioannis Efstathiou and Oliver Lemon, 'Learning to manage risks in non-cooperative dialogues', in Proceedings of the 18th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2014 - DialWatt), pp. 173–175, Edinburgh, Scotland, U.K., (2014).
[5] Ioannis Efstathiou and Oliver Lemon, 'Learning non-cooperative dialogue policies to beat opponent models: "the good, the bad and the ugly"', in Proceedings of the 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2015 - goDIAL), pp. 33–41, Gothenburg, Sweden, (2015).
[6] Kallirroi Georgila and David Traum, 'Reinforcement learning of argumentation dialogue policies in negotiation', in Proc. INTERSPEECH, (2011).
[7] Markus Guhe and Alex Lascarides, 'Game strategies in The Settlers of Catan', in Proceedings of the IEEE Conference on Computational Intelligence in Games, Dortmund, (2014).
[8] Simon Keizer, Heriberto Cuayahuitl, and Oliver Lemon, 'Learning trade negotiation policies in strategic conversation', in The 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2015 - goDIAL), pp. 104–112, (2015).
[9] Verena Rieser and Oliver Lemon, Reinforcement Learning for Adaptive Dialogue Systems: A Data-driven Methodology for Dialogue Management and Natural Language Generation, Theory and Applications of Natural Language Processing, Springer, 2011.
[10] Verena Rieser, Oliver Lemon, and Xingkun Liu, 'Optimising information presentation for spoken dialogue systems', in Proc. ACL, (2010).
[11] R. Thomas and K. Hammond, 'Java Settlers: a research environment for studying multi-agent negotiation', in Proc. of IUI '02, pp. 240–240, (2002).
[12] David Traum, 'Extended abstract: Computational models of non-cooperative dialogue', in Proc. of SIGdial Workshop on Discourse and Dialogue, (2008).
[13] Steve Young, M. Gasic, S. Keizer, F. Mairesse, J. Schatzmann, B. Thomson, and K. Yu, 'The Hidden Information State Model: a practical framework for POMDP-based spoken dialogue management', Computer Speech and Language, 24(2), 150–174, (2010).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Pm7UUTaLN7",
"year": null,
"venue": "CoRR 2017",
"pdf_link": "http://arxiv.org/pdf/1709.05903v2",
"forum_link": "https://openreview.net/forum?id=Pm7UUTaLN7",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E$^2$BoWs: An End-to-End Bag-of-Words Model via Deep Convolutional Neural Network",
"authors": [
"Xiaobin Liu",
"Shiliang Zhang",
"Tiejun Huang",
"Qi Tian"
],
"abstract": "Traditional Bag-of-visual Words (BoWs) model is commonly generated with many steps including local feature extraction, codebook generation, and feature quantization, etc. Those steps are relatively independent with each other and are hard to be jointly optimized. Moreover, the dependency on hand-crafted local feature makes BoWs model not effective in conveying high-level semantics. These issues largely hinder the performance of BoWs model in large-scale image applications. To conquer these issues, we propose an End-to-End BoWs (E$^2$BoWs) model based on Deep Convolutional Neural Network (DCNN). Our model takes an image as input, then identifies and separates the semantic objects in it, and finally outputs the visual words with high semantic discriminative power. Specifically, our model firstly generates Semantic Feature Maps (SFMs) corresponding to different object categories through convolutional layers, then introduces Bag-of-Words Layers (BoWL) to generate visual words for each individual feature map. We also introduce a novel learning algorithm to reinforce the sparsity of the generated E$^2$BoWs model, which further ensures the time and memory efficiency. We evaluate the proposed E$^2$BoWs model on several image search datasets including CIFAR-10, CIFAR-100, MIRFLICKR-25K and NUS-WIDE. Experimental results show that our method achieves promising accuracy and efficiency compared with recent deep learning based retrieval works.",
"keywords": [],
"raw_extracted_content": "E2BoWs: An End-to-End Bag-of-Words Model via Deep Convolutional Neural\nNetwork\nXiaobin Liu\u0003, Shiliang Zhang\u0003, Tiejun Huang\u0003, Qi Tiany\n\u0003School of Electronics Engineering and Computer Science, Peking University, Beijing, 100871, China\nEmail:fxbliu.vmc, slzhang.jdl, tjhuang [email protected]\nyDepartment of Computer Science, University of Texas at San Antonio, San Antonio, TX 78249-1604, USA\nEmail: [email protected]\nAbstract —Traditional Bag-of-visual Words (BoWs) model is\ncommonly generated with many steps including local feature\nextraction, codebook generation, and feature quantization, etc.\nThose steps are relatively independent with each other and\nare hard to be jointly optimized. Moreover, the dependency on\nhand-crafted local feature makes BoWs model not effective in\nconveying high-level semantics. These issues largely hinder the\nperformance of BoWs model in large-scale image applications.\nTo conquer these issues, we propose an End-to-End BoWs\n(E2BoWs) model based on Deep Convolutional Neural Network\n(DCNN). Our model takes an image as input, then identifies\nand separates the semantic objects in it, and finally outputs\nthe visual words with high semantic discriminative power.\nSpecifically, our model firstly generates Semantic Feature Maps\n(SFMs) corresponding to different object categories through\nconvolutional layers, then introduces Bag-of-Words Layers\n(BoWL) to generate visual words for each individual feature\nmap. We also introduce a novel learning algorithm to reinforce\nthe sparsity of the generated E2BoWs model, which further\nensures the time and memory efficiency. We evaluate the\nproposed E2BoWs model on several image search datasets\nincluding CIFAR-10 ,CIFAR-100 ,MIRFLICKR-25K andNUS-\nWIDE . Experimental results show that our method achieves\npromising accuracy and efficiency compared with recent deep\nlearning based retrieval works.\n1. Introduction\nA huge number of images are being uploaded to the\nInternet every moment, and each image commonly conveys\nrich information. This makes Content-Based Image Retrieval\n(CBIR) a challenging and promising task. Bag-of-visual\nWords (BoWs) model, which considers an image as a collec-\ntion of visual words, has been widely applied for large-scale\nimage retrieval. Conventional BoWs model is computed\nwith many stages, e.g., feature extraction, codebook genera-\ntion, and feature quantization [3–6]. Then inverted file index\nand Term Frequency-Inverse Document Frequency (TF-IDF)\nstrategy can be used for indexing and retrieval. Since the\nnumber of visual vocabulary is commonly very large, e.g.,\n1 million in [3], and a certain image only contains a smallnumber of visual words, indexes generated by BoWs model\nare sparse and thus ensure the high retrieval efficiency.\nMost of existing BoWs models are based on hand-\ncrafted local features, e.g., SIFT [7]. These models\nhave shown promising performance in large-scale partial-\nduplicate image retrieval [3–5]. However, as the local de-\nscriptor cannot effectively describe high-level semantics,\ni.e., commonly known as the “semantic gap” issue, BoWs\nmodels build on local descriptors always fail to address the\nsemantic similar image retrieval task [8]. Although some\nworks have been proposed to conquer this issue [9–11], most\nof these works introduce extra computations and memory\noverheads.\nRecent years have witnessed a lot of breakthroughs\nin end-to-end deep learning model for vision tasks. 
Recent years have witnessed many breakthroughs in end-to-end deep learning models for vision tasks. After AlexNet [12] achieved the best performance in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC), Deep Convolutional Neural Networks (DCNNs) have been applied to various vision tasks, including image classification [1, 13], object detection [14, 15], semantic segmentation [16] and many other tasks [17–20]. Most DCNNs consist of a set of convolutional layers and Fully Connected (FC) layers. It has been found that convolutional layers can extract high-level semantic cues from pixel-level input and hence provide a possible solution to the "semantic gap" issue. Therefore, it is straightforward to leverage DCNNs in image retrieval [8]. Some works use DCNNs to generate hash codes and yield promising performance [21–24]. However, there is still a lack of research efforts on DCNN-based BoWs models, which could be integrated with inverted file indexing and TF-IDF weighting for large-scale image retrieval.
Targeting to leverage the efficiency of the BoWs model and the semantic learning ability of DCNN models in large-scale image retrieval, we propose to generate a DCNN-based End-to-End BoWs (E2BoWs) model as shown in Fig. 1. The structure of our E2BoWs model coincides with GoogLeNet [1] with Batch Normalization (BN) [2] up to Inception5. We discard the Pool5 layer and transform the last FC layer into a convolutional layer to generate Semantic Feature Maps (SFMs) specifically corresponding to different object categories. A Bag-of-Words Layer (BoWL) is then introduced to generate sparse visual words from each semantic feature map. This ensures that the resulting visual words preserve clear semantic cues. Finally, a three-component loss function is designed to ensure: 1) fast convergence of the training procedure, 2) similar images sharing more visual words, and 3) high sparsity of the generated E2BoWs model, respectively.
[Figure 1: Framework of the proposed E2BoWs model. The structure of our deep model is identical to the one of GoogLeNet [1] with BN [2] till the Inception5 layer. The output size of Inception5 is 7×7×1024. Pool5 in GoogLeNet [1] is discarded. The n-way output layer is transformed into a convolutional layer to generate n semantic feature maps. m sparse visual words are then generated by the bag-of-words layer from each individual semantic feature map, resulting in m×n visual words. Finally, a three-component loss function is applied for training the model.]
The proposed method has several advantages compared with traditional BoWs models: 1) Instead of using hand-crafted features and being generated in several independent steps, our E2BoWs model is generated in an end-to-end manner, and thus is more efficient and easier to jointly optimize and tune. 2) Incorporating a DCNN into the BoWs model can bring higher discriminative power to semantics and provide a better solution for the semantically-similar image search task. Our E2BoWs model also shows advantages over traditional hashing methods in that it conveys clear semantic cues. We evaluate the proposed E2BoWs model on several image search datasets including CIFAR-10, CIFAR-100, MIRFLICKR-25K, and NUS-WIDE. Comparisons with recent deep learning based image retrieval works show that our method achieves promising accuracy and efficiency.
The rest of this paper is organized as follows: Section 2 discusses works related to our model. Section 3 presents our model in detail. Section 4 evaluates the proposed model on different datasets and Section 5 gives our conclusions.
2. Related Work
As a fundamental task in multimedia content analysis and computer vision [8, 25, 26], CBIR aims to search for images similar to a query in an image gallery. Since directly computing the similarity between two images from raw pixels is infeasible, the BoWs model is widely used as an image representation for large-scale image retrieval. Over the past decade, various BoWs models [3–6] have been proposed based on local descriptors, such as SIFT [7] and SURF [27]. Those BoWs models have shown promising performance in large-scale image retrieval. Conventional BoWs models consider an image as a collection of visual words and are generated in many stages, e.g., feature extraction, codebook generation and feature quantization [3–6]. For instance, Nister et al. [3] extract SIFT [7] descriptors from MSER regions [28] and then hierarchically quantize the SIFT descriptors with a vocabulary tree. As individual visual words cannot depict the spatial cues in images, some works combine visual words with spatial information [29, 30] to make the resulting BoWs model more discriminative with respect to spatial cues. Some other works aim to generate more effective and discriminative vocabularies [31, 32].
However, the dependency on hand-crafted local features hinders the ability of conventional visual words to convey semantic cues, due to the "semantic gap" between low-level local features and high-level semantics. For instance, two objects from different categories might share similar local features, which can be quantized to the same visual words in the vocabulary tree.
Some works have been proposed to enhance the discriminative power of the BoWs model with respect to semantic cues [9–11]. Wu et al. [9] propose an off-line distance metric learning scheme that maps related features to the same visual words to generate an optimized codebook. Wu et al. [10] present an on-line metric learning algorithm that improves the BoWs model by optimizing a proposed semantic loss. Zhang et al. [11] propose a method to co-index semantic attributes into the inverted index generated by local features, making it convey more semantic cues. However, most of these works need extra computation either in the off-line indexing or the on-line retrieval stage. Moreover, since these models are generated in many independent steps, they are hard to jointly optimize for better efficiency and accuracy.
Recently, many works leverage DCNNs in CBIR [8, 21–24, 33, 34]. Wan et al. [8] propose three schemes to apply DCNNs in CBIR, i.e., 1) directly use the features from a model pre-trained on ImageNet [35], 2) refine the features by metric learning, and 3) retrain the model on the domain dataset. They prove that DCNN-based features can significantly outperform hand-crafted features after being fine-tuned. However, they do not consider the retrieval efficiency when applying the features to large-scale datasets. Xia et al. [22] introduce a DCNN-based hashing method. The method consists of two steps: first generate hash codes on the training set by an iterative algorithm, and then learn a hash function based on a DCNN to fit the hash codes generated in step 1. The independence of the two steps hinders the joint learning of the whole model. Lin et al. [23] propose a framework to generate hash codes directly with a classification objective function. They show that a deep model trained on a classification task can be adopted for the CBIR task. Zhao et al. [24] and Lai et al. [33] use a triplet loss to train the network to preserve the semantic relations of images.
In these aforementioned methods, real-valued hash codes are learned during training and are then quantized to binary codes for testing. Different distance metrics used in training and testing, e.g., Euclidean distance and Hamming distance, may introduce approximation error and hinder the training efficiency. Quantization error could also be produced by the quantization stage. Different from those works, Liu et al. [21] and Zhu et al. [34] reinforce the networks to output binary-like hash codes to reduce the quantization and approximation errors. So far, most deep learning based retrieval works focus on generating hash codes. There is still a lack of research efforts on DCNN-based BoWs models. It is promising to generate a discriminative BoWs model directly from an end-to-end DCNN and leverage the scalability of the BoWs model for large-scale image retrieval.

3. Proposed Method
The E2BoWs model is generated by modifying GoogLeNet [1] with BN [2]. As shown in Fig. 1, before the Inception5 layer, the structure of our deep model is identical to the one of GoogLeNet [1] with BN [2]. Most previous works extract features for retrieval from FC layers. Differently, we propose to learn features from feature maps, which preserve more visual cues than FC layers. We thus transform the last n-way FC layer into a convolutional layer to generate n SFMs corresponding to the n training categories. Then, m sparse visual words are generated from each individual SFM by the Bag-of-Words Layer, resulting in m×n visual words. Finally, a three-component loss function is applied to train the model. In the following parts, we present the details of the network structure, model training, and generalization ability improvement.
3.1. Semantic Feature Maps Generation
In GoogLeNet [1], the output layer conveys semantic cues because the label supervision is directly applied to it. However, the output layer loses certain visual details of the images, such as the location and size of objects, which could be beneficial in image retrieval. Meanwhile, Inception5 contains more visual cues than semantics. Learning visual words from the output layer or from Inception5 may lose discriminative power with respect to either visual details or semantic cues. To preserve both semantics and visual details, we propose to generate Semantic Feature Maps (SFMs) from Inception5 and generate visual words from the SFMs.
[Figure 2: Illustration of transforming the parameters of the FC layer into a convolutional layer to generate SFMs. Lines in the same color indicate the same parameters.]
SFMs are generated by transforming the parameters of the FC layer into a convolutional layer. This transformation is illustrated in Fig. 2. The size of the parameters in the FC layer is 1024×n, where 1024 is the feature dimensionality after pooling and n is the number of training categories. Those parameters can be reshaped into n convolutional kernels of size 1024×1×1. In other words, we transform the parameters corresponding to each output of the FC layer, of size 1024×1, into a convolutional kernel of size 1×1×1024. Therefore, n channels of convolutional kernels can be generated. Accordingly, n SFMs can be generated after Inception5.
In FC layers, each output is a classification score for an object category. Compared with the output of the FC layer, SFMs also contain such classification cues. For example, average pooling the activations on each SFM gives the classification score for the corresponding category. Moreover, SFMs preserve certain visual cues because they are produced from Inception5 without pooling.
We illustrate examples of SFMs in Fig. 3. Three images with the same label "elkhound" in ImageNet [35] and their SFMs with the top-4 largest response values are shown. It can be observed that the illustrated SFMs show 75% overlap among the three images. SFM #175 consistently shows the strongest activation. This means the activation values of the SFMs represent the semantic and category cues. Moreover, the location and size of the object are reflected by the SFMs.
[Figure 3: Visualization of some SFMs. Input images are in the first column. The rest are the SFMs with the top-4 largest response values. The number under each SFM denotes its unique ID among all SFMs. The same IDs are highlighted with the same color.]
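The FC-to-convolution transformation of Fig. 2 is mechanical; a minimal PyTorch sketch of one way to implement it follows. The function name fc_to_conv and the toy tensor shapes are our own illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

def fc_to_conv(fc: nn.Linear) -> nn.Conv2d:
    """Reshape an n-way FC layer (1024 -> n) into n 1x1 convolution kernels, so
    it can be slid over the 7x7x1024 Inception5 output to produce n SFMs."""
    n, c = fc.weight.shape                      # (n, 1024)
    conv = nn.Conv2d(c, n, kernel_size=1, bias=fc.bias is not None)
    with torch.no_grad():
        conv.weight.copy_(fc.weight.view(n, c, 1, 1))
        if fc.bias is not None:
            conv.bias.copy_(fc.bias)
    return conv

fc = nn.Linear(1024, 1000)                          # e.g. n = 1000 ImageNet categories
sfms = fc_to_conv(fc)(torch.randn(1, 1024, 7, 7))   # (1, 1000, 7, 7): one map per category
scores = sfms.mean(dim=(2, 3))                      # average pooling recovers the FC scores
```

The last line mirrors the observation above: average pooling each SFM reproduces the classification score of the original FC layer.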
3.2. Bag-of-Words Layer
Because different SFMs correspond to different object categories, they can potentially identify and separate the objects in images. Those characteristics make SFMs well suited for generating visual words that convey both semantic and visual cues. To preserve the spatial and semantic cues in the SFMs, we introduce the Bag-of-Words Layer (BoWL) to generate sparse visual words directly from each individual SFM.
Specifically, a local FC layer with ReLU is used to generate m visual words from each individual SFM. This strategy finally generates m×n visual words. Each local FC layer is trained independently. Compared with a traditional FC layer, the local FC layer better preserves the semantic and visual cues in each SFM, and introduces fewer parameters to learn. For example, the BoWL needs 49×m×n parameters, while an FC layer following a pooling layer would need (49×n)×(m×n) parameters. It should be noted that we discard SFMs with negative average activation values during visual word generation. This reduces the number of nonzero visual words and improves the efficiency of indexing and retrieval.
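One compact way to realize n independent local FC layers is a grouped convolution: with n groups and a 7×7 kernel, each group maps its own 7×7 SFM to m visual words. The sketch below is our interpretation rather than the released implementation (the discarding of SFMs with negative average activation is omitted for brevity), but it reproduces the 49·m·n weight count stated above (plus biases).

```python
import torch
import torch.nn as nn

class BoWLayer(nn.Module):
    """Per-SFM local FC layers realized as one grouped 7x7 convolution: each of
    the n groups independently maps one 7x7 SFM to m visual words."""
    def __init__(self, n_categories, m_words):
        super().__init__()
        self.local_fc = nn.Conv2d(n_categories, n_categories * m_words,
                                  kernel_size=7, groups=n_categories)

    def forward(self, sfms):                       # sfms: (B, n, 7, 7)
        words = torch.relu(self.local_fc(sfms))    # (B, n*m, 1, 1)
        return words.flatten(1)                    # (B, m*n) dense visual words

bowl = BoWLayer(n_categories=100, m_words=10)      # CIFAR-100 setting: 1,000 words
words = bowl(torch.randn(2, 100, 7, 7))            # -> shape (2, 1000)
```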
The generated visual words are L2-normalized for inverted file indexing and retrieval. Our experiments show that there commonly exist many visual words with small response values, e.g., 1e-3. During online retrieval, those visual words do not contribute much to the similarity computation. Moreover, they are harmful to the sparsity of the BoWs model and would cause more images to be embedded in the inverted lists, resulting in more memory overhead. We find that discarding visual words whose response values are smaller than a threshold dramatically improves the retrieval efficiency without degrading the accuracy. This procedure is formulated as follows:

$$\mathrm{thr}(x,\beta)=\begin{cases} x, & x>\beta \\ 0, & \text{otherwise} \end{cases} \quad (1)$$

where $\beta$ denotes the threshold.
We evaluate this procedure on the testing set of CIFAR-100 [36] with different thresholds. We measure the retrieval performance by mean Average Precision (mAP). The efficiency is measured by the Average Number of Operations (ANO) per query image. With an inverted file index, ANO can be approximately computed as the product of the Average Number of nonzero Visual words per image (ANV) and the Average Number of Images in each inverted list (ANI), i.e., ANO = ANV×ANI. Therefore, a large mAP implies high discriminative power, and a small ANO implies high efficiency for indexing and retrieval. The results are shown in Tab. 1. It is clear that retrieval efficiency is significantly improved by filtering out visual words with small response values. Meanwhile, retrieval accuracy is improved by removing noisy visual words.

TABLE 1: Retrieval efficiency and accuracy on the CIFAR-100 [36] testing set with different thresholds.
Threshold   mAP     ANV     ANI     ANO
0.00        0.697   409.0   4090    1,672,810
0.05        0.686   50.4    500     25,200
0.06        0.689   36.7    370     13,579
0.07        0.693   28.4    280     7,952
0.08        0.697   23.0    230     5,290
0.09        0.700   19.0    190     3,610
0.10        0.703   16.8    170     2,856
0.11        0.704   15.0    150     2,250
0.12        0.703   13.5    140     1,890
0.13        0.700   12.3    120     1,476
0.14        0.693   11.4    110     1,254
0.15        0.684   10.6    110     1,166
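To make Eq. (1) and the ANO accounting concrete, here is a small NumPy sketch; the function names and the toy data are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def threshold_words(v, beta):
    # Eq. (1): zero out visual words whose response is not above the threshold
    return np.where(v > beta, v, 0.0)

def retrieval_cost(word_matrix):
    """ANO = ANV x ANI, the approximation used in the text for the
    per-query cost of retrieval with an inverted file index."""
    nonzero = word_matrix > 0                  # (num_images, num_words)
    anv = nonzero.sum(axis=1).mean()           # avg nonzero words per image
    list_sizes = nonzero.sum(axis=0)           # images per inverted list
    ani = list_sizes[list_sizes > 0].mean()    # avg over nonempty lists
    return anv, ani, anv * ani

# Toy example: 1,000 images over a 380-word vocabulary, ~10% words active
rng = np.random.default_rng(0)
W = rng.random((1000, 380)) * (rng.random((1000, 380)) < 0.1)
print(retrieval_cost(threshold_words(W, beta=0.05)))
```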
In the aforementioned procedure, the threshold is hard to decide for different testing sets. To determine the threshold automatically, we design a sparsity loss function based on the KL divergence as follows:

$$\ell_{spa}(\beta)=\hat{\rho}\log\frac{\hat{\rho}}{\rho}+(1-\hat{\rho})\log\frac{1-\hat{\rho}}{1-\rho}, \quad (2)$$

where $\hat{\rho}$ denotes the desired ratio between the number of nonzero visual words and the total number of visual words, and $\rho$ is the ratio computed on a training set of $N$ images, i.e.,

$$\rho=\frac{1}{N\times m\times n}\sum_{i=1}^{N}\sum_{j=1}^{m\times n}\mathrm{sign}\big(v_i(j)-\beta\big). \quad (3)$$

$\mathrm{sign}(\cdot)$ is the sign function defined as follows:

$$\mathrm{sign}(x)=\begin{cases} 1, & x>0 \\ 0, & \text{otherwise} \end{cases} \quad (4)$$

With this objective function, the model is trained to learn the threshold $\beta$ that makes a ratio of $\hat{\rho}$ of the visual words nonzero. We thus use this sparsity loss to control the sparsity of the generated visual words.

3.3. Model Training
The overall network is trained by SGD with the objective function

$$L(\theta,\beta)=\ell_{cls}+\lambda_1\ell_{tri}+\lambda_2\ell_{spa}, \quad (5)$$

where $\theta$ denotes the parameters of the convolutional layers, $\beta$ denotes the threshold in the BoWL, and $\ell_{cls}$, $\ell_{tri}$ and $\ell_{spa}$ denote the classification, triplet similarity, and sparsity losses, respectively. Since using only the triplet loss takes a long time to converge, we further introduce the classification loss to ensure fast convergence. The triplet similarity loss ensures the discriminative ability of the learned features in similarity computation. The sparsity loss ensures retrieval efficiency.
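A minimal NumPy sketch of the sparsity term follows, assuming sign() acts as an indicator as in Eq. (4); the small eps guard and the variable names are our additions, and the gradient line anticipates Eq. (11) derived below.

```python
import numpy as np

def sparsity_loss(V, beta, rho_hat, eps=1e-8):
    """Eq. (2): KL divergence between the desired nonzero ratio rho_hat and the
    actual ratio rho of word responses above the threshold (Eq. (3))."""
    rho = float((V > beta).mean())
    loss = (rho_hat * np.log(rho_hat / (rho + eps))
            + (1.0 - rho_hat) * np.log((1.0 - rho_hat) / (1.0 - rho + eps)))
    grad_beta = (rho_hat - rho) / (1.0 - rho + eps)   # Eq. (11), derived below
    return loss, grad_beta

V = np.abs(np.random.randn(64, 1000))    # a batch of dense visual-word responses
beta = 1.5
loss, g = sparsity_loss(V, beta, rho_hat=0.08)
beta -= 0.01 * g                         # one gradient-descent step on beta
```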
We design the triplet similarity loss as

$$\ell_{tri}(v_a,v_p,v_n)=\max\{0,\ \mathrm{sim}(v_a,v_n)-\mathrm{sim}(v_a,v_p)+\alpha\}, \quad (6)$$

where $\alpha$ is the margin parameter, and $v_a$, $v_p$ and $v_n$ are the vectors of L2-normalized visual words of the anchor image, a similar image, and a dissimilar image, respectively. $\mathrm{sim}(v_1,v_2)$ is the cosine similarity between the two vectors, i.e., $\mathrm{sim}(v_1,v_2)=v_1^{T}v_2$. When $\ell_{tri}(v_a,v_p,v_n)\neq 0$, the gradient with respect to each vector can be computed as:

$$\frac{\partial\ell_{tri}(v_a,v_p,v_n)}{\partial v_a}=v_n-v_p \quad (7)$$
$$\frac{\partial\ell_{tri}(v_a,v_p,v_n)}{\partial v_p}=-v_a \quad (8)$$
$$\frac{\partial\ell_{tri}(v_a,v_p,v_n)}{\partial v_n}=v_a \quad (9)$$

Different from other works that use the Euclidean distance to compute the triplet similarity, we choose the cosine distance so that similar images share more visual words and vice versa. This is mainly because we also use the cosine distance to compute image similarity during retrieval based on the inverted indexes.
The sparsity loss $\ell_{spa}$ is formulated in Eq. (2). Since the $\mathrm{sign}(\cdot)$ function is non-differentiable, we define its gradient as

$$\frac{\partial\,\mathrm{sign}(v_i(j)-\beta)}{\partial\beta}=-\mathrm{sign}(v_i(j)-\beta)=\begin{cases} -1, & v_i(j)-\beta>0 \\ 0, & \text{otherwise} \end{cases} \quad (10)$$

The gradient of $\ell_{spa}(\beta)$ can then be computed as

$$\frac{\partial\ell_{spa}(\beta)}{\partial\beta}=\frac{\partial\ell_{spa}(\beta)}{\partial\rho}\cdot\frac{\partial\rho}{\partial\beta}=\frac{\hat{\rho}-\rho}{1-\rho} \quad (11)$$

Therefore, $\beta$ can be learned by gradient descent.

3.4. Generalization Ability Improvement
Most conventional DCNN-based retrieval models need to be fine-tuned on the domain dataset [8]. However, fine-tuning is commonly unavailable in real image retrieval applications. ImageNet [35] could then be a reasonable option for training, as it contains large-scale labeled images. However, ImageNet contains some fine-grained categories, and some categories are both visually and semantically similar, as shown in Fig. 4.
[Figure 4: Illustration of two categories in ImageNet [35] ("kit fox" and "red fox") that are visually and semantically similar.]
In our method, different categories correspond to different SFMs, which hence generate different visual words. It is not reasonable to force similar categories to generate unrelated visual words when using ImageNet as the training set. For example, images of "red fox" should be allowed to share more visual words with images of "kit fox" than with images of "jeep". Therefore, the original labels in ImageNet [35] are not optimal for training E2BoWs and may mislead the model for retrieval tasks.
To tackle the above issue, we change the parameter $\alpha$ in the triplet loss function according to the similarity of two categories, i.e., we set a small value of $\alpha$ for images of similar categories and use a large value for images of dissimilar categories. Specifically, we first compute the similarity between two categories based on the tree structure of ImageNet [35] (ImageNet Tree View: http://image-net.org/explore). Given that $H$ denotes the height of the tree and $h$ denotes the height of the common parent node of two different categories $c_1$ and $c_2$, the similarity $S(c_1,c_2)$ between $c_1$ and $c_2$ is defined as $S(c_1,c_2)=h/H$. Then we modify the parameter $\alpha$ as:

$$\alpha'=\frac{\alpha}{\big(1+S(c_1,c_2)\big)^{2}} \quad (12)$$

The above strategy allows images from similar categories to share more common visual words, thus making ImageNet a more reasonable training set. It can therefore improve the generalization ability of the learned E2BoWs on other unseen datasets.
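Eqs. (6) and (12) combine naturally in training code. A minimal PyTorch sketch is given below; the toy batch and the particular value of S are illustrative assumptions, and the similarity S would in practice come from the ImageNet tree as described above.

```python
import torch
import torch.nn.functional as F

def adaptive_margin(alpha, s):
    """Eq. (12): alpha' = alpha / (1 + S(c1, c2))^2, so triplets whose negative
    comes from a semantically similar category get a smaller margin."""
    return alpha / (1.0 + s) ** 2

def cosine_triplet_loss(va, vp, vn, margin):
    """Eq. (6): hinge loss on cosine similarities of L2-normalized word vectors."""
    va, vp, vn = (F.normalize(v, dim=-1) for v in (va, vp, vn))
    sim_ap = (va * vp).sum(-1)   # sim(v_a, v_p) = v_a^T v_p
    sim_an = (va * vn).sum(-1)
    return torch.clamp(sim_an - sim_ap + margin, min=0).mean()

# Toy batch of 8 triplets over 1,000 visual words
va, vp, vn = (torch.rand(8, 1000, requires_grad=True) for _ in range(3))
loss = cosine_triplet_loss(va, vp, vn, adaptive_margin(alpha=0.2, s=2.0 / 3.0))
loss.backward()   # autograd recovers Eqs. (7)-(9), plus the normalization terms
```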
4. Experiments
4.1. Datasets and Implementation Details
We first evaluate our model on the tiny-image retrieval task on CIFAR-10 [36] and CIFAR-100 [36]. Then, our model is evaluated on the image retrieval task on MIRFLICKR-25K [37]. Finally, we compare the generalization ability of the proposed E2BoWs against deep features extracted from GoogLeNet [1] without/with BN [2] by first training the model on ImageNet [35] and then testing it on NUS-WIDE [38]. Details of those test sets are given as follows:
- CIFAR-10 [36] contains tiny images belonging to 10 classes. Each class contains 5,000 training images and 1,000 testing images.
- CIFAR-100 [36] contains 100 classes of tiny images. Each class contains 500 training images and 100 testing images. The retrieval task on it is more challenging than the one on CIFAR-10 [36].
- MIRFLICKR-25K [37] consists of 25,000 images with 38 concepts.
- ImageNet [35] contains 1,000 categories and around 1,200 images per category.
- NUS-WIDE [38] consists of around 270K images and 81 concepts.
Each SFM corresponds to a category of the training set. Therefore, the number of SFMs equals the number of training categories. On CIFAR-10, CIFAR-100, and MIRFLICKR-25K, 10 visual words are generated from each SFM. This results in 100, 1,000 and 380 visual words, respectively. For ImageNet [35], we generate 25 visual words from each SFM and obtain 25,000 visual words in total. The margin parameter in the similarity loss is set to 0.2 on all datasets.
mAP (mean Average Precision) is used to evaluate the retrieval performance on CIFAR-10, CIFAR-100, and NUS-WIDE. For MIRFLICKR-25K [37], we use NDCG@100 as the evaluation metric to account for different levels of relevance. In Tabs. 2, 3, and 4, the tag "-B" denotes that the feature is binarized using the sign(·) function to accelerate retrieval.

4.2. Performance on CIFAR
On CIFAR-10 and CIFAR-100, we use the training sets for model fine-tuning and the test sets for retrieval, respectively. The sparsity loss parameter ρ̂ is set to 0.08 and 0.01 on CIFAR-10 and CIFAR-100, respectively, depending on the number of categories. We compare the retrieval performance of E2BoWs with existing methods including ITQ [39], ITQ-CCA [39], KSH [40], SH [41], MLH [42], BRE [43], CNNH [22], CNNH+ [22], DNNH [44], DSH [21], and BHC [23].
The performance comparison is summarized in Tab. 2, which shows the best performance of each method with 48-bit codes. The compared methods do not report their performance on CIFAR-100. Among those methods, BHC [23] shows the best performance on CIFAR-10. Therefore, we implement BHC [23] and report its performance on CIFAR-100 for comparison. In Tab. 2, "*" denotes our implementation.

TABLE 2: Comparison of mAP among different methods on CIFAR-10 [36] and CIFAR-100 [36].
Method        CIFAR-10  CIFAR-100
ITQ [39]      0.175     —
ITQ-CCA [39]  0.295     —
KSH [40]      0.315     —
SH [41]       0.132     —
MLH [42]      0.211     —
BRE [43]      0.196     —
CNNH [22]     0.522     —
CNNH+ [22]    0.532     —
DNNH [44]     0.581     —
DSH [21]      0.676     —
BHC [23]      0.897     0.650*
E2BoWs        0.909     0.689
E2BoWs-B      0.908     0.624

It can be observed from Tab. 2 that methods based on DCNNs perform better than conventional retrieval methods using hand-crafted features. Among the DCNN-based methods, our model yields the highest mAP on both datasets. It is also clear that our work shows a substantial advantage on the more challenging CIFAR-100 [36] dataset.

4.3. Performance on MIRFLICKR-25K
On MIRFLICKR-25K [37], we follow the experimental setting of [24], where 2,000 images are randomly selected as query images and the rest are used for training. We set the sparsity loss parameter ρ̂ to 0.11. We also implement BHC [23] for comparison because it shows the best performance among the compared works on CIFAR-10 [36].
The performance comparison is shown in Tab. 3. It can be observed that the DCNN-based methods again perform better than the conventional methods. This implies the powerful feature learning ability of deep models. It is also clear that the binarized E2BoWs achieves the best performance. Examples of retrieval results of BHC [23] and E2BoWs-B are shown in Fig. 5. As shown there, E2BoWs-B is more discriminative with respect to semantic cues. For example, E2BoWs effectively identifies the semantic concept "people" from a human eye image, and obtains better retrieval results than BHC [23].

TABLE 3: Comparison of NDCG@100 among different methods on MIRFLICKR-25K [37].
ITQ-CCA [39]  KSH [40]  BHC [23]  E2BoWs  E2BoWs-B
0.402         0.350     0.510*    0.492   0.526

[Figure 5: Examples of retrieval results of BHC [23] and the proposed E2BoWs-B on MIRFLICKR-25K [37] (example queries labeled "people", "food", "plant life", "water", "transport"). In each example, the query image is placed on the left with the ground-truth label under it. The first row shows the top 5 images returned by BHC [23]; the second row shows the result of E2BoWs-B. Relevant/irrelevant images are annotated by green/red boxes, respectively.]

4.4. Evaluation of Generalization Ability
To validate the generalization ability of the proposed E2BoWs feature, we first train E2BoWs on ImageNet [35], then test it on NUS-WIDE [38]. When training on ImageNet [35], the sparsity loss parameter is relaxed to 0.14 and 25 visual words are generated from each SFM. The retrieval on NUS-WIDE [38] uses the same experimental setting as in [21, 22], i.e., we use the images associated with the 21 most frequent concepts and the testing set from [21], which consists of 10,000 images. As one image may be associated with many concepts, we follow [21] and consider two images similar if they share at least one concept. We compare our model with features generated directly from GoogLeNet [1] with and without BN [2], i.e.,
- GN1024/GN^BN_1024: 1024-d feature extracted from the pool5 layer of GoogLeNet [1] without/with BN [2].
- GN1000/GN^BN_1000: 1000-d feature extracted from the output layer of GoogLeNet [1] without/with BN [2].
The comparison between E2BoWs and the GoogLeNet features is summarized in Tab. 4. It can be observed that our model consistently shows better retrieval accuracy. Note that the above experiments use independent training and testing sets. Therefore, we can conclude that E2BoWs shows better generalization ability than the GoogLeNet features.

TABLE 4: Comparison of mAP between GoogLeNet features and E2BoWs on NUS-WIDE [38]. The compared features are trained on an independent training set.
Feature:  GN1024    GN1000    GN^BN_1024    GN^BN_1000    E2BoWs
mAP:      0.552     0.594     0.551         0.591         0.599
Feature:  GN1024-B  GN1000-B  GN^BN_1024-B  GN^BN_1000-B  E2BoWs-B
mAP:      0.388     0.549     0.326         0.543         0.563

4.5. Discussions
During training, we encourage E2BoWs to be sparse to ensure its high efficiency in inverted file indexing and retrieval. On CIFAR-10, CIFAR-100, and MIRFLICKR-25K, we analyze the retrieval complexity of our E2BoWs model and compare it with that of the 48-bit binary codes generated by BHC [23].

TABLE 5: Retrieval efficiency of different methods on CIFAR-10 [36], CIFAR-100 [36] and MIRFLICKR-25K [37].
Method    Metric  CIFAR-10  CIFAR-100  MIRFLICKR-25K
BHC [23]  ANO     480,000   480,000    406,944
E2BoWs    ANV     8.64      10.6       43.6
          ANI     960       110        975
          ANO     8,294     1,166      42,510

As shown in Tab. 5, E2BoWs is sparse. For instance, the average number of visual words per image on MIRFLICKR-25K is about 44, which is significantly smaller than the total vocabulary size of 380. It is also clear that, with an inverted file index, retrieval based on E2BoWs can be finished efficiently with fewer operations than a linear search with binary codes. From the above experiments, we can conclude that E2BoWs shows advantages in both accuracy and efficiency compared with BHC [23].

5. Conclusions
This paper presents E2BoWs for large-scale CBIR based on a DCNN. E2BoWs first transforms the FC layer of GoogLeNet [1] into a convolutional layer to generate semantic feature maps. Visual words are then generated from these feature maps by the proposed Bag-of-Words layer to preserve both the semantic and visual cues. A threshold layer is further introduced to ensure the sparsity of the generated visual words, and a novel learning algorithm reinforces the sparsity of the generated E2BoWs model, which further ensures time and memory efficiency. Experiments on four benchmark datasets demonstrate that our model shows substantial advantages in discriminative power, efficiency, and generalization ability.

Acknowledgements
This work is supported by the National Science Foundation of China under Grants No. 61572050, 91538111, 61620106009, 61429201, and the National 1000 Youth Talents Plan, and in part to Dr. Qi Tian by ARO grant W911NF-15-1-0290 and Faculty Research Gift Awards by NEC Laboratories of America and Blippar.
References
[1] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in CVPR, 2015.
[2] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.
[3] D. Nister and H. Stewenius, "Scalable recognition with a vocabulary tree," in CVPR, 2006.
[4] Z. Wu, Q. Ke, M. Isard, and J. Sun, "Bundling features for large scale partial-duplicate web image search," in CVPR, 2009.
[5] J. Sivic, A. Zisserman et al., "Video Google: A text retrieval approach to object matching in videos," in ICCV, 2003.
[6] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman, "Discovering objects and their location in images," in ICCV, 2005.
[7] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[8] J. Wan, D. Wang, S. C. H. Hoi, P. Wu, J. Zhu, Y. Zhang, and J. Li, "Deep learning for content-based image retrieval: A comprehensive study," in ACM MM, 2014.
[9] L. Wu, S. C. Hoi, and N. Yu, "Semantics-preserving bag-of-words models and applications," IEEE Transactions on Image Processing, vol. 19, no. 7, pp. 1908–1920, 2010.
[10] L. Wu and S. C. Hoi, "Enhancing bag-of-words models with semantics-preserving metric learning," IEEE MultiMedia, vol. 18, no. 1, pp. 24–37, 2011.
[11] S. Zhang, M. Yang, X. Wang, Y. Lin, and Q. Tian, "Semantic-aware co-indexing for image retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 12, pp. 2573–2587, 2015.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012.
[13] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in CVPR, 2016.
[14] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in CVPR, 2014.
[15] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in NIPS, 2015.
[16] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in CVPR, 2015.
[17] F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A unified embedding for face recognition and clustering," in CVPR, 2015.
[18] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu, "Learning fine-grained image similarity with deep ranking," in CVPR, 2014.
[19] L. Shen, Z. Lin, and Q. Huang, "Learning deep convolutional neural networks for Places2 scene recognition," CoRR, vol. abs/1512.05830, 2015.
[20] Y. Sun, Y. Chen, X. Wang, and X. Tang, "Deep learning face representation by joint identification-verification," in NIPS, 2014.
[21] H. Liu, R. Wang, S. Shan, and X. Chen, "Deep supervised hashing for fast image retrieval," in CVPR, 2016.
[22] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan, "Supervised hashing for image retrieval via image representation learning," in AAAI, 2014.
[23] K. Lin, H. Yang, J. Hsiao, and C. Chen, "Deep learning of binary hash codes for fast image retrieval," in CVPRW, 2015.
[24] F. Zhao, Y. Huang, L. Wang, and T. Tan, "Deep semantic ranking based hashing for multi-label image retrieval," in CVPR, 2015.
[25] A. W. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1349–1380, 2000.
[26] Y. Jing and S. Baluja, "VisualRank: Applying PageRank to large-scale image search," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 11, pp. 1877–1890, 2008.
[27] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in ECCV, 2006.
[28] J. Matas, O. Chum, M. Urban, and T. Pajdla, "Robust wide-baseline stereo from maximally stable extremal regions," Image and Vision Computing, vol. 22, no. 10, pp. 761–767, 2004.
[29] S. Battiato, G. Farinella, G. Gallo, and D. Ravì, "Spatial hierarchy of textons distributions for scene classification," Advances in Multimedia Modeling, pp. 333–343, 2009.
[30] D. Liu, G. Hua, P. Viola, and T. Chen, "Integrated feature selection and higher-order spatial feature extraction for object categorization," in CVPR, 2008.
[31] S. Lazebnik and M. Raginsky, "Supervised learning of quantizer codebooks by information loss minimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 7, pp. 1294–1309, 2009.
[32] F. Perronnin, "Universal and adapted vocabularies for generic visual categorization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 7, pp. 1243–1256, 2008.
[33] H. Lai, Y. Pan, Y. Liu, and S. Yan, "Simultaneous feature learning and hash coding with deep neural networks," in CVPR, 2015.
[34] H. Zhu, M. Long, J. Wang, and Y. Cao, "Deep hashing network for efficient similarity retrieval," in AAAI, 2016.
[35] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in CVPR, 2009.
[36] A. Krizhevsky, "Learning multiple layers of features from tiny images," Department of Computer Science, University of Toronto, Tech. Rep., 2009.
[37] M. J. Huiskes and M. S. Lew, "The MIR Flickr retrieval evaluation," in MIR, 2008.
[38] T. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y. Zheng, "NUS-WIDE: A real-world web image database from National University of Singapore," in CIVR, 2009.
[39] Y. Gong and S. Lazebnik, "Iterative quantization: A procrustean approach to learning binary codes," in CVPR, 2011.
[40] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang, "Supervised hashing with kernels," in CVPR, 2012.
[41] Y. Weiss, A. Torralba, and R. Fergus, "Spectral hashing," in NIPS, 2009.
[42] M. Norouzi and D. M. Blei, "Minimal loss hashing for compact binary codes," in ICML, 2011.
[43] B. Kulis and T. Darrell, "Learning to hash with binary reconstructive embeddings," in NIPS, 2009.
[44] H. Lai, Y. Pan, Y. Liu, and S. Yan, "Simultaneous feature learning and hash coding with deep neural networks," in CVPR, 2015.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "amcmy_Ja57",
"year": null,
"venue": null,
"pdf_link": "https://www.spiedigitallibrary.org/conference-proceedings-of-spie/12471/124710X/A-novel-HE-color-augmentation-for-domain-invariance-classification-of/10.1117/12.2654040.full",
"forum_link": "https://openreview.net/forum?id=amcmy_Ja57",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A novel H&E color augmentation for domain invariance classification of unannotated histopathology prostate cancer images",
"authors": [
"Roozbeh Bazargani",
"Wanwen Chen",
"Sadaf Sadeghian",
"Maryam Asadi",
"Jeffrey Boschman",
"Amirali Darbandsari",
"Ali Bashashati",
"Septimiu E Salcudean"
],
"abstract": "Most current deep learning models for hematoxylin and eosin (H&E) histopathology image analysis lack the power of generalization to datasets collected from other institutes due to the domain shift in the data. In this research, we study the domain shift problem on two prostate cancer (PCa) datasets collected from the Vancouver Prostate Centre (source dataset) and the University of Colorado (target dataset) and develop a novel center-based H&E color augmentation for cross-center model generalization. While previous work used methods such as random augmentation, color normalization, or learning domain-independent features to improve the robustness of the model to changes in H&E stains, our method first augments the H&E color space of the source dataset to color space of both datasets and then adds random color augmentation. Our method covers the larger range of the color distribution of both institutions resulting in a better generalization. We compared our method with two different State-Of-The-Art (SOTA) un-annotated domain adaptation methods: color normalization and unsupervised domain adversarial neural network (DANN) training, with an ablation study. Our proposed method improves the model performance on both the source and target datasets, and has the best performance on the unlabeled target dataset, showing promise as an approach to learning more generalizable features for histopathology image analysis.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "nMMwZEbXR9",
"year": null,
"venue": "CICM 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=nMMwZEbXR9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "ENIGMA: Efficient Learning-Based Inference Guiding Machine",
"authors": [
"Jan Jakubuv",
"Josef Urban"
],
"abstract": "ENIGMA is a learning-based method for guiding given clause selection in saturation-based theorem provers. Clauses from many previous proof searches are classified as positive and negative based on their participation in the proofs. An efficient classification model is trained on this data, classifying a clause as useful or un-useful for the proof search. This learned classification is used to guide next proof searches prioritizing useful clauses among other generated clauses. The approach is evaluated on the E prover and the CASC 2016 AIM benchmark, showing a large increase of E’s performance.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "L5m4w2LwJU",
"year": null,
"venue": "EC-Web 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=L5m4w2LwJU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Semi-automated Structural Adaptation of Advanced E-Commerce Ontologies",
"authors": [
"Marek Dudás",
"Vojtech Svátek",
"László Török",
"Ondrej Sváb-Zamazal",
"Benedicto Rodriguez-Castro",
"Martin Hepp"
],
"abstract": "Most ontologies used in e-commerce are nowadays taxonomies with simple structure and loose semantics. One exception is the OPDM collection of ontologies, which express rich information about product categories and their parameters for a number of domains. Yet, having been created by different designers and with specific bias, such ontologies could still benefit from semi-automatic post-processing. We demonstrate how the versatile PatOMat framework for pattern-based ontology transformation can be exploited for suppressing incoherence within the collection and for adapting the ontologies for an unforeseen purpose.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ERMOge4DKSj",
"year": null,
"venue": "2023 ICLR - MLGH Poster",
"pdf_link": "/pdf/bbb7caf081bd5b12c6223f1833303fac3d42bc8f.pdf",
"forum_link": "https://openreview.net/forum?id=ERMOge4DKSj",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Self-supervised Learning to Predict Ejection Fraction using Motion-mode Images",
"authors": [
"Yurong Hu",
"Thomas M. Sutter",
"Ece Ozkan",
"Julia E Vogt"
],
"abstract": "Data scarcity is a fundamental problem since data lies at the heart of any ML project. For most applications, annotation is an expensive task in addition to data collection. Thus, learning from limited labeled data is very critical for data-limited problems, such as in healthcare applications, to have the ability to learn in a sample-efficient manner. Self-supervised learning (SSL) can learn meaningful representations from exploiting structures in unlabeled data, which allows the model to achieve high accuracy in various downstream tasks, even with limited annotations. In this work, we extend contrastive learning, an efficient implementation of SSL, to cardiac imaging. We propose to use generated M(otion)-mode images from readily available B(rightness)-mode echocardiograms and design contrastive objectives with structure and patient-awareness. Experiments on EchoNet-Dynamic show that our proposed model can achieve an AUROC score of 0.85 by simply training a linear head on top of the learned representations, and is insensitive to the reduction of labeled data.",
"keywords": [
"cardiac imaging",
"self-supervised learning",
"contrastive learning",
"motion-mode image"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "oVNgKr5JtD0",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=oVNgKr5JtD0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "To Reviewer E1Jp",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "MGHYP9m6YAL",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=MGHYP9m6YAL",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Ijo9lkGpJv",
"year": null,
"venue": "ECCTD 2013",
"pdf_link": "https://ieeexplore.ieee.org/iel7/6653328/6662190/06662278.pdf",
"forum_link": "https://openreview.net/forum?id=Ijo9lkGpJv",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Inductively coupled wireless power transfer with class-E2 DC-DC converter",
"authors": [
"Tomoharu Nagashima",
"Kazuhide Inoue",
"Xiuqin Wei",
"Elisenda Bou",
"Eduard Alarcón",
"Hiroo Sekiya"
],
"abstract": "This paper proposes an inductive coupled wireless power transfer (WPT) system with class-E <sup xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">2</sup> dc-dc converter along with its design procedure. The proposed WPT system can achieve high power-conversion efficiency at high frequencies because it satisfies the class-E zero-voltage switching and zero-derivative-voltage switching conditions on both the inverter and the rectifier. By using the class-E inverter as a transmitter and the class-E rectifier as a receiver, high power-delivery efficiency can be achieved in the designed WPT system. By using a numerical design procedure proposed in the previous work, it is possible to design the WPT system without considering the impedance matching for satisfying the class-E ZVS/ZDS conditions. The experimental results of the design example showed the overall efficiency of 85.1 % at 100 W output power and 200 kHz operating frequency.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kxi6wOtmSNU",
"year": null,
"venue": "ICEC 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=kxi6wOtmSNU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "MiniDiver: A Novel Mobile Media Playback Interface for Rich Video Content on an iPhoneTM",
"authors": [
"Gregor Miller",
"Sidney S. Fels",
"Matthias Finke",
"Will Motz",
"Walker Eagleston",
"Chris Eagleston"
],
"abstract": "We describe our new mobile media content browser called a MiniDiver. MiniDiving considers media browsing as a personal experience that is viewed, personalized, saved, shared and annotated. When placed on a mobile platform, such as the iPhoneTM, consideration of the particular interface elements lead to new ways to experience media content. The MiniDiver interface elements currently supports multi-camera selection, video hyperlinks, history mechanisms and semantic and episodic video search. We compare performance of the MiniDiver on different media streams to illustrate its feasibility.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "4vrbn5tgaWfy",
"year": null,
"venue": "EC 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=4vrbn5tgaWfy",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Integrating tradeoff support in product search tools for e-commerce sites",
"authors": [
"Pearl Pu",
"Li Chen"
],
"abstract": "In a previously reported user study, we found that users were able to perform decision tradeoff tasks more efficiently and commit considerably fewer errors with the example critiquing interface than with the ranked list. We concluded that example-based search tools were likely to be useful particularly for extending the scope of consumer e-commerce to more complex products where decision making is critical. This paper presents results from a follow-up user study quantifying the benefits of tradeoff support. Users were able to refine the quality of their preference structures and improve decision accuracy by up to 57% after performing tradeoff tasks. Tradeoff support also significantly increased users' confidence in their choices. Together, these two studies show that example critiquing enables users to more accurately find what they want and be confident in their choices, while only requiring a level of effort that is comparable to the ranked list interface.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "eHUSOOnKp7h",
"year": null,
"venue": "EACL 2014",
"pdf_link": "https://aclanthology.org/E14-2009.pdf",
"forum_link": "https://openreview.net/forum?id=eHUSOOnKp7h",
"arxiv_id": null,
"doi": null
}
|
{
"title": "CHISPA on the GO: A mobile Chinese-Spanish translation service for travellers in trouble",
"authors": [
"Jordi Centelles",
"Marta R. Costa-jussà",
"Rafael E. Banchs"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics , pages 33–36,\nGothenburg, Sweden, April 26-30 2014. c\r2014 Association for Computational Linguistics\nCHISPA on the GO\nA mobile Chinese-Spanish translation service for travelers in trouble\nJordi Centelles1,2, Marta R. Costa-juss `a1,2and Rafael E. Banchs2\n1Universitat Polit `ecnica de Catalunya, Barcelona\n2Institute for Infocomm Research, Singapore\n{visjcs,vismrc,rembanchs }@i2r.a-star.edu.sg\nAbstract\nThis demo showcases a translation service that\nallows travelers to have an easy and convenient\naccess to Chinese-Spanish translations via a mo-\nbile app. The system integrates a phrase-based\ntranslation system with other open source compo-\nnents such as Optical Character Recognition and\nAutomatic Speech Recognition to provide a very\nfriendly user experience.\n1 Introduction\nDuring the last twenty years, Machine Transla-\ntion technologies have matured enough to get out\nfrom the academic world and jump into the com-\nmercial area. Current commercially available ma-\nchine translation services, although still not good\nenough to replace human translations, are able to\nprovide useful and reliable support in certain ap-\nplications such as cross-language information re-\ntrieval, cross-language web browsing and docu-\nment exploration.\nOn the other hand, the increasing use of smart-\nphones, their portability and the availability of in-\nternet almost everywhere, have allowed for lots of\ntraditional on-line applications and services to be\ndeployed on these mobile platforms.\nIn this demo paper we describe “CHISPA on the\nGO” a Chinese-Spanish translation service that in-\ntends to provide a portable and easy to use lan-\nguage assistance tool for travelers between Chi-\nnese and Spanish speaking countries.\nThe main three characteristics of the presented\ndemo system are as follows:\n•First, the system uses a direct translation be-\ntween Chinese and Spanish, rather than using\na pivot language as intermediate step as most\nof the current commercial systems do when\ndealing with distant languages.•Second, in addition to support on-line trans-\nlations, as other commercial systems, our\nsystem also supports access from mobile\nplatforms, Android and iOS, by means of na-\ntive mobile apps.\n•Third, the mobile apps combine the base\ntranslation technology with other supporting\ntechnologies such as Automatic Speech\nRecognition (ASR), Optical Character\nRecognition (OCR), Image retrieval and\nLanguage detection in order to provide a\nfriendly user experience.\n2 SMT system description\nThe translation technology used in our system\nis based on the well-known phrase-based trans-\nlation statistical approach (Koehn et al., 2003).\nThis approach performs the translation splitting\nthe source sentence in segments and assigning to\neach segment a bilingual phrase from a phrase-\ntable. Bilingual phrases are translation units that\ncontain source words and target words, and have\ndifferent scores associated to them. These bilin-\ngual phrases are then selected in order to max-\nimize a linear combination of feature functions.\nSuch strategy is known as the log-linear model\n(Och and Ney, 2002). The two main feature func-\ntions are the translation model and the target lan-\nguage model. 
Additional models include lexical\nweights, phrase and word penalty and reordering.\n2.1 Experimental details\nGenerally, Chinese-Spanish translation follows\npivot approaches to be translated (Costa-juss `a et\nal., 2012) because of the lack of parallel data to\ntrain the direct approach. The main advantage\nof our system is that we are using the direct ap-\nproach and at the same time we rely on a pretty\nlarge corpus. For Chinese-Spanish, we use (1) the\nHoly Bible corpus (Banchs and Li, 2008), (2) the33\nUnited Nations corpus, which was released for re-\nsearch purposes (Rafalovitch and Dale, 2009), (3)\na small subset of the European Parliament Plenary\nSpeeches where the Chinese part was syntheti-\ncally produced by translating from English, (4) a\nlarge TAUS corpus (TausData, 2013) which comes\nfrom technical translation memories, and (5) an in-\nhouse developed small corpus in the transportation\nand hospitality domains. In total we have 70 mil-\nlion words.\nA careful preprocessing was developed for all\nlanguages. Chinese was segmented with Stanford\nsegmenter (Tseng et al., 2005) and Spanish was\npreprocessed with Freeling (Padr ´o et al., 2010).\nWhen Spanish is used as a source language, it is\npreprocessed by lower-casing and unaccented the\ninput. Finally, we use the MOSES decoder (Koehn\net al., 2007) with standard configuration: align-\ngrow-final-and alignment symmetrization, 5-gram\nlanguage model with interpolation and kneser-ney\ndiscount and phrase-smoothing and lexicalized re-\nordering. We use our in-house developed corpus\nto optimize because our application is targeted to\nthe travelers-in-need domain.\n3 Web Translator and Mobile\nApplication\nThis section describes the main system architec-\nture and the main features of web translator and\nthe mobile applications.\n3.1 System architecture\nFigure 1 shows a block diagram of the system ar-\nchitecture. Below, we explain the main compo-\nnents of the architecture, starting with the back-\nend and ending with the front-end.\n3.1.1 Back-end\nAs previously mentioned, our translation system\nuses MOSES . More specifically, we use the open\nsource MOSES server application developed by\nSaint-Amand (2013). Because translation tables\nneed to be kept permanently in memory, we use bi-\nnary tables to reduce the memory space consump-\ntion. The MOSES server communicates with a PHP\nscript that is responsible for receiving the query to\nbe translated and sending the translation back.\nFor the Chinese-Spanish language pair, we\ncount with four types of PHPscripts. Two of them\ncommunicate with the web-site and the other two\nwith the mobile applications. In both cases, one\nFigure 1: Block diagram of the system architec-\nture\nof the two PHP scripts supports Chinese to Span-\nish translations and the other one the Spanish to\nChinese translations.\nThe functions of the PHP scripts responsible\nfor supporting translations are: (1) receive the\nChinese/Spanish queries from the front-end; (2)\npreprocess the Chinese/Spanish queries; (3) send\nthese preprocessed queries to the Chinese/Spanish\nto Spanish/Chinese MOSES servers; (4) receive the\ntranslated queries; and (5) send them back to the\nfront-end.\n3.1.2 Front-end\nHTML and Javascript constitute the main code\ncomponents of the translation website.Another\nweb development technique used was Ajax, which\nallows for asynchronous communication between\ntheMOSES server and the website. 
This means that\nthe website does not need to be refreshed after ev-\nery translation.\nThe HTTP protocol is used for the communica-\ntions between the web and the server. Specifically,34\nwe use the POST method, in which the server re-\nceives data through the request message’s body.\nThe Javascript is used mainly to implement the\ninput methods of the website, which are a Spanish\nkeyboard and a Pinyin input method, both open\nsource and embedded into our code. Also, using\nJavascript, a small delay was programmed in order\nto automatically send the query to the translator\neach time the user stops typing.\nAnother feature that is worth mentioning is the\nsupport of user feedback to suggest better transla-\ntions. Using M YSQL, we created a database in\nthe server where all user suggestions are stored.\nLater, these suggestions can be processed off-line\nand used in order to improve the system.\nAdditionally, all translations processed by the\nsystem are stored in a file. This information is to\nbe exploited in the near future, when a large num-\nber of translations has been collected, to mine for\nthe most commonly requested translations. The\nmost common translation set will be used to im-\nplement an index and search engine so that any\nquery entered by a user, will be first checked\nagainst the index to avoid overloading the trans-\nlation engine.\n3.2 Android and iphone applications\nThe android app was programmed with the An-\ndroid development tools ( ADT). It is a plug-in for\nthe Eclipse IDEthat provides the necessary envi-\nronment for building an app.\nThe Android-based “CHISPA on the GO” app\nis depicted in Figure 2.\nFor the communication between the Android\napp and the server we use the HTTP Client inter-\nface. Among other things, it allows a client to\nsend data to the server via, for instance, the POST\nmethod, as used on the website case.\nFor the Iphone app we use the xcode software\nprovided by apple and the programming language\nused is Objective C.\nIn addition to the base translation system, the\napp also incorporates Automatic Speech Recogni-\ntion (ASR), Optical Character Recognition tech-\nnologies as input methods (OCR), Image retrieval\nand Language detection.\n3.2.1 ASR and OCR\nIn the case of ASR, we relay on the native ASR\nengines of the used mobile platforms: Jelly-bean\nin the case of Android1and Siri in the case of\n1http://www.android.com/about/jelly-bean/\nFigure 2: Android application\niOS2. Regarding the OCR implemented technol-\nogy, this is an electronic conversion of scanned\nimages into machine-encoded text. We adapted\nthe open-source OCR Tesseract (released under the\nApache license) (Tesseract, 2013).\n3.2.2 Image retrieval\nFor image retrieving, we use the popular website\nflickr (Ludicorp, 2004). The image retrieving is\nactivated with an specific button ”search Image”\nbutton in the app (see Figure 2). Then, an URL\n(using the HTTP Client method) is sent to a flickr\nserver. In the URL we specify the tag (i.e. the\ntopic of the images we want), the number of im-\nages, the secret key (needed to interact with flickr)\nand also the type of object we expect (in our case,\naJSON object). When the server response is re-\nceived, we parse the JSON object. Afterwards,\nwith the HTTP Connection method and the infor-\nmation parsed, we send the URL back to the server\nand we retrieve the images requested. 
Also, the\nJAVA class that implements all these methods ex-\ntends an AsyncTask in order to not block the\nuser interface meanwhile is exchanging informa-\ntion with the flickr servers.\n3.2.3 Language detection\nWe have also implemented a very simple but ef-\nfective language detection system, which is very\nsuitable for distinguishing between Chinese and\nSpanish. Given the type of encoding we are using\n2http://www.apple.com/ios/siri/35\n(UTF-8), codes for most characters used in Span-\nish are in the range from 40 to 255, and codes for\nmost characters used in Chinese are in the range\nfrom 11,000 and 30,000. Accordingly, we have\ndesigned a simple procedure which computes the\naverage code for the sequence of characters to be\ntranslated. This average value is compared with a\nthreshold to determine whether the given sequence\nof characters represents a Chinese or a Spanish in-\nput.\n4 Conclusions\nIn this demo paper, we described “CHISPA on\nthe GO” a translation service that allows travelers-\nin-need to have an easy and convenient access to\nChinese-Spanish translations via a mobile app.\nThe main characteristics of the presented sys-\ntem are: the use direct translation between Chi-\nnese and Spanish, the support of both website as\nwell as mobile platforms, and the integration of\nsupporting input technologies such as Automatic\nSpeech Recognition, Optical Character Recogni-\ntion, Image retrieval and Language detection.\nAs future work we intend to exploit collected\ndata to implement an index and search engine for\nproviding fast access to most commonly requested\ntranslations. The objective of this enhancement is\ntwofold: supporting off-line mode and alleviating\nthe translation server load.\nAcknowledgments\nThe authors would like to thank the Universitat\nPolit `ecnica de Catalunya and the Institute for In-\nfocomm Research for their support and permission\nto publish this research. This work has been par-\ntially funded by the Seventh Framework Program\nof the European Commission through the Inter-\nnational Outgoing Fellowship Marie Curie Action\n(IMTraP-2011-29951) and the HLT Department of\nthe Institute for Infocomm Reseach.\nReferences\nR. E. Banchs and H. Li. 2008. Exploring Span-\nish Morphology effects on Chinese-Spanish SMT.\nInMATMT 2008: Mixing Approaches to Machine\nTranslation , pages 49–53, Donostia-San Sebastian,\nSpain, February.\nM. R. Costa-juss `a, C. A. Henr ´ıquez Q, and R. E.\nBanchs. 2012. Evaluating indirect strategies for\nchinese-spanish statistical machine translation. J.\nArtif. Int. Res. , 45(1):761–780, September.P. Koehn, F.J. Och, and D. Marcu. 2003. Statisti-\ncal Phrase-Based Translation. In Proceedings of the\n41st Annual Meeting of the Association for Compu-\ntational Linguistics (ACL’03) .\nP. Koehn, H. Hoang, A. Birch, C. Callison-Burch,\nM. Federico, N. Bertoldi, B. Cowan, W. Shen,\nC. Moran, R. Zens, C. Dyer, O. Bojar, A. Con-\nstantin, and E. Herbst. 2007. Moses: Open source\ntoolkit for statistical machine translation. In Pro-\nceedings of the 45th Annual Meeting of the Associa-\ntion for Computational Linguistics (ACL’07) , pages\n177–180, Prague, Czech Republic, June.\nLudicorp. 2004. Flickr. accessed online May 2013\nhttp://www.flickr.com/ .\nF.J. Och and H. Ney. 2002. Dicriminative training\nand maximum entropy models for statistical ma-\nchine translation. In Proceedings of the 40th An-\nnual Meeting of the Association for Computational\nLinguistics (ACL’02) , pages 295–302, Philadelphia,\nPA, July.\nL. Padr ´o, M. 
Collado, S. Reese, M. Lloberes, and\nI. Castell ´on. 2010. FreeLing 2.1: Five Years of\nOpen-Source Language Processing Tools. In Pro-\nceedings of 7th Language Resources and Evaluation\nConference (LREC 2010) , La Valleta, Malta, May.\nA. Rafalovitch and R. Dale. 2009. United Nations\nGeneral Assembly Resolutions: A Six-Language\nParallel Corpus. In Proceedings of the MT Summit\nXII, pages 292–299, Ottawa.\nH. Saint-Amand. 2013. Moses server. accessed\nonline May 2013 http://www.statmt.org/\nmoses/?n=Moses.WebTranslation .\nTausData. 2013. Taus data. accessed online May 2013\nhttp://www.tausdata.org .\nTesseract. 2013. Ocr. accessed online\nMay 2013 https://code.google.com/p/\ntesseract-ocr/ .\nH. Tseng, P. Chang, G. Andrew, D. Jurafsky, and\nC. Manning. 2005. A conditional random field\nword segmenter. In Fourth SIGHAN Workshop on\nChinese Language Processing .\nAppendix: Demo Script Outline\nThe presenter will showcase the “CHISPA on the\nGO” app by using the three different supported in-\nput methods: typing, speech and image. Trans-\nlated results will be displayed along with related\npictures of the translated items and/or locations\nwhen available. A poster will be displayed close\nto the demo site, which will illustrate the main ar-\nchitecture of the platform and will briefly explain\nthe technology components of it.36",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
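The language detection procedure in Section 3.2.3 of the extracted paper above is concrete enough to sketch directly. This is a minimal reimplementation of the described average-code-point heuristic; the paper gives the character ranges but not the threshold, so the value below is an assumption.

```python
def detect_language(text: str, threshold: float = 5000.0) -> str:
    """Average-code-point language guesser, as described in Sec. 3.2.3.

    Most Spanish characters have code points roughly in 40-255, most
    Chinese characters roughly in 11,000-30,000, so the mean code point
    of a query separates the two. The threshold value is a guess; the
    paper does not state the exact number used.
    """
    codes = [ord(ch) for ch in text if not ch.isspace()]
    if not codes:
        return "unknown"
    return "zh" if sum(codes) / len(codes) > threshold else "es"

print(detect_language("¿Dónde está la estación de tren?"))  # -> es
print(detect_language("火车站在哪里?"))                       # -> zh
```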
{
"id": "V5b8vk1aGEq",
"year": null,
"venue": "Guide to e-Science 2011",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=V5b8vk1aGEq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Orchestrating e-Science with the Workflow Paradigm: Task-Based Scientific Workflow Modeling and Executing",
"authors": [
"Xiping Liu",
"Wanchun Dou",
"Jinjun Chen"
],
"abstract": "e-Science usually involves a great number of data sets, computing resources, and large teams managed and developed by research laboratories, universities, or governments. Science processes, if deployed in the workflow forms, can be managed more effectively and executed more automatically. Scientific workflows have therefore emerged and been adopted as a paradigm to organize and orchestrate activities in e-Science processes. Differing with workflows applied in the business world, however, scientific workflows need to take account of specific characteristics of science processes and make corresponding changes to accommodate those specific characteristics. A task-based scientific workflow modeling and executing approach is therefore proposed in this chapter for orchestrating e-Science with the workflow paradigm. Besides, this chapter also discusses some related work in the scientific workflow field.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "sZTOJKRjXa",
"year": null,
"venue": "e-Science 2015",
"pdf_link": "https://ieeexplore.ieee.org/iel7/7303998/7304061/07304337.pdf",
"forum_link": "https://openreview.net/forum?id=sZTOJKRjXa",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Interoperability Oriented Architecture: The Approach of EPOS for Solid Earth e-Infrastructures",
"authors": [
"Daniele Bailo",
"Keith G. Jeffery",
"Alessandro Spinuso",
"Giuseppe Fiameni"
],
"abstract": "EPOS is an e-Infrastructure for solid Earh science in Europe. It integrates many heterogeneous Research Infrastructures (RIs) using a novel approach based on the harmonization of existing service and component interfaces. EPOS is designed to provide an architectural framework for new Research Infrastructures in the domain, and to interface with increasing sophistication of existing RIs working with them in co-development from their present state to a future integrated state. The key is the metadata catalogue based on CERIF which provides the virtualization required for EPOS to provide a homogeneous view over the heterogeneity. Architectural concepts together with a plan for integration and collaboration with EPOS nodes in order to interoperate are presented in this paper.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "w9r0zJI6_J",
"year": null,
"venue": "INFOCOM 2019",
"pdf_link": "https://ieeexplore.ieee.org/iel7/8732929/8737365/08737399.pdf",
"forum_link": "https://openreview.net/forum?id=w9r0zJI6_J",
"arxiv_id": null,
"doi": null
}
|
{
"title": "HyCloud: Tweaking Hybrid Cloud Storage Services for Cost-Efficient Filesystem Hosting",
"authors": [
"Jinlong E",
"Yong Cui",
"Mingkang Ruan",
"Zhenhua Li",
"Ennan Zhai"
],
"abstract": "Today's cloud storage infrastructures typically provide two distinct types of services for hosting files: object storage like Amazon S3 and filesystem storage like Amazon EFS. The former supports simple, flat object operations with a low unit storage price, while the latter supports complex, hierarchical filesystem operations with a high unit storage price. In practice, however, a cloud storage user often desires the advantages of both-efficient filesystem operations with a low unit storage price. An intuitive approach to achieving this goal is to combine the two types of services, e.g., by hosting large files in S3 and small files together with directory structures in EFS. Unfortunately, our benchmark experiments indicate that the clients' download performance for large files becomes a severe system bottleneck. In this paper, we attempt to address the bottleneck with little overhead by carefully tweaking the usages of S3 and EFS. This attempt is enabled by two key observations. First, since S3 and EFS have the same unit network-traffic price and the data transfer between S3 and EFS is free of charge, we can employ EFS as a relay for the clients' quickly downloading large files. Second, noticing that significant similarity exists between the files hosted at the cloud and its users, in most times we can convert large-size file downloads into small-size file synchronizations (through delta encoding and data compression). Guided by the observations, we design and implement an open-source system called HyCloud. It automatically invokes the data APIs of S3 and EFS on behalf of users, and handles the data transfer among S3, EFS and the clients. Real-world evaluations demonstrate that the unit storage price of HyCloud is close to that of S3, and the filesystem operations are executed as quickly as in EFS in most times (sometimes even more quickly than in EFS).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
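HyCloud's idea of converting large-size file downloads into small-size synchronizations via delta encoding and data compression can be illustrated with a toy sketch. This is not HyCloud's actual wire format or code; it is a generic copy/insert delta built from Python's standard library only, to show why shipping a compressed delta of a slightly changed file is far cheaper than re-downloading it.

```python
import difflib
import json
import zlib

def make_delta(old: bytes, new: bytes) -> bytes:
    """Encode `new` relative to `old` as copy/insert ops, then compress."""
    ops = []
    sm = difflib.SequenceMatcher(None, old, new, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(["copy", i1, i2])           # bytes the client already has
        elif j2 > j1:
            ops.append(["ins", new[j1:j2].hex()])  # only the changed bytes travel
    return zlib.compress(json.dumps(ops).encode())

def apply_delta(old: bytes, delta: bytes) -> bytes:
    out = bytearray()
    for op in json.loads(zlib.decompress(delta)):
        out += old[op[1]:op[2]] if op[0] == "copy" else bytes.fromhex(op[1])
    return bytes(out)

old = b"A" * 10_000 + b"version 1"
new = b"A" * 10_000 + b"version 2, slightly edited"
delta = make_delta(old, new)
assert apply_delta(old, delta) == new
print(len(new), "->", len(delta))  # the delta is orders of magnitude smaller
```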
{
"id": "HzOy6lUzPj1",
"year": null,
"venue": "PRL 2022 Poster",
"pdf_link": "/pdf/5cf99b2f3dabcc1c17db11a2854e8a79f2951b5e.pdf",
"forum_link": "https://openreview.net/forum?id=HzOy6lUzPj1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics",
"authors": [
"Ivan Kapelyukh",
"Vitalis Vosylius",
"Edward Johns"
],
"abstract": "We introduce the first work to explore web-scale diffusion models for robotics. DALL-E-Bot enables a robot to rearrange objects in a scene, by first inferring a text description of those objects, then generating an image representing a natural, human-like arrangement of those objects, and finally physically arranging the objects according to that image. The significance is that we achieve this zero-shot using DALL-E, without needing any further data collection or training. Encouraging real-world results with human studies show that this is a promising direction for the future of web-scale robot learning. We also propose a list of recommendations to the text-to-image community, to align further developments of these models with applications to robotics. Videos are available on our webpage at: https://www.robot-learning.uk/dall-e-bot",
"keywords": [
"Diffusion Models",
"Image Generation",
"Object Rearrangement"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SJx3gbC-2Q",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=SJx3gbC-2Q",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A practical system development with applications in industry.",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "fQDGt0RJkMu",
"year": null,
"venue": "MIDL 2021 Poster",
"pdf_link": "/pdf/7ff12633354907db45d1838d10e0a8c6d4948ab3.pdf",
"forum_link": "https://openreview.net/forum?id=fQDGt0RJkMu",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Gated CNNs for Nuclei Segmentation in H&E Breast Images",
"authors": [
"Shana Beniamin",
"April Khademi",
"Dimitri Androutsos"
],
"abstract": "Nuclei segmentation using deep learning has been achieving high accuracy using U-Net and variants, but a remaining challenge is distinguishing touching and overlapping cells. In this work, we propose using gated CNN (GCNN) networks to obtain sharper predictions around object boundaries and improve nuclei segmentation performance. The method is evaluated in over 1000 multicentre diverse H&E breast cancer images from three databases and compared to baseline U-Net and R2U-Net.",
"keywords": [
"Nuclei Segmentation",
"Breast Cancer",
"Deep Learning",
"Histopathology",
"CNNs"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "mQ-KsquJRqF",
"year": null,
"venue": "ICDE 2021",
"pdf_link": "https://ieeexplore.ieee.org/iel7/9458599/9458600/09458936.pdf",
"forum_link": "https://openreview.net/forum?id=mQ-KsquJRqF",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E2DTC: An End to End Deep Trajectory Clustering Framework via Self-Training",
"authors": [
"Ziquan Fang",
"Yuntao Du",
"Lu Chen",
"Yujia Hu",
"Yunjun Gao",
"Gang Chen"
],
"abstract": "Trajectory clustering has played an essential role in trajectory mining tasks. It serves in a wide range of real-life applications, including transportation, location-based services, behavioral study, and so on. To support trajectory clustering analytics, a plethora of trajectory clustering methods have been proposed, which mainly extend traditional clustering algorithms by using spatio-temporal characteristics of trajectories. However, existing traditional trajectory clustering approaches based on raw trajectory representation highly rely on hand-craft similarity metrics, and can not capture hidden spatial dependencies in trajectory data, which is inefficient and inflexible for clustering analysis. To this end, we propose an end-to-end deep trajectory clustering framework via self-training, termed as E <sup xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">2</sup> DTC, inspired by the data-driven capabilities of deep neural networks. E <sup xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">2</sup> DTC does not require any additional manual feature extraction operations, and can be easily adapted for trajectory clustering analytics on any trajectory dataset. Extensive experimental evaluations on three real-life datasets show that our framework E2DTC achieves superior accuracy and efficiency, compared with classical clustering methods (i.e., K-Medoids) and state-of-the-art neural-network based approaches (i.e., t2vec).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
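The self-training loop named in the E2DTC abstract above is not spelled out there, so the following sketch shows the standard DEC-style variant that deep clustering frameworks of this kind commonly build on; treating it as E2DTC's exact objective would be an assumption.

```python
import torch
import torch.nn.functional as F

def dec_target_distribution(q: torch.Tensor) -> torch.Tensor:
    """Sharpen soft cluster assignments q (N, K) into self-training targets p."""
    w = q ** 2 / q.sum(dim=0)            # square, normalize by cluster frequency
    return w / w.sum(dim=1, keepdim=True)

def self_training_loss(q: torch.Tensor) -> torch.Tensor:
    """KL(p || q) between sharpened targets and current soft assignments."""
    p = dec_target_distribution(q).detach()  # targets treated as constants
    return F.kl_div(q.log(), p, reduction="batchmean")
```

Here q would come from, e.g., a similarity between learned trajectory embeddings and cluster centroids; minimizing the loss pulls embeddings toward high-confidence clusters without any labels.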
{
"id": "yBZOmVe2Akf",
"year": null,
"venue": "ECAI 2008",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-58603-891-5-751",
"forum_link": "https://openreview.net/forum?id=yBZOmVe2Akf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "WikiTaxonomy: A Large Scale Knowledge Resource",
"authors": [
"Simone Paolo Ponzetto",
"Michael Strube"
],
"abstract": "We present a taxonomy automatically generated from the system of categories in Wikipedia. Categories in the resource are identified as either classes or instances and included in a large subsumption, i.e. isa, hierarchy. The taxonomy is made available in RDFS format to the research community, e.g. for direct use within AI applications or to bootstrap the process of manual ontology creation.",
"keywords": [],
"raw_extracted_content": "WikiTaxonomy: A Large Scale Knowledge Resource\nSimone Paolo Ponzetto1and Michael Strube1\nAbstract. We present a taxonomy automatically generated from\nthe system of categories in Wikipedia. Categories in the resource\nare identified as either classes orinstances and included in a large\nsubsumption, i.e. isa, hierarchy. The taxonomy is made available in\nRDFS format to the research community, e.g. for direct use within AIapplications or to bootstrap the process of manual ontology creation.\n1 INTRODUCTION\nAdvances in the development of knowledge intensive AI systems\ncrucially depend on the availability of large coverage, machine read-\nable knowledge sources. While tremendous progress in AI has been\nmade in the last decades by investigating data-driven inference meth-\nods, we believe that further advancement ultimately depends also\non the free access to large repositories of structured knowledge on\nwhich these inference techniques can be applied. In this article we\napproach the problem by using Wikipedia. We present methods for\nderiving a large coverage taxonomy of classes and instances from the\nnetwork of categories in Wikipedia and present the RDF Schema wemake freely available to the research community.\n2 METHODS\nWe apply in sequence the methods described in Ponzetto & Strube\n[8] and Zirn et al. [13] in order to generate a semantic network from\nthe system of categories in Wikipedia.\n1. We label the relations between category pairs as isaandnotisa.\nThis way the category network, which per-se is merely a hierarchi-\ncal thematic categorization of the topics of articles, is transformed\ninto a subsumption hierarchy with a well-defined semantics.\n2. We classify categories as either classes orinstances in order to\ndistinguish between isasubsumption and instance-of relations.\n2.1 Deriving a taxonomy from Wikipedia\nIn [8] we presented a set of lightweight heuristics for distinguishing\nbetween isaandnotisa links in the Wikipedia category network.\nSyntax-based methods label category links based on string match-\ning of syntactic components of the category labels. They use a\nfull syntactic parse of the category labels to check whether cate-\ngory label pairs share the same lexical head2(head matching) or\nthe head of a category label occurs as a modifier in another one\n(modifier matching).\n1EML Research gGmbH, Schloss-Wolfsbrunnenweg 33, 69118 Heidelberg,\nGermany. Website: http://www.eml-research.de/nlp\n2The head of a phrase is the word that determines the syntactic type of the\noverall phrase of which it is a member. In the case of category labels, it is\nthe main noun of the label, e.g. the noun Scientists for the category label\nSCIENTISTS WHO COMMITTED SUICIDE .Connectivity-based methods reason on the structure and connec-\ntivity of the categorization network. Instance categorization ap-\nplies the method from [10] to identify instances from Wikipedia\npages to those categories referring to the same entities as the\npages. Redundant categorization labels category pairs as in an isa\nrelation by looking for directly connected categories redundantly\nhaving a page in common.\nLexico-syntactic based methods use lexico-syntactic patterns ap-\nplied to large text corpora (e.g. Wikipedia itself) to identify isa[4]\nandpart-of relations [2], the latter providing evidence that the re-\nlation is not an isarelation. 
A majority voting scheme based on the\nnumber of hits for each set of patterns is used to decide whether\nthe relation is isaor not.\nInference-based methods propagate the previously found relations\nbased on the properties of multiple inheritance and transitivity of\ntheisarelation.\nThese methods generate 105,418 isalinks from a network of 127,325\ncategories and 267,707 links. We achieve a score of 87.9 balanced\nF-measure when evaluating the taxonomy against the subset of Re-\nsearchCyc [6] in which the categories can be mapped to.\n2.2 Distinguishing between classes and instances\nZirn et al. [13] go one step forward from [8] and classify categories asinstances orclasses. This step yields a taxonomy with finer grained\nsemantics, and it is necessary since the network contains many cate-gories whose reference is an entity, e.g. the M\nICROSOFT category3,\nrather than a property of a set of individuals, e.g. M ULTINATIONAL\nCOMPANIES . Similarly to [8], they devise a set of heuristics on which\nto decide the reference type of a category label and combine the best\nperforming methods for each class into a voting scheme. Given a cat-\negory cwith label l,cis classified as either an instance or aclass by\nthe first satisfied criterion.\n1.Page & Plural: if no page titled lexists and the lexical head of l\nis plural, then cis aclass.\n2.Capitalization & NER: else if lis capitalized and has been rec-\nognized by a Named Entity Recognizer as a named entity, then c\nis an instance.\n3.Page: else if no page titled lexists, then cis aclass.\n4.Plural: else if the head of lis plural, then cis aclass.\n5.Structure: else if chas no sub-category, then it is a class.\n6.Capitalization: else if lis capitalized, then cis an instance.\n7.Default: else cis aclass.\nUsing the same category network from [8] this pipeline of heuristics\nis shown to classify 111,652 class and 15,472 instance categories\nwith an accuracy of 84.5% when evaluated against ResearchCyc.\n3We use Sans Serif for words and queries, CAPITALS for Wikipedia pages\nand S MALL CAPSfor Wikipedia categories.ECAI 2008\nM. Ghallab et al. (Eds.)\nIOS Press, 2008\n© 2008 The authors and IOS Press. All rights reserved.\ndoi:10.3233/978-1-58603-891-5-751751\n<rdf:Description rdf:about=\"http://www.eml-research.de/WikipediaOntology/Class#_1268\">\n<rdfs:subClassOf rdf:resource=\"http://www.eml-research.de/WikipediaOntology/Class#_2419\"/>\n<rdfs:comment>http://en.wikipedia.org/Wiki/Category:Multinational_companies</rdfs:comment>\n<rdfs:label>Multinational_companies</rdfs:label><rdf:type rdf:resource=\"http://www.w3.org/2000/01/rdf-schema#Class\"/>\n</rdf:Description>\n<rdf:Description rdf:about=\"http://www.eml-research.de/WikipediaOntology/Individual#:_36\">\n<rdfs:comment>http://en.wikipedia.org/Wiki/Category:Microsoft</rdfs:comment>\n<rdfs:label>Microsoft</rdfs:label>\n<rdf:type rdf:resource=\"http://www.eml-research.de/WikipediaOntology/Class#_1268\"/>\n</rdf:Description>\nFigure 1. Fragment of WikiTaxonomy in RDFS format. Individuals are linked to the class they are instances of using the rdf:type predicate.\n3 WIKITAXONOMY\nWe applied the methods from [8] and [13] using the English\nWikipedia database dump from 25 September 2006. The extracted\ntaxonomy was converted into RDF Schema [3, RDFS] using the\nJena Semantic Web Framework4. RDFS has a very limited seman-\ntics and serves mostly as foundation for other Semantic Web lan-\nguages. 
Nevertheless it suffices in the present scenario of data ex-\nchange where we have only a set of classes in a hierarchical rela-\ntion. RDFS in addition provides compatibility with free ontology ed-\nitors such as Prot ´eg´e [5] for visualization, additional manual edit-\ning or conversion to richer knowledge representation languages such\nas OWL [7]. Figure 1 shows a sample fragment of the WikiTaxon-\nomy in RDFS format. In the RDFS data model Wikipedia categories\nare represented as resources (i.e. a list of rdf:Description el-\nements) and the subsumption relation is modeled straightforwardly\nusing therdfs:subClassOf property. A human readable version\nof the name of the category is given via the rdfs:label prop-\nerty and a link to the on-line version of the corresponding page isprovided using the rdfs:comment property. In order to distin-\nguish between categories which are instances or classes we use the\nrdf:type predicate to state whether a resource is a class or an in-\ndividual of a class. In addition, the distinction is also given in theresource identifier, i.e. the URI-reference.\n4 RELATED WORK\nResearchers working in information extraction have recently begun\nto use Wikipedia as a resource for automatically deriving structuredsemantic content. Suchanek et al. build the YAGO system [10] by\nmerging WordNet and Wikipedia: the isahierarchy of WordNet is\npopulated with instances taken from Wikipedia pages. Auer et al.\npresent the DBpedia system [1] which generates RDF statements\nby extracting the attribute-value pairs contained in the infoboxes\nof the Wikipedia pages (i.e. the tables summarizing the most im-\nportant attributes of the entity referred by the page), e.g. the pair\ncapital=[[Berlin]] from the GERMANY page. Wu & Weld\nshow in [11] how to augment Wikipedia with automatically extracted\ninformation. They propose to ‘autonomously semantify’ Wikipedia\nby (1) extracting new facts from its text via a cascade of Condi-\ntional Random Field models; (2) adding new hyperlinks to the ar-\nticles’ text by finding the target articles nouns refer to. Wu & Weld’s\nKylin Ontology Generator (KOG) [12] is the work closer to ours.\nTheir system builds a subsumption hierarchy of classes by combiningWikipedia infoboxes with WordNet using statistical-relational learn-\ning. Each infobox template, e.g. Infobox Country for countries,\n4http://jena.sourceforge.netrepresents a class and the slots of the template are considered as\nthe attributes of the class. KOG uses Markov Logic Networks [9]\nin order to jointly predict both the subsumption relation between\nclasses and their mapping to WordNet. While KOG represents a the-\noretically sounder methodology than [8] and [13], the lightweight\nheuristics from the latters are straightforward to implement and show\nthat, when given high quality semi-structured input as in the case of\nWikipedia, large coverage semantic networks can be generated by\nusing simple heuristics which capture the conventions governing its\npublic editorial base.\nACKNOWLEDGEMENTS\nThis work has been funded by the Klaus Tschira Foundation, Heidel-\nberg, Germany. The first author has been supported by a KTF grant\n(09.003.2004).\nREFERENCES\n[1] S ¨oren Auer, Christian Bizer, Jens Lehmann, Georgi Kobilarov, Richard\nCyganiak, and Zachary Ives, ‘DBpedia: A nucleus for a Web of open\ndata’, in Proc. of ISWC 2007 + ASWC 2007, pp. 722–735, (2007).\n[2] Matthew Berland and Eugene Charniak, ‘Finding parts in very large\ncorpora’, in Proc. of ACL-99, pp. 
57–64, (1999).\n[3] Dan Brickley and Ramanathan V . Guha, ‘RDF vocabulary description\nlanguage 1.0: RDF schema’, Technical report, W3C, (2004). http:\n//www.w3.org/TR/rdf-schema.\n[4] Marti A. Hearst, ‘Automatic acquisition of hyponyms from large text\ncorpora’, in Proc. of COLING-92, pp. 539–545, (1992).\n[5] Holger Knublauch, Ray W. Fergerson, Natalya Fridman Noy, and\nMark A. Musen, ‘The Prot ´eg´e OWL plugin: an open development en-\nvironment for semantic web applications’, in Proc. of ISWC 2004, pp.\n229–243, (2004).\n[6] Douglas B. Lenat and R. V . Guha, Building Large Knowledge-Based\nSystems: Representation and Inference in the CYC Project, Addison-\nWesley, Reading, Mass., 1990.\n[7] Peter F. Patel-Schneider, Patrick Hayes, and Ian Horrocks, ‘OWL Web\nOntology Language semantics and abstract syntax’, Technical report,\nW3C, (2004). http://www.w3.org/TR/owl-semantics.\n[8] Simone Paolo Ponzetto and Michael Strube, ‘Deriving a large scale tax-\nonomy from Wikipedia’, in Proc. of AAAI-07, pp. 1440–1445, (2007).\n[9] Matthew Richardson and Pedro Domingos, ‘Markov logic networks’,\nMachine Learning, 62, 107–136, (2006).\n[10] Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum, ‘YAGO:\nA core of semantic knowledge’, in Proc. of WWW-07, pp. 697–706,\n(2007).\n[11] Fei Wu and Daniel Weld, ‘Automatically semantifying Wikipedia’, in\nProc. of CIKM-07, pp. 41–50, (2007).\n[12] Fei Wu and Daniel Weld, ‘Automatically refining the Wikipedia in-\nfobox ontology’, in Proc. of WWW-08, (2008).\n[13] C ¨acilia Zirn, Vivi Nastase, and Michael Strube, ‘Distinguishing be-\ntween instances and classes in the Wikipedia taxonomy’, in Proc. of\nESWC-08, pp. 376–387, (2008).S.P . Ponzetto and M. Strube / WikiTaxonomy: A Large Scale Knowledge Resource 752",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
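The seven ordered criteria in Section 2.2 of the extracted paper above form a first-match decision list, which translates almost line-for-line into code. A minimal sketch follows; the boolean inputs (page lookup, plural-head detection, NER, capitalization, sub-category check) are assumed to be computed elsewhere, and the function name is hypothetical.

```python
def classify_category(page_exists: bool, head_is_plural: bool,
                      is_named_entity: bool, is_capitalized: bool,
                      has_subcategories: bool) -> str:
    """Return 'class' or 'instance' using the first satisfied criterion (Sec. 2.2)."""
    if not page_exists and head_is_plural:
        return "class"        # 1. Page & Plural
    if is_capitalized and is_named_entity:
        return "instance"     # 2. Capitalization & NER
    if not page_exists:
        return "class"        # 3. Page
    if head_is_plural:
        return "class"        # 4. Plural
    if not has_subcategories:
        return "class"        # 5. Structure
    if is_capitalized:
        return "instance"     # 6. Capitalization
    return "class"            # 7. Default

# e.g. MICROSOFT: page exists, singular head, NER hit, capitalized -> instance
print(classify_category(True, False, True, True, True))
```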
{
"id": "8h5EqZE_y5",
"year": null,
"venue": "CoRR 2023",
"pdf_link": "http://arxiv.org/pdf/2304.00150v1",
"forum_link": "https://openreview.net/forum?id=8h5EqZE_y5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E(3) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics",
"authors": [
"Artur P. Toshev",
"Gianluca Galletti",
"Johannes Brandstetter",
"Stefan Adami",
"Nikolaus A. Adams"
],
"abstract": "We contribute to the vastly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-equivariant counterparts. We benchmark two well-studied fluid flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow, and compare equivariant graph neural networks to their non-equivariant counterparts on different performance measures, such as kinetic energy or Sinkhorn distance. Such measures are typically used in engineering to validate numerical solvers. Our main findings are that while being rather slow to train and evaluate, equivariant models learn more physically accurate interactions. This indicates opportunities for future work towards coarse-grained models for turbulent flows, and generalization across system dynamics and parameters.",
"keywords": [],
"raw_extracted_content": "Accepted at the ICLR 2023 Workshop on Physics for Machine Learning\nE(3) EQUIVARIANT GRAPH NEURAL NETWORKS FOR\nPARTICLE -BASED FLUID MECHANICS\nArtur Toshev, Gianluca Galletti, Stefan Adami & Nikolaus Adams\nTechnical University of Munich, Chair of Aerodynamics and Fluid Mechanics\nfartur.toshev,g.galletti [email protected]\nJohannes Brandstetter\nMicrosoft Research AI4Science\nABSTRACT\nWe contribute to the vastly growing field of machine learning for engineering\nsystems by demonstrating that equivariant graph neural networks have the poten-\ntial to learn more accurate dynamic-interaction models than their non-equivariant\ncounterparts. We benchmark two well-studied fluid flow systems, namely the 3D\ndecaying Taylor-Green vortex and the 3D reverse Poiseuille flow, and compare\nequivariant graph neural networks to their non-equivariant counterparts on dif-\nferent performance measures, such as kinetic energy or Sinkhorn distance. Such\nmeasures are typically used in engineering to validate numerical solvers. Our main\nfindings are that while being rather slow to train and evaluate, equivariant models\nlearn more physically accurate interactions. This indicates opportunities for fu-\nture work towards coarse-grained models for turbulent flows, and generalization\nacross system dynamics and parameters.\n(a)\n (b)\nFigure 1: Velocity magnitude of Taylor-Green vortex (a) and x-velocity of reverse Poiseuille (b).\n1 P ARTICLE -BASED FLUID MECHANICS\nNavier-Stokes equations (NSE) are omnipresent in fluid mechanics, hydrodynamics or weather mod-\neling. However, for the majority of problems, solutions are analytically intractable, and obtaining\naccurate predictions necessitates falling back to numerical solution schemes. Those can be split into\ntwo categories: grid/mesh-based (Eulerian description) and particle-based (Lagrangian description).\nSmoothed Particle Hydrodynamics. In this work, we investigate Lagrangian methods and more\nprecisely the Smoothed Particle Hydrodynamics (SPH) approach, which was independently devel-\noped by Gingold & Monaghan (1977) and Lucy (1977) to simulate astrophysical systems. Since\nthen, SPH has established as the preferred approach in various applications ranging from free sur-\nfaces such as ocean waves (Violeau & Rogers, 2016), through fluid-structure interaction systems\n(Zhang et al., 2021), to selective laser melting in additive manufacturing (Weirather et al., 2019).\n1arXiv:2304.00150v1 [cs.LG] 31 Mar 2023\nAccepted at the ICLR 2023 Workshop on Physics for Machine Learning\nThe main idea behind SPH is to represent the fluid properties at discrete points in space and to use\ntruncated radial interpolation kernel functions to approximate them at any arbitrary location. The\nkernel functions are used to estimate state statistics which define continuum-scale interactions be-\ntween particles. The justification for truncating kernel support is the assumption of local interactions\nbetween particles. The resulting discretized equations are then integrated in time using numerical\nintegration techniques like symplectic Euler by which the particle positions are updated.\nTo generate training data, we implemented our own SPH solver based on the transport velocity\nformulation by Adami et al. (2013), which promises a homogeneous particle distribution over the\ndomain. 
We then selected two flow cases, both of which are well-known in the fluid mechanics com-\nmunity: the 3D laminar Taylor-Green V ortex and the 3D reverse Poiseuille Flow. We are planning\nto open-source the datasets in the near future.\nTaylor-Green Vortex. The Taylor-Green vortex system (TGV , see Figure 1 (a)) with Reynolds\nnumber of Re = 100 is neither laminar nor turbulent, i.e. there is no layering of the flow (typical for\nlaminar flows), but also the small scales caused by vortex stretching do not lead to a fully developed\nenergy cascade (typical for turbulent flows) Brachet et al. (1984). The TGV has been extensively\nstudied starting with Taylor & Green (1937) and continuing all the way to Sharma & Sengupta\n(2019). The TGV system is typically initialized with a velocity field given by\nu=\u0000cos(kx) cos(ky) cos(kz); v = sin(kx) cos(ky) cos(kz); w = 0; (1)\nwherekis an integer multiple of 2\u0019. The TGV datasets used in this work consist of 8/2/2 trajectories\nfor training/validation/testing, where each trajectory comprises 8000 particles. Each trajectory spans\n1s physical time and was simulated with dt= 0:001resulting in 1000 time steps per trajectory.\nThe ultimate goal would be to learn the dynamics over much larger time steps than those taken by\nthe numerical solver, but with this dataset we just want to demonstrate the applicability of learned\napproaches to reproducing numerical solver results.\nReverse Poiseuille Flow. The Poiseuille flow, i.e. laminar channel flow, is another well-studied\nflow case in fluid mechanics. However, channel flow requires the treatment of wall-boundary con-\nditions, which is beyond the focus of this work. In this work, we therefore consider data obtained\nby reverse Poiseuille flow (RPF, see Figure 1 (b)) (Fedosov et al., 2008), which essentially consists\nof two opposing streams in a fully periodic domain. Those flows are exposed to opposite force\nfields, i.e., the upper and lower half are accelerated in negative xdirection and positive xdirection,\nrespectively. Due to the fact that the flow is statistically stationary (the vertical velocity profile has\na time-independent mean value), the RPF dataset consists of one long trajectory spanning 120s.\nThe flow field is discretized by 8000 particles and simulated with dt= 0:001, followed by sub-\nsampling at every 10th step. Learning to directly predict every 10th state is what we call temporal\ncoarse-graining. The resulting number of training/validation/testing instances is the same as for\nTGV , namely 8000/2000/2000.\n2 (E QUIVARIANT )GRAPH NETWORK -BASED SIMULATORS\nWe first formalize the task of autoregressively predicting the next state of a Lagrangian fluid me-\nchanics simulation based on the notation from Sanchez-Gonzalez et al. (2020). Let Xtdenote\nthe state of a particle system at time t. One full trajectory of K+ 1 steps can be written as\nXt0:K= (Xt0;:::;XtK). Each state Xtis made up of Nparticles, namely Xt= (xt\n1;xt\n2;:::xt\nN),\nwhere each xiis the state vector of the i-th particle. However, the inputs to the learned simulator\ncan span multiple time instances. Each node xt\nican contain node-level information like the current\nposition pt\niand a time sequence of Hprevious velocity vectors _pt1+k\u0000H:k, as well as global features\nlike the external force field fiin the reverse Poiseuille flow. To build the connectivity graph, we use\nan interaction radius of \u00181:5times the average interparticle distance. 
This results in around 10-20\none-hop neighbors.\nGraph Network-based Simulator. The Graph Network-based Simulator (GNS) frame-\nwork (Sanchez-Gonzalez et al., 2020) is one of the most popular learned surrogates for engineering\nparticle-based simulations. The main idea of the GNS model is to use the established encoder-\nprocessor-decoder architecture (Battaglia et al., 2018) with a processor that stacks several message\npassing layers (Gilmer et al., 2017). One major strength of the GNS model lies in its simplicity given\nthat all its building blocks are simple MLPs. However, the performance of GNS when predicting\n2\nAccepted at the ICLR 2023 Workshop on Physics for Machine Learning\nlong trajectories strongly depends on choosing the right amount of Gaussian noise to perturb input\ndata. Additionally, GNS and other non-equivariant models are less data-efficient (Batzner et al.,\n2022). For these reasons, we implement and tune GNS as a comparison baseline, and use it as an\ninspiration for which setup, features, and hyperparameters to use for equivariant models.\nSteerable E(3)-equivariant Graph Neural Network. Steerable E(3)-equivariant Graph Neural\nNetworks (SEGNNs) (Brandstetter et al., 2022a) are an instance of E( 3)-equivariant GNNs, i.e.,\nGNNs that are equivariant with respect to isometries of the Euclidean space (rotations, translations,\nand reflections). Most E( 3)-equivariant GNNs that are tailored towards molecular property predic-\ntion tasks, (Batzner et al., 2022; Batatia et al., 2022) restrict the parametrization of the Clebsch-\nGordan tensor products to an MLP-parameterized embedding of pairwise distances. In contrast,\nSEGNNs use general steerable node and edge attributes which can incorporate any kind of physical\nquantity, and directly learn the weights of the Clebsch-Gordan tensor product. Indeed, extensions\nof methods such as NequIP (Batzner et al., 2022) towards general physical features would results in\nsomething akin to SEGNN.\nSteerable attributes strongly impact the Clebsch-Gordan tensor products, and thus finding physically\nmeaningful edge and node attributes is crucial for good performance. In particular, we chose edge\nattributes ^aij=V(pij), whereV(\u0001)is the spherical harmonic embedding and pij=pi\u0000pjare\nthe pairwise distances. We further choose node attributes ^ai=V(\u0016_pi) +P\nk2N(i)^aik, where \u0016_piare\naveraged historical velocities and N(i)is thei-neighborhood. As for node and edge features, we\nfound that concatenated historical velocities for the nodes and pairwise displacements for the edges\ncapture best the Navier-Stokes dynamics.\nFor training SEGNNs, we verified that adding Gaussian noise to the inputs (Sanchez-Gonzalez et al.,\n2020) indeed significantly improves performance. We further found that explicitly concatenating the\nexternal force vector fito the node features boosts performance in the RPF case. However, adding\nfito the node attributes ^ai0=V(fi) +V(\u0016_pi) +P\nk2N(i)^aikdoes not improve performance.\nOther models, like EGNN by Satorras et al. (2021), achieve equivariance by working with invariant\nmessages, but it does not allow the same flexibility in terms of features. On a slightly more distant\nnote, there has been a rapid raise in physics-informed machine learning (Raissi et al., 2019) and\noperator learning (Li et al., 2021), where functions or surrogates are learned in an Eulerian (grid-\nbased) way. 
SEGNN is a sound choice for Lagrangian fluid mechanics problems since it is designed\nto work directly with vectorial information and particles.\n3 R ESULTS\nThe task we train on is the autoregressive prediction of accelerations pgiven the current position\npiandH= 5 past velocities of the particles. We measured the performance of the GNS and the\nSEGNN models in four aspects when evaluating on the test dataset: (i) Mean-squared error (MSE)\nof particle positions MSE pwhen rolling out a trajectory over 100 time steps (1 physical second\nfor both flow cases). This is also the validation loss during training. (ii) Sinkhorn distance , as\nan optimal transport distance measure between particle distributions. Lower values indicate that\nthe particle distribution is closer to the reference one. (iii) Kinetic energy Ekin(= 0:5mv2) as a\nglobal measure of physical behavior. Performance comparisons are summarized in Table 1. GNS\nand SEGNN models have roughly the same number of parameters for Taylor-Green (both have 5\nlayers and 128-dim features), whereas for reverse Poiseuille SEGNN has three times less parameters\nthan GNS (SEGNN has 64-dim features). Looking at the timing in Table 1, equivariant models of\nsimilar size are one order of magnitude slower than non-equivariant ones. This is a known result\nand is related to the constraint of how the Clebsch-Gordan tensor product can be implemented on\naccelerators like GPUs.\nTaylor-Green Vortex. One of the major challenges of the Taylor-Green dataset are the varying\ninput and output scales throughout a trajectory, by up to one order of magnitude. Consequently, this\nresults in the larger importance of first-time steps in the loss even after data normalization. Figure 2\n(a) summarizes the most important performance properties of the Taylor-Green vortex experiment.\nIn general, both models match the ground truth kinetic energy well, but GNS drifts away from the\nreference SPH curve earlier. Both learned solvers, seem to preserve larger system velocities resulting\nin higherEkin. The rollout MSE for this case matches the behavior seen in Ekin.\n3\nAccepted at the ICLR 2023 Workshop on Physics for Machine Learning\nTable 1: Performance measures on the Taylor-Green vortex and reverse Poiseuille flow. The\nSinkhorn distance is averaged over test rollouts, the inference time is obtained for one rollout step\nof 8000 particles.\nTaylor-Green vortex Reverse Poiseuille flow\nSEGNN GNS SPH SEGNN GNS SPH\nMSE p 7.7e-5 1.3e-4 - 7.7e-3 8.0e-3 -\nMSE Ekin 5.3e-5 1.3e-4 - 2.8e-1 3.0e-1 -\nSinkhorn 1.3e-7 1.1e-7 - 7.8e-8 1.9e-6 -\nTime [ms] 290 32 9.7 180 33 110\n#params 720k 630k - 180k 630k -\n(a)\n (b)\nFigure 2: Taylor-Green vortex (a) and reverse Poiseuille (b) performance evolution.\nReverse Poiseuille Flow. The challenge of the reverse Poiseuille case lies in the different velocity\nscales between the main flow direction ( x-axis) and the yandzcomponents of the velocity. Al-\nthough such unbalanced velocities are used as inputs, target accelerations in x-,y-, andz-direction\nall underlie similar distributions. This, combined with temporal coarsening makes the problem sen-\nsitive to input deviations. 
Figure 2 (b) shows that SEGNN reproduces the particle distribution almost\nperfectly, whereas GNS shows signs of particle clustering, resulting in a larger Sinkhorn distance.\nInterestingly, the shear layers in-between the inverted flows (around planes y=f0;1;2g) seem to\nhave the largest deviation from the ground truth, which could be source of clusters, see Figure 3.\n4 F UTURE WORK\nIn this work, we demonstrate that equivariant models are well suited to capture underlying physics\nproperties of particle-based fluid mechanics systems. Natural future steps are enforcing physical\nbehaviors such as homogeneous particle distributions, and including recent developments for neural\nPDE training into the training procedure of Sanchez-Gonzalez et al. (2020). The latter include e.g.,\nthe push-forward trick and temporal bundling (Brandstetter et al., 2022b). One major weakness of\nrecursively applied solvers, which these strategies aim to mitigate, is error accumulation, which in\nmost cases leads to out-of-distribution states, and consequently unphysical behavior after several\nrollout steps. We conjecture that together with such extensions equivariant models offer a promising\ndirection to tackle some of the long-standing problems in fluid mechanics, such as the learning of\ncoarse-grained representations of turbulent flow problems, e.g. Taylor-Green (Brachet et al., 1984),\nor learning the multi-resolution dynamics of NSE problems (Hu et al., 2017).\n4\nAccepted at the ICLR 2023 Workshop on Physics for Machine Learning\nREFERENCES\nStefan Adami, XY Hu, and Nikolaus A Adams. A transport-velocity formulation for smoothed\nparticle hydrodynamics. Journal of Computational Physics , 241:292–307, 2013.\nIlyes Batatia, D ´avid P ´eter Kov ´acs, Gregor N. C. Simm, Christoph Ortner, and G ´abor Cs ´anyi. Mace:\nHigher order equivariant message passing neural networks for fast and accurate force fields, 2022.\nPeter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi,\nMateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al.\nRelational inductive biases, deep learning, and graph networks. 2018.\nSimon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Ko-\nrnbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E (3)-equivariant graph neural\nnetworks for data-efficient and accurate interatomic potentials. Nature communications , 13(1):\n2453, 2022.\nMarc E Brachet, D Meiron, S Orszag, B Nickel, R Morf, and Uriel Frisch. The taylor-green vortex\nand fully developed turbulence. Journal of Statistical Physics , 34(5-6):1049–1063, 1984.\nJohannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, and Max Welling. Geomet-\nric and physical quantities improve e(3) equivariant message passing. In International Conference\non Learning Representations , 2022a.\nJohannes Brandstetter, Daniel E. Worrall, and Max Welling. Message passing neural PDE solvers.\nInInternational Conference on Learning Representations , 2022b.\nDmitry A Fedosov, Bruce Caswell, and George Em Karniadakis. Reverse poiseuille flow: the nu-\nmerical viscometer. In Aip conference proceedings , volume 1027, pp. 1432–1434. American\nInstitute of Physics, 2008.\nJustin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural\nmessage passing for quantum chemistry. In International conference on machine learning , pp.\n1263–1272. PMLR, 2017.\nRobert A Gingold and Joseph J Monaghan. 
Wei Hu, Wenxiao Pan, Milad Rakhsha, Qiang Tian, Haiyan Hu, and Dan Negrut. A consistent multi-resolution smoothed particle hydrodynamics method. Computer Methods in Applied Mechanics and Engineering, 324:278–299, 2017.

Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. In ICLR, 2021.

Leon B Lucy. A numerical approach to the testing of the fission hypothesis. Astronomical Journal, 82:1013–1024, 1977.

Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.

Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, pp. 8459–8468. PMLR, 2020.

Víctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In ICML, pp. 9323–9332. PMLR, 2021.

Nidhi Sharma and Tapan K Sengupta. Vorticity dynamics of the three-dimensional Taylor-Green vortex problem. Physics of Fluids, 31(3):035106, 2019.

Geoffrey Ingram Taylor and Albert Edward Green. Mechanism of the production of small eddies from large ones. Proceedings of the Royal Society of London. Series A - Mathematical and Physical Sciences, 158(895):499–521, 1937.

Damien Violeau and Benedict D Rogers. Smoothed particle hydrodynamics (SPH) for free-surface flows: past, present and future. Journal of Hydraulic Research, 54(1):1–26, 2016.

Johannes Weirather, Vladyslav Rozov, Mario Wille, Paul Schuler, Christian Seidel, Nikolaus A Adams, and Michael F Zaeh. A smoothed particle hydrodynamics model for laser beam melting of Ni-based alloy 718. Computers & Mathematics with Applications, 78(7):2377–2394, 2019.

Chi Zhang, Massoud Rezavand, and Xiangyu Hu. A multi-resolution SPH method for fluid-structure interactions. Journal of Computational Physics, 429:110028, 2021.

A APPENDIX

Reverse Poiseuille plots.

Figure 3: Reverse Poiseuille x-y view of velocity fields (top) and position error (bottom) at time steps [10, 500, 990] (left to right) of the test rollout.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "5ByoWjLmUa",
"year": null,
"venue": "Physics4ML Poster",
"pdf_link": "/pdf/8bb328a44c09d0974f24318ab8c127d23a6c1158.pdf",
"forum_link": "https://openreview.net/forum?id=5ByoWjLmUa",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E($3$) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics",
"authors": [
"Artur Toshev",
"Gianluca Galletti",
"Johannes Brandstetter",
"Stefan Adami",
"Nikolaus A. Adams"
],
"abstract": "We contribute to the vastly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-equivariant counterparts. We benchmark two well-studied fluid flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow, and compare equivariant graph neural networks to their non-equivariant counterparts on different performance measures, such as kinetic energy or Sinkhorn distance. Such measures are typically used in engineering to validate numerical solvers. Our main findings are that while being rather slow to train and evaluate, equivariant models learn more physically accurate interactions. This indicates opportunities for future work towards coarse-grained models for turbulent flows, and generalization across system dynamics and parameters.",
"keywords": [
"equivariance",
"GNN",
"SPH",
"fluid mechanics",
"surrogate",
"Taylor-Green",
"Reverse Poiseuille"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "X9Z3YBrX55y",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=X9Z3YBrX55y",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Reply to Reviewer E1tY ",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Uj_q8yb6-7",
"year": null,
"venue": "ECAI 2016",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-1458",
"forum_link": "https://openreview.net/forum?id=Uj_q8yb6-7",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Towards Lifelong Object Learning by Integrating Situated Robot Perception and Semantic Web Mining",
"authors": [
"Jay Young",
"Valerio Basile",
"Lars Kunze",
"Elena Cabrio",
"Nick Hawes"
],
"abstract": "Autonomous robots that are to assist humans in their daily lives are required, among other things, to recognize and understand the meaning of task-related objects. However, given an open-ended set of tasks, the set of everyday objects that robots will encounter during their lifetime is not foreseeable. That is, robots have to learn and extend their knowledge about previously unknown objects on-the-job. Our approach automatically acquires parts of this knowledge (e.g., the class of an object and its typical location) in form of ranked hypotheses from the Semantic Web using contextual information extracted from observations and experiences made by robots. Thus, by integrating situated robot perception and Semantic Web mining, robots can continuously extend their object knowledge beyond perceptual models which allows them to reason about task-related objects, e.g., when searching for them, robots can infer the most likely object locations. An evaluation of the integrated system on long-term data from real office observations, demonstrates that generated hypotheses can effectively constrain the meaning of objects. Hence, we believe that the proposed system can be an essential component in a lifelong learning framework which acquires knowledge about objects from real world observations.",
"keywords": [],
"raw_extracted_content": "Towards Lifelong Object Learning by Integrating\nSituated Robot Perception and Semantic Web Mining\nJay Young1and Valerio Basile2and Lars Kunze1and Elena Cabrio2and Nick Hawes1\nAbstract.\nAutonomous robots that are to assist humans in their daily lives arerequired, among other things, to recognize and understand the mean-ing of task-related objects. However, given an open-ended set oftasks, the set of everyday objects that robots will encounter duringtheir lifetime is not foreseeable. That is, robots have to learn and ex-tend their knowledge about previously unknown objects on-the-job.Our approach automatically acquires parts of this knowledge (e.g.,theclass of an object and its typical location) in form of ranked hy-\npotheses from the Semantic Web using contextual information ex-\ntracted from observations and experiences made by robots. Thus,by integrating situated robot perception and Semantic Web mining,robots can continuously extend their object knowledge beyond per-ceptual models which allows them to reason about task-related ob-jects, e.g., when searching for them, robots can infer the most likelyobject locations. An evaluation of the integrated system on long-termdata from real office observations, demonstrates that generated hy-potheses can effectively constrain the meaning of objects. Hence, webelieve that the proposed system can be an essential component in alifelong learning framework which acquires knowledge about objectsfrom real world observations.\n1 Introduction\nIt is crucial for autonomous robots working in human environmentssuch as homes, offices or factories to have the ability to represent,reason about, and learn new information about the objects in their en-vironment. Current robot perception systems must be provided withmodels of the objects in advance, and their extensibility is typicallypoor. This includes both perceptual models (used to recognise theobject in the environment) and semantic models (describing what theobject is, what it is used for etc.). Equipping a robot a priori with a\n(necessarily closed) database of object knowledge is problematic be-cause the system designer must predict which subset of all the differ-ent domain objects is required, and then build all of these models (atime-consuming task). If a new object appears in the environment, oran unmodelled object becomes important to a task, the robot will beunable to perceive, or reason about, it. The solution to this problemis for the robot to learn on-line about previously unknown objects .\nThis allows robots to autonomously extend their knowledge of theenvironment, training new models from their own experiences andobservations.\n1Intelligent Robotics Lab, School of Computer Science, Univer-\nsity of Birmingham, United Kingdom, {j.young, l.kunze,\nn.a.hawes}@cs.bham.ac.uk\n2INRIA Sophia Antipolis M ´editerran ´ee, Sophia Antipolis, France,\n{valerio.basile, elena.cabrio}@inria.frThe online learning of perceptual and semantic object models is a\nmajor challenge for the integration of robotics and AI. In this paperwe address one problem from this larger challenge: given an obser-vation of a scene containing an unknown object, can an autonomoussystem predict the semantic description of this object. This is an im-portant problem because online-learnt object models ([5]) must beintegrated into the robot’s existing knowledge base, and a structured,semantic description of the object is crucial to this. 
Our solution combines semantic descriptions of perceived scenes containing unknown objects with a distributional semantic approach which allows us to fill gaps in the scene descriptions by mining knowledge from the Semantic Web. Our approach assumes that the knowledge onboard the robot is a subset of some larger knowledge base, i.e. that the object is not unknown beyond the robot's pre-configured knowledge. To determine which concepts from this larger knowledge base might apply to the unknown object, our approach exploits the spatio-temporal context in which objects appear, e.g. a teacup is often found next to a teapot and sugar bowl. These spatio-temporal co-occurrences provide contextual clues to the properties and identity of otherwise unknown objects.

This paper makes the following contributions:

• a novel distributional semantics-based approach for predicting both the semantic identity of an unknown, everyday object based on its spatial context and its most likely location based on semantic relatedness;
• an extension to an existing semantic perception architecture to provide this spatial context; and
• an evaluation of these techniques on real-world scenes gathered from a long-term autonomous robot deployment.

The remainder of the paper is structured as follows. In Section 2, we first state the problem of acquiring semantic descriptions for unknown objects and give an overview of our approach. We then discuss related work in Section 3. In Section 4, we describe the underlying robot perception system and explain how it is integrated with a Semantic Web mining component. Section 5 describes how the component generates answers/hypotheses to web queries from the perception module. In Section 6, we describe the experimental setup and present the results. Before we conclude in Section 8, we provide a detailed discussion of our approach in Section 7. We also make available our data set and software source code for the benefit of the community at: http://github.com/aloof-project/

Room: kitchen
Surface: countertop
Furniture: refrigerator, kitchen cabinet, sink
Small Objects: bowl, teabox, instant coffee, water boiler, mug

Figure 1. Perceived and interpreted kitchen scene, with various objects.

2 Problem Statement and Methodology

2.1 Problem Statement

The problem we consider in this work can be summarized as follows: given the context of a perceived scene and the experience from previous observations, predict the class of an object identified as 'unknown'. The context of a scene can include information about the types and locations of recognized small objects, furniture, and the type of the room where the observation has been made.

In this paper we use the following running example (Figure 1) to illustrate the problem and our approach:

While operating 24/7 in an office environment, a robot routinely visits the kitchen and scans all surfaces for objects. On a kitchen counter it finds several household objects: a bowl, a teabox, a box of instant coffee, and a water boiler. However, one of the segmented objects, a mug, cannot be identified as one of the known object classes.
The robot's task is to identify the unknown object solely based on the context of the perceived scene and scenes that have been previously perceived and in which the respective object was identified.

The problem of predicting the class of an object purely based on the context can also be seen as top-down reasoning or top-down processing of information. This stands in contrast to data-driven bottom-up processing where, for example, a robot tries to recognize an object based on its sensor data. In top-down processing, an agent, or the robot, has some expectations of what it will perceive based on commonsense knowledge and its experiences. For example, if a robot sees a fork and a knife close to each other, and a flat unknown object with a square bounding box next to them, it might deduce that the unknown object is probably a plate. In the following, we refer to this kind of processing, which combines top-down reasoning and bottom-up perception, as knowledge-enabled perception.

Systems such as the one described in this paper are key components of integrated, situated AI systems intended for life-long learning and extensibility. We currently develop the system with two main use-cases in mind, both stemming from the system's capability to suggest information about unknown objects based on the spatial context in which they appear. The first use case is as part of a crowdsourcing platform, allowing humans that inhabit the robot's environment to help it label unknown objects. Here, the prediction system is used to narrow down the list of candidate labels and categories to be shown to users to select from, alongside images of unknown objects the robot has encountered. Our second use case will be to help form more informative queries for larger machine learning systems, in our case an image classification system trained on extensive, though categorised, image data from websites like Amazon. Here, having some hints as to an object's identity, such as a distribution over a set of possible labels or categories it might belong to or be related to, could produce a significant speed boost by letting the classification system know what objects it does not have to test against. In this case, we aim to use the system to help a robot make smarter, more informed queries when asking external systems questions about the world.

2.2 Our Approach

In this work we address the problem of predicting information about the class of an object based on the perceived scene context by mining the Semantic Web. The extracted scene context includes a list of recognized objects and their spatial relations to each other, plus additional information from a semantic environment map. This information is then used to mine potential object classes based on the semantic relatedness of concepts in the Web. In particular, we use DBpedia as a resource for object knowledge, and will later on use WordNet to investigate object taxonomies. The result of the Web mining component is a ranked list of potential object classes, expressed as DBpedia entries, which gives us access to further information beyond just the class of an object, such as categorical knowledge. An overview of the entire developed system is given in Figure 2.

Overall, we see our context-based class prediction approach as a means to restrict the number of applicable classes for an object. The aim of our knowledge-enabled perception system is not to replace a bottom-up perception system but rather to complement it as an additional expert.
For example, in the context of a crowdsourcing-based labeling platform, our system could generate label suggestions for users. Thereby, labeling tasks can be performed in less time and object labels would be more consistent across users. Hence, we believe that our system provides an essential functionality in the context of lifelong object learning.

Before we present related work in Section 3, we briefly discuss various resources of object knowledge.

Resources for object knowledge. To provide a common format for object knowledge, and to access the wide variety of structured knowledge available on the Web, we link the observations made by the robot to DBpedia concepts. DBpedia [2] is a crowd-sourced community effort started by the Semantic Web community to extract structured information from Wikipedia and make this information available on the Web. DBpedia has a broad scope of entities covering different domains of human knowledge: it contains more than 4 million things classified in a consistent ontology and denoted by a URI-based reference of the form http://dbpedia.org/page/Teapot for the Teapot concept. DBpedia supports sophisticated queries (using an SQL-like query language for RDF called SPARQL) to mine relationships and properties associated with Wikipedia resources. We link the objects that the robot can encounter in natural environments to DBpedia concepts, thus exploiting this structured, ontological knowledge.

Figure 2. System overview. The robot perception component identifies all object candidates within a scene. All object candidates that can be recognized are labeled according to their class; all remaining objects are labeled as 'unknown'. Furthermore, the component computes the spatial relations between all objects in the scene. Together with context information from a semantic environment map, the robot generates a query to a web service which is processed by the Semantic Web mining component. Based on the semantic relatedness of objects, the component provides a ranked list of the potential classes for all unknown objects.

BabelNet [16] is both a multilingual encyclopedic dictionary and a semantic network which connects concepts and named entities in a very large network of semantic relations (about 14 million entries). BabelNet covers, and is obtained from, the automatic integration of several resources, such as WordNet [6], Wiktionary and Wikipedia. Each concept contained in BabelNet is represented as a vector in a high-dimensional geometric space in the NASARI resource, which we use to compute the semantic relatedness among objects.
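To make the kind of DBpedia lookup described above concrete, here is a minimal sketch using DBpedia's public SPARQL endpoint via the SPARQLWrapper Python library. The endpoint URL and the dct:subject predicate are standard DBpedia conventions, but the specific query (fetching the Wikipedia categories of the Teapot concept) is our own illustration, not necessarily a query issued by the system itself.

from SPARQLWrapper import SPARQLWrapper, JSON

# Illustrative query: retrieve the DBpedia categories linked to the
# Teapot concept via dct:subject.
sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?category WHERE { dbr:Teapot dct:subject ?category }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["category"]["value"])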
3 Related Work

To obtain information about unknown objects from the Web, a robot can use perceptual or knowledge-based queries. Future systems will inevitably need to use both. In this paper we focus on the knowledge-based approach, but this can be seen as complementary to systems which use image-based queries to search databases of labelled images for similarity, e.g. [17].

Although the online learning of new visual object models is currently a niche area in robotics, some approaches do exist [5, 7]. These approaches are capable of segmenting previously unknown objects in a scene and building models to support their future re-recognition. However, this work focuses purely on visual models (what objects look like), and does not address how the learnt objects are described semantically (what objects are).

The RoboSherlock framework [1] (which we build upon, see Section 4.1) is one of the most prominent projects to add semantic descriptions to objects for everyday environments, but the framework must largely be configured a priori with knowledge of the objects in its environment. It does support more open-ended performance, e.g. through the use of Google Goggles, but does not use spatial or semantic context for its Web queries, only vision. The same research group pioneered Web and cloud robotics, where tools such as KNOWROB [20] (also used in RoboSherlock) both formalised robot knowledge and capabilities, and used this formal structure to exploit the Web for remote data sources and knowledge sharing. In a more supervised setting, many approaches have used humans to train mobile robots about new objects in their environment [9, 19], and robots have also used Web knowledge sources to improve their performance in closed worlds, e.g. the use of object-room co-occurrence data for room categorisation in [10].

The spatial organisation of a robot's environment has also been previously exploited to improve task performance. For example, [21, 12] present a system in which the previous experience of spatial arrangements of desktop objects is used to refine the results of a noisy object categorisation system. This demonstrates the predictive power of spatial arrangements, which is something we also exploit in this paper. However, this prior work matched between scenes in the same environment and input modality. In our work we connect spatial arrangements in the robot's situated experience to structured knowledge on the Web.

Our predictions for unknown objects rely on determining the semantic relatedness of terms. This is an important topic in several areas, including data mining, information retrieval and web recommendation. [18] applies ontology-based similarity measures in the robotics domain. Background knowledge about all the objects the robot could encounter is stored in an extended version of the KNOWROB ontology. Then, WUP similarity [22] is applied to calculate relatedness of the concept types by considering the depth of the concepts and the depth of their lowest common super-concept in the ontology. [14] presents an approach for computing the semantic relatedness of terms using ontological information extracted from DBpedia for a given domain, using the results for music recommendations. Contrary to these approaches, we compute the semantic relatedness between objects by leveraging the vectorial representation of the DBpedia concepts provided by the NASARI resource [3]. This method links back to earlier distributional semantics work (e.g. Latent Semantic Analysis [13]), with the difference that here concepts are represented as vectors, rather than words.

4 Situated Robot Perception

4.1 The RoboSherlock Framework

To be able to detect both known and unknown objects in its environment, a robot must have perceptual capabilities. Our perception pipeline is based on the RoboSherlock framework [1], an open-source framework for implementing perception systems for robots, geared towards interaction with objects in human environments.
The use of RoboSherlock provides us with a suite of vision and perception algorithms. Following the paradigm of Unstructured Information Management (as used by the IBM Watson project), RoboSherlock approaches perception as a problem of content analysis, whereby sensor data is processed by a set of specialised information extraction and processing algorithms called annotators. The RoboSherlock perception pipeline is a sequence of annotators which include plane segmentation, RGB-D object segmentation, and object detection algorithms. The output of the pipeline includes 3D point clusters, bounding boxes of segmented objects (as seen in Figure 2), and feature vectors (colour, 3D shape and texture) describing each object. These feature vectors are important as they allow the robot to track unknown objects as it takes multiple views of the same scene. Though in this paper we work with a collected and annotated dataset, we do not require the segmentation or 3D object recognition steps RoboSherlock can provide via LINE-MOD-3D [11], though this component is used in our full robot and simulated system, where a range of perception algorithms are connected and used instead of dataset input. We make use of all other RoboSherlock capabilities in the pipeline to process the data and provide a general architecture for our representation and extraction of historical spatial context, web query generation and the application of Qualitative Spatial Relations, which we will discuss in a following section.

4.2 Scene Perception

In this paper we assume the robot is tasked with observing objects in natural environments. Whilst this is not a service robot task in itself, it is a precursor to many other task-driven capabilities such as object search, manipulation, human-robot interaction, etc. Similar to prior work (e.g. [18]), we assume that the robot already has a semantic map of its environment which provides it with at least annotations of supporting surfaces (desks, worktops, shelves etc.), plus the semantic category of the area in which the surface is located (office, kitchen, meeting room etc.). Surfaces and locations are linked to DBpedia entries just as object labels are, typically as entities under the categories Furniture and Room respectively.

From here, we have access to object, surface and furniture labels described by the data, along with 3D bounding boxes via 3D point data. In the kitchen scene the robot may observe various objects typical of the room, such as a refrigerator, a cabinet, mugs, sugar bowls or coffee tins. Their positions in space relative to a global map frame are recorded, and we can then record the distance between objects, estimate their size (volume) and record information about their co-occurrences, and the surfaces upon which they were observed, by updating histograms attached to each object.

In the following we assume that each scene only contains a single unknown object, but the approach generalises to multiple unknown objects treated independently. Joint inference over multiple unknown objects is future work.

4.3 Spatial and Semantic Context Extraction

In order to provide additional information to help subsequent components predict the unknown object, we augment the scene description with additional spatial and semantic context information, describing the relationships between the unknown object and the surrounding known objects and furniture.
This context starts from the knowledge we already have in the semantic map: labels for the room and surface the object is supported by.

We make use of Qualitative Spatial Relations (QSRs) to represent information about objects [8]. QSRs discretise continuous spatial measurements, particularly relational information such as the distance and orientation between points, yielding symbolic representations of ranges of possible continuous values. In this work, we make use of a qualitative distance measure, often called a Ring calculus. When observing an object, we categorise its distance relationship with any other objects in a scene with the following set of symbols: near_0, near_1, near_2 and near_3, where near_0 is the closest. This is accomplished by placing sets of thresholds on the distance function between objects, taken from the centroid of the 3D cluster. For example, this allows us to represent that the mug is closer to the spoon than the kettle (near_0(mug, spoon), near_2(mug, kettle)) without using floating-point distance values based on noisy and unreliable readings from the robot's sensors. The RoboSherlock framework provides a measure of the qualitative size of objects by thresholding the values associated with the volume of 3D bounding boxes around objects as they are observed. We categorise objects as small, medium or large in this way, allowing the robot to represent and compare object sizes. Whilst our symbolic abstractions are currently based on manual thresholds, approaches exist for learning parametrisations of QSRs through experience (e.g. [23]) and we will try this in the future. For now, we choose parameters for our qualitative calculi tuned by our own knowledge of objects in the world, and how they might relate. We use near_0 for distances in cluster space lower than 0.5, near_1 for distances between 0.5 and 1.0, near_2 for distances between 1.0 and 3.5, and near_3 for distances greater than 3.5.

As the robot makes subsequent observations, it may re-identify the same unknown object in additional scenes. When this happens we store all the scene descriptions together, providing additional context descriptions for the same object. In Figure 3 we show part of the data structure describing the objects that co-occurred with a plate in a kitchen, and their most common qualitative spatial relations.

"co_occurrences": [
  ["Coffee", 0.5, "near_0"],
  ["Kitchen_side", 1.0, "near_0"],
  ["Kitchen_cabinet", 1.0, "near_1"],
  ["Fridge", 0.625, "near_1"],
  ["Teabox", 0.625, "near_0"],
  ["Waste_container", 0.375, "near_2"],
  ["Utensil_rack", 0.625, "near_1"],
  ["Sugar_bowl_(dishware)", 0.625, "near_0"],
  ["Electric_water_boiler", 0.875, "near_1"],
  ["Sink", 0.625, "near_1"]],
"context_history": [
  ["Kitchen", 1.0, "Kitchen_counter", 1],
  ["Office", 0.0, "Desk", 0]],
"context_room_label": "Kitchen",
"context_surface_label": "Kitchen_counter",

Figure 3. An example data fragment taken from a series of observations of a Plate in a series of kitchen scenes, showing object, furniture, room and surface co-occurrence.
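The qualitative distance abstraction above is straightforward to implement. Here is a minimal sketch using the threshold values (0.5, 1.0, 3.5) stated in the text, assuming distances are taken between 3D cluster centroids; the function and variable names are our own placeholders rather than the system's actual API.

import numpy as np

# Ring-calculus thresholds, in cluster-space units, as given in the paper.
NEAR_THRESHOLDS = [(0.5, "near_0"), (1.0, "near_1"), (3.5, "near_2")]

def qualitative_distance(centroid_a, centroid_b):
    # Map a metric centroid distance onto a near_0..near_3 symbol.
    d = np.linalg.norm(np.asarray(centroid_a) - np.asarray(centroid_b))
    for threshold, symbol in NEAR_THRESHOLDS:
        if d < threshold:
            return symbol
    return "near_3"  # anything beyond 3.5 is considered far

# e.g. qualitative_distance(mug_centroid, spoon_centroid) -> "near_0"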
5 Semantic Web Mining

For an unknown object, our aim is to be able to provide a list of likely DBpedia concepts to describe it, and we will later consider and compare the merits and difficulties associated with providing object labels and object categories. As this knowledge is not available on the robot (the object is locally unknown), it must query an external data source to fill this knowledge gap. We therefore use the scene descriptions and spatial contexts for an unknown object to generate a query to a Web service. In return, this service provides a list of the possible DBpedia concepts which may describe the unknown object. We expect the robot to use this list in the future either to automatically label a new object model, or to use the list of possible concepts to guide a human through a restricted (rather than open-ended) learning interaction.

The Web service provides access to object- and scene-relevant knowledge extracted from Web sources. It is queried using a JSON structure sent via an HTTP request (shown in Figure 2). This structure aggregates the spatial contexts collected over multiple observations of the unknown object. In our current work we focus on the co-occurrence structure. Each entry in this structure describes an object that was observed with the unknown object, the ratio of observations this object was in, and the spatial relation that most frequently held between the two. The room and surface fields describe where the observations were made.

Upon receiving a query, the service computes the semantic relatedness between each object included in the co-occurrence structure and every object in a large set of candidate objects from which possible concepts are drawn (we discuss the nature of this set in Section 6).

This semantic relatedness is computed by leveraging the vectorial representation of the DBpedia concepts provided by the NASARI resource [3]. In NASARI, each concept contained in the multilingual resource BabelNet [16] is represented as a vector in a high-dimensional geometric space. The vector components are computed with the word2vec [15] tool, based on the co-occurrence of the mentions of each concept, in this case using Wikipedia as the source corpus.

Since the vectors are based on distributional semantic knowledge (based on the distributional hypothesis: words that occur together often are likely semantically related), vectors that represent related entities end up close in the vector space. We are able to measure such relatedness by computing the inverse of the cosine distance between two vectors. For instance, the NASARI vectors for Pointing_device and Mouse_(computing) have relatedness 0.98 (on a continuous scale from 0 to 1), while Mousepad and Teabox are 0.26 related.

The system computes the aggregate of the relatedness of a candidate object to each of the scene objects contained in the query. Using relatedness to score the likely descriptions of an unknown object follows from the intuition that related objects are more likely than unrelated objects to appear in a scene; e.g., to identify a Teapot it is more useful to know that there is a Teacup at the scene rather than a Desk.

Formally, given n observed objects in the query q_1, ..., q_n, and m candidate objects in the universe under consideration o_1, ..., o_m ∈ O, each o_i is given a score that indicates its likelihood of being the unknown object by aggregating its relatedness across all observed objects. The aggregation function can be as simple as the arithmetic mean of the relatedness scores, or a more complex function.
For instance, if the aggregation function is the product, the likelihood of an object o_i is given by:

likelihood(o_i) = ∏_{j=1}^{n} relatedness(o_i, q_j)

For the sake of this work, we experimented with the product as the aggregating function. This way of aggregating similarity scores gives higher weight to highly related pairs, as opposed to the arithmetic mean, where each query object contributes equally to the final score. The idea behind this choice is that if an object is highly related to the target, it should be regarded as more informative.

The information carried by each query is richer than just a bare set of object labels. One piece of knowledge that can be exploited to obtain a more accurate prediction is the relative position of the observed objects with respect to the target unknown object. Since this information is represented as a discrete level of proximity (from near_0 to near_3), we can use it as a threshold to determine whether or not an object should be included in the relatedness calculation. In this work we discard any object related by near_3, based on the intuition that the further away an object is spatially, the less related it is. Section 6.2 includes an empirical investigation into this approach.

For clarity, here we present an example of the execution of the algorithm described above on the query corresponding to the kitchen example seen throughout the paper. The input to the Web module is a query containing a list of pairs (object, distance): (Refrigerator, 3), (Kitchen_cabinet, 3), (Sink, 3), (Kitchen_cabinet, 3), (Sugar_bowl_(dishware), 1), (Teabox, 1), (Instant_coffee, 2), (Electric_water_boiler, 3). For the sake of readability, let us assume a set of candidate objects made up of only three elements: Tea_cosy, Pitcher_(container) and Mug. Table 1 shows the full matrix of pairwise similarities.

                          Tea_cosy   Pitcher_(container)   Mug
Refrigerator              0.473      0.544                 0.522
Sink                      0.565      0.693                 0.621
Sugar_bowl_(dishware)     0.555      0.600                 0.627
Teabox                    0.781      0.466                 0.602
Instant_coffee            0.821      0.575                 0.796
Electric_water_boiler     0.503      0.559                 0.488
product                   0.048      0.034                 0.047

Table 1. Object similarity of the three candidates Tea_cosy, Pitcher_(container) and Mug to the objects observed at the example kitchen scene. The last line shows the similarity scores aggregated by product.

Among the three candidates, the one with the highest aggregated score is Tea_cosy, followed by Mug and Pitcher_(container). For reference, the ground truth in the example query is Mug, which ended up second in the final ranking returned by the algorithm.

We can also alter the performance of the system using the frequency of the objects returned by the query. The notion of frequency, taken from [4], is a measure based on the number of incoming links in the Wikipedia page of an entity. Using this measure we can choose to filter uncommon objects from the results of the query, by thresholding with a given frequency value. In the example above, the frequency counts of Tea_cosy, Pitcher_(container) and Mug are respectively 25, 161 and 108. Setting a threshold anywhere between 25 and 100 would filter Tea_cosy out of the result, moving up the ground truth to rank 1. Similarly, we can filter out objects that are too far from the target by imposing a limit on their observed distance. A threshold of 2 (inclusive) for the distance of the objects in the example would exclude Refrigerator, Kitchen_cabinet, Sink and Electric_water_boiler from the computation.
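The scoring and filtering steps described above condense into a few lines of code. Below is a minimal sketch, assuming the NASARI vectors have been preloaded into a dict mapping concept names to NumPy arrays; the loading step and the function names are our own placeholders, while the product aggregation, the distance cut-off and the frequency threshold follow the text.

import numpy as np

def relatedness(v1, v2):
    # Cosine similarity between two NASARI concept vectors.
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def rank_candidates(query, candidates, vectors, freq, max_dist=2, min_freq=20):
    # query:      list of (concept, qualitative distance level) pairs
    # candidates: iterable of candidate concept names (the set O)
    # vectors:    dict concept -> NASARI vector (assumed preloaded)
    # freq:       dict concept -> Wikipedia incoming-link count
    observed = [obj for obj, dist in query if dist <= max_dist]  # drop near_3
    scores = {}
    for cand in candidates:
        if freq.get(cand, 0) < min_freq:   # drop uncommon concepts
            continue
        # Product aggregation: highly related pairs dominate the score.
        scores[cand] = float(np.prod([relatedness(vectors[cand], vectors[o])
                                      for o in observed]))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)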
Other useful information available from the spatial context includes the label of the room, surface or furniture where the unknown object was observed. Unfortunately, in order to leverage such information, one needs a complete knowledge base containing these kinds of relations, and such a collection is unavailable at the moment. However, the room and the surface labels are included in the relatedness calculations along with the observed objects.

6 Experiments

In order to evaluate the effectiveness of the method we propose in predicting unknown objects' labels, we perform some experimental tests. In this section we report on the experimental setup and the results we obtained, before discussing them in further detail.

6.1 Experimental Set-up

Our experimental evaluation is based on a collection of panoramic RGB-D scans taken from an autonomous mobile service robot deployed in a working office for a month. It took these scans at fixed locations according to a flexible schedule. After the deployment we annotated the objects and furniture items in these sweeps, providing each one with a DBpedia concept. This gives us 1329 real-world scenes (384 kitchen, 945 office) on which we can test our approach. From this data, our evaluation treats each labeled object in turn as an unknown object in a leave-one-out experiment, querying the Web service with the historical spatial context data for the unknown object, similar to that shown in Figure 3.

Figure 4. An example office scene as an RGB image from our real-world deployment. Our data contains 945 office scenes and 384 kitchen scenes.

In all of the experiments we compare the ground truth (the known label in the data) to the DBpedia concepts predicted by our system. We measure performance based on two metrics. The first, WUP similarity, measures the semantic similarity between the ground truth and the concept predicted as most likely for the unknown object. The second measure is the ranking of the ground truth in the list of suggested concepts.

For the experiments, the set of candidate objects (O in Section 5) was created by adding all concepts from the DBpedia ontology connected to the room types in our data up to a depth of 3. For example, starting from office leads us to office equipment, computers, stationery, etc. This resulted in a set of 1248 possible concepts. We set the frequency threshold to 20, meaning we ignored any suggested concept which had a frequency value lower than this. This means uncommon concepts such as Chafing_dish (frequency = 13) would always be ignored if suggested, but more common ones such as Mouse_(computing) (frequency = 1106) would be kept.
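The leave-one-out protocol and the two metrics above can be sketched as a simple loop. The sketch below abstracts the dataset access and the actual suggestion and WUP computations behind callables, so the names and the flat label-list representation are simplifying assumptions; the real system queries with the richer historical context structure of Figure 3.

def leave_one_out(scenes, suggest, wup):
    # scenes:  list of scenes, each a list of ground-truth object labels
    # suggest: callable(context_labels) -> ranked list of candidate labels
    # wup:     callable(label_a, label_b) -> WUP similarity in [0, 1]
    wup_scores, ranks = [], []
    for scene in scenes:
        for i, truth in enumerate(scene):
            context = scene[:i] + scene[i + 1:]        # hold one object out
            ranking = suggest(context)
            wup_scores.append(wup(ranking[0], truth))  # top-1 similarity
            if truth in ranking:
                ranks.append(ranking.index(truth) + 1) # rank of ground truth
    return sum(wup_scores) / len(wup_scores), ranks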
6.2 Results

Figure 5. WUP similarity measure between WordNet synsets of the ground truth and the top-ranked result, with t = 50, p = 2, using the prod method. Values closer to 1 indicate similarity, and values closer to 0 indicate dissimilarity.

Figure 6. Rank in result by object category, matching the highest-ranked object with a category shared with the ground truth in the result set, for varying values of the parameter t, with p = 2 and the prod method. Ranks closer to 1 are better. Ranking is determined by the position in the result of the first object with an immediate category in common with the ground truth. 56% (9/16) achieve a rank <= 10.

Figure 7. Rank in result by object label, matching the label of the ground truth in the result set, for varying values of the parameter t, with p = 2 and the prod method. Increasing values of t can cause some objects to be excluded from the result set entirely, such as the Teaware or Monitor at t = 50.

Figure 5 shows the result of calculating the WUP similarity [22] between the WordNet synsets of the ground truth and the top-ranked result from our semantic web mining system. WUP measures semantic relatedness by considering the depth of two synsets in addition to the depth of their Lowest Common Subsumer (LCS). This means that large leaps between concepts will reduce the eventual similarity score more than small hops might. To do this we used readily available mappings to link DBpedia concepts in our system to WordNet synsets, which are themselves organised as a hierarchy of is-a relations. This is in contrast to DBpedia, which is organised as a directed acyclic graph, and while that still means that we could apply the WUP measure to DBpedia nodes directly, WordNet offers a more structured taxonomy of concepts that is better suited to this kind of work. This serves to highlight the importance of a multi-modal approach to the use of such ontologies. In the results, the system predicted Lightpen when the ground truth was Mouse, producing a WUP score of 0.73, with the LCS being the Device concept, with Mouse and Lightpen having depth 10 and 11 respectively, and Device having depth 8, measured from the root node Entity. In this case, the system suggested an object that fell within 3 concepts of the ground truth, and this is true for the majority of the results in Figure 5. However, in the case of refrigerator as the ground truth, the system suggests keypad as the highest-ranked result, producing a WUP score of 0.52. Here, the LCS is at depth 6 with the concept Artifact, the ground truth refrigerator is at depth 13 and the prediction keypad is at depth 10. While in this case the node distance between the LCS and the prediction is 4, where in the previous example it was 3, the WUP score is much worse here (0.73 vs 0.52) as there are more large leaps across conceptual space. Our best result in this experiment is for Printer as the ground truth, for which the system suggests keypad again; however, the LCS here is the peripheral node at depth 10, where printer is at depth 11 and keypad is at depth 12.

                Mean    Median   Std. Dev   Variance   Range
WUP             0.69    0.70     0.12       0.01       0.43
Category Rank   17.00   9.50     20.17      407.20     73.00
Object Rank     50.93   36.5     50.18      2518.32    141

Figure 8. Statistics on WUP and rank-in-result data, both for t = 50, p = 2, using prod.

The system suggests a range of objects that are closely related to the unknown object, inferred only from its spatial context and knowledge of the objects and environment around it. From here, this allows us to generate a list of candidate concepts which we can use in a second stage of refinement, such as by presentation to a human-in-the-loop.

Figure 6 shows how frequency thresholding affects the performance of the system. In this experiment we consider the position in the ranked result of the first object with an immediate parent DBpedia category in common with the ground truth. Doing so essentially maps the larger set of object labels to a smaller set of object categories. This is in contrast to considering the position in the result of the specific ground-truth label, as shown in Figure 7, and allows us to generate a ranking over categories of objects. To ensure categories remain relevant to the situated objects we are interested in, we prune a number of DBpedia categories, such as those listing objects invented in certain years or in certain countries. We regard these as being overly broad, providing a more abstract degree of semantic knowledge about objects than we are interested in. As such, we retrieve the rank-in-result of the first object that shares an immediate DBpedia category with the ground truth, which in the case of Electric_water_boiler turns out to be Samovar, a kind of Russian water boiler, as both share the immediate ancestor category Boilers_(cookware). The Samovar, and thus the boiler category, appears at rank 12, whereas the specific label Electric_water_boiler appears near the end of the result set of 1248 objects, which covers 641 unique DBpedia categories. In our results, categories associated with 9 of the 16 objects (56%) appear within the result's top 10 entries. Here, as we filter out uncommon words by increasing the filter threshold t, we improve the position of the concept in the list. Whilst this allows us to definitively remove very unlikely answers that appear related due to some quirk of the data, the more we do so, the more we also start to reduce the ability of the robot to learn about certain objects. This is discussed further in Section 7.
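For reference, the WUP measure used in the results above has a compact closed form in terms of synset depths. The sketch below uses the depth values from the Mouse/Lightpen example in the text; note that the exact score depends on the depth-counting convention of the particular implementation, so this convention yields roughly 0.76 rather than the reported 0.73.

def wup_similarity(depth_a, depth_b, depth_lcs):
    # Wu-Palmer similarity from node depths in an is-a taxonomy.
    return 2.0 * depth_lcs / (depth_a + depth_b)

# Mouse (depth 10) vs. Lightpen (depth 11) with LCS Device (depth 8),
# as in the example above; prints ~0.76 under this depth convention.
print(wup_similarity(10, 11, 8))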
Unlike WordNet synsets and concepts, DBpedia categories are more loosely defined and structured, being generated from Wikipedia, but this means they are typically richer in the kind of semantic detail and broad knowledge representation that may be more suitable for presentation to humans, or more easily mapped to human-authored domains. While WordNet affords us access to a well-defined hierarchy of concepts, categories like device and container are fairly broad, whereas DBpedia categories such as Video_game_control_methods or Kitchenware describe a smaller set of potential objects, but may be more semantically meaningful when presented to humans.

7 Discussion

Overall, whilst the results of our object category prediction system show that it is possible for this novel system to generate some good predictions, the performance is variable across objects. There are a number of factors that influence performance and lead to this variability. The first issue is that the current system does not rule out suggestions of things it already knows. For example, if the unknown object is a keyboard, the spatial context and relatedness may result in a top suggestion of a mouse, but as the system already knows about that, it is probably a less useful suggestion. However, it is possible that the unknown object could be a mouse, but one that has not been recognised correctly. Perhaps the most fundamental issue in the challenge of predicting object concepts from limited information is how to limit the scope of suggestions. In our system we restricted ourselves to 1248 possible concepts, automatically selected from DBpedia by ontological connectivity. This is clearly a tiny fraction of all the possible objects in existence.
On the one hand, this means that our autonomous robot will potentially be quite limited in what it can learn about. On the other hand, a large number of this restricted set of objects still make for highly unlikely suggestions. One reason for this is the corpus-based, automatically extracted nature of DBpedia, which means that it includes interesting objects which may never be observed by a robot (e.g. Mangle_(machine)). More interesting, though, is the effect that the structure of the ontology has on the nature of suggestions. In this work we have been using hierarchical knowledge to underpin our space of hypotheses (i.e. the wider world our robot is placed within), but have not addressed this within our system. This leads to a mismatch between our expectations and the performance of the system with respect to arbitrary precision. For example, if the robot sees a joystick as an unknown object, an appropriate DBpedia concept would seem (to us) to be Controller_(computing) or Joystick. However, much more specific concepts such as Thrustmaster and Logitech_Thunderpad_Digital are also available to the system in its current form. When learning about an object for the first time, it seems much more useful for the robot to receive a suggestion of the former kind (allowing it to later refine its knowledge to locally observable instances) than the latter (which is unlikely to match the environment of the robot). Instead, returning the category of the ranked objects our system suggests allows us to go some way towards this, as shown in Figure 6, but still provides us with a range of possible candidate categories – though narrowed down from 641 possible categories, to in some cases fewer than 5. As such, from here we can switch to a secondary level of labelling: that of a human-in-the-loop. We will next integrate the suggestion system with a crowd-sourcing platform, allowing humans that inhabit the robot's environment to help it label unknown objects. The suggestion system will be used to narrow down the list of candidate categories that will be shown to users as they provide labels for images of objects the robot has seen and learned, but has not yet labelled. While further work is necessary to improve on the current 56% of objects that have a category in the top-10 ranked result, we expect that the current results will be sufficient to allow a human to pick a good label when provided with a brief list of candidates and shown images of the unknown objects. Such systems are crucial for life-long situated learning for mobile robot platforms, and will allow robot systems to extend their world models over time, and learn new objects and patterns.

The issue of how to select the set of possible objects to draw suggestions from is at the heart of the challenge of this work: make the set too large and it is hard to get good, accurate suggestions, but make it too small and you risk ruling out objects that your robot may need to know about. Whilst the use of frequency-based filtering improved our results by removing low-frequency outliers, more semantically aware approaches may be necessary to improve things further. Further improvements can be made; for instance, we largely do not use current instance observations about the object, but prefer its historical context when forming queries. This may be the wrong thing to do in some cases; in fact, it may be preferable to weight observations of object context based on their recency.
The difference between historical context and the context of an object in a particular instance may provide important contextual clues, and allow us to perform other tasks such as anomaly detection, or boost the speed of object search tasks.

One issue we believe our work highlights is the need to integrate a multi-modal approach to the use of differing corpora and ontologies. For instance, the more formal WordNet hierarchy was used to calculate the semantic relatedness of our experiment results, rather than the less formal DBpedia ontology. However, we hold that the DBpedia category relationships are more useful in the human-facing component of our system. There exist other ontologies such as YAGO, which integrates both WordNet and DBpedia along with its own category system, that will certainly be of interest to us in the future as we seek to improve the performance of our system. One of our primary goals is to better exploit the hierarchical nature of these ontologies to provide a way of retrieving richer categorical information about objects. While reliably predicting the specific object label from spatial context alone is difficult, we can provide higher-level ancestor categories that could be used to spur further learning or improve previous results. As such, we view the prediction process as one of matching the characteristics of a series of increasingly more specific categories to the characteristics of an unknown object, rather than immediately attempting to match the specific lowest-level characteristics and produce the direct object label. This requires an ontology both formally defined enough to express a meaningful hierarchy of categories for each item, and broad enough to provide a mapping to a large set of common-sense categories and objects. It is not clear yet which combination of existing tools will provide the best route to accomplishing this.

8 Conclusions

In this paper we presented an integrated system for solving a novel problem: the suggestion of concept labels for unknown objects observed by a mobile robot. Our system stores the spatial contexts in which objects are observed and uses these to query a Web-based suggestion system to receive a list of possible concepts that could apply to the unknown object. These suggestions are based on the relatedness of the objects observed with the unknown object, and can be improved by filtering the results based on both frequency and spatial proximity. We evaluated our system on data from real office observations and demonstrated how various filter parameters changed the match of the results to ground-truth data.

We showed that the suggestion system provides object label suggestions with a reasonably high degree of semantic similarity, as measured by WUP similarity on WordNet synsets. We also achieved success in retrieving the categories of objects, rather than their direct labels. In the future we will explore the hierarchical nature of the knowledge used for object concept suggestions, explore different corpora and ontologies on which to base the suggestion system, and perform a situated evaluation of our system on a mobile robot with additional perceptual learning capabilities, and crowd-sourcing functionality to label objects on-line with the help of humans using the suggestion system.

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under the ALOOF project (CHIST-ERA program).
REFERENCES

[1] Michael Beetz, Ferenc Bálint-Benczédi, Nico Blodow, Daniel Nyga, Thiemo Wiedemeyer, and Zoltán-Csaba Marton, 'RoboSherlock: Unstructured information processing for robot perception', in ICRA, (2015).
[2] Christian Bizer, Jens Lehmann, Georgi Kobilarov, Sören Auer, Christian Becker, Richard Cyganiak, and Sebastian Hellmann, 'DBpedia - a crystallization point for the web of data', Web Semant., 7(3), 154–165, (September 2009).
[3] José Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli, 'NASARI: a novel approach to a semantically-aware representation of items', in HLT-NAACL, eds., Rada Mihalcea, Joyce Yue Chai, and Anoop Sarkar, pp. 567–577. The Association for Computational Linguistics, (2015).
[4] Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes, 'Improving efficiency and accuracy in multilingual entity extraction', in Proceedings of the 9th International Conference on Semantic Systems (I-Semantics), (2013).
[5] T. Faeulhammer, R. Ambrus, C. Burbridge, M. Zillich, J. Folkesson, N. Hawes, P. Jensfelt, and M. Vincze, 'Autonomous learning of object models on a mobile robot', IEEE RAL, PP(99), 1–1, (2016).
[6] Christiane Fellbaum, WordNet: An Electronic Lexical Database, Bradford Books, 1998.
[7] Ross Finman, Thomas Whelan, Michael Kaess, and John J Leonard, 'Toward lifelong object segmentation from change detection in dense rgb-d maps', in ECMR. IEEE, (2013).
[8] L. Frommberger and D. Wolter, 'Structural knowledge transfer by spatial abstraction for reinforcement learning agents', Adaptive Behavior, 18(6), 507–525, (December 2010).
[9] Guglielmo Gemignani, Roberto Capobianco, Emanuele Bastianelli, Domenico Bloisi, Luca Iocchi, and Daniele Nardi, 'Living with robots: Interactive environmental knowledge acquisition', Robotics and Autonomous Systems, (2016).
[10] Marc Hanheide, Charles Gretton, Richard Dearden, Nick Hawes, Jeremy L. Wyatt, Andrzej Pronobis, Alper Aydemir, Moritz Göbelbecker, and Hendrik Zender, 'Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour', in IJCAI'11, Barcelona, Spain, (July 2011).
[11] S. Hinterstoisser, S. Holzer, C. Cagniart, S. Ilic, K. Konolige, N. Navab, and V. Lepetit, 'Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes', in IEEE ICCV, (2011).
[12] Lars Kunze, Chris Burbridge, Marina Alberti, Akshaya Tippur, John Folkesson, Patric Jensfelt, and Nick Hawes, 'Combining top-down spatial reasoning and bottom-up object class recognition for scene understanding', in IEEE IROS, Chicago, Illinois, US, (September 14–18, 2014).
[13] Thomas K Landauer and Susan T. Dumais, 'A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge', Psychological Review, 104(2), 211–240, (1997).
[14] José Paulo Leal, Vânia Rodrigues, and Ricardo Queirós, 'Computing Semantic Relatedness using DBpedia', in 1st Symposium on Languages, Applications and Technologies, eds., Alberto Simões, Ricardo Queirós, and Daniela da Cruz, volume 21 of OpenAccess Series in Informatics (OASIcs), pp. 133–147, Dagstuhl, Germany, (2012). Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
SchlossDagstuhl–Leibniz-Zentrum fuer Informatik.\n[15] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean, ‘Effi-\ncient estimation of word representations in vector space’, arXiv preprint\narXiv:1301.3781, (2013).\n[16] Roberto Navigli and Simone Paolo Ponzetto, ‘Babelnet: The auto-\nmatic construction, evaluation and application of a wide-coverage mul-tilingual semantic network’, Artificial Intelligence, 193(0), 217 – 250,\n(2012).[17] J Philbin, ‘Lost in quantization: Improving particular object retrieval in\nlarge scale image databases’, in CVPR 2008, pp. 1–8, (June 2008).\n[18] M.J. Schuster, D. Jain, M. Tenorth, and M. Beetz, ‘Learning organi-\nzational principles in human environments’, in ICRA, pp. 3867–3874,\n(May 2012).\n[19] Shuran Song, Linguang Zhang, and Jianxiong Xiao, ‘Robot in a room:\nToward perfect object recognition in closed environments’, CoRR,\nabs/1507.02703, (2015).\n[20] Moritz Tenorth, Lars Kunze, Dominik Jain, and Michael Beetz,\n‘KNOWROB-MAP – knowledge-linked semantic object maps’, inIEEE-RAS ICHR, pp. 430–435, Nashville, TN, USA, (December 6-82010).\n[21] Akshaya Thippur, Chris Burbridge, Lars Kunze, Marina Alberti, John\nFolkesson, Patric Jensfelt, and Nick Hawes, ‘A comparison of quali-\ntative and metric spatial relation models for scene understanding’, in\nAAAI’15, (January 2015).\n[22] Zhibiao Wu and Martha Palmer, ‘Verbs semantics and lexical selec-\ntion’, in ACL, ACL ’94, pp. 133–138, Stroudsburg, PA, USA, (1994).\nAssociation for Computational Linguistics.\n[23] Jay Young and Nick Hawes, ‘Learning by observation using qualitative\nspatial relations’, in AAMAS 2015, (May 2015).J.Young etal./TowardsLifelong Object Learning byIntegrating Situated Robot Perception andSemantic WebMining 1466",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CBcw8Ln9FwI",
"year": null,
"venue": null,
"pdf_link": "/pdf/89717e0d1437f5af4795e1cfd4f13fc497353bab.pdf",
"forum_link": "https://openreview.net/forum?id=CBcw8Ln9FwI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Minimax Approach to Variable Fidelity Data Interpolation",
"authors": [
"A. Zaytsev",
"E. Burnaev"
],
"abstract": "Engineering problems often involve data sources of variable fidelity with different costs of obtaining an observation. In particular, one can use both a cheap low fidelity function (e.g. a computational experiment with a CFD code) and an expensive high fidelity function (e.g. a wind tunnel experiment) to generate a data sample in order to construct a regression model of a high fidelity function.\nThe key question in this setting is how the sizes of the high and low fidelity data samples should be selected in order to stay within a given computational budget and maximize accuracy of the regression model prior to committing resources on data acquisition. \n\nIn this paper we obtain minimax interpolation errors for single and variable fidelity scenarios for a multivariate Gaussian process regression. Evaluation of the minimax errors allows us to identify cases when the variable fidelity data provides better interpolation accuracy than the exclusively high fidelity data for the same computational budget. These results allow us to calculate the\noptimal shares of variable fidelity data samples under the given computational budget constraint. Real and synthetic data experiments suggest that using the obtained optimal shares often outperforms natural heuristics in terms of the regression accuracy.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "DNzujqA_Txp",
"year": null,
"venue": "CoRR 2017",
"pdf_link": "http://arxiv.org/pdf/1701.06532v1",
"forum_link": "https://openreview.net/forum?id=DNzujqA_Txp",
"arxiv_id": null,
"doi": null
}
|
{
"title": "ENIGMA: Efficient Learning-based Inference Guiding Machine",
"authors": [
"Jan Jakubuv",
"Josef Urban"
],
"abstract": "ENIGMA is a learning-based method for guiding given clause selection in saturation-based theorem provers. Clauses from many proof searches are classified as positive and negative based on their participation in the proofs. An efficient classification model is trained on this data, using fast feature-based characterization of the clauses . The learned model is then tightly linked with the core prover and used as a basis of a new parameterized evaluation heuristic that provides fast ranking of all generated clauses. The approach is evaluated on the E prover and the CASC 2016 AIM benchmark, showing a large increase of E's performance.",
"keywords": [],
"raw_extracted_content": "arXiv:1701.06532v1 [cs.LO] 23 Jan 2017ENIGMA:\nEfficient Learning-based Inference Guiding Machine\nJan Jakub˚ uv1and Josef Urban1\nCzech Technical University in Prague, Czech Republic, {jakubuv,josef.urban }@gmail.com\nAbstract\nENIGMA is a learning-based method for guiding given clause s election in saturation-\nbased theorem provers. Clauses from many proof searches are classified as positive and\nnegative based on their participation in the proofs. An effici ent classification model is\ntrained on this data, using fast feature-based characteriz ation of the clauses . The learned\nmodel is then tightly linked with the core prover and used as a basis of anew parameterized\nevaluation heuristic that provides fast ranking of all gene rated clauses. The approach is\nevaluated on the E prover and the CASC 2016 AIM benchmark, sho wing a large increase\nof E’s performance.\n1 Introduction: Theorem Proving and Learning\nState-of-the-art resolution/superposition automated theore m provers (ATPs) such as Vampire\n[15] and E [ 20] are today’s most advanced tools for general reasoning across a variety of math-\nematical and scientific domains. The stronger the performance of such tools, the more realistic\nbecome tasks such as full computer understanding and automate d development of complicated\nmathematical theories, and verification of software, hardware a nd engineering designs. While\nperformanceofATPshas steadilygrownoverthe pastyearsdue t o anumber ofhuman-designed\nimprovements,itisstillonaveragefarbehindtheperformanceoft rainedmathematicians. Their\nadvanced knowledge-based proof finding is an enigma, which is unlikely to be deciphered and\nprogrammed completely manually in near future.\nOn large corpora such as Flyspeck, Mizar and Isabelle, the ATP prog ress has been mainly\ndue to learning how to select the most relevant knowledge, based on many previous proofs\n[10,12,1,2]. Learning from many proofs has also recently become a very usefu l method for\nautomatedfindingofparametersofATPstrategies[ 22,9,19,16], andforlearningofsequencesof\ntacticsininteractivetheoremprovers(ITPs)[ 7]. SeveralexperimentswiththecompactleanCoP\n[18] system have recently shown that directly using trained machine lea rner for internal clause\nselection can significantly prune the search space and solve addition al problems [ 24,11,5].\nAn obvious next step is to implement efficient learning-based clause se lection also inside the\nstrongest superposition-based ATPs.\nIn this work, we introduce ENIGMA – Efficient learNing-based Internal Guidance MA-\nchinefor state-of-the-art saturation-based ATPs. The method app lies fast machine learning\nalgorithms to a large number of proofs, and uses the trained classifi er together with simpler\nheuristicstoevaluatethe millionsofclausesgeneratedduringthe re solution/superpositionproof\nsearch. This way, the theorem prover automatically takes into acc ount thousands of previous\nsuccessesandfailuresthatithasseeninpreviousproblems,similarly totrainedhumans. Thanks\nto a carefully chosen efficient learning/evaluationmethod and its tigh t integration with the core\nATP (in our case the E prover), the penalty for this ubiquitous know ledge-based internal proof\nguidance is very low. This in turn very significantly improves the perfo rmance of E in terms of\nthe number of solved problems in the CASC 2016 AIM benchmark [ 21].\n2 Preliminaries\nWe useNto denote the set of natural numbers including 0. 
When $S$ is a finite set then $|S|$ denotes its size, and $S^n$ where $n \in \mathbb{N}$ is the $n$-ary Cartesian product of $S$, that is, the set of all vectors of size $n$ with members from $S$. When $x \in S^n$ then we use the notation $x[i]$ to denote its $i$-th member, counting indexes from 1. We use $x^T$ to denote the transposed vector.
A multiset $M$ over a set $S$ is represented by a total function from $S$ to $\mathbb{N}$, that is, $M(s)$ is the count of $s \in S$ in $M$. The union $M_1 \cup M_2$ of two multisets $M_1$ and $M_2$ over $S$ is the multiset represented by the function $(M_1 \cup M_2)(s) = M_1(s) + M_2(s)$ for all $s \in S$. We use the notation $\{s_1 \mapsto n_1, \ldots, s_k \mapsto n_k\}$ to describe a multiset, omitting the members with count 0.
We assume a fixed first-order theory with stable symbol names, and denote by $\Sigma$ its signature, that is, a set of symbols with assigned arities. We use $L$ to range over the set of all first-order literals (Literal), $C$ to range over the set of all first-order clauses (Clause). Finally, we use $\mathcal{C}$ to range over sets of clauses (Clauses).
3 Training Clause Classifiers
There are many different machine learning methods, with different function spaces they can explore, different training and evaluation speeds, etc. Based on our previous experiments with premise selection and with guiding leanCoP, we have decided to choose a very fast and scalable learning method for the first ENIGMA instantiation. While more expressive learning methods usually lead to stronger single-strategy ATP results, very important aspects of our domain are that (i) the learning and proving evolve together in a feedback loop [23] where fast learning is useful, and (ii) combinations of multiple strategies – which can be provided by learning in different ways from different proofs – usually solve many more problems than the best single strategy. After several experiments, we have chosen LIBLINEAR: an open source library [4] for large-scale linear classification. This section describes how we use LIBLINEAR to train a clause classifier to guide given clause selection. Section 3.1 describes how training examples can be obtained from ATP runs. Section 3.2 describes how clauses are represented as fixed-length feature vectors. Finally, Section 3.3 describes how to use LIBLINEAR to train a clause classifier.
3.1 Extracting Training Examples from ATP Runs
Suppose we run a saturation-based ATP to prove a conjecture $\varphi$ in a theory $T$. When the ATP successfully terminates with a proof, we can extract training examples from this particular proof search as follows. We collect all the clauses that were selected as given clauses during the proof search. From these clauses, those which appear in the final proof are classified as positive, while the remaining given clauses are classified as negative. This gives us two sets of clauses, positive clauses $\mathcal{C}^{\oplus}$ and negative clauses $\mathcal{C}^{\ominus}$.
Re-running the proof search using the information $(\mathcal{C}^{\oplus}, \mathcal{C}^{\ominus})$ to prefer clauses from $\mathcal{C}^{\oplus}$ as given clauses should significantly shorten the proof search. The challenge is to generalize this knowledge to be able to prove new problems which are in some sense similar. To achieve that, the positive and negative clauses extracted from proof runs on many related problems are combined and learned from jointly.
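As a concrete illustration of this extraction step, a minimal Python sketch follows. It assumes clauses are available as plain strings and that the given-clause trace and the final proof are already known; the names given_clauses and proof_clauses are illustrative placeholders, not an E prover API.

```python
# A minimal sketch, assuming clauses are plain strings and that the trace of
# selected given clauses and the final proof are already available; the names
# below are placeholders, not part of any E prover interface.

def split_training_examples(given_clauses, proof_clauses):
    """Given clauses appearing in the final proof are positive, the rest negative."""
    proof_set = set(proof_clauses)
    positives = [c for c in given_clauses if c in proof_set]
    negatives = [c for c in given_clauses if c not in proof_set]
    return positives, negatives

given = ["p(X)|q(X)", "~q(a)", "r(b)", "p(a)"]   # toy given-clause trace
proof = ["p(X)|q(X)", "~q(a)"]                   # toy final proof
pos, neg = split_training_examples(given, proof)
assert pos == ["p(X)|q(X)", "~q(a)"] and neg == ["r(b)", "p(a)"]
```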
3.2 Encoding Clauses by Features
In order to use LIBLINEAR for linear classification (Section 3.3), we need to represent clauses as finite feature vectors. For our purposes, a feature vector $x$ representing a clause $C$ is a fixed-length vector of natural numbers whose $i$-th member $x[i]$ specifies how often the $i$-th feature appears in the clause $C$.
Several choices of clause features are possible [14], for example sub-terms, their generalizations, or paths in term trees. In this work we use term walks of length 3 as follows. First we construct a feature vector for every literal $L$ in the clause $C$. We write the literal $L$ as a tree where nodes are labeled by the symbols from $\Sigma$. In order to deal with a possibly infinite number of variables and Skolem symbols, we substitute all variables and Skolem symbols with special symbols. We count, for each triple of symbols $(s_1, s_2, s_3) \in \Sigma^3$, the number of directed node paths of length 3 in the literal tree, provided the trees are oriented from the root. Finally, to construct the feature vector of clause $C$, we sum the vectors of all literals $L \in C$.
More formally as follows. We consider a fixed theory $T$, hence we have a fixed signature $\Sigma$. We extend $\Sigma$ with 4 special symbols for variables ($\odot$), Skolem symbols ($\circledast$), positive literals ($\oplus$), and negative literals ($\ominus$). A feature $\varphi$ is a triple of symbols from $\Sigma$. The set of all features is denoted Feature, that is, $Feature = \Sigma^3$. Clause (or literal) features $\Phi$ form a multiset of features, thus recording for each feature how many times it appears in a literal or a clause. We use $\Phi$ to range over literal/clause features, and the set of all literal/clause features (that is, feature multisets) is denoted Features. Recall that we represent multisets as total functions from Feature to $\mathbb{N}$. Hence every member $\Phi \in Features$ is a total function of the type "$Feature \to \mathbb{N}$" and we can write $\Phi(\varphi)$ to denote the count of $\varphi$ in $\Phi$.
Now it is easy to define the function features of the type "$Literal \to Features$" which extracts features $\Phi$ from a literal $L$. For a literal $L$, we construct a rooted feature tree with nodes labeled by the symbols from $\Sigma$. The feature tree basically corresponds to the tree representing literal $L$ with the following exceptions. The root node of the tree is labeled by $\oplus$ iff $L$ is a positive literal, otherwise it is labeled by $\ominus$. Furthermore, all variable nodes are labeled by the symbol $\odot$ and all nodes corresponding to Skolem symbols are labeled by the symbol $\circledast$.
Example 1. Consider the following equality literal $L_1$: $f(x,y) = g(sko_1, sko_2(x))$ with Skolem symbols $sko_1$ and $sko_2$, and with variables $x$ and $y$.
[Figure 1: The tree representing literal $L_1$ from Example 1 (left) and its corresponding feature tree (right).]
The function features constructs the feature tree of a literal $L$ and collects all directed paths of length 3. It returns the result as a feature multiset $\Phi$.
Example 2. For the literal $L_2$: $P(x)$ we obtain $features(L_2) = \{(\oplus, P, \odot) \mapsto 1\}$. For the literal $L_3$: $\neg Q(x,y)$ we have $features(\neg Q(x,y)) = \{(\ominus, Q, \odot) \mapsto 2\}$. Finally, for the literal $L_1$ from Example 1 we obtain the following multiset:
$\{(\oplus, =, f) \mapsto 1,\ (\oplus, =, g) \mapsto 1,\ (=, f, \odot) \mapsto 2,\ (=, g, \circledast) \mapsto 2,\ (g, \circledast, \odot) \mapsto 1\}$
Finally, the function features is extended to clauses ($features : Clause \to Features$) by multiset union as $features(C) = \bigcup_{L \in C} features(L)$.
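The term-walk collection can be sketched in a few lines of Python. The sketch assumes literals arrive as nested tuples (symbol, child1, child2, ...) in which variables and Skolem symbols have already been renamed to the tokens "VAR" and "SKO", and the root carries "+" or "-" for the polarity; this tuple encoding is an assumption of the sketch, not the ENIGMA data format.

```python
from collections import Counter

def walks3(tree):
    """Count all directed symbol paths of length 3 in a literal feature tree."""
    feats = Counter()
    def visit(node):
        s1 = node[0]
        for child in node[1:]:
            s2 = child[0]
            for grandchild in child[1:]:
                feats[(s1, s2, grandchild[0])] += 1
            visit(child)
    visit(tree)
    return feats

# Example 1: f(x,y) = g(sko1, sko2(x)) as a feature tree.
L1 = ("+", ("=", ("f", ("VAR",), ("VAR",)),
            ("g", ("SKO",), ("SKO", ("VAR",)))))
print(walks3(L1))
# Counter({('=', 'f', 'VAR'): 2, ('=', 'g', 'SKO'): 2, ('+', '=', 'f'): 1,
#          ('+', '=', 'g'): 1, ('g', 'SKO', 'VAR'): 1})
```

Clause features are then obtained by summing the literal counters (Counter supports +), matching the multiset union in the definition of features(C).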
3.2.1 A Technical Note on Feature Vector Representation
In order to use LIBLINEAR, we transform the feature multiset $\Phi$ to a vector of numbers of length $|Feature|$. We assign a natural index to every feature and we construct a vector whose $i$-th member contains the count $\Phi(\varphi)$ where $i$ is the index of feature $\varphi$. Technically, we construct a bijection $sym$ between $\Sigma$ and $\{0, \ldots, |\Sigma|-1\}$ which encodes symbols by natural numbers. Then we construct a bijection between Feature and $\{1, \ldots, |Feature|\}$ which encodes features by numbers (we use $code(\varphi) = sym(\varphi[1]) \cdot |\Sigma|^2 + sym(\varphi[2]) \cdot |\Sigma| + sym(\varphi[3]) + 1$). Now it is easy to construct a function vector which translates $\Phi$ to a vector from $\mathbb{N}^{|Feature|}$ as follows:
$vector(\Phi) = x$ such that $x[code(\varphi)] = \Phi(\varphi)$ for all $\varphi \in Feature$.
3.3 Training Clause Classifiers with LIBLINEAR
Once we have the training examples $(\mathcal{C}^{\oplus}, \mathcal{C}^{\ominus})$ and the encoding of clauses by feature vectors, we can use LIBLINEAR to construct a classification model. LIBLINEAR implements the function train of the type "$Clauses \times Clauses \to Model$" which takes two sets of clauses (positive and negative examples) and constructs a classification model. Once we have a classification model $\mathcal{M} = train(\mathcal{C}^{\oplus}, \mathcal{C}^{\ominus})$, LIBLINEAR provides a function predict of the type "$Clause \times Model \to \{\oplus, \ominus\}$" which can be used to predict clause classification as positive ($\oplus$) or negative ($\ominus$).
LIBLINEAR supports several classification methods, but we have so far used only the default solver L2-SVM (L2-regularized L2-loss Support Vector Classification) [3]. Using the functions from the previous section, we can translate the training examples $(\mathcal{C}^{\oplus}, \mathcal{C}^{\ominus})$ to the set of instance-label pairs $(x_i, y_i)$, where $i \in \{1, \ldots, |\mathcal{C}^{\oplus}| + |\mathcal{C}^{\ominus}|\}$, $x_i \in \mathbb{N}^{|Feature|}$, $y_i \in \{\oplus, \ominus\}$. A training clause $C_i$ is translated to the feature vector $x_i = vector(features(C_i))$ and the corresponding $y_i$ is set to $\oplus$ if $C_i \in \mathcal{C}^{\oplus}$ or to $\ominus$ if $C_i \in \mathcal{C}^{\ominus}$. Then, LIBLINEAR solves the following optimization problem:
$\min_{w} \big( \tfrac{1}{2} w^T w + c \sum_{i=1}^{l} \xi(w, x_i, y_i) \big)$
for $w \in \mathbb{R}^{|Feature|}$, where $c > 0$ is a penalty parameter and $\xi$ is the following loss function:
$\xi(w, x_i, y_i) = \max(1 - y'_i w^T x_i, 0)^2$ where $y'_i = 1$ iff $y_i = \oplus$ and $y'_i = -1$ otherwise.
LIBLINEAR implements a coordinate descent method [8] and a trust region Newton method [17]. The model computed by LIBLINEAR is basically the vector $w$ obtained by solving the above optimization problem. When computing the prediction for a clause $C$, the clause is translated to the corresponding feature vector $x = vector(features(C))$ and LIBLINEAR classifies $C$ as positive ($\oplus$) iff $w^T x > 0$. Hence we see that the prediction can be computed in time $O(\max(|Feature|, length(C)))$ where $length(C)$ is the length of clause $C$ (number of symbols).
4 Guiding the Proof Search
Once we have a LIBLINEAR model (classifier) $\mathcal{M}$, we construct a clause weight function which can be used inside the ATP given-clause loop to evaluate the generated clauses. As usual, clauses with smaller weight are selected before those with a higher weight. First, we define the function predict-weight which assigns a smaller number to positively classified clauses as follows:
$predict\text{-}weight(C, \mathcal{M}) = 1$ iff $predict(C, \mathcal{M}) = \oplus$, and $10$ otherwise.
In order to additionally prefer smaller clauses to larger ones, we add the clause length to the above predicted weight. We use $length(C)$ to denote the length of $C$ counted as the number of symbols. Furthermore, we use a real-valued parameter $\gamma$ to multiply the length as follows:
$weight(C, \mathcal{M}) = \gamma \cdot length(C) + predict\text{-}weight(C, \mathcal{M})$
This scheme is designed for the E automated prover, which uses clause evaluation functions (CEFs) to select the given clause. A clause evaluation function CEF is a function which assigns a real weight to a clause. The unprocessed clause with the smallest weight is chosen to be the given clause.
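A minimal sketch of the feature indexing, the L2-SVM training, and the weight function above, using a toy signature and scikit-learn's LinearSVC (a wrapper around LIBLINEAR whose default L2 penalty and squared-hinge loss match the L2-SVM solver described here); the signature, the example feature multisets, and the gamma value are illustrative placeholders, not the paper's actual setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

SIGMA = ["+", "-", "=", "f", "g", "P", "VAR", "SKO"]   # toy signature
SYM = {s: i for i, s in enumerate(SIGMA)}              # sym : Sigma -> {0..|Sigma|-1}
N = len(SIGMA)

def code(phi):
    """0-based variant of the paper's feature index formula."""
    return SYM[phi[0]] * N * N + SYM[phi[1]] * N + SYM[phi[2]]

def vectorize(feats):
    """The function vector(Phi): feature multiset -> dense count vector."""
    x = np.zeros(N ** 3)
    for phi, count in feats.items():
        x[code(phi)] = count
    return x

pos = {("+", "=", "f"): 1, ("+", "=", "g"): 1, ("=", "f", "VAR"): 2,
       ("=", "g", "SKO"): 2, ("g", "SKO", "VAR"): 1}   # features of L1 (positive)
neg = {("-", "P", "VAR"): 1}                           # features of a negative clause
X = np.stack([vectorize(pos), vectorize(neg)])
y = np.array([1, -1])

model = LinearSVC(C=1.0).fit(X, y)      # solves the optimization problem above

def weight(clause_feats, length, gamma=0.2):
    """Section 4 CEF: gamma * length(C) + predict-weight(C, M)."""
    positive = model.decision_function([vectorize(clause_feats)])[0] > 0
    return gamma * length + (1 if positive else 10)

print(weight(pos, length=11), weight(neg, length=3))
```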
E allows combining several CEFs to jointly guide the proof search. This is done by specifying a finite number of CEFs together with their frequencies as follows: $(f_1 \star CEF_1, \ldots, f_k \star CEF_k)$. Each frequency $f_i$ denotes how often the corresponding $CEF_i$ is used to select a given clause in this weighted round-robin scheme. We have implemented learning-based guidance as a new CEF given by the above weight function. We can either use this new CEF alone or combine it with other CEFs already defined in E.
5 Experimental Evaluation
We use the AIM category of the CASC 2016 competition for evaluation (AIM is a long-term project on proving open algebraic conjectures by Kinyon and Veroff). This benchmark fits our needs as it targets internal guidance in ATPs based on training and testing examples. Before the competition, 1020 training problems were provided for the training of ATPs, while an additional 200 problems were used in the competition. Prover9 proofs were provided along with all the training problems. Due to several interesting issues (e.g., different term orderings, rewriting settings, etc., may largely change the proof search), we have decided not to use the training Prover9 proofs yet and instead find as many proofs as possible by a single E strategy.
Using fast preliminary evaluation, we have selected a strong E strategy $S_0$ (see Appendix A; we use E 1.9 and an Intel Xeon 2.6GHz workstation for all experiments) which can by itself solve 239 training problems with a 30 s timeout. For comparison, E's auto-schedule mode (using optimized strategy scheduling) can solve 261 problems. We train a clause classifier model $\mathcal{M}_0$ (Section 3) on the 239 proofs and then run E enhanced with the classifier $\mathcal{M}_0$ in different ways to obtain even more training examples. Either we use the classifier CEF based on $\mathcal{M}_0$ (i.e., the function $weight(C, \mathcal{M}_0)$ from Section 4) alone, or we combine it with the CEFs from $S_0$ by adding $weight(C, \mathcal{M}_0)$ to $S_0$ with a grid of frequencies ranging over {1, 5, 6, 7, 8, 9, 10, 15, 20, 30, 40, 50}. Furthermore, every combination may be run with a different value of the parameter $\gamma \in \{0, 0.1, 0.2, 0.4, 0.7, 1, 2, 4, 8\}$ of the function $weight(C, \mathcal{M}_0)$. All the methods are run with a 30 second time limit, leading to a total of 337 solved training problems. As expected, the numbers of processed clauses and the solving times on the previously solved problems are typically very significantly decreased when using $weight(C, \mathcal{M}_0)$. This is a good sign; however, the ultimate test of ENIGMA's capability to learn and generalize is to evaluate the trained strategies on the testing problems. This is done as follows.
On the 337 solved training problems, we (greedily) find that 4 strategies are needed to cover the whole set. The strongest strategy is our classifier $weight(C, \mathcal{M}_0)$ alone with $\gamma = 0.2$, solving 318 problems. Another 15 problems are added by combining $S_0$ with the trained classifier using frequency 50 and $\gamma = 0.2$. Three problems are contributed by $S_0$ and two by the trained classifier alone using $\gamma = 0$. We take these four strategies and use only the proofs they found to train a new enhanced classifier $\mathcal{M}_1$. The proofs yield 6821 positive and 219012 negative examples. Training of $\mathcal{M}_1$ by LIBLINEAR takes about 7 seconds – 2 seconds for feature extraction and 5 seconds for learning.
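The strategy combinations used above rely on E's weighted round-robin CEF scheme from Section 4. A minimal sketch follows, with toy weight functions standing in for real CEFs and a plain list standing in for E's unprocessed-clause set; none of this is E's actual implementation.

```python
import itertools

def round_robin_select(unprocessed, cefs):
    """Pick given clauses, using each CEF for its configured number of turns."""
    schedule = itertools.cycle([cef for cef, freq in cefs for _ in range(freq)])
    while unprocessed:
        cef = next(schedule)
        clause = min(unprocessed, key=cef)    # smallest weight is selected
        unprocessed.remove(clause)
        yield clause

by_length = len                               # stand-in for a length-based CEF
by_learned = lambda c: 1 if "=" in c else 10  # stand-in for weight(C, M)

clauses = ["p(a)", "f(X)=g(X)", "q(b)|r(b)"]
print(list(round_robin_select(clauses, [(by_learned, 2), (by_length, 1)])))
```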
The classifier evaluation on the training examples takes about 6 seconds and reaches 97.6% accuracy (ratio of the correctly classified clauses).
This means that both the feature generation and the model evaluation times per clause are on the order of 10 microseconds. This is comparable to the speed at which clauses are generated by E on our hardware and evaluated by its built-in heuristics. Our learning-based guidance can thus be quickly trained and used by normal users of E, without an expensive training phase or using multiple CPUs or GPUs for clause evaluation.
Then we use the $\mathcal{M}_1$ classifier to attack the 200 competition problems using a 180 s time limit as in CASC. We again run several strategies: both $weight(C, \mathcal{M}_1)$ alone and combined with $S_0$ with different frequencies and parameters $\gamma$. All the strategies together solve 52 problems and only 3 of the strategies are needed for this. While $S_0$ solves only 22 of the competition problems, our strongest strategy solves 41 problems, see Table 1. This strategy combines $S_0$ with $weight(C, \mathcal{M}_1)$ using frequency 30 and $\gamma = 0.2$. 7 more problems are contributed by $weight(C, \mathcal{M}_1)$ alone with $\gamma = 0.2$ and 4 more problems are added by the E auto-schedule mode. For comparison, Vampire solves 47 problems (compared to our 52 proofs) in 3*180 seconds per problem (simulating 3 runs of the best strategies, each for 180 seconds).
Table 1: Performance of the differently parameterized (frequency and γ of weight(C, M1) combined with S0) trained evaluation heuristics on the 200 AIM CASC 2016 competition problems. Frequency 0 (third column) for weight(C, M1) means that S0 is used alone, whereas ∞ means that weight(C, M1) is used alone. The empty entries were not run.
γ | auto-schedule | 0 | 1 | 5 | 10 | 15 | 30 | 50 | ∞
0 | 29 | 22 | - | - | - | - | 18 | 17 | 16
0.2 | - | - | 23 | 31 | 32 | 40 | 41 | 33 | 40
8 | - | - | 23 | 31 | 31 | 40 | 41 | 33 | 35
5.1 Looping and Boosting
The recent work on the premise-selection task has shown that typically there is not a single optimal way how to guide proof search. Re-learning from new proofs as introduced by MaLARea, and combining proofs and learners, usually outperforms a single method. Since we are using a very fast classifier here, we can easily experiment with giving it more and different data.
The first such experiment is done as follows. We add the proofs obtained on the solved 52 competition problems to the training data obtained from the 337 solved training problems. Instead of immediately re-learning and re-running (as in the MaLARea loop), we however first boost all positive examples (i.e., clauses participating in the proofs) by repeating them ten times in the training data. This way, we inform the learner to more strongly avoid misclassifying the positive examples as negative, rather than the other way round. The resulting classifier $\mathcal{M}_2$ has lower overall accuracy on all of the data (93% vs. 98% for the unboosted), however, its accuracy on the relatively rare positive data grows significantly, from 12.5% to 81.8%.
Running the most successful strategy using $\mathcal{M}_2$ instead of $\mathcal{M}_1$ indeed helps. In 180 s, it solves an additional 5 problems (4 of them not solved by Vampire), all of them in less than 45 s. This raises ENIGMA's performance on the competition problems to 57 problems (in general in 600 s). Interestingly, the second most useful strategy (now using $\mathcal{M}_2$ instead of $\mathcal{M}_1$), which is much more focused on doing inferences on the positively classified clauses, solves only two of these new problems, but six times faster.
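The boosting step can be sketched as plain data duplication before retraining, so the learner pays a higher price for misclassifying positives; the (features, label) pairs below are toy placeholders.

```python
# A minimal sketch of the boosting step: repeat each positive example ten
# times before retraining; the example pairs are illustrative only.

def boost_positives(examples, factor=10):
    boosted = []
    for x, label in examples:
        boosted.extend([(x, label)] * (factor if label == 1 else 1))
    return boosted

data = [([1, 0], 1), ([0, 2], -1), ([3, 1], -1)]
print(len(boost_positives(data)))   # 12 = one positive repeated 10x + 2 negatives
```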
It is clear that we can continue experimenting this way with ENIGMA for a long time, quickly producing a large number of strategies that have quite different search properties. In total we have proved 16 problems unsolved by Vampire.
6 Conclusions
The first experiments with ENIGMA are extremely encouraging. While the recent work on premise selection and on internal guidance for leanCoP indicated that large improvements are possible, this is the first practical and usable improvement of a state-of-the-art ATP by internal learning-based guidance on a large CASC benchmark. It is clear that a wide range of future improvements are possible: the learning could be dynamically used also during the proof search, training problems could be selected according to their similarity with the current problem (in an initial experiment, a simple nearest-neighbor selection of training problems for the learning further decreases the solving times and proves one more AIM problem unsolved by Prover9), more sophisticated learning and feature characterization methods could be employed, etc.
The magnitude of the improvement is unusually big for the ATP field, and similar to the improvements obtained with high-level learning in MaLARea 0.5 over E-LTB (sharing the same underlying engine) in CASC 2013 [13]. We believe that this may well mark the arrival of ENIGMAs – efficient learning-based inference guiding machines – to automated reasoning, as a crucial and indispensable technology for building the strongest automated theorem provers.
7 Acknowledgments
We thank Stephan Schulz for his open and modular implementation of E and its many features that allowed us to do this work. We also thank the Machine Learning Group at National Taiwan University for making LIBLINEAR openly available. This work was supported by the ERC Consolidator grant no. 649043 AI4REASON.
References
[1] J. C. Blanchette, D. Greenaway, C. Kaliszyk, D. Kühlwein, and J. Urban. A learning-based fact selector for Isabelle/HOL. J. Autom. Reasoning, 57(3):219–244, 2016.
[2] J. C. Blanchette, C. Kaliszyk, L. C. Paulson, and J. Urban. Hammering towards QED. J. Formalized Reasoning, 9(1):101–148, 2016.
[3] B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In COLT, pages 144–152. ACM, 1992.
[4] R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[5] M. Färber, C. Kaliszyk, and J. Urban. Monte Carlo connection prover. CoRR, abs/1611.05990, 2016.
[6] G. Gottlob, G. Sutcliffe, and A. Voronkov, editors. Global Conference on Artificial Intelligence, GCAI 2015, Tbilisi, Georgia, October 16-19, 2015, volume 36 of EPiC Series in Computing. EasyChair, 2015.
[7] T. Gransden, N. Walkinshaw, and R. Raman. SEPIA: search for proofs using inferred automata. In Automated Deduction - CADE-25 - 25th International Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings, pages 246–255, 2015.
[8] C. Hsieh, K. Chang, C. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In ICML, volume 307 of ACM International Conference Proceeding Series, pages 408–415. ACM, 2008.
[9] J. Jakubův and J. Urban. BliStrTune: hierarchical invention of theorem proving strategies. In Y. Bertot and V.
Vafeiadis, editors, Proceedings of the 6th ACM SIGPLAN Conference on Certified Programs and Proofs, CPP 2017, Paris, France, January 16-17, 2017, pages 43–52. ACM, 2017.
[10] C. Kaliszyk and J. Urban. Learning-assisted automated reasoning with Flyspeck. J. Autom. Reasoning, 53(2):173–213, 2014.
[11] C. Kaliszyk and J. Urban. FEMaLeCoP: Fairly efficient machine learning connection prover. In M. Davis, A. Fehnker, A. McIver, and A. Voronkov, editors, Logic for Programming, Artificial Intelligence, and Reasoning - 20th International Conference, LPAR-20 2015, Suva, Fiji, November 24-28, 2015, Proceedings, volume 9450 of Lecture Notes in Computer Science, pages 88–96. Springer, 2015.
[12] C. Kaliszyk and J. Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning, 55(3):245–256, 2015.
[13] C. Kaliszyk, J. Urban, and J. Vyskočil. Machine learner for automated reasoning 0.4 and 0.5. CoRR, abs/1402.2359, 2014. Accepted to PAAR'14.
[14] C. Kaliszyk, J. Urban, and J. Vyskočil. Efficient semantic features for automated reasoning over large theories. In Q. Yang and M. Wooldridge, editors, IJCAI'15, pages 3084–3090. AAAI Press, 2015.
[15] L. Kovács and A. Voronkov. First-order theorem proving and Vampire. In N. Sharygina and H. Veith, editors, CAV, volume 8044 of LNCS, pages 1–35. Springer, 2013.
[16] D. Kühlwein and J. Urban. MaLeS: A framework for automatic tuning of automated theorem provers. J. Autom. Reasoning, 55(2):91–116, 2015.
[17] C. Lin, R. C. Weng, and S. S. Keerthi. Trust region Newton method for logistic regression. Journal of Machine Learning Research, 9:627–650, 2008.
[18] J. Otten and W. Bibel. leanCoP: lean connection-based theorem proving. J. Symb. Comput., 36(1-2):139–161, 2003.
[19] S. Schäfer and S. Schulz. Breeding theorem proving heuristics with genetic algorithms. In Gottlob et al. [6], pages 263–274.
[20] S. Schulz. E - A Brainiac Theorem Prover. AI Commun., 15(2-3):111–126, 2002.
[21] G. Sutcliffe. The 8th IJCAR automated theorem proving system competition - CASC-J8. AI Commun., 29(5):607–619, 2016.
[22] J. Urban. BliStr: The Blind Strategymaker. In Gottlob et al. [6], pages 312–319.
[23] J. Urban, G. Sutcliffe, P. Pudlák, and J. Vyskočil. MaLARea SG1 - Machine Learner for Automated Reasoning with Semantic Guidance. In A. Armando, P. Baumgartner, and G. Dowek, editors, IJCAR, volume 5195 of LNCS, pages 441–456. Springer, 2008.
[24] J. Urban, J. Vyskočil, and P. Štěpánek. MaLeCoP: Machine learning connection prover. In K. Brünnler and G. Metcalfe, editors, TABLEAUX, volume 6793 of LNCS, pages 263–277. Springer, 2011.
A The E Prover Strategy Used in Experiments
The following fixed E strategy S0, described by its command line arguments, was used in the experiments:
--definitional-cnf=24 --destructive-er-aggressive --destructive-er
--prefer-initial-clauses -F1 --delete-bad-limit=150000000 --forward-context-sr
-winvfreqrank -c1 -Ginvfreq -WSelectComplexG --oriented-simul-paramod -tKBO6
-H(1*ConjectureRelativeSymbolWeight(SimulateSOS,0.5,100,100,100,100,1.5,1.5,1),
4*ConjectureRelativeSymbolWeight(ConstPrio,0.1,100,100,100,100,1.5,1.5,1.5),
1*FIFOWeight(PreferProcessed),
1*ConjectureRelativeSymbolWeight(PreferNonGoals,0.5,100,100,100,100,1.5,1.5,1),
4*Refinedweight(SimulateSOS,3,2,2,1.5,2))",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CpjQmvQy52",
"year": null,
"venue": "ECAI 2023",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA230633",
"forum_link": "https://openreview.net/forum?id=CpjQmvQy52",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Deep Co-Training for Cross-Modality Medical Image Segmentation",
"authors": [
"Lei Zhu",
"Ling Ling Chan",
"Teck Khim Ng",
"Meihui Zhang",
"Beng Chin Ooi"
],
"abstract": "Due to the expensive segmentation annotation cost, cross-modality medical image segmentation aims to leverage annotations from a source modality (e.g. MRI) to learn a model for target modality (e.g. CT). In this paper, we present a novel method to tackle cross-modality medical image segmentation as semi-supervised multi-modal learning with image translation, which learns better feature representations and is more robust to source annotation scarcity. For semi-supervised multi-modal learning, we develop a deep co-training framework. We address the challenges of co-training on divergent labeled and unlabeled data distributions with a theoretical analysis on multi-view adaptation and propose decomposed multi-view adaptation, which shows better performance than a naive adaptation method on concatenated multi-view features. We further formulate inter-view regularization to alleviate overfitting in deep networks, which regularizes deep co-training networks to be compatible with the underlying data distribution. We perform extensive experiments to evaluate our framework. Our framework significantly outperforms state-of-the-art domain adaptation methods on three segmentation datasets, including two public datasets on cross-modality cardiac substructure segmentation and abdominal multi-organ segmentation and one large scale private dataset on cross-modality brain tissue segmentation. Our code is publicly available at https://github.com/zlheui/DCT.",
"keywords": [],
"raw_extracted_content": "Deep Co-Training for Cross-Modality Medical Image\nSegmentation\nLei Zhua;*, Ling Ling Chanb, Teck Khim Ngc, Meihui Zhangdand Beng Chin Ooic\naInstitute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*ST AR),\nSingapore\nbSingapore General Hospital, Singapore\ncNational University of Singapore, Singapore\ndBeijing Institute of Technology, Beijing, China\nAbstract. Due to the expensive segmentation annotation cost,\ncross-modality medical image segmentation aims to leverage an-\nnotations from a source modality (e.g. MRI) to learn a modelfor target modality (e.g. CT). In this paper, we present a novelmethod to tackle cross-modality medical image segmentation assemi-supervised multi-modal learning with image translation, whichlearns better feature representations and is more robust to source an-notation scarcity. For semi-supervised multi-modal learning, we de-velop a deep co-training framework. We address the challenges ofco-training on divergent labeled and unlabeled data distributions witha theoretical analysis on multi-view adaptation and propose decom-posed multi-view adaptation, which shows better performance thana naive adaptation method on concatenated multi-view features. Wefurther formulate inter-view regularization to alleviate overfitting indeep networks, which regularizes deep co-training networks to becompatible with the underlying data distribution. We perform exten-sive experiments to evaluate our framework. Our framework signif-icantly outperforms state-of-the-art domain adaptation methods onthree segmentation datasets, including two public datasets on cross-modality cardiac substructure segmentation and abdominal multi-organ segmentation and one large scale private dataset on cross-modality brain tissue segmentation. Our code is publicly available\nat https://github.com/zlheui/DCT.\n1 Introduction\nDeep learning has achieved great success in medical image analy-sis [17], however, it requires huge amount of labeled data to be ef-fective, which is both expensive and time consuming to obtain. Itis desirable to develop deep learning methods that are annotation-efficient. To this end, cross-modality medical image segmentationaims to leverage existing annotations from a source modality (e.g.MRI) to learn a model for target modality (e.g. CT). However, deeplearning models trained in one modality usually give poor perfor-mance in another modality due to distribution shift. Unsuperviseddomain adaptation (UDA) methods have shown promising perfor-mance for cross-modality medical image segmentation task. State-of-the-art UDA methods adopt synergistic image and feature adap-tation to reduce the distribution shift across domains at both imageand feature level [3, 33, 10, 12]. However, these UDA methods maybe sub-optimal, as they merely align the target data distribution with\n∗Corresponding Author. Email: [email protected]\nFigure 1. Image translation enables semi-supervised multi-modal\nlearning. We adopt CycleGAN to translate source MRI image into CT im-\nage and target CT image into MRI image. The image translation results arequite good where the appearance of the original images are translated intothe appearance of the other modality and the contents of the images are well-preserved. 
annotated source data without considering learning the target data structure.
[Figure 1: Image translation enables semi-supervised multi-modal learning. We adopt CycleGAN to translate a source MRI image into a CT image and a target CT image into an MRI image. The image translation results are quite good: the appearance of the original images is translated into the appearance of the other modality while the contents of the images are well-preserved. Augmenting the original single-modal datasets with the translated images creates one labeled multi-modal dataset in the source domain and one unlabeled multi-modal dataset in the target domain, which suggests semi-supervised multi-modal learning.]
In this paper, we present a novel method to tackle cross-modality medical image segmentation with semi-supervised multi-modal learning. From Fig. 1, we can observe that existing image translation techniques can transform an MRI image into CT appearance and a CT image into MRI appearance while preserving the image content with fairly good quality [31]. Similarly, we should be able to augment the datasets in cross-modality medical image segmentation, namely, the labeled images in the source modality (e.g. MRI) and unlabeled images in the target modality (e.g. CT), with their translated images. In this way, we can obtain a labeled multi-modal dataset from the source domain and an unlabeled multi-modal dataset from the target domain as shown in the figure.
As opposed to solving the cross-modality medical image segmentation task with domain adaptation, which merely aligns the target data distribution towards the source data distribution, we propose a semi-supervised multi-modal learning approach. Consequently, our goal is to learn a model that can perform well with complementary multi-modal information in a semi-supervised manner, which can 1) lead to better feature representations for both domains, as both labeled and unlabeled data are leveraged for learning discriminative deep feature representations; 2) be more robust to source annotation scarcity, as a solution to semi-supervised multi-modal learning naturally handles the case when we have limited annotations in the source domain. In essence, we propose to transform the task of cross-modality medical image segmentation into the task of semi-supervised multi-modal learning with image translation. Solving the latter task can potentially provide a better solution to the former task and is robust to annotation scarcity.
Co-training [1] is a semi-supervised multi-modal learning method, where two models are first learned on the two different views (modalities) of the labeled data. Subsequently, unlabeled data with model-assigned pseudo-labels are gradually added to the labeled data set for continual training. We can apply co-training on our augmented datasets to learn two segmentation networks for the two different modalities. However, plain co-training with deep networks is unlikely to work. The challenges originate from both the dataset setting and the engagement of deep networks: 1) The augmented multi-modal datasets are synthesized from the labeled source dataset and unlabeled target dataset, which are drawn from different distributions. As can be observed in Fig. 1, despite the modality difference, there are still some morphological and scale differences between source and target data, which may be due to different patient distributions across domains or different machine scanning parameters. 2)
Deep networks are notoriously known to require large scale labeled data to be effective and tend to overfit, which will deteriorate co-training performance.
To facilitate effective deep co-training, we address the two challenges in our framework as follows. First, to reduce the distribution shift between the source and target data, we conduct a theoretical analysis on multi-view adaptation and develop a theorem that enables decomposition of adaptation with multi-view data into adaptation on each single view. Based on this, we propose decomposed multi-view adaptation, which yields better performance than a naive adaptation method on concatenated multi-view features that does not consider the view-wise features. Second, co-training assumes target concepts to be compatible with the underlying data distribution in order to be effective. This can however be violated when deep networks overfit the labeled data and cannot generalize to unlabeled data. To this end, we introduce inter-view regularization to enforce consistency of predictions on different views of the same data point.
In summary, we have made the following contributions in this paper:
• We develop a deep co-training framework for cross-modality medical image segmentation. Compared to existing UDA methods, our method learns better feature representations and is more robust to source annotation scarcity.
• We prove a theorem for multi-view adaptation and propose a general decomposed multi-view adaptation method, which shows better performance compared to adaptation with concatenated multi-view features.
• We propose inter-view regularization to regularize deep co-training networks to be compatible with synthesized multi-modal data, which is generally applicable for deep co-training.
• We conduct extensive experiments to evaluate our framework, where we have collected and processed a large scale private brain tissue segmentation dataset to verify that our framework can be effectively applied in real clinical settings. Our framework outperforms state-of-the-art cross-modality medical image segmentation methods significantly on all three datasets we evaluate.
2 Related Works
Unsupervised Domain Adaptation has shown promising performance in the cross-modality medical image segmentation task. Existing UDA methods can be mostly categorized into feature adaptation methods, image adaptation methods, and hybrid methods which combine feature and image adaptation. Feature adaptation methods reduce feature distribution shift either by minimizing a certain distribution metric like maximum mean discrepancy (MMD) [30, 18], or through adversarial training with a domain discriminator [8, 29, 7]. Image adaptation methods [2, 13] translate image appearance across domains and learn a target model on translated source images. Hoffman et al. [11] are among the first to combine feature adaptation with image adaptation while enforcing semantic consistency with a static source-trained model. Chen et al. [3] propose synergistic feature and image adaptation which fuses part of the image translation pipeline with feature representation learning. Zou et al. [33] introduce a target-to-source adaptation branch with a dual-scheme fusion network for more effective adaptation. Han et al. [10] propose a deep symmetric adaptation network with a bidirectional adaptation structure. Most recently, Hu et al. [12] introduce a semantic similarity constraint with contrastive learning [6] to further boost the cross-modality medical image segmentation performance.
Unlike the previous methods, the self-training based UDA method CBST [34] uses a source-trained network to assign pseudo-labels to unlabeled target data and then uses the pseudo-labeled target data to update the network for target data structure learning. Co-training for Domain Adaptation (CODA) [4] solves an optimization problem which simultaneously learns a target classifier, a split of the feature space into two different views, and a subset of source and target features. Their method is however limited to simple linear models and text-based classification tasks. Asymmetric Tri-training for Domain Adaptation (ATDA) [24] uses two source classifiers to assign pseudo labels to unlabeled target data and then uses the pseudo-labeled target data to train a target classifier with a shared encoder network, but their method is limited to simple digit and review classification tasks. Most existing UDA methods focus on reducing the distribution shift across domains for knowledge transfer; thus, they are well-suited for the cross-modality medical image segmentation task. However, unlike existing UDA methods, our deep co-training framework tackles cross-modality medical image segmentation as a semi-supervised multi-modal learning task via image translation.
Image Translation aims to translate images from one style into another style while preserving the image content. Most image translation methods are based on the generative adversarial networks (GAN) framework [9]. DCGAN [23] proposes a deep convolutional GAN architecture to learn better feature representations and improve the image translation quality. CycleGAN [31] introduces a cycle-consistency loss, which demonstrates great image translation results while preserving the image content. Image translation has already demonstrated good performance in UDA [2, 13], translating source images into target style for learning. As far as we are concerned, we are the first to explore the usage of image translation to synthesize multi-modal data for semi-supervised multi-modal learning.
Co-Training [1] is a method for semi-supervised multi-modal learning, where two models are trained on two modalities for learning complementary information. Co-training has been applied in various machine learning tasks like text classification [21], object recognition [5], domain adaptation [4], and deep semi-supervised classification [22]. As far as we are concerned, we are the first to investigate a deep co-training framework on synthesized multi-modal data for semi-supervised multi-modal segmentation.
[Figure 2: Overview of our proposed deep co-training framework. (a) Plain deep co-training on synthesized multi-modal data with image translation. There are two segmentation networks to perform segmentation in each modality. Labeled source data and unlabeled target data are utilized to perform co-training on the two networks. The plain deep co-training framework is unlikely to work due to distribution discrepancy and the potential overfit of deep networks on limited annotations. Thus, we introduce the following two components into our framework: (b) Decomposed multi-view adaptation. Two domain discriminators are introduced into the framework to perform decomposed adaptation on each view separately, based on our theorem. (c) Inter-view regularization is introduced to regularize the deep co-training networks to be compatible with the synthesized multi-modal target data to alleviate overfitting.]
Our whole framework is composed of the components in (a), (b), and (c).
3 Methods
In the cross-modality medical image segmentation task, we are given $N_s$ labeled data $D_s = \{(x^s_i, y^s_i)\}_{i=1}^{N_s}$ in the source domain and $N_t$ unlabeled data $D_t = \{x^t_i\}_{i=1}^{N_t}$ in the target domain. The source and target data share the same set of $C$ labels and are sampled from probability distributions $P_s$ and $P_t$ respectively, with $P_s \neq P_t$. The goal is to learn a model with labeled source data and unlabeled target data that can perform well in the target domain. In Fig. 2, we present an overview of our proposed deep co-training framework, which tackles the task as semi-supervised multi-modal learning.
3.1 Image Translation and Deep Co-Training
We adopt CycleGAN [31] for image translation in our framework due to its good performance, while better image translation techniques would further boost the performance of our framework. With image translation, we augment the original datasets with the translated images. Denote $\tilde{D}_s = \{((x^s_i, x^{s\to t}_i), y^s_i)\}_{i=1}^{N_s}$ and $\tilde{D}_t = \{(x^{t\to s}_i, x^t_i)\}_{i=1}^{N_t}$ as the augmented source and target dataset respectively, where $x^{s\to t}_i$ is the translated image of $x^s_i$ in the target modality and $x^{t\to s}_i$ is the translated image of $x^t_i$ in the source modality.
In our deep co-training framework, we first learn two segmentation networks on the two different modalities with the following hybrid loss:
$L^s_{seg} = \mathbb{E}[H(y^s, F_s(x^s)) + Dice(y^s, F_s(x^s))]$,  (1)
$L^{s\to t}_{seg} = \mathbb{E}[H(y^s, F_t(x^{s\to t})) + Dice(y^s, F_t(x^{s\to t}))]$,  (2)
where $F_s$ and $F_t$ are the two segmentation networks for the source and target modality respectively, $H(\cdot)$ is the pixel-wise cross-entropy loss, in which we assign class weights to balance the different classes, and $Dice(\cdot)$ is the widely adopted dice loss [19]. The hybrid loss is designed to sufficiently learn the two segmentation networks with complementary supervision signals.
Next, we perform co-training on the unlabeled target data. We extend class-balanced self-training [34] for deep co-training, where the selected target pixels to label are the union of the selected target pixels from the two segmentation networks. The selection function $S$ is defined as follows:
$S(p_t) = \mathbb{1}[c = \arg\max_c p_t^{(c)} \wedge p_t^{(c)} > \exp(-k_c)]\,(p_t^{(c)})$,  (3)
where $p_t$ is the prediction mask, $\mathbb{1}$ is the indicator function which returns 1 if the condition is true and 0 otherwise, and $k_c$ are the class-balanced weights [34]. The final labeled target pixels are $S(x^t) = S(F_s(x^{t\to s})) \cup S(F_t(x^t))$. The co-training loss is defined as follows:
$L^t_{cot} = \mathbb{E}[H(S(x^t), F_t(x^t))]$,  (4)
$L^{t\to s}_{cot} = \mathbb{E}[H(S(x^t), F_s(x^{t\to s}))]$.  (5)
Discussion. Will the above plain deep co-training framework work? As we mentioned before, it is challenging to use a synthesized multi-modal dataset for deep co-training. Thus, the above framework is unlikely to work as effectively. To this end, we introduce two extra components into our framework to ensure deep co-training works effectively. Will back-propagating the supervision signal to train the image translation model help improve the performance? Currently, the image translation model and the segmentation networks are trained in isolation. However, as the segmentation networks receive the translated images as input and possess semantic knowledge on each class, we think that back-propagating the supervision signal from the segmentation networks to train the image translation model end-to-end would help boost the performance of our framework. We provide empirical results in Sec. 4.3.
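A minimal PyTorch sketch of the hybrid loss of Eqs. (1)-(2) and the class-balanced selection of Eq. (3), assuming logits of shape (B, C, H, W), integer label masks of shape (B, H, W), and a tensor k of precomputed class thresholds k_c; the class weights of H(.) are omitted and the k values below are placeholders, not the ones used in the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_seg_loss(logits, y, num_classes):
    ce = F.cross_entropy(logits, y)                  # pixel-wise CE term (unweighted here)
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(y, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = 1 - (2 * inter / (denom + 1e-6)).mean()   # soft Dice term, averaged over classes
    return ce + dice

def select_pseudo_labels(probs, k):
    """Eq. (3): keep argmax pixels whose confidence exceeds exp(-k_c), else ignore (-1)."""
    conf, label = probs.max(dim=1)                   # (B, H, W)
    thresh = torch.exp(-k)[label]                    # per-pixel, class-dependent threshold
    return torch.where(conf > thresh, label, torch.full_like(label, -1))

logits = torch.randn(2, 5, 8, 8)
y = torch.randint(0, 5, (2, 8, 8))
print(hybrid_seg_loss(logits, y, num_classes=5).item())
pseudo = select_pseudo_labels(F.softmax(logits, dim=1), k=torch.ones(5))  # placeholder k
```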
3.2 Decomposed Multi-View Adaptation
In our problem, source and target data are drawn from different data distributions. As observed in Fig. 1, there is a difference between the source image pair and the target image pair despite the modality difference. Thus, it is necessary to reduce the distribution shift between the labeled source data and the unlabeled target data. One naive solution is to concatenate multi-view features and use a domain discriminator to discriminate the concatenated features from source and target data for distribution alignment, without considering the view-wise features. However, using a single domain discriminator may fail to capture the minor differences for features within a single view. We develop a theorem which states that we can decompose multi-view adaptation into adaptation in each single view:
Theorem 1. Let $\mathcal{H}$ be a hypothesis space of VC dimension $d$ and let $P_s$ and $P_t$ be the data distributions for source data and target data respectively. Suppose data instances in both the source distribution and the target distribution have $k$ different views, where $x^s = (v^s_1, v^s_2, \ldots, v^s_k)$ for each $(x^s, y^s) \in P_s$, and similarly for $(x^t, y^t) \in P_t$. If $U^s, U^t$ are unlabeled data of size $m'$ each, drawn from $P_s$ and $P_t$ respectively, then for any $\delta \in (0,1)$, with probability at least $1-\delta$, for every $h_1, h_2, \ldots, h_k \in \mathcal{H}$, we have:
$\epsilon_t(h) \le \frac{1}{k} \sum_{i=1}^{k} \Big( \epsilon_s(h_i) + \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(U^s_i, U^t_i) + C_i \Big) + 4 \sqrt{\frac{2d \log(2m') + \log(2/\delta)}{m'}}$,  (6)
where $\epsilon_s(\cdot)$ (resp. $\epsilon_t(\cdot)$) measures the expected error of a hypothesis on the source (resp. target) data distribution, $h = \frac{1}{k}\sum_{i=1}^{k} h_i$ is the composite hypothesis, $d_{\mathcal{H}\Delta\mathcal{H}}(U^s_i, U^t_i)$ is the empirical estimation of the $\mathcal{H}\Delta\mathcal{H}$-distance on the $i$-th view of the unlabeled data $U^s$ and $U^t$, and $C_i = \min_{h_i \in \mathcal{H}} \epsilon_t(h_i) + \epsilon_s(h_i)$.
The above theorem states that, for the composite multi-view model, its performance on the target distribution is upper bounded by the performance of each constituent model on the source distribution and the distribution shift between the source and target distributions in each view, plus some constant terms. In deep co-training, the model performance on the target distribution affects the accuracy of the assigned labels for unlabeled target data. Thus, the theorem confirms the necessity of minimizing the distribution shift across source and target data. In addition, the theorem states that it suffices to reduce the distribution shift for each view separately. Based on our theorem, we propose decomposed multi-view adaptation. Specifically, we introduce two domain discriminators into our framework, namely $D_s$ and $D_t$, to separately reduce the distribution shift in the two different modalities, and we control the strength of adaptation with a balancing weight. We use $D_s$ to discriminate prediction masks from source data and translated target data, and $D_t$ to discriminate prediction masks from translated source data and target data. We train the two segmentation networks $F_s$ and $F_t$ adversarially so that the learned features become domain invariant and produce similar prediction masks across domains. The adversarial training losses are defined as follows:
$L^s_{adv} = \mathbb{E}[\log D_s(F_s(x^s))] + \mathbb{E}[\log(1 - D_s(F_s(x^{t\to s})))]$,  (7)
$L^t_{adv} = \mathbb{E}[\log D_t(F_t(x^{s\to t}))] + \mathbb{E}[\log(1 - D_t(F_t(x^t)))]$.  (8)
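A minimal sketch of the adversarial losses of Eqs. (7)-(8) for one of the two views; the small convolutional discriminator below is a toy stand-in for the PatchGAN used in the paper, and the same construction is repeated with D_t for the other view.

```python
import torch
import torch.nn as nn

# Toy patch-level discriminator over 5-class probability masks (a stand-in,
# not the PatchGAN architecture used in the paper).
D_s = nn.Sequential(nn.Conv2d(5, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                    nn.Conv2d(16, 1, 4, stride=2, padding=1))
bce = nn.BCEWithLogitsLoss()

def adv_losses(p_src, p_trans_tgt):
    # Discriminator: source masks are "real" (1), translated-target masks "fake" (0).
    d_src, d_tgt = D_s(p_src.detach()), D_s(p_trans_tgt.detach())
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    # Segmenter: fool D_s so translated-target masks look like source masks.
    g_out = D_s(p_trans_tgt)
    loss_g = bce(g_out, torch.ones_like(g_out))
    return loss_d, loss_g

p_src = torch.softmax(torch.randn(2, 5, 32, 32), dim=1)
p_tt = torch.softmax(torch.randn(2, 5, 32, 32), dim=1)
print([l.item() for l in adv_losses(p_src, p_tt)])
```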
3.3 Inter-View Regularization
The success of co-training relies on the "compatibility" assumption between the target concepts in each view and the underlying data distribution [1]: namely, if $F^*$ denotes the combined target concept and $F^*_1$ and $F^*_2$ denote the target concepts in each view, then for any example $x = (v_1, v_2)$, we have $F^*(x) = F^*_1(v_1) = F^*_2(v_2)$. Intuitively, the "compatibility" assumption enables us to use the model from one view to assign labels to unlabeled data and then use the other view of the unlabeled data with the assigned labels to train the other model, which is at the core of co-training.
In the original algorithm [1], models are trained on labeled data and perform co-training on unlabeled data without regularization. This is because the original models are simple linear models and regularization is not needed. However, deep learning models perform representation learning, have high capacities, and are easy to overfit. Moreover, the labeled source data can be scarce. Consequently, the "compatibility" assumption can be violated when models overfit the labeled data and cannot generalize to unlabeled data. Thus, it is necessary to regularize the two segmentation networks in our deep co-training framework to conform to the "compatibility" assumption. To this end, we propose inter-view regularization with synthesized multi-modal data to ensure the predictions on the original and translated data are compatible. Specifically, we input the target data and the translated target data into the corresponding segmentation networks to obtain their prediction masks. Then, we minimize the discrepancy between the predicted probability vectors at each pixel to ensure the two segmentation networks have similar predictions. We choose the symmetric Kullback-Leibler (KL) divergence, which measures how one probability distribution differs from another, as follows:
$L_{reg} = \mathbb{E}\big[\tfrac{1}{2}\big(KL(F_t(x^t), F_s(x^{t\to s})) + KL(F_s(x^{t\to s}), F_t(x^t))\big)\big]$,  (9)
where $KL(\cdot,\cdot)$ measures the average pixel-wise KL-divergence between two prediction masks.
Discussion. Consistency regularization is widely adopted in semi-supervised learning to regularize the learning of deep networks and avoid overfitting. Such methods usually input two perturbed data points into the same network and ensure the network makes similar predictions [25, 27]. Some use a mean teacher [28], some use virtual adversarial training [20], and some feed the same input into two different networks [22] for regularization. Different from them, we are the first to use synthesized multi-modal data for regularization; the regularization method is proposed to enable co-training with deep networks; and we focus on the segmentation task as opposed to classification.
Training objective: The overall objective function of our deep co-training framework is as follows:
$L_{all} = L^s_{seg} + L^{s\to t}_{seg} + L^t_{cot} + L^{t\to s}_{cot} + \lambda_{adv}(L^s_{adv} + L^t_{adv}) + \lambda_{reg} L_{reg}$,  (10)
where $\lambda_{adv}$ and $\lambda_{reg}$ are the balancing weights, which are both set to 1 empirically.
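A minimal sketch of the inter-view regularizer of Eq. (9) as it enters this objective, assuming logits_t and logits_ts are the raw outputs of F_t on x^t and of F_s on the translated x^{t->s}; note that PyTorch's kl_div(input, target) computes KL(target || input), hence the argument order below.

```python
import torch
import torch.nn.functional as F

def interview_kl(logits_t, logits_ts):
    p = F.log_softmax(logits_t, dim=1)    # log F_t(x^t)
    q = F.log_softmax(logits_ts, dim=1)   # log F_s(x^{t->s})
    kl_pq = F.kl_div(q, p, log_target=True, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(p, q, log_target=True, reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)          # symmetric KL of Eq. (9)

l_t = torch.randn(2, 5, 16, 16, requires_grad=True)
l_ts = torch.randn(2, 5, 16, 16)
interview_kl(l_t, l_ts).backward()
```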
4 Experiments
4.1 Datasets and Evaluation Metrics
We validate the effectiveness of our framework with three datasets: cardiac substructure segmentation [32], abdominal multi-organ segmentation [15, 16], and a large-scale private brain tissue segmentation dataset. More details about the large-scale private brain tissue segmentation dataset can be found in our supplementary material.

The cardiac dataset consists of 20 unpaired MRI and CT volumes with ground truth masks on four heart substructures: ascending aorta (AA), left atrium blood cavity (LAC), left ventricle blood cavity (LVC), and myocardium of the left ventricle (MYO). The abdominal dataset consists of 20 unpaired T2-SPIR MRI and 30 CT volumes collected from two public datasets, with ground truth masks on four organs: spleen, right kidney, left kidney, and liver. The private brain tissue segmentation dataset consists of 968 paired MRI and CT volumes with ground truth masks on three types of tissue: cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM). All data are cropped, normalized to zero mean and unit variance, and resampled to a size of 256×256. The coronal view of the cardiac volumes and the axial view of the abdominal and brain volumes are used to train the 2D network. Both MRI-to-CT and CT-to-MRI transfer are considered for the two public datasets; MRI-to-CT transfer is considered for the private dataset. Each modality is randomly split with 80% of the scans for training and 20% for testing, following existing studies [3, 33, 12].

We employ three commonly used metrics, namely the Dice similarity coefficient (Dice), the continuous Dice similarity coefficient (cDice), and the average symmetric surface distance (ASD), to quantitatively evaluate segmentation performance. Dice measures the voxel-wise segmentation accuracy between the predicted and reference volumes. cDice [26] is a variant of the Dice coefficient that evaluates spatial similarity between binary images and real-valued probability maps. ASD calculates the average distance between the surface of the prediction mask and the ground truth in 3D. A higher Dice or cDice value, or a lower ASD value, indicates better segmentation results. The evaluation is performed on the subject-level segmentation volume.
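As a reference point for the metrics, here is a small NumPy sketch of the binary Dice coefficient; cDice follows the definition in [26] and ASD requires surface extraction, so both are omitted here for brevity:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
```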
4.2 Implementation Details
For the implementation of our deep co-training framework, the image translation network is implemented and trained according to the original CycleGAN paper [31]. The segmentation networks in our framework follow the same architecture as [3, 33] for fair comparison, which consists of twelve convolutional operation groups, two dilated convolutional groups, and one softmax layer. The domain discriminator networks in our framework are implemented following the architecture of PatchGAN [14], which has five convolutional layers with channel sizes of 64, 128, 256, 512, and 1, respectively. We use the Adam optimizer with a learning rate of 2×10^{-4}. We split the training of our framework into two phases. In the first phase, we train our framework without the co-training loss L_cot for 10k iterations to warm up the segmentation networks. In the second phase, we include the co-training loss and train the framework for another 10k iterations. The batch size is set to 6 on an NVIDIA GeForce GTX 1080p GPU. For the final prediction, we ensemble the predictions from both segmentation networks by averaging their prediction probabilities. We run all experiments three times and report the mean.

4.3 Ablation Study
We perform extensive ablation studies to investigate how our designs contribute to a deep co-training framework for cross-modality medical image segmentation. Table 1 shows the experiment results.

Table 1. Ablation study of our deep co-training framework on the Abdominal Multi-Organ MRI→CT transfer task. DCT = Deep Co-Training, DMA = Decomposed Multi-view Adaptation, IVR = Inter-View Regularization, DCT-SEP = DCT without back-propagating gradients to the image translation model, DCT-CA = DCT with adaptation on concatenated multi-view features, DCT-CA↑ = DCT-CA with increased domain discriminator size.

Method | Lseg | Lcot | Ladv | Lreg | Dice (Fs / Ft / Ensemble) | ASD (Ensemble)
Source only | ✓ | | | | NA / NA / 62.0 | 4.3
Plain DCT | ✓ | ✓ | | | 32.3 / 41.8 / 35.3 | 15.4
Plain DCT + DMA | ✓ | ✓ | ✓ | | 58.4 / 67.4 / 74.4 | 7.1
Plain DCT + IVR | ✓ | ✓ | | ✓ | 85.3 / 85.7 / 86.4 | 2.1
DCT (Our Proposed) | ✓ | ✓ | ✓ | ✓ | 87.1 / 87.8 / 88.0 | 1.6
DCT-SEP | ✓ | ✓ | ✓ | ✓ | 81.8 / 84.9 / 85.0 | 2.2
DCT-CA | ✓ | ✓ | ✓ | ✓ | 85.0 / 85.1 / 85.7 | 2.3
DCT-CA↑ | ✓ | ✓ | ✓ | ✓ | 84.0 / 87.0 / 87.2 | 2.3

First, the plain deep co-training framework fails to work due to the source and target data distribution discrepancy and the violation of the co-training "compatibility" assumption when deep networks overfit. Second, the addition of either our proposed decomposed multi-view adaptation or the inter-view regularization technique tackles one of the above two challenges and boosts the performance of our framework beyond the source-only baseline. Third, combining both components in our framework achieves the best performance, owing to the complementary roles they play in enabling deep co-training.

Next, we present ablation studies on some other aspects of our framework. First, as shown in Table 1, our framework gains about 3 points in Dice score when we back-propagate the supervision signal from the segmentation networks to train the image translation model, compared to when we do not. Second, for the final prediction, we ensemble the predictions from the two segmentation networks; our ablation studies show that the ensemble generally improves the results compared to using either single segmentation network alone.

Finally, we have proposed decomposed multi-view adaptation, which is a general methodology for multi-view adaptation. We compare it with a naive adaptation method on concatenated multi-view features. The results show that decomposed multi-view adaptation leads to better results than adaptation on concatenated multi-view features. To ensure that the difference is not due to the larger capacity of our method, which has two domain discriminators, we double the size of the domain discriminator in the comparison method. We find that increasing the size of the domain discriminator improves the performance of the comparison method; however, our method still outperforms it.
The experiment results indicate that dedicated adaptation for each single view is better than adaptation on the concatenated multi-view features without considering the view-wise features, where adaptation on the concatenated features may fail to differentiate the minor differences within each single view.

4.4 Comparison with State-of-the-Art
We compare our deep co-training framework with state-of-the-art UDA methods for cross-modality medical image segmentation, including CBST [34], ATDA [24], SynSeg-Net [13], CycleGAN [31], PnP-AdaNet [7], AdaOutput [29], CyCADA [11], DSFN [33], SIFA-v2 [3], DSAN [10], and SSC [12]. CBST is a self-training based UDA method. ATDA is a tri-training based UDA method. SynSeg-Net and CycleGAN are image adaptation based UDA methods. PnP-AdaNet and AdaOutput are feature adaptation based UDA methods. CyCADA, DSFN, SIFA-v2, DSAN, and SSC are joint image and feature adaptation UDA methods. In particular, SIFA-v2, DSFN, DSAN, and SSC all perform synergistic image and feature adaptation and are designed for medical image analysis. To demonstrate the domain shift across domains, we present the performance lower bound "Source only" obtained by directly applying the model trained in the source domain to target data. We also provide the performance upper bound "Supervised training" obtained by training the model on target labels.

Table 2 presents the experiment results for both cardiac substructure segmentation and abdominal multi-organ segmentation. As can be observed, our deep co-training framework significantly outperforms state-of-the-art UDA methods for cross-modality medical image segmentation. Specifically, for the Cardiac MRI→CT transfer task, our deep co-training framework improves the average result by 1.0 point in Dice score compared to the previously best method. For the more challenging Cardiac CT→MRI transfer task, our framework improves the average result by 8.2 points in Dice score and reduces the ASD score by 1.6 points compared to the previously best method. For abdominal multi-organ segmentation, the improvement of our deep co-training framework upon SIFA-v2 outperforms the state-of-the-art UDA method SSC by 2.0 points in Dice score and 0.2 points in ASD score for the MRI→CT transfer task. For the CT→MRI transfer task, the improvement of our framework upon SIFA-v2 is the same as DSAN in Dice score and 0.1 points better in ASD score. Note that we compare our method with SSC and DSAN by improvement upon SIFA-v2, as the codes for both SSC and DSAN are not publicly available and we preprocess the multi-organ dataset differently from them. Finally, the performance of our framework also approaches the supervised training upper bound.
Table 2. Performance comparison with state-of-the-art domain adaptation methods on cardiac substructure segmentation and abdominal multi-organ segmentation. Numbers before the slash '/' are for MRI-to-CT transfer, and numbers after the slash are for CT-to-MRI transfer. Results for methods marked with * are cited from their papers. '+' and '-' denote the increment or decrement upon SIFA-v2.

Cardiac Substructure Segmentation Performance (MRI→CT / CT→MRI)
Method | Dice: AA, LAC, LVC, MYO, Avg | ASD: AA, LAC, LVC, MYO, Avg
Supervised training | 83.2/82.8, 90.5/86.5, 92.0/92.4, 88.3/79.1, 88.5/85.2 | 2.3/3.8, 2.3/2.1, 1.7/2.0, 1.5/1.6, 1.9/2.3
Source only | 11.4/0.8, 40.3/21.3, 8.7/30.4, 0.4/10.9, 15.2/15.8 | 33.9/24.7, 29.3/19.6, 34.3/10.9, 34.8/7.6, 33.1/15.7
CBST [34] | 16.6/15.7, 27.8/34.5, 12.5/46.4, 3.5/32.5, 15.1/32.3 | 36.8/25.3, 34.1/19.5, 34.7/12.5, 31.7/14.0, 34.3/17.8
ATDA [24] | 46.4/28.5, 28.4/37.7, 2.7/54.5, 2.2/13.6, 19.9/33.6 | 30.1/17.8, 41.1/12.6, 18.0/14.4, 45.6/7.7, 33.7/13.1
SynSeg-Net [13] | 71.6/41.3, 69.0/57.5, 51.6/63.6, 40.8/36.5, 58.2/49.7 | 11.7/8.6, 7.8/10.7, 7.0/5.4, 9.2/5.9, 8.9/7.6
CycleGAN [31] | 73.8/64.3, 75.7/30.7, 52.3/65.0, 28.7/43.0, 57.6/50.7 | 11.5/5.8, 13.6/9.8, 9.2/6.0, 8.8/5.0, 10.8/6.6
PnP-AdaNet [7] | 74.0/43.7, 68.9/47.0, 61.9/77.7, 50.8/48.6, 63.9/54.3 | 12.8/11.4, 6.3/14.5, 17.4/4.5, 14.7/5.3, 12.8/8.9
AdaOutput [29] | 65.2/60.8, 76.6/39.8, 54.4/71.5, 43.6/35.5, 59.9/51.9 | 17.9/5.7, 5.5/8.0, 5.9/4.6, 8.9/4.6, 9.6/5.7
CyCADA [11] | 72.9/60.5, 77.0/44.0, 62.4/77.6, 45.3/47.9, 64.4/57.5 | 9.6/7.7, 8.0/13.9, 9.6/4.8, 10.5/5.2, 9.4/7.9
DSFN [33] | 81.5/53.0, 82.7/62.3, 76.9/69.0, 60.0/36.7, 75.3/55.2 | 11.4/7.5, 5.2/8.1, 4.6/5.4, 4.2/4.9, 6.4/6.4
SIFA-v2 [3] | 81.3/67.0, 79.5/60.7, 73.8/75.1, 61.6/45.8, 74.1/62.1 | 7.9/6.2, 6.2/9.8, 5.5/4.4, 8.5/4.4, 7.0/6.2
DSAN* [10] | 79.9/71.3, 84.8/66.2, 82.8/76.2, 66.5/52.1, 78.5/66.5 | 7.7/4.4, 6.7/7.3, 3.8/5.5, 5.6/4.3, 5.9/5.4
SSC* [12] | 82.0/NA, 85.3/NA, 88.4/NA, 67.6/NA, 80.8/NA | 6.2/NA, 4.1/NA, 3.0/NA, 3.4/NA, 4.2/NA
Deep Co-Training (Our Proposed) | 86.7/72.6, 85.5/75.7, 84.8/87.2, 70.5/63.4, 81.8/74.7 | 7.5/5.2, 3.2/4.2, 3.0/2.6, 3.7/3.4, 4.2/3.8

Abdominal Multi-Organ Segmentation Performance (MRI→CT / CT→MRI)
Method | Dice: Spleen, R. kidney, L. kidney, Liver, Avg | ASD: Spleen, R. kidney, L. kidney, Liver, Avg
Supervised training | 93.8/91.0, 89.9/94.4, 94.1/92.6, 93.8/94.6, 92.9/93.1 | 0.6/1.2, 3.4/0.3, 0.8/1.1, 1.3/0.6, 1.5/0.8
Source only | 66.2/28.4, 66.3/11.7, 61.9/46.7, 52.5/73.1, 61.7/40.0 | 5.4/11.4, 4.8/25.7, 3.0/2.3, 4.2/2.1, 4.4/10.4
CBST [34] | 81.8/81.5, 77.3/81.8, 85.0/86.5, 83.4/77.9, 81.9/81.9 | 6.0/2.9, 4.4/1.1, 2.8/1.9, 3.8/2.9, 4.3/2.2
ATDA [24] | 85.5/43.0, 67.7/3.7, 62.2/48.6, 77.7/30.8, 73.3/31.5 | 3.8/7.8, 7.7/24.0, 15.9/7.4, 7.9/10.7, 8.8/12.5
SynSeg-Net [13] | 81.1/85.3, 82.6/83.9, 82.8/87.0, 83.8/83.5, 82.6/84.9 | 1.9/1.5, 2.4/0.9, 2.5/0.9, 4.5/2.5, 2.8/1.5
CycleGAN [31] | 83.3/79.4, 80.7/84.4, 82.9/89.1, 87.4/87.4, 83.6/85.1 | 2.2/2.3, 2.8/1.0, 1.7/0.7, 2.4/2.2, 2.3/1.5
AdaOutput [29] | 87.2/80.0, 81.7/87.1, 86.0/85.2, 84.0/85.5, 84.7/84.5 | 1.6/0.8, 3.2/0.6, 1.6/0.8, 2.3/1.5, 2.2/0.9
CyCADA [11] | 86.2/76.2, 84.8/86.3, 82.6/88.0, 85.8/90.3, 84.9/85.2 | 1.9/1.3, 2.0/0.6, 1.9/0.6, 2.3/1.0, 2.0/0.9
DSFN [33] | 82.4/78.3, 83.2/89.1, 84.4/90.7, 83.3/87.2, 83.3/86.3 | 2.1/4.1, 2.6/1.2, 1.8/0.6, 4.3/1.6, 2.7/1.9
SIFA-v2 [3] | 83.4/86.9, 80.1/89.2, 86.6/80.4, 87.7/88.6, 84.5/86.3 | 1.5/1.7, 2.3/0.6, 1.5/0.8, 1.9/1.3, 1.8/1.1
Deep Co-Training (Our Proposed) | 89.2/89.3, 81.7/89.8, 87.2/87.3, 89.9/86.2, 87.0/88.1 | 1.2/0.5, 2.4/0.7, 1.4/0.6, 1.5/1.4, 1.6/0.8
Improvement upon SIFA-v2:
DSAN* [10] | NA/-0.6, NA/+2.3, NA/+3.5, NA/+2.3, NA/+1.8 | NA/+0.3, NA/-0.1, NA/-0.4, NA/-0.5, NA/-0.2
SSC* [12] | +0.5/NA, +0/NA, +1.1/NA, +0.5/NA, +0.5/NA | +0.1/NA, +0/NA, -0.3/NA, +0/NA, +0/NA
Deep Co-Training (Our Proposed) | +5.8/+2.4, +1.6/+0.6, +0.6/+6.9, +2.2/-2.4, +2.5/+1.8 | -0.3/-1.2, +0.1/+0.1, -0.1/-0.2, -0.4/+0.1, -0.2/-0.3

Table 3 presents the experiment results on our large-scale private brain tissue segmentation dataset. As can be observed, our framework outperforms state-of-the-art UDA methods significantly. Specifically, our framework improves the cDice score by 12.0 points and reduces the ASD by 0.1 points compared to the state-of-the-art UDA method SIFA-v2. Note that we do not compare with the more advanced DSAN and SSC methods, as their codes are not publicly available, but our experiment results on the two public datasets already demonstrate the effectiveness of our framework when compared to theirs.

Table 3. Performance comparison with state-of-the-art domain adaptation methods on brain tissue segmentation (MRI→CT).
Method | cDice: CSF, GM, WM, Avg | ASD: CSF, GM, WM, Avg
Supervised training | 79.6, 74.2, 84.8, 79.6 | 0.7, 0.7, 0.8, 0.7
Source only | 12.7, 34.3, 9.7, 18.9 | 16.7, 5.6, 11.1, 11.1
CBST [34] | 33.4, 50.5, 37.0, 40.3 | 13.0, 6.4, 17.4, 12.3
SynSeg-Net [13] | 66.0, 57.7, 15.9, 46.5 | 1.3, 0.8, 2.7, 1.6
AdaOutput [29] | 60.0, 60.6, 23.9, 48.2 | 1.5, 0.9, 5.0, 2.5
SIFA-v2 [3] | 67.1, 60.7, 53.9, 60.6 | 1.2, 0.9, 1.7, 1.2
Deep Co-Training (Our Proposed) | 75.8, 66.1, 75.9, 72.6 | 1.1, 0.8, 1.4, 1.1

Fig. 3 shows the visual comparison results for cardiac substructure segmentation. Due to space limits, we put more visual comparison results on abdominal multi-organ segmentation in the supplementary material. We do not present the visualization results on brain tissue segmentation, as the dataset is private. As can be seen in the figure, the segmentation masks produced by our deep co-training framework are closer to the ground truth and contain fewer wrong semantic predictions. However, as shown in the fourth row of Fig. 3, all UDA methods fail to segment a small portion of the ascending aorta, which is disconnected from the main part due to the slice cut. The supervised learning method can accurately segment that portion, which indicates there is still a gap between existing UDA methods and supervised learning that needs to be filled in future work.

Figure 3. Visual comparison of segmentation results with different unsupervised domain adaptation methods for cardiac CT images and MRI images. The cardiac substructures AA, LAC, LVC, and MYO are indicated in green, orange, purple, and blue, respectively.
4.5 Discussion
Deep Co-Training Learns Better Feature Representations for Both Domains. One of our arguments for tackling the cross-modality medical image segmentation task as semi-supervised multi-modal learning is that semi-supervised multi-modal learning can leverage the complementary multi-modal information to learn better feature representations for both domains. To validate this, we evaluate the feature representations learned by the two constituent segmentation networks of our framework and those of the state-of-the-art UDA method SIFA-v2 on the cardiac substructure segmentation dataset. For the source representation, we fix the features learned by our framework and SIFA-v2, fine-tune the last layer of the segmentation network on labeled source data, and report performance on the source test data. We do the same for the target representation. The experiment results are shown in Table 4. As can be seen, both constituent segmentation networks in our framework learn much better feature representations than SIFA-v2 in both domains, owing to the complementary multi-modal information leveraged in co-training.

Table 4. Evaluation of the feature representations learned by the two constituent segmentation networks of our deep co-training framework and by SIFA-v2 on the cardiac substructure segmentation dataset. DCT-Fs = the Fs network in our deep co-training framework; DCT-Ft = the Ft network in our deep co-training framework.
Method (Dice) | Source Representation: MRI→CT, CT→MRI | Target Representation: MRI→CT, CT→MRI
SIFA-v2 [3] | 88.9, 83.2 | 78.6, 70.7
DCT-Fs | 91.0, 84.5 | 87.0, 79.0
DCT-Ft | 90.9, 84.1 | 87.2, 80.2

Deep Co-Training is More Robust to Source Annotation Scarcity. As our framework tackles the cross-modality medical image segmentation task as semi-supervised multi-modal learning, it naturally handles the case where we have limited annotations in the source domain. To verify this, we compare our framework with the state-of-the-art UDA method SIFA-v2 [3] as we decrease the annotated data size in the source domain. Fig. 4 shows the experiment results.

Figure 4. Evaluation of the performance of our framework with reduced source data annotations on cardiac substructure segmentation: (a) MRI→CT transfer task and (b) CT→MRI transfer task.
As we can observe, for all source annotation sizes, our framework significantly outperforms SIFA-v2; as the annotation size decreases, the performance drop of our framework is much smaller than that of SIFA-v2; more importantly, our framework with only 2 annotated source data volumes outperforms SIFA-v2 with 16 annotated source data volumes. These results highlight the wide applicability of our framework even under extreme annotation scarcity.

Sensitivity Analysis. We perform post-experiment sensitivity analysis on the two balancing weights of our framework, namely λ_adv and λ_reg. As can be seen in Fig. 5, our framework is generally robust to changes of λ_adv over a wide range. For λ_reg, too large or too small a regularization weight either hinders co-training or fails to effectively regularize the deep networks, which leads to poor performance. Yet even with the worst value of λ_reg in Fig. 5(b), our framework still performs better than SIFA-v2 and close to the previously best method DSAN. The empirical value λ_reg = 1 gives the best performance.

Figure 5. Sensitivity analysis of our framework on the cardiac substructure segmentation CT→MRI transfer task with (a) λ_adv and (b) λ_reg.

5 Conclusions
In this paper, we propose a novel method to tackle cross-modality medical image segmentation by converting the task into semi-supervised multi-modal learning with image translation. To this end, we propose a deep co-training framework, in which we address the challenge of co-training on divergent labeled and unlabeled data distributions with a theoretical analysis of multi-view adaptation and propose decomposed multi-view adaptation, a general multi-view adaptation methodology that shows better performance than adaptation with concatenated multi-view features. We further formulate inter-view regularization to tackle the challenge of co-training with deep networks; it is a general regularization method that makes deep co-training networks compatible with the underlying data distribution. We perform extensive experiments to evaluate our framework, including on a large-scale private dataset to test its applicability in real clinical settings. Our framework significantly outperforms state-of-the-art UDA methods on all three segmentation tasks, learns better feature representations, and is more robust to source data scarcity.

Acknowledgements
We would like to thank the reviewers for their comments, which helped improve this paper considerably. This work was supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-TC-2021-003), the Agency for Science, Technology and Research (A*STAR) through its AME Programmatic Funding Scheme under Project A20H4b0141, and under its RIE2020 Health and Biomedical Sciences (HBMS) Industry Alignment Fund Pre-Positioning (IAF-PP) grant no. H20C6a0032.

References
[1] Avrim Blum and Tom Mitchell, 'Combining labeled and unlabeled data with co-training', in Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 92-100, (1998).
[2] Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan, 'Unsupervised pixel-level domain adaptation with generative adversarial networks', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3722-3731, (2017).
[3] Cheng Chen, Qi Dou, Hao Chen, Jing Qin, and Pheng Ann Heng, 'Unsupervised bidirectional cross-modality adaptation via deeply synergistic image and feature alignment for medical image segmentation', IEEE Transactions on Medical Imaging, (2020).
[4] Minmin Chen, Kilian Q. Weinberger, and John Blitzer, 'Co-training for domain adaptation', Advances in Neural Information Processing Systems, 24, (2011).
[5] Minmin Chen, Kilian Q. Weinberger, and Yixin Chen, 'Automatic feature decomposition for single view co-training', in ICML, (2011).
[6] Xinlei Chen and Kaiming He, 'Exploring simple siamese representation learning', in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15750-15758, (2021).
[7] Qi Dou, Cheng Ouyang, Cheng Chen, Hao Chen, Ben Glocker, Xiahai Zhuang, and Pheng-Ann Heng, 'PnP-AdaNet: Plug-and-play adversarial domain adaptation network with a benchmark at cross-modality cardiac segmentation', arXiv preprint arXiv:1812.07907, (2018).
[8] Yaroslav Ganin and Victor Lempitsky, 'Unsupervised domain adaptation by backpropagation', in International Conference on Machine Learning, pp. 1180-1189. PMLR, (2015).
[9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, 'Generative adversarial nets', in Advances in Neural Information Processing Systems, pp. 2672-2680, (2014).
[10] Xiaoting Han, Lei Qi, Qian Yu, Ziqi Zhou, Yefeng Zheng, Yinghuan Shi, and Yang Gao, 'Deep symmetric adaptation network for cross-modality medical image segmentation', IEEE Transactions on Medical Imaging, 41(1), 121-132, (2021).
[11] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell, 'CyCADA: Cycle-consistent adversarial domain adaptation', in International Conference on Machine Learning, pp. 1989-1998. PMLR, (2018).
[12] Tao Hu, Shiliang Sun, Jing Zhao, and Dongyu Shi, 'Enhancing unsupervised domain adaptation via semantic similarity constraint for medical image segmentation', in Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, ed., Lud De Raedt, pp. 3071-3077. International Joint Conferences on Artificial Intelligence Organization, (7 2022). Main Track.
[13] Yuankai Huo, Zhoubing Xu, Hyeonsoo Moon, Shunxing Bao, Albert Assad, Tamara K. Moyo, Michael R. Savona, Richard G. Abramson, and Bennett A. Landman, 'SynSeg-Net: Synthetic segmentation without target modality ground truth', IEEE Transactions on Medical Imaging, 38(4), 1016-1025, (2018).
[14] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, 'Image-to-image translation with conditional adversarial networks', in IEEE Conference on Computer Vision and Pattern Recognition, (2017).
[15] A. Emre Kavur, N. Sinem Gezer, Mustafa Barış, Sinem Aslan, Pierre-Henri Conze, Vladimir Groza, Duc Duy Pham, Soumick Chatterjee, Philipp Ernst, Savaş Özkan, et al., 'CHAOS challenge - combined (CT-MR) healthy abdominal organ segmentation', Medical Image Analysis, 69, 101950, (2021).
[16] B. Landman, Z. Xu, J. E. Iglesias, M. Styner, T. R. Langerak, and A. Klein, 'Multi-atlas labeling beyond the cranial vault', (2020).
[17] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and Clara I. Sánchez, 'A survey on deep learning in medical image analysis', Medical Image Analysis, 42, 60-88, (2017).
[18] Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip S. Yu, 'Transfer joint matching for unsupervised domain adaptation', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1410-1417, (2014).
[19] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi, 'V-Net: Fully convolutional neural networks for volumetric medical image segmentation', in 2016 Fourth International Conference on 3D Vision (3DV), pp. 565-571. IEEE, (2016).
[20] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii, 'Virtual adversarial training: a regularization method for supervised and semi-supervised learning', IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8), 1979-1993, (2018).
[21] Kamal Nigam and Rayid Ghani, 'Analyzing the effectiveness and applicability of co-training', in Proceedings of the Ninth International Conference on Information and Knowledge Management, pp. 86-93, (2000).
[22] Siyuan Qiao, Wei Shen, Zhishuai Zhang, Bo Wang, and Alan Yuille, 'Deep co-training for semi-supervised image recognition', in Proceedings of the European Conference on Computer Vision (ECCV), pp. 135-152, (2018).
[23] Alec Radford, Luke Metz, and Soumith Chintala, 'Unsupervised representation learning with deep convolutional generative adversarial networks', arXiv preprint arXiv:1511.06434, (2015).
[24] Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada, 'Asymmetric tri-training for unsupervised domain adaptation', arXiv preprint arXiv:1702.08400, (2017).
[25] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen, 'Regularization with stochastic transformations and perturbations for deep semi-supervised learning', Advances in Neural Information Processing Systems, 29, (2016).
[26] Reuben R. Shamir, Yuval Duchin, Jinyoung Kim, Guillermo Sapiro, and Noam Harel, 'Continuous Dice coefficient: a method for evaluating probabilistic segmentations', arXiv preprint arXiv:1906.11031, (2019).
[27] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A. Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li, 'FixMatch: Simplifying semi-supervised learning with consistency and confidence', Advances in Neural Information Processing Systems, 33, 596-608, (2020).
[28] Antti Tarvainen and Harri Valpola, 'Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results', Advances in Neural Information Processing Systems, 30, (2017).
[29] Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker, 'Learning to adapt structured output space for semantic segmentation', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7472-7481, (2018).
[30] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell, 'Deep domain confusion: Maximizing for domain invariance', arXiv preprint arXiv:1412.3474, (2014).
[31] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros, 'Unpaired image-to-image translation using cycle-consistent adversarial networks', in Proceedings of the IEEE International Conference on Computer Vision, pp. 2223-2232, (2017).
[32] Xiahai Zhuang and Juan Shen, 'Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI', Medical Image Analysis, 31, 77-87, (2016).
[33] Danbing Zou, Qikui Zhu, and Pingkun Yan, 'Unsupervised domain adaptation with dual-scheme fusion network for medical image segmentation', in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, International Joint Conferences on Artificial Intelligence Organization, pp. 3291-3298, (2020).
[34] Yang Zou, Zhiding Yu, B.V.K. Vijaya Kumar, and Jinsong Wang, 'Unsupervised domain adaptation for semantic segmentation via class-balanced self-training', in Proceedings of the European Conference on Computer Vision (ECCV), pp. 289-305, (2018).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "r4eSn6WZRCI",
"year": null,
"venue": "EC2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=r4eSn6WZRCI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Algorithms against Anarchy: Understanding Non-Truthful Mechanisms.",
"authors": [
"Paul Dütting",
"Thomas Kesselheim"
],
"abstract": "The algorithmic requirements for dominant strategy incentive compatibility, or truthfulness, are well understood. Is there a similar characterization of algorithms that when combined with a suitable payment rule yield near-optimal welfare in all equilibria? We address this question by providing a tight characterization of a (possibly randomized) mechanism's Price of Anarchy provable via smoothness, for single-parameter settings. The characterization assigns a unique value to each allocation algorithm; this value provides an upper and a matching lower bound on the Price of Anarchy of a derived mechanism provable via smoothness. The characterization also applies to the sequential or simultaneous composition of single-parameter mechanisms. Importantly, the factor that we identify is typically not in one-to-one correspondence to the approximation guarantee of the algorithm. Rather, it is usually the product of the approximation guarantee and the degree to which the mechanism is loser independent. We apply our characterization to show the optimality of greedy mechanisms for single-minded combinatorial auctions, whether these mechanisms are polynomial-time computable or not. We also use it to establish the optimality of a non-greedy, randomized mechanism for independent set in interval graphs and show that it is strictly better than any other deterministic mechanism.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ehc1Qtybz3",
"year": null,
"venue": "EC2011",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=ehc1Qtybz3",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Simplicity-expressiveness tradeoffs in mechanism design.",
"authors": [
"Paul Dütting",
"Felix A. Fischer",
"David C. Parkes"
],
"abstract": "A fundamental result in mechanism design theory, the so-called revelation principle, asserts that for many questions concerning the existence of mechanisms with a given outcome one can restrict attention to truthful direct-revelation mechanisms. In practice, however, many mechanisms use a restricted message space. This motivates the study of the tradeoffs involved in choosing simplified mechanisms, which can sometimes bring benefits in precluding bad or promoting good equilibria, and other times impose costs on welfare and revenue. We study the simplicity-expressiveness tradeoff in two representative settings, sponsored search auctions and combinatorial auctions, each being a canonical example for complete information and incomplete information analysis, respectively. We observe that the amount of information available to the agents plays an important role for the tradeoff between simplicity and expressiveness.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "3BOWaTvh-4Ud",
"year": null,
"venue": "EC2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=3BOWaTvh-4Ud",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Algorithms as Mechanisms: The Price of Anarchy of Relax-and-Round.",
"authors": [
"Paul Dütting",
"Thomas Kesselheim",
"Éva Tardos"
],
"abstract": "Many algorithms, that are originally designed without explicitly considering incentive properties, are later combined with simple pricing rules and used as mechanisms. The resulting mechanisms are often natural and simple to understand. But how good are these algorithms as mechanisms? Truthful reporting of valuations is typically not a dominant strategy (certainly not with a pay-your-bid, first-price rule, but it is likely not a good strategy even with a critical value, or second-price style rule either). Our goal is to show that a wide class of approximation algorithms yields this way mechanisms with low Price of Anarchy. The seminal result of Lucier and Borodin [2010] shows that combining a greedy algorithm that is an α-approximation algorithm with a pay-your-bid payment rule yields a mechanism whose Price of Anarchy is O(α). In this paper we significantly extend the class of algorithms for which such a result is available by showing that this close connection between approximation ratio on the one hand and Price of Anarchy on the other also holds for the design principle of relaxation and rounding provided that the relaxation is smooth and the rounding is oblivious. We demonstrate the far-reaching consequences of our result by showing its implications for sparse packing integer programs, such as multi-unit auctions and generalized matching, for the maximum traveling salesman problem, for combinatorial auctions, and for single source unsplittable flow problems. In all these problems our approach leads to novel simple, near-optimal mechanisms whose Price of Anarchy either matches or beats the performance guarantees of known mechanisms.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "MOyPO8lUtDt",
"year": null,
"venue": "EC2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=MOyPO8lUtDt",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Peer Prediction with Heterogeneous Users.",
"authors": [
"Arpit Agarwal",
"Debmalya Mandal",
"David C. Parkes",
"Nisarg Shah"
],
"abstract": "Peer prediction mechanisms incentivize agents to truthfully report their signals, in the absence of a verification mechanism, by comparing their reports with those of their peers. Prior work in this area is essentially restricted to the case of homogeneous agents, whose signal distributions are identical. This is limiting in many domains, where we would expect agents to differ in taste, judgment and reliability. Although the Correlated Agreement (CA) mechanism [30] can be extended to handle heterogeneous agents, the new challenge is with the efficient estimation of agent signal types. We solve this problem by clustering agents based on their reporting behavior, proposing a mechanism that works with clusters of agents and designing algorithms that learn such a clustering. In this way, we also connect peer prediction with the Dawid and Skene [5] literature on latent types. We retain the robustness against coordinated misreports of the CA mechanism, achieving an approximate incentive guarantee of ε-informed truthfulness. We show on real data that this incentive approximation is reasonable in practice, and even with a small number of clusters.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "s6sK_Jp9QL9",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200132",
"forum_link": "https://openreview.net/forum?id=s6sK_Jp9QL9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Secure Social Recommendation Based on Secret Sharing",
"authors": [
"Chaochao Chen",
"Liang Li",
"Bingzhe Wu",
"Cheng Hong",
"Li Wang",
"Jun Zhou"
],
"abstract": "Nowadays, privacy preserving machine learning has been drawing much attention in both industry and academy. Meanwhile, recommender systems have been extensively adopted by many commercial platforms (e.g. Amazon) and they are mainly built based on user-item interactions. Besides, social platforms (e.g. Facebook) have rich resources of user social information. It is well known that social information, which is rich on social platforms such as Facebook, are useful to build intelligent recommender systems. It is anticipated to combine the social information with the user-item ratings to improve the overall recommendation performance. Most existing recommendation models are built based on the assumptions that the social information are available. However, different platforms are usually reluctant to (or can not) share their data due to certain concerns. In this paper, we first propose a SEcure SOcial RECommendation (SeSoRec) framework which is able to (1) collaboratively mine knowledge from social platform to improve the recommendation performance of the rating platform, and (2) securely keep the raw data of both platforms. We then propose a Secret Sharing based Matrix Multiplication (SSMM) protocol to optimize SeSoRec and prove its correctness and security theoretically. By applying minibatch gradient descent, SeSoRec has linear time complexities in terms of both computation and communication. The comprehensive experimental results on three real-world datasets demonstrate the effectiveness of our proposed SeSoRec and SSMM.",
"keywords": [],
"raw_extracted_content": "Secure Social Recommendation Based on Secret Sharing\nChaochao Chen1, Liang Li2, Bingzhe Wu3, Cheng Hong4,L iW a n g5, Jun Zhou6\nAbstract. Nowadays, privacy preserving machine learning has been\ndrawing much attention in both industry and academy. Meanwhile,\nrecommender systems have been extensively adopted by many com-mercial platforms (e.g. Amazon) and they are mainly built based\non user-item interactions. Besides, social platforms (e.g. Facebook)\nhave rich resources of user social information. It is well known that\nsocial information, which is rich on social platforms such as Face-\nbook, are useful to build intelligent recommender systems. It is an-\nticipated to combine the social information with the user-item rat-\nings to improve the overall recommendation performance. Most ex-\nisting recommendation models are built based on the assumptions\nthat the social information are available. However, different plat-forms are usually reluctant to (or can not) share their data due tocertain concerns. In this paper, we first propose a SEcure SOcial\nRECommendation (SeSoRec) framework which is able to (1) col-laboratively mine knowledge from social platform to improve the\nrecommendation performance of the rating platform, and (2) se-\ncurely keep the raw data of both platforms. We then propose a\nSecret Sharing based Matrix Multiplication (SSMM) protocol to op-\ntimize SeSoRec and prove its correctness and security theoreti-\ncally. By applying minibatch gradient descent, SeSoRec has linear\ntime complexities in terms of both computation and communication.\nThe comprehensive experimental results on three real-world datasets\ndemonstrate the effectiveness of our proposed SeSoRec andSSMM.\n1 Introduction\nNowadays, recommender systems have been extensively used inmany commercial platforms [3]. The key point for recommendationis to use as much information as possible to learn better preferencesof users and items. To achieve this, besides user-item interaction\ninformation, additional information such as social relationship and\ncontextual information have been utilized [19, 29, 2].\nExisting researchers usually make the assumption that all kinds of\ninformation are available, which is somehow inconsistent with mostof the real-world cases. In practice, different kinds of information are\nlocated on different platforms, e.g., huge user-item interaction infor-\nmation on Amazon while rich user social information on Facebook.\nHowever, different platforms are reluctant to (or can not) share their\nown data due to competition or regulation reasons.\nTherefore, for the recommendation platforms who have rich user-\nitem interaction data, how to use the additional data such as social\ninformation on other platforms to further improve recommendation\n1Ant Financial Services Group, China, email: chaochao.ccc@antfin.com\n2Huawei Noah’s Ark Lab, China, email: [email protected]\n3Peking University, China, email: [email protected]\n4Alibaba security, China, email: [email protected]\n5Ant Financial Services Group, China, email: raymond.wangl@antfin.com\n6Ant Financial Services Group, China, email: jun.zhoujun@antfin.comperformance, meanwhile protect the raw data security of both plat-\nforms, is a crucial question to be answered. It is worthwhile to study\nsuch a research topic in both industry and academia.\nSecure Multi-Party Computation (MPC) provides a solution to the\nabove question. 
MPC aims to jointly compute a function for multiple parties while keeping the individual inputs private [35], and it has been adopted by many machine learning algorithms for secure data mining, including decision trees [17], linear regression [27], and logistic regression [25]. However, it has not yet been applied to the above-mentioned secure multi-party recommendation problem.

In this paper, we consider the scenarios where user-item interaction information and user social information are on different platforms, which is quite common in practice. Platform A has user-item interaction information and platform B has user social information; the challenge is to improve the recommendation performance of A by securely using the user social information on B. To fulfill this, we formalize secure social recommendation as a MPC problem and propose a SEcure SOcial RECommendation (SeSoRec) framework for it. Our proposed SeSoRec is able to (1) collaboratively mine knowledge from the social platform to improve the recommendation performance of the rating platform, and (2) keep the raw data of both platforms secure. We further propose a novel Secret Sharing based Matrix Multiplication (SSMM) protocol to optimize SeSoRec, and we also prove its correctness and security. Our proposed SeSoRec and SSMM have linear computation and communication complexities. Experimental results conducted on three real-world datasets demonstrate the effectiveness of our proposed SeSoRec and SSMM.

We summarize our main contributions as follows:
• We observe a secure social recommendation problem in practice, formalize it as a MPC problem, and propose a SeSoRec framework for it.
• We propose a novel Secret Sharing based Matrix Multiplication (SSMM) protocol to optimize SeSoRec, and we also prove its correctness and security.
• Our proposed SeSoRec and SSMM have linear computation and communication complexities.
• Experimental results conducted on three real-world datasets demonstrate the effectiveness of SeSoRec and SSMM.

2 Background
In this section, we review related background, including (1) social recommendation, (2) secure multi-party computation, and (3) privacy preserving recommendation.

2.1 Social Recommendation
Factorization based recommendation [24, 16, 3, 2] is one of the most popular approaches in recommender systems. It factorizes a user-item rating (or other interaction) matrix into a user latent matrix and an item latent matrix. However, traditional factorization based approaches assume that users are independent and identically distributed, which is inconsistent with the reality that users are inherently connected via various types of social relations such as friendships and trust relations. Therefore, social factorization models take social relationships into account to improve recommendation performance [19], and the basic intuition is that connected users are likely to have similar preferences.
According to [32], social factorization models can be formally stated as:

social factorization model = basic factorization model + social information model.

To date, different social information models have been proposed to capture social information.

2.2 Secure Multi-Party Computation
The concept of secure Multi-Party Computation (MPC) was formally introduced in [34]. MPC aims to generate methods (or protocols) for multiple parties to jointly compute a function (e.g., vector multiplication) over their inputs (e.g., a vector held by each party) while keeping those inputs private. MPC can be implemented using different protocols, such as garbled circuits [35], GMW [10], and secret sharing [30]. MPC has been applied to many machine learning algorithms, such as decision trees [17], linear regression [27], logistic regression [25], and collaborative filtering [31]. In this paper, we propose a secret sharing based matrix multiplication algorithm for secure social recommendation.

2.3 User Privacy Preserving Recommendation
Another related research area is privacy preserving recommendation. Recently, user privacy has drawn lots of attention, and how to train models while preserving user privacy has become a hot research topic, e.g., federated learning and shared machine learning [21, 4]. Some research works adopt garbled circuits to protect user privacy while making recommendations [26]. Other works use differential privacy to protect user privacy while training recommendation models [22, 14, 23].

Difference between user privacy and data security. User privacy preserving recommendation aims to protect user privacy on the customer side (2C), while data security based recommendation intends to protect the data security of business partners who have already collected users' private data (2B). In this paper, we aim to (1) integrate the rating platform and the social platform for better recommendation, and (2) protect the data security of both platforms.

3 The Proposed Model
In this section, we first formally describe the secure social recommendation problem, and then present our proposed SEcure SOcial RECommendation (SeSoRec) framework for this problem.

3.1 Problem Definition
Formally, let A be the user-item interaction platform, and U and V be the user and item sets on it, with I and J denoting the user size and item size, respectively. Let R be the set of user-item interactions between users i ∈ U and items j ∈ V; |R| is the total number of ratings. Let R be the user-item interaction matrix, with element r_ij being the rating of user i on item j. Let U ∈ R^{K×I} and V ∈ R^{K×J} denote the user and item latent factor matrices, with their column vectors u_i and v_j being the K-dimensional latent factors for user i and item j, respectively.
Let B be the user social platform, and we assume that the social platform B has the same user set U as the user-item interaction platform A. We further let S be the user-user social matrix, with element s_if being the social relationship strength between user i and user f. (Note that our model can be slightly modified to handle the case when S is asymmetric.)

The problem of secure social recommendation is that platforms A and B securely keep their own data and models, while A improves its recommendation performance by utilizing the social information of B.

3.2 Secure Social Recommendation Framework
Social recommendation can be formalized as a basic factorization model plus a social information model, based on the assumption that connected users tend to have similar preferences, as described in Section 2.1. Most existing social factorization models have the following objective function:

\min_{u_i, v_j} \sum_{i=1}^{I}\sum_{j=1}^{J} f(r_{ij}, u_i, v_j) + \gamma \sum_{i=1}^{I}\sum_{f=1}^{I} g(s_{if}, u_i, u_f),    (1)

where f(r_{ij}, u_i, v_j) is the loss of the basic factorization model, which constrains the relationship between the true ratings and the predicted ratings, g(s_{if}, u_i, u_f) is the loss of the social information model, which constrains the preferences of users who have social relations, and γ controls the social restriction strength. A classical example is the Social Regularizer recommendation (Soreg) approach [19], where

f(r_{ij}, u_i, v_j) = \frac{1}{2} I_{ij} (r_{ij} - u_i^T v_j)^2,    (2)
g(s_{if}, u_i, u_f) = \frac{1}{2} s_{if} \|u_i - u_f\|_F^2,    (3)

where I_{ij} is the indicator function that equals 1 if there is an observed user-item interaction pair and 0 otherwise, and ||·||_F is the Frobenius norm.

Traditional social recommendation frameworks such as Soreg can be efficiently solved by stochastic Gradient Descent (GD). However, the social information model in Equation (3) involves a real number s_{if} that belongs to the social platform B and two real-valued vectors u_i and u_f that are located on the rating platform A; secure computation is not guaranteed, due to the breach that A could easily deduce the values s_{if} belonging to B.
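For reference, a plain (non-secure, single-party) NumPy sketch of the two Soreg loss terms of Eqs. (2)-(3) is shown below; it materializes all pairwise user differences, which costs O(K·I²) memory, and is only meant to pin down the notation:

```python
import numpy as np

def soreg_losses(R, Ind, S, U, V):
    """f and g of Eqs. (2)-(3), summed over all pairs (plain-text sketch).
    U: (K, I) user factors, V: (K, J) item factors,
    R, Ind: (I, J) rating and indicator matrices, S: (I, I) social matrix."""
    f = 0.5 * (Ind * (R - U.T @ V) ** 2).sum()          # Eq. (2)
    diff = U[:, :, None] - U[:, None, :]                # (K, I, I): u_i - u_f
    g = 0.5 * (S * (diff ** 2).sum(axis=0)).sum()       # Eq. (3)
    return f, g
```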
Equation (1)becomes\nmin\nUB,VBL=1\n2||IB◦/parenleftBig\nRB−UT\nBVB/parenrightBig\n||2F\n+γ\n2SUM/parenleftBig\nDB◦(UT\nBUB)/parenrightBig\n−γSUM/parenleftBig\nSB◦(UTBU)/parenrightBig\n+γ\n2SUM/parenleftBig\nE◦(UTU)/parenrightBig\n+λ\n2/parenleftbig\n||U B||2F+||V B||2F/parenrightbig\n,(4)\n7Note that our model can be slightly modifed to meet the case when Sis\nasymmetric.C.Chen etal./Secur eSocial Recommendation Based onSecretSharing 507\nAlgorithm 1: Secure social recommendation\nInput: The observed rating matrix (R) on platform A, user\nsocial matrix (S) on platform B, regularization\nstrength(γ ,λ), learning rate (θ ), and maximum iterations\n(T)\nOutput: user latent matrix (U) and item latent matrix (V)o n\nplatformA\n1PlatformAinitializes Uand V\n2fort=1 toTdo\n3AandBcalculate DTUand STUbased on the secure\nmatrix multiplication in Algorithm 2\n4Alocally calculates∂L\n∂Ubased on Equation (5)\n5Alocally calculates∂L\n∂Vbased on Equation (6)\n6Alocally updates UbyU←U−θ∂L\n∂U\n7Alocally updates VbyV←V−θ∂L\n∂V\n8end\n9return U and VonA\nwhere DB∈R|UB|×|U B|is a diagonal matrix with diagonal element\ndb=/summationtextI\nf=1sbf,SB∈R|UB|×Iis the social matrix of the users\nin current minibatch, and E∈RI×Iis also a diagonal matrix with\ndiagonal element ei=/summationtext|UB|\nb=1sbi. The gradients of Lin Equation (4)\nwith respect to UBand VBare\n∂L\n∂UB=−V B/parenleftbigg/parenleftBig\nRB−UT\nBVB/parenrightBigT\n◦IB/parenrightbigg\n+γ\n2UBDTB\n−γUST\nB+γ\n2UBETB+λUB(5)\n∂L\n∂VB=−U B/parenleftbigg/parenleftBig\nRB−UT\nBVB/parenrightBigT\n◦IB/parenrightbigg\n+λVB, (6)\nwhere EB∈R|UB|×|U B|is a diagonal matrix with diagonal element\neb=ei|i=bwhich is get by extracting the corresponding users’ di-\nagonal elements from Ein current batch.\nWe observe in Equations (5) and (6) that the matrix product terms\nUBDT\nB,USTB, and UBETBare crucial. These terms involve one matrix\n(UorUB) on the rating platform and another matrix (D B,SB,o rE B)\non the social platform. All the other terms can be calculated locally\nby the rating platform. Therefore, we conclude that the key to secure\nsocial recommendation is the secure matrix multiplication operation,which is a secure MPC problem. We summarize the proposed SEcure\nSOcial RECommendation (SeSoRec) solution in Algorithm 1, and\nwill present how to perform secure matrix multiplication in the next\nsection.\n4 Secret Sharing based Matrix Multiplication\nIn this section, we first describe technical preliminaries, and then\npresent a secure matrix multiplication protocol, followed by its cor-rectness and security proof.\n4.1 Preliminaries\nSecret Sharing. Our proposal relies on Additive Sharing. We briefly\nreview this but refer the reader to [7] for more details. To additivelyshare(Shr(·))an/lscript-bit value xfor two parties (A andB), partyA\ngenerates x\nB∈Z2/lscriptuniformly at random, sends xBto to party B,\nand keeps xA=(x−xB)mod2/lscript.W eu s e/angbracketleft x/angbracketrightito denote the share\nof partyi. To reconstruct (Rec(·,·))an additively shared value /angbracketleftx/angbracketright,each party isends/angbracketleftx/angbracketrightito one who computes/summationtext\niximod2/lscript,i∈\n{A,B}. In this paper, we denote additive sharing by /angbracketleft·/angbracketright.\nApply to decimal numbers. The above protocol can not work directly\nwith decimal numbers, since it is not possible to sample uniformlyinR[5]. Following the existing work [25], we approximate deci-\nmal arithmetics by using fixed-point arithmetics. 
4 Secret Sharing based Matrix Multiplication
In this section, we first describe technical preliminaries, then present a secure matrix multiplication protocol, followed by its correctness and security proofs.

4.1 Preliminaries
Secret Sharing. Our proposal relies on additive sharing. We briefly review it here but refer the reader to [7] for more details. To additively share (Shr(·)) an ℓ-bit value x between two parties (A and B), party A generates x_B ∈ Z_{2^ℓ} uniformly at random, sends x_B to party B, and keeps x_A = (x − x_B) mod 2^ℓ. We use ⟨x⟩_i to denote the share of party i. To reconstruct (Rec(·,·)) an additively shared value ⟨x⟩, each party i sends ⟨x⟩_i to the other, who computes \sum_i x_i \bmod 2^ℓ, i ∈ {A, B}. In this paper, we denote additive sharing by ⟨·⟩.

Apply to decimal numbers. The above protocol cannot work directly with decimal numbers, since it is not possible to sample uniformly in R [5]. Following the existing work [25], we approximate decimal arithmetics by fixed-point arithmetics. First, fixed-point addition is trivial. Second, for fixed-point multiplication, we use the following strategy. Suppose a and b are two decimal numbers with at most l_F bits in the fractional part. We first transform them to integers by letting a' = 2^{l_F} a and b' = 2^{l_F} b, and then calculate z = a'b'. Finally, the last l_F bits of z are truncated so that it has at most l_F bits representing the fractional part. The correctness of this truncation technique for secret sharing can be found in [25].

Simulation-based Security Proof. To formally prove that a protocol is secure, we adopt the semi-honest point of view [9], where each participant truthfully obeys the protocol while being curious about the other parties' original data. Under the real-world/ideal-world simulation-based proof [18], whatever can be computed by one party can be simulated given only the messages it receives during the protocol, which implies that each party learns nothing from the protocol execution beyond what it can derive from the messages received in the protocol. To formalize our security proof, we need the following notations:

• We use f(x_1, x_2) to denote a function with two variables, where x_1, x_2 ∈ {0,1}^n could be encodings of any mathematical objects, e.g., integers, vectors, matrices, or even functionals. We also use π to denote a two-party protocol for computing f.

• The view of the i-th party (i ∈ {A, B}) during the execution of π is denoted as view^π_i(x_1, x_2, n), which can be expanded as (x_i, r_i, m_i), where x_i is the input of the i-th party, r_i is its internal random bits, and m_i is the messages received or derived by the i-th party during the execution of π. Note that m_i includes all the intermediate messages received, all information derived from the intermediate messages, and also the output of the i-th party during the protocol.

• A probability ensemble X = {X(a, n)}_{a ∈ {0,1}^*; n ∈ N} is an infinite sequence of random variables indexed by a ∈ {0,1}^* and n ∈ N. In the context of secure multi-party computation, a represents each party's input and n represents the problem size.

Definition 1. Two probability ensembles P = {P(a, n)}_{a ∈ {0,1}^*; n ∈ N} and Q = {Q(a, n)}_{a ∈ {0,1}^*; n ∈ N} are said to be computationally indistinguishable, denoted by P \overset{c}{\equiv} Q, if for every non-uniform polynomial-time algorithm D and every polynomial p(·), there exists an N ∈ N such that for every a ∈ {0,1}^* and every n ≥ N,

|\Pr\{D(P(a, n)) = 1\} - \Pr\{D(Q(a, n)) = 1\}| \le \frac{1}{p(n)}.

Definition 2. Let f(x_1, x_2) be a function. We say a two-party protocol π computes f with information leakage v_1 to party A and v_2 to party B, where each party is viewed as a semi-honest adversary, if there exist probabilistic polynomial-time algorithms S_1 and S_2 such that

\{S_1(1^n, x_1, v_1(x_1, x_2))\}_{x_1, x_2, n} \overset{c}{\equiv} \{view^\pi_1(x_1, x_2, n)\},
\{S_2(1^n, x_2, v_2(x_1, x_2))\}_{x_1, x_2, n} \overset{c}{\equiv} \{view^\pi_2(x_1, x_2, n)\},

where x_1, x_2 ∈ {0,1}^* and |x_1| = |x_2| = n.
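A minimal NumPy sketch of the additive sharing and fixed-point encoding described above follows; ℓ = 64 and l_F = 13 are arbitrary choices here, and unsigned 64-bit arithmetic, which wraps modulo 2^64, realizes the ring Z_{2^ℓ}:

```python
import numpy as np

LF = 13                                    # fractional bits l_F (assumed)
rng = np.random.default_rng(0)

def encode(x):
    """Real -> Z_{2^64} fixed-point: scale by 2^LF, two's-complement wrap."""
    return np.round(np.asarray(x, dtype=np.float64) * (1 << LF)) \
             .astype(np.int64).astype(np.uint64)

def decode(z):
    """Inverse of encode; the top half of the ring is read as negative."""
    return z.astype(np.int64).astype(np.float64) / (1 << LF)

def share(x):
    """Shr(x): one uniformly random share, one wrapped difference share."""
    x_b = rng.integers(0, 2**64, size=np.shape(x), dtype=np.uint64)
    x_a = encode(x) - x_b                  # wraps mod 2^64 (uint64 arithmetic)
    return x_a, x_b

def reconstruct(x_a, x_b):
    """Rec: add the shares back together mod 2^64 and decode."""
    return decode(x_a + x_b)

# e.g. reconstruct(*share(-2.5)) == -2.5 (up to 2^-LF precision)
```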
We call this approach Trusted Initializer based Secure Matrix Multiplication (TISMM); it may not be applicable in practice, and it needs to generate many random matrices, raising efficiency concerns.

In this paper, we propose a novel protocol for secure and efficient matrix multiplication using secret sharing. Suppose two parties A and B hold a matrix $P \in \mathbb{R}^{x \times y}$ and a matrix $Q \in \mathbb{R}^{y \times z}$ respectively, where y is an even number. [Footnote 8: One can simply make y even by appending a zero column to P and a zero row to Q.] Our algorithm generalizes the inner product algorithm proposed in [37] to compute the matrix product PQ. We first summarize the proposed Secret Sharing based Matrix Multiplication (SSMM) in Algorithm 2, and then prove its correctness and security.

Algorithm 2: Secret Sharing based Matrix Multiplication (SSMM)
Input: a private matrix $P \in \mathbb{R}^{x \times y}$ for A, and a private matrix $Q \in \mathbb{R}^{y \times z}$ for B
Output: a matrix $M \in \mathbb{R}^{x \times z}$ for A and a matrix $N \in \mathbb{R}^{x \times z}$ for B, such that M + N = PQ
1. A and B locally generate random matrices $P' \in \mathbb{R}^{x \times y}$ and $Q' \in \mathbb{R}^{y \times z}$
2. A locally extracts the even and odd columns of P', obtaining $P'_e \in \mathbb{R}^{x \times y/2}$ and $P'_o \in \mathbb{R}^{x \times y/2}$
3. B locally extracts the even and odd rows of Q', obtaining $Q'_e \in \mathbb{R}^{y/2 \times z}$ and $Q'_o \in \mathbb{R}^{y/2 \times z}$
4. A computes $P_1 = P + P'$ and $P_2 = P'_e + P'_o$, and sends $P_1$ and $P_2$ to B
5. B computes $Q_1 = Q' - Q$ and $Q_2 = Q'_e - Q'_o$, and sends $Q_1$ and $Q_2$ to A
6. A locally computes $M = (P + 2P')Q_1 + (P_2 + P'_o)Q_2$
7. B locally computes $N = P_1(2Q - Q') - P_2(Q_2 + Q'_e)$
8. B sends N to A, and A calculates M + N
9. return M + N for A

4.3 Correctness Proof

According to Algorithm 2, we have

$$M + N = (P + 2P')Q_1 + (P_2 + P'_o)Q_2 + P_1(2Q - Q') - P_2(Q_2 + Q'_e)$$
$$= PQ_1 + 2P'Q_1 + P_2Q_2 + P'_oQ_2 + 2P_1Q - P_1Q' - P_2Q_2 - P_2Q'_e \quad (7)$$
$$= PQ' - PQ + 2P'Q' - 2P'Q + P'_oQ'_e - P'_oQ'_o + 2PQ + 2P'Q - PQ' - P'Q' - P'_eQ'_e - P'_oQ'_e \quad (8)$$
$$= PQ + P'Q' - P'_oQ'_o - P'_eQ'_e \quad (9)$$
$$= PQ. \quad (10)$$

Equation (8) follows by substituting $P_1$, $P_2$, $Q_1$, and $Q_2$ into Equation (7) according to Algorithm 2 (Lines 4 and 5), and Equation (9) follows by simplifying Equation (8). The (i,j)-th entry of $P'Q'$ is the inner product of the i-th row of P' and the j-th column of Q'. By the definition of matrix multiplication, the (i,j)-th entry of $P'_oQ'_o$ (resp. $P'_eQ'_e$) is the inner product of the odd (resp. even) entries of the i-th row of P' with the odd (resp. even) entries of the j-th column of Q'; hence $P'Q' = P'_oQ'_o + P'_eQ'_e$, and the last three terms in Equation (9) cancel. This proves correctness.

4.4 Security Proof

Theorem 1. The SSMM protocol (Algorithm 2) computes matrix multiplication with information leakage $Q_e - Q_o$ to A and information leakage $P_e + P_o$ to B.

We first discuss the information disclosure of Algorithm 2 intuitively. Let $P_e$ and $P_o$ be the sub-matrices of P formed by its even and odd columns, and let $Q_e$ and $Q_o$ be the sub-matrices of Q formed by its even and odd rows. As indicated in Line 4 of Algorithm 2, B receives $P_1$ and $P_2$ from A. By extracting the even-column sub-matrix $P_{1e}$ and the odd-column sub-matrix $P_{1o}$ of $P_1$, B can calculate $P_3 = P_{1e} + P_{1o}$. Since $P_1 = P + P'$, we have $P_{1e} = P_e + P'_e$ and $P_{1o} = P_o + P'_o$; thus B can compute $P_e + P_o$ by subtracting $P_2$ from $P_3$. A similar argument shows that A can compute $Q_e - Q_o$ as partial information obtained from B. Although both A and B have some information disclosed in this way, their private data remain concealed. A sketch of the protocol, together with a numerical check of its correctness, is given below.
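The following NumPy sketch simulates both parties of Algorithm 2 in a single process over the reals; the actual protocol runs over $\mathbb{Z}_{2^\ell}$ with fixed-point encodings and a real network, so the real-valued arithmetic and names here are simplifications for readability.

```python
# Secret Sharing based Matrix Multiplication (Algorithm 2), simulated
# locally: A holds P, B holds Q, and the outputs satisfy M + N == P @ Q.
import numpy as np

def ssmm(P, Q, rng=None):
    rng = rng or np.random.default_rng()
    assert P.shape[1] % 2 == 0, "pad a zero column/row if y is odd"
    Pp = rng.standard_normal(P.shape)        # A's random P'
    Qp = rng.standard_normal(Q.shape)        # B's random Q'
    Pe, Po = Pp[:, 0::2], Pp[:, 1::2]        # even/odd columns of P'
    Qe, Qo = Qp[0::2, :], Qp[1::2, :]        # even/odd rows of Q'
    P1, P2 = P + Pp, Pe + Po                 # A -> B (Line 4)
    Q1, Q2 = Qp - Q, Qe - Qo                 # B -> A (Line 5)
    M = (P + 2 * Pp) @ Q1 + (P2 + Po) @ Q2   # A, locally (Line 6)
    N = P1 @ (2 * Q - Qp) - P2 @ (Q2 + Qe)   # B, locally (Line 7)
    return M, N

P = np.random.rand(3, 4)                     # A's private matrix
Q = np.random.rand(4, 5)                     # B's private matrix
M, N = ssmm(P, Q)
assert np.allclose(M + N, P @ Q)             # Equations (7)-(10)
```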
We now rigorously prove the security level of SSMM using the preliminaries given above. Note that we first assume all matrices are over the finite field $\mathbb{Z}_{2^\ell}$ and then apply fixed-point decimal arithmetic. Without loss of generality, we first let A be the adversary and quantify the information leaked by B. The view of A in the real world contains all information about the matrices P and P' (including their even- and odd-column sub-matrices), together with $Q_1$ and $Q_2$. The key point of the proof is to construct a simulator that reproduces the distribution of $Q_1$ and $Q_2$. The simulator $S_A$ for A's view proceeds as follows:

1. $S_A$ has $Q_e - Q_o$ as prior knowledge;
2. $S_A$ generates a random matrix $Q^\star \in \mathbb{R}^{y \times z}$;
3. $S_A$ calculates $Q^\star_2 = (Q^\star_e - Q^\star_o) - (Q_e - Q_o)$, where $Q^\star_e$ and $Q^\star_o$ are defined analogously as the even- and odd-row sub-matrices of $Q^\star$.

We claim that $(Q^\star, Q^\star_2)$ has the same distribution as $(Q_1, Q_2)$, and is therefore computationally indistinguishable from it. To see this, first note that $Q_1$ is the difference between a random matrix Q' and a fixed matrix Q, and is thus distributed as a random matrix, say $Q^\star$. With this in mind, $Q^\star_e - Q_e$ is distributed as $Q'_e$ and $Q^\star_o - Q_o$ as $Q'_o$; therefore $Q^\star_2$ is distributed as $Q_2$. Moreover, $S_A$ can reproduce all information about the matrices P and P'. So, given the additional information $Q_e - Q_o$, the ideal-world simulator $S_A$ successfully reconstructs the view of A, which is equivalent to saying that in the real world only the partial information $Q_e - Q_o$ is disclosed to A by running the protocol. A similar simulator $S_B$ can be constructed when B is the adversary. This completes the security proof.

Complexity Analysis of SSMM. The computational cost mainly comes from Lines 6 and 7 of Algorithm 2, which is O(x · y · z). The communication from A to B consists of the matrices $P_1$ and $P_2$, both O(x · y). The communication from B to A consists of $Q_1$, $Q_2$, and N, which are O(y · z), O(y · z), and O(x · z) respectively, i.e., O((x + y) · z) in total.

When one of the matrices is sparse, we can slightly modify the secret sharing strategy in Algorithm 2 so that both the computational and communication complexities drop accordingly. Without loss of generality, assume Q is sparse in the sense that the average number of non-zero entries per row is d ≪ z. When generating Q', B does not make it as dense as in Line 1 of Algorithm 2; instead, for each row of Q, B

1. generates random numbers for all non-zero entries, and
2. randomly selects d' ≪ z of the zero entries and generates random numbers for them.

Here d' is the number of additionally selected entries per row of Q, kept small so that the secret shares sent from B to A remain sparse. However, the smaller d' is, the more information A obtains about Q; in the extreme case d' = 0, A can infer the exact sparsity pattern of Q. A reasonable choice is therefore d' = O(d). Note that, in practice, one should keep the strategy for choosing d' (i.e., the ratio d'/d for each row) private to avoid information leakage. A sketch of this sparse share-generation strategy follows.
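A minimal sketch of the sparse share-generation strategy just described; the function name and the choice of random distribution are illustrative assumptions.

```python
# Generate a sparse random Q': random values on every non-zero entry of
# each row of Q, plus d' randomly chosen zero entries as camouflage.
import numpy as np

def sparse_random_like(Q, d_prime, rng=None):
    rng = rng or np.random.default_rng()
    Qp = np.zeros_like(Q, dtype=float)
    for i, row in enumerate(Q):
        nz = np.flatnonzero(row)                   # non-zero positions
        Qp[i, nz] = rng.standard_normal(nz.size)
        zeros = np.flatnonzero(row == 0)
        extra = rng.choice(zeros, size=min(d_prime, zeros.size),
                           replace=False)          # d' extra positions
        Qp[i, extra] = rng.standard_normal(extra.size)
    return Qp
```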
As long as Q is sparse, $Q_1$ and $Q_2$ are both sparse and can be calculated while generating Q'. The computational complexity of the matrix multiplication then decreases to O(x · y · d), and the communication from B to A decreases to O(x · z + y · d); both are significantly reduced compared to the general case.

We remark that this sparse secret sharing strategy exactly fits the requirements of SeSoRec: the social matrix S is usually sparse, so when the social platform B shares its secrets it can use the new strategy to generate its shares. Moreover, the choice of d' can be kept private to B, so that the user-item interaction platform A cannot gain extra information from B's shares.

5 Analysis

In this section, we analyze the time complexity of SeSoRec and discuss its usage and information leakage.

5.1 Complexity Analysis of SeSoRec

We first analyze the communication and computation complexities of SeSoRec (Algorithm 1). Recall that I is the number of users, $|U_B|$ and $|V_B|$ denote the numbers of users and items in the current minibatch, K denotes the dimension of the latent factors, and |R| is the number of ratings (the data size).

Communication complexity. The communication comes from calculating $U_B D_B^T$, $U S_B^T$, and $U_B E_B^T$ with SSMM. First, for $U_B D_B^T$ and $U_B E_B^T$, by the complexity analysis of the modified SSMM, the communication cost is O(|U_B| × |U_B|) per minibatch, i.e., O(|R|/|B| × |U_B| × |U_B|) ≤ O(|R| × |B|) for one pass over the dataset. Second, for $U S_B^T$, the communication of U only needs to happen once per data pass, so its cost is O(I × K). The total communication cost is therefore O(|R| × |B|) + O(I × K) per data pass; since |B| ≪ |R| and K ≪ I < |R|, it is linear in the data size.

Computation complexity. Let |N| be the average number of neighbors per user on platform B. The time complexity of Lines 6 and 7 in Algorithm 2 is O(|U_B| × |N| × K) per minibatch, i.e., O(|R|/|B| × |U_B| × |N| × K) ≤ O(|R| × |N| × K) per data pass. Similarly, the time complexity of Lines 3 and 4 in Algorithm 1 per data pass is O(|R|/|B| × |U_B| × |V_B| × K) ≤ O(|R| × |B| × K). Since |N|, |B|, K ≪ |R|, the total computation cost is also linear in the data size.

With minibatch gradient descent, the communication and computation complexities of SeSoRec are thus both linear in the data size, so the method scales to large datasets.

5.2 Discussion

Secure common user identification. SeSoRec assumes that platforms A and B have the same user set, so that they can run SSMM. The essence of secure common user identification is private set intersection (PSI), for which existing work [28] provides efficient solutions. PSI can be applied to privately identify the common users of the two platforms before running SeSoRec in practice, guaranteeing that nothing is revealed except the IDs of the common users.

Information leakage. SeSoRec is asymmetric for the two parties: the rating platform A and the social platform B jointly run SSMM and return the results to A, so B reveals more information to A. Although we have proven its security, information about B may still leak if A maliciously initiates SSMM repeatedly: if A and B compute PQ with SSMM, A can infer Q by varying P while Q stays fixed over enough rounds.
A naive defence is to set a constraint on Q when conducting SSMM: as long as Q (the users in each minibatch) differs across iterations, SeSoRec has no such information leakage. We leave better solutions as future work. Moreover, when one matrix in SSMM is sparse and the strategy for choosing d' is exposed, the social platform B may leak some social information to A: specifically, the sparsity pattern of B's social matrix is leaked, although the actual social values remain protected. It is therefore crucial that B keeps its per-row selection of d' private.

6 Experiments

In this section, we perform experiments to answer the following questions. Q1: How does SeSoRec perform compared with classic matrix factorization and unsecure social recommendation models? Q2: What is the performance of SSMM compared with the existing TISMM? Q3: How does the social parameter (γ) affect model performance?

6.1 Setting

We first describe the datasets, metrics, and comparison methods used in the experiments.

Datasets. We use three public real-world datasets: Epinions [20], FilmTrust [11], and Douban Movie [36]. All contain user-item ratings as well as user social (trust) information, and all are widely adopted in the literature. Although rating and social information are both available in these datasets, we realistically assume they reside on separate platforms with no possibility of data sharing, which has no side-effect on the experiments.

Table 1: Dataset statistics, assuming rating information resides on A and social information on B.

Dataset   | #user  | #item  | #rating (A) | #social (B)
Epinions  | 8,619  | 5,539  | 229,920     | 232,461
FilmTrust | 1,508  | 2,071  | 35,497      | 1,853
Douban    | 13,530 | 13,363 | 2,530,594   | 264,811

Since the original rating matrices of Epinions and Douban are too sparse, we filter out users and items with fewer than 20 interactions. Table 1 shows the statistics after preprocessing. We use five-fold cross validation to evaluate model performance: the dataset is split into five parts, and each time four parts are used for training and the remaining part as the test set.

Metrics. We adopt two metrics, Root Mean Square Error (RMSE) and Normalized Discounted Cumulative Gain (NDCG@n), both of which are popularly used to evaluate factorization-based recommendation in the literature [16, 13]. RMSE is defined as

$$\mathrm{RMSE} = \sqrt{\frac{1}{|\tau|}\sum_{(i,j)\in\tau}\left(r_{ij} - \hat r_{ij}\right)^2},$$

where $\hat r_{ij}$ is the predicted rating of user i on item j, and |τ| is the number of predictions in the test set τ. RMSE measures the error between real and predicted ratings, with smaller values indicating better performance. NDCG@n is defined as

$$\mathrm{NDCG@}n = Z_n \sum_{n'=1}^{n} \frac{2^{r'_{n'}} - 1}{\log_2(n' + 1)},$$

where $Z_n$ is a normalizer ensuring that the perfect ranking has value 1, and $r'_{n'}$ is the relevance (real rating) of the item at position n'. NDCG measures the ranking performance of recommendation models, with larger values being better. We report NDCG@10 in the experiments and abbreviate it as NDCG. A minimal sketch of both metrics follows.
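A minimal NumPy sketch of the two evaluation metrics defined above; the input conventions (e.g., relevances already ordered by predicted score) are assumptions for illustration.

```python
import numpy as np

def rmse(r_true, r_pred):
    """Root mean square error over the test ratings."""
    r_true, r_pred = np.asarray(r_true, float), np.asarray(r_pred, float)
    return np.sqrt(np.mean((r_true - r_pred) ** 2))

def ndcg_at_n(relevance_by_pred, n=10):
    """NDCG@n, where relevance_by_pred holds the real ratings of items
    sorted by predicted score; Z_n normalizes by the ideal DCG."""
    rel = np.asarray(relevance_by_pred, float)
    discounts = np.log2(np.arange(2, min(n, rel.size) + 2))
    dcg = ((2 ** rel[:n] - 1) / discounts).sum()
    ideal = np.sort(rel)[::-1][:n]
    idcg = ((2 ** ideal - 1) / discounts).sum()
    return dcg / idcg if idcg > 0 else 0.0

print(rmse([4, 3, 5], [3.8, 3.2, 4.5]))
print(ndcg_at_n([5, 3, 4, 1, 2], n=3))
```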
Comparison methods. Our proposed SeSoRec is a novel secure social recommendation model and a secure version of Soreg [19]. We compare SeSoRec with the following latent factor models:

• MF [24] is a classic latent factor model that only uses the user-item interaction information on platform A. This corresponds to the situation where the social platform B is reluctant to share raw social information with the rating platform A.
• Soreg [19] is a classic social recommendation model, unsecure in the sense that A needs the raw data of B.

Note that we do not compare with the state-of-the-art recommendation methods, because (1) most of them assume the recommendation platform has many additional kinds of information, such as contextual information [29], which would make the comparison unfair, and (2) our focus is the difference between traditional unsecure social recommendation models and our proposed secure counterpart.

Hyper-parameters. We set the latent factor dimension K = 10 and the batch size |B| = 64, and vary the regularizer λ and learning rate θ to choose their best values via grid search for each model. We also vary γ in {10^-2, 10^-1, 10^0, 10^1} to study its effect on SeSoRec.

Table 2: Performance comparison on the three datasets (RMSE and NDCG).

Dataset   | Metric | MF     | Soreg  | SeSoRec
Epinions  | RMSE   | 1.2687 | 1.1791 | 1.1789
Epinions  | NDCG   | 0.0363 | 0.0405 | 0.0401
FilmTrust | RMSE   | 1.1907 | 1.1754 | 1.1752
FilmTrust | NDCG   | 0.2042 | 0.2128 | 0.2124
Douban    | RMSE   | 0.7489 | 0.7420 | 0.7419
Douban    | NDCG   | 0.0749 | 0.0780 | 0.0778

Table 3: Running time (seconds) of SSMM and TISMM.

dimension (h) | 100    | 1000   | 10000
SSMM          | 0.0025 | 0.3246 | 40.744
TISMM         | 0.0060 | 0.7279 | 105.83

6.2 Comparison Results (Q1)

Table 2 reports the comparison results on the three datasets, from which we observe the following. (1) Soreg and SeSoRec consistently outperform MF, and the sparser the dataset, the larger the improvement. Taking RMSE as an example, SeSoRec improves over MF by 7.60%, 1.3%, and 0.98% on Epinions, FilmTrust, and Douban, whose rating densities are 0.48%, 1.14%, and 1.4% respectively. This confirms that social information is indeed important to recommendation performance, especially when data is sparse. (2) Soreg and SeSoRec achieve almost identical accuracy; the small differences come from the fixed-point decimal representation used in secret sharing. This result further validates the correctness of SSMM beyond the theoretical proof.

6.3 Comparison between SSMM and TISMM (Q2)

As described in the SSMM section, the existing Trusted Initializer based Secure Matrix Multiplication (TISMM) [6] needs a trusted initializer (a trusted third party) to generate secrets before computation. Although TISMM may not be applicable in practice, we compare the efficiency of our proposed SSMM against it. To this end, we randomly generate two square matrices $P \in \mathbb{R}^{h \times h}$ and $Q \in \mathbb{R}^{h \times h}$, where h is the dimension of the square matrices, and report the running time (in seconds) of computing PQ with both algorithms over a local area network in Table 3. SSMM costs much less time than TISMM, with an average speedup of around 2.4x. A rough single-machine timing harness in the same spirit is sketched below.
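A rough timing harness in the spirit of Table 3; it reuses the ssmm() sketch from Section 4, runs in a single process without a network, and its absolute numbers will therefore differ from the paper's LAN measurements.

```python
import time
import numpy as np

for h in (100, 1000):
    P, Q = np.random.rand(h, h), np.random.rand(h, h)
    t0 = time.perf_counter()
    M, N = ssmm(P, Q)                       # from the earlier sketch
    elapsed = time.perf_counter() - t0
    print(h, round(elapsed, 4), np.allclose(M + N, P @ Q))
```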
The speedup arises because TISMM needs to generate more random matrices and involves more matrix operations. Moreover, our proposed SSMM protocol does not rely on a trusted initializer, which may be difficult to find in practice, and is therefore more practical.

6.4 Parameter Analysis (Q3)

Finally, we study the effect of the social regularizer parameter γ on SeSoRec. Social recommendation can be formalized as a basic factorization model plus a social information model, and γ controls the contribution of the social information model to the final performance: the larger γ is, the more similar the latent factors of connected users are forced to be, and hence the more the social information model contributes to the overall performance. Figure 1 shows the effect on the FilmTrust dataset in terms of both RMSE and NDCG@10.

[Figure 1: Effect of γ on the FilmTrust dataset: (a) effect on RMSE; (b) effect on NDCG@10.]

With a good choice of γ, SeSoRec balances the contributions of the user-item rating data on platform A and the user social data on platform B, and thus achieves the best performance.

7 Conclusion and Future Work

In this paper, we proposed a secret sharing based secure social recommendation framework that mines knowledge from a social platform to improve the recommendation performance of a rating platform while keeping the raw data of both platforms secure. Specifically, we first formalized secure social recommendation as an MPC problem and proposed the SEcure SOcial RECommendation (SeSoRec) framework for it. We then proposed a novel Secret Sharing based Matrix Multiplication (SSMM) algorithm to optimize it, and proved its correctness and security. We also showed that SeSoRec has communication and computation complexities linear in the data size and thus scales to large datasets. Experimental results on real-world datasets demonstrate that SeSoRec achieves almost the same accuracy as the existing unsecure social recommendation model, and that SSMM significantly outperforms the existing trusted initializer based secure matrix multiplication protocol. In the future, we would like to address the potential information leakage of SeSoRec with better solutions.

REFERENCES
[1] Donald Beaver, 'Efficient multiparty protocols using circuit randomization', in CRYPTO, pp. 420–432, Springer, 1991.
[2] Chaochao Chen, Kevin Chen-Chuan Chang, Qibing Li, and Xiaolin Zheng, 'Semi-supervised learning meets factorization: Learning to recommend with chain graph model', TKDD, 12(6), 73, 2018.
[3] Chaochao Chen, Ziqi Liu, Peilin Zhao, Longfei Li, Jun Zhou, and Xiaolong Li, 'Distributed collaborative hashing and its applications in Ant Financial', in SIGKDD, pp. 100–109, ACM, 2018.
[4] Chaochao Chen, Ziqi Liu, Peilin Zhao, Jun Zhou, and Xiaolong Li, 'Privacy preserving point-of-interest recommendation using decentralized matrix factorization', in AAAI, pp. 257–264, 2018.
[5] Martine de Cock, Rafael Dowsley, Anderson C. A. Nascimento, and Stacey C. Newman, 'Fast, privacy preserving linear regression over distributed datasets based on pre-distributed data', in Proceedings of the 8th ACM Workshop on Artificial Intelligence and Security, pp. 3–14, ACM, 2015.
[6] Martine De Cock, Rafael Dowsley, Caleb Horst, Raj Katti, Anderson Nascimento, Wing-Sea Poon, and Stacey Truex, 'Efficient and private scoring of decision trees, support vector machines and logistic regression models based on pre-computation', TDSC, 2017.
[7] Daniel Demmler, Thomas Schneider, and Michael Zohner, 'ABY: a framework for efficient mixed-protocol secure two-party computation', in NDSS, 2015.
[8] Jean-Guillaume Dumas, Pascal Lafourcade, Jean-Baptiste Orfila, and Maxime Puys, 'Private multi-party matrix multiplication and trust computations', arXiv preprint arXiv:1607.03629, 2016.
[9] Oded Goldreich, Foundations of Cryptography: Volume 2, Basic Applications, Cambridge University Press, New York, NY, USA, 2004.
[10] Oded Goldreich, Silvio Micali, and Avi Wigderson, 'How to play any mental game', in STOC, pp. 218–229, ACM, 1987.
[11] Guibing Guo, Jie Zhang, and Neil Yorke-Smith, 'A novel Bayesian similarity measure for recommender systems', in IJCAI, pp. 2619–2625, 2013.
[12] Shuguo Han and Wee Keong Ng, 'Privacy-preserving linear Fisher discriminant analysis', in Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 136–147, Springer, 2008.
[13] Xiangnan He, Tao Chen, Min-Yen Kan, and Xiao Chen, 'TriRank: Review-aware explainable recommendation by modeling aspects', in CIKM, pp. 1661–1670, ACM, 2015.
[14] Jingyu Hua, Chang Xia, and Sheng Zhong, 'Differentially private matrix factorization', in IJCAI, pp. 1763–1770, 2015.
[15] Marcel Keller, Valerio Pastro, and Dragos Rotaru, 'Overdrive: making SPDZ great again', in EUROCRYPT, pp. 158–189, Springer, 2018.
[16] Yehuda Koren, Robert Bell, Chris Volinsky, et al., 'Matrix factorization techniques for recommender systems', Computer, 42(8), 30–37, 2009.
[17] Yehida Lindell, 'Secure multiparty computation for privacy preserving data mining', in Encyclopedia of Data Warehousing and Mining, 1005–1009, IGI Global, 2005.
[18] Yehuda Lindell, 'How to simulate it: a tutorial on the simulation proof technique', in Tutorials on the Foundations of Cryptography, 277–346, 2017.
[19] Hao Ma, Dengyong Zhou, Chao Liu, Michael R. Lyu, and Irwin King, 'Recommender systems with social regularization', in WSDM, pp. 287–296, ACM, 2011.
[20] Paolo Massa and Paolo Avesani, 'Trust-aware recommender systems', in Proceedings of the 2007 ACM Conference on Recommender Systems, pp. 17–24, ACM, 2007.
[21] H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, et al., 'Communication-efficient learning of deep networks from decentralized data', arXiv preprint arXiv:1602.05629, 2016.
[22] Frank McSherry and Ilya Mironov, 'Differentially private recommender systems: Building privacy into the Netflix Prize contenders', in SIGKDD, pp. 627–636, ACM, 2009.
[23] Xuying Meng, Suhang Wang, Kai Shu, Jundong Li, Bo Chen, Huan Liu, and Yujun Zhang, 'Personalized privacy-preserving social recommendation', in AAAI, pp. 3796–3803, 2018.
[24] Andriy Mnih and Ruslan Salakhutdinov, 'Probabilistic matrix factorization', in NIPS, pp. 1257–1264, 2007.
[25] Payman Mohassel and Yupeng Zhang, 'SecureML: A system for scalable privacy-preserving machine learning', in S&P, pp. 19–38, IEEE, 2017.
[26] Valeria Nikolaenko, Stratis Ioannidis, Udi Weinsberg, Marc Joye, Nina Taft, and Dan Boneh, 'Privacy-preserving matrix factorization', in CCS, pp. 801–812, ACM, 2013.
[27] Valeria Nikolaenko, Udi Weinsberg, Stratis Ioannidis, Marc Joye, Dan Boneh, and Nina Taft, 'Privacy-preserving ridge regression on hundreds of millions of records', in S&P, pp. 334–348, 2013.
[28] Benny Pinkas, Thomas Schneider, and Michael Zohner, 'Faster private set intersection based on OT extension', in USENIX Security, pp. 797–812, 2014.
[29] Steffen Rendle, 'Factorization machines', in ICDM, pp. 995–1000, IEEE, 2010.
[30] Adi Shamir, 'How to share a secret', Communications of the ACM, 22(11), 612–613, 1979.
[31] Erez Shmueli and Tamir Tassa, 'Secure multi-party protocols for item-based collaborative filtering', in RecSys, pp. 89–97, ACM, 2017.
[32] Jiliang Tang, Xia Hu, and Huan Liu, 'Social recommendation: a review', Social Network Analysis and Mining, 3(4), 1113–1133, 2013.
[33] Sin G. Teo, Vincent Lee, and Shuguo Han, 'A study of efficiency and accuracy of secure multiparty protocol in privacy-preserving data mining', in WAINA, pp. 85–90, IEEE, 2012.
[34] Andrew C. Yao, 'Protocols for secure computations', in FOCS, pp. 160–164, 1982.
[35] Andrew Chi-Chih Yao, 'How to generate and exchange secrets', in FOCS, pp. 162–167, 1986.
[36] Erheng Zhong, Wei Fan, Junwei Wang, Lei Xiao, and Yong Li, 'ComSoc: Adaptive transfer of user behaviors over composite social network', in SIGKDD, pp. 696–704, ACM, 2012.
[37] Youwen Zhu and Tsuyoshi Takagi, 'Efficient scalar product protocol and its privacy-preserving application', International Journal of Electronic Security and Digital Forensics, 7(1), 1–19, 2015.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "iarq7NsMveS",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200306",
"forum_link": "https://openreview.net/forum?id=iarq7NsMveS",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Mining Insights from Large-Scale Corpora Using Fine-Tuned Language Models",
"authors": [
"Shriphani Palakodety",
"Ashiqur R. KhudaBukhsh",
"Jaime G. Carbonell"
],
"abstract": "Mining insights from large volume of social media texts with minimal supervision is a highly challenging Natural Language Processing (NLP) task. While Language Models’ (LMs) efficacy in several downstream tasks is well-studied, assessing their applicability in answering relational questions, tracking perception or mining deeper insights is under-explored. Few recent lines of work have scratched the surface by studying pre-trained LMs’ (e.g., BERT) capability in answering relational questions through “fill-in-the-blank” cloze statements (e.g., [Dante was born in MASK]). BERT predicts the MASK-ed word with a list of words ranked by probability (in this case, BERT successfully predicts Florence with the highest probability). In this paper, we conduct a feasibility study of fine-tuned LMs with a different focus on tracking polls, tracking community perception and mining deeper insights typically obtained through costly surveys. Our main focus is on a substantial corpus of video comments extracted from YouTube videos (6,182,868 comments on 130,067 videos by 1,518,077 users) posted within 100 days prior to the 2019 Indian General Election. Using fill-in-the-blank cloze statements against a recent high-performance language modeling algorithm, BERT, we present a novel application of this family of tools that is able to (1) aggregate political sentiment (2) reveal community perception and (3) track evolving national priorities and issues of interest.",
"keywords": [],
"raw_extracted_content": "Mining Insights from Large-Scale Corpora Using\nFine-Tuned Language Models\nShriphani Palakodety12and Ashiqur R. KhudaBukhsh13and Jaime G. Carbonell4\nAbstract. Mining insights from large volume of social media texts\nwith minimal supervision is a highly challenging Natural Language\nProcessing (NLP) task. While Language Models’ (LMs) efficacy inseveral downstream tasks is well-studied, assessing their applicabil-ity in answering relational questions, tracking perception or miningdeeper insights is under-explored. Few recent lines of work havescratched the surface by studying pre-trained LMs’ (e.g., BERT) ca-\npability in answering relational questions through “fill-in-the-blank”cloze statements (e.g., [Dante was born in MASK]). BERT\npredicts the MASK-ed word with a list of words ranked by prob-ability (in this case, BERT successfully predicts Florence with the\nhighest probability). In this paper, we conduct a feasibility studyof fine-tuned LMs with a different focus on tracking polls, trackingcommunity perception and mining deeper insights typically obtainedthrough costly surveys. Our main focus is on a substantial corpusof video comments extracted from Y ouTube videos (6,182,868 com-ments on 130,067 videos by 1,518,077 users) posted within 100 daysprior to the 2019 Indian General Election. Using fill-in-the-blankcloze statements against a recent high-performance language mod-eling algorithm, BERT, we present a novel application of this family\nof tools that is able to (1) aggregate political sentiment (2) revealcommunity perception and (3) track evolving national priorities andissues of interest.\n1 INTRODUCTION\nPre-trained Language Models (LMs), such as BERT [14],\nELMo [26], XLNet [38] etc. have received widespread attention inrecent NLP literature. While Language Models’ (LMs) efficacy inseveral downstream tasks is well-studied, assessing their applicabil-ity in answering relational questions is largely under-explored. Re-cent lines of work have begun to scratch the surface with analyzingLMs’ capability in answering relational questions presented as “fill-in-the-blank” cloze statements. Competing views about their effec-tiveness as Knowledge Bases (KBs) have been published [21, 27].\nWhile the jury is still out on how effective LMs are as Knowledge\nBases in their current form, in this paper, we explore a related re-search question: is it possible to track community perception, aggre-\ngate opinions and compare popularity of political parties and candi-dates using LMs?\nIn this paper, we introduce a Y ouTube comment corpus relevant\nto the Indian General Election (6,182,868 comments on 130,067\n1Ashiqur R. KhudaBukhsh and Shriphani Palakodety are equal contribution\nfirst authors. Ashiqur R. KhudaBukhsh is the corresponding author.\n2Onai, USA, email: [email protected]\n3Carnegie Mellon University, USA, email: [email protected]\n4Carnegie Mellon University, USA, email:[email protected] by 1,518,077 users). BERT is pre-trained on a book corpus\nand Wikipedia, i.e., on well-formed texts by contributors proficientin English covering a broad range of topics [14]. In contrast, our elec-tion corpus consists of short texts with grammar and spelling disflu-encies and has a topical focus on the general election. 
In a series of experiments using cloze statements, we demonstrate that, in its current form, fine-tuned BERT can shed interesting light on three previously unexplored tasks in the context of knowledge mining using LMs: (1) community perception analysis, (2) comparative analysis of the popularity of candidates or political parties, and (3) mining deeper insights about national priorities. We side-step known issues with handling negated cloze statements [21] via corpus modification, and construct interesting adversarial scenarios to better guide our intuitions.

In social science, major studies often rely on extensive surveys. Typically, such surveys are few and far between, as conducting them regularly requires significant resources, and aggregating opinions at multiple spatiotemporal granularities is a non-trivial challenge. Further, language modeling and querying allows us to side-step issues of knowledge schema engineering and the complex modeling needed to integrate such a schema. In this work, we investigate the possibility of using BERT to complement traditional surveys. Our findings are not limited to the current successes reported in this paper; we make the more general claim that, going forward, LMs can provide a compelling solution for fast-turnaround analysis requiring minimal supervision.

Contributions: Our contributions are the following [Footnote 5: Resources and additional details are available at: https://www.cs.cmu.edu/~akhudabu/BERT2019IndianElection.html].

1. Social: To the best of our knowledge, we report the first large-scale social media analysis of community perception focused on two major religions in India. Religion has remained a contentious issue in India both in the pre-independence era (before 1947) [33] and after independence [16, 29]. Our analysis provides a starting point for further research and journalism in this direction [8, 7, 4].
2. High-performance language modeling for insight mining: We investigate the capabilities of a high-performance language modeling tool, BERT, and present an exploratory study of the model's ability to reveal a variety of insights that align well with actual observations, outcomes, and surveys.
3. Side-stepping known issues, analysis of retention: We propose a corpus modification solution to side-step a recently reported issue with handling negated cloze statements. We present a new analysis outlining how much knowledge a fine-tuned BERT retains and provide interesting insights.

2 DATA: YOUTUBE COMMENTS

YouTube channels: We considered the YouTube channels of 11 highly popular national news outlets and the official YouTube channels of 3 highly popular national newspapers. Of the 29 states in India, we restricted our focus to 12 states (listed in Table 1) that contribute 20 or more seats to the lower house of parliament. Our analysis encompasses a large fraction of the political voice in India, since these 12 states account for 423 of the 543 seats in parliament, i.e., 77.9% of the total. In terms of vote share, of the 613,133,300 votes cast overall in the 2019 election, 534,378,886 (87.16%) were cast in these 12 states [3]. For each of these states, we identified two highly popular YouTube news channels.
Overall, this yields 38 YouTube channels (24 regional, 14 national). The average subscriber count of these channels is 3,338,628 (national channels: 5,840,950; regional channels: 1,878,941).

Table 1: States with seat counts in brackets.
Andhra Pradesh (25), Bihar (40), Gujarat (26), Karnataka (28), Kerala (20), Madhya Pradesh (29), Maharashtra (48), Odisha (21), Rajasthan (25), Tamil Nadu (39), Uttar Pradesh (80), West Bengal (42)

Period of interest: We considered a 100-day period from February 12th to May 22nd, 2019, the 100-day window preceding the announcement of the results.

Characterization of the videos: Our video data set, V, consists of 130,067 videos uploaded to the 38 YouTube channels during our period of interest (46,055 from national channels, 84,012 from regional channels). It is not feasible to manually label every video as relevant (discussing some aspect of the Indian election) or irrelevant (discussing unrelated topics such as entertainment or sports). We therefore randomly selected 100 videos and manually annotated them with the following labels: politics, entertainment, sports, weather, crime, finance, and other. These categories are not intended to be formal or exhaustive; they illustrate the types of news videos uploaded during our period of interest and give a rough estimate of the relative share of political news in our video data set. As Table 2 shows, a substantial chunk of the news videos concerned politics, mainly covering election updates, debates among party spokespersons, evaluation of campaign promises, and foreign policy discussions.

Table 2: Characterization of the sampled YouTube videos.
Category | # of videos
Politics | 72
Weather | 3
Entertainment | 3
Crime report | 2
Finance | 1
Sports | 1
Other | 18

Comments data set: Using the publicly available YouTube API, we crawled the comments posted on the videos in V, obtaining 6,182,868 comments (4,198,599 from national channels, 1,984,269 from regional channels). A major impediment to analyzing social media responses generated in the Indian subcontinent is its linguistic diversity. We used a recently proposed, high-accuracy polyglot-embedding-based language identification technique (first proposed in [24] and successfully replicated on a different multilingual corpus [25]) to separate the English corpus. Overall, we obtained 1,940,757 English comments (denoted C_all), of which 1,512,009 come from the 14 national YouTube news channels (denoted C_national) and 428,748 from the 24 regional channels.

Preprocessing: We follow the standard preprocessing steps recommended for the BERT [14] language model for our fine-tuning tasks. We use the uncased English model with the following configuration: 12 transformer layers, hidden state size 768, 12 attention heads, and 110M parameters overall [Footnote 6: https://github.com/google-research/bert/]. These are the parameters recommended by the authors of BERT, and our analysis shows that they work well for our task as well. The base BERT vocabulary is supplemented with the 900 most frequent tokens from the English subset of our corpus. Finally, the pre-trained model is fine-tuned on the target corpus using the following training hyperparameters (a sketch of the corresponding invocation appears after the list):

• Batch size: 16
• Maximum sequence length: 128
• Maximum predictions per sequence: 20
• Fine-tuning steps: 20,000
• Warmup steps: 10
• Learning rate: 2e-5
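A sketch of the fine-tuning invocation with the hyperparameters listed above, assuming Google's BERT release (github.com/google-research/bert) checked out locally and a tfrecord produced by its create_pretraining_data.py script; all paths are placeholders.

```python
# Continue masked-LM pre-training (fine-tuning) on the election corpus.
import subprocess

subprocess.run([
    "python", "run_pretraining.py",
    "--input_file=election_comments.tfrecord",   # placeholder path
    "--output_dir=finetuned_bert/",              # placeholder path
    "--do_train=True",
    "--bert_config_file=uncased_L-12_H-768_A-12/bert_config.json",
    "--init_checkpoint=uncased_L-12_H-768_A-12/bert_model.ckpt",
    "--train_batch_size=16",          # batch size
    "--max_seq_length=128",           # maximum sequence length
    "--max_predictions_per_seq=20",   # masked predictions per sequence
    "--num_train_steps=20000",        # fine-tuning steps
    "--num_warmup_steps=10",          # warmup steps
    "--learning_rate=2e-5",
], check=True)
```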
2.1 A Challenging Data Set

Like most data sets of short social media texts generated in a linguistically diverse region, ours exhibits a considerable presence of out-of-vocabulary (OOV) words, code-mixing, and grammar and spelling disfluencies. In addition, since the vast majority of content contributors do not speak English as their first language, we noticed a substantial incidence of phonetic spelling errors (e.g., [human beings are important not vehicles are bloody pupil], originally intended to express bloody people); 32.67% of the time, the word liar was misspelled as lier. Considering terms occurring 5 or more times in the corpus, the OOV rate against BERT_base was 75.08%.

To summarize, our data set (i) captures a considerable fraction of the political voice of India, (ii) is obtained from videos predominantly discussing the election (see Table 2), and (iii) is markedly different from the documents used to train the original BERT_base.

3 RELATED WORK

Election analysis: Social media analysis of elections across several countries has been widely studied (e.g., US [35, 15, 23], UK [10], India [19, 30, 22], the Netherlands [28], Pakistan, South Korea [31]); an exhaustive survey is beyond the scope of this paper. Special referendum elections like Brexit [11] and the Greek referendum [34] have also received attention from the Information Retrieval (IR) community. Three major aspects distinguish our work from prior literature: (1) our focus on YouTube comments, a rather under-explored data resource, whereas the vast majority of previously published work focuses on Twitter; (2) the use of BERT predictions instead of previously explored signals such as lexicon-based sentiment, tweet volume, or tweet mentions to correlate with election outcomes; and (3) a broader scope that, beyond the typical comparative analysis of candidate and party popularity, tracks evolving national priorities and community perception.

Sentiment-mining using BERT: Unrelated to the task of political sentiment-mining, in terms of sentiment analysis using BERT,
- expected answer Labor) [27]. However, fur-\nther probing of these models has uncovered limitations in their han-dling of negated cloze statements [21]. For instance, these modelsoften tend to provide near-identical answers to negated queries e.g.,when [Birds cannot MASK] and[Birds can MASK] are\nused as cloze statements, the answer fly is predicted with high proba-bility in both cases. Our work is different in the following ways. First,unlike [27, 21], we work with fine-tuned BERT operating on a chal-\nlenging corpus of social media texts produced mostly by non-nativespeakers of English. Second, instead of answering relational queries,we focus on tracking community perception, mining national priori-ties and comparing relative popularity of political entities. Third, weprovide a comparative analysis demonstrating the extent to which afine-tuned BERT language model forgets the base knowledge con-\ntained in the original pre-trained model (i.e. retention) - a key aspectto consider if LMs have to replace KBs. Finally, we propose a solu-tion to sidestep the issue of negated queries by removing documents(comments) containing valence shifters.\n4 BACKGROUND\nBERT: BERT [14] is a recent high-performance bi-directional trans-\nformer language model. The transformer architecture is a recent deepneural model for sequence-to-sequence prediction tasks. A sequence-to-sequence task involves accepting a sequence as input and produc-ing a sequence as output. Models for tackling these problems typi-cally contain an encoder that operates on the input and constructs arepresentation, and a decoder that operates on the representation (andalso the input in some cases) and produces the desired output. Boththe encoder and decoder in the transformer model use the Multi-Headattention mechanism to attend to different input positions.\nLarge scale language models, trained on large corpora, have re-\ncently produced strong results in text generation, and strong down-stream performance for tasks like text-classification. BERT itself has\nproduced significant performance-gains in a slew of NLP tasks [14].BERT uses a transformer model [36] with a masked-word prediction\nobjective and a next sentence prediction auxiliary training objective.\nRecent work has explored the knowledge present in these (not fine-\ntuned) large scale language models using cloze sentences [27, 21].As shown in Figure 1, we evaluate the fine-tuning paradigm on anIndian election corpus. In Section 6, we provide an analysis of theacquisition and retention properties of fine-tuned BERT.\nIndian election: India follows a multi-party parliamentary system.The general election allows the voter-base to elect the 543 membersof the lower house of parliament - The Lok Sabha. 
The winning party or a coalition of parties then nominates one of its members to serve as Prime Minister. The 2019 election was conducted in 7 phases starting on the 11th of April and ending on the 19th of May; the votes were counted and the results announced on the 23rd of May. The ruling party (BJP) won an outright majority, and Narendra Modi, the incumbent prime minister, was nominated for a second term. In our work, we focus on two major political parties, the Indian National Congress (popularly referred to as Congress) and the Bharatiya Janata Party (popularly referred to as BJP), and on the two projected prime-ministerial candidates, Narendra Modi and Rahul Gandhi.

[Figure 1: System diagram. BERT is pre-trained on books and Wikipedia, fine-tuned on the election corpus, and probed with cloze statements such as [MASK will win]; the fine-tuned predictions are compared against the base model's.]

5 RESULTS AND ANALYSIS

5.1 Sanity check on fine-tuning

We denote BERT fine-tuned on C_national and on C_all as BERT_national and BERT_all, respectively. Since BERT_base is pre-trained on a book corpus and Wikipedia, it may reflect information relevant to India even without fine-tuning. For example, given the input [MASK is a major Indian city], BERT's top three predictions are Chennai, Delhi, and Mumbai with probabilities 0.15, 0.12, and 0.12, respectively. However, the results can be slightly dated. Table 3 presents the top three completions, ranked by probability, for the following cloze statements:

• [MASK Gandhi] (denoted cloze_1)
• [Narendra MASK] (denoted cloze_2)

Table 3: Predicted completions with probabilities in parentheses.
Probe | BERT_base | BERT_national | BERT_all
cloze_1 | Indira (0.82), Sonia (0.04), Sanjay (0.01) | Rahul (0.58), Fake (0.08), Priyanka (0.05) | Rahul (0.6), Priyanka (0.04), Sonia (0.03)
cloze_2 | Kumar (0.16), Sharma (0.14), Singh (0.07) | Modi (0.77), Modiji (0.02), sir (0.01) | Modi (0.70), Modiji (0.02), Rahul (0.01)

On cloze_1, BERT_base's completions include two deceased former Indian politicians of the Gandhi family: Indira Gandhi (former prime minister of India) and Sanjay Gandhi (son of Indira Gandhi and also a politician). In contrast, both fine-tuned BERT_national and BERT_all predict the currently active politicians of the same family. Moreover, on cloze_2, BERT_base fails to suggest Modi, the most obvious completion in contemporary Indian politics. This test indicates that on simple cloze statements, fine-tuned BERT outputs results consistent with the corpus. A sketch reproducing this base-versus-fine-tuned comparison follows.
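Reproducing the Table 3 style comparison with the cloze() helper from the earlier sketch; "finetuned_bert/" is a placeholder for a fine-tuned checkpoint (a TensorFlow checkpoint would first need conversion), and uncased models return lowercase tokens.

```python
# Compare base and fine-tuned completions on the two sanity-check probes.
for name in ("bert-base-uncased", "finetuned_bert/"):   # placeholder path
    tok = BertTokenizer.from_pretrained(name)
    model = BertForMaskedLM.from_pretrained(name).eval()
    for probe in ("[MASK] Gandhi", "Narendra [MASK]"):
        print(name, probe, cloze(model, tok, probe))
```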
5.2 Community perception tracking

Research question: Can we use fine-tuned LMs to track community perception? A trend of increasing polarization along religious lines in the Indian political scene has been reported recently [8, 7, 4]. An analysis of religious polarization in our corpus (along the lines of the political polarization analysis in [13]) would require a reliable estimate of each user's religious affiliation; instead, we focus on tracking the perception of the two most prominent religions in India. We perform several modifications to our corpora to eliminate the possibility of inaccurate characterization and employ different techniques to analyze this research question of considerable social value.

We first construct a simple test to ascertain that fine-tuned BERT reflects the discussions around religion in the corpus. In response to the cloze sentence [My religion is MASK], the top two BERT_base predictions are Christian and Catholic, while the fine-tuned BERT_all predicts Islam and Hindu, in line with expectations.

We next construct two cloze statements, [Hindus are MASK] (denoted S1) and [Muslims are MASK] (denoted S2), and query BERT_national, BERT_all, and BERT_base to estimate the perception of the two religions. Among the 4,381,623 unique bigrams in the corpus, [Hindus are] and [Muslims are] rank 755th and 699th by frequency, respectively.

Table 6 lists the top three completions suggested by the different BERT models. Our findings highlight two points. First, fine-tuning substantially altered the predictions: with BERT_base, both S1 and S2 were completed with predominantly neutral terms, while the models trained on the election corpus produced largely negative top-predicted words for both S1 and S2. Second, the negativity is not merely one-sided; it is not the case that only one community is painted with negative words while the other is hardly at the receiving end. Rather, both communities received a comparable share of negative completions that almost mirrored each other, hinting at a possibly polarized political landscape based on religious identities.

Table 6: BERT completion results for [Hindus are MASK] (denoted S1) and [Muslims are MASK] (denoted S2).
Model | S1 | S2
BERT_base | here (0.09), minority (0.06), Christians (0.06) | Christians (0.11), excluded (0.04), Muslim (0.04)
BERT_national | fools (0.15), terrorists (0.07), fool (0.02) | fools (0.11), terrorists (0.07), fool (0.02)
BERT_all | fools (0.13), terrorists (0.05), idiots (0.03) | fools (0.09), terrorists (0.06), terrorist (0.03)

Research question: Is this analysis affected by BERT's inability to account for negation? It is possible that BERT's predictions are influenced by a prevalence of phrases containing negation (e.g., Hindus are not fools, Muslims are not terrorists). We therefore queried models fine-tuned on the corpora after removing every comment (roughly 20% of the corpus) containing one or more of the valence shifters listed in Table 4, and found the ordering of the results unchanged. A sketch of this filtering step follows.

Table 4: List of valence shifters considered.
not, can't, won't, don't, shouldn't, mustn't, should not, must not, do not, cannot, will not, would not, wouldn't, isn't, is not, dare not, have not, might not, may not, need not, ought not, shall not
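A minimal sketch of the corpus modification just described: drop any comment containing a valence shifter from Table 4 before fine-tuning.

```python
# Filter out comments containing negation (valence shifters).
import re

VALENCE_SHIFTERS = [
    "not", "can't", "won't", "don't", "shouldn't", "mustn't",
    "should not", "must not", "do not", "cannot", "will not",
    "would not", "wouldn't", "isn't", "is not", "dare not",
    "have not", "might not", "may not", "need not", "ought not",
    "shall not",
]
pattern = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in VALENCE_SHIFTERS) + r")\b",
    flags=re.IGNORECASE)

def drop_negated(comments):
    """Keep only comments with no valence shifter."""
    return [c for c in comments if not pattern.search(c)]

print(drop_negated(["Hindus are not fools", "vote for BJP"]))
```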
Research question: Could the analysis be affected by an association between the two religious entities? In a later result, we find that BERT develops an intuition that Modi is related to BJP. We were therefore curious whether the mirrored perceptions of the two communities are driven by association, i.e., whether BERT figures out that Hindus and Muslims are related and reflects one community's perception onto the other. To eliminate this possibility, we further modified the corpora (this time without removing negation): (i) holding out all comments containing at least one high-frequency term related to Hinduism (listed in Table 7), yielding C_all^Hindu, and (ii) holding out all comments containing at least one high-frequency term related to Islam (Table 7), yielding C_all^Islam. As shown in Table 5, the results are consistent with Table 6.

Table 5: BERT completion results for S1 and S2 on the held-out corpora.
BERT_national^Muslim (S1) | BERT_national^Hindu (S2) | BERT_all^Muslim (S1) | BERT_all^Hindu (S2)
fools (0.09) | fools (0.08) | fools (0.10) | fools (0.07)
terrorists (0.06) | terrorists (0.06) | terrorists (0.06) | terrorists (0.06)
idiots (0.03) | stupid (0.02) | fool (0.05) | stupid (0.02)

[Figure 2: Word cloud visualizations of the contexts of (a) [Hindus are] and (b) [Muslims are].]

5.2.1 Validation

Using a template-based word-cloud tool (results presented in Figure 2), a semantic lexicon induction tool, and manual inspection (discussed below), we corroborate BERT's finding that the community perception of both religions was largely negative.

Sentiment analysis using SENTPROP [17]: Lexicon-based sentiment analysis is a well-established method for computing document sentiment scores [23]: tokens are assigned scores, and each document (comment, in our case) is scored by combining its constituent token scores, usually by simple addition. Obtaining a domain-specific lexicon is crucial for effective sentiment analysis [37]. We induced a custom lexicon from our own corpus using word embeddings trained with [18] and the SENTPROP lexicon-induction algorithm of [17], with the same seed words as in [17]. Our test for a positive (or negative) comment simply adds the individual token scores; if the cumulative score exceeds 3 (or falls below -3), the comment is considered positive (or negative).

We identified four high-frequency religious tokens for each religion (Hindu, Hindus, Hinduism, Hindutva; Muslim, Muslims, Islam, Islamic) and list their scores in Table 7. The scores of all these terms are negative and comparable across the two religions. In C_national, 71.2% of all tokens are more positive than any of the religious tokens considered; in C_all, 88.2% are.

Table 7: Sentiment of religious tokens.
Token | C_all | C_national
Hindu | -0.78 | -0.58
Hindus | -0.79 | -0.57
Hindutva | -0.71 | -0.51
Hinduism | -0.99 | -0.59
Muslim | -0.72 | -0.49
Muslims | -0.80 | -0.49
Islam | -0.91 | -0.58
Islamic | -0.68 | -0.59

We next divide both C_all and C_national into two disjoint subsets: a religious subset containing the comments with at least one of the eight religious tokens, and its complement religious^c. Table 8 gives the percentage of positive and negative comments in each subset. Compared to the religious^c subset, the relative increase in the fraction of negative comments in the corresponding religious subset exceeds the relative increase in the fraction of positive comments. Performing the same analysis at the finer granularity of individual months yields consistent findings: religious discussion attracts more negativity than positivity. We cannot draw a strong conclusion from these findings alone; however, in addition to the automated analysis, we sampled 100 comments from both religious and religious^c, and our manual inspection aligns with the findings. A sketch of the lexicon-based scoring rule described above follows.

Table 8: Sentiment analysis after partitioning each corpus into the subset containing religious tokens and its complement.
Corpus | religious subset | religious^c subset
C_national | pos = 35.16%, neg = 18.55% | pos = 27.87%, neg = 6.24%
C_all | pos = 33.64%, neg = 18.55% | pos = 18.47%, neg = 4.13%
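A sketch of the lexicon-based comment labelling: token scores come from the SENTPROP-induced lexicon (here a small placeholder dict), and a comment is positive or negative if its summed score crosses +3 or -3.

```python
# Lexicon-based sentiment labelling with a +/-3 decision threshold.
def comment_sentiment(comment, lexicon, threshold=3.0):
    score = sum(lexicon.get(tok, 0.0) for tok in comment.lower().split())
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

lexicon = {"hindu": -0.78, "muslim": -0.72, "great": 1.2}  # illustrative
print(comment_sentiment("Muslims are great people", lexicon))
```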
Presence of hate words: In our third and final analysis, we focus on the presence of hate tokens around the religious tokens. In the religious subset of C_all, the aforementioned religious tokens appear 151,919 times across 95,638 comments (3.94% of the entire corpus). For every instance of a religious token in a comment, we considered a left and right context of two words (i.e., a total of four surrounding words) and computed the fraction of instances whose context contained a hate word or slur. For hate words, we combined two previously published lexicons [9, 20] of derogatory terms used in code-switched English (365 unique slurs). We found that at least one slur was present in 7.22% of the contexts containing a religious token.

It may well be the case that terms from these hate lexicons are a common occurrence in Indian online discussions regardless of subject, religious or otherwise. To verify whether that is the case, we randomly sampled an equal number of 4-grams (sequences of 4 consecutive tokens) from the religious^c subset of C_all and found that the fraction of contexts containing a hate word (4.39 ± 0.07%) was lower, indicating that the presence of hateful terms increases when religion is discussed. A sketch of this context-window analysis follows.
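A sketch of the context-window analysis just described: for every occurrence of a religious token, a window of two tokens on each side is checked against the slur lexicon. Both lexicons here are illustrative placeholders.

```python
# Fraction of religious-token contexts that contain a slur.
RELIGIOUS = {"hindu", "hindus", "hinduism", "hindutva",
             "muslim", "muslims", "islam", "islamic"}

def hateful_context_fraction(comments, slurs, window=2):
    hits = total = 0
    for c in comments:
        toks = c.lower().split()
        for i, t in enumerate(toks):
            if t in RELIGIOUS:
                total += 1
                ctx = toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]
                hits += any(w in slurs for w in ctx)
    return hits / total if total else 0.0
```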
In our next series of experiments, we chose a weekly granularity for the results. We divide the comments into weekly subsets based on the week they were posted, yielding one corpus per week in the time-frame considered. Next, BERT was fine-tuned on each of these corpora, yielding one fine-tuned BERT model per week. We queried each of these weekly fine-tuned BERT models with S3 and S4 and examined the results.

5.3.1 Party-focused analysis

For every week, among the ranked predictions, BJP and Congress consistently featured as the top-two political parties. We found this result consistent with the ground truth that, indeed, these two parties are the two most popular national parties. In Figure 4, we plot the predicted probabilities for BJP and Congress. As shown in Figure 4, on both C_national and C_all, BJP was assigned a higher probability than Congress. In Figure 4(b), support for both parties apparently showed a sharp decline in week 3. This week coincides with the period of heightened tensions between India and Pakistan [1], and a substantial chunk of the corpus discussed a potential war and possible outcomes. For the templates used for querying, the probability mass got split among India and Pakistan (in addition to the political entities), i.e., a substantial chunk of the users were discussing who would win a hypothetical war (India/Pakistan will win).

Figure 4: Party-focused analysis for (a) [Vote for MASK] and (b) [MASK will win]; x-axis: week number (2 to 14), y-axis: predicted probability (0 to 0.5). BJP is plotted with saffron and Congress with green. The blue line indicates the time when voting starts. Solid lines indicate C_national; dotted lines indicate C_all.

Robustness to phrase variation: One might argue that a simple frequentist analysis, plotting the weekly occurrences of [vote for BJP] or [vote for Congress] normalized by the total number of weekly occurrences of [vote for], can be equally effective in indicating BJP's dominance over Congress throughout the entire period (a sketch of this baseline is given below). However, for less common phrases with a similar meaning, the lack of an exact match can make this type of frequentist analysis difficult. For example, [cast your vote to] has a sparse presence in the corpus (22 exact matches in the entire corpus) compared to 26,301 mentions of [vote for], indicating that simple template-based matching (i.e., executing an exact phrase-match against the comments) would not work (sophisticated embedding-based methods may address this issue). Querying a language model has an advantage for uncommon but similar-meaning phrases, as we can easily compute and compare probabilities with such a template.

Comparison to polls and outcomes: Election laws in India ban the release of polling information close to an election; exit polls are thus released after the election. The vast majority of the polls predicted a victory for the incumbent (with widely varying seat counts). This aligns with our discovery using the queries mentioned above, where the incumbent party, BJP, is assigned a higher probability than the opposition, Congress.
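For contrast with LM querying, the following is a minimal sketch of the frequentist baseline discussed under "Robustness to phrase variation"; weekly_comments (a mapping from week number to a list of lowercased comment strings) is an assumed, illustrative data structure.

def weekly_support(weekly_comments, party):
    # Weekly count of "vote for <party>" normalized by all "vote for"
    # mentions in that week; zero when the template does not occur.
    trend = {}
    for week, comments in weekly_comments.items():
        total = sum(c.count("vote for") for c in comments)
        party_hits = sum(c.count(f"vote for {party}") for c in comments)
        trend[week] = party_hits / total if total else 0.0
    return trend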
5.3.2 Candidate-focused analysis

We now move to our candidate-focused analysis. We used the same set of probes, S3 and S4, and compared the two most popular candidates: Narendra Modi (popularly referred to as Modi) and Rahul Gandhi (popularly referred to as Rahul). As shown in Figure 5, Modi was overwhelmingly more popular than Rahul across the entire time-period we considered. This finding is consistent with a previous finding [19] about the 2014 elections and with a Pew research survey [6] stating that 88% of the surveyed Indian citizens viewed him favorably. In contrast, the support for Rahul was very low. This finding is again consistent with the two following outcomes: (i) Rahul lost in a seat that had been held by the Congress party and his family for years and was considered a party stronghold, and (ii) Rahul resigned as the party president following Congress's poor performance [5].

An adversarial example to highlight BERT's robustness and Modi's overwhelming popularity: We have already shown that BERT is robust to phrase variations. We next show a stronger result. We remove every comment containing the phrase [vote for Modi] (or [Modi will win]) from the corpus and fine-tune BERT on the modified corpora. If BERT were relying only on counting statistics, without forming a deeper understanding of the corpus, S3 and S4 should be completed with Rahul with a higher probability than Modi. However, as shown in Table 9, Modi still received a higher probability than Rahul, indicating that BERT could still infer stronger support for Modi from the rest of the comments.

Table 9: Performance on the adversarial corpora.

    Data set   Removed phrase   Probe   P(Modi)   P(Rahul)
    C_all      -                S3      0.2345    0.0065
    C_all      Vote for Modi    S3      0.1025    0.0042
    C_all      -                S4      0.2549    0.0219
    C_all      Modi will win    S4      0.1721    0.0122

One user, one comment: Fake accounts, bots, and a variety of manual or automated mechanisms exist to drive popularity or attention to entities. We re-ran all our analyses on a corpus where only one comment is randomly sampled and retained per user (similar to one person, one vote), with qualitatively similar results. Note that this does not eliminate the effects caused by multiple fake accounts, the detection of which is beyond the scope of this paper.

Figure 5: Candidate-focused analysis for (a) [Vote for MASK] and (b) [MASK will win]; x-axis: week number (2 to 14), y-axis: predicted probability (0 to 0.5). Modi is plotted with saffron and Rahul with green. The blue line indicates the time when voting starts. Solid lines indicate C_national; dotted lines indicate C_all.
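The two robustness checks above reduce to simple corpus transformations; a minimal sketch follows, assuming comments is a list of (user_id, text) pairs (an illustrative data structure, not the authors' code).

import random

def remove_phrase(comments, phrase):
    # Adversarial corpus: drop every comment containing the phrase.
    return [(u, t) for u, t in comments if phrase.lower() not in t.lower()]

def one_comment_per_user(comments, seed=0):
    # "One user, one comment": keep one randomly sampled comment per user.
    by_user = {}
    for u, t in comments:
        by_user.setdefault(u, []).append(t)
    rng = random.Random(seed)
    return [(u, rng.choice(texts)) for u, texts in by_user.items()]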
5.4 Deeper insights

Research question: Is it possible to mine deeper insights, such as identifying national priorities, using BERT?

So far, we have seen that querying BERT can be effective in (i) investigating sentiment around an entity, (ii) comparing the relative popularity of candidates and political parties, and (iii) acting as a proxy for opinion/exit polls. In our next series of experiments, we explore whether it is possible to obtain deeper insights. For this, we construct the following two cloze statements: [The biggest problem of India is MASK] (denoted as S5) and [India's biggest problem is MASK] (denoted as S6). For each month, we list the top three predictions in Table 10.

To evaluate the effectiveness of the BERT predictions, we require a baseline ground truth to compare against. For this, we consider the most recent survey conducted by Pew research [12] among 2,521 respondents in India from May 23 to July 23, 2018. Note that there is a considerable time-lag between the conducted survey and our analysis, during which a major terror attack happened in Pulwama that brought India and Pakistan almost to the brink of a full-fledged war. Hence, we observed some discrepancies between the survey's findings and the BERT predictions; we attribute these minor discrepancies to the highly significant and unexpected events that took place.

The bag of problems identified by BERT_all on S5 ({terrorism, corruption, Kashmir, unemployment, poverty}) and on S6 ({terrorism, Pakistan, Kashmir, corruption, unemployment, poverty}) (see Table 10) has substantial overlap, indicating that the predictions are robust to simple phrase variations. Three issues identified with both cloze statements (terrorism, corruption and unemployment) featured in the top four issues identified in the Pew research survey, establishing that fine-tuning BERT on a massive web corpus can provide an interesting alternative to traditional surveys.

We next focus on the temporal nature of the predictions. While terrorism had a constant presence in the top three predictions from both cloze statements in all four months we considered, we notice that the predicted probability for terrorism was substantially higher in the month of February, the time-period in which the Pulwama terror attack occurred. As the tensions between the two countries subsided, the other two pressing problems, corruption and unemployment, started receiving more public attention. It is infeasible to conduct extensive field surveys on a monthly basis. However, our results indicate that from a large data set of discussions on current events, it is possible to mine deeper insights and to analyze the temporal trends of public perception of national issues and priorities; LMs can provide a cost-effective, fast-turnaround alternative to traditional surveys.

Table 10: Predicted completions with probabilities in parentheses.

February:
    BERT_national on S5: terrorism (0.24), Pakistan (0.17), corruption (0.14)
    BERT_national on S6: terrorism (0.49), Pakistan (0.09), corruption (0.04)
    BERT_all on S5: terrorism (0.28), corruption (0.16), Kashmir (0.06)
    BERT_all on S6: terrorism (0.37), Pakistan (0.13), Kashmir (0.07)
March:
    BERT_national on S5: unemployment (0.14), terrorism (0.09), corruption (0.09)
    BERT_national on S6: terrorism (0.20), Pakistan (0.09), Kashmir (0.05)
    BERT_all on S5: corruption (0.47), terrorism (0.12), poverty (0.10)
    BERT_all on S6: corruption (0.31), terrorism (0.30), poverty (0.05)
April:
    BERT_national on S5: unemployment (0.29), poverty (0.14), corruption (0.07)
    BERT_national on S6: terrorism (0.12), unemployment (0.10), corruption (0.05)
    BERT_all on S5: corruption (0.36), unemployment (0.21), terrorism (0.07)
    BERT_all on S6: corruption (0.22), terrorism (0.15), Kashmir (0.07)
May:
    BERT_national on S5: unemployment (0.21), corruption (0.19), terrorism (0.07)
    BERT_national on S6: terrorism (0.18), corruption (0.13), unemployment (0.05)
    BERT_all on S5: corruption (0.25), unemployment (0.21), poverty (0.08)
    BERT_all on S6: corruption (0.22), unemployment (0.14), terrorism (0.12)
Nested queries: We explore whether we can go deeper and identify local issues through nested querying. In what follows, we show a preliminary study that holds promise. We first queried BERT_all with the cloze statement [MASK is a major city in Tamil Nadu]; the result predicted with the highest probability was Chennai. Next, we queried BERT_all, BERT_national, and BERT_TN (a BERT model fine-tuned only on the subset of comments generated from Tamil YouTube channels) with the cloze statements [Chennai's biggest problem is MASK] and [The biggest problem of Chennai is MASK]. Our results show that while BERT_all and BERT_national both predicted corruption, terrorism and unemployment, the fine-tuned model specific to the state identified the water crisis as one of the local issues. The Chennai water crisis [2], which started as a local issue, snowballed into a national crisis that began receiving global attention in June, a time-frame beyond our analysis period. Our results thus indicate that early detection of localized issues through focused analysis merits deeper exploration.

6 RETENTION OF PRIOR KNOWLEDGE

Owing to the black-box nature of large-scale language models, it is unclear how fine-tuning impacts the existing knowledge in a model. In this section, we conduct what is, to the best of our knowledge, the first analysis of how knowledge from the original model is carried over to a fine-tuned model. We first re-iterate the following observations about our corpus:

1. Compared to typical training corpora used for the base models, our document lengths are considerably shorter.
2. The overlap of facts is fairly limited owing to the focus of the corpus.

Intuition suggests that most of the knowledge must stay intact, untouched by the fine-tuning step, since the bulk of the corpus deals with opinions about the Indian election and with entities relevant to this and other (smaller-scale) contemporaneous events in the Indian subcontinent.

We use the cloze sentences from [27], which cover a variety of domains such as entities and relations from ConceptNet, Google-RE, and SQuAD. The sentences also cover a broad range of formats (querying for subjects and objects; numeric literal values such as year of birth, etc.). BERT achieves reasonable performance on this corpus of questions, and a strong argument is made for BERT's ability to serve as an open-domain Question-Answering (QA) model. Our experiment utilizes these very cloze sentences; by passing them as input to BERT_base (our base model) and our fine-tuned BERT_national, we are able to characterize the extent of knowledge lost during a fine-tuning step. We report P@1 scores and analyze the types of errors made by the fine-tuned model in Table 11.

Table 11: P@1 performance on the cloze statements for BERT_base (base), BERT_national (national), and BERT_all (all). We observe a slight decline in performance across all corpora and all types of relations.

    Corpus       Relation      #Facts   base    national   all
    Google-RE    birth-place   2937     40.17   37.79      37.01
    Google-RE    death-place   1825     24.57   19.88      18.40
    Google-RE    birth-year    765      3.34    2.90       2.36
    ConceptNet   Total         11458    12.71   10.55      10.85
    SQuAD        Total         305      13.11   7.50       10.16

Table 11 shows a slight decline in performance on all corpora. McNemar's test reveals that these differences are statistically significant in most cases.
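A minimal sketch of the P@1 computation over such cloze statements follows, assuming probes is a list of (cloze_text, gold_answer) pairs in the style of [27] and unmasker is a transformers fill-mask pipeline; both names are illustrative.

def precision_at_1(probes, unmasker):
    # Count how often the model's top MASK filler matches the gold answer.
    hits = 0
    for cloze, gold in probes:
        top = unmasker(cloze, top_k=1)[0]["token_str"].strip()
        hits += int(top.lower() == gold.lower())
    return hits / len(probes)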
We next analyze the types of errors introduced in the fine-tuned models and describe some of the patterns observed.

Numerical entities: Google-RE contains a set of cloze statements where the masked word is the year of birth of a subject (relation birth-year in Table 11). We observed that in many cases, the fine-tuned model predicted 1947 as the year of birth regardless of the entity; 1947 is significant in Indian history (the year of independence from colonial rule). Our hypothesis is that year-of-birth facts have low support in the corpus (given that even the base model performs poorly), and thus it is trivial to move the distribution of numerical literals towards the distribution of years in the fine-tuning corpus.

Location entities: In the birth-place and death-place relations, the decline in performance occurred due to a variety of mis-predictions. It is interesting to note that some of the mistakes were geographically close to the correct answer, for instance [Tehran to Iran], [Glasgow to London], [Hartford to Greenwich]. We did not, however, observe an over-representation of Indian locations in the result set. This is possibly due to the limited mention of cities in our corpus (verified by manual inspection).

7 CONCLUSIONS

In this paper, in the context of the 2019 Indian general election, we evaluate the viability of fine-tuned large-scale language models for navigating and mining insights from corpora. Our fine-tuned models, when queried, reveal a variety of insights, such as temporal trends of candidate popularity, evolving national priorities, concerns of a population, and sentiment around religions. We demonstrate through carefully constructed experiments that language modeling is robust to sparsity of the phrases queried and can operate even in situations where template-matching would fail. We corroborate the mined insights with manual analyses involving word-cloud tools, lexicon sentiment analysis tools, political outcomes, and available surveys. Further, using our corpus, we produce a quantitative evaluation of a fine-tuned model's retained knowledge, and provide insights about what is retained, acquired, and forgotten. We posit that improved language models of the future can provide a viable alternative to existing IR pipelines for analysis and mining.

REFERENCES

[1] BBC. https://www.bbc.com/news/world-asia-47366718. Online; accessed 12-March-2019.
[2] CNN. https://www.cnn.com/2019/07/12/india/india-chennai-water-crisis-train-intl/index.html. Online; accessed 16-Aug-2019.
[3] Election Commission of India. https://eci.gov.in/about/about-eci/the-functions-electoral-system-of-india-r2/. Online; accessed 16-Aug-2019.
[4] New York Times. https://www.nytimes.com/2019/04/11/world/asia/modi-india-elections.html. Online; accessed 28-July-2019.
[5] New York Times. https://www.nytimes.com/2019/07/03/world/asia/rahul-gandhi-resigns.html. Online; accessed 12-March-2019.
[6] Pew Research Center. https://www.pewresearch.org/global/2017/11/15/india-modi-remains-very-popular-three-years-in/. Online; accessed 16-Aug-2019.
[7] Washington Post. https://www.washingtonpost.com/world/asia_pacific/divided-families-and-tense-silences-us-style-polarization-arrives-in-india/2019/05/18/734bfdc6-5bb3-11e9-98d4-844088d135f2_story.html. Online; accessed 28-July-2019.
[8] R. B. Bhagat, 'Census enumeration, religious identity and communal polarization in India', Asian Ethnicity, 14(4), 434–448, (2013).
[9] Aditya Bohra, Deepanshu Vijay, Vinay Singh, Syed Sarfaraz Akhtar, and Manish Shrivastava, 'A dataset of Hindi-English code-mixed social media text for hate speech detection', in Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media, pp. 36–41, (2018).
[10] Pete Burnap, Rachel Gibson, Luke Sloan, Rosalynd Southern, and Matthew Williams, '140 characters to victory?: Using Twitter to predict the UK 2015 general election', Electoral Studies, 41, 230–233, (2016).
[11] Fabio Celli, Evgeny Stepanov, Massimo Poesio, and Giuseppe Riccardi, 'Predicting Brexit: Classifying agreement is better than sentiment and pollsters', in Proceedings of the Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media (PEOPLES), pp. 110–118, (2016).
[12] Pew Research Center. A sampling of public opinion in India, 2019.
[13] Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Matthew Gentzkow, Jesse Shapiro, and Dan Jurafsky, 'Analyzing polarization in social media: Method and application to tweets on 21 mass shootings', in Proceedings of the 17th Annual NAACL, (2019).
[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, 'BERT: Pre-training of deep bidirectional transformers for language understanding', in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, (June 2019).
[15] Joseph DiGrazia, Karissa McKelvey, Johan Bollen, and Fabio Rojas, 'More tweets, more votes: Social media as a quantitative indicator of political behavior', PLoS ONE, 8(11), e79449, (2013).
[16] Asgharali Engineer, Communal riots in post-independence India, Universities Press, 1997.
[17] William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky, 'Inducing domain-specific sentiment lexicons from unlabeled corpora', in Proceedings of EMNLP, volume 2016, p. 595. NIH Public Access, (2016).
[18] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov, 'Bag of tricks for efficient text classification', arXiv preprint arXiv:1607.01759, (2016).
[19] Vadim Kagan, Andrew Stevens, and V. S. Subrahmanian, 'Using Twitter sentiment to forecast the 2013 Pakistani election and the 2014 Indian election', IEEE Intelligent Systems, 30(1), 2–5, (2015).
[20] Raghav Kapoor, Yaman Kumar, Kshitij Rajput, Rajiv Ratn Shah, Ponnurangam Kumaraguru, and Roger Zimmermann, 'Mind your language: Abuse and offense detection for code-switched languages', arXiv preprint arXiv:1809.08652, (2018).
[21] Nora Kassner and Hinrich Schütze, 'Negated LAMA: Birds cannot fly', arXiv preprint arXiv:1911.03343, (2019).
[22] Aparup Khatua, Apalak Khatua, Kuntal Ghosh, and Nabendu Chaki, 'Can #twitter trends predict election results? Evidence from the 2014 Indian general election', in 2015 48th Hawaii International Conference on System Sciences, pp. 1676–1685. IEEE, (2015).
[23] Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith, 'From tweets to polls: Linking text sentiment to public opinion time series', in Fourth International AAAI Conference on Weblogs and Social Media, (2010).
[24] Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell, 'Hope Speech Detection: A Computational Analysis of the Voice of Peace', CoRR, abs/1909.12940, (2019).
[25] Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell, 'Voice for the Voiceless: Active Sampling for Finding Comments Supporting the Rohingyas', in Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), p. To appear, (2020).
[26] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer, 'Deep contextualized word representations', in Proc. of NAACL, (2018).
[27] Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller, 'Language models as knowledge bases?', in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473, Hong Kong, China, (November 2019). Association for Computational Linguistics.
[28] Erik Tjong Kim Sang and Johan Bos, 'Predicting the 2011 Dutch senate election results with Twitter', in Proceedings of the Workshop on Semantic Analysis in Social Media, pp. 53–60, (2012).
[29] N. C. Saxena, 'The nature and origin of communal riots in India', Communal riots in post-independence India, 60, (1984).
[30] Parul Sharma and Teng-Sheng Moh, 'Prediction of Indian election using sentiment analysis on Hindi Twitter', in 2016 IEEE International Conference on Big Data (Big Data), pp. 1966–1971. IEEE, (2016).
[31] Min Song, Meen Chul Kim, and Yoo Kyung Jeong, 'Analyzing the political landscape of the 2012 Korean presidential election in Twitter', IEEE Intelligent Systems, 29(2), 18–26, (2014).
[32] Chi Sun, Luyao Huang, and Xipeng Qiu, 'Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence', arXiv preprint arXiv:1903.09588, (2019).
[33] Ian Talbot and Gurharpal Singh, The partition of India, Cambridge University Press, Cambridge, 2009.
[34] Adam Tsakalidis, Nikolaos Aletras, Alexandra I. Cristea, and Maria Liakata, 'Nowcasting the stance of social media users in a sudden vote: The case of the Greek referendum', in Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 367–376. ACM, (2018).
[35] Andranik Tumasjan, Timm O. Sprenger, Philipp G. Sandner, and Isabell M. Welpe, 'Predicting elections with Twitter: What 140 characters reveal about political sentiment', in Fourth International AAAI Conference on Weblogs and Social Media, (2010).
[36] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin, 'Attention is all you need', in Advances in Neural Information Processing Systems, pp. 5998–6008, (2017).
[37] Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald, 'The viability of web-derived polarity lexicons', in NAACL, pp. 777–785. Association for Computational Linguistics, (2010).
[38] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le, 'XLNet: Generalized autoregressive pretraining for language understanding', CoRR, abs/1906.08237, (2019).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "z-pCPuC6WXw",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200349",
"forum_link": "https://openreview.net/forum?id=z-pCPuC6WXw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Fine-Grained Text Sentiment Transfer via Dependency Parsing",
"authors": [
"Lulu Xiao",
"Xiaoye Qu",
"Ruixuan Li",
"Jun Wang",
"Pan Zhou",
"Yuhua Li"
],
"abstract": "Fine-grained sentiment transfer demands editing an input sentence on a given sentiment intensity while preserving its content, which largely extends traditional binary sentiment transfer. Previous works on sentiment transfer usually attempt to learn a latent content representation disentangled from sentiment. However, it is difficult to completely separate these two factors, and it is also not necessary. In this paper, we propose a novel model that learns the latent representation without disentanglement and leverages sentiment intensity as input to the decoder for fine-grained sentiment control. Moreover, aligned sentences with the same content but different sentiment intensities are usually unavailable. Due to the lack of parallel data, we construct pseudo-parallel sentences (i.e., sentences with similar content but different intensities) to relieve the burden of our model. Specifically, motivated by the fact that a sentiment word (e.g., “delicious”) has a close relationship with a non-sentiment context word (e.g., “food”), we use dependency parsing to capture the dependency relationship. The pseudo-parallel sentences are produced by replacing the sentiment word with a new one according to the specific context word. Besides, the difference between pseudo-parallel sentences and generated sentences, together with other constraints, is utilized to guide the model in precisely revising sentiment. Experiments on the Yelp dataset show that our method substantially improves the degree of content preservation and sentiment accuracy and achieves state-of-the-art performance.",
"keywords": [],
"raw_extracted_content": "Fine-Grained Text Sentiment Transfer via Dependency Parsing

Lulu Xiao¹, Xiaoye Qu¹, Ruixuan Li¹*, Jun Wang², Pan Zhou¹ and Yuhua Li¹

¹ Huazhong University of Science and Technology, China. Email: {xiao lulu, xiaoye, rxli, panzhou, idcliyuhua}@hust.edu.cn
² Fujitsu Laboratories of America, USA. Email: [email protected]
* Corresponding author

Abstract. Fine-grained sentiment transfer demands editing an input sentence on a given sentiment intensity while preserving its content, which largely extends traditional binary sentiment transfer. Previous works on sentiment transfer usually attempt to learn a latent content representation disentangled from sentiment. However, it is difficult to completely separate these two factors, and it is also not necessary. In this paper, we propose a novel model that learns the latent representation without disentanglement and leverages sentiment intensity as input to the decoder for fine-grained sentiment control. Moreover, aligned sentences with the same content but different sentiment intensities are usually unavailable. Due to the lack of parallel data, we construct pseudo-parallel sentences (i.e., sentences with similar content but different intensities) to relieve the burden of our model. Specifically, motivated by the fact that a sentiment word (e.g., “delicious”) has a close relationship with a non-sentiment context word (e.g., “food”), we use dependency parsing to capture the dependency relationship. The pseudo-parallel sentences are produced by replacing the sentiment word with a new one according to the specific context word. Besides, the difference between pseudo-parallel sentences and generated sentences, together with other constraints, is utilized to guide the model in precisely revising sentiment. Experiments on the Yelp dataset show that our method substantially improves the degree of content preservation and sentiment accuracy and achieves state-of-the-art performance.

1 Introduction

Text sentiment transfer is a common but difficult style transfer task in Natural Language Processing (NLP). The goal of sentiment transfer is to change the sentiment of a sentence to the opposite while preserving its semantic meaning. Sentiment transfer has found broad applications in NLP, such as letter and review rewriting [20, 22], and has attracted the attention of large numbers of researchers.

Previous works [33, 14] on sentiment transfer mainly focus on binary sentiment (positive and negative) transfer. In this paper, we set our task in more general scenarios that revise sentences on a given sentiment intensity value, ranging from 1 to 5, for fine-grained transfer; here the intensities 1 to 5 correspond to strong negative, weak negative, neutral, weak positive, and strong positive. For example, given the input sentence “the food was totally fine” with sentiment intensity “4”, an output “the food was enough” may be desired for the target sentiment “3” and “the food was forgettable” for the target sentiment “2”. Besides, an output sentence “the food was totally wonderful” on the target sentiment “5” expresses stronger positive intensity, and “the food was totally terrible” on the target sentiment “1” expresses stronger negative intensity.
The task of fine-grained text sentiment transfer aims at modifying an input sentence to satisfy a target sentiment intensity while keeping the original content. However, there are some limitations to this task and several problems in previous methods. First of all, there are no natural parallel data, hence we cannot use a supervised way to train the transfer model. Second, previous works like [16] attempt to disentangle a sentence into a content part and a sentiment part, but it is difficult to completely separate them because these two parts are mixed together in a complicated way. This usually causes the semantic meaning of the original sentence and its corresponding generated sentence to be quite different.

In this paper, we propose an approach for editing sentences which contains two parts: a transfer module and a pseudo-parallel module. In the transfer module, a Gated Recurrent Unit (GRU) based encoder-decoder architecture [2] is employed to revise sentences. The encoder encodes each input sentence into a latent representation without disentanglement, while the decoder generates sentences under the control of sentiment intensity values. We also use a classifier to predict the sentiment value of the generated sentence. The error between the sentiment value of the generated sentence and the target value provides a signal to train the decoder. Due to the lack of parallel data, pseudo-parallel sentences are introduced in the pseudo-parallel module to guide the transfer module to generate sentences on a given sentiment intensity value. Specifically, the pseudo-parallel module consists of two parts: dependency parsing and pseudo-parallel production. Pseudo-parallel sentences are pairs of sentences with similar content but different sentiment values. The key issue in producing pseudo-parallel sentences is to accurately find the sentiment information of a source sentence and change it to satisfy the target sentiment. As observed, the sentiment word “delicious” is suitable to describe “food” rather than “staff”, and different sentiment words have different sentiment intensities (e.g., “delicious” has a stronger positive sentiment than “ok”, and “terrible” has a stronger negative sentiment than “so-so”). In the dependency parsing part, we first extract the sentiment words of a sentence and then leverage dependency parsing to find the non-sentiment context word that has a specific dependency with the sentiment word. Subsequently, during pseudo-parallel production, all the sentiment words describing the same context word are evaluated by a scorer function, and the most appropriate sentiment word is selected to replace the original one; thus we can obtain the pseudo-parallel sentence on a target sentiment. Finally, the reference loss between the generated sentence and the pseudo-parallel sentence, combined with other constraints such as the reconstruction loss, is utilized to enhance the ability of our model to modify sentences.

We compare our method with state-of-the-art approaches on the dataset of Yelp reviews. Automatic metrics and human metrics of the experiment results show the efficacy of our model.

The contributions of this paper are summarized in the following three points:
1. We propose a novel framework with the combination of a classifier and sentiment controls to modify a sentence, in which the sentiment is not disentangled from the sentence.
2. To the best of our knowledge, this paper is the first work that introduces dependency parsing to the sentiment transfer task. Dependency parsing is used to find context words related to sentiment words and to produce pseudo-parallel sentences, which provide a signal to the model when revising sentences.
3. Experiment results of automatic evaluation and human evaluation show that our model outperforms state-of-the-art methods on both content preservation and sentiment accuracy.

2 Related Work

Recently, deep learning has obtained significant results in various computer vision and natural language processing tasks [36, 23]. Style transfer in computer vision has also achieved exciting performance [9, 27, 35, 15, 12], which inspired researchers to propose the task of style transfer on natural language text. After a surge of research on this task, text style transfer has obtained significant results [20, 22, 4, 10, 6, 3, 28, 29, 25]. Current methods of text style transfer mainly focus on revising polarity attributes (e.g., sentiment, writing style, gender, etc.) of text to the opposite while preserving attribute-independent content.

Due to the lack of parallel sentences at training time, an unsupervised approach is used in existing methods. Some methods follow the adversarial idea of Generative Adversarial Networks (GANs) [7], which optimizes the decoder/generator and discriminator/classifier in a cycle. Yang et al. [31] use a language model as the discriminator to provide richer and more stable feedback to guide Variational Autoencoders (VAEs) [13] in generating sentences. Fu et al. [5] propose two text style transfer models that employ adversarial training. The encoders of both models extract the content of a sentence under the direction of the classifier, but the first model utilizes a seq2seq [26] with two decoders for different styles, while the second model has just one decoder with a style embedding. Zhao et al. [34] employ an extension of the adversarial autoencoder (AAE) [18] to generate sentences and apply it to style transfer. Hu et al. [8] combine VAEs and attribute discriminators to efficiently generate semantic representations with the wake-sleep algorithm. John et al. [11] disentangle style and content latent representations under a multi-task loss and an adversarial loss.

Another line of methods does not implement the adversarial idea. Li et al. [14] obtain the content of a sentence by deleting its sentiment words, retrieve similar context from the target style corpus to extract the sentiment information, and then combine them in a neural network. Zhang et al. [32] leverage a shared encoder-decoder model to learn the public attributes (semantics) of all instances, and private encoder-decoder models to learn the specific characteristics of the corresponding attribute corpus. Xu et al. [30] propose a cycled reinforcement learning model which includes a neutralization module and an emotionalization module. The neutralization module learns disentangled representations, and the emotionalization module adds sentiment to the neutralized semantic content.

In contrast, we consider more general scenarios that edit sentences on different sentiment intensity values for fine-grained transfer. There are few works on fine-grained sentiment transfer.
Liao et al. [16] propose to learn a disentangled content factor and sentiment factor via two separate encoders based on a VAE, and then modify the content under the target sentiment. For better disentanglement, they model the content similarity and the sentiment differences of pseudo-parallel sentences. Luo et al. [17] propose a Seq2SentiSeq model combined with the sentiment intensity value and use a cycled reinforcement learning method to train the model. Different from them, we employ an autoencoder with the sentiment intensity value as control and pseudo-parallel sentences produced by dependency parsing as references to revise sentences.

3 Method

We assume the set of all inputs to our model is D_v = {(x_1, v_1), ..., (x_n, v_n)}, where x_i is a sentence and v_i ∈ V is the sentiment intensity of x_i. The values of V are fine-grained sentiments ranging from 1 to 5. We define sentences with sentiment values larger than 3 as positive, equal to 3 as neutral, and the rest as negative. The goal of this task is to generate a new sentence y for an input x. The sentiment value of x is v_src, (x, v_src) ∈ D. The generated sentence y should keep the content similar to x while its sentiment value equals the target sentiment v_tgt ∈ V. An overview of our system is depicted in Figure 1. The top part is the dependency parsing module; it employs dependency parsing to find context words that have specific dependencies with sentiment words in sentences. The bottom part is the transfer module; the main framework here is a traditional encoder-decoder network trained with pairs of (x, v_src) as input to generate a sentence that minimizes a set of constraints.

3.1 Extraction

To analyze the dependencies between sentiment words and context words, we first need to extract sentiment words that carry strong sentiment polarity. We only consider extracting sentiment words with respect to sentiment polarity. Assume the set of all input sentences with polarity labels is D_r = {(x_1, r_1), ..., (x_n, r_n)}, r_i ∈ {positive, negative}. An input sentence x is composed of N words u = {u_1, ..., u_i, ..., u_n}, and the sentiment polarity of x is r. The approach of Li et al. [14] is adopted to extract sentiment words in x; it computes the relative frequency of u_i as

f(u_i, r) = \frac{count(u_i, D_r) + \lambda}{\left(\sum_{r' \in \{positive, negative\},\, r' \neq r} count(u_i, D_{r'})\right) + \lambda}    (1)

where count(u_i, D_r) is the number of times u_i appears in D_r and λ is a smoothing parameter. If the relative frequency f(u_i, r) of u_i is larger than a threshold γ, then u_i is considered a sentiment word of x. We define α(x, v_src) to be the set of all sentiment words in x.
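As a concrete illustration of the extraction step, the following is a minimal Python sketch of the relative-frequency test in Eq. (1); corpus (a list of (tokenized_sentence, polarity) pairs) and the default parameter values are illustrative assumptions, not the authors' code.

from collections import Counter

def sentiment_words(sentence_tokens, polarity, corpus, lam=1.0, gamma=3.0):
    # Relative frequency of Eq. (1) with the two polarity classes; a token
    # counts as a sentiment word if its smoothed frequency ratio exceeds gamma.
    counts = {"positive": Counter(), "negative": Counter()}
    for tokens, r in corpus:
        counts[r].update(tokens)
    other = "negative" if polarity == "positive" else "positive"
    return [u for u in sentence_tokens
            if (counts[polarity][u] + lam) / (counts[other][u] + lam) > gamma]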
3.2 Dependency Parsing

After the extraction of sentiment words, we perform dependency parsing on the sentence x to find the context words corresponding to the sentiment words. Dependency syntax expresses the entire sentence structure through the dependencies between words; these dependencies constitute a dependency syntax tree whose root node is the core predicate of the sentence.

Figure 1: Framework of our proposed method. Our approach contains two parts: a transfer module and a pseudo-parallel module, which consists of dependency parsing and pseudo-parallel production. The dependency parsing contains two steps: (1) extract the sentiment words of an input sentence; (2) analyze dependencies between words in the input to find the context words for specific sentiment words. In the pseudo-parallel production, the scorer is used to find the best sentiment words according to the target sentiment to replace the originals, yielding the pseudo-parallel sentence x'. The bottom part is the transfer module based on an encoder-decoder network; it modifies an input sentence x into a new one, y, under the target sentiment v_tgt.

According to the dependencies in the syntax tree, we can find two words with a specific grammatical relation in the sentence, which are usually not adjacent. As shown in the top part of Figure 1, each arrow denotes a dependency. The arrow points to the governed object, and the starting point of the arrow is the dependent object. To decide which word has a specific dependency with the sentiment word, we only consider several fixed dependencies such as nsubj (nominal subject), dobj (direct object), amod (adjectival modifier), etc. For example, the word “food” in the sentence “the food tastes delicious” is the word we want to find that has a certain dependency with the sentiment word “delicious”, rather than the word “tastes” or others: “food” is the nominal subject associated with the sentiment word “delicious”.

Table 1: Five pairs of pseudo-parallel sentences. The first line is the input sentence and the other lines are pseudo-parallel sentences for the five sentiment intensity values. The fourth line is the same as the input because the sentiment value of the input sentence is 4.

    Input    the best part, exceptional service and prices can not be beat!
    (x', 1)  the worst part, dreadful service and prices can not be beat!
    (x', 2)  the frustrating part, lousy service and prices can not be beat!
    (x', 3)  the hot part, fine service and prices can not be beat!
    (x', 4)  the best part, exceptional service and prices can not be beat!
    (x', 5)  the gorgeous part, wonderful service and prices can not be beat!

Assume a is a sentiment word in x, a ∈ α(x, v_src); o is declared the context word of a if o has one of the dependencies described above with a. In this part, we do not need to consider sentiment intensity. We extract all the sentiment words in α(o, r) whose context word is o from the r (r ∈ {positive, negative}) corpus. For example, the positive sentences “the food tastes delicious” and “the food tastes wonderful” describe the same context word “food”; the sentiment words “delicious” and “wonderful” are then saved with “food”. Dependencies between sentiment words and corresponding context words are used to assist in producing pseudo-parallel sentences, as described in the following sections.

3.3 Replace

Pseudo-parallel sentences are pairs of sentences that have the same semantic content but different sentiment values, as shown in Table 1. The way to construct pseudo-parallel sentences is to replace each sentiment word of the source text with another, optimal sentiment word. As mentioned above, a sentiment word has a close dependency with its context word, so all the sentiment words of the context word can be candidates for replacement. Given an input (x, v_src), a is a sentiment word of x, a ∈ α(x, v_src), and o is the context word of a. There are k candidate words in α(o, r) to replace a. The best candidate c_tgt, which minimizes the score, will be used to replace a under the target sentiment v_tgt:

c_{tgt} = \arg\min_{c} \{\, S(a, c) \mid c \in candidate_k(a) \,\}    (2)

where S(·) is a weighted scorer function and candidate_k(a) is the set of all candidates for the sentiment word a.
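Before detailing the scorer, a minimal sketch of the dependency-based context-word lookup from Section 3.2 follows; the paper does not name its parser, so spaCy is an illustrative choice here, and the two-case heuristic is a simplification of the fixed dependency set described above.

import spacy

nlp = spacy.load("en_core_web_sm")

def context_word(sentence, sentiment_word, deps=("nsubj", "dobj")):
    doc = nlp(sentence)
    for tok in doc:
        if tok.text.lower() == sentiment_word.lower():
            # Case 1: adjectival modifier ("delicious food"): the head noun
            # is the context word.
            if tok.dep_ == "amod":
                return tok.head.text
            # Case 2: predicative use ("the food tastes delicious"): the
            # subject/object hangs off the same head as the sentiment word.
            for sibling in tok.head.children:
                if sibling.dep_ in deps:
                    return sibling.text
    return None

print(context_word("the food tastes delicious", "delicious"))  # -> "food"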
The scorer function S(·) evaluates candidates from different aspects; it mainly considers two factors in our setting: (a) how well the candidate word satisfies the target sentiment value v_tgt, and (b) how similar the candidate word is to a. We measure these two factors as follows.

Sentiment difference: Sentiment difference measures the difference between the sentiment value of the candidate word c and the target value v_tgt. How to compute the sentiment value of c is a key problem, as it is unknown. Inspired by SentiWordNet [1], we use the average sentiment of the texts that contain the word c to represent the sentiment of c:

v_c = \frac{\sum_{x \in D,\, c \in x} v_{src}}{|\{x \in D \mid c \in x\}|}    (3)

where x is an input with sentiment value v_src, x ∈ D, and the denominator is the number of texts in D in which c appears. The sentiment difference is then computed as

r_d(v_c, v_{tgt}) = |v_c - v_{tgt}|    (4)

Similarity: Similarity indicates how similar the sentiment word a and the candidate c are. All candidates can replace a, but some candidates do not match the context; for example, the sentiment word “delicious” in the text “the food is delicious” is more likely to be replaced by “awesome” than by “love”. Therefore, we should find a word similar to a for the replacement, according to

r_s(a, c) = wordsim(a, c)    (5)

where wordsim(a, c) is the cosine similarity between the word-embedding vectors of a and c.

The scorer function S(·) combines the measures above:

S(a, c) = \beta_d \, r_d(v_c, v_{tgt}) + \beta_s \, r_s(a, c)    (6)

where β_d and β_s are weight parameters. The pseudo-parallel sentence construction procedure is shown in Algorithm 1.

Algorithm 1: Pseudo-parallel sentence production based on dependency parsing.
Input: a sentence x with sentiment label v_src, the target sentiment v_tgt, and a context-sentiment word table T = {o_1: (a_11, a_12, ...), o_2: (a_21, a_22, ...), ...}.
1: Extract the sentiment words A = {a_1, a_2, ...} in x based on Eq. (1)
2: Analyze the dependencies R = {(r_1, w_1, w_11), (r_2, w_2, w_21), ...} between the words in x
3: for each a in A do
4:   Find the non-sentiment word o that has one of the selected dependencies with a in R
5:   Retrieve o in table T and get all candidate words C_o = {c_1, c_2, ...} for a
6:   Update table T with o and a
7:   Compute the sentiment value of each word in C_o based on Eq. (3)
8:   Compute the sentiment difference between each word in C_o and v_tgt based on Eq. (4)
9:   Compute the similarity between each word in C_o and a based on Eq. (5)
10:  Use the scorer function to find the best word c in C_o based on Eq. (6)
11:  Replace a with c to obtain the pseudo-parallel sentence x' of x whose sentiment is v_tgt
12: end for
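A minimal sketch of the candidate scorer in Eqs. (3)-(6) follows, following the paper's convention that the best candidate minimizes S; sent_value (a word-to-average-sentiment mapping per Eq. (3)) and vec (a word-to-embedding mapping) are assumed, illustrative inputs, while the beta defaults are the weights reported in Section 4.2.

import numpy as np

def score(a, c, v_tgt, sent_value, vec, beta_d=1.0, beta_s=0.5):
    r_d = abs(sent_value[c] - v_tgt)                                  # Eq. (4)
    r_s = float(np.dot(vec[a], vec[c]) /
                (np.linalg.norm(vec[a]) * np.linalg.norm(vec[c])))    # Eq. (5)
    return beta_d * r_d + beta_s * r_s                                # Eq. (6)

def best_candidate(a, candidates, v_tgt, sent_value, vec):
    # Eq. (2): the replacement minimizing the weighted score.
    return min(candidates, key=lambda c: score(a, c, v_tgt, sent_value, vec))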
3.4 Training

Our model mainly employs the encoder-decoder framework; a natural-language text with a sentiment label is the input to the model. The encoder learns to encode the sentence into a hidden representation and the decoder learns to generate a sentence from that representation. However, the sentence generated by the decoder is a new text similar to the input; the decoder alone is not able to add sentiment to it. Therefore, we apply a sentiment control and several constraints to our model. The sentiment control is an embedding of the target sentiment value; it is concatenated with the hidden representation as the input to the decoder. The constraints are a set of losses that enhance the model's abilities of content preservation and sentiment transfer. We introduce a classifier to predict the sentiment values of generated sentences. We denote the encoder-decoder framework by G = (G_enc, G_dec) and the classifier by C. We consider the following four types of losses. The reconstruction loss and back-translation loss are employed to preserve the content of the sentences; the reference loss helps to revise the sentiment while also keeping the content unchanged.

Reconstruction loss: The reconstruction loss denotes the error of reconstructing the input x. Let z_x = G_enc(x) be the hidden representation of x and v_src the sentiment value of x. The decoder generates a sentence x ≈ P_G(· | z_x, v_src) conditioned on z_x and v_src. The reconstruction loss is computed as:

L_{rec} = -\log P_G(x \mid z_x, v_{src})    (7)

Back-translation loss: Let y = G_dec(x, v_tgt) be the sentence generated from x on the target sentiment v_tgt, and z_y = G_enc(y) the hidden representation of y. The decoder generates a sentence x ≈ P_G(· | z_y, v_src) conditioned on z_y and v_src. The back-translation loss is the error of translating y back into x:

L_{bt}(x, v) = -\log P_G(x \mid z_y, v_{src})    (8)

Classification loss: The classifier is used to predict the sentiment value of a text. To ensure that the sentiment value of the generated sentence y matches the target sentiment v_tgt, the classification loss is used as feedback to guide the model. The classifier predicts the sentiment value v_y = P_C(· | z_y) of y:

L_c(v_y, v_{tgt}) = -\log P_C(v_{tgt} \mid z_y)    (9)

Reference loss: The reference loss is the error between y and the pseudo-parallel sentence x_tgt of x on the target sentiment:

L_r(x, x_{tgt}) = -\log P(x' \mid y)    (10)

where x' denotes the pseudo-parallel sentence (i.e., x_tgt).

In training, the classifier is trained with sentences and the corresponding sentiment labels as input and the predicted sentiment as output. A sentence is first encoded into a latent representation through the Gated Recurrent Unit (GRU) based encoder and then passed to a traditional multi-class classifier. After multiple iterations over input batches, the classifier is trained to minimize the loss and is then added to the encoder-decoder framework. The encoder-decoder network is trained with the source text x and target sentiment v_tgt as input, the pseudo-parallel sentence x_tgt as reference, and the new sentence y as output, by minimizing:

L = \lambda_1 L_{rec} + \lambda_2 L_{bt} + \lambda_3 L_c + \lambda_4 L_r    (11)

where λ_1, λ_2, λ_3 and λ_4 are hyper-parameters.
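A minimal PyTorch sketch of combining the four losses in Eq. (11) follows; the per-term negative log-likelihoods of Eqs. (7)-(10) are assumed to be computed by the encoder-decoder and classifier elsewhere, and the lambda defaults are the values reported in Section 4.2.

import torch.nn.functional as F

def reconstruction_loss(logits, target_ids):
    # -log P_G(x | z_x, v_src) as in Eq. (7): token-level cross-entropy
    # between decoder logits and the input token ids.
    return F.cross_entropy(logits.view(-1, logits.size(-1)),
                           target_ids.view(-1))

def total_loss(l_rec, l_bt, l_c, l_r, lambdas=(0.7, 0.2, 0.3, 0.7)):
    # Eq. (11): weighted sum of the four constraint losses.
    l1, l2, l3, l4 = lambdas
    return l1 * l_rec + l2 * l_bt + l3 * l_c + l4 * l_r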
4 Experiments

We perform experiments on two tasks: fine-grained sentiment transfer and sentiment polarity transfer. In fine-grained sentiment transfer, the sentiment of the source text should be transferred to a target numeric value in {1, 2, 3, 4, 5}. Sentiment polarity transfer changes the source text into a new sentence with the opposite sentiment (positive or negative). We apply automatic and human evaluations to compare our approach with previous works on these two tasks.

4.1 Dataset

For all the experiments, the dataset we use is the Yelp reviews from Liao et al. [16]. We use a more recent version of Stanford CoreNLP than Liao et al. [16], which leads to a slightly different distribution; however, our sentiment intensities are more accurate. After processing, our dataset has about 600K sentences in total; among them, 50K form the test set, 10K the validation set, and the rest the training set. The data distribution is shown in Table 2.

Table 2: Numbers of sentences in each sentiment interval.

    sentiment interval   [1,2)    [2,3)     [3,4)     [4,5)
    sentence num         34576    233916    166566    169196

4.2 Model Setup

For all tasks, the encoder is a 2-layer bidirectional GRU with a 250-dimensional hidden state. The decoder is also a 2-layer bidirectional GRU with an attention mechanism; its hidden-state dimension is set to 500. The output of the encoder, also called the hidden representation, is concatenated with the target sentiment embedding as the input to the decoder. The dimension of the sentiment embeddings is 128. An encoder (GRU) with hidden size 200 and an MLP with hidden size 100 constitute the classifier. The weights (β_d, β_s) of the sentiment difference and similarity are set to 1 and 0.5, respectively. For the weights (λ_1, λ_2, λ_3, λ_4) of the four losses, we tune them on the validation data with different values; finally, they are set to 0.7, 0.2, 0.3 and 0.7, respectively.

4.3 Comparative Methods

We compare our model with two state-of-the-art models, one of which is specifically designed for the task of fine-grained sentiment transfer, while the other mainly targets binary sentiment polarity transfer.

Sequence Editing under Quantifiable Guidance (QuaSE) (Liao et al. [16]): QuaSE first proposed the task of quantifiable sentiment transfer. It uses two encoders to capture content and sentiment, and one decoder to generate text satisfying the requirement. To better disentangle the two factors, QuaSE uses pseudo-parallel sentences to enhance the model. At test time, QuaSE assumes the sentiment of an input follows a Gaussian distribution and chooses the best value in the distribution to pass to the decoder. We use QuaSE as the comparative method for the task of fine-grained sentiment transfer and follow the default parameters in its code.

Text Transfer by Cross-Alignment (TCA) (Shen et al. [24]): TCA maps an input sentence to a style-independent content representation and passes it to style-dependent decoders. It employs an aligned autoencoder instead of a typical variational autoencoder, obtaining two distributional constraints in a cross-aligned way, and uses two discriminators to modify sentences. We use TCA for the sentiment polarity transfer experiment, following its suggested parameters.

4.4 Evaluation Metrics

There are many evaluation metrics for the task of sentiment polarity transfer, which can also be used for the task of fine-grained sentiment transfer. Due to the lack of parallel corpora, we choose the appropriate metrics for our task, as follows.

BLEU: BLEU [21] was originally used to measure the similarity between machine-translated text and reference text. The value of BLEU ranges from 0 to 1; we scale it to 0 to 100, as usually done in previous works. With the appearance of text style transfer, BLEU is also used for this task. Since there is no reference text, we calculate the BLEU value between the source text and the generated text, which evaluates content preservation.

Edit Distance: In the fields of information theory, linguistics, and computer science, edit distance is used to measure the similarity of two sequences. In general, edit distance refers to the minimum number of single-character edits required to convert one word w_1 into another word w_2.

MAE: MAE, also known as Mean Absolute Error. In this task, we use MAE to measure the mean error between the target sentiment value and the sentiment of the generated sentence:

MAE = \frac{1}{|s|} \sum_{x_i \in s} |v_i - v_{tgt}|    (12)

where s is the set of generated sentences and v_i is the sentiment value of sentence x_i ∈ s predicted by Stanford CoreNLP (Manning et al. [19]).
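A minimal sketch of the automatic metrics follows, using NLTK's sentence-level BLEU for content preservation and a plain MAE over predicted intensities; predict_intensity (e.g., a wrapper around a Stanford CoreNLP sentiment call) is an assumed, illustrative function.

from nltk.translate.bleu_score import sentence_bleu

def self_bleu(source_tokens, generated_tokens):
    # BLEU between source and output, scaled to 0-100 as in Section 4.4.
    return 100.0 * sentence_bleu([source_tokens], generated_tokens)

def mae(generated_sentences, v_tgt, predict_intensity):
    # Eq. (12): mean absolute error between predicted and target intensity.
    return (sum(abs(predict_intensity(x) - v_tgt) for x in generated_sentences)
            / len(generated_sentences))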
4.5 Automatic Evaluation

In the fine-grained sentiment transfer experiment, our model is compared with QuaSE. Each input sentence is required to be converted into five sentences whose sentiment values respectively satisfy the target values 1, 2, 3, 4 and 5. The training data of QuaSE is specially processed, so QuaSE still uses its own training data, while its test data is the same as ours. We compute the MAE between the target sentiment and the sentiment intensity of the generated sentence, and evaluate the edit distance and BLEU between the generated sentence and the input sentence. The results are shown in Table 3; all the results are average values over the whole dataset. "Original" refers to using the original sentences to compute the evaluation metrics.

Table 3: Automatic evaluation results of the fine-grained sentiment transfer experiment. T refers to the target sentiment. MAE measures the mean error between the target sentiment and the sentiment of the output. BLEU and edit distance measure the content similarity between the output and the source sentence.

    MAE
    Model       T=1    T=2    T=3    T=4    T=5
    Original    2.13   1.15   0.81   1.00   1.87
    QuaSE       1.29   0.57   0.77   0.67   1.19
    Our Model   0.47   0.38   0.23   0.41   0.56

    Edit Distance
    Model       T=1    T=2    T=3    T=4    T=5
    Original    N/A    N/A    N/A    N/A    N/A
    QuaSE       11.88  8.78   8.36   8.13   11.58
    Our Model   6.88   6.58   6.30   5.49   7.24

    BLEU
    Model       T=1    T=2    T=3    T=4    T=5
    Original    N/A    N/A    N/A    N/A    N/A
    QuaSE       6.26   24.55  24.63  30.21  8.23
    Our Model   18.93  30.73  25.96  31.86  26.42

The MAE values of our model and QuaSE are smaller than "Original", which demonstrates that both have the ability to revise the sentiment of texts. Moreover, the MAE values of our model on all five sentiment intensity values are smaller than QuaSE's; the main reason is that we use the error between the pseudo-parallel sentences and the generated sentences, together with the classifier, to provide effective and richer feedback to the decoder. The feedback guides the model to better generate sentences that satisfy a target sentiment. In contrast, QuaSE employs a Gaussian distribution on the sentiment factor, which is not as precise. Besides, all the edit distances and BLEU values of our model are better than QuaSE's. QuaSE learns disentangled content and sentiment representations from an input sentence separately, but it is hard to completely separate them, which may cause a partial loss of content. In contrast, we do not learn a disentangled representation but apply constraints to keep the content unchanged.

In the sentiment polarity transfer experiments, QuaSE and TCA are used as the comparison models. We use sentiment accuracy and BLEU as the measurement metrics. As mentioned in Section 3, we define sentences with sentiment values larger than 3 as positive and smaller than 3 as negative. The results are shown in Table 4; we report sentiment accuracy for transferring negative to positive, and vice versa. The accuracy values in both directions and the BLEU value of our model are larger than TCA's. Moreover, our model has a smaller accuracy value on negative-to-positive but a larger accuracy value on positive-to-negative compared to QuaSE. Overall, our model achieves the best average accuracy value and BLEU value. This demonstrates that our model can better revise the sentiment of sentences while preserving more content.

Table 4: Automatic evaluation results of the sentiment polarity transfer experiment.

                Neg. to Pos.   Pos. to Neg.   Avg. accuracy   BLEU
    TCA         73.80%         69.12%         71.46%          13.55
    QuaSE       89.81%         76.93%         83.36%          9.18
    Our model   85.37%         83.36%         84.36%          29.21
4.6 Human Evaluation

We hire three workers to manually evaluate the quality of 200 generated sentences randomly picked from each of our model and the two competing models. We use “content preservation” to measure the content integrity of the sentences and “fluency” to measure their grammatical fluency. Content preservation is scored from 0 to 3 (0: not preserved, 1: little preserved, 2: partially preserved, 3: fully preserved); fluency is scored from 1 to 4 (1: poor grammar, 4: fluent grammar).

The results are shown in Table 5. On content preservation, our model achieves the highest score. The main reason is that our model does not learn a disentangled latent representation as QuaSE does, since a disentangled representation always loses some content information. On fluency, our model also scores better than QuaSE and TCA. This may come from the fact that our pseudo-parallel sentences keep most of the grammatical structure of the original sentence, which also helps generate fluent sentences.

Table 5. Human evaluation results for the three models on content preservation and fluency.

| Models | Content Preservation (Range: [0,3]) | Fluency (Range: [1,4]) |
| TCA | 1.41 | 2.58 |
| QuaSE | 1.37 | 2.14 |
| Our model | 1.86 | 2.88 |

4.7 Case Study

To directly present the effect of our model on fine-grained sentiment transfer, some examples generated by our model are displayed in Table 6. Each sentence is revised into five sentences with sentiment values 1 to 5. The sentence generated at T=2 in the first example is identical to the input sentence because the original sentiment label is 2; similarly, the second example at T=3 and the third example at T=4 are identical to their inputs. In the first example, at T=1, “sloppy” and “over-priced” are changed to the more negative word “worst”, and the sentence generated at T=3 expresses neutral sentiment. At T=4 and 5, the original sentence is revised into positive sentences opposite to the input, and “wonderful” and “actually excellent” at T=5 express strong positive sentiment. In the second example, the input is a neutral sentence describing “seafood”. At T=1 and 2, it is revised to express negative sentiment, and the sentences generated at T=4 and 5 express positive sentiment. Although the sentences generated at T=1, 2, and 5 do not describe “seafood”, they describe “cake”, “beef”, and “steak”, which are similar to “seafood”. These results indicate that our model is able to preserve most of the content and to revise the words with strong sentiment polarity in a sentence. Some examples, such as the third example at T=2 and T=5, still show unacceptable sentences and word-level duplicates, a problem that remains to be reduced.

Table 6. Example sentences generated by our model for each target sentiment.

E.g. 1 (input sentiment is 2): the burger was sloppy and the food was over-priced.
  T=1: the burger was worst and the sauce food was worst!
  T=2: the burger was sloppy and the food was over-priced.
  T=3: the burger was extra and let packaged extra dogs they receive dogs.
  T=4: the burger was phenomenal and prompt service over it.
  T=5: the burger was wonderful and the food was actually excellent.

E.g. 2 (input sentiment is 3): it was appropriately spicy, flavorful, and the seafood was not overcooked.
  T=1: it was flavorless spicy, flavorful, and the cake was worst breakfast.
  T=2: it was pretty bland spicy, especially, the beef was better!
  T=3: it was appropriately spicy, flavorful, and the seafood was not overcooked.
  T=4: it was great spicy, flavorful, and the great seafood, and great tasting.
  T=5: it was wonderful spicy, wonderful, and the wonderful steak.

E.g. 3 (input sentiment is 4): moist bread, fresh ingredients, great flavor.
  T=1: waste mix, waste ingredients, waste flavor.
  T=2: bland, the bland ingredients, lousy flavor!
  T=3: had bread, had plenty of flavor.
  T=4: moist bread, fresh ingredients, great flavor.
  T=5: wonderful & fresh ingredients, ingredients, great flavor.
4.8 Ablation Study

We introduce a classifier and three further constraints to guide the encoder-decoder in modifying sentences. To show the effect of the three losses, we perform an ablation study under the MAE and BLEU metrics: we remove each of the three losses separately while keeping the others unchanged. The results are shown in Table 7 and Table 8. The header of each table gives the sentiment intensity value; in these experiments we only consider the values 1, 3, and 5. The first row of each table shows the MAE/BLEU values of QuaSE from Table 3, used for comparison. “None” denotes the model with none of the three losses. The following three rows show the values with the reference loss, the reconstruction loss, and the back-translation loss omitted in turn (each row is labeled with the two losses that are kept), and the last row (“ALL”) shows the values with all losses.

Table 7. Ablation study on the BLEU metric.

| | T=1 | T=3 | T=5 |
| QuaSE | 6.26 | 24.63 | 8.23 |
| None | 8.11 | 10.03 | 12.32 |
| L_rec, L_bt | 12.04 | 20.17 | 21.80 |
| L_r, L_bt | 15.68 | 23.71 | 25.58 |
| L_r, L_rec | 19.21 | 24.28 | 21.10 |
| ALL | 18.93 | 25.96 | 26.42 |

Table 8. Ablation study on the MAE metric.

| | T=1 | T=3 | T=5 |
| QuaSE | 1.29 | 0.77 | 1.19 |
| None | 1.38 | 0.53 | 1.09 |
| L_rec, L_bt | 0.77 | 0.44 | 0.99 |
| L_r, L_bt | 1.26 | 0.41 | 0.92 |
| L_r, L_rec | 1.07 | 0.32 | 1.04 |
| ALL | 0.47 | 0.23 | 0.56 |

According to Table 8, the MAE values of “None” are smaller than those of “Original” in Table 3, demonstrating that the decoder, assisted by the classifier, is able to revise the sentiment intensity of sentences. The MAE values with each loss removed are smaller than those of QuaSE, which means every loss we add to the model contributes to revising sentiment. The average improvements of removing each loss compared to “None” are 26.67%, 13.66%, and 19%, and the average decreases compared to “ALL” are 31.33%, 44.33%, and 39%; these numbers show that each loss makes a certain contribution to sentiment modification. In Table 7, the average decreases of removing each loss compared to “ALL” are 57.67%, 21.13%, and 22.40%, which shows that each loss is helpful for content preservation, especially the reference loss. Moreover, the MAE and BLEU values of “ALL” are the best across all sentiment values, showing that the combination of the three losses effectively enhances the model's ability to modify sentiment. A sketch of the combined objective with per-loss toggles, as used in this ablation, is given below.
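The following is a minimal sketch of the combined objective of Eq. (11) with per-loss toggles mirroring the ablation rows above. The four loss tensors are placeholders for Eqs. (7)-(10), and the function name is ours.

```python
# Sketch of the weighted objective (Eq. 11) with ablation switches.
import torch

def total_loss(l_rec, l_bt, l_c, l_r,
               lambdas=(0.7, 0.2, 0.3, 0.7),
               use=(True, True, True, True)):
    """Weighted sum of the four losses; `use` switches terms off."""
    loss = torch.zeros(())
    for w, term, keep in zip(lambdas, (l_rec, l_bt, l_c, l_r), use):
        if keep:
            loss = loss + w * term
    return loss

# The "L_rec, L_bt" ablation row drops the reference loss:
#   total_loss(l_rec, l_bt, l_c, l_r, use=(True, True, True, False))
```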
5 Conclusions

In this work, we focus on the task of fine-grained sentiment transfer, which requires editing a sentence toward a given numeric sentiment value while keeping its content unchanged. We propose a novel method based on dependency parsing that does not learn a disentangled representation, as is usually done in previous work. We produce pseudo-parallel sentences through dependency parsing and employ a set of losses to give richer signals that enhance the model. Automatic and human evaluations on the Yelp reviews demonstrate that our model substantially outperforms the compared models. In the future, we intend to extend our work to attributes beyond sentiment and to long-text transfer.

ACKNOWLEDGEMENTS

This work is supported by the National Key Research and Development Program of China under grants 2016YFB0800402 and 2016QY01W0202, the National Natural Science Foundation of China under grants U1836204, U1936108, 61572221, 61433006, U1401258, 61572222 and 61502185, and the Major Projects of the National Social Science Foundation under grant 16ZDA092.

REFERENCES

[1] Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani, ‘SentiWordNet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining’, in LREC, (2010).
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, ‘Neural machine translation by jointly learning to align and translate’, CoRR, abs/1409.0473, (2014).
[3] Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang, ‘Style transformer: Unpaired text style transfer without disentangled latent representation’, in ACL, (2019).
[4] Cícero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi, ‘Fighting offensive language on social media with unsupervised text style transfer’, in ACL, (2018).
[5] Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan, ‘Style transfer in text: Exploration and evaluation’, ArXiv, abs/1711.06861, (2017).
[6] Hongyu Gong, Suma Bhat, Lingfei Wu, Jinjun Xiong, and Wen-Mei Hwu, ‘Reinforcement learning based text style transfer without parallel training corpus’, in NAACL-HLT, (2019).
[7] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio, ‘Generative adversarial nets’, in NIPS, (2014).
[8] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing, ‘Toward controlled generation of text’, in ICML, (2017).
[9] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, ‘Image-to-image translation with conditional adversarial networks’, in CVPR, pp. 5967-5976, (2017).
[10] Parag Jain, Abhijit Mishra, Amar Prakash Azad, and Karthik Sankaranarayanan, ‘Unsupervised controllable text formalization’, in AAAI, (2018).
[11] Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova, ‘Disentangled representation learning for non-parallel text style transfer’, in ACL, (2018).
[12] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim, ‘Learning to discover cross-domain relations with generative adversarial networks’, in ICML, (2017).
[13] Diederik P. Kingma and Max Welling, ‘Auto-encoding variational Bayes’, CoRR, abs/1312.6114, (2013).
[14] Juncen Li, Robin Jia, He He, and Percy Liang, ‘Delete, retrieve, generate: a simple approach to sentiment and style transfer’, in NAACL-HLT, (2018).
[15] Yanghao Li, Naiyan Wang, Jiaying Liu, and Xiaodi Hou, ‘Demystifying neural style transfer’, in IJCAI, (2017).
[16] Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, and Tong Zhang, ‘QuaSE: Sequence editing under quantifiable guidance’, in EMNLP, (2018).
[17] Fuli Luo, Peng Li, Pengcheng Yang, Jie Zhou, Yutong Tan, Baobao Chang, Zhifang Sui, and Xu Sun, ‘Towards fine-grained text sentiment transfer’, in ACL, (2019).
[18] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow, ‘Adversarial autoencoders’, ArXiv, abs/1511.05644, (2015).
[19] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky, ‘The Stanford CoreNLP natural language processing toolkit’, in ACL, (2014).
[20] Igor Melnyk, Cícero Nogueira dos Santos, Kahini Wadhawan, Inkit Padhi, and Abhishek Kumar, ‘Improved neural text attribute transfer with non-parallel data’, ArXiv, abs/1711.09395, (2017).
[21] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu, ‘BLEU: a method for automatic evaluation of machine translation’, in ACL, (2001).
[22] Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W. Black, ‘Style transfer through back-translation’, in ACL, (2018).
[23] Xiaoye Qu, Zhikang Zou, Yu Cheng, Yang Yang, and Pan Zhou, ‘Adversarial category alignment network for cross-domain sentiment classification’, in NAACL-HLT, pp. 2496-2508, (2019).
[24] Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola, ‘Style transfer from non-parallel text by cross-alignment’, in NIPS, (2017).
[25] Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran, ‘Transforming delete, retrieve, generate approach for controlled text style transfer’, ArXiv, abs/1908.09368, (2019).
[26] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le, ‘Sequence to sequence learning with neural networks’, in NIPS, (2014).
[27] Yaniv Taigman, Adam Polyak, and Lior Wolf, ‘Unsupervised cross-domain image generation’, ArXiv, abs/1611.02200, (2016).
[28] Chen Wu, Xuancheng Ren, Fuli Luo, and Xu Sun, ‘A hierarchical reinforced sequence operation method for unsupervised text style transfer’, in ACL, (2019).
[29] Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu, ‘Mask and infill: Applying masked language model for sentiment transfer’, ArXiv, abs/1908.08039, (2019).
[30] Jingjing Xu, Xu Sun, Qi Zeng, Xuancheng Ren, Xiaodong Zhang, Houfeng Wang, and Wenjie Li, ‘Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach’, in ACL, (2018).
[31] Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick, ‘Unsupervised text style transfer using language models as discriminators’, in NeurIPS, (2018).
[32] Ye Zhang, Nan Ding, and Radu Soricut, ‘SHAPED: Shared-private encoder-decoder for text style adaptation’, in NAACL-HLT, (2018).
[33] Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun, ‘Learning sentiment memories for sentiment modification without parallel data’, in EMNLP, (2018).
[34] Junbo Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun, ‘Adversarially regularized autoencoders’, in ICML, (2017).
[35] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros, ‘Unpaired image-to-image translation using cycle-consistent adversarial networks’, in ICCV, pp. 2242-2251, (2017).
[36] Zhikang Zou, Xinxing Su, Xiaoye Qu, and Pan Zhou, ‘DA-Net: Learning the fine-grained density distribution with deformation aggregation network’, IEEE Access, 6, 60745-60756, (2018).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "_iUVlJm58a3",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200349",
"forum_link": "https://openreview.net/forum?id=_iUVlJm58a3",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Fine-Grained Text Sentiment Transfer via Dependency Parsing",
"authors": [
"Lulu Xiao",
"Xiaoye Qu",
"Ruixuan Li",
"Jun Wang",
"Pan Zhou",
"Yuhua Li"
],
"abstract": "Fine-grained sentiment transfer demands to edit an input sentence on a given sentiment intensity while preserving its content, which largely extends traditional binary sentiment transfer. Previous works on sentiment transfer usually attempt to learn latent content representation disentangled from sentiment. However, it is difficult to completely separate these two factors and it is also not necessary. In this paper, we propose a novel model that learns the latent representation without disentanglement and leverages sentiment intensity as input to decoder for fine-grained sentiment control. Moreover, aligned sentences with the same content but different sentiment intensities are usually unavailable. Due to the lack of parallel data, we construct pseudo-parallel sentences (i.e, sentences with similar content but different intensities) to relieve the burden of our model. In specific, motivated by the fact that the sentiment word (e.g., “delicious”) has a close relationship with the non-sentiment context word (e.g., “food”), we use dependency parsing to capture the dependency relationship. The pseudo-parallel sentences are produced by replacing the sentiment word with a new one according to the specific context word. Besides, the difference between pseudo-parallel sentences and generated sentences and other constraints are utilized to guide the model precisely revising sentiment. Experiments on the Yelp dataset show that our method substantially improves the degree of content preservation and sentiment accuracy and achieves state-of-the-art performance.",
"keywords": [],
"raw_extracted_content": "Fine-Grained Text Sentiment Transfer via Dependency\nParsing\nLulu Xiao1and Xiaoye Qu1and Ruixuan Li1*and Jun Wang2and Pan Zhou1and Yuhua Li1\nAbstract. Fine-grained sentiment transfer demands to edit an input\nsentence on a given sentiment intensity while preserving its content,\nwhich largely extends traditional binary sentiment transfer. Previousworks on sentiment transfer usually attempt to learn latent content\nrepresentation disentangled from sentiment. However, it is difficult\nto completely separate these two factors and it is also not necessary.\nIn this paper, we propose a novel model that learns the latent rep-\nresentation without disentanglement and leverages sentiment inten-\nsity as input to decoder for fine-grained sentiment control. Moreover,\naligned sentences with the same content but different sentiment in-\ntensities are usually unavailable. Due to the lack of parallel data, we\nconstruct pseudo-parallel sentences (i.e, sentences with similar con-\ntent but different intensities) to relieve the burden of our model. In\nspecific, motivated by the fact that the sentiment word (e.g., “deli-cious”) has a close relationship with the non-sentiment context word(e.g., “food”), we use dependency parsing to capture the dependency\nrelationship. The pseudo-parallel sentences are produced by replac-\ning the sentiment word with a new one according to the specific\ncontext word. Besides, the difference between pseudo-parallel sen-tences and generated sentences and other constraints are utilized to\nguide the model precisely revising sentiment. Experiments on the\nYelp dataset show that our method substantially improves the degree\nof content preservation and sentiment accuracy and achieves state-\nof-the-art performance.\n1 Introduction\nText sentiment transfer is a common but difficult style transfer task inNatural Language Processing (NLP). The goal of sentiment transferis to change the sentiment of a sentence to the opposite while preserv-\ning its semantic meaning. Sentiment transfer has obtained board ap-\nplications in NLP , such as letter and review rewriting [20, 22], which\nattracts the attention of large numbers of researchers.\nPrevious works [33, 14] of sentiment transfer mainly focus on bi-\nnary sentiment (positive and negative) transfer. In this paper, we setour task on more general scenarios that revise sentences on a given\nsentiment intensity value ranging from 1 to 5 for fine-grained trans-\nfer, here the intensity 1 to 5 corresponds to strong negative, weak\nnegative, neutral, weak positive, and strong positive. For example,\ngiven the input sentence “the food was totally fine” with the senti-\nment intensity “4”, an output “the food was enough” may be desired\nto generate on the target sentiment “3” and “the food was forget-\n1Huazhong University of Science and Technology, China. Email: {xiao lulu,\nxiaoye, rxli, panzhou, idcliyuhua}@hust.edu.cn\n2Fujitsu Laboratories of America, USA, Email: [email protected]\n*Corresponding authortable” on the target sentiment “2”. Besides, an output sentence “the\nfood was totally wonderful” on the target sentiment “5” expresses\nstronger positive intensity and “the food was totally terrible”o nt h e\ntarget sentiment “1” has stronger negative intensity. 
The task of fine-grained text sentiment transfer aims at modifying an input sentence to satisfy a target sentiment intensity while keeping the original content. However, this task has some limitations, and previous methods have several problems. First, there are no natural parallel data, so we cannot train the transfer model in a supervised way. Second, previous works such as [16] attempt to disentangle a sentence into a content part and a sentiment part, but it is difficult to separate them completely because the two parts are mixed in a complicated way; this often makes the semantic meaning of the original sentence and its corresponding generated sentence quite different.

In this paper, we propose an approach for editing sentences that contains two parts: a transfer module and a pseudo-parallel module. In the transfer module, a Gated Recurrent Unit (GRU) based encoder-decoder architecture [2] is employed to revise sentences. The encoder encodes each input sentence into a latent representation without disentanglement, while the decoder generates sentences under the control of sentiment intensity values. We also use a classifier to predict the sentiment value of the generated sentence; the error between the predicted value and the target value provides a signal to train the decoder. Due to the lack of parallel data, pseudo-parallel sentences are introduced in the pseudo-parallel module to guide the transfer module to generate sentences with a given sentiment intensity value. Specifically, the pseudo-parallel module consists of two parts: dependency parsing and pseudo-parallel production. Pseudo-parallel sentences are pairs of sentences with similar content but different sentiment values. The key issue in producing them is to accurately find the sentiment information of a source sentence and change it to satisfy the target sentiment. We observe that the sentiment word “delicious” is suitable for describing “food” rather than “staff”, and that different sentiment words carry different sentiment intensities (e.g., “delicious” has a stronger positive sentiment than “ok”, and “terrible” has a stronger negative sentiment than “so-so”). In the dependency parsing part, we first extract the sentiment words of a sentence and then leverage dependency parsing to find the non-sentiment context word that has a specific dependency with each sentiment word. Subsequently, during pseudo-parallel production, all the sentiment words describing the same context word are evaluated by a scorer function, and the most appropriate sentiment word is selected to replace the original one; in this way we obtain the pseudo-parallel sentence for a target sentiment. Finally, the reference loss between the generated sentence and the pseudo-parallel sentence, combined with other constraints such as the reconstruction loss, is utilized to enhance the ability of our model to modify sentences.

We compare our method with state-of-the-art approaches on the dataset of Yelp reviews. Both automatic and human evaluation results show the efficacy of our model.
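As a companion to the overview above, the following is a minimal sketch of the transfer module's wiring: a GRU encoder, the target-sentiment embedding concatenated to the hidden representation, and a GRU decoder over that control vector. It is an illustration under our own simplifications (unidirectional decoder, no attention, names of our choosing), not the authors' implementation.

```python
# Sketch of the transfer module's core wiring described above.
import torch
import torch.nn as nn

class TransferModule(nn.Module):
    def __init__(self, vocab_size, n_sentiments=5,
                 emb_dim=128, enc_hidden=250, dec_hidden=500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, enc_hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.senti_embed = nn.Embedding(n_sentiments, 128)
        # decoder input: token embedding + (hidden repr ++ sentiment embedding)
        self.decoder = nn.GRU(emb_dim + 2 * enc_hidden + 128, dec_hidden,
                              num_layers=2, batch_first=True)
        self.out = nn.Linear(dec_hidden, vocab_size)

    def forward(self, src, tgt_in, v_tgt):
        # encode: final layer's forward/backward states form z_x
        _, h = self.encoder(self.embed(src))        # h: (4, B, enc_hidden)
        z = torch.cat([h[-2], h[-1]], dim=-1)       # (B, 2*enc_hidden)
        # sentiment values 1..5 are mapped to embedding indices 0..4
        ctrl = torch.cat([z, self.senti_embed(v_tgt - 1)], dim=-1)
        ctrl = ctrl.unsqueeze(1).expand(-1, tgt_in.size(1), -1)
        dec_in = torch.cat([self.embed(tgt_in), ctrl], dim=-1)
        out, _ = self.decoder(dec_in)
        return self.out(out)                        # per-step token logits
```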
The contributions of this paper are summarized in the following three points:

1. We propose a novel framework that combines a classifier and sentiment controls to modify a sentence, in which the sentiment is not disentangled from the sentence.
2. To the best of our knowledge, this is the first work that introduces dependency parsing to the sentiment transfer task. Dependency parsing is used to find the context words related to sentiment words and to produce pseudo-parallel sentences, which provide a signal to the model when revising sentences.
3. Automatic and human evaluation results show that our model outperforms state-of-the-art methods on both content preservation and sentiment accuracy.

2 Related Work

Recently, deep learning has obtained significant results in various computer vision and natural language processing tasks [36, 23]. Style transfer in computer vision has also achieved exciting performance [9, 27, 35, 15, 12], which inspired researchers to propose the task of style transfer on natural language text. After a surge of research on this task, text style transfer has obtained significant results [20, 22, 4, 10, 6, 3, 28, 29, 25]. Current methods of text style transfer mainly focus on revising polarity attributes (e.g., sentiment, writing style, gender) of text to the opposite while preserving attribute-independent content.

Due to the lack of parallel sentences at training time, existing methods work in an unsupervised way. Some methods follow the adversarial idea of Generative Adversarial Networks (GANs) [7], which optimizes a decoder/generator and a discriminator/classifier in a cycle. Yang et al. [31] use a language model as the discriminator to provide richer and more stable feedback to guide Variational Autoencoders (VAEs) [13] in generating sentences. Fu et al. [5] propose two text style transfer models that employ adversarial training: the encoders of both models extract the content of a sentence under the direction of the classifier, but the first model utilizes a seq2seq model [26] with two style-specific decoders, while the second has a single decoder with a style embedding. Zhao et al. [34] employ an extension of the adversarial autoencoder (AAE) [18] to generate sentences and apply it to style transfer. Hu et al. [8] combine VAEs and attribute discriminators to efficiently generate semantic representations with the wake-sleep algorithm. John et al. [11] disentangle style and content latent representations under a multi-task loss and an adversarial loss.

Another line of methods does not implement the adversarial idea. Li et al. [14] obtain the content of a sentence by deleting its sentiment words, retrieve similar contexts from the target-style corpus to extract sentiment information, and then combine them in a neural network. Zhang et al. [32] leverage a shared encoder-decoder model to learn the public attributes (semantics) of all instances and a private encoder-decoder model to learn the specific characteristics of the corresponding attribute corpus. Xu et al. [30] propose a cycled reinforcement learning model that includes a neutralization module and an emotionalization module: the neutralization module learns disentangled representations, and the emotionalization module adds sentiment to the neutralized semantic content.

In contrast, we consider a more general scenario that edits sentences toward different sentiment intensity values for fine-grained transfer. There are few works on fine-grained sentiment transfer.
Liao et al. [16] propose to learn a disentangled content factor and a sentiment factor with two separate VAE-based encoders and then modify the content under the target sentiment; for better disentanglement, they model the content similarity and the sentiment differences of pseudo-parallel sentences. Luo et al. [17] propose a Seq2SentiSeq model that incorporates the sentiment intensity value and train it with a cycled reinforcement learning method. Different from them, we employ an autoencoder with the sentiment intensity value as a control and use pseudo-parallel sentences produced by dependency parsing as references to revise sentences.

3 Method

We assume the set of all inputs to our model is D_v = {(x_1, v_1), ..., (x_n, v_n)}, where x_i is a sentence and v_i ∈ V is the sentiment intensity of x_i. The values of V are fine-grained sentiments ranging from 1 to 5. We define sentences with sentiment values larger than 3 as positive, equal to 3 as neutral, and the rest as negative. The goal of this task is to generate a new sentence y for an input x whose sentiment value is v_src, (x, v_src) ∈ D_v. The generated sentence y should keep content similar to x, and its sentiment value should equal the target sentiment v_tgt ∈ V. An overview of our system is depicted in Figure 1. The top part is the dependency parsing module, which employs dependency parsing to find context words that have specific dependencies with sentiment words in sentences. The bottom part is the transfer module, whose main framework is a traditional encoder-decoder network trained with pairs (x, v_src) as input to generate a sentence that minimizes a set of constraints.

Figure 1. Framework of our proposed method. Our approach contains two parts: the transfer module and the pseudo-parallel module, which consists of dependency parsing and pseudo-parallel production. Dependency parsing contains two steps: (1) extract the sentiment words of an input sentence; (2) analyze dependencies between words in the input to find the context words for specific sentiment words. In pseudo-parallel production, the scorer is used to find the best sentiment words for the target sentiment to replace the originals, yielding the pseudo-parallel sentence x′. The bottom part is the transfer module based on an encoder-decoder network; it modifies an input sentence x into a new one y under the target sentiment v_tgt.

3.1 Extraction

To analyze the dependencies between sentiment words and context words, we first need to extract the sentiment words that carry strong sentiment polarity; for extraction we only consider polarity, not intensity. Assume all input sentences with polarity labels form D_r = {(x_1, r_1), ..., (x_n, r_n)}, r_i ∈ {positive, negative}. An input sentence x is composed of n words u = {u_1, ..., u_i, ..., u_n}, and the sentiment polarity of x is r. We adopt the method of Li et al. [14] to extract sentiment words in x, computing the relative frequency of u_i as

f(u_i, r) = (count(u_i, D_r) + λ) / (Σ_{r' ∈ {positive, negative}, r' ≠ r} count(u_i, D_{r'}) + λ)    (1)

where count(u_i, D_r) is the number of times u_i appears in D_r and λ is a smoothing parameter. If the relative frequency f(u_i, r) of u_i is larger than a threshold γ, then u_i is considered a sentiment word of x. We define α(x, v_src) as the set of all sentiment words in x. A minimal sketch of this extraction step is given below.
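The following sketch illustrates Eq. (1), assuming token counts are gathered per polarity corpus; the variable names and the default values of λ and γ are illustrative choices of ours, not the paper's settings.

```python
# Sketch of sentiment-word extraction via relative frequency (Eq. 1).
from collections import Counter

def build_counts(corpus):
    """corpus: list of (tokens, polarity), polarity in {'positive', 'negative'}."""
    counts = {"positive": Counter(), "negative": Counter()}
    for tokens, polarity in corpus:
        counts[polarity].update(tokens)
    return counts

def salience(word, r, counts, lam=1.0):
    """Eq. (1): relative frequency of `word` under polarity r."""
    other = "negative" if r == "positive" else "positive"
    return (counts[r][word] + lam) / (counts[other][word] + lam)

def extract_sentiment_words(tokens, r, counts, gamma=15.0, lam=1.0):
    """alpha(x, v_src): words of x whose salience under polarity r exceeds gamma."""
    return [w for w in tokens if salience(w, r, counts, lam) > gamma]
```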
3.2 Dependency Parsing

After extracting the sentiment words, we perform dependency parsing on the sentence x to find the context words corresponding to the sentiment words. Dependency syntax expresses the structure of the entire sentence through the dependencies between its words. These dependencies constitute a dependency syntax tree whose root node is the core predicate of the sentence. Following the dependencies in the syntax tree, we can find two words in the sentence with a specific grammatical relation, which are usually not adjacent. As shown in the top part of Figure 1, each arrow denotes a dependency: the arrow points to the governed word, and the starting point of the arrow is the governing word. To decide which word has a specific dependency with a sentiment word, we only consider a few fixed dependency types such as nsubj (nominal subject), dobj (direct object), and amod (adjectival modifier). For example, in the sentence “the food tastes delicious”, the word “food” is the word we want to find, since it holds such a dependency with the sentiment word “delicious”, unlike “tastes” or the other words: “food” is the nominal subject related to “delicious”.

Assume a is a sentiment word in x, a ∈ α(x, v_src); o is declared the context word of a if o has one of the dependencies described above with a. In this part we do not need to consider sentiment intensity. For each context word o, we extract the set α(o, r) of all sentiment words whose context word is o in the corpus of polarity r, r ∈ {positive, negative}. For example, the positive sentences “the food tastes delicious” and “the food tastes wonderful” describe the same context word “food”, so the sentiment words “delicious” and “wonderful” are stored together with “food”. These dependencies between sentiment words and their context words are used to produce pseudo-parallel sentences, as described in the following sections; a minimal parsing sketch follows Table 1.

Table 1. Five pairs of pseudo-parallel sentences. The first line is the input sentence, and the other lines are pseudo-parallel sentences for the five sentiment intensity values. The fourth is the same as the input because the sentiment value of the input sentence is 4.

| Input | the best part, exceptional service and prices can not be beat! |
| (x′, 1) | the worst part, dreadful service and prices can not be beat! |
| (x′, 2) | the frustrating part, lousy service and prices can not be beat! |
| (x′, 3) | the hot part, fine service and prices can not be beat! |
| (x′, 4) | the best part, exceptional service and prices can not be beat! |
| (x′, 5) | the gorgeous part, wonderful service and prices can not be beat! |
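As an illustration of the dependency-lookup step, the following sketch uses spaCy; the paper does not name its parser, so the library and the relation-matching logic are our assumptions.

```python
# Sketch of finding context words for sentiment words (Sec. 3.2).
import spacy

RELATIONS = {"nsubj", "dobj", "amod"}
nlp = spacy.load("en_core_web_sm")  # requires the small English model

def find_context_words(sentence, sentiment_words):
    """Map each sentiment word to the word linked to it by one of the
    fixed dependency relations."""
    doc = nlp(sentence)
    context = {}
    for tok in doc:
        if tok.dep_ in RELATIONS and tok.text in sentiment_words:
            # e.g. amod: tok = "delicious" modifies tok.head = "food"
            context[tok.text] = tok.head.text
        elif tok.dep_ in RELATIONS and tok.head.text in sentiment_words:
            # e.g. nsubj: tok = "food" is the subject of "delicious"
            context[tok.head.text] = tok.text
    return context

# In "the food is delicious", "food" is the nsubj of "delicious"; other
# constructions may parse differently and fall outside these relations.
```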
3.3 Replace

Pseudo-parallel sentences are pairs of sentences that have the same semantic content but different sentiment values, as shown in Table 1. We construct pseudo-parallel sentences by replacing each sentiment word of the source text with another, optimal sentiment word. As mentioned above, a sentiment word has a close dependency with its context word, so all the sentiment words of that context word are candidates for the replacement. Given an input (x, v_src), let a be a sentiment word of x, a ∈ α(x, v_src), and o the context word of a. There are k candidate words in α(o, r) that can replace a. The best candidate c_tgt, which minimizes the score, is used to replace a under the target sentiment v_tgt:

c_tgt = argmin_c { S(a, c) | c ∈ candidate_k(a) }    (2)

where S(·) is a weighted scorer function and candidate_k(a) is the set of all candidates for the sentiment word a. The function S(·) measures candidates from different aspects; in our setting it mainly considers two factors: (a) how well the candidate word satisfies the target sentiment value v_tgt, and (b) how similar the candidate word is to a. We measure them in the following two ways.

Sentiment difference: the sentiment difference measures the gap between the sentiment value of the candidate word c and the target value v_tgt. How to compute the sentiment value of c is a key problem, as it is unknown. Inspired by SentiWordNet [1], we use the average sentiment of the texts that contain the word c to represent the sentiment of c:

v_c = ( Σ_{x ∈ D, c ∈ x} v_src ) / |{x ∈ D : c ∈ x}|    (3)

where x is an input, x ∈ D, v_src is the sentiment of x, and |{x ∈ D : c ∈ x}| is the number of texts in D in which c appears. The sentiment difference is then computed as

r_d(v_c, v_tgt) = |v_c − v_tgt|    (4)

Similarity: the similarity indicates how similar the sentiment word a and the candidate c are. All candidates can replace a, but some candidates do not match the context; for example, the sentiment word “delicious” in the text “the food is delicious” is more naturally replaced by “awesome” than by “love”. Therefore, we should find a word similar to a for the replacement, according to

r_s(a, c) = wordsim(a, c)    (5)

where wordsim(a, c) is the cosine similarity between the word-embedding vectors of a and c.

The scorer function S(·) is composed of the measures above:

S(a, c) = β_d r_d(v_c, v_tgt) + β_s r_s(a, c)    (6)

where β_d and β_s are weight parameters. The pseudo-parallel sentence construction is summarized in Algorithm 1, and a small sketch of the scorer follows it.

Algorithm 1: Pseudo-parallel sentence production based on dependency parsing.

Input: a sentence x with sentiment label v_src; the target sentiment v_tgt; a context-sentiment word table T = {o_1: (a_11, a_12, ...), o_2: (a_21, a_22, ...), ...}.
1: Extract the sentiment words A = {a_1, a_2, ...} in x based on Eq. 1
2: Analyze the dependencies R = {(r_1, w_1, w_11), (r_2, w_2, w_21), ...} between the words in x
3: for each a in A do
4:   Find the non-sentiment word o that has a specific dependency with a in R
5:   Retrieve o in table T and get all candidate words C_o = {c_1, c_2, ...} of a
6:   Update table T with o and a
7:   Compute the sentiment value of each word in C_o based on Eq. 3
8:   Compute the sentiment difference between each word in C_o and v_tgt based on Eq. 4
9:   Compute the similarity between each word in C_o and a based on Eq. 5
10:  Use the scorer function to find the best word c in C_o based on Eq. 6
11:  Replace a with c to obtain the pseudo-parallel sentence x′ of x whose sentiment is v_tgt
12: end for
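The following is a minimal sketch of Eqs. (2)-(6), assuming precomputed word embeddings and per-word sentiment averages; all names here are illustrative, and the argmin over the weighted sum follows the paper's formulation.

```python
# Sketch of the candidate scorer (Eqs. 2-6). `embed` maps a word to a
# vector; `v_word` holds the corpus-average sentiment of each word (Eq. 3).
import numpy as np

def word_sentiment(word, corpus):
    """Eq. (3): average sentiment of the sentences containing `word`.
    corpus: list of (tokens, sentiment_value)."""
    vals = [v for tokens, v in corpus if word in tokens]
    return sum(vals) / len(vals) if vals else None

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def score(a, c, v_tgt, v_word, embed, beta_d=1.0, beta_s=0.5):
    """Eq. (6): weighted sum of sentiment difference and word similarity."""
    r_d = abs(v_word[c] - v_tgt)        # Eq. (4)
    r_s = cosine(embed[a], embed[c])    # Eq. (5)
    return beta_d * r_d + beta_s * r_s

def best_candidate(a, candidates, v_tgt, v_word, embed):
    """Eq. (2): the candidate minimizing the score replaces a."""
    return min(candidates, key=lambda c: score(a, c, v_tgt, v_word, embed))
```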
3.4 Training

Our model mainly employs the encoder-decoder framework: a natural-language text with its sentiment label is the input to the model. The encoder learns to encode the sentence into a hidden representation, and the decoder learns to generate a sentence from that representation. However, the sentence generated this way is merely a new text similar to the input; the decoder by itself is not able to add sentiment to it. Therefore, we apply a sentiment control and several constraints to our model. The sentiment control is an embedding of the target sentiment value, concatenated with the hidden representation as the input to the decoder. The constraints are a set of losses that enhance the model's abilities of content preservation and sentiment transfer. We also introduce a classifier to predict the sentiment values of generated sentences. We denote the encoder-decoder framework by G = (G_enc, G_dec) and the classifier by C, and consider the following four types of losses. The reconstruction loss and the back-translation loss are employed to preserve the content of the sentences; the reference loss also helps keep content unchanged while helping to revise the sentiment.

Reconstruction loss: the reconstruction loss denotes the error of reconstructing the input x. Let z_x = G_enc(x) be the hidden representation of x and v_src the sentiment value of x. The decoder generates x ≈ P_G(· | z_x, v_src) conditioned on z_x and v_src. The reconstruction loss is computed as

L_rec = −log P_G(x | z_x, v_src)    (7)

Back-translation loss: let y = G_dec(x, v_tgt) be the sentence generated from x for the target sentiment v_tgt, and z_y = G_enc(y) its hidden representation. The decoder generates x ≈ P_G(· | z_y, v_src) conditioned on z_y and v_src. The back-translation loss is the error of translating y back into x:

L_bt = −log P_G(x | z_y, v_src)    (8)

Classification loss: the classifier is used to predict the sentiment value of a text. To ensure that the sentiment value of the generated sentence y matches the target sentiment v_tgt, the classification loss is used as feedback to guide the model. The classifier predicts the sentiment value v_y = P_C(· | z_y) of y:

L_c = −log P_C(v_tgt | z_y)    (9)

Reference loss: the reference loss is the error between y and the pseudo-parallel sentence x′ of x for the target sentiment:

L_r = −log P(x′ | y)    (10)

In training, the classifier is first trained with sentences and their corresponding sentiment labels as input and the predicted sentiment as output: a sentence is encoded into a latent representation by the GRU-based encoder and then fed to a standard multi-class classifier. After multiple batches of inputs, the classifier is trained to minimize its loss and is then added to the encoder-decoder framework. The encoder-decoder network is trained with the source text x and the target sentiment v_tgt as input, the pseudo-parallel sentence x′ as reference, and the new sentence y as output, by minimizing

L = λ_1 L_rec + λ_2 L_bt + λ_3 L_c + λ_4 L_r    (11)

where λ_1, λ_2, λ_3, and λ_4 are hyper-parameters.

4 Experiments

We perform experiments on two tasks: fine-grained sentiment transfer and sentiment polarity transfer. In fine-grained sentiment transfer, the sentiment of the source text must be transferred to a target numeric value in {1, 2, 3, 4, 5}. Sentiment polarity transfer mainly changes the source text into a new sentence with the opposite sentiment (positive or negative). We apply automatic and human evaluations to compare our approach with previous works on these two tasks.

4.1 Dataset

For all experiments, we use the Yelp reviews dataset from Liao et al. [16]. We use a more recent version of Stanford CoreNLP than Liao et al. [16], which leads to a slightly different distribution, but our sentiment intensities are more accurate. After processing, our dataset has about 600K sentences in total, of which 50K form the test set, 10K the validation set, and the rest the training set. The data distribution is shown in Table 2.

Table 2. Number of sentences in each sentiment interval.

| Sentiment interval | [1,2) | [2,3) | [3,4) | [4,5) |
| Sentence num | 34576 | 233916 | 166566 | 169196 |

4.2 Model Setup

For all tasks, the encoder is a 2-layer bidirectional GRU with a hidden-state dimension of 250.",
The decoder is also 2 layers of bidirec-tional GRU with attention mechanism, its dimension of hidden stateis set to 500. The output of the encoder also called hidden representa-tion concatenated with the target sentiment embedding is as input tothe decoder. The dimensions of sentiment embeddings are 128. En-coder (GRU) with dimension hidden size 200 and MLP with dimen-sion hidden size 100 constitute the classifier. The weights (β\nd,βs)of\nsentiment difference and similarity are respectively set to 1 and 0.5.For the weights (λ\n1,λ2,λ3,λ4)of the four losses, we tune them on\nthe validation data with different values, and finally they are respec-tively set to 0.7, 0.2, 0.3, 0.7.\n4.3 Comparative Methods\nWe compare our model with two state-of-the-art models, one ofwhich is specifically designed for the task of fine-grained sentiment\ntransfer and the other is mainly for the binary sentiment polarity\ntransfer.\nSequence Editing under Quantifiable Guidance (QuaSE) (Liao\net al. [16]): QuaSE first proposes the task of quantifiable sentimenttransfer, it uses two encoders to capture content and sentiment, one\ndecoder to generate text satisfied the requirement. To better disentan-\ngle the two factors, QuaSE uses pseudo-parallel sentences to enhance\nthe model. In the test stage, QuaSE assumes the sentiment of an in-\nput follows a Gaussian distribution, then chooses the best one in thedistribution to pass to the decoder. We use QuaSE as the comparative\nmethod for the task of fine-grained sentiment transfer and follow the\ndefault parameters in their codes.\nText Transfer Text by Cross-Alignment (TCA) (Shen et al.\n[24]): TCA maps an input sentence to a style-independent contentrepresentation and pass it to style-dependent decoders. It employsaligned auto-encoder instead of typical variational autoencoder toobtain two distributed constraints by the cross-aligned way and twodiscriminators to modify sentences. We use TCA for the sentimentpolarity transfer experiment following its suggested parameters.\n4.4 Evaluation Metrics\nThere are many evaluation metrics for the task of sentiment polaritytransfer, which can also be used for the task of fine-grained sentimenttransfer. Due to the lack of parallel corpora, we choose the opportunemetrics for our task, as follows.\nBLEU: BLEU[21] was originally used to measure the similarity\nbetween machine translation text and reference text. The value ofBLEU ranges from 0 to 1, we expand it to 0 to 100 as usually donein previous works. With the appearance of text style transfer, BLEUis also used for this task. But there is no reference text, so we calcu-late the BLEU value between source text and generation text, which\nevaluates the content preservation.\nEdit Distance: In the fields of information theory, linguistics, and\ncomputer science, edit distance is used to measure the similarity of\ntwo sequences. In general, the edit distance refers to the minimumnumber of single-character editing required to convert one word w\n1\nto another word w2.\nMAE: MAE, also known as Mean Absolute Error. In this task, we\nuse MAE to measure the mean error between the target sentimentvalue and the sentiment of generation sentence.\nMAE =1\n|s|/summationdisplay\nxi∈s|vi−vtgt| (12)\nwhere s is the set of generated sentences, viis the sentiment value\nof sentence xi∈spredicted by Stanford CoreNLP (Manning et al.\n[19]).\n4.5 Automatic Evaluation\nIn the fine-grained sentiment transfer experiment, our model is com-pared with QuaSE. 
Each input sentence is required to be converted tofive sentences whose sentiment values respectively satisfy the targetvalues 1,2,3,4 and 5. The training data in QuaSE is specially pro-cessed, so QuaSE still uses its own training data, and its test data isthe same as ours. We perform the MAE evaluation between the targetsentiment and the sentiment intensity of the generation sentence, andevaluate the edit distance and BLEU between the generation sentenceand the input sentence. The results are shown in Table 3 and all the\nresults are the average values for the whole dataset. “Original” refers\nto use original sentences to compute the evaluation metrics.\nThe MAE values of our model and QuaSE are smaller than “Orig-\ninal”, it demonstrates that we both have the ability to revise senti-\nments of texts. Moreover, the MAE values on the five sentiment in-tensity values of our model are smaller than QuaSE, the main reasonis that we use the error between the pseudo-parallel sentences andthe generated sentences and the classifier to provide effect and richerfeedback to the decoder. The feedback guides the model to bettergenerate sentences that satisfy a target sentiment. In contrast, QuaSE\nemploys a Gaussian distribution on sentiment factor, which is not so\nprecise. Besides, all the edit distances and the BLEU values of ourmodel are better than QuaSE. QuaSE respectively learns content andsentiment representation disentangled from an input sentence, but itis hard to completely separate them and may cause partial loss ofcontent. However, we do not learn the disentangled representation\nbut apply some constraints to keep content unchanged.L.Xiao etal./Fine-Gr ained TextSentiment Transfer viaDependency Parsing 2232\nModelsMAE Edit Distance BLEU\nT=1 T=2 T=3 T=4 T=5 T=1 T=2 T=3 T=4 T=5 T=1 T=2 T=3 T=4 T=5\nOriginal 2.13 1.15 0.81 1.00 1.87 N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A\nQuaSE 1.29 0.57 0.77 0.67 1.19 11.88 8.78 8.36 8.13 11.58 6.26 24.55 24.63 30.21 8.23\nOur Model 0.47 0.38 0.23 0.41 0.56 6.88 6.58 6.30 5.49 7.24 18.93 30.73 25.96 31.86 26.42\nTable 3. Automatic evaluation results of fine-grained sentiment transfer experiment. T refers to the target sentiment. MAE measures mean error between the\ntarget sentiment and the sentiment of the output. BLEU and Edit distance measure content similarity between the output and the source sentence.\nIn the sentiment polarity transfer experiments, QuaSE and TCA\nare used as the comparison models. We use Sentiment accuracy and\nBLEU as the measurement metrics. As mentioned in section 3, we\ndefine the sentences with sentiment values larger than 3 are posi-tive, smaller than 3 are negative. The results are shown in Table 4,we perform a sentiment accuracy value on transferring negative topositive, and vice versa. The accuracy values in both directions and\nthe BLEU value of our model are larger than TCA. Moreover, our\nmodel has a smaller accuracy value on negative to positive but larger\naccuracy value on positive to negative compared to QuaSE. In gen-\neral, our model achieves the best average accuracy value and BLEUvalue. It demonstrates that our model can better revise sentiments of\nsentences while preserving more content.\nNeg. to Pos. Pos. to Neg. Avg. accuracy BLEU\nTCA 73.80% 69.12% 71.46% 13.55\nQuaSE 89.81% 76.93% 83.36% 9.18\nOur model 85.37% 83.36% 84.36% 29.21\nTable 4. 
Automatic evaluation results of sentiment polarity transfer\nexperiment.\n4.6 Human Evaluation\nIn this part, we hire three workers to manually evaluate the quality of\n200 generated sentences that are randomly picked from each of ourmodel and the two competitive models. We use the “content preser-vation” to measure the content integrity of sentences and “fluency”to measure grammatical fluency of sentences. The scores of “con-tent preservation” range from 0 to 3 (score 0 means not preserved,1 means little preserved, 2 means partially preserved, 3 means fully\npreserved), the scores of “fluency” range from 1 to 4 (score 1 means\npoor grammar, 4 means fluent grammar).\nThe result is shown in Table 5. For the “content preservation” met-\nric, our model achieves the highest score, the main reason is that our\nmodel does not learn the disentangled latent representation as usedin QuaSE since the disentangled representation misses some contentinformation more or less. For the “fluency” metric, the score of ourmodel is also better than QuaSE and TCA. It may comes from thatour pseudo-parallel sentences keep the most grammatical structure ofthe original sentence. This feature also devotes to generating fluent\nsentences.\n4.7 Case Study\nTo directly present the effects of our model on fine-grained sentiment\ntransfer, some examples generated by our model are displayed in Ta-Content Preservation Fluency\n(Range:[0,3]) (Range:[1,4])\nTCA 1.41 2.58\nQuaSE 1.37 2.14\nOur model 1.86 2.88\nTable 5. Human evaluation results for three models on content\npreservation and fluency.\nble 6. Each sentence is revised to five sentences, and the sentimentvalues of them are in 1,2,3,4,5. The generated sentence on T=2 inthe first example is the same as the input sentence due to the originalsentiment label is 2. Similarly, the second example on T=3 and thethird example on T=4 are the same as the input sentences. For thefirst example, when T=1, “sloppy” and “over-priced” are changed tomore negative phrase ”worst” and the generated sentence on T=3 ex-\npresses neutral sentiment. Moreover, when T=4 and 5, the original\nsentence is revised to positive sentences that opposite to the input\nand “wonderful”, “actually excellent” on T=5 express strong posi-\ntive sentiment. For the second example, the input sentence is a neu-tral sentence that describes “seafood”. When T=1 and 2, the originalsentence is revised to express negative sentiment and the generatedsentences on T=4 and 5 express positive sentiments. Although, thegenerated sentences on T=1, 2 and 5 do not describe “seafood”, theydescribe “cake”, “beef” and “steak” that are similar to “seafood”.These indicate that our model is able to preserve most of the contentand revise the words which have the strong polarity of sentiment in a\nsentence. In some examples, like the third example on T=2 and T=5,\nthere have some problems of unacceptable sentences and duplicatesin word-level, it reminds that we need to reduce this problem.\n4.8 Ablation Study\nWe introduce a classifier and the other three constraints to guide theencoder-decoder to modify sentences . To show the effects of thethree losses, we perform ablation study under the MAE and BLEUmetrics. We remove the three losses separately and keep the othersunchanged. The results are shown in Table 7 and Table 8. The firstline in each table is the sentiment intensity value. In the experiments,\nwe just consider the values of 1, 3 and 5. 
The second line in each table\nshows the MAE/BLEU values of QuaSE in table 3 that are used for\ncomparison. The following three lines show the MAE/BLEU values\nunder the omission of reference loss, reconstruction loss and back-\ntranslation loss. The last line shows the MAE/BLEU values of all thelosses.L.Xiao etal./Fine-Gr ained TextSentiment Transfer viaDependency Parsing 2233\nGenerated Sentence\nE.g. 1 the burger was sloppy and the food was over-priced. (input sentiment is 2)\nT=1 the burger was worst and the sauce food was worst!\nT=2 the burger was sloppy and the food was over-priced.\nT=3 the burger was extra and let packaged extra dogs they receive dogs.\nT=4 the burger was phenomenal and prompt service over it.\nT=5 the burger was wonderful and the food was actually excellent.\nE.g. 2 it was appropriately spicy, flavorful, and the seafood was not overcooked. (input sentiment is 3)\nT=1 it was flavorless spicy, flavorful, and the cake was worst breakfast.\nT=2 it was pretty bland spicy, especially, the beef was better!\nT=3 it was appropriately spicy, flavorful, and the seafood was not overcooked.\nT=4 it was great spicy, flavorful, and the great seafood, and great tasting.\nT=5 it was wonderful spicy, wonderful, and the wonderful steak.\nE.g. 3 moist bread, fresh ingredients, great flavor. (input sentiment is 4)\nT=1 waste mix, waste ingredients, waste flavor.\nT=2 bland, the bland ingredients, lousy flavor!\nT=3 had bread, had plenty of flavor.\nT=4 moist bread, fresh ingredients, great flavor.\nT=5 wonderful & fresh ingredients, ingredients, great flavor.\nTable 6. Sentences examples generated by our model on each target sentiment.\nAccording to the result in Table 8, the MAE values of “None” are\nsmaller than “Original” in Table 3. It demonstrates that the decoder\nwith the assist of the classifier is able to revise the sentiment inten-sity of sentences. The MAE values of removing each loss are smallerthan QuaSE, this means each loss we add to the model makes a con-tribution to revise sentiments. The average improvements in remov-ing each loss compared to “None” are 26.67%, 13.66%, and 19%.\nThe average decreases in removing each loss compared to “ALL” are\n31.33%, 44.33%, and 39%. These demonstrate that each loss makes\na certain contribution to sentiment modification. In Table 7, the aver-\nage decreases in removing each loss compared to “ALL” are 57.67%,21.13%, and 22.40%. It shows that each loss is helpful for contentpreservation especially the reference loss. Moreover, the MAE andBLEU values of “ALL” are the best in all the sentiment values. Itshows that the combination of the three losses is effective to enhancethe ability to modify sentiments of our model.\nT=1 T=3 T=5\nQuaSE 6.26 24.63 8.23\nNone 8.11 10.03 12.32\nLrec,Lbt 12.04 20.17 21.80\nLr,Lbt 15.68 23.71 25.58\nLr,Lrec 19.21 24.28 21.10\nALL 18.93 25.96 26.42\nTable 7. Ablation study on BLEU metric.\n5 Conclusions\nIn this work, we focus on the task of fine-grained sentiment trans-fer that requires to edit sentence on given numeric sentiment valuesT=1 T=3 T=5\nQuaSE 1.29 0.77 1.19\nNone 1.38 0.53 1.09\nLrec,Lbt 0.77 0.44 0.99\nLr,Lbt 1.26 0.41 0.92\nLr,Lrec 1.07 0.32 1.04\nALL 0.47 0.23 0.56\nTable 8. Ablation study on MAE metric.\nwhile keeping content unchanged. We propose a novel method basedon dependency parsing without learning disentangled representationas usually worked in the previous works. 
We produce pseudo-parallelsentences through dependency parsing and employ a set of lossesto give richer signals to enhance the model. Automatic and humanevaluations of experiments on the Yelp reviews demonstrate that ourmodel substantially outperforms the compared models. In the future,\nwe intend to expand our work on more attributes not only sentiment\nand long text transfer.\nACKNOWLEDGEMENTS\nThis work is supported by the National Key Research and De-\nvelopment Program of China under grants 2016YFB0800402and 2016QY01W0202, National Natural Science Foundation ofChina under grants U1836204, U1936108, 61572221, 61433006,U1401258, 61572222 and 61502185, and Major Projects of the Na-\ntional Social Science Foundation under grant 16ZDA092.L.Xiao etal./Fine-Gr ained TextSentiment Transfer viaDependency Parsing 2234\nREFERENCES\n[1] Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani, ‘Senti-\nwordnet 3.0: an enhanced lexical resource for sentiment analysis and\nopinion mining.’.\n[2] Dzmitry Bahdanau, Kyunghyun Cho, and Y oshua Bengio, ‘Neural ma-\nchine translation by jointly learning to align and translate’, CoRR,\nabs/1409.0473, (2014).\n[3] Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang, ‘Style trans-\nformer: Unpaired text style transfer without disentangled latent repre-sentation’, in ACL , (2019).\n[4] C ´ıcero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi, ‘Fighting\noffensive language on social media with unsupervised text style trans-fer’, in ACL , (2018).\n[5] Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui\nYan, ‘Style transfer in text: Exploration and evaluation’, ArXiv,\nabs/1711.06861, (2017).\n[6] Hongyu Gong, Suma Bhat, Lingfei Wu, Jinjun Xiong, and Wen-Mei\nHwu, ‘Reinforcement learning based text style transfer without parallel\ntraining corpus’, in NAACL-HLT, (2019).\n[7] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David\nWarde-Farley, Sherjil Ozair, Aaron C. Courville, and Y oshua Bengio,\n‘Generative adversarial nets’, in NIPS, (2014).\n[8] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and\nEric P . Xing, ‘Toward controlled generation of text’, in ICML, (2017).\n[9] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, ‘Image-\nto-image translation with conditional adversarial networks’, 2017 IEEE\nConference on Computer Vision and Pattern Recognition (CVPR),5967–5976, (2016).\n[10] Parag Jain, Abhijit Mishra, Amar Prakash Azad, and Karthik Sankara-\nnarayanan, ‘Unsupervised controllable text formalization’, in AAAI,\n(2018).\n[11] Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga V echtomova,\n‘Disentangled representation learning for non-parallel text style trans-\nfer’, in ACL , (2018).\n[12] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Ji-\nwon Kim, ‘Learning to discover cross-domain relations with generative\nadversarial networks’, in ICML, (2017).\n[13] Diederik P . 
[13] Diederik P. Kingma and Max Welling, ‘Auto-encoding variational bayes’, CoRR, abs/1312.6114, (2013).
[14] Juncen Li, Robin Jia, He He, and Percy Liang, ‘Delete, retrieve, generate: a simple approach to sentiment and style transfer’, in NAACL-HLT, (2018).
[15] Yanghao Li, Naiyan Wang, Jiaying Liu, and Xiaodi Hou, ‘Demystifying neural style transfer’, in IJCAI, (2017).
[16] Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, and Tong Zhang, ‘Quase: Sequence editing under quantifiable guidance’, in EMNLP, (2018).
[17] Fuli Luo, Peng Li, Pengcheng Yang, Jie Zhou, Yutong Tan, Baobao Chang, Zhifang Sui, and Xu Sun, ‘Towards fine-grained text sentiment transfer’, in ACL, (2019).
[18] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow, ‘Adversarial autoencoders’, ArXiv, abs/1511.05644, (2015).
[19] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky, ‘The stanford corenlp natural language processing toolkit’, in ACL, (2014).
[20] Igor Melnyk, Cícero Nogueira dos Santos, Kahini Wadhawan, Inkit Padhi, and Abhishek Kumar, ‘Improved neural text attribute transfer with non-parallel data’, ArXiv, abs/1711.09395, (2017).
[21] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu, ‘Bleu: a method for automatic evaluation of machine translation’, in ACL, (2001).
[22] Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W. Black, ‘Style transfer through back-translation’, in ACL, (2018).
[23] Xiaoye Qu, Zhikang Zou, Yu Cheng, Yang Yang, and Pan Zhou, ‘Adversarial category alignment network for cross-domain sentiment classification’, in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2496–2508, (2019).
[24] Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola, ‘Style transfer from non-parallel text by cross-alignment’, in NIPS, (2017).
[25] Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran, ‘Transforming delete, retrieve, generate approach for controlled text style transfer’, ArXiv, abs/1908.09368, (2019).
[26] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le, ‘Sequence to sequence learning with neural networks’, in NIPS, (2014).
[27] Yaniv Taigman, Adam Polyak, and Lior Wolf, ‘Unsupervised cross-domain image generation’, ArXiv, abs/1611.02200, (2016).
[28] Chen Wu, Xuancheng Ren, Fuli Luo, and Xu Sun, ‘A hierarchical reinforced sequence operation method for unsupervised text style transfer’, in ACL, (2019).
[29] Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu, ‘Mask and infill: Applying masked language model for sentiment transfer’, ArXiv, abs/1908.08039, (2019).
[30] Jingjing Xu, Xu Sun, Qi Zeng, Xuancheng Ren, Xiaodong Zhang, Houfeng Wang, and Wenjie Li, ‘Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach’, in ACL, (2018).
[31] Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick, ‘Unsupervised text style transfer using language models as discriminators’, in NeurIPS, (2018).
[32] Ye Zhang, Nan Ding, and Radu Soricut, ‘Shaped: Shared-private encoder-decoder for text style adaptation’, in NAACL-HLT, (2018).
[33] Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun, ‘Learning sentiment memories for sentiment modification without parallel data’, in EMNLP, (2018).
[34] Junbo Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun, ‘Adversarially regularized autoencoders’, in ICML, (2017).
[35] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros, ‘Unpaired image-to-image translation using cycle-consistent adversarial networks’, 2017 IEEE International Conference on Computer Vision (ICCV), 2242–2251, (2017).
[36] Zhikang Zou, Xinxing Su, Xiaoye Qu, and Pan Zhou, ‘Da-net: Learning the fine-grained density distribution with deformation aggregation network’, IEEE Access, 6, 60745–60756, (2018).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "8LE06pFhqsW",
"year": null,
"venue": "NeurIPS 2022 Accept",
"pdf_link": "/pdf/93d8e342a8a64d9336cbacc12abb8d067e522aec.pdf",
"forum_link": "https://openreview.net/forum?id=8LE06pFhqsW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E-MAPP: Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance",
"authors": [
"Can Chang",
"Ni Mu",
"Jiajun Wu",
"Ling Pan",
"Huazhe Xu"
],
"abstract": "A critical challenge in multi-agent reinforcement learning(MARL) is for multiple agents to efficiently accomplish complex, long-horizon tasks. The agents often have difficulties in cooperating on common goals, dividing complex tasks, and planning through several stages to make progress. We propose to address these challenges by guiding agents with programs designed for parallelization, since programs as a representation contain rich structural and semantic information, and are widely used as abstractions for long-horizon tasks. \nSpecifically, we introduce Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance(E-MAPP), a novel framework that leverages parallel programs to guide multiple agents to efficiently accomplish goals that require planning over $10+$ stages. \nE-MAPP integrates the structural information from a parallel program, promotes the cooperative behaviors grounded in program semantics, and improves the time efficiency via a task allocator. We conduct extensive experiments on a series of challenging, long-horizon cooperative tasks in the Overcooked environment. Results show that E-MAPP outperforms strong baselines in terms of the completion rate, time efficiency, and zero-shot generalization ability by a large margin.",
"keywords": [
"multi-agent reinforcement learning",
"program guided agents",
"long-horizon tasks"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "W313rgojzJq",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=W313rgojzJq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer e1mG (cont.)",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "NOizXF9fxtX",
"year": null,
"venue": "ETFA 2020",
"pdf_link": "https://ieeexplore.ieee.org/iel7/9210104/9211869/09212001.pdf",
"forum_link": "https://openreview.net/forum?id=NOizXF9fxtX",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Considering Safety Requirements in Design Phase of Future E/E Architectures",
"authors": [
"Hadi Askaripoor",
"Morteza Hashemi Farzaneh",
"Alois C. Knoll"
],
"abstract": "Without meeting safety requirements in the design of electric/electronic (E/E) architectures, achieving fully-automated vehicles is infeasible. However, considering architecture-related safety requirements (e.g. redundancy for fail-operational) in the design phase is a time-consuming task that requires domain-specific knowledge. This paper tackles this challenge by proposing a novel approach under development that takes topological safety aspects into account and transforms them into an optimization problem to generate safe topologies. We aim at accelerating the architecture design process while reducing unnecessary verification efforts as well as avoiding undesired functional safety violations.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "TUnLXG54vER",
"year": null,
"venue": "ECAI 2014",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-419-0-1221",
"forum_link": "https://openreview.net/forum?id=TUnLXG54vER",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Piano Music Companion",
"authors": [
"Andreas Arzt",
"Sebastian Böck",
"Sebastian Flossmann",
"Harald Frostel",
"Martin Gasser",
"Cynthia C. S. Liem",
"Gerhard Widmer"
],
"abstract": "We present a system that we call ‘The Piano Music Companion’ and that is able to follow and understand (at least to some extent) a live piano performance. Within a few seconds this system can identify the piece that is being played, and the position within the piece. It then tracks the progress of the performer over time via a robust score following algorithm. The companion is useful in multiple ways, e.g., it can be used for piece identification, music visualisation, during piano rehearsal and for automatic page turning.",
"keywords": [],
"raw_extracted_content": "The Piano Music Companion\nAndreas Arzt(1), Sebastian B ¨ock(1), Sebastian Flossmann(1), Harald Frostel(1),\nMartin Gasser(2), Cynthia C. S. Liem(3)and Gerhard Widmer(1,2)1\nAbstract. We present a system that we call ‘The Piano Music Com-\npanion’ and that is able to follow and understand (at least to some\nextent) a live piano performance. Within a few seconds this systemcan identify the piece that is being played, and the position within thepiece. It then tracks the progress of the performer over time via a ro-bust score following algorithm. The companion is useful in multipleways, e.g., it can be used for piece identification, music visualisation,during piano rehearsal and for automatic page turning.\n1 The Piano Music Companion\nThe piano music companion is a versatile system that can be used bypianists and more widely by consumers of piano music, in variousscenarios. It is able to identify, follow and understand live perfor-mances of classical piano music – at least to some extent. The com-panion has two important capabilities that we believe such a systemmust possess: (1) automatically identifying the piece it is listening to,and (2) following the progress of the performer(s) within the scoreover time.\nTo support these two capabilities, the companion is provided with\na database of sheet music in symbolic form, i.e., sequences of (note-on, pitch) pairs. Currently the database includes, amongst others, thecomplete solo piano works by Chopin and the complete Beethovenpiano sonatas, and consists of roughly 1,000,000 notes in total (about330 pieces). When listening to live music, the companion is able toidentify the piece that is being played, and the position within thepiece. It then tracks the progress of the performers over time, i.e., atany time the current position in the score is computed. Furthermore,it continuously re-evaluates its hypothesis and tries to match the cur-rent input stream to the complete database. Thus, it is able to followany action of the musician, e.g., jumps to a different position or anentirely different piece – as long as the piece is part of the database.The system is tolerant to performance errors and slight variations,and is robust to tempo changes. These capabilities enable various ap-plication, of which four are described in Sec. 2 below.\n2 Applications\nThe companion can be used to identify classical piano music. Wher-\never you are, and whatever the source of music, be it a live concert,a DVD or radio, only a few seconds of audio material are required toconfidently identify the piece and retrieve the name, the composer,and additional meta-data like the historical context of the piece, fa-mous interpretations, and where to buy recordings. We want to em-phasise that this task differs from audio identification as provided for\n1( 1 )Johannes Kepler University, Linz, Austria;(2)Austrian Research In-\nstitute for Artificial Intelligence, Vienna, Austria;(3)Delft University of\nTechnology, Delft, The Netherlandspopular music by services like Shazam [5]. Given a query, these ser-vices are able to identify exact copies of the audio in their database,i.e., instances of exactly the same performance of the piece. In con-trast to this, we are interested in the piece, i.e., the composition thatwas the basis for the performance. Hence, the companion should notrely on having every available performance as an audio file in the\ndatabase, but on a more general representation: the sheet music. 
Furthermore, this also ensures that the companion works for (so far unknown) live performances.

The second use case is live music visualisation and performance enrichment. As, at any point in time, the companion knows exactly where in the sheet music the performer is, it can show visualisations synchronised to the live music. In the simplest case it can show the sheet music itself, with a marker indicating the current position. While this is already helpful for listeners, more sophisticated visualisations and enrichments are possible, like showing information about the structure of the piece and the most important themes, and giving hints about what to listen for at specific moments.

A third application is the use of the companion during piano rehearsal, as the system can follow a performer and show the sheet music accordingly, even if he/she repeats a section over and over or only plays the parts of the score he/she needs to rehearse. The musician can simply sit down at the piano, start the app on a tablet computer, query the piece by playing the first few beats and start practising. The companion will follow all the actions and show the sheet music on the screen (which has the additional benefit of not having to carry heavy books to practice sessions).

Finally, professional musicians can use the companion, on-stage, for fully automatic page turning. Being able to track the live performance, and thus at all times knowing where the musicians currently are in the piece, the system can control and automatically trigger a mechanical page turning device that turns the sheet music page at the appropriate time. This way, musicians do not have to rely on a human page turner (who will always have to get in between the musician and the sheet music), nor risk taking their hands off the instrument to turn the pages themselves. At the same time, an automatic page turner allows them to use the paper version of the score, which they normally prefer. The automatic page turner has already been used in real piano recitals in the context of various scientific gala events in Vienna. The device in action is shown in Fig. 1.

Figure 1. The fully automatic page turner in action.

3 System Overview
Fig. 2 gives an overview of the piano companion. As described above, the system is able to automatically detect the played piece and then track it over time. Two main components running in parallel enable the companion to achieve this: (1) an ‘Instant Piece and Position Recogniser’ and (2) a ‘Multi Agent Music Tracking System’.

The recogniser is based on an on-line piano music transcription algorithm [3], which takes the audio stream and translates it into symbolic information (a list of pitches with time stamps) using a bidirectional recurrent neural network. The most recently detected notes of the live performance are then matched to the database of sheet music via a tempo-independent fingerprinting method [1]. This process is continuously running in the background, regularly providing new piece and (rough) position hypotheses for the tracking component.

Figure 2. System overview.
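To make the matching step concrete, here is a small illustrative sketch of one way to build tempo-invariant fingerprints over (note-on, pitch) pairs. Hashing quantised ratios of inter-onset intervals, rather than absolute times, is what makes the lookup independent of the performer's tempo. The triple-hashing scheme, the quantisation, and all names below are our assumptions; the companion's actual method is described in [1].

```python
from collections import defaultdict

def fingerprints(notes):
    """Yield tempo-invariant hashes from consecutive (onset_time, pitch)
    triples. Quantising the ratio of the two inter-onset intervals makes
    the hash independent of the overall tempo. Illustrative scheme only."""
    for (t1, p1), (t2, p2), (t3, p3) in zip(notes, notes[1:], notes[2:]):
        if t2 > t1:                       # skip simultaneous onsets (chords)
            yield (p1, p2, p3, round((t3 - t2) / (t2 - t1), 1))

# Index the symbolic score database once ...
index = defaultdict(list)
score = [(0.0, 60), (0.5, 62), (1.0, 64), (1.5, 65)]  # toy excerpt, MIDI pitches
for pos, fp in enumerate(fingerprints(score)):
    index[fp].append(("toy_piece", pos))

# ... then look up fingerprints of the transcribed live stream, played slower.
live = [(0.0, 60), (0.8, 62), (1.6, 64)]
for fp in fingerprints(live):
    print(fp, "->", index.get(fp, "no match"))  # hits despite the tempo change
```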
These hypotheses are then processed by a multi agent music tracking algorithm (for more information about the tracking algorithm see [2]), which tries to match the current audio input to the respective positions in the sheet music and follows the progress of the musician over time. At each point in time, a single tracker is marked as active, i.e., it represents the system's belief about the current position in the database.

4 Conclusions and Future Work
In this paper we presented a piano music companion that is based on a very flexible music tracking algorithm. We see this system as a first step towards our vision of a “Complete Classical Music Companion”: a system that is at your fingertips anytime and anywhere, possibly as an app on a mobile device like a tablet computer, and that provides you with information about what is going on musically around you. Whatever the piece, for whatever instrumentation, and whoever the performers are, the companion will inform you about both the written music and the specifics of the ongoing (live) performance, and guide you in the listening process.

An important step towards this goal is to lift the restriction to piano music only. We already have some encouraging preliminary results regarding tracking, i.e., we can track live performances by symphonic orchestras, at least well enough for certain applications like synchronised visualisation of the sheet music. Thus, the limiting component currently is the transcription system. Future work has to find a solution that is both fast and accurate enough for our intended application.

All this is further motivated by the fact that the companion will play a practical role in the PHENICX project (http://phenicx.upf.edu) [4], which has the broad goal of “changing the way we experience classical music concerts”, with one of the project partners being the world-renowned Royal Concertgebouw Orchestra Amsterdam. Thus, future work will also include finding ways of using the companion in the practical context of a world-class orchestra.

ACKNOWLEDGEMENTS
This research is supported by the Austrian Science Fund (FWF) under project numbers Z159 and TRP 109-N23, and the EU FP7 Project PHENICX (grant no. 601166).

REFERENCES
[1] Andreas Arzt, Gerhard Widmer, Sebastian Böck, Reinhard Sonnleitner, and Harald Frostel, ‘Towards a complete classical music companion’, in Proceedings of the European Conference on Artificial Intelligence (ECAI), (2012).
[2] Andreas Arzt, Gerhard Widmer, and Simon Dixon, ‘Automatic page turning for musicians via real-time machine listening’, in Proceedings of the 18th European Conference on Artificial Intelligence (ECAI), (2008).
[3] Sebastian Böck and Markus Schedl, ‘Polyphonic piano note transcription with recurrent neural networks’, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (2012).
[4] Emilia Gómez, Maarten Grachten, Alan Hanjalic, Jordi Janer, Sergi Jorda, Carles F. Julia, Cynthia Liem, Agustin Martorell, Markus Schedl, and Gerhard Widmer, ‘PHENICX: Performances as Highly Enriched aNd Interactive Concert eXperiences’, in Proceedings of the Sound and Music Computing Conference (SMC), (2013).
[5] Avery Wang, ‘An industrial strength audio search algorithm’, in Proceedings of the International Conference on Music Information Retrieval (ISMIR), (2003).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HD6aWjZfGs",
"year": null,
"venue": "CoRR 2022",
"pdf_link": "http://arxiv.org/pdf/2206.02281v1",
"forum_link": "https://openreview.net/forum?id=HD6aWjZfGs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E^2VTS: Energy-Efficient Video Text Spotting from Unmanned Aerial Vehicles",
"authors": [
"Zhenyu Hu",
"Zhenyu Wu",
"Pengcheng Pi",
"Yunhe Xue",
"Jiayi Shen",
"Jianchao Tan",
"Xiangru Lian",
"Zhangyang Wang",
"Ji Liu"
],
"abstract": "Unmanned Aerial Vehicles (UAVs) based video text spotting has been extensively used in civil and military domains. UAV's limited battery capacity motivates us to develop an energy-efficient video text spotting solution. In this paper, we first revisit RCNN's crop & resize training strategy and empirically find that it outperforms aligned RoI sampling on a real-world video text dataset captured by UAV. To reduce energy consumption, we further propose a multi-stage image processor that takes videos' redundancy, continuity, and mixed degradation into account. Lastly, the model is pruned and quantized before deployed on Raspberry Pi. Our proposed energy-efficient video text spotting solution, dubbed as E^2VTS, outperforms all previous methods by achieving a competitive tradeoff between energy efficiency and performance. All our codes and pre-trained models are available at https://github.com/wuzhenyusjtu/LPCVC20-VideoTextSpotting.",
"keywords": [],
"raw_extracted_content": "E2VTS: Energy-Efficient Video Text Spotting from Unmanned Aerial Vehicles\nZhenyu Hu1*, Zhenyu Wu1*, Pengcheng Pi1, Yunhe Xue1, Jiayi Shen1, Jianchao Tan2, Xiangru Lian2,\nZhangyang Wang3, and Ji Liu2†\n1Texas A&M University2Kwai Inc.3The University of Texas at Austin\nAbstract\nUnmanned Aerial Vehicles (UAVs) based video text spot-\nting has been extensively used in civil and military do-\nmains. UAV’s limited battery capacity motivates us to de-\nvelop an energy-efficient video text spotting solution. In\nthis paper, we first revisit RCNN’s crop & resize training\nstrategy and empirically find that it outperforms aligned\nRoI sampling on a real-world video text dataset cap-\ntured by UAV . To reduce energy consumption, we further\npropose a multi-stage image processor that takes videos’\nredundancy, continuity, and mixed degradation into ac-\ncount. Lastly, the model is pruned and quantized be-\nfore deployed on Raspberry Pi. Our proposed energy-\nefficient video text spotting solution, dubbed as E2VTS ,\noutperforms all previous methods by achieving a com-\npetitive tradeoff between energy efficiency and perfor-\nmance. All our codes and pre-trained models are avail-\nable at https://github.com/wuzhenyusjtu/\nLPCVC20-VideoTextSpotting .\n1. Introduction\nUA V-based video text spotting is broadly applied in as-\nsistive navigation, automatic translation, road sign recog-\nnition, industrial monitoring, and disaster response, etc.\nA standard video text spotting model has four compo-\nnents: text detector, text recognizer, text tracker, and post-\nprocessing.\nExisting video text spotting solutions [37, 10] are purely\nperformance-driven and fail to take energy consumption\ninto account. Multi-frame related features are first obtained\nin frame-wise detection or tracking. Then, they are ag-\ngregated for enhancement in a cross-frame and multi-scale\nway for text recognition. Therefore, existing performance-\ndriven solutions are high in energy consumption and unsuit-\nable for resource-constrained UA V platforms.\n*The first two authors contribute equally and are listed alphabetically.\n†Correspondence to: Ji Liu with the AI Platform, Ytech Seat-\ntle AI Lab, FeDA Lab, Kwai Inc., Seattle WA 98004, USA, e-mail:\[email protected] iIn this paper, we propose an Energy-Efficient Video Text\nSpotting solution, dubbed as E2VTS. Our contribution can\nbe summarized as follows:\n•Novel Training & Inference Strategies: To obtain\nbetter text spotting performance, we revisit RCNN\nand empirically find that crop and resize outperforms\naligned RoI Pooling when connecting the text recog-\nnizer with the text detector. To further save energy\nconsumption, we propose a multi-stage image proces-\nsor to select the highest-quality frame in a sliding win-\ndow, reject text-free frames as well as crop non-text\nregions, and reject out-of-distribution frames.\n•Experiments: On a real-world UA V-captured text\nvideo dataset deployed on Raspberry Pi, we conducted\nthorough ablation studies on the proposed training and\ninference strategies. The evaluation metric takes both\nenergy consumption and text spotting performance\ninto consideration. Models are pruned and quantized\nbefore deployment.\n2. Related Work\n2.1. Text Reading in Images\nText Detection: Object detection-based [21, 48, 24, 47, 36]\nand sub-text components-based [11, 42, 32, 35, 2] are two\nstreams of solutions to text detection. 
Based on the observation that any part of a text instance is still text, sub-text components-based methods incorporate the inductive biases of the homogeneity and locality of text instances into the model design.

Text Recognition: There are two major strategies for decoding text content from CNN-encoded image features: Connectionist Temporal Classification (CTC) [13] and the encoder-decoder framework [30]. CTC-based methods include CRNN [29] and sliding convolutional character models [46]. Encoder-decoder methods include R2AM [16] and Edit Probability (EP) [4].

End-to-end Text Spotting: Representative end-to-end text spotting methods fall into two categories, regular-shaped and arbitrary-shaped.

Regular-shaped Text: Li et al. [17, 34] proposed the first deep-learning based end-to-end trainable scene text spotting method for horizontal text, incorporating RoI Pooling [27] to join the detection and recognition stages. Deep TextSpotter [5] handled multi-orientation text instances without feature sharing between the detection and recognition stages. End-to-End TextSpotter [14] and FOTS [22] adopted an anchor-free mechanism to improve both training and inference speed. They use two similar sampling strategies, i.e., Text-Alignment and RoIRotate, to extract features from arbitrary-oriented quadrilateral detection results.

Arbitrary-shaped Text: Mask TextSpotter [19, 20] used character-level supervision to simultaneously detect and recognize characters and instance masks. Nonetheless, character-level ground truths are expensive and thus mostly unavailable for real data. RoI Masking [26] cropped the features from the predicted axis-aligned rectangular bounding boxes and multiplied them with the corresponding instance segmentation mask. TextDragon [12] proposed RoISlide to transform whole text features into axis-aligned features indirectly, by transforming each local quadrangle sequentially. As the first one-stage text spotting method, CharNet [43] directly outputted bounding boxes of words with corresponding character labels. ABCNet [23] adaptively fitted arbitrarily-shaped text with a parameterized Bezier curve and used a BezierAlign layer to extract accurate convolutional features. CRAFTS [3] used the character region feature from the detector as input character attention for the recognizer.

2.2. Text Reading in Videos
Text Detection & Tracking: Wang et al. [33] proposed a multi-scale feature sampling and warping network on adjacent frames, and an attention-based multi-frame feature aggregation mechanism to fuse complementary text features from related frames. Wu et al. [41] explored Delaunay triangulation to detect and track texts; the triangular mesh pattern reflects text properties, such as regular spacing between characters and constant stroke width, and is thus distinguishable from non-text. Yang et al. [44] combined single-frame detection with cross-frame motion-based tracking, formulating the text association as a cost-flow network. Tian et al. [31] located character candidates locally and searched text regions globally; specifically, a multi-strategy tracking based text detection approach [49] was used to globally search for and select the best text region with dynamic programming.
Wang et al. [38] proposed a fully convolutional model based on a novel refine block structure, which refines low-resolution semantic features with high-resolution low-level features.

End-to-End Text Spotting: Wang et al. [37] proposed a multi-frame tracking based method, where text detection and recognition are done on each frame before the recognized texts are tracked over the video sequence. FREE [10, 9] proposed a text recommender to select the highest-quality text from text streams for recognition, and released a large-scale video text spotting dataset.

3. E2VTS: An Energy-Efficient Video Text Spotting Solution
Overview: The E2VTS two-step text spotting system adopts the Efficient and Accurate Scene Text Detector (EAST) as the text detector and a Convolutional Recurrent Neural Network (CRNN) as the text recognizer. The recognizer is connected with the detector via crop & resize. A multi-stage image processor is proposed to further save energy; it has three stages: selecting the highest-quality frame in a sliding window, rejecting text-free images and cropping non-text regions, and rejecting out-of-distribution images. The pipeline is shown in Figure 1.

Figure 1: Overview: E2VTS consists of two components. Component one is a multi-stage image processor which selects the best frame within a window and crops out the background. Component two is a two-step crop & resize text spotting system comprising an EAST detector and a CRNN recognizer. The EAST detector is based on a ResNet34 backbone and outputs confidence, angle, and distance maps. Out-of-distribution frames are rejected at ResNet Layer3.

3.1. Revisiting RCNN: Crop & Resize vs. Aligned RoI Pooling
We compare two mechanisms for connecting the detector and the recognizer: crop+resize versus aligned RoI pooling. Examples of aligned RoI pooling include BezierAlign [23] for arbitrary-shaped text and RoIRotate [22] for rotated text. Given the predicted bounding box from the detector, in crop+resize the input to the recognizer is the cropped box area, affinely transformed from the original image and resized to a fixed resolution; the detector and the recognizer are trained independently. In aligned RoI pooling, the input to the recognizer is the cropped box area affinely transformed from the feature map, and the detector and the recognizer are trained jointly. Note that the text recognition loss uses the ground truth text regions instead of the predicted text regions.

Unlike the benchmarks in image-based text spotting, real-world videos for text spotting are full of small and poor-quality text boxes. Consequently, crop+resize outperforms aligned RoI pooling for two reasons. First, aligned RoI pooling loses the discriminative details of small text boxes due to the deep convolutions in the detector; in contrast, crop+resize enlarges the input resolution of small text boxes and preserves their discriminative spatial details [39]. Second, text recognition (i.e., knowing what the text is) is intrinsically more difficult than text detection (i.e., knowing where the text is); thus, feature sharing and joint training lead to sub-optimal performance for both tasks [8, 7].
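As an illustration of the crop & resize connection, the sketch below rectifies one detected quadrilateral from the original frame and resizes it for the recognizer. The helper name, the 32-pixel target height (a common CRNN input height), and the width cap are illustrative assumptions, not the authors' configuration.

```python
import cv2
import numpy as np

def crop_and_resize(frame, quad, out_h=32, max_w=256):
    """Rectify one detected (possibly rotated) text quadrilateral from the
    original frame and resize it for the recognizer. `quad` is a (4, 2)
    array of corners ordered top-left, top-right, bottom-right, bottom-left.
    """
    quad = np.asarray(quad, dtype=np.float32)
    # Target width from the longer of the two horizontal edges, capped.
    w = int(min(max_w, max(np.linalg.norm(quad[1] - quad[0]),
                           np.linalg.norm(quad[2] - quad[3]))))
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, out_h - 1], [0, out_h - 1]],
                   dtype=np.float32)
    M = cv2.getPerspectiveTransform(quad, dst)
    return cv2.warpPerspective(frame, M, (w, out_h))

# The rectified crop is then fed to the independently trained recognizer,
# e.g. text = crnn(preprocess(crop_and_resize(frame, quad))).
```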
3.2. Multi-Stage Image Processor
Different from a single image, video frames are redundant and continuous in the temporal domain. Comparing one frame with its precedents and successors under certain metrics is a natural filtering process for selecting the most suitable frame for the later detection task. We also leverage the sharp transitions of text regions to preliminarily remove non-text background and further boost efficiency. All these steps are based on simple signal processing algorithms, which are significantly faster than neural network models.

3.2.1 Stage I: Selecting the Highest-Quality Frame in a Sliding Window
Problem Definition: Blur is the major artifact in UAV-captured videos, caused by camera shake, depth variation, object motion, or a combination of them [40, 15]. Among all the frames describing the same visual scene, the clearest image yields the smallest detection or recognition error. Since blurred frames contain less energy in the high-frequency components, the power in their associated power spectrum [45] tends to fall much faster with increasing frequency than in clear frames. Therefore, the average of the power spectra of clear frames is higher than that of degraded ones, as degraded frames have a steeper slope in their power spectrum.

Implementation: In Fig. 2, we propose a sliding window mechanism and select the highest-quality frame in each window. Given a video containing L frames, the i-th window W_i is obtained via

    W_i = S(i; N)(I_1, ..., I_L),    (1)

where S represents the sliding rule and N is the window size. The selected highest-quality frame in W_i is

    I_HQ = argmax_{I ∈ W_i} G(I),    (2)

where G is the quality measure. We propose two measures in this work, the average fast Fourier transform (FFT) magnitude and the variance of Laplacian [25], defined as

    G_FFT(I') = ||FFT(I')|| / (hw),    G_LV(I') = Var(k_L ∗ I'),    (3)

where I' is a given frame with height h and width w, k_L is the Laplacian kernel, and ∗ denotes convolution. The FFT magnitude measure approximates the power spectral density in the frequency domain; the variance of Laplacian stresses spatial information by counting sharp transitions in the frame. The two measures work in a complementary way, so we integrate them by taking a weighted average of the two measures' score rankings over a window. Let rank(I, W, G) denote a function that returns the rank of frame I among all the frames in window W, scored by the quality measure G in ascending order. The selected highest-quality frame is

    I_HQ = argmax_{I ∈ W_i} [λ · rank(I, W_i, G_FFT) + (1 − λ) · rank(I, W_i, G_LV)],    (4)

where λ is the relative weight parameter.

Figure 2: Sliding window for highest-quality frame selection. A window iterator slides over the temporally sub-sampled frames, quality scoring is conducted on the frames via the proposed measure, and the highest-ranked frame is selected.

In practice, the video sequence is sub-sampled at rate r to further boost efficiency before applying the sliding window filter. As a hyper-parameter, the sub-sample rate r has a great impact on the tradeoff between energy efficiency and performance. Although a higher sub-sample rate saves more energy, it also raises the chance of missing scenes for text spotting.
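A minimal sketch of the two quality measures and the rank combination of Eq. (4), using OpenCV and NumPy; the weight λ = 0.5 and the helper names are illustrative assumptions.

```python
import cv2
import numpy as np

def g_lv(gray):
    """Variance of the Laplacian response: sharpness in the spatial domain."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def g_fft(gray):
    """Mean FFT magnitude: a proxy for high-frequency energy, as in Eq. (3)."""
    return np.abs(np.fft.fft2(gray)).mean()

def best_frame(frames, lam=0.5):
    """Pick the highest-quality frame in one window via the weighted rank
    combination of Eq. (4)."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]

    def ranks(scores):
        order = np.argsort(scores)          # ascending, as in the paper
        r = np.empty(len(scores))
        r[order] = np.arange(len(scores))
        return r

    combined = (lam * ranks([g_fft(g) for g in grays])
                + (1 - lam) * ranks([g_lv(g) for g in grays]))
    return frames[int(np.argmax(combined))]

# Applied per window of a sub-sampled stream:
# keep = [best_frame(frames[i:i + N]) for i in range(0, len(frames) - N + 1, N)]
```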
3.2.2 Stage II: Rejecting Text-free Images and Cropping Non-Text Regions
Problem Definition: Known for high time complexity and energy consumption, connected component-based text detection relies on maximally stable extremal regions (MSER) as character candidates, and on the stroke width transform (SWT) for filtering and pairing connected components. Given the observation that the cohesive characters composing a word or sentence share similar properties such as spatial location, size, and stroke width, we instead turn to the Canny edge detector [6] to locate the edge pixels that build up the text's structure (a.k.a. its contour).

Implementation: In Fig. 3, we further reject text-free images and crop non-text regions. First, the Canny edge detector is applied to the three channels of the input image I_yuv represented in YUV color space, and the three channel outputs (Y_c, U_c, V_c) are merged by a bitwise OR (denoted "|") to obtain the edge map I_e. Then, morphological closing is applied to I_e to remove small holes and merge connected components. If any text region is present in the image, a binary image with contiguous text characters is returned. Next, the histogram maps are obtained by summing up pixels along the x and y axes (with the origin in the lower left corner, the x-axis runs from left to right and the y-axis from bottom to top):

    H_x[i] = Σ_{k=1..h} I_c[i, k],    H_y[j] = Σ_{k=1..w} I_c[k, j],    (5)

where w and h are the width and height of the image. After that, all the peaks (local maxima, found by comparing neighboring values in each histogram) of the two histogram maps, i.e., P_x and P_y, are found. Text regions are assumed to fall within the peaks. Finally, text-free images are rejected based on two preset thresholds (θ, α) on the peak counts and peak intensities, respectively. Note that this second-stage selector cannot deal with images with complicated backgrounds, since their peak values vary along both axes without any identifiable pattern. Therefore, images with complicated backgrounds, whose peaks are consistently high along the entire x and y axes, are accepted.

Cropping text regions improves the SNR in the image (we treat text-related pixels as signal and all other pixels as noise). On images with a simple background, the text regions are assumed to lie between (x_l, y_b) and (x_r, y_t). The coordinates of the text region are obtained from the peaks via

    x_l, x_r, y_b, y_t = P_x[1], P_x[−1], P_y[1], P_y[−1].    (6)

The details of the second-stage selector are shown in Algorithm 1.

Figure 3: Cropping text foreground: we use the histogram to analyze the edge information of the selected frame. If the number of peaks and the mean peak intensity satisfy the predefined thresholds, the text bounding coordinates are selected from the peak information; otherwise, the frame is discarded.

Algorithm 1: Rejecting Text-free Images or Cropping Text Regions
 1: Initialization: θ, α: predefined thresholds
 2: I_yuv ← RGB2YUV(I)
 3: Y_c, U_c, V_c ← CannyEdge(I_yuv)
 4: I_e ← Y_c | U_c | V_c
 5: I_c ← MorphClose(I_e)
 6: H_x, H_y ← Histogram(I_c)            // sum up pixels along each axis
 7: P_x, P_y ← FindPeaks(H_x, H_y)
 8: μ_x, μ_y ← Mean(P_x), Mean(P_y)
 9: if Count(P_x) ≤ θ or Count(P_y) ≤ θ or μ_x ≤ α or μ_y ≤ α then
10:     REJECT
11: else
12:     ACCEPT
13:     x_l, x_r, y_b, y_t ← P_x[1], P_x[−1], P_y[1], P_y[−1]
14:     return I[x_l : x_r, y_b : y_t]
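A minimal runnable sketch of Algorithm 1 with OpenCV and SciPy. The Canny thresholds, the closing kernel, and the preset values of (θ, α) are illustrative assumptions (the paper leaves them unspecified), and standard top-left array indexing replaces the lower-left origin used above.

```python
import cv2
import numpy as np
from scipy.signal import find_peaks

def reject_or_crop(img_bgr, theta=3, alpha=5):
    """Algorithm 1: return the cropped text region, or None (REJECT)."""
    yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV)
    edges = cv2.Canny(yuv[:, :, 0], 100, 200)      # Canny thresholds assumed
    for c in (1, 2):                               # merge channels with OR
        edges |= cv2.Canny(yuv[:, :, c], 100, 200)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                              np.ones((5, 5), np.uint8))
    hx = (closed > 0).sum(axis=0)                  # per-column edge counts
    hy = (closed > 0).sum(axis=1)                  # per-row edge counts
    px, _ = find_peaks(hx)
    py, _ = find_peaks(hy)
    if (len(px) <= theta or len(py) <= theta
            or hx[px].mean() <= alpha or hy[py].mean() <= alpha):
        return None                                # likely text-free
    xl, xr, yt, yb = px[0], px[-1], py[0], py[-1]  # rows grow downwards here
    return img_bgr[yt:yb + 1, xl:xr + 1]
```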
3.2.3 Stage III: Rejecting Out-of-Distribution Images
Problem Definition: A trained E2VTS model f with fixed parameters fits a distribution X_f defined on the image space. During inference, rejecting out-of-distribution images in an early-exit way can greatly reduce energy consumption. The out-of-distribution rejection problem [18] can be formulated as binary classification. Examples of the positive and negative cases used to train the rejector are shown in Fig. 4.

Implementation: Grad-CAM [28], a technique for visual explanations via gradient-based localization, is deployed to locate the first text semantic-aware layer l of our model. The outputs of this layer, H_l, serve as high-level features to distinguish out-of-distribution images from in-distribution ones. A Support Vector Machine (SVM) performs the binary classification on H_l; the SVM is preferred over a deep model due to its small number of parameters and low latency.

4. Experiments
4.1. Experiment Settings
4.1.1 Datasets and Evaluation Protocols
We evaluate the proposed E2VTS approach on the LPCVC-20 video text spotting dataset, abbreviated as LPCVC-20. The videos are captured by UAVs flying indoors along corridors, where many posters and board signs with rotated text are present. Five videos were used for training and one video was reserved for testing. After decomposing the videos into frames, we handpicked text-related images as our experiment dataset, which includes 7,886 images for training and 2,033 for testing. Furthermore, the text was annotated using the Auto Labeling algorithm described below. LPCVC-20 consists of images of resolution 3840×2160, 1920×1080, and 1280×720.

IoU, IoP, and IoG are used for detection and edit distance for recognition. Given a ground truth bounding box area G and a predicted bounding box area P, IoU is (P∩G)/(P∪G), IoP (a.k.a. precision) is (P∩G)/P, and IoG (a.k.a. recall) is (P∩G)/G. For each ground truth, the predicted bounding box with the maximum IoU is selected and the edit distance is calculated between the ground truth text label and the predicted text.

Auto Labeling: Image Registration-Aided Annotation for Video Text Spotting. Given the observation that temporally consecutive frames describe the same scene, we propose Auto-Labeling in Algorithm 2 to aid video annotation, exploiting the videos' temporal redundancy and continuity. It takes advantage of feature matching and perspective transformation to transfer the annotated bounding box from the source frame to the target frame. Figure 6 shows the annotation results produced by Algorithm 2.

Algorithm 2: Auto-Labeling
1: I_s ← V[1]; b_s ← Annotate(I_s)        // annotate the 1st frame
2: N ← Size(V)                            // number of frames describing the same scene
3: for i ← 2 to N do
4:     I_t ← V[i]                         // next adjacent frame
5:     k1, d1 ← SIFT(I_s); k2, d2 ← SIFT(I_t)      // feature matching
6:     m ← LoweRatioTest(BFMatcher(d1, d2))
7:     p_s, p_t ← FilterKeyPts(m, k1), FilterKeyPts(m, k2)
8:     b_t ← Perspective(b_s, HomographyMatrix(p_s, p_t))   // perspective transformation
9:     I_s ← I_t; b_s ← b_t
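One iteration of Algorithm 2 maps directly onto OpenCV primitives; in the sketch below, the Lowe ratio of 0.75 and the RANSAC reprojection threshold of 5.0 are common defaults assumed here rather than values stated in the paper.

```python
import cv2
import numpy as np

def transfer_boxes(img_src, img_tgt, boxes_src):
    """One iteration of Algorithm 2: propagate (n, 4, 2) box annotations
    from the source frame to the next frame of the same scene."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_src, None)
    k2, d2 = sift.detectAndCompute(img_tgt, None)
    pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [p[0] for p in pairs                     # Lowe ratio test
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    ps = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pt = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(ps, pt, cv2.RANSAC, 5.0)
    boxes = np.float32(boxes_src).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(boxes, H).reshape(-1, 4, 2)

# Looping `boxes = transfer_boxes(prev, cur, boxes)` over the scene's frames
# reproduces the I_s / b_s update of lines 3-9.
```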
Figure 4: The negative samples in the first row and positive samples in the second row are used to train the out-of-distribution rejector. Heavily-blurred, text-free, and truncated-text frames are all considered negative cases to be rejected.

Figure 5: Sample images from the LPCVC-20 video text spotting dataset.

Figure 6: Qualitative results of Auto Labeling: extract features from the source frame and the target frame, conduct feature matching and a perspective transform to update the bounding box annotation, and repeat the process until the end of the scene.

Energy Consumption Measurement: A USB power meter (a MakerHawk UM34C USB 3.0 multimeter with Bluetooth) is used to measure energy consumption. We connect the USB power meter in series with the power supply of the Raspberry Pi. With this setup, the power meter measures the current through the Raspberry Pi in real time. Since the voltage for the Raspberry Pi is constantly 5V, we can calculate the energy consumption by recording the current values. The power meter is connected to a computer via Bluetooth and the energy measurements of the Raspberry Pi are recorded using [1]. Timestamps for model inference are recorded to measure the model's latency.

4.1.2 Model Compression
Pruning: Pruning compresses a neural network by removing redundant weights or channels. For a Raspberry Pi, structured pruning is preferred over unstructured pruning, since structured pruning does not require specific hardware support for deployment. In our experiments, we applied an ℓ1 filter pruner with a one-shot pruning strategy and a sparsity rate of 0.7, which allowed our model to achieve the best trade-off between accuracy and energy efficiency.

Quantization: Quantization refers to techniques for using a reduced-precision integer representation for weights and activations. For the Raspberry Pi, PyTorch provides the QNNPACK backend, which runs quantized operators efficiently on ARM CPUs. In our experiments, we applied static post-training quantization to all convolutional and fully-connected layers, and dynamic post-training quantization to the LSTM modules in the CRNN model.
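A hedged sketch of the two post-training quantization modes on the QNNPACK backend, on stand-in modules rather than the authors' released models; whether the qnnpack engine is available depends on the PyTorch build.

```python
import torch
import torch.nn as nn
import torch.quantization as tq

# ARM backend used on the Raspberry Pi; may be unavailable in some builds.
torch.backends.quantized.engine = "qnnpack"

# Dynamic post-training quantization of LSTM modules, as used for the
# recognizer. `crnn` here is a stand-in, not the released CRNN.
crnn = nn.LSTM(input_size=64, hidden_size=32, batch_first=True)
crnn_q = tq.quantize_dynamic(crnn, {nn.LSTM}, dtype=torch.qint8)

# Static post-training quantization of conv/fc layers needs quant stubs and
# a calibration pass over representative inputs.
class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant, self.dequant = tq.QuantStub(), tq.DeQuantStub()
        self.conv = nn.Conv2d(3, 8, 3)

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

det = TinyDetector().eval()
det.qconfig = tq.get_default_qconfig("qnnpack")
prepared = tq.prepare(det)
prepared(torch.randn(1, 3, 32, 32))   # calibration (a few batches in practice)
det_q = tq.convert(prepared)
```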
4.2. Ablation Studies
4.2.1 Crop & Resize vs. Aligned RoI Pooling
In this section, we compare the two-step crop & resize E2VTS text spotting model with the two-stage aligned RoIPool text spotting model. From Tables 1 and 2 it can be seen that the E2VTS model performs better than the aligned RoIPool model at all resolutions. The factors that influence performance are also measured: from Tables 1 and 2 it can be concluded that a greater bounding-box-area-to-character-count ratio and a smaller character count improve recognition performance. Table 3 shows the deployment results of E2VTS on the Raspberry Pi.

Table 1. E2VTS results on LPCVC-20 (edit distance per input resolution).
                                 1200   600    300
  BBox area / char count <= 20   N/A    6.86   3.69
  BBox area / char count <= 60   5.25   2.12   3.60
  BBox area / char count > 60    1.93   2.00   2.92
  Char count <= 4                1.08   1.31   2.21
  Char count <= 8                1.86   2.08   3.56
  Char count > 8                 4.20   3.79   5.32
  Total                          1.93   2.04   3.26

Table 2. FOTS results on LPCVC-20 (edit distance per input resolution).
                                 1200   600    300
  BBox area / char count <= 20   N/A    6.84   4.19
  BBox area / char count <= 60   4.56   2.84   5.19
  BBox area / char count > 60    2.62   3.48   5.97
  Char count <= 4                1.83   2.38   3.28
  Char count <= 8                2.98   4.21   6.12
  Char count > 8                 3.95   4.92   8.61
  Total                          2.65   3.48   5.25

Table 3. Performance, latency, and energy measurement of E2VTS on Raspberry Pi.
  Model   IoU    IoP    IoG    EditDistance  Latency  Avg Energy
  E2VTS   72.21  76.24  93.94  1.39          12.90    31.77

4.2.2 Multi-Stage Image Processor
We compare the overall performance of our method after incorporating the different data-level efficiency techniques. As shown in Table 4, incorporating data-level efficiency results in better performance in both accuracy and efficiency.

Table 4. Ablation studies on the multi-stage image processor (latency, energy consumption, and edit distance).
  Stage I  Stage II  Stage III  Latency  Energy   EditDistance
  X        -         -          627.43   1841.49  0.78
  -        X         -          545.20   1349.23  1.14
  -        -         X          571.48   1482.31  1.05
  X        X         X          528.12   1267.2   0.96

Based on the results in Table 4, Stage I pre-processing greatly improves the accuracy of the model by selecting the best-quality frame within a window as the model's input; latency and energy decrease only slightly, due to the extra cost introduced by quality scoring and the decrease of the video's sub-sample rate. Stage II pre-processing mainly decreases the latency and average energy consumption of the model by improving the SNR in the image and rejecting low-quality and non-text frames. Stage III pre-processing also decreases latency and energy consumption by rejecting out-of-distribution frames at an early stage of the detection model. The integration of Stage I, II, and III pre-processing benefits the model in terms of both speed and energy consumption.

4.2.3 Deployment on Raspberry Pi
In this section, we evaluate the overall performance of our method after incorporating the model-level efficiency techniques, pruning and quantization.

Table 5. Ablation studies on pruning (P) and quantization (Q).
  P  Q  Latency  Energy  EditDistance
  X  -  76.48    195.25  1.09
  -  X  56.67    164.73  1.12
  X  X  12.90    39.23   1.14

Based on the results in Table 5, model pruning and quantization significantly decrease latency and average energy consumption, respectively. Although model compression results in a slight drop in accuracy, the trade-off between energy efficiency and accuracy shows that incorporating model-level efficiency notably boosts overall performance.

5. Conclusion
In this paper, we proposed an energy-efficient video text spotting solution, dubbed E2VTS, for Unmanned Aerial Vehicles. E2VTS is an energy-efficiency-driven model that does not compromise text spotting performance. The proposed system utilizes both data-level efficiency enhancements and model-level efficiency boosting methods such as pruning and quantization. Specifically, a sliding window is used to select the highest-quality frame per scene; a Canny-edge-based algorithm is proposed to reject text-free and non-text frames; and a dynamic routing mechanism emphasizes in-distribution inputs. Beyond UAV applications, our video text spotting system is suitable for any energy-constrained scenario.

6. Acknowledgement
We would like to express our sincere gratitude to Ytech Seattle AI Lab, FeDA Lab, Kwai Inc., for their generous financial and technical support during our participation in the LPCVC 2020 UAV Video Track.

References
[1] USB power meter measurement GitHub repository. https://github.com/kolinger/rd-usb.git/.
[2] Youngmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, and Hwalsuk Lee. Character region awareness for text detection. In CVPR, 2019.
[3] Youngmin Baek, Seung Shin, Jeonghun Baek, Sungrae Park, Junyeop Lee, Daehyun Nam, and Hwalsuk Lee. Character region attention for text spotting. In ECCV, 2020.
[4] Fan Bai, Zhanzhan Cheng, Yi Niu, Shiliang Pu, and Shuigeng Zhou. Edit probability for scene text recognition. In CVPR, 2018.
[5] Michal Busta, Lukas Neumann, and Jiri Matas. Deep textspotter: An end-to-end trainable scene text localization and recognition framework. In ICCV, 2017.
[6] John Canny. A computational approach to edge detection. PAMI, 1986.
[7] Bowen Cheng, Yunchao Wei, Honghui Shi, Rogerio Feris, Jinjun Xiong, and Thomas Huang. Decoupled classification refinement: Hard false positive suppression for object detection. arXiv, 2018.
[8] Bowen Cheng, Yunchao Wei, Honghui Shi, Rogerio Feris, Jinjun Xiong, and Thomas Huang. Revisiting rcnn: On awakening the classification power of faster rcnn. In ECCV, 2018.
[9] Zhanzhan Cheng, Jing Lu, Yi Niu, Shiliang Pu, Fei Wu, and Shuigeng Zhou. You only recognize once: Towards fast video text spotting. In ACMMM, 2019.
[10] Zhanzhan Cheng, Jing Lu, Baorui Zou, Liang Qiao, Yunlu Xu, Shiliang Pu, Yi Niu, Fei Wu, and Shuigeng Zhou. Free: A fast and robust end-to-end video text spotter. TIP, 2020.
[11] Dan Deng, Haifeng Liu, Xuelong Li, and Deng Cai. Pixellink: Detecting scene text via instance segmentation. arXiv preprint arXiv:1801.01315, 2018.
[12] Wei Feng, Wenhao He, Fei Yin, Xu-Yao Zhang, and Cheng-Lin Liu. Textdragon: An end-to-end framework for arbitrary shaped text spotting. In ICCV, 2019.
[13] Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In ICML, 2006.
[14] Tong He, Zhi Tian, Weilin Huang, Chunhua Shen, Yu Qiao, and Changming Sun. An end-to-end textspotter with explicit alignment and attention. In CVPR, 2018.
[15] Orest Kupyn, Tetiana Martyniuk, Junru Wu, and Zhangyang Wang. Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In ICCV, 2019.
[16] Chen-Yu Lee and Simon Osindero. Recursive recurrent nets with attention modeling for ocr in the wild. In CVPR, 2016.
[17] Hui Li, Peng Wang, and Chunhua Shen. Towards end-to-end text spotting with convolutional recurrent neural networks. In ICCV, 2017.
[18] Shiyu Liang, Yixuan Li, and R Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR, 2018.
[19] Minghui Liao, Pengyuan Lyu, Minghang He, Cong Yao, Wenhao Wu, and Xiang Bai. Mask textspotter: An end-to-end trainable neural network for spotting text with arbitrary shapes. PAMI, 2019.
[20] Minghui Liao, Guan Pang, Jing Huang, Tal Hassner, and Xiang Bai. Mask textspotter v3: Segmentation proposal network for robust scene text spotting. In ECCV, 2020.
[21] Minghui Liao, Baoguang Shi, and Xiang Bai. Textboxes++: A single-shot oriented scene text detector. TIP, 2018.
[22] Xuebo Liu, Ding Liang, Shi Yan, Dagui Chen, Yu Qiao, and Junjie Yan. Fots: Fast oriented text spotting with a unified network. In CVPR, 2018.
[23] Yuliang Liu, Hao Chen, Chunhua Shen, Tong He, Lianwen Jin, and Liangwei Wang. Abcnet: Real-time scene text spotting with adaptive bezier-curve network. In CVPR, 2020.
[24] Jianqi Ma, Weiyuan Shao, Hao Ye, Li Wang, Hong Wang, Yingbin Zheng, and Xiangyang Xue. Arbitrary-oriented scene text detection via rotation proposals. TMM.
[25] José Luis Pech-Pacheco, Gabriel Cristóbal, Jesús Chamorro-Martinez, and Joaquín Fernández-Valdivia. Diatom autofocusing in brightfield microscopy: a comparative study. In ICPR. IEEE, 2000.
[26] Siyang Qin, Alessandro Bissacco, Michalis Raptis, Yasuhisa Fujii, and Ying Xiao. Towards unconstrained end-to-end text spotting. In ICCV, 2019.
[27] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. TPAMI, 2016.
[28] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, 2017.
[29] Baoguang Shi, Xiang Bai, and Cong Yao. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. TPAMI, 2016.
[30] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. NeurIPS, 2014.
[31] Shu Tian, Wei-Yi Pei, Ze-Yu Zuo, and Xu-Cheng Yin. Scene text detection in video by learning locally and globally. In IJCAI, 2016.
[32] Zhuotao Tian, Michelle Shu, Pengyuan Lyu, Ruiyu Li, Chao Zhou, Xiaoyong Shen, and Jiaya Jia. Learning shape-aware embedding for scene text detection. In CVPR, 2019.
[33] Lan Wang, Jiahao Shi, Yang Wang, and Feng Su. Video text detection by attentive spatiotemporal fusion of deep convolutional features. In Proceedings of the 27th ACM International Conference on Multimedia, pages 66–74, 2019.
[34] Peng Wang, Hui Li, and Chunhua Shen. Towards end-to-end text spotting in natural scenes. arXiv, 2019.
[35] Wenhai Wang, Enze Xie, Xiang Li, Wenbo Hou, Tong Lu, Gang Yu, and Shuai Shao. Shape robust text detection with progressive scale expansion network. In CVPR, 2019.
[36] Xiaobing Wang, Yingying Jiang, Zhenbo Luo, Cheng-Lin Liu, Hyunsoo Choi, and Sungjin Kim. Arbitrary shape scene text detection with adaptive text region representation. In CVPR, 2019.
[37] Xiaobing Wang, Yingying Jiang, Shuli Yang, Xiangyu Zhu, Wei Li, Pei Fu, Hua Wang, and Zhenbo Luo. End-to-end scene text recognition in videos based on multi frame tracking. In ICDAR, 2017.
[38] Yang Wang, Lan Wang, Feng Su, and Jiahao Shi. Video text detection with fully convolutional network and tracking. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pages 1738–1743. IEEE, 2019.
[39] Jianchao Wu, Zhanghui Kuang, Limin Wang, Wayne Zhang, and Gangshan Wu. Context-aware rcnn: A baseline for action detection in videos. In ECCV. Springer, 2020.
[40] Junru Wu, Xiang Yu, Ding Liu, Manmohan Chandraker, and Zhangyang Wang. David: Dual-attentional video deblurring. In WACV, 2020.
[41] Liang Wu, Palaiahnakote Shivakumara, Tong Lu, and Chew Lim Tan. A new technique for multi-oriented scene text line detection and tracking in video. TMM, 2015.
[42] Yue Wu and Prem Natarajan. Self-organized text detection with minimal post-processing via border learning. In ICCV, 2017.
[43] Linjie Xing, Zhi Tian, Weilin Huang, and Matthew R Scott. Convolutional character networks. In ICCV, 2019.
[44] Xue-Hang Yang, Wenhao He, Fei Yin, and Cheng-Lin Liu. A unified video text detection method with network flow. In ICDAR, 2017.
[45] Xin Yi and Mark Eramian. Lbp-based segmentation of defocus blur. TIP, 2016.
[46] Fei Yin, Yi-Chao Wu, Xu-Yao Zhang, and Cheng-Lin Liu. Scene text recognition with sliding convolutional character models. arXiv preprint arXiv:1709.01727, 2017.
[47] Chengquan Zhang, Borong Liang, Zuming Huang, Mengyi En, Junyu Han, Errui Ding, and Xinghao Ding. Look more than once: An accurate detector for text of arbitrary shapes. In CVPR, 2019.
[48] Xinyu Zhou, Cong Yao, He Wen, Yuzhi Wang, Shuchang Zhou, Weiran He, and Jiajun Liang. East: an efficient and accurate scene text detector. In CVPR, 2017.
[49] Ze-Yu Zuo, Shu Tian, Wei-yi Pei, and Xu-Cheng Yin. Multi-strategy tracking based text detection in scene videos. In ICDAR. IEEE, 2015.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "fLWAHmDATx",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=fLWAHmDATx",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to reviewer e1dv",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "d6DDhv3ipj",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=d6DDhv3ipj",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer e1hj",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2EKQo9FL9hw",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=2EKQo9FL9hw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to reviewer e1KK",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "UVMYSgCTUn0",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=UVMYSgCTUn0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer E1ux",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "N4NAlauc3In",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=N4NAlauc3In",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer E13N",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "dXyz5PyaPwV",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=dXyz5PyaPwV",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer E15x",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "WvR-VmQgErH",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=WvR-VmQgErH",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer E1g8",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "FsJGYv8LViqx",
"year": null,
"venue": "ECOOP Workshops 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=FsJGYv8LViqx",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantitative Approaches in Object-Oriented Software Engineering",
"authors": [
"Fernando Brito e Abreu",
"Mario Piattini",
"Geert Poels",
"Houari A. Sahraoui"
],
"abstract": "The QAOOSE’2003 workshop brought together, for a full day, researchers and practitioners working on several aspects related to quantitative evaluation of software artifacts developed with the object-oriented paradigm. Ideas and experiences where shared and discussed. This report includes a summary of the technical presentations and subsequent discussions raised by them. Eleven out of twelve submitted position papers were presented, covering different aspects such as metrics formalization, new metrics (for coupling, cohesion, constraints or dynamic behavior) or metrics validation, to name a few. In the closing session the participants were able to discuss open issues and challenges arising from researching in this area, as well as they tried to forecast which will be the hot topics for research in the short to medium term.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "f-q9SQtrpPEQ",
"year": null,
"venue": "ECOOP Workshops 2006",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=f-q9SQtrpPEQ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantitative Approaches in Object-Oriented Software Engineering",
"authors": [
"Fernando Brito e Abreu",
"Coral Calero",
"Yann-Gaël Guéhéneuc",
"Michele Lanza",
"Houari A. Sahraoui"
],
"abstract": "The QAOOSE 2006 workshop brought together, for a full day, researchers working on several aspects related to quantitative evaluation of software artifacts developed with the object-oriented paradigm and related technologies. Ideas and experiences were shared and discussed. This report includes a summary of the technical presentations and subsequent discussions raised by them. 12 out of 14 submitted position papers were presented, covering different aspects such as metrics, components, aspects and visualization, evolution, quality models and refactorings. In the closing session the participants were able to discuss open issues and challenges arising from researching in this area, and they also tried to forecast which will be the hot topics for research in the short to medium term.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "mD_-CDT9R5j",
"year": null,
"venue": "ECOOP Workshops 2000",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=mD_-CDT9R5j",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantitative Approaches in Object-Oriented Software Engineering",
"authors": [
"Fernando Brito e Abreu",
"Geert Poels",
"Houari A. Sahraoui",
"Horst Zuse"
],
"abstract": "This report summarizes the contributions and discussion of the 4th ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering held in Sophia Antipolis on Tuesday, June 13, 2000. This workshop aims to provide a forum to discuss the current state of the art and the practice in the field of quantitative approaches in the OO field. This year, three aspects were particularly discussed: formal approaches, empirical studies and measurement of analysis and design models. Nine position papers were presented and discussed.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "_3z-i3tgbq",
"year": null,
"venue": "ECOOP Workshops 2004",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=_3z-i3tgbq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "8th Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2004)",
"authors": [
"Coral Calero",
"Fernando Brito e Abreu",
"Geert Poels",
"Houari A. Sahraoui"
],
"abstract": "The workshop was a direct continuation of seven successful workshops, held at previous editions of ECOOP in Darmstadt (2003), Malaga (2002), Budapest (2001), Cannes (2000), Lisbon (1999), Brussels (1998) and Aarhus (1995). This time, as in previous editions, the workshop attracted participants from both academia and industry that are involved / interested in the application of quantitative methods in object oriented software engineering research and practice.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "IJq72z0t45a",
"year": null,
"venue": "ECOOP Workshops 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=IJq72z0t45a",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantitative Approaches in Object-Oriented Software Engineering",
"authors": [
"Fernando Brito e Abreu",
"Brian Henderson-Sellers",
"Mario Piattini",
"Geert Poels",
"Houari A. Sahraoui"
],
"abstract": "This report summarizes the contributions and debates of the 5th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2001), which was held in Budapest on 18–19 June, 2001. The objective of the QAOOSE workshop series is to present, discuss and encourage the use of quantitative methods in object-oriented software engineering research and practice. This year’s workshop included the presentation of eight position papers and one tutorial in the areas of “software metrics definition”, “software size, complexity and quality assessment”, and “software quality prediction models”. The discussion sessions focused on current problems and future research directions in QAOOSE.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "D7pLPeoFMQ2",
"year": null,
"venue": "ECOOP Workshops 1999",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=D7pLPeoFMQ2",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantitative Approaches in Object-Oriented Software Engineering",
"authors": [
"Fernando Brito e Abreu",
"Horst Zuse",
"Houari A. Sahraoui",
"Walcélio L. Melo"
],
"abstract": "This full-day workshop was organized in four sessions. The first three were thematic technical sessions dedicated to the presentation of the recent research results of participants. Seven, out of eleven accepted submissions were orally presented during these three sessions. The first session also included a metrics collection tool demonstration. The themes of the sessions were, respectively, “Metrics Definition and Collection”, “Quality Assessment” and “Metrics Validation”. The last session was dedicated to the discussion of a set of topics selected by the participants.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PGDFXSgULF5",
"year": null,
"venue": "ECOOP Workshops 2002",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=PGDFXSgULF5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantitative Approaches in Object-Oriented Software Engineering",
"authors": [
"Mario Piattini",
"Fernando Brito e Abreu",
"Geert Poels",
"Houari A. Sahraoui"
],
"abstract": "This report summarizes the contributions and debates of the 6th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2002), which was held in Malaga on 11 June, 2002. The objective of the QAOOSE workshop series is to present, discuss and encourage the use of quantitative methods in object-oriented software engineering research and practice. This year’s workshop included the presentation of eleven position papers in the areas of “software metrics definition”, “software size, complexity and quality assessment”, and “software quality prediction models”. The discussion sessions focused on current problems and future research directions in QAOOSE.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "88i0QuYpTDl",
"year": null,
"venue": "ECOOP Workshops 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=88i0QuYpTDl",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantitative Approaches in Object-Oriented Software Engineering",
"authors": [
"Yann-Gaël Guéhéneuc",
"Christian F. J. Lange",
"Houari A. Sahraoui",
"Giovanni Falcone",
"Michele Lanza",
"Coral Calero",
"Fernando Brito e Abreu"
],
"abstract": "The QAOOSE 2007 workshop brought together, for half day, researchers working on several aspects related to quantitative evaluation of software artifacts developed with the object-oriented paradigm and related technologies. Ideas and experiences were shared and discussed. This report includes a summary of the technical presentations and subsequent discussions raised by them. Exceptionally this year, one of the founders of the workshop, Horst Zuse, gave a keynote on the Theoretical Foundations of Object-Oriented Measurement. Three out of the four submitted position papers were presented, covering different aspects such as measuring inconsistencies, visualizing metric values, and assessing the subjective quality of systems. In the closing session, the participants discussed open issues and challenges arising from researching in this area and tried to forecast what will be hot research topics in the short and medium terms.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xaNCNTonJpa",
"year": null,
"venue": "Simul. 2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=xaNCNTonJpa",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Hybrid simulation of brain-skull growth",
"authors": [
"Jing Jin",
"Sahar Shahbazi",
"John E. Lloyd",
"Sidney S. Fels",
"Sandrine de Ribaupierre",
"Roy Eagleson"
],
"abstract": "This paper describes a hybrid model that includes both a standard finite element model and also volume-preserving structural modeling for a clinical application involving skull development in infants, with particular application to craniostosis modeling. To accommodate the growing brain, the skull needs to grow quickly in the first few months of life, and most of the growth of the skull at that time occurs at the sutures. Craniosynostosis, which is a developmental abnormality, occurs when one or more sutures are fused early in life (even in utero) while the skull is growing, resulting in an abnormal skull shape. To study normal brain–skull growth and to develop a model of craniosynostosis, we have developed a hybrid computational model to simulate the relationship between the growing deformable brain and the rigid skull. Our model is composed of the nine segmented skull plates as rigid surfaces, deformable sutures, and a volumetrically controllable deformable brain. The Cranial Index (ratio of biparietal width to fronto-occipital length) is measured during the simulation, showing a characteristic peak during development. Measures of linear growth along each dimension show characteristic increases over time. The hybrid simulation framework shows promise to support further investigations into abnormal skull development. By varying the properties of the sutures in our model, we can now simulate different craniosynostosis models, such as scaphocephaly and trigonocephaly. In this paper, we show results on the evolution of the Cranial Index as calculated using standard landmarks and compare to the normal index, and thereby evaluate our model by comparing it with patient data.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "tuuTjctvjTh",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=tuuTjctvjTh",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "-TL1rbvBQIl",
"year": null,
"venue": "ICMI 2020",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=-TL1rbvBQIl",
"arxiv_id": null,
"doi": null
}
|
{
"title": "First Workshop on Multimodal e-Coaches",
"authors": [
"Leonardo Angelini",
"Mira El Kamali",
"Elena Mugellini",
"Omar Abou Khaled",
"Yordan Dimitrov",
"Vera Veleva",
"Zlatka Gospodinova",
"Nadejda Miteva",
"Richard Wheeler",
"Zoraida Callejas",
"David Griol",
"Kawtar Benghazi Akhlaki",
"Manuel Noguera",
"Panagiotis D. Bamidis",
"Evdokimos I. Konstantinidis",
"Despoina Petsani",
"Andoni Beristain Iraola",
"Dimitrios I. Fotiadis",
"Gérard Chollet",
"Inés Torres",
"Anna Esposito",
"Hannes Schlieter"
],
"abstract": "T e-Coaches are promising intelligent systems that aims at supporting human everyday life, dispatching advices through different interfaces, such as apps, conversational interfaces and augmented reality interfaces. This workshop aims at exploring how e-coaches might benefit from spatially and time-multiplexed interfaces and from different communication modalities (e.g., text, visual, audio, etc.) according to the context of the interaction.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "akm1-NNE6TN",
"year": null,
"venue": "Discovery Science 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=akm1-NNE6TN",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Semantic Enrichment of Data Tables Applied to Food Risk Assessment",
"authors": [
"Hélène Gagliardi",
"Ollivier Haemmerlé",
"Nathalie Pernelle",
"Fatiha Saïs"
],
"abstract": "Our work deals with the automatic construction of domain specific data warehouses. Our application domain concerns microbiological risks in food products. The MIEL++ system [2], implemented during the Sym’Previus project, is a tool based on a database containing experimental and industrial results about the behavior of pathogenic germs in food products. This database is incomplete by nature since the number of possible experiments is potentially infinite. Our work, developed within the e.dot project, presents a way of palliating that incompleteness by complementing the database with data automatically extracted from the Web. We propose to query these data through a mediated architecture based on a domain ontology. So, we need to make them compatible with the ontology. In the e.dot project [5], we exclusively focus on documents in Html or Pdf format which contain data tables. Data tables are very common presentation scheme to describe synthetic data in scientific articles. These tables are semantically enriched and we want this enrichment to be as automatic and flexible as possible. Thus, we have defined a Document Type Definition named SML (Semantic Markup Language) which can deal with additional or incomplete information in a semantic relation, ambiguities or possible interpretation errors. In this paper, we present this semantic enrichment step.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "OQLSe-5cPAQ",
"year": null,
"venue": "ECAI 2016",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-1775",
"forum_link": "https://openreview.net/forum?id=OQLSe-5cPAQ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "ONE - A Personalized Wellness System",
"authors": [
"Ajay Chander",
"Ramya Srinivasan"
],
"abstract": "As the world becomes increasingly digitally readable through a variety of sensors, digital services will play a key role in advising and supporting people towards a variety of goals. In this paper, we present a personalized wellness system that leverages techniques from cognitive science and machine learning to improve a user's well-being by suggesting daily micro-goals (e.g., “bring a healthy snack to work”), and by enabling social sharing of individual achievements. Specifically, we propose a method for estimating a user's likelihood of successfully completing a given micro-goal (“ONE”) and study the correlation between ONEs and users' actions to improve their chances of reaching their wellness objectives.",
"keywords": [],
"raw_extracted_content": "O N E-AP ersonalized Wellness System\nAjay Chander and Ramya Srinivasan1\nAbstract. As the world becomes increasingly digitally readable\nthrough a variety of sensors, digital services will play a key role\nin advising and supporting people towards a variety of goals. Inthis paper, we present a personalized wellness system that leveragestechniques from cognitive science and machine learning to improvea user’s well-being by suggesting daily micro-goals (e.g., “bring ahealthy snack to work”), and by enabling social sharing of individ-ual achievements. Specifically, we propose a method for estimatinga user’s likelihood of successfully completing a given micro-goal(“ONE”) and study the correlation between ONEs and users’ actionsto improve their chances of reaching their wellness objectives.\n1 INTRODUCTION\nHealth-tech, the use of technology to provide personalized healthcareand wellness, is in the midst of a furious renaissance [3]. Given thatlifestyle diseases lead to 75% of long-term healthcare costs, a partic-ular area of focus for healthtech is disease prevention and support ofpositive wellness behaviors [3]. Many sensors to track and visualizeour wellness behaviors have and are being built [6]. However, thesesystems assume that awareness of activity patterns and data will leadto behavior change and goal achievement, an assumption that is notnecessarily true. Our humanity brings with it biases, and as socialanimals, our behaviors tend to be more influenced by social informa-tion [4], [2]. While many health companies (e.g., Healthways) havetaken advantage of the popularity of social media sites such as Face-book and Twitter to create communities that promote participation inexercise and diet programs, they are not designed to offer benefits atan individual level [1].\nIn this work, we propose a personalized wellness system that im-\nproves the chance of a user successfully achieving a wellness ob-jective, wherein objectives could be focus areas such as nutrition,body, mind, etc. (Fig. 1a). The app, called ONE, is motivated bythe cognitive science concepts of novelty and social proof. ONEs\nare purposefully chosen simple micro-goals. A ONE comprises ofan unique image and a textual message (Fig. 1b). A new ONE every24 hours leverages novelty. Additionally, the number of people whohave completed a given ONE is also available, serving as a socialproof to influence action (Fig. 1c). A user can like (heart) or dislike aONE and also post a picture as a proof of having completed a ONE.A preliminary version of the app is available at https://goo.gl/hLSFsi\nWe build machine learning models that can provide an estimate of\na user’s likelihood of hearting/completing a new ONE. Specifically,we learn features that influence a user’s action— to understand whatis appealing/compelling for a user in terms of hearting/completing aONE. We leverage this information to suggest those ONEs that are\n1The authors are with Fujitsu Laboratories of America, USA and have\nequally contributed to the paper.more likely to be completed by a user and hence propel them towardsthe achievement of their wellness objectives.\nFigure 1. Illustration of the ONE app. 
[Figure 1. Illustration of the ONE app. Left: wellness objectives which the users can choose to work on; Center: a sample ONE; Right: community page showing others who have completed a ONE.]

2 METHODOLOGY
The first step in the design of a model that can determine a user's likelihood of completing a new ONE (micro-goal) is selecting features that are good predictors of that likelihood. Towards this, we investigate multi-modal features, namely, the textual features and the image features associated with a ONE message, and user-specific features such as their wellness objectives and their likes history.

2.1 Feature Extraction and Validation
By choosing those features that exhibit high correlations with likes, we can ensure good prediction of a user liking a new ONE. Consider, for example, the correlation between wellness objectives and likes. Let $P(u, v)$ represent the probability with which a user $u$ likes ONEs belonging to a wellness objective $v$. Thus, $P(u, v) = \frac{likes(u, v)}{likes(u, V)}$, where $V$ is the set of all wellness objectives. Averaging $P(u, v)$ across all users provides the probability distribution for the event of an average user liking from a wellness objective $v$.

In order to understand the choice of distribution that best fits the data under consideration, a Cullen and Frey graph is plotted. Results indicated that the observations can be modeled by a Beta distribution. In order to rule out a uniform fit to the data (which would be the case if the users liked randomly across all wellness objectives), a Kolmogorov-Smirnov (K-S) goodness-of-fit test is performed, with the null hypothesis being that a normal distribution fits the data. The result of the test indicates that the null hypothesis is to be rejected. Thus, an average user does not like randomly; instead, there are some favorite wellness objectives, and this information can be leveraged to estimate their likelihood of liking a new ONE. In a similar manner, other predictors are established, the details of which are described next.
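To make the validation procedure above concrete, the following is a minimal sketch (not from the paper) of how the per-objective like probabilities and a goodness-of-fit test could be computed with pandas and SciPy. The `likes` DataFrame, its column names, and the choice to test the averaged probabilities against a uniform distribution are illustrative assumptions.

```python
import pandas as pd
from scipy import stats

# Hypothetical likes log: one row per (user, objective) like event.
likes = pd.DataFrame({
    "user": ["u1", "u1", "u1", "u2", "u2", "u2", "u2"],
    "objective": ["mind", "mind", "body", "body", "mind", "nutrition", "mind"],
})

# P(u, v) = likes(u, v) / likes(u, V): per-user distribution over objectives.
per_user = likes.groupby(["user", "objective"]).size().unstack(fill_value=0)
per_user = per_user.div(per_user.sum(axis=1), axis=0)

# Average P(u, v) across users -> distribution for the "average" user.
avg_user = per_user.mean(axis=0)
print(avg_user)

# Fit a Beta distribution to the averaged probabilities (the family suggested
# by the Cullen and Frey graph in the paper) ...
a, b, loc, scale = stats.beta.fit(avg_user.values, floc=0, fscale=1)

# ... and run a K-S goodness-of-fit test against a uniform distribution,
# which would hold if the average user liked objectives at random.
stat, p_value = stats.kstest(avg_user.values, "uniform")
print(f"K-S statistic={stat:.3f}, p-value={p_value:.3f}")
```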
2.2 Feature Descriptors
Table 1 provides a summary of the feature descriptors used to predict a user's likelihood of liking a new ONE. Each user is represented by a 7-D feature vector that captures textual (rows 1-3 of Table 1), image (rows 4-5), and user-specific information (rows 6-7).

Table 1. Summary of feature descriptors.
| Feature | Description |
| Positive words count | Words associated with a positive sentiment in the ONE text |
| Local heart count | Number of people who liked a ONE, as seen by the user under consideration |
| ONE ID | Unique number associated with the theme of the ONE text |
| Face ID | Binary value indicating presence/absence of a face in the ONE image |
| Emotion ID | Unique number associated with the emotion of the ONE image |
| Likes count | Total number of likes by the user, normalized by the number of likes across all users |
| Objective ID | Unique number associated with the wellness objective preferences of the user |

Positive words in the ONE message are counted based on their co-occurrence with the positive valence words of AFINN-111 [5], a dataset consisting of 2477 words rated for valence between -5 and 5. Based on the relevance of the contents of a ONE text to certain themes (e.g., body, community, growth, mind, or nature related), it is given a unique ONE ID (numbers 1-5). In a similar manner, based on the emotion associated with the ONE image (anger, calmness, disgust, joy, affection, or a neutral emotion), an emotion ID is attached to the image (1-6). Each user is allowed to choose a set of wellness objectives when they install the ONE app. The unique combination of objectives a user chooses is identified by the objective ID feature. The description of the rest of the features is clear from Table 1.

2.3 ONE Achievement Prediction
A user's progress towards achievement of wellness objectives can be predicted in terms of (1) the user liking a new ONE, and (2) the user completing a ONE/posting a picture of it. We explain the model with respect to the user liking a new ONE, but the method generalizes to picture-posting prediction, except that the likes count feature (Table 1) would now be the number of pictures posted in an objective category.

We build a logistic regression (LR) model for each wellness objective. For training, each user $u$ is represented by the 7-D feature vector (Sec. 2.2) and the label is the probability that the user likes a ONE from that wellness objective, $P(u, v)$ (Sec. 2.1). L2 regularization is used in training the model. We use roughly 2/3 of the data for training and the rest for testing. The testing instances are ranked in the order of the obtained probabilities of liking. This, in turn, can be used to suggest ONEs that are more likely to be liked by the user. The model is validated by computing the normalized discounted cumulative gain (NDCG) using the predicted ranks (by the LR model) and the actual ranks (Sec. 2.1) for a user.
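As an illustration only (the paper does not specify its implementation), the per-objective regression and an NDCG@5 validation could look as follows. The synthetic feature vectors, the soft-label gradient-descent training loop, and the use of scikit-learn's `ndcg_score` are all our assumptions; a plain `sklearn` logistic regression would need hard class labels, whereas the paper trains against probabilities.

```python
import numpy as np
from sklearn.metrics import ndcg_score

rng = np.random.default_rng(0)

# Hypothetical data: 60 users x 7-D feature vectors, soft labels P(u, v)
# for one wellness objective v (probabilities, not hard classes).
X = rng.normal(size=(60, 7))
y = rng.uniform(size=60)

# Soft-label logistic regression with L2 regularization, via gradient descent.
w, b, lam, lr = np.zeros(7), 0.0, 1e-2, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # predicted like probability
    grad_w = X.T @ (p - y) / len(y) + lam * w    # cross-entropy + L2 gradient
    grad_b = np.mean(p - y)
    w, b = w - lr * grad_w, b - lr * grad_b

# Rank held-out users by predicted probability and validate with NDCG@5.
X_test, y_test = rng.normal(size=(20, 7)), rng.uniform(size=20)
scores = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))
print("NDCG@5:", ndcg_score([y_test], [scores], k=5))
```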
3 Experiments
User data is being continuously collected anonymously and stored using Amazon Web Services as the mobile app's backend. We maintain multiple data stores, including:
1. User goal-behavior: all aspects of user actions (e.g., hearting, posting pictures, un-hearting) towards completing the ONE are recorded in this database. Each entry of the database has the timestamp of the corresponding action, the user's ID, the associated ONE, and the local heart count.
2. User app-interactions: this database records all interactions of the users with the app (e.g., entered the vision page, entered the community page, etc.). Each entry contains the user's ID, the timestamp of the interaction, the corresponding ONE, and the actual interaction.

We report initial results on a dataset comprising over 50 users spanning 3 months. We first analyzed the correlation between various features and users' likes using the Chi-squared correlation coefficient. Specifically, we set the null hypothesis that there does not exist a correlation between the chosen feature and users' likes, and computed the Chi-squared correlation coefficient at the 95% significance level. Results indicated a statistically significant correlation between emotion ID and likes, with most users liking ONEs that conveyed a feeling of "joy" and "affection". Many users were interested in ONEs related to community activities, followed by those ONEs concerning mind. While there was a high correlation of Objective ID, Tag ID, and Likes count with users' likes, the Face ID had the least correlation with likes. In future work, we would like to have a feature weighting strategy and also explore sequences of users' interactions for better prediction.

The ONE app has been designed through a human-centered design process, which included testing various versions of the app and its interface and interactions with a variety of users in our lab. In order to evaluate the empirical prediction performance of the LR model, we computed $NDCG_5$, defined by $\frac{DCG_5}{IDCG_5}$, wherein $DCG_5$ is estimated from the LR model using the predicted probabilities of users liking ONEs from a wellness objective, and $IDCG_5$ is computed from the actual number of likes by the user across various objectives. Results gave a decent value of 0.8388. A Naive Bayes classifier was also tested, but the LR model was the better one.

4 Conclusions
We presented ONE, a personalized wellness system, that aids users in achieving their goals by suggesting new micro-goals every day and by providing social information for behavioral support. By learning the probability with which a user likes or completes a ONE, it is possible to suggest those ONEs that are more likely to be completed by the user, thus enabling goal achievement. Initial experiments demonstrated promising performance of the system.

REFERENCES
[1] D. Centola, 'Social media and the science of health behavior', Circulation, 2135-2144, (2013).
[2] N. Christakis and J. Fowler, 'Social contagion theory: Examining dynamic social networks and human behavior', Statistics in Medicine, (2011).
[3] Rock Health, 'Digital health consumer adoption', (2015).
[4] R. Nickerson, 'Confirmation bias: A ubiquitous phenomenon in many guises', Review of General Psychology, 175-220, (1998).
[5] F. Nielsen, 'A new ANEW: Evaluation of a word list for sentiment analysis in microblogs', ESWC Workshop on Making Sense of Microposts, (2011).
[6] Z. Yumak and P. Pearl, 'Survey of sensor-based personal management systems', BioNanoScience, 254-269, (2013).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "5C3PCKjOHSd",
"year": null,
"venue": "ECAI 2020",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA200339",
"forum_link": "https://openreview.net/forum?id=5C3PCKjOHSd",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Retrospective and Prospective Mixture-of-Generators for Task-Oriented Dialogue Response Generation",
"authors": [
"Jiahuan Pei",
"Pengjie Ren",
"Christof Monz",
"Maarten de Rijke"
],
"abstract": "Dialogue response generation (DRG) is a critical component of task-oriented dialogue systems (TDSs). Its purpose is to generate proper natural language responses given some context, e.g., historical utterances, system states, etc. State-of-the-art work focuses on how to better tackle DRG in an end-to-end way. Typically, such studies assume that each token is drawn from a single distribution over the output vocabulary, which may not always be optimal. Responses vary greatly with different intents, e.g., domains, system actions. We propose a novel mixture-of-generators network (MoGNet) for DRG, where we assume that each token of a response is drawn from a mixture of distributions. MoGNet consists of a chair generator and several expert generators. Each expert is specialized for DRG w.r.t. a particular intent. The chair coordinates multiple experts and combines the output they have generated to produce more appropriate responses. We propose two strategies to help the chair make better decisions, namely, a retrospective mixture-of-generators (RMoG) and a prospective mixture-of-generators (PMoG). The former only considers the historical expert-generated responses until the current time step while the latter also considers possible expert-generated responses in the future by encouraging exploration. In order to differentiate experts, we also devise a global-and-local (GL) learning scheme that forces each expert to be specialized towards a particular intent using a local loss and trains the chair and all experts to coordinate using a global loss. We carry out extensive experiments on the MultiWOZ benchmark dataset. MoGNet significantly outperforms state-of-the-art methods in terms of both automatic and human evaluations, demonstrating its effectiveness for DRG.",
"keywords": [],
"raw_extracted_content": "Retrospective and Prospective Mixture-of-Generators\nfor Task-Oriented Dialogue Response Generation\nJiahuan Pei and Pengjie Ren and Christof Monz and Maarten de Rijke1\nAbstract. Dialogue response generation (DRG) is a critical com-\nponent of task-oriented dialogue systems (TDSs). Its purpose is to\ngenerate proper natural language responses given some context, e.g.,historical utterances, system states, etc. State-of-the-art work focuseson how to better tackle DRG in an end-to-end way. Typically, suchstudies assume that each token is drawn from a single distributionover the output vocabulary, which may not always be optimal. Re-sponses vary greatly with different intents, e.g., domains, system ac-tions. We propose a novel mixture-of-generators network (MoGNet)for DRG, where we assume that each token of a response is drawnfrom a mixture of distributions. MoGNet consists of a chair genera-tor and several expert generators. Each expert is specialized for DRGw.r.t. a particular intent. The chair coordinates multiple experts andcombines the output they have generated to produce more appropri-ate responses. We propose two strategies to help the chair make bet-ter decisions, namely, a retrospective mixture-of-generators (RMoG)and a prospective mixture-of-generators (PMoG). The former onlyconsiders the historical expert-generated responses until the currenttime step while the latter also considers possible expert-generatedresponses in the future by encouraging exploration. In order to dif-ferentiate experts, we also devise a global-and-local (GL) learningscheme that forces each expert to be specialized towards a partic-ular intent using a local loss and trains the chair and all experts tocoordinate using a global loss. We carry out extensive experimentson the MultiWOZ benchmark dataset. MoGNet significantly outper-forms state-of-the-art methods in terms of both automatic and humanevaluations, demonstrating its effectiveness for DRG.\n1 INTRODUCTION\nTask-oriented dialogue systems (TDSs) have sparked considerableinterest due to their broad applicability, e.g., for booking flight tick-ets or scheduling meetings [32, 34]. Existing TDS methods can bedivided into two broad categories: pipeline multiple-module mod-els [2, 5, 34] and end-to-end single-module models [11, 30]. The for-mer decomposes the TDS task into sequentially dependent modulesthat are addressed by separate models while the latter proposes to usean end-to-end model to solve the entire task. In both categories, thereare many factors to consider in order to achieve good performance,such as user intent understanding [31], dialogue state tracking [37],and dialogue response generation (DRG). Given a dialogue context\n(dialogue history, states, retrieved results from a knowledge base,etc.), the purpose of DRG is to generate a proper natural languageresponse that leads to task-completion, i.e., successfully achievingspecific goals, and that is fluent, i.e., generating natural and fluent\n1University of Amsterdam, The Netherlands, email: {j.pei, p.ren, c.monz,\nderijke}@uva.nl\nFigure 1 : Density of the relative token frequency distribution for dif-\nferent intents (domains in the top plot, system actions in the bottom\nplot). We use kernel density estimation2to estimate the probability\ndensity function of a random variable from a relative token frequencydistribution.\nutterances.\nRecently proposed DRG methods have achieved promising re-\nsults (see, e.g., LaRLAttnGRU [36]). 
However, when generating a response, all current models assume that each token is drawn from a single distribution over the output vocabulary. This may be unreasonable because responses vary greatly with different intents, where intent may refer to domain, system action, or other criteria for partitioning responses, e.g., the source of dialogue context [24]. To support this claim, consider the training set of the Multi-domain Wizard-of-Oz (MultiWOZ) benchmark dataset [4], where 67.4% of the dialogues span multiple domains and all of the dialogues span multiple types of system actions. We plot the density of the relative token frequency distributions in responses of different intents over the output vocabulary in Fig. 1. Although there is some overlap among distributions, there are also clear differences. For example, the token [entrance] has a high probability of being drawn from the distributions for the intent of booking an attraction, but not from booking a taxi. Thus, we hypothesize that a response should be drawn from a mixture of distributions for multiple intents rather than from a single distribution for a general intent.

We propose a mixture-of-generators network (MoGNet) for DRG, which consists of a chair generator and several expert generators. Each expert is specialized for a particular intent, e.g., one domain, or one type of action of a system, etc. The chair coordinates multiple experts and generates the final response by taking the utterances generated by the experts into consideration. Compared with previous methods, the advantages of MoGNet are at least two-fold: First, the specialization of different experts and the use of a chair for combining the outputs breaks the bottleneck of a single model [10, 19]. Second, it is more easily traceable: we can analyze who is responsible when the model makes a mistake and generates an inappropriate response.

[Figure 2: Overview of MoGNet. It illustrates how the model generates the token y_3 given sequence X as an input in the process of generating the whole sequence Y as a dialogue response.]

We propose two strategies to help the chair make good decisions, i.e., retrospective mixture-of-generators (RMoG) and prospective mixture-of-generators (PMoG). RMoG only considers the retrospective utterances generated by the experts, i.e., the utterances generated by all the experts prior to the current time step. However, a chair without a long-range vision is likely to make sub-optimal decisions. Consider, for example, these two responses: "what day will you be traveling?" and "what day and time would you like to travel?" If we only consider these responses until the 2nd token (which RMoG does), then the chair might choose the first response due to the absence of a more long-range view of the important token "time" located after the 2nd token. Hence, we also propose PMoG, which enables the chair to make full use of the prospective predictions of experts as well.
To effectively train MoGNet, we devise a global-and-local (GL) learning scheme. The local loss is defined on a segment of data with a certain intent, which forces each expert to specialize. The global loss is defined on all data, which forces the chair and all experts to coordinate with each other. The global loss can also improve data utilization by enabling the backpropagation error of each data sample to influence all experts as well as the chair.

To verify the effectiveness of MoGNet, we carry out experiments on the MultiWOZ benchmark dataset. MoGNet significantly outperforms state-of-the-art DRG methods, improving over the best performing model on this dataset by 5.64% in terms of overall performance (0.5*Inform + 0.5*Success + BLEU) and 0.97% in terms of response generation quality (Perplexity).

The main contributions of this paper are:
- a novel MoGNet model that is the first framework that devises chair and expert generators for DRG, to the best of our knowledge;
- two novel coordination mechanisms, i.e., RMoG and PMoG, to help the chair make better decisions; and
- a GL learning scheme to differentiate experts and fuse data efficiently.

2 MIXTURE-OF-GENERATORS NETWORK
We focus on task-oriented DRG (a.k.a. the context-to-text generation task [4]). Formally, given a current dialogue context $X = (U, B, D)$, where $U$ is a combination of previous utterances, $B$ are the belief states, and $D$ are the retrieved database results based on $B$, the goal of task-oriented DRG is to generate a fluent natural language response $Y = (y_1, \ldots, y_n)$ that contains appropriate system actions to help users accomplish their task goals, e.g., booking a flight ticket. We propose MoGNet to model the generation probability $P(Y \mid X)$.

2.1 Overview
The MoGNet framework consists of two types of roles:
- $k$ expert generators, each of which is specialized for a particular intent, e.g., a domain, a type of action of a system, etc. Let $\mathcal{D} = \{(X_p, Y_p)\}_{p=1}^{|\mathcal{D}|}$ denote a dataset with $|\mathcal{D}|$ independent samples of $(X, Y)$. Expert-related intents partition $\mathcal{D}$ into $k$ pieces $S = \{S_l\}_{l=1}^{k}$, where $S_l \triangleq \{(X_p^l, Y_p^l)\}_{p=1}^{|S_l|}$. Then $S_l$ is used to train each expert by predicting $P_l(Y^l \mid X^l)$. We expect the $l$-th expert to perform better than the others on $S_l$.
- a chair generator, which learns to coordinate a group of experts to make an optimal decision. The chair is trained to predict $P(Y \mid X)$, where $(X, Y)$ is a sample from $\mathcal{D}$.

Fig. 2 shows our implementation of MoGNet; it consists of three types of components, i.e., a shared context encoder, $k$ expert decoders, and a chair decoder.
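A small sketch (illustrative only; sample texts and labels are invented) of the intent-based partition assumed in the overview: the full dataset D trains the chair, while each per-intent subset S_l trains expert l.

```python
from collections import defaultdict

# Hypothetical samples: (context X, response Y, intent label), where the
# intent here is the domain of the dialogue turn.
dataset = [
    ("i need a cheap hotel", "what area would you like?", "hotel"),
    ("book a taxi to the station", "what time do you want to leave?", "taxi"),
    ("find me a museum", "there are [value count] museums in town.", "attraction"),
]

# Partition D into per-intent subsets S_l; if a sample had multiple intents,
# it would be assigned to every corresponding expert (see §3.5).
subsets = defaultdict(list)
for context, response, intent in dataset:
    subsets[intent].append((context, response))

for intent, s_l in subsets.items():
    print(intent, len(s_l))
```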
2.2 Shared context encoder
The role of the shared context encoder is to read the dialogue context $X$ and construct a representation. We follow Budzianowski et al. [3] and model the current dialogue context as a combination of user utterances $U$, belief states $B$, and retrieval results from a database $D$.

First, we employ a Recurrent Neural Network (RNN) [7] to map a sequence of input tokens $U = \{w_1, \ldots, w_m\}$ to hidden vectors $H^U = \{h_1^U, \ldots, h_m^U\}$. The hidden vector $h_i$ at the $i$-th step can be represented as:

$h_i^U, s_i = \mathrm{RNN}(\mathbf{w}_i, h_{i-1}^U, s_{i-1})$,  (1)

where $\mathbf{w}_i$ is the embedding of the token $w_i$. The initial state $s_0$ of the RNN is set to 0.

Then, we represent the current dialogue context $\mathbf{x}$ as a combination of the user utterance representation $h_m^U$, the belief state vector $h^B$, and the database vector $h^D$:

$\mathbf{x} = \tanh(W_u h_m^U + W_b h^B + W_d h^D)$,  (2)

where $h_m^U$ is the final hidden state from Eq. 1; $h^B$ is a 0-1 vector with each dimension representing a state (slot-value pair); $h^D$ is also a 0-1 vector, which is built by querying the database with the current state $B$. Each dimension of $h^D$ represents a particular result from the database (e.g., whether a flight ticket is available).

2.3 Expert decoder
Given the current dialogue context $X$ and the current decoded tokens $Y_{0:j-1}$, the $l$-th expert outputs the probability $P_l(y_j^l \mid Y_{0:j-1}, X)$ over the vocabulary $V$ at the $j$-th step by:

$P_l(y_j^l \mid Y_{0:j-1}, X) = \mathrm{softmax}(U^T o_j^l + b)$
$o_j^l, s_j^l = \mathrm{RNN}(\mathbf{y}_{j-1} \oplus c_j^l, o_{j-1}^l, s_{j-1}^l)$,  (3)

where $U$ is the parameter matrix and $b$ is a bias; $s_j^l$ is the state vector, which is initialized by the dialogue context vector from the shared context encoder, i.e., $s_0^l = \mathbf{x}$; $\mathbf{y}_{j-1}$ is the embedding of the generated token at time step $j-1$; $\oplus$ is the concatenation operation; $c_j^l$ is the context vector, which is calculated with a concatenation attention mechanism [1, 18] over the hidden representations from the shared context encoder as follows:

$c_j^l = \sum_{i=1}^{m} \alpha_{ji}^l h_i$
$\alpha_{ji}^l = \frac{\exp(w_{ji}^l)}{\sum_{i=1}^{m} \exp(w_{ji}^l)}$
$w_{ji}^l = v_l^T \tanh(W_l^T (h_i \oplus s_{j-1}^l) + b_l)$,  (4)

where $\alpha$ is a set of attention weights; $\oplus$ is the concatenation operation; $W_l$, $b_l$, $v_l$ are learnable parameters, which are not shared by different experts in our experiments.

2.4 Chair decoder
Given the current dialogue context $X$ and the current decoded tokens $Y_{0:j-1}$, the chair decoder estimates the final token prediction distribution $P(y_j \mid Y_{0:j-1}, X)$ by combining the prediction probabilities from $k$ experts. Here, we consider two strategies to leverage the prediction probabilities from experts, i.e., RMoG and PMoG. The former only considers expert generator outputs from history (until the $(j-1)$-th time step), which follows the typical neural Mixture-of-Experts (MoE) architecture [25, 27]. We propose the latter to make the chair generator envision the future (i.e., after the $(j-1)$-th time step) by exploring expert generator outputs from $t$ extra steps ($t \in [1, n-j]$, $t \in \mathbb{N}$).

Specifically, the chair determines the prediction $P(y_j \mid Y_{0:j-1}, X)$ as follows:

$P(y_j \mid Y_{0:j-1}, X) = \beta_j^C \cdot P(y_j^c \mid Y_{0:j-1}, X) + \sum_{l=1}^{k} (\beta_j^{l,R} + \beta_j^{l,P}) \cdot P(y_j^l \mid Y_{0:j-1}^l, X)$,  (5)

where $P(y_j^c \mid Y_{0:j-1}, X)$ is the prediction probability from the chair itself; $P(y_j^l \mid Y_{0:j-1}, X)$ is the prediction probability from expert $l$; $\beta_j = [\beta_j^C, \beta_j^{l,R}, \beta_j^{l,P}]$ are normalized coordination coefficients, which are calculated as:

$\beta_j = \frac{\exp(v^T h_j)}{\sum_{l=1}^{k} \exp(v^T h_l)}$
$h_j = \mathrm{MLP}([P(y_j^c \mid Y_{0:j-1}, X), h_j^R, h_j^P])$.  (6)

$\beta_j^C$, $\beta_j^{l,R}$ and $\beta_j^{l,P}$ are estimated w.r.t. $P(y_j^c \mid Y_{0:j-1}, X)$, $h_j^R$ and $h_j^P$, respectively. $h_j^R$ is a list of retrospective decoding outputs from all experts, which is defined as follows:

$h_j^R = P(y_{1:j-1}^1 \mid y_0, X) \oplus \cdots \oplus P(y_{1:j-1}^l \mid y_0, X) \oplus \cdots \oplus P(y_{1:j-1}^k \mid y_0, X)$,  (7)

where $y_0$ is a special token "[BOS]" indicating the start of decoding; $P(y_{1:j-1}^l \mid y_0, X)$ is the output of expert $l$ from the 1st to the $(j-1)$-th step using Eq. 3; $h_j^P$ is a list of prospective decoding outputs from all experts, which is defined as follows:

$h_j^P = P(y_{j:j+t}^1 \mid Y_{0:j-1}, X) \oplus \cdots \oplus P(y_{j:j+t}^l \mid Y_{0:j-1}, X) \oplus \cdots \oplus P(y_{j:j+t}^k \mid Y_{0:j-1}, X)$,  (8)

where $P(y_{j:j+t}^l \mid Y_{0:j-1}, X)$ are the outputs of expert $l$ from the $j$-th to the $(j+t)$-th step. We obtain $P(y_{j:j+t}^l \mid X)$ by forcing expert $l$ to generate $t$ steps using Eq. 3 based on the current generated tokens $Y_{0:j-1}$.
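The following numerical sketch (our illustration, not the authors' code) shows the mixing step of Eq. 5: chair and expert token distributions are combined with jointly normalized coordination coefficients. The toy shapes and the stand-in logits for $v^T h_j$ are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, vocab = 3, 10  # 3 experts, toy vocabulary of 10 tokens

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Token distributions at step j: one from the chair, one per expert (Eq. 3).
p_chair = softmax(rng.normal(size=vocab))
p_expert = np.stack([softmax(rng.normal(size=vocab)) for _ in range(k)])

# Coordination coefficients beta_j (Eq. 6): one weight for the chair plus a
# retrospective and a prospective weight per expert, normalized together.
logits = rng.normal(size=1 + 2 * k)  # stand-in for v^T MLP([P_chair, h_R, h_P])
beta = softmax(logits)
beta_c, beta_r, beta_p = beta[0], beta[1:1 + k], beta[1 + k:]

# Eq. 5: P(y_j|.) = beta_C * P_chair + sum_l (beta_R_l + beta_P_l) * P_expert_l
p_final = beta_c * p_chair + (beta_r + beta_p) @ p_expert
print(p_final.sum())  # ~1.0, since the betas are jointly normalized
```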
2.5 Learning scheme
We devise a global-and-local learning scheme to train MoGNet. Each expert $l$ is optimized by a localized expert loss defined on $S_l$, which forces each expert to specialize on one of the portions of data $S_l$. We use a cross-entropy loss for each expert, and the joint loss for all experts is as follows:

$\mathcal{L}_{experts} = \sum_{l=1}^{k} \sum_{(X_p^l, Y_p^l) \in S_l} \sum_{j=1}^{n} \mu_l \, \mathbf{y}_j^l \log P(y_j^l \mid Y_{0:j-1}^l, X)$,  (9)

where $P(y_j^l \mid Y_{0:j-1}^l, X)$ is the token prediction by expert $l$ (Eq. 3) computed on the $r$-th data sample; $\mathbf{y}_j^l$ is a one-hot vector indicating the ground truth token at $j$.

We also design a global chair loss to differentiate the losses incurred from different experts. The chair can attribute the source of errors to the expert in charge. For each data sample in $\mathcal{D}$, we calculate the combined token prediction $P(y_j \mid Y_{0:j-1}, X)$ (Eq. 5). Then the global loss becomes:

$\mathcal{L}_{chair} = \sum_{r=1}^{|\mathcal{D}|} \sum_{j=1}^{n} \mathbf{y}_j \log P(y_j \mid Y_{0:j-1}, X)$.  (10)

Our overall optimization follows the joint learning paradigm, defined as a weighted combination of the constituent losses:

$\mathcal{L} = \lambda \cdot \mathcal{L}_{experts} + (1 - \lambda) \cdot \mathcal{L}_{chair}$,  (11)

where $\lambda$ is a hyper-parameter to regulate the importance between the experts and the chair for optimizing the loss.
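A compact PyTorch-style sketch (illustrative; the paper's actual implementation differs) of the GL scheme in Eqs. 9-11: a local cross-entropy per expert on its own subset, a global cross-entropy for the chair on all data, mixed by $\lambda$. The random logits stand in for model outputs; in the real model, the chair distribution comes from the mixture in Eq. 5 rather than from separate parameters.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
lam, k, vocab, n = 0.5, 3, 10, 4  # lambda, experts, vocab size, response length

# Hypothetical per-step logits for one training sample whose intent places it
# in expert 0's subset S_0.
targets = torch.randint(vocab, (n,))
expert_logits = [torch.randn(n, vocab, requires_grad=True) for _ in range(k)]
chair_logits = torch.randn(n, vocab, requires_grad=True)  # stand-in for Eq. 5

# Local loss (Eq. 9): only the responsible expert is supervised on this sample.
loss_experts = F.cross_entropy(expert_logits[0], targets)

# Global loss (Eq. 10): the chair's combined prediction is supervised on all data.
loss_chair = F.cross_entropy(chair_logits, targets)

# GL objective (Eq. 11): weighted combination of local and global losses.
loss = lam * loss_experts + (1 - lam) * loss_chair
loss.backward()
print(float(loss))
```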
3 EXPERIMENTAL SETUP
3.1 Research questions
We seek to answer the following research questions: (RQ1) Does MoGNet outperform state-of-the-art end-to-end single-module DRG models? (RQ2) How does the choice of a particular coordination mechanism (i.e., RMoG, PMoG, or neither of the two) affect the performance of MoGNet? (RQ3) How does the GL learning scheme compare to using general global learning as a learning scheme?

3.2 Dataset
Our experiments are conducted on the MultiWOZ [4] dataset (http://dialogue.mi.eng.cam.ac.uk/index.php/corpus/). This is the latest large-scale human-to-human TDS dataset with rich semantic labels, e.g., domains and dialogue actions, and benchmark results of response generation. MultiWOZ consists of ~10k natural conversations between a tourist and a clerk. It has 6 specific action-related domains, i.e., Attraction, Hotel, Restaurant, Taxi, Train, and Booking, and 1 universal domain, i.e., General. 67.4% of the dialogues are cross-domain, covering 2-5 domains on average. The average number of turns per dialogue is 13.68; a turn contains 13.18 tokens on average. The dataset is randomly split into 8,438/1,000/1,000 dialogues for training, validation, and testing, respectively.

3.3 Model variants and baselines
We consider a number of variants of the proposed mixture-of-generators model:
- MoGNet: the proposed model with RMoG, PMoG, and the GL learning scheme.
- MoGNet-P: the model without prospection ability, obtained by removing the PMoG coordination mechanism from MoGNet.
- MoGNet-P-R: the model that removes both coordination mechanisms and retains the GL learning scheme.
- MoGNet-GL: the model that removes the GL learning scheme from MoGNet.

See Table 1 for a summary. Without further indication, the intents used are based on identifying eight different domains: Attraction, Booking, Hotel, Restaurant, Taxi, Train, General, and UNK.

Table 1: Model variants.
| Model | $\beta_j^C$ | $\beta_j^{l,R}$ | $\beta_j^{l,P}$ | $\lambda$ |
| MoGNet | True | True | True | 0.5 |
| MoGNet-P | True | True | False | 0.5 |
| MoGNet-P-R | True | False | False | 0.5 |
| MoGNet-GL | True | True | True | 0.0 |
($\beta_j^C$, $\beta_j^{l,R}$, $\beta_j^{l,P}$ are from Eq. 5. "True" means we preserve the coefficient and learn it as is; "False" means we remove it (set it to 0). $\lambda$ is from Eq. 11 and we report two settings, 0.0 and 0.5. See §5.2.)

To answer RQ1, we compare MoGNet with the following methods that have reported results on this task according to the official leaderboard (the Context-to-Text Generation task at https://github.com/budzianowski/multiwoz):
- S2SAttnLSTM. We follow the dominant Sequence-to-Sequence (Seq2Seq) model under an encoder-decoder architecture [5] and reproduce the benchmark baseline, i.e., the single-module model named S2SAttnLSTM [3, 4], based on the source code provided by the authors.
- S2SAttnGRU. A variant of S2SAttnLSTM, with Gated Recurrent Units (GRUs) instead of LSTMs and other settings kept the same.
- Structured Fusion. It learns the traditional dialogue modules and then incorporates these pre-trained, sequentially dependent modules into end-to-end dialogue models by structured fusion networks [20].
- LaRLAttnGRU. The state-of-the-art model [36], which uses reinforcement learning and models system actions as latent variables. LaRLAttnGRU uses ground truth system action annotations and user goals to estimate the rewards for reinforcement learning during training.

3.4 Evaluation metrics
We use the following commonly used evaluation metrics [4, 36]:
- Inform: the fraction of responses that provide a correct entity out of all responses.
- Success: the fraction of responses that answer all the requested attributes out of all responses.
- BLEU: for comparing the overlap between a generated response and one or more reference responses.
- Score: defined as Score = (0.5 * Inform + 0.5 * Success + BLEU) * 100. This measures the overall performance in terms of both task completion and response fluency [20].
- PPL: the perplexity of the generated responses, defined as the exponentiation of the entropy. This measures how well a probabilistic DRG model predicts a token in a response generation process.

We use the toolkit released by Budzianowski et al. [3] (https://github.com/budzianowski/multiwoz) to compute the metrics. Following their settings, we also use Score as the selection criterion to choose the best model on the validation set and report the performance of that model on the test set. We use a paired t-test to measure the statistical significance (p < 0.01) of relative improvements.
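A trivial helper (ours, for illustration) making the Score definition above executable; inputs are fractions in [0, 1].

```python
def overall_score(inform: float, success: float, bleu: float) -> float:
    """Score = (0.5 * Inform + 0.5 * Success + BLEU) * 100."""
    return (0.5 * inform + 0.5 * success + bleu) * 100

# Example with MoGNet's reported test numbers (Table 2).
print(overall_score(0.8530, 0.7330, 0.2013))  # ~99.43
```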
3.5 Implementation details
Theoretically, the training time complexity of each data sample is $O(n \cdot (k+1) \cdot n)$, where $n$ is the number of response tokens. To reduce the computation cost, we assign $j + t = n$ and compute the expert prediction with Eq. 3. This means that the chair will make a final decision only after all the experts have decoded their final tokens. Thus, the time complexity decreases to $O(n \cdot (k+1) + n)$.

For a fair comparison, the vocabulary size is the same as in Budzianowski et al. [4], which is 400 tokens. Out-of-vocabulary words are replaced with "[UNK]". We set the word embedding size to 50 and all GRU hidden state sizes to 150. We use Adam [13] as our optimization algorithm with hyperparameters $\alpha = 0.005$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$. We also apply gradient clipping [22] with range [-5, 5] during training. We use $l_2$ regularization to alleviate overfitting, the weight of which is set to $10^{-5}$. We set the mini-batch size to 64. We use greedy search to generate the responses during testing. Please note that if a data point has multiple intents, then we assign it to each corresponding expert, respectively. The code is available online at https://github.com/Jiahuan-Pei/multiwoz-mdrg.

4 RESULTS
4.1 Automatic evaluation
We evaluate the overall performance of MoGNet and the comparable baselines on the metrics defined in §3.4. The results are shown in Table 2.

Table 2: Comparison results of MoGNet and the baselines. Bold face indicates leading results; significant improvements over the best baseline are marked with * (paired t-test, p < 0.01).
| Model | BLEU | Inform | Success | Score | PPL |
| S2SAttnLSTM | 18.90% | 71.33% | 60.96% | 85.05 | 3.98 |
| S2SAttnGRU | 18.21% | 81.50% | 68.80% | 93.36 | 4.12 |
| Structured Fusion [20] | 16.34% | 82.70% | 72.10% | 93.74 | - |
| LaRLAttnGRU [36] | 12.80% | 82.78% | 79.20% | 93.79 | 5.22 |
| MoGNet | 20.13%* | 85.30%* | 73.30% | 99.43* | 4.25 |

First of all, MoGNet outperforms all baselines by a large margin in terms of the overall performance metric, i.e., satisfaction Score. It significantly outperforms the state-of-the-art baseline LaRLAttnGRU by 5.64% (Score) and 0.97 (PPL). Thus, MoGNet not only improves the satisfaction of responses but also improves the quality of the language modeling process. MoGNet also achieves more than a 6.70% overall improvement over the benchmark baseline S2SAttnLSTM and its variant S2SAttnGRU. This proves the effectiveness of the proposed MoGNet model.

Second, LaRLAttnGRU achieves the highest performance in terms of Success, followed by MoGNet. However, it shows a 7.33% decrease in BLEU and a 2.56% decrease in Inform compared to MoGNet. Hence, LaRLAttnGRU is good at answering all requested attributes but not as good at providing more appropriate entities with high fluency as MoGNet. LaRLAttnGRU tends to generate more slot values to increase the probability of answering the requested attributes. Take an extreme case as an example: if we force a model to generate all tokens with slot values, then it will achieve an extremely high Success but a low BLEU.

Third, S2SAttnLSTM is the worst model in terms of overall performance (Score). But it achieves the best PPL. It tends to generate frequent tokens from the vocabulary, which exhibits better language modeling characteristics. However, it fails to provide useful information (the requested attributes) to meet the user goals. By contrast, MoGNet improves user satisfaction (i.e., Score) greatly and achieves response fluency by taking specialized generations from all experts into account.
4.2 Human evaluation
To further understand the results in Table 2, we conducted a human evaluation of the generated responses from S2SAttnGRU, LaRLAttnGRU, and MoGNet. We ask workers on Amazon Mechanical Turk (AMT, https://www.mturk.com/) to read the dialogue context and choose the responses that satisfy the following criteria: (i) Informativeness measures whether the response provides appropriate information that is requested by the user query, with no extra inappropriate information provided. (ii) Consistency measures whether the generated response is semantically aligned with the ground truth response. (iii) Satisfactory measures whether the response has an overall satisfactory performance in terms of both Informativeness and Consistency. As with existing studies [20], we sample one hundred context-response pairs for human evaluation. Each sample is labeled by three workers. The workers are asked to choose either all responses that satisfy the specific criteria or the "NONE" option, which denotes that none of the responses satisfy the criteria. To make sure that the annotations are of high quality, we calculate the fraction of the responses that satisfy each criterion out of all responses that pass the golden test. That is, we only consider the data from the workers who have chosen the golden response as an answer.

The results are displayed in Table 3.

Table 3: Results of human evaluation. Bold face indicates the best results. ">= n" means that at least n AMT workers regard it as a good response w.r.t. Informativeness, Consistency, and Satisfactory.
| Criterion | S2SAttnGRU >=1 | S2SAttnGRU >=2 | LaRLAttnGRU >=1 | LaRLAttnGRU >=2 | MoGNet >=1 | MoGNet >=2 |
| Informativeness | 56.79% | 31.03% | 76.54% | 44.83% | 80.25% | 53.45% |
| Consistency | 45.21% | 23.53% | 71.23% | 39.22% | 80.82% | 50.98% |
| Satisfactory | 26.79% | 25.00% | 44.64% | 21.88% | 60.71% | 37.50% |

MoGNet performs better than S2SAttnGRU and LaRLAttnGRU on Informativeness because it frequently outputs responses that provide richer information (compared with S2SAttnGRU) and less extra inappropriate information (compared with LaRLAttnGRU). MoGNet obtains the best Consistency results, which means MoGNet is able to generate responses that are semantically similar to the golden responses, with large overlaps. The results of LaRLAttnGRU outperform S2SAttnGRU in all cases except for Satisfactory under the strict condition (>=2). This reveals that balancing between Informativeness and Consistency makes it difficult for the AMT workers to assess the overall quality measured by Satisfactory. In this case, MoGNet receives the most votes on Satisfactory under the strict condition (>=2) as well as the loose condition (>=1). This shows that the workers consider the responses from MoGNet more appropriate than those of the other two models, with a high degree of agreement. To sum up, MoGNet is able to generate user-favored responses in addition to the improvements on automatic metrics.

4.3 Coordination mechanisms
In Table 4 we contrast the effectiveness of different coordination mechanisms.

Table 4: The impact of coordination mechanisms. Underlined results indicate the worst results, with a statistically significant decrease compared to MoGNet (paired t-test, p < 0.01).
| Model | BLEU | Inform | Success | Score | PPL |
| MoGNet | 20.13% | 85.30% | 73.30% | 99.43 | 4.25 |
| MoGNet-P | 19.51% | 79.40% | 71.80% | 95.11 | 4.19 |
| MoGNet-P-R | 18.16% | 85.10% | 72.20% | 96.81 | 4.12 |

We can see that MoGNet-P loses 4.32% overall performance, with a 0.62% decrease in BLEU, a 5.90% decrease in Inform, and a 1.50% decrease in Success. This shows that the prospection design of the PMoG mechanism is beneficial to both task completion and response fluency. In particular, most improvements come from providing more correct entities while improving generation fluency. MoGNet-P-R loses 2.62% in Score, with a 1.97% decrease in BLEU, a 0.2% decrease in Inform, and a 1.10% decrease in Success. Thus, the MoGNet framework is effective thanks to its design with two types of roles: the chair and the experts.
5 ANALYSIS

In this section, we explore MoGNet in more detail. In particular, we examine (i) whether the intent partition affects the performance of MoGNet (§5.1); (ii) whether the improvements of MoGNet could simply be attributed to having a larger number of parameters (§5.2); (iii) how the hyper-parameter λ (Eq. 11) affects the performance of MoGNet (§5.2); and (iv) how RMoG, PMoG and GL influence DRG, using a small case study (§5.3).

5.1 Intent partition analysis

As stated above, the responses vary a lot for different intents, which are differentiated by the domain and the type of system action. Therefore, we experiment with the two types of intents shown in Table 6.

Table 6: Two groups of intents, divided by domains and the type of system actions.

Type    Intents
Domain  Attraction, Booking, Hotel, Restaurant, Taxi, Train, General, UNK.
Action  Book, Inform, NoBook, NoOffer, OfferBook, OfferBooked, Select, Recommend, Request, Bye, Greet, Reqmore, Welcome, UNK.

To address (i), we compared two ways of partitioning intents. MoGNet-domain and MoGNet-action denote the intent partitions w.r.t. domains and system actions, respectively. MoGNet-domain has 8 intents (domains) and MoGNet-action has 14 intents (actions), as shown in Table 6. The results are shown in Table 7.

Table 7: Results of MoGNet with the two ways of partitioning intents.

                BLEU     Inform   Success  Score   PPL
MoGNet-domain   20.13%   85.30%   73.30%   99.43   4.25
MoGNet-action   17.28%   79.40%   69.70%   91.83   4.48

MoGNet consistently outperforms the baseline S2SAttnGRU for both ways of partitioning intents. Interestingly, MoGNet-domain greatly outperforms MoGNet-action. We believe there are two reasons: First, the system actions are not suitable for grouping intents because some partition subsets are hard to distinguish from each other, e.g., OfferBook and OfferBooked. Second, some system actions only have a few data samples, simply not enough to specialize the experts. The results show that different ways of partitioning intents may greatly affect the performance of MoGNet. Therefore, more effective intent partition methods, e.g., adaptive implicit intent partitions, need to be explored in future work.
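To make the two partitions concrete, the following sketch routes an intent label to an expert index under each scheme; the intent inventories are taken from Table 6, while the lookup function itself is purely illustrative:

```python
# Illustrative routing of a dialogue turn to an expert under the two
# partitions of Table 6. The lookup logic is an assumption for exposition.
DOMAIN_INTENTS = ["Attraction", "Booking", "Hotel", "Restaurant",
                  "Taxi", "Train", "General", "UNK"]            # 8 experts
ACTION_INTENTS = ["Book", "Inform", "NoBook", "NoOffer", "OfferBook",
                  "OfferBooked", "Select", "Recommend", "Request",
                  "Bye", "Greet", "Reqmore", "Welcome", "UNK"]  # 14 experts

def expert_index(label: str, partition: str = "domain") -> int:
    inventory = DOMAIN_INTENTS if partition == "domain" else ACTION_INTENTS
    return inventory.index(label) if label in inventory else inventory.index("UNK")

print(expert_index("Hotel"))                 # -> 2 (MoGNet-domain)
print(expert_index("Recommend", "action"))   # -> 7 (MoGNet-action)
```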
5.2 Hyper-parameter analysis

To address (ii), we show the results of MoGNet and S2SAttnGRU with different hidden sizes in Fig. 3. S2SAttnGRU outperforms MoGNet when the number of parameters is less than 0.6e7. However, MoGNet achieves much better results with more parameters. Most importantly, the results from both models show that a larger number of parameters does not always mean better performance, which indicates that the improvement of MoGNet is not simply due to more parameters.

[Figure 3: Score of MoGNet and S2SAttnGRU with different numbers of parameters.]

To address (iii), we report the Score values of MoGNet with different values of λ (Eq. 11), as shown in Fig. 4. When λ = 0, no expert is trained on a particular intent. When λ = 1, the model ignores the global loss, i.e., the RMoG and PMoG mechanisms are not used and the chair is only trained as a general expert. We can see that these two settings decrease the performance greatly, which further verifies the effectiveness of RMoG and PMoG as well as of the MoGNet framework. We also note that the performance of MoGNet is quite stable when λ ∈ [0.1, 0.7], with the best performance at λ = 0.7. Hence, MoGNet is not very sensitive to the hyper-parameter λ.

[Figure 4: Score of MoGNet with different values of λ.]

5.3 Case study

To address (iv), we select an example to illustrate the influence of RMoG, PMoG, and GL. Table 8 exhibits the responses generated by the comparable baselines (i.e., S2SAttnGRU, LaRLAttnGRU) and the MoGNet variants from Table 4. In red we highlight the tokens that show the differences in terms of task completion. Generally, MoGNet can generate more appropriate and meaningful responses. Specifically, without PMoG, MoGNet-P and MoGNet-P-R ignore the fact that the attribute time is important for searching for a train ticket (1st turn) and omit the exact departure time ([value_time]) of the train (2nd turn). Without GL, MoGNet-GL ignores the primary time information need day (1st turn) and omits the implicit need for [value_price] (4th turn). There are also some low-quality cases, e.g., MoGNet and the baselines occasionally generate redundant and lengthy responses, because none of them addresses this issue explicitly during training.

Table 8: Example responses from MoGNet with the ablation settings in a 4-turn dialogue.

Turn 1
  User:        i am looking for a train departing from [value_place] that will arrive at [value_place].
  S2SAttnGRU:  what day would you like to travel?
  LaRLAttnGRU: there are [value_count] trains leaving [value_place] on [value_day]. what day would you like to travel?
  MoGNet:      what day and time would you like to travel?
  MoGNet-P:    what day will you be traveling?
  MoGNet-P-R:  what day will you be traveling?
  MoGNet-GL:   there are [value_count] trains ... do you have a time you would like to depart?

Turn 2
  User:        i would like to leave on [value_day] and arrive no later than [value_time].
  S2SAttnGRU:  [train_id] will arrive at [value_time].
  LaRLAttnGRU: i have train [train_id] leaving at [value_time] and arriving at [value_time]. would you like me to book this for you?
  MoGNet:      [train_id] leaves at [value_time] and arrives by [value_time].
  MoGNet-P:    [train_id] will arrive at [value_time].
  MoGNet-P-R:  the [train_id] arrives at [value_time].
  MoGNet-GL:   i have train [train_id] that leaves at [value_time] and arrives at [value_time].
  +All:        would you like me to book that for you?

Turn 3
  User:        can i please have the reference number?
  All:         ... how many tickets do you need?

Turn 4
  User:        could you book [value_count] tickets for me and send me the reference number?
  S2SAttnGRU:  i have booked you [value_count] tickets on the [train_id]. your reference number is [train_reference]. is there anything else i can help you with?
  LaRLAttnGRU: i have booked [value_count] tickets on [train_id]. the reference number is [train_reference].
  MoGNet:      i have booked [value_count] tickets for you. the total fee is [value_price] gbp payable at the station. your reference number is [train_reference]. is there anything else i can help you with?
  MoGNet-P:    i have booked [value_count] tickets. the total fee is [value_price] gbp payable at the station. your reference number is [train_reference].
  MoGNet-P-R:  booking was successful, the total fee is [value_price] gbp payable at the station. reference number is [train_reference].
  MoGNet-GL:   i have booked [value_count] tickets for you. the reference number is [train_reference]. is there anything else i can help you with?
6 RELATED WORK

Traditional models for DRG [8, 33] decompose the task into sequentially dependent modules, e.g., Dialogue State Tracking (DST) [37], Policy Learning (PL) [35], and Natural Language Generation (NLG) [21]. Such models allow for targeted failure analyses, but inevitably incur upstream error propagation problems [5]. Recent work views DRG as a source-to-target transduction problem, which maps a dialogue context to a response [11, 17, 31]. Sordoni et al. [28] show that using an RNN to generate text conditioned on dialogue history results in more natural conversations. Later improvements include the addition of attention mechanisms [16, 29], modeling the hierarchical structure of dialogues [26], or jointly learning belief spans [15]. Strengths of these methods include global optimization and easier adaptation to new domains [5].

The studies listed above assume that each token of a response is sampled from a single distribution, given a complex dialogue context. In contrast, MoGNet uses multiple cooperating modules, which exploits the specialization capabilities of different experts and the generalization capability of a chair. Work most closely related to ours in terms of modeling multiple experts includes [6, 12, 14, 23]. Le et al. [14] integrate a chat model with a question answering model using an LSTM-based mixture-of-experts method. Their model is similar to MoGNet-GL-P (without PMoG and GL), except that they simply use two implicit expert generators that are not specialized on particular intents. Guo et al. [12] introduce a mixture-of-experts to use the data relationship between multiple domains for binary classification and sequence tagging. Sequence tagging generates a set of fixed labels; DRG generates diverse, appropriate response sequences. The differences between MoGNet and these two approaches are threefold: First, MoGNet consists of a group of modules including a chair generator and several expert generators; this design addresses the module interdependence problem, since each module is independent from the others. Second, the chair generator alleviates the error propagation problem because it is able to manage the overall errors through an effective learning scheme. Third, the models of those two approaches cannot be directly applied to task-oriented DRG. The recently published HDSA [6] slightly outperforms MoGNet on Score (+0.07), but it relies heavily on BERT [9] and graph-structured dialog acts. MoGNet follows the same modular TDS framework as [23], but it performs substantially better thanks to fitting the expert generators with both retrospection and prospection abilities and adopting the GL learning scheme to conduct more effective learning.
7 CONCLUSION AND FUTURE WORK

In this paper, we propose a novel mixture-of-generators network (MoGNet) model with different coordination mechanisms, namely RMoG and PMoG, to enhance dialogue response generation. We also devise a GL learning scheme to effectively learn MoGNet. Experiments on the MultiWOZ benchmark demonstrate that MoGNet significantly outperforms state-of-the-art methods in terms of both automatic and human evaluations. We also conduct analyses that confirm the effectiveness of MoGNet, of the RMoG and PMoG mechanisms, and of the GL learning scheme.

As to future work, we plan to devise more fine-grained expert generators and to experiment on more datasets to test MoGNet. In addition, MoGNet can be advanced in many directions: First, better mechanisms can be proposed to improve the coordination between the chair and the expert generators. Second, it would be interesting to study how to partition intents automatically. Third, it is also important to investigate how to avoid redundant and lengthy responses in order to provide a better user experience.

ACKNOWLEDGEMENTS

This research was partially supported by Ahold Delhaize, the Association of Universities in the Netherlands (VSNU), the China Scholarship Council (CSC), and the Innovation Center for Artificial Intelligence (ICAI). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.

REFERENCES

[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
[2] A. Bordes and J. Weston. Learning end-to-end goal-oriented dialog. In ICLR, 2017.
[3] P. Budzianowski, I. Casanueva, B.-H. Tseng, and M. Gasic. Towards end-to-end multi-domain dialogue modelling. Technical report, Cambridge University, 2018.
[4] P. Budzianowski, T.-H. Wen, B.-H. Tseng, I. Casanueva, S. Ultes, O. Ramadan, and M. Gasic. MultiWOZ - a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In EMNLP, pages 5016–5026, 2018.
[5] H. Chen, X. Liu, D. Yin, and J. Tang. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explorations Newsletter, 19(2):25–35, 2017.
[6] W. Chen, J. Chen, P. Qin, X. Yan, and W. Y. Wang. Semantically conditioned dialog response generation via hierarchical disentangled self-attention. In ACL, pages 3696–3709, 2019.
[7] K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, pages 1724–1734, 2014.
[8] P. Crook, A. Marin, V. Agarwal, K. Aggarwal, T. Anastasakos, R. Bikkula, D. Boies, A. Celikyilmaz, S. Chandramohan, Z. Feizollahi, R. Holenstein, M. Jeong, O. Z. Khan, Y.-B. Kim, E. Krawczyk, X. Liu, D. Panic, V. Radostev, N. Ramesh, J.-P. Robichaud, A. Rochette, S. L., and R. Sarikaya. Task completion platform: A self-serve multi-domain goal oriented dialogue platform. In NAACL, pages 47–51, 2016.
[9] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, pages 4171–4186, 2019.
[10] T. G. Dietterich. Ensemble methods in machine learning. In Proceedings of the First International Workshop on Multiple Classifier Systems, pages 1–15, 2000.
[11] M. Eric, L. Krishnan, F. Charette, and C. D. Manning. Key-value retrieval networks for task-oriented dialogue. In SIGDIAL, pages 37–49, 2017.
[12] J. Guo, D. J. Shah, and R. Barzilay. Multi-source domain adaptation with mixture of experts. In EMNLP, pages 4694–4703, 2018.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[14] P. Le, M. Dymetman, and J.-M. Renders. LSTM-based mixture-of-experts for knowledge-aware dialogues. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 94–99, 2016.
[15] W. Lei, X. Jin, M.-Y. Kan, Z. Ren, X. He, and D. Yin. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In ACL, pages 1437–1447, 2018.
[16] J. Li, M. Galley, C. Brockett, G. P. Spithourakis, J. Gao, and B. Dolan. A persona-based neural conversation model. In ACL, pages 994–1003, 2016.
[17] J. Li, W. Monroe, T. Shi, S. Jean, A. Ritter, and D. Jurafsky. Adversarial learning for neural dialogue generation. In EMNLP, pages 2157–2169, 2017.
[18] T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, pages 1412–1421, 2015.
[19] S. Masoudnia and R. Ebrahimpour. Mixture of experts: a literature survey. Artificial Intelligence Review, 42(2):275–293, 2014.
[20] S. Mehri, T. Srinivasan, and M. Eskenazi. Structured fusion networks for dialog. In SIGDIAL, pages 165–177, 2019.
[21] F. Mi, M. Huang, J. Zhang, and B. Faltings. Meta-learning for low-resource natural language generation in task-oriented dialogue systems. In IJCAI, pages 3151–3157, 2019.
[22] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, pages 1310–1318, 2013.
[23] J. Pei, P. Ren, and M. de Rijke. A modular task-oriented dialogue system using a neural mixture-of-experts. In SIGIR Workshop on Conversational Interaction Systems, 2019.
[24] J. Pei, A. Stienstra, J. Kiseleva, and M. de Rijke. SEntNet: Source-aware recurrent entity networks for dialogue response selection. In 4th International Workshop on Search-Oriented Conversational AI (SCAI), August 2019.
[25] P. Schwab, D. Miladinovic, and W. Karlen. Granger-causal attentive mixtures of experts: Learning important features with neural networks. In AAAI, pages 4846–4853, 2019.
[26] I. V. Serban, A. Sordoni, Y. Bengio, A. C. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776–3784, 2016.
[27] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In ICLR, 2017.
[28] A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. In NAACL-HLT, pages 196–205, 2015.
[29] O. Vinyals and Q. Le. A neural conversational model. In ICML Deep Learning Workshop, 2015.
[30] T.-H. Wen, D. Vandyke, N. Mrkšić, M. Gasic, L. M. R. Barahona, P.-H. Su, S. Ultes, and S. Young. A network-based end-to-end trainable task-oriented dialogue system. In EACL, pages 438–449, 2017.
[31] T.-H. Wen, D. Vandyke, N. Mrkšić, M. Gasic, L. M. R. Barahona, P.-H. Su, S. Ultes, and S. Young. A network-based end-to-end trainable task-oriented dialogue system. In EACL, pages 438–449, 2017.
[32] J. D. Williams, K. Asadi, and G. Zweig. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In ACL, pages 665–677, 2017.
[33] Z. Yan, N. Duan, P. Chen, M. Zhou, J. Zhou, and Z. Li. Building task-oriented dialogue systems for online shopping. In AAAI, pages 4618–4626, 2017.
[34] S. Young, M. Gašić, B. Thomson, and J. D. Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.
[35] Z. Zhang, M. Huang, Z. Zhao, F. Ji, H. Chen, and X. Zhu. Memory-augmented dialogue management for task-oriented dialogue systems. TOIS, 37(3):34, 2019.
[36] T. Zhao, K. Xie, and M. Eskenazi. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models. In NAACL, pages 1208–1218, 2019.
[37] V. Zhong, C. Xiong, and R. Socher. Global-locally self-attentive encoder for dialogue state tracking. In ACL, pages 1458–1467, 2018.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "4VNeBCSK9G",
"year": null,
"venue": "EC-TEL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=4VNeBCSK9G",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Digital Library Framework for Reusing e-Learning Video Documents",
"authors": [
"Paolo Bolettieri",
"Fabrizio Falchi",
"Claudio Gennaro",
"Fausto Rabitti"
],
"abstract": "The objective of this paper is to demonstrate the reuse of digital content, as video documents or PowerPoint presentations, by exploiting existing technologies for automatic extraction of metadata (OCR, speech recognition, cut detection, MPEG-7 visual descriptors, etc.). The multimedia documents and the extracted metadata are then indexed and managed by the Multimedia Content Management System (MCMS) MILOS, specifically developed to support design and effective implementation of digital library applications. As a result, the indexed digital material can be retrieved by means of content based retrieval on the text extracted and on the MPEG-7 visual descriptors (via similarity search), assisting the user of the e-Learning Library (student or teacher) to retrieve the items not only on the basic bibliographic metadata (title, author, etc.).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "fpsJUkBm-6",
"year": null,
"venue": "EAMT 2006",
"pdf_link": "https://aclanthology.org/2006.eamt-1.30.pdf",
"forum_link": "https://openreview.net/forum?id=fpsJUkBm-6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using Patterns for Machine Translation",
"authors": [
"Stella Markantonatou",
"Sokratis Sofianopoulos",
"Vassiliki Spilioti",
"George Tambouratzis",
"Marina Vassiliou",
"Olga Yannoutsou"
],
"abstract": "Stella Makantonatou, Sokratis Sofianopoulos, Vassiliki Spilioti, George Tambouratzis, Marina Vassiliou, Olga Yannoutsou. Proceedings of the 11th Annual Conference of the European Association for Machine Translation. 2006.",
"keywords": [],
"raw_extracted_content": "Using patterns for Machine Translation (MT) \nStella Markantonatou, Sokratis Sofianopoulo s, Vassiliki Spilioti, George Tambouratzis, \nMarina Vassiliou, Olga Yannoutsou∗\nInstitute for Language and Speech Processi ng, Artemidos 6; 151 25 Athens (Greece) \n{marks | s_sofian | vspiliot | giorg_t | mvas | olga}@ilsp.gr \nAbstract. In this paper an innovative approach is presented for MT, which is based on pat-\ntern matching techniques, relies on extensive target language monolingual corpora and em-\nploys a series of similarity weights between the source and the target language. Our system \nis based on the notion of ‘patterns’, which are viewed as ‘models’ of target language \nstrings, whose final form is defined by the corpus. \n \n1. Introduction \nWith this work, we further explore the ideas \ntested within the METIS-I1 system (Dologlou \net al. 2003) which proved the feasibility of the \ninnovative idea that sound translations could \nbe received with hybrid MT that relied on monolingual corpora – rather than parallel ones – and flat bilingual lexica. This is the main difference between METIS systems and cor-\npus-based approaches (EBMT, SMT) which \nrely on bilingual corpora. For corpus-based MT approaches, which have taken the lead \nfrom rule-based ones (H utchins 1995), the \nbasic resources, i.e. parallel corpora, are \nscarce. Such corpora are rare and available for the very widely spoken languages only. In \naddition, they quite often represent a certain \nregister or sublanguage. Efforts to face the problem have focused on reducing the size of the required parallel corpus (Al Onaizan \n(2000), Brown (2003)). By resorting to mono-\nlingual corpora only, the METIS projects pur-sue a radically different solution to the prob-lem of scarcity of resources. However, \nMETIS-I too faced a serious problem of \nsparseness of data as it could manipulate only \n \n1METIS was funded by EU under the FET Open \nScheme (METIS-I, IST-2001- 32775), while METIS-II, \nthe continuation of METIS, is being funded under the \nFET-STREP scheme of FP6 (METIS-II, IST-FP6-\n003768). The assessment project METIS ended in Febru-\nary 2003, while the second phase started in October 2004 \nand has a 36 month duration. sentences as units. In METIS-II, the frame-\nwork of the present work (Markantonatou et al. \n2005), material at sub-sentential level, namely \nchunks, is exploited to generate translations. \nThe great promise with corpus-based ap-\nproaches lies in that ‘hard-to-manipulate’ lin-guistic information can be induced from the \ncorpus rather than be ing explicitly represented \nwith a constantly growing collection of rules. \nThe syntactic and semantic preferences of words (one of the reasons why the number of \nrules tends to explode in both hand-crafted and \ntree-bank induced grammars (Gaizauskas 1995)) constitute a large part of the implicit information provided by the corpus. A similar \nargument can be made about word order. Thus, \nwork on (various approaches to) corpus-based MT aimed at making do without resorting to \nany expensive linguistic resources such as \n(rich) computational lexica and grammars (Nagao 1984, Brown 1990). However, it has become evident that some amount of linguistic knowledge is necessary (see, for instance Pop-\nowich (2005) for the case of SMT and Carl & \nWay (2003) for various uses of linguistic re-sources in Example-Based MT). 
Actually, nowadays, the investigation of hybrid systems combining easy-to-obtain resources from all MT paradigms, rule-based included, is considered a very promising path of research in the field (Nirenburg & Raskin (2004), Thurmair (2005)).

In the work presented here, an innovative hybrid approach is adopted, which relies on target language (TL) corpus information at sub-sentential level and employs pattern matching techniques. Many efforts to exploit sub-sentential evidence are reported in state-of-the-art MT, ranging from n-gram approaches in SMT (Ney 2005) to sophisticated parsers' output (Way 2003) and template alignment (McTait 2003) in EBMT. The pattern matching technique we present here uses the monolingual corpus as a source of TL patterns and as a repository of implicit information, which is exploited to resolve issues related to lexical affiliations in the TL (co-occurrence tendencies, argument selection) and to capture language-dependent properties such as word order.

2. Patterns

Several researchers in the corpus-based MT paradigm have reported on the use of patterns. However, these patterns differ from the patterns employed in the work presented here. Lepage (1997) employs sequences of words to improve matching with the source language (SL) side of the parallel corpus. Best matching scores are achieved when long SL strings of the parallel corpus are identical with strings in the input sentence. No operations on strings are foreseen. McTait (2003), Brown (2003) and Kitamura (2004) (among others) create patterns, namely sequences of words and variables standing for sequences of words, both for matching on the SL side and for generating translations. In the work presented here the term 'pattern' is not used in any of the ways presented above, for two reasons: (a) there are no parallel corpora and there is no direct matching of the SL string with strings in the same language; and (b) more importantly, patterns are not viewed as fixed strings with or without slots for variables, but as 'models' of TL strings, which receive their final form only after the corpus has been consulted. Consultation of the corpus is performed with pattern matching techniques.

The intuition behind patterns as used in the work presented here is simple. The SL structure consists of a verb and satellite chunks which are either arguments of the verb or modifiers denoting time, place or manner. In the general case, we would like to recover in the TL the verbal meaning and the meaning conveyed by the satellite chunks. For instance, if an event is described in the SL involving two participants and information about time and place, we would like the translation to report the same event with the same number of participants and the same information about time and place. Crucially, however, we do not require that all these meaning components have the same syntactic status across the language pair. This is achieved with the mechanism of the pattern matching algorithm, which employs a set of similarity weights (see Section 2.3) and allows for similar grammatical and syntactic categories in addition to identical ones. In this sense an AdjP may match with an AdjP, an NP or a PP, in decreasing order of similarity.

2.1 Patterns in SL and TL

Patterns are generated from the output of the chunkers used for both languages and are formed by chunks and their respective constituents.
Depending on the phase of the matching algorithm, different types of pattern are used, as the system concentrates on different types of information. It must be noted, however, that only a very small number of pattern types is required. Thus, for both the SL and the TL only three types of pattern are used: the Clause Pattern, the VG Pattern and the PP Pattern.

Clause Pattern
(PP* token*)* VG (PP* token*)* [where 'token' refers to adverbials and punctuation]

The Clause pattern describes the overall structure of a clause: the verbal group head and the number, labels and heads of the chunks (if any exist).

The VG pattern describes the verb group. Other tokens, such as adverbs, if found within the verb phrase will be part of it, while if found in isolation they are not considered to form a pattern and are treated in a different way.

The PP pattern describes both prepositional and noun chunks in terms of their constituent tokens. The generalisation here is that a noun chunk can be represented as a prepositional one with an empty prepositional head. This representation captures phrase category mismatches between SL and TL of the sort exemplified in (1).

1. [pp ∅ [np_nom ο σκύλος]] [vg μπήκε] [pp στο [np_acc δωμάτιο]]
   [pp ∅ [np1 the dog]] [vg entered] [pp ∅ [np2 the room]]²

2 NP1 and NP2 are chunk labels indicating the position of the TL PP patterns in relation to the VG pattern.

2.2 Pattern acquisition

This is a hybrid approach, because pattern acquisition is rule-based: already existing and rather trivial tools are used for both the SL and TL, including taggers, lemmatisers and chunkers. Certainly, adjustments had to be made to both the SL and TL tools to improve the compatibility of the resulting patterns.

The TL corpus is processed off-line once and then stored in a relational database of TL patterns containing (a) clause patterns indexed on the basis of their main verb and the number of their chunks and (b) PP patterns classified according to their head.

The pattern derived from the SL input, the "TL-like pattern" from now on, is created in real time. The SL input is tagged, lemmatised, chunked and fed as input to a bilingual flat dictionary. All tokens from the SL string (2) are looked up in the lexicon and multiple translation equivalents are derived (3). No score is attached to the multiple translations. It must be stressed that one of the advantages of the pattern matching approach presented here is that it does not rely on frequency information: as opposed to statistical approaches, the pattern matching one does not miss rare occurrences and combinations of words or patterns.

2. [ppgof ∅ [np_nm Ο υπουργός] [np_ge Οικονομικών]] [vg διέλυσε] [ppgof ∅ [np_ac τη συνάντηση] [np_ge της επιτροπής]] [ppgof για [np_ac την κακοποίηση] [np_ge ανηλίκων]]³
   (literal translation: The Finance Minister broke up the committee meeting about child abuse)

3 Heads of PP patterns are marked in bold.

3. [ppgof ∅ [np_nm The minister / secretary] [np_ge Finance / economics]] [vg break up / dissolve] [ppgof ∅ [np_ac meeting / encounter] [np_ge commission / committee]] [ppgof for / about [np_ac abuse] [np_ge child / juvenile]]

The multiple TL-like patterns obtained are fed to the core translation engine to match them against the respective patterns in the TL corpus.
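Since the lexicon lookup in (3) yields several unscored equivalents per token, the set of TL-like patterns is, in effect, the cartesian product of the per-token alternatives. A minimal sketch of this expansion step, with illustrative data structures (the real system operates on chunked patterns rather than flat lemma lists):

```python
from itertools import product

# Hypothetical flat-lexicon lookup for some lemmas of example (2);
# each SL lemma maps to its unscored TL equivalents, as in example (3).
lexicon = {
    "υπουργός": ["minister", "secretary"],
    "διαλύω": ["break up", "dissolve"],
    "συνάντηση": ["meeting", "encounter"],
}

def tl_like_patterns(sl_lemmas):
    """Expand an SL lemma sequence into all candidate TL-like sequences."""
    alternatives = [lexicon.get(lemma, [lemma]) for lemma in sl_lemmas]
    return [list(seq) for seq in product(*alternatives)]

# 2 * 2 * 2 = 8 unscored candidates for the core engine to match.
print(len(tl_like_patterns(["υπουργός", "διαλύω", "συνάντηση"])))
```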
Thus, in our approach, rather than asking, as Nagao (1984) and the EBMT paradigm did, 'tell me how you have translated it and I will repeat the translation', we require that the algorithm, which we provide with TL-like strings, exploits corpus information and elicits grammatical strings.

2.3 Pattern matching

As mentioned before, METIS-II maps TL-like patterns onto patterns retrieved from the monolingual TL corpora. By addressing this matching problem as a general, weighted assignment problem, METIS-II manages to resolve translation issues without resorting to linguistic rules.

Mapping is carried out by comparing patterns in both languages and assigning scores. The degree of similarity across patterns is revealed on the basis of appropriate information depending on the types of pattern compared. Scores are calculated with the use of a series of weights⁴, which provide information regarding the similarity of tags, tokens, lemmata and chunk labels. Chunk labels denote categorical status, apart from the label NP1 for the TL, which denotes a pre-verbal nominal chunk adjacent to the verb group. For example, a tag similarity weight with a value of 0 indicates that the two tags involved cannot be considered as 'matching' (e.g. a verbal tag will never map onto a prepositional tag), while a value of 1 means that the 'matching' is ideal. Weights are used by the assignment algorithm in order to achieve the optimum mapping. Thus the system manages to correct the word order and to delete/insert tokens. In the following section we use an example to illustrate the overall procedure.

4 For a discussion of the formulas used for score calculation, see Markantonatou et al. (2005) and Tambouratzis et al. (2006).

3. Patterns

In four distinct steps, the pattern-matching algorithm proceeds gradually from wider patterns to narrower ones, ensuring that the largest continuous piece of information is retrieved as such, while mismatching areas are identified. We illustrate the procedure using the SL sentence in (4), where the clause pattern has a VS order, which is ungrammatical for English declarative, non-emphatic clauses:

4. Συνήθως [vg διαρκούν] για ώρες [pp ∅ [np_nm οι εβδομαδιαίες συναντήσεις] [np_ge των πιο βαρετών ανθρώπων]]
   (literal translation: Usually last for hours the weekly meetings of the most boring people)

We expect the system to produce string (5), which will then be fed to a morphological generator for English (not yet implemented):

5. [pp ∅ [np1 the weekly meeting]] [pp of [np2 the most boring people]] usually [vg last] for hour

SL string (4) is tagged, lemmatised and chunked, and the resulting TL-like pattern is fed to the system. At the first step the algorithm delimits the matching process within the clause boundaries. Therefore, the TL clause database is searched for clause patterns similar to the TL-like one in terms of the verbal head and the number of contained chunks, which should equal or exceed by up to 2 the chunk number of the TL-like pattern. The best matching clause retrieved from the TL corpus at this step is given in (6):

6. One charge of the battery lasts for hours, even at top speed,
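The first-step retrieval can be pictured as a simple indexed lookup; the sketch below assumes a clause database keyed by main-verb lemma, with the chunk-count tolerance of up to +2 described above (the data and field names are illustrative):

```python
# Illustrative first-step retrieval from the TL clause-pattern database.
# Entries are indexed by main-verb lemma; the records are assumptions.
clause_db = {
    "last": [
        {"chunks": 5, "text": "One charge of the battery lasts for hours, even at top speed"},
        {"chunks": 8, "text": "..."},
    ],
}

def retrieve_clauses(verb_lemma, n_chunks):
    """Return TL clause patterns sharing the verbal head whose chunk
    count equals, or exceeds by at most 2, that of the TL-like pattern."""
    return [c for c in clause_db.get(verb_lemma, [])
            if n_chunks <= c["chunks"] <= n_chunks + 2]

# The TL-like pattern of (4) has verbal head 'last' and four chunks.
print(retrieve_clauses("last", 4))
```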
At the second step, the retrieved TL clause patterns are compared with the TL-like one at a lower level, namely with respect to the type and head of the chunks contained. The degree of the patterns' functional and lexical similarity is determined and the chunk pattern order is established. Table 1 illustrates how the VS order of the TL-like pattern is corrected to the SV order shown in (5), by relying on information implicit in the corpus-retrieved sentence (6).

More specifically, the system manages to establish the correct word order after matching a TL-like PP pattern in the nominative (np_nm) with a TL PP pattern (NP1) that precedes the verb (Table 1). This matching is achieved by employing the respective similarity weight (Table 2), whose value is 1 when comparing the chunk labels np_nm and NP1, thus enabling the algorithm to establish the structure (the SV order) in the final translation before handling the lexical differences between the heads and the tokens at a later step.

Table 1: Clause comparison based on chunk labels & chunk heads (Score = 83.74%).
Translated sentence: usually, the weekly meeting the most boring people last for hour
Corpus sentence: One charge of the battery lasts for hours, even at top speed

                           pp(np_nm):    pp(np_ge): of      vg:    pp(np_ac):  PAD
                           the weekly    the most boring    last   for hour
                           meeting       people
PP(NP_1): one charge       79%           61%                0%     61%         20%
PP(NP_2): of the battery   40%           79%                0%     78%         20%
VG: last                   0%            0%                 100%   0%          20%
PP(NP_2): for hour         40%           78%                0%     100%        20%
PP(NP_2): at top speed     40%           78%                0%     78%         20%

Table 2: Chunk label comparison similarity weights.

NP_NM  NP_1  1
NP_NM  NP_2  0.1

At the third step, the pattern matching algorithm performs a detailed comparison between the tokens contained in the TL chunk patterns and the respective TL-like ones, in order to establish degrees of lexical similarity and thus decide whether each TL chunk pattern will be (a) retained, (b) modified or (c) replaced (see Tables 3–6).

Table 3: Detailed chunk comparison (low similarity, Score = 46.74%).

              -{-}    the{AT0}  weekly{AJ}  meeting{NN}
-{-}          100%    0%        0%          0%
one{CRD}      0%      10%       17%         0%
charge{NN1}   0%      0%        0%          30%
PAD           20%     20%       20%         0%

Table 4: Detailed chunk comparison (low similarity, Score = 48.0%).

               of{-PRF}  the{AT0}  most{AV0}  boring{AJ}  people{NN}
of{PRF}        100%      0%        0%         0%          0%
the{AT0}       0%        100%      25%        0%          0%
battery{NN1}   0%        0%        0%         0%          30%
PAD            20%       20%       20%        20%         0%
PAD            20%       20%       20%        20%         0%

Table 5: Detailed chunk comparison (high similarity, Score = 100.0%).

            last{VV}
last{VVZ}   100%

Table 6: Detailed chunk comparison (high similarity, Score = 100.0%).

            for{PRP}  hour{NN}
for{PRP}    100%      0%
hour{NN2}   0%        100%

Tables 5 and 6 show that the chunks 'last' and 'for hour' are retained and will form part of the output string. The other two chunks (Tables 3 & 4) are handled at the fourth step of the algorithm: the chunk database is searched for appropriate chunk patterns, in an attempt to reduce any incompatibilities between the TL clause pattern and the TL-like one.⁵ The chunks that match best with the chunk patterns in the TL-like input string are located and, if necessary, minimally modified on the basis of co-occurrence information induced from the corpus with statistical means, and they form part of the output string. If no matching chunks are found, the system indicates the problem, processes the corresponding portion of the TL-like string with co-occurrence information and returns the result.

5 Due to space limitations it is not possible to present the whole process in full detail.

As explained earlier in this section, the output of the described procedure consists of a sequence of lemmas. Token generation is foreseen for future versions of the system.
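Since the paper frames chunk mapping as a weighted assignment problem, the second step can be sketched with a standard assignment solver over the similarity matrix of Table 1; the matrix values are taken from the table, while the use of SciPy's solver here is an illustrative choice, not the system's actual implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Similarity matrix from Table 1 (rows: corpus chunks, cols: TL-like chunks + PAD).
S = np.array([
    [0.79, 0.61, 0.00, 0.61, 0.20],   # PP(NP_1): one charge
    [0.40, 0.79, 0.00, 0.78, 0.20],   # PP(NP_2): of the battery
    [0.00, 0.00, 1.00, 0.00, 0.20],   # VG: last
    [0.40, 0.78, 0.00, 1.00, 0.20],   # PP(NP_2): for hour
    [0.40, 0.78, 0.00, 0.78, 0.20],   # PP(NP_2): at top speed
])

# Maximise total similarity (the solver minimises, hence the negation).
rows, cols = linear_sum_assignment(-S)
print(list(zip(rows, cols)))   # optimal corpus-chunk -> TL-like-chunk mapping
print(S[rows, cols].sum())     # total score of the assignment
```

On this matrix the optimum maps the pre-verbal NP_1 chunk to the nominative TL-like chunk, which is exactly the alignment that fixes the VS order to SV in the example.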
4. Evaluation

The system presented has been successfully evaluated for four language pairs (Greek, Spanish, Dutch, German → English) over a test corpus of 60 sentences and compared to the performance of a commercial translation system, namely SYSTRAN.

To that end, widely used benchmarks such as BLEU (Papineni et al. 2002) and NIST (2002) have been employed, both of which rely on n-grams of words and adopt a metric that compares experimentally derived translations to a set of reference translations.

The evaluation results indicated that, for all four language pairs, the proposed system generated consistently more accurate translations than SYSTRAN, while for some pairs this improvement in accuracy is statistically significant (see Figure 1).

For a more detailed description of the results obtained, see Tambouratzis et al. (2006).

[Figure 1: NIST-derived translation accuracies for each of the 15 sentences within the Greek-to-English experiments, for SYSTRAN and the proposed system.]

5. Conclusion & further Research

We have reported on the development of a hybrid MT system that relies on monolingual TL corpora, as opposed to all other contemporary corpus-based approaches to MT, which rely on parallel corpora. The system employs flat bilingual lexica as well as lemma and chunking information to create TL-like patterns which receive their final form (that is, lemmatised grammatical TL strings) by consulting the (chunked and lemmatised) TL corpus with pattern matching techniques.

Pattern matching conceptually relies on a predicate–argument correspondence of the source and target language constructions. This same mechanism handles any categorical (at phrase level) and word-order divergences across the language pair. This set-up captures a large percentage of cases. Of course, there are divergences that cannot be captured with this mechanism only, such as the pair 'ανέβηκε την σκάλα τρέχοντας' (EL) → 'he ran up the stairs' (EN), where the SL verb corresponds to a TL particle, while the TL verb corresponds to an SL gerund. However, the work presented here has not fully exploited the potential of the system, as no rules have been employed yet and the lexicon contains only one-word entries (and no multi-word entries).

Research in the immediate future will investigate such issues, as well as the optimal way of distributing work among the basic pattern matching algorithm, the lexicon and the rule component. In any case, the latter will be kept as small as possible.

6. References

AL-ONAIZAN, Yaser, GERMANN, Ulrich, HERMJAKOB, Ulf, KNIGHT, Kevin, KOEHN, Philipp, MARCU, Daniel, YAMADA, Kenji (2000). Translating with Scarce Resources. American Association for Artificial Intelligence conference (AAAI '00), Austin, Texas, 672–678.
Retrieved from www.isi.edu/natural-language/projects/rewrite

BROWN, Peter, COCKE, John, DELLA PIETRA, Stephan, DELLA PIETRA, Vincent, JELINEK, Fredrick, LAFFERTY, John, MERCER, Robert, ROOSIN, Paul (1990). A Statistical Approach to Machine Translation. Computational Linguistics, Vol. 16, No. 2, 79–85.

BROWN, Ralf (2003). Clustered Transfer Rule Induction for Example-Based Translation. In M. Carl & A. Way (eds.) Recent Advances in Example-Based Machine Translation, Kluwer Academic Publishers, 287–305.

CARL, Michael & WAY, Andy (2003). Introduction. In M. Carl & A. Way (eds.) Recent Advances in Example-Based Machine Translation. Kluwer Academic Publishers, xvii–xxxi.

DOLOGLOU, Ioannis, MARKANTONATOU, Stella, TAMBOURATZIS, George, YANNOUTSOU, Olga, FOURLA, Athanassia, and IOANNOU, Nikos (2003). Using Monolingual Corpora for Statistical Machine Translation. In Proceedings of EAMT/CLAW 2003, Dublin, Ireland, 61–68.

GAIZAUSKAS, Robert (1995). Investigations into the Grammar Underlying the Penn Treebank II. Research Memorandum CS-95-25, Department of Computer Science, University of Sheffield.

HUTCHINS, John (1995). Machine Translation: A brief history. In E. F. K. Koerner and R. E. Asher (eds.). Concise history of the language sciences: from the Sumerians to Cognitivists. Oxford: Pergamon Press, 431–445.

KITAMURA, Mihoko (2004). Translation Knowledge Acquisition for Pattern-Based Machine Translation. PhD thesis, Nara Institute of Science and Technology, Japan.

LEPAGE, Yves (1997). String approximate pattern-matching. In Proceedings of the 55th Meeting of the Information Processing Society of Japan, Fukuoka, August 1997, 139–140.

MARKANTONATOU, Stella, SOFIANOPOULOS, Sokratis, SPILIOTI, Vassiliki, TAMBOURATZIS, George,
\nTAMBOURATZIS George, S OFIANOPOULOS Sokratis, \nSPILIOTI Vassiliki, V ASSILIOU Marina, Y ANNOUT-\nSOU Olga, and M ARKANTONATOU Stella (2006). \nPattern matching-based sy stem for Machine Trans-\nlation (MT). In Proceedings of “Advances in Artifi-\ncial Intelligence: 4th Hellenic Conference on AI, SETN 2006 (Heraklion, Crete, Greece, May 18-20, \n2006), Lecture Notes in Computer Science, Vol. \n3955, pp. 345-355. Springer Verlag. \nTHURMAIR , Gregor. (2005). Improving MT Quality: \nTowards a Hybrid MT Architecture in the Lin-guatec ‘Personal Translator’. Talk given at the 10\nth \nMT Summit, September 12-16, Phuket, Thailand. \nWAY, Andy (2003). Translating with Examples: \nThe LFG-DOT Models of Translation, In Recent Advances in Example-Base d Machine Translation, \nMichael Carl and Andy Way (eds.), Kluwer Aca-\ndemic Publishers 443-472. \n \n \n∗ Author names are given in alphabetical order.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "cRpZlx4yQo",
"year": null,
"venue": "EAMT 2023",
"pdf_link": "https://aclanthology.org/2023.eamt-1.6.pdf",
"forum_link": "https://openreview.net/forum?id=cRpZlx4yQo",
"arxiv_id": null,
"doi": null
}
|
{
"title": "BLEU Meets COMET: Combining Lexical and Neural Metrics Towards Robust Machine Translation Evaluation",
"authors": [
"Taisiya Glushkova",
"Chrysoula Zerva",
"André F. T. Martins"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "BLEU Meets C OMET : Combining Lexical and Neural Metrics\nTowards Robust Machine Translation Evaluation\nTaisiya Glushkova1,3Chrysoula Zerva1,3Andr ´e F. T. Martins1,2,3\n1Instituto de Telecomunicac ¸ ˜oes2Unbabel\n3Instituto Superior T ´ecnico & LUMLIS (Lisbon ELLIS Unit)\n{taisiya.glushkova, chrysoula.zerva, andre.t.martins }@tecnico.ulisboa.pt\nAbstract\nAlthough neural-based machine transla-\ntion evaluation metrics, such as COMET\norBLEURT , have achieved strong corre-\nlations with human judgements, they are\nsometimes unreliable in detecting certain\nphenomena that can be considered as crit-\nical errors, such as deviations in entities\nand numbers. In contrast, traditional eval-\nuation metrics such as BLEU orCHRF,\nwhich measure lexical or character overlap\nbetween translation hypotheses and human\nreferences, have lower correlations with hu-\nman judgements but are sensitive to such\ndeviations. In this paper, we investigate sev-\neral ways of combining the two approaches\nin order to increase robustness of state-of-\nthe-art evaluation methods to translations\nwith critical errors. We show that by us-\ning additional information during training,\nsuch as sentence-level features and word-\nlevel tags, the trained metrics improve their\ncapability to penalize translations with spe-\ncific troublesome phenomena, which leads\nto gains in correlation with human judg-\nments and on recent challenge sets on sev-\neral language pairs.1\n1 Introduction\nTrainable machine translation (MT) evaluation\nmodels, such as COMET (Rei et al., 2020) and\nBLEURT (Sellam et al., 2020), generally achieve\nhigher correlations with human judgments, thanks\n©2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1Our code and data are available at: https://github.com/\ndeep-spin/robust MTevaluationto leveraging pretrained language models. How-\never, they often fail at detecting certain types of\nerrors and deviations from the source, for exam-\nple related to translations of numbers and entities\n(Amrhein and Sennrich, 2022). As a result, their\nquality predictions are sometimes hard to interpret\nand not always trustworthy. In contrast, traditional\nlexical-based metrics, such as BLEU (Papineni et\nal., 2002) or CHRF(Popovi ´c, 2015)—despite their\nmany limitations—are considerably more sensitive\nto these errors, due to their nature, and are also\nmore interpretable, since the scores can be traced\nback to the character or n-gram overlap.\nThis paper investigates and compares methods\nthat combine the strengths of neural-based and lex-\nical approaches, both at the sentence level and at\nthe word level. This is motivated by the findings\nof previous works, which demonstrate in detail that\ntheCOMET MT evaluation metric struggles to han-\ndle errors like deviation in numbers, wrong named\nentities in generated translations, deletions that ex-\nclude important content from the source sentence,\ninsertions of extra words that are not present in the\nsource sentences, and a few others (Amrhein and\nSennrich, 2022; Alves et al., 2022). While data\naugmentation techniques alleviate the problem to\nsome extent (Alves et al., 2022), the gains seem to\nbe relatively modest. 
In this paper we investigate\nalternative methods that take advantage of lexical\ninformation and go beyond the use of various aug-\nmentation techniques and synthetic data.\nWe focus on increasing robustness of MT evalu-\nation systems to certain types of critical errors. We\nexperiment with the reference-based COMET met-\nric, which has access to reference translations when\nproducing quality scores. In order to make evalua-\ntion metrics more robust towards this type of errors,\nwe consider and compare three different ways of\nincorporating information from lexical-based evalu-\nation metrics into the neural-based COMET metric:\n•Simply ensembling the sentence-level metrics;\n•Using lexical-based sentence-level scores as\nadditional features through a bottleneck layer\nin the C OMET model;\n•Enhancing the word embeddings computed\nbyCOMET for the generated hypothesis with\nword-level tags. We generate these word-level\ntags using the Levenshtein (sub)word align-\nment between the hypothesis and the reference\ntokens.\nWe compare these three strategies with the recent\napproach of (Alves et al., 2022), which generates\nsynthetic data with injected errors from a large lan-\nguage model, and retrains COMET on training data\nthat has been augmented with these examples. We\nassess both the correlation with human judgments\nand using the recently proposed DEMETR bench-\nmark (Karpinska et al., 2022).\n2 Related Work\nRecently several challenge sets have been intro-\nduced, either within a scope of the WMT Metrics\nshared task (Freitag et al., 2022) or in general as a\nstep towards implementing more reliable MT eval-\nuation metrics: SMAUG (Alves et al., 2022) ex-\nplores sentence-level multilingual data augmenta-\ntion; ACES (Amrhein et al., 2022) is a translation\naccuracy challenge set that covers high number of\ndifferent phenomena and language pairs, includ-\ning a considerable number of low-resource ones;\nDEMETR (Karpinska et al., 2022) and HWTSC\n(Chen et al., 2022) aim at examining metrics ability\nto handle synonyms and to discern critical errors\nin translations; DFKI (Avramidis and Macketanz,\n2022) employs a linguistically motivated challenge\nset for two language directions (German ↔En-\nglish).\nApart from purely focusing on improving robust-\nness with augmentation of different phenomena,\nthere are works that combine usage of synthetic\ndata with other different methods. These methods\nuse more fine-grained information—aiming at iden-\ntifying both the position and the type of translation\nerrors on given source-hypothesis sentence pairs\n(Bao et al., 2023). As another source of useful\ninformation, word-level supervision can be consid-\nered, which has proven to be beneficial in tasks ofquality estimation and MT evaluation (Rei et al.,\n2022a; Rei et al., 2022b).\nThere have been other attempts to add linguis-\ntic features to automatic MT evaluation metrics,\ne.g. incorporating information from a multilin-\ngual knowledge graph into BERTScore. (Wu et\nal., 2022) proposed a metric that linearly combines\nthe results of BERTScore and bilingual named en-\ntity matching for reference-free machine translation\nevaluation. (Abdi et al., 2019) use an extensive set\nof linguistic features at word- and sentence- level\nto aid sentiment classification. Additionally, glass-\nbox features extracted from the MT model have\nbeen used successfully in the quality estimation\ntask (Fomicheva et al., 2020; Zerva et al., 2021;\nWang et al., 2021). 
For the incorporation of different types of information into neural models, early and late fusion are commonly used, with benefits across multiple tasks and domains (Gadzicki et al., 2020; Fu et al., 2015; Baltrušaitis et al., 2018). To the best of our knowledge, there have not been any attempts to combine the representations of neural metrics with external features obtained from lexical-based metrics.

Moreover, there are similar concerns regarding the robustness of evaluation models in non-MT related tasks (Chen and Eger, 2022). In general, it has been shown that evaluation metrics perform rather well on standard evaluation benchmarks but are vulnerable and unstable when confronted with adversarial examples. The approaches investigated in our paper aim to address these limitations.

3 Combination of Neural and Lexical Metrics

In this section we describe the methods we investigated in order to infuse COMET with information on lexical alignments between the MT hypothesis and the reference.

3.1 Metric ensembling

A simple way to combine neural and lexical-based metrics is through an ensembling strategy. To this end, we use a weighted ensemble of normalized BLEU, CHRF and COMET scores. The weights for each metric are tuned on the same development set used for training the COMET models discussed in this work (MQM WMT 2021) and presented in Appendix A. For normalisation, we compute the mean and standard deviation needed to standardize the development set for each metric, and we use the same mean and standard deviation values to standardize the test-set scores.
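A minimal sketch of this normalize-then-ensemble step follows; the weight values are placeholders, since the tuned weights are given in the paper's Appendix A, which is not part of this excerpt:

```python
import numpy as np

def fit_standardizer(dev_scores):
    """Mean/std estimated once on the development set, per metric."""
    return float(np.mean(dev_scores)), float(np.std(dev_scores))

def ensemble(test_scores, dev_scores, weights):
    """Weighted ensemble of z-normalized metric scores.
    test_scores / dev_scores: dicts metric_name -> array of segment scores."""
    total = np.zeros(len(next(iter(test_scores.values()))), dtype=float)
    for name, w in weights.items():
        mu, sigma = fit_standardizer(dev_scores[name])
        total += w * (np.asarray(test_scores[name], dtype=float) - mu) / sigma
    return total

# Placeholder weights; the actual values are tuned on MQM WMT 2021.
weights = {"comet": 0.6, "bleu": 0.2, "chrf": 0.2}
```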
3.2 Sentence-level lexical features

A simple ensemble is limited because it does not let the neural-based model learn the best way of including the information coming from the lexical metrics; for example, the degree of additional information brought by the lexical metrics might depend on the particular input.

Therefore, we experiment with a more sophisticated approach, where the lexical scores are incorporated into the COMET architecture as additional features that are mapped to each instance in the data, allowing the system to learn how best to take advantage of these features. To this end, we adopt a late fusion approach, employing a bottleneck layer to combine the lexical and neural features. A bottleneck layer for late fusion in deep neural architectures has been used successfully across tasks, especially for multimodal fusion or fusion of features with vast differences in dimensionality (Petridis et al., 2017; Guo et al., 2018; Ding et al., 2022). In our implementation, the bottleneck layer is inserted between two feed-forward layers in the original COMET architecture (see Fig. 1), implemented in a similar manner as in (Moura et al., 2020; Zerva et al., 2022) (see App. A).

[Figure 1: The architecture of the COMET model with incorporated sentence-level lexical features.]

3.3 Word-level lexical features

While the sentence-level features allow the model to account for lexical overlap, there is still no word-level information. Instead, we propose to leverage the inferred alignments between the MT hypothesis and the reference words. To that end, we adopt the Translation Edit Rate (TER) alignment procedure, which calculates the edit distance (cost) between the translation and the reference sentence. This alignment, produced with the Levenshtein dynamic programming algorithm, identifies the minimal subset of MT words that would need to be changed (modified, inserted, or deleted) in order to reach the correct translation, i.e., the reference (Snover et al., 2009b). TER-based alignments have been widely used to evaluate translations with respect to post-edits (HTER) in automatic post-editing as well as in other generation tasks (Snover et al., 2009a; Elliott and Keller, 2014; Gupta et al., 2018; Bhattacharyya et al., 2022). Recently, providing word-level supervision using binary quality tags inferred via Multidimensional Quality Metrics (MQM) error annotations proved to be beneficial for MT evaluation (Rei et al., 2022a).

In this work, for simplicity, we opted for calculating the alignments not on a word but on a sub-word level, employing the same tokenization convention used by the COMET encoder.² This allows us to associate a quality OK/BAD tag with each sub-word unit of the MT hypothesis input vector.

2 We specifically used the XLMRobertaTokenizerFast Huggingface implementation with truncation and the default max_length.

We then incorporate these quality tags into the original input for each translation sample, which consists of a triplet ⟨s, t, r⟩, where s is a source text, t is a machine-translated text, and r is a reference translation. To leverage the estimated quality tags in the COMET architecture, we encode the tags as a sequence of special tokens, w, and learn separate embeddings for the OK/BAD tokens. We can thus encode the quality tag sequence to obtain a word quality vector w, and then compute the sum σ = t ⊕ w for the sequence. We then extend the pooling layer of COMET by adding both the w and σ representations (see the architecture in Fig. 2).

[Figure 2: The architecture of the COMET model with incorporated word-level lexical features.]
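A minimal sketch of how such OK/BAD tags can be derived from an edit-distance-style alignment between hypothesis and reference sub-words; here Python's difflib stands in for the TER/Levenshtein aligner used in the paper, so the exact tags may differ at edit boundaries:

```python
from difflib import SequenceMatcher

def word_level_tags(hyp_tokens, ref_tokens):
    """Tag each hypothesis sub-word OK if it is aligned to an identical
    reference sub-word, BAD otherwise (substitution or insertion)."""
    tags = ["BAD"] * len(hyp_tokens)
    matcher = SequenceMatcher(a=hyp_tokens, b=ref_tokens, autojunk=False)
    for block in matcher.get_matching_blocks():
        for i in range(block.a, block.a + block.size):
            tags[i] = "OK"
    return tags

hyp = ["the", "cat", "sat", "on", "a", "mat"]
ref = ["the", "cat", "sat", "on", "the", "mat"]
print(word_level_tags(hyp, ref))
# ['OK', 'OK', 'OK', 'OK', 'BAD', 'OK']
```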
3.3 Word-level lexical features

While the sentence-level features allow the model to account for lexical overlap, they still carry no word-level information. Instead, we propose to leverage the inferred alignments between the MT hypothesis and the reference words. To that end, we adopt the Translation Edit Rate (TER) alignment procedure, which calculates the edit distance (cost) between the translation and the reference sentence. This alignment, produced with the Levenshtein dynamic programming algorithm, identifies the minimal subset of MT words that would need to be changed (modified, inserted, or deleted) in order to reach the correct translation, i.e., the reference (Snover et al., 2009b). TER-based alignments have been widely used to evaluate translations with respect to post-edits (HTER) in automated post-editing as well as in other generation tasks (Snover et al., 2009a; Elliott and Keller, 2014; Gupta et al., 2018; Bhattacharyya et al., 2022). Recently, providing word-level supervision using binary quality tags inferred from Multidimensional Quality Metrics (MQM) error annotations proved beneficial for MT evaluation (Rei et al., 2022a).

In this work, for simplicity, we opted to calculate the alignments not on the word but on the sub-word level, employing the same tokenization convention used by the COMET encoder (specifically, the XLMRobertaTokenizerFast Hugging Face implementation with truncation and the default max length). This allows us to associate a quality OK/BAD tag with each sub-word unit of the MT hypothesis input vector.

We then incorporate these quality tags into the original input for each translation sample, which consists of a triplet ⟨s, t, r⟩, where s is a source text, t is a machine-translated text, and r is a reference translation. To leverage the estimated quality tags in the COMET architecture, we encode the tags as a sequence of special tokens and learn separate embeddings for the OK/BAD tokens. We can thus encode the quality tag sequence into a word-quality embedding sequence w and compute the sum σ = t ⊕ w over the sequence. We then extend the pooling layer of COMET by adding both the w and σ representations (see the architecture in Fig. 2).

Figure 2: The architecture of the COMET model with incorporated word-level lexical features.
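A simplified sketch of the tagging step is shown below. It uses difflib's longest-matching-block alignment as a stand-in for the full TER procedure (which additionally models block shifts), and assumes the Hugging Face XLM-R tokenizer mentioned above; it is meant to convey the idea, not to reproduce the exact alignments.

```python
from difflib import SequenceMatcher
from transformers import XLMRobertaTokenizerFast

# Same tokenization convention as the COMET encoder (downloads vocab files).
tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")

def subword_quality_tags(hypothesis: str, reference: str):
    """Assign an OK/BAD tag to every sub-word of the MT hypothesis.

    Sub-words inside blocks shared with the reference are tagged OK;
    substituted or inserted material is tagged BAD. (TER would also
    allow shifted blocks to count as matches.)
    """
    hyp = tokenizer.tokenize(hypothesis)
    ref = tokenizer.tokenize(reference)
    tags = ["BAD"] * len(hyp)
    matcher = SequenceMatcher(a=hyp, b=ref, autojunk=False)
    for block in matcher.get_matching_blocks():
        for i in range(block.a, block.a + block.size):
            tags[i] = "OK"
    return list(zip(hyp, tags))
```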
4 Experimental Design

The main focus of our experiments is to investigate how the robustness of MT evaluation models can be improved, and how the proposed settings compare to each other and to the data augmentation approach proposed by Alves et al. (2022). Our comparisons address the correlation with human judgments and recent robustness benchmarks on MT evaluation datasets.

Following Amrhein and Sennrich (2022), we use COMET (v1.0) (Rei et al., 2020) as the underlying architecture for our MT evaluation models and focus on making it more robust.

Human Judgements Data. We consider two types of human judgments: direct assessments (DA) (Graham et al., 2013) and multi-dimensional quality metric scores (MQM) (Lommel et al., 2014). For training, we use WMT 2017–2020 data from the Metrics shared task (Freitag et al., 2021b) with direct assessment (DA) annotations (see App. C); we opted for DA annotations for training due to the limited availability of MQM data. For development and test, we use the MQM annotations of the WMT 2021 and 2022 datasets, respectively.

Challenge Sets Data. Furthermore, we evaluate our models using two challenge sets: DEMETR (Karpinska et al., 2022) and ACES (Amrhein et al., 2022).

• DEMETR is a diagnostic dataset with 31K English examples (translated from 10 source languages) created for evaluating the sensitivity of MT evaluation metrics to 35 different linguistic perturbations spanning semantic, syntactic, and morphological error categories. Each example in DEMETR consists of (1) a sentence in one of 10 source languages, (2) an English translation written by a human translator, (3) a machine translation produced by Google Translate, and (4) an automatically perturbed version of the Google Translate output which introduces exactly one mistake (semantic, syntactic, or typographical).

• ACES is a translation accuracy challenge set based on the MQM ontology. It consists of 36,476 examples covering 146 language pairs and representing 68 phenomena. This challenge set consists of synthetically generated adversarial examples, examples from repurposed contrastive MT test sets, and manually annotated examples.

Both of these challenge sets allow measuring the sensitivity of the proposed approaches to various phenomena and assessing their overall robustness.

Augmentation. We compare our methods against the multilingual data augmentation approach SMAUG proposed by Alves et al. (2022) (code available at https://github.com/Unbabel/smaug). Specifically, we use transformations that focus on deviations in named entities and numbers, since these are identified as the major weaknesses of COMET (Amrhein and Sennrich, 2022).

Models. In the experiments that follow, we use as a baseline the vanilla COMET architecture trained on WMT 2017–2020 (COMET). We compare this baseline against the model trained with augmented data and our proposed approaches:

• COMET + aug: the COMET model trained on a mixture of original and augmented WMT 2017–2020 data, where the percentage of augmented data is 40%. We use the code provided by the authors of SMAUG and apply their choice of hyperparameters, including the optimal percentage of augmented data.

• Ensemble: the weighted mean of BLEU, CHRF and COMET normalized scores, where the weights are optimized on the development set (MQM 2021) with regard to the Kendall's tau correlations.

• COMET + SL-feat.: the combination of COMET and scores obtained from other metrics, BLEU and CHRF, used as sentence-level (SL) features in a late fusion manner.

• COMET + WL-tags: the combination of COMET and word-level OK/BAD tags that correspond to the subwords of the translation hypothesis.

Evaluation. For evaluation and analysis we:

1. Compute standard segment-level correlation metrics between predicted scores and human judgements: Pearson r, Spearman ρ and Kendall's tau;

2. Use challenge sets, specifically DEMETR and ACES, to analyse the robustness of MT evaluation systems to critical errors and specific perturbations.

For the challenge sets, we measure the ability of the evaluation metric to rank the correct translations higher than the incorrect ones by computing the official Kendall's tau-like correlation as proposed in previous WMT Metrics shared tasks (Freitag et al., 2022; Ma et al., 2019):

    τ = (Concordant − Discordant) / (Concordant + Discordant),    (1)

where "Concordant" is the number of times a metric assigns a higher score to the "better" hypothesis and "Discordant" is the number of times a metric assigns a higher score to the "worse" hypothesis.
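The following helper, a small sketch, computes this tau-like statistic from paired metric scores; it assumes the two lists are aligned so that index i holds the scores of the better and worse hypothesis for the same source, and it ignores ties for brevity.

```python
def kendall_tau_like(better_scores, worse_scores):
    """WMT-style Kendall's tau-like correlation over (better, worse) pairs.

    Ties (equal scores) contribute to neither count here; the official
    scoring scripts may treat them differently.
    """
    concordant = sum(b > w for b, w in zip(better_scores, worse_scores))
    discordant = sum(b < w for b, w in zip(better_scores, worse_scores))
    return (concordant - discordant) / (concordant + discordant)
```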
5 Results and Discussion

In this section, we show results for the aforementioned methods, specifically the correlations with MQM annotations from the WMT 2022 Metrics shared task for three high-resource language pairs (English→German, English→Russian, Chinese→English) in four domains: Conversation, E-commerce, News and Social. In addition, we discuss evaluation results obtained on the two challenge sets.

5.1 Correlation with Human Judgements

Overall, looking at Table 1 we can see that the more sophisticated techniques of using additional information, whether lexical-based scores used as features, word-level tags based on token alignments, or synthetically augmented data, outperform the simple weighted-average (ensemble) approach. These findings are further supported by the Pearson r and Spearman ρ coefficients, shown in Tables 9 and 10, respectively, in Appendix B.

Across all proposed methods, we observe that COMET + aug and COMET + SL-feat. have relatively similar performance. In contrast, adding word-level tags (COMET + WL-tags) based on alignments between the translation hypothesis and the reference gives a considerable gain over the baseline COMET and the other approaches.

Another interesting observation is that the improvement in correlations is especially noticeable for the ZH-EN language pair across all domains for the COMET + WL-tags model. Overall, we found that adding the word-level quality supervision provides the most consistent benefits in performance. However, since our main motivation is to address robustness to specific errors, the correlations with MQM annotations serve primarily as a confirmation of the potential of the proposed methods; we provide a more detailed performance analysis over the multiple error types of the different challenge sets in the next section.

Language pair / Domain | BLEU | CHRF | COMET | ENSEMBLE | COMET+aug | COMET+SL-feat. | COMET+WL-tags
EN-DE Conversation | 0.201 | 0.257 | 0.308 | 0.309 | 0.296 | 0.310 | 0.314
EN-DE E-commerce | 0.179 | 0.212 | 0.326 | 0.318 | 0.311 | 0.322 | 0.322
EN-DE News | 0.167 | 0.202 | 0.361 | 0.356 | 0.330 | 0.355 | 0.369
EN-DE Social | 0.130 | 0.168 | 0.297 | 0.292 | 0.277 | 0.294 | 0.293
EN-RU Conversation | 0.140 | 0.175 | 0.305 | 0.304 | 0.328 | 0.298 | 0.328
EN-RU E-commerce | 0.202 | 0.221 | 0.372 | 0.371 | 0.382 | 0.369 | 0.391
EN-RU News | 0.125 | 0.164 | 0.373 | 0.367 | 0.366 | 0.384 | 0.370
EN-RU Social | 0.152 | 0.132 | 0.305 | 0.304 | 0.330 | 0.332 | 0.349
ZH-EN Conversation | 0.125 | 0.160 | 0.283 | 0.282 | 0.295 | 0.283 | 0.298
ZH-EN E-commerce | 0.174 | 0.187 | 0.326 | 0.325 | 0.342 | 0.335 | 0.357
ZH-EN News | 0.046 | 0.042 | 0.270 | 0.261 | 0.291 | 0.276 | 0.292
ZH-EN Social | 0.162 | 0.190 | 0.319 | 0.316 | 0.313 | 0.315 | 0.330
AVG | 0.150 | 0.176 | 0.321 | 0.317 | 0.322 | 0.323 | 0.334†

Table 1: Kendall's tau correlation on high-resource language pairs using the MQM annotations for the Conversation, E-commerce, News and Social domains collected for the WMT 2022 Metrics Task. Bold numbers indicate the best result for each domain in each language pair. † in the averaged scores indicates a statistically significant difference to the other metrics; for statistical significance over correlations r we use Williams' test, with the Fisher r-to-z transform f(r) = (1/2) ln((1 + r)/(1 − r)) to calculate significance over the macro-averages, at p ≤ 0.01.
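As a rough sketch of this significance machinery, the snippet below implements the Fisher r-to-z step named in the caption. Note that Williams' test, used for the reported results, additionally accounts for the dependence between correlations computed against the same human judgements, so this independent-samples comparison is only an approximation; the sample sizes are hypothetical inputs.

```python
import math
from scipy.stats import norm

def fisher_z(r: float) -> float:
    """Fisher r-to-z transform: f(r) = 0.5 * ln((1 + r) / (1 - r))."""
    return 0.5 * math.log((1 + r) / (1 - r))

def corr_difference_pvalue(r1: float, r2: float, n1: int, n2: int) -> float:
    """Two-sided p-value for the difference of two correlations, treating
    them as independent (an approximation; Williams' test models their
    shared dependence on the same human scores)."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (fisher_z(r1) - fisher_z(r2)) / se
    return 2.0 * (1.0 - norm.cdf(abs(z)))
```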
5.2 Results on Challenge Sets

5.2.1 DEMETR

For DEMETR we analyse results on two levels of granularity: (1) performance over the full challenge set, calculated via Kendall's tau and presented in Table 2, which shows Kendall's tau-like correlations per language pair; and (2) performance depending on error severity, presented in Table 3, which shows accuracy in detecting different types of DEMETR perturbations for lexical and neural-based metrics, bucketed by error severity (baseline, critical, major, and minor errors).

LP | BLEU | CHRF | COMET | ENSEMBLE | COMET+aug | COMET+SL-feat. | COMET+WL-tags
ZH-EN | 0.505 | 0.684 | 0.818 | 0.855 | 0.817 | 0.866 | 0.872
DE-EN | 0.655 | 0.802 | 0.909 | 0.926 | 0.917 | 0.942 | 0.957
HI-EN | 0.616 | 0.768 | 0.900 | 0.920 | 0.925 | 0.929 | 0.945
JA-EN | 0.521 | 0.722 | 0.850 | 0.883 | 0.830 | 0.907 | 0.891
PS-EN | 0.533 | 0.703 | 0.818 | 0.880 | 0.775 | 0.863 | 0.877
RU-EN | 0.552 | 0.724 | 0.898 | 0.910 | 0.894 | 0.950 | 0.949
CZ-EN | 0.541 | 0.755 | 0.875 | 0.917 | 0.863 | 0.870 | 0.920
FR-EN | 0.664 | 0.794 | 0.892 | 0.915 | 0.926 | 0.945 | 0.951
ES-EN | 0.516 | 0.704 | 0.877 | 0.899 | 0.877 | 0.910 | 0.934
IT-EN | 0.601 | 0.774 | 0.912 | 0.924 | 0.906 | 0.936 | 0.945
AVG | 0.570 | 0.743 | 0.875 | 0.903 | 0.873 | 0.912 | 0.924†

Table 2: Kendall's tau-like correlation per language pair on the DEMETR challenge set. Bold values indicate the best performance per language pair. † in the averaged scores indicates a statistically significant difference to the other metrics.

Metric | Base | Crit. | Maj. | Min. | All
BLEU | 100.0 | 79.33 | 83.76 | 72.60 | 78.52
CHRF | 100.0 | 90.79 | 90.85 | 80.83 | 87.16
ENSEMBLE | 100.0 | 96.87 | 92.91 | 93.77 | 95.14
COMET | 99.3 | 95.77 | 91.04 | 92.18 | 93.74
COMET+aug | 98.6 | 95.54 | 91.66 | 92.06 | 93.65
COMET+SL-feat. | 99.3 | 96.95 | 93.56 | 94.64 | 95.59
COMET+WL-tags | 99.2 | 96.48 | 93.90 | 96.36 | 96.20

Table 3: Accuracy on DEMETR perturbations for both lexical-based (BLEU, CHRF) and neural-based metrics, bucketed by error severity (base, critical, major, and minor errors), including a micro-average across all perturbations.

We can observe that both the sentence- and word-level features outperform the data augmentation method, with the word-level method being the best on average and for the majority of language pairs. These findings indicate that the subword quality tags enable the model to attend more to perturbations of the high-quality data, hence better distinguishing the bad from the good translations of the same source.

One of the key findings from Table 3 is that the model which uses word-level information consistently outperforms the other methods across almost all severity buckets, with the exception of the "critical" error bucket. In combination with the findings on the ACES challenge set (see Section 5.2.2), it seems that investigating approaches which target the more nuanced and complex error phenomena that lead to critical errors could further improve the performance of neural metrics.

5.2.2 ACES

To analyse general, high-level performance trends of the lexical and proposed approaches on the ACES challenge set, we report Kendall's tau correlation and the "ACES-Score" as proposed by Amrhein et al. (2022), a weighted combination of the 10 top-level categories in the ACES ontology:

    ACES-Score = 5 · (τ_addition + τ_omission + τ_mistranslation + τ_overtranslation + τ_undertranslation)
               + 1 · (τ_untranslated + τ_do-not-translate + τ_real-world-knowledge + τ_wrong-language)
               + 0.1 · τ_punctuation    (2)

The weights in this formula correspond to the recommended values in the MQM framework (Freitag et al., 2021a): weight = 5 for major errors, weight = 1 for minor errors and weight = 0.1 for fluency/punctuation errors. The ACES-Score results can be seen in the last row of Table 4.
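The score is straightforward to compute from the per-category tau values; as a small sanity check, the sketch below reproduces the COMET column of Table 4 (9.833, up to rounding) when fed the corresponding values.

```python
ACES_WEIGHTS = {
    "addition": 5, "omission": 5, "mistranslation": 5,
    "overtranslation": 5, "undertranslation": 5,
    "untranslated": 1, "do not translate": 1,
    "real-world knowledge": 1, "wrong language": 1,
    "punctuation": 0.1,
}

def aces_score(category_taus: dict) -> float:
    """MQM-severity-weighted sum of Kendall tau-like correlations over
    the 10 top-level ACES categories."""
    return sum(ACES_WEIGHTS[cat] * tau for cat, tau in category_taus.items())

# Example: the COMET column of Table 4 yields the reported 9.833.
comet_taus = {
    "addition": 0.349, "omission": 0.704, "mistranslation": 0.186,
    "overtranslation": 0.27, "undertranslation": 0.08,
    "untranslated": 0.709, "do not translate": 0.88,
    "real-world knowledge": 0.195, "wrong language": 0.071,
    "punctuation": 0.328,
}
assert abs(aces_score(comet_taus) - 9.833) < 0.01
```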
Category | BLEU | CHRF | COMET | ENSEMBLE | COMET+aug | COMET+SL-feat. | COMET+WL-tags
major (weight = 5)
addition | 0.748 | 0.644 | 0.349 | 0.367 | 0.520 | 0.443 | 0.427
omission | 0.427 | 0.784 | 0.704 | 0.828 | 0.706 | 0.724 | 0.666
mistranslation | -0.296 | 0.027 | 0.186 | 0.216 | 0.255 | 0.148 | 0.189
overtranslation | -0.838 | -0.696 | 0.270 | 0.176 | 0.308 | 0.086 | 0.304
undertranslation | -0.856 | -0.592 | 0.080 | -0.044 | 0.200 | -0.180 | 0.120
minor (weight = 1)
untranslated | 0.786 | 0.928 | 0.709 | 0.894 | 0.580 | 0.618 | 0.686
do not translate | 0.580 | 0.960 | 0.880 | 0.900 | 0.780 | 0.900 | 0.840
real-world knowledge | -0.906 | -0.307 | 0.195 | 0.176 | 0.202 | 0.109 | 0.162
wrong language | 0.659 | 0.693 | 0.071 | 0.052 | 0.159 | 0.185 | 0.087
fluency/punctuation (weight = 0.1)
punctuation | 0.658 | 0.803 | 0.328 | 0.699 | 0.377 | 0.323 | 0.339
ACES-Score | -2.89 | 3.189 | 9.833 | 9.807 | 11.704 | 7.949 | 10.339

Table 4: Kendall's tau-like correlations for the 10 top-level categories in the ACES challenge set.

Group | BLEU | CHRF | COMET | ENSEMBLE | COMET+aug | COMET+SL-feat. | COMET+WL-tags
EN-XX | 0.034 | 0.329 | 0.201 | 0.340 | 0.256 | 0.183 | 0.206
XX-EN | -0.370 | -0.046 | 0.283 | 0.260 | 0.329 | 0.222 | 0.285
XX-YY | -0.124 | 0.097 | 0.105 | 0.115 | 0.204 | 0.088 | 0.104
AVG | -0.153 | 0.127 | 0.196 | 0.238 | 0.263† | 0.164 | 0.198

Table 5: Kendall's tau-like correlation on the ACES challenge set. † in the averaged scores indicates a statistically significant difference to the other metrics.

Overall, as the ACES challenge set contains a larger set of translation errors and goes beyond simple perturbations to more nuanced error categories such as real-world knowledge and discourse-level errors, the performance scores and best metrics vary largely depending on the category. Interestingly, CHRF seems to outperform the other metrics especially in the categories that do not relate so much to replacements in the reference translation, but rather to fully or partially wrong language (or punctuation) use. We note that these are largely cases that are neither frequently found in MT training data nor considered in previously proposed data augmentation approaches, which could explain why neural metrics are outperformed by baseline surface-level metrics here, even with the investigated robustness modifications. Hence, there seems to be room for further improvement in incorporating surface-based information into neural metrics and enabling them to pay more attention to n-gram overlap. For the error categories that depend on other perturbations, we can see that all robustness-oriented modifications to COMET improve performance compared to the vanilla model, with augmentation achieving significantly higher Kendall's tau correlations.

When looking at the overall picture and focusing on the ACES-Score, which weights the errors by their severity, only two methods outperform the baseline COMET model, namely COMET + aug and COMET + WL-tags, which achieve the best and second-best ACES-Score respectively. Since these two approaches are orthogonal to each other, a promising direction for future work is to explore options for combining them.

Note that the overall behavior of lexical and neural-based metrics corroborates the findings presented in the original ACES paper. We can confirm that in our experiments the worst-performing metric is BLEU, which is expected. However, it is hard to single out the best-performing metric based only on the ACES-Score; the purpose of this analysis is rather to find interesting trends and particular issues that some methods handle better than others.

Since the ACES dataset encompasses a high number of LPs, we aggregate the results into three groups: EN-XX (out-of-English), XX-EN (into-English) and XX-YY (LPs without English). We also report the balanced average across all language pairs (AVG). Results in Table 5 show that the method which includes augmented data during training achieves higher performance compared to the other proposed options. As for additional sentence-level or word-level information, COMET + WL-tags slightly improves the performance of the baseline COMET across the EN-XX and XX-EN aggregations and beats the approach that uses SL features.

6 Conclusion and Future Work

In this paper, we presented several approaches that use interpretable string-based metrics to improve the robustness of recent neural-based metrics such as COMET. There are various ways of combining these methods: ensembling metrics, incorporating sentence-level features, or using word-level information coming from alignments between the hypothesis and the reference.
We observed that adding small changes to the architecture of COMET, either by using sentence-level features based on BLEU and CHRF scores or by incorporating word-level tags for the hypothesis, can lead to competitive performance gains. To showcase the effectiveness of our proposed approaches, we evaluated them on the most recent MQM test set, which covers multiple domains and language pairs, as well as on the challenge sets introduced in the WMT 2022 Metrics shared task, with encouraging results.

Our proposed approaches are likely complementary to each other, as well as to the data augmentation method we compare against (COMET + aug). An interesting direction for future work is to further study the impact of using word-level tags of the hypothesis in ways not covered in this paper, e.g., in combination with augmentation approaches.

Acknowledgements

This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by EU's Horizon Europe Research and Innovation Actions (UTTER, contract 101070631), by the P2020 project MAIA (LISBOA-01-0247-FEDER-045909), by the Portuguese Recovery and Resilience Plan through project C645008882-00000055 (NextGenAI, Center for Responsible AI), and by Fundação para a Ciência e Tecnologia through contract UIDB/50008/2020.

References

[Abdi et al.2019] Abdi, Asad, Siti Mariyam Shamsuddin, Shafaatunnur Hasan, and Jalil Piran. 2019. Deep learning-based sentiment classification of evaluative text based on multi-feature fusion. Information Processing & Management, 56(4):1245–1259.

[Alves et al.2022] Alves, Duarte, Ricardo Rei, Ana C Farinha, José G. C. de Souza, and André F. T. Martins. 2022. Robust MT evaluation with sentence-level multilingual augmentation. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 469–478, Abu Dhabi, United Arab Emirates (Hybrid), December. Association for Computational Linguistics.

[Amrhein and Sennrich2022] Amrhein, Chantal and Rico Sennrich. 2022. Identifying weaknesses in machine translation metrics through minimum Bayes risk decoding: A case study for COMET. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, Online, November. Association for Computational Linguistics.

[Amrhein et al.2022] Amrhein, Chantal, Nikita Moghe, and Liane Guillou. 2022. ACES: Translation accuracy challenge sets for evaluating machine translation metrics. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 479–513, Abu Dhabi, United Arab Emirates (Hybrid), December. Association for Computational Linguistics.

[Avramidis and Macketanz2022] Avramidis, Eleftherios and Vivien Macketanz. 2022. Linguistically motivated evaluation of machine translation metrics based on a challenge set. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 514–529, Abu Dhabi, United Arab Emirates (Hybrid), December. Association for Computational Linguistics.

[Baltrušaitis et al.2018] Baltrušaitis, Tadas, Chaitanya Ahuja, and Louis-Philippe Morency. 2018. Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):423–443.

[Bao et al.2023] Bao, Keqin, Yu Wan, Dayiheng Liu, Baosong Yang, Wenqiang Lei, Xiangnan He, Derek F Wong, and Jun Xie. 2023. Towards fine-grained information: Identifying the type and location of translation errors. arXiv preprint arXiv:2302.08975.
[Bhattacharyya et al.2022] Bhattacharyya, Pushpak, Rajen Chatterjee, Markus Freitag, Diptesh Kanojia, Matteo Negri, and Marco Turchi. 2022. Findings of the WMT 2022 shared task on automatic post-editing. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 109–117.

[Chen and Eger2022] Chen, Yanran and Steffen Eger. 2022. MENLI: Robust evaluation metrics from natural language inference. arXiv preprint arXiv:2208.07316.

[Chen et al.2022] Chen, Xiaoyu, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Zhengzhe Yu, Ting Zhu, Mengli Zhu, Ning Xie, Lizhi Lei, Shimin Tao, Hao Yang, and Ying Qin. 2022. Exploring robustness of machine translation metrics: A study of twenty-two automatic metrics in the WMT22 metric task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 530–540, Abu Dhabi, United Arab Emirates (Hybrid), December. Association for Computational Linguistics.

[Ding et al.2022] Ding, Ning, Sheng-wei Tian, and Long Yu. 2022. A multimodal fusion method for sarcasm detection based on late fusion. Multimedia Tools and Applications, 81(6):8597–8616.

[Elliott and Keller2014] Elliott, Desmond and Frank Keller. 2014. Comparing automatic evaluation measures for image description. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 452–457.

[Fomicheva et al.2020] Fomicheva, Marina, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. Transactions of the Association for Computational Linguistics, 8:539–555.

[Freitag et al.2021a] Freitag, Markus, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474.

[Freitag et al.2021b] Freitag, Markus, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondřej Bojar. 2021b. Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain. In Proceedings of the Sixth Conference on Machine Translation, pages 733–774, Online, November. Association for Computational Linguistics.

[Freitag et al.2022] Freitag, Markus, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and André F. T. Martins. 2022. Results of WMT22 metrics shared task: Stop using BLEU – neural metrics are better and more robust. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 46–68, Abu Dhabi, United Arab Emirates (Hybrid), December. Association for Computational Linguistics.

[Fu et al.2015] Fu, Zhikang, Bing Li, Jun Li, and Shuhua Wei. 2015. Fast film genres classification combining poster and synopsis. In Intelligence Science and Big Data Engineering. Image and Video Data Engineering: 5th International Conference, IScIDE 2015, Suzhou, China, June 14-16, 2015, Revised Selected Papers, Part I 5, pages 72–81. Springer.

[Gadzicki et al.2020] Gadzicki, Konrad, Razieh Khamsehashari, and Christoph Zetzsche. 2020. Early vs late fusion in multimodal convolutional neural networks. In 2020 IEEE 23rd International Conference on Information Fusion (FUSION), pages 1–6. IEEE.
[Graham et al.2013] Graham, Yvette, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33–41, Sofia, Bulgaria, August. Association for Computational Linguistics.

[Guo et al.2018] Guo, Lili, Longbiao Wang, Jianwu Dang, Linjuan Zhang, and Haotian Guan. 2018. A feature fusion method based on extreme learning machine for speech emotion recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2666–2670. IEEE.

[Gupta et al.2018] Gupta, Ankush, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.

[Karpinska et al.2022] Karpinska, Marzena, Nishant Raj, Katherine Thai, Yixiao Song, Ankita Gupta, and Mohit Iyyer. 2022. DEMETR: Diagnosing evaluation metrics for translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9540–9561, Abu Dhabi, United Arab Emirates, December. Association for Computational Linguistics.

[Lommel et al.2014] Lommel, Arle, Hans Uszkoreit, and Aljoscha Burchardt. 2014. Multidimensional quality metrics (MQM): A framework for declaring and describing translation quality metrics. Tradumàtica, (12):455–463.

[Ma et al.2019] Ma, Qingsong, Johnny Wei, Ondřej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62–90, Florence, Italy, August. Association for Computational Linguistics.

[Moura et al.2020] Moura, João, Miguel Vera, Daan van Stigt, Fabio Kepler, and André F. T. Martins. 2020. IST-Unbabel participation in the WMT20 quality estimation shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 1029–1036, Online, November. Association for Computational Linguistics.

[Papineni et al.2002] Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.

[Petridis et al.2017] Petridis, Stavros, Zuwei Li, and Maja Pantic. 2017. End-to-end visual speech recognition with LSTMs. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2592–2596. IEEE.

[Popović2015] Popović, Maja. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal, September. Association for Computational Linguistics.

[Rei et al.2020] Rei, Ricardo, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online, November. Association for Computational Linguistics.
[Rei et al.2022a] Rei, Ricardo, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins. 2022a. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578–585, Abu Dhabi, United Arab Emirates (Hybrid), December. Association for Computational Linguistics.

[Rei et al.2022b] Rei, Ricardo, Marcos Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C Farinha, Christine Maroti, José G. C. de Souza, Taisiya Glushkova, Duarte Alves, Luisa Coheur, Alon Lavie, and André F. T. Martins. 2022b. CometKiwi: IST-Unbabel 2022 submission for the quality estimation shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 634–645, Abu Dhabi, United Arab Emirates (Hybrid), December. Association for Computational Linguistics.

[Sellam et al.2020] Sellam, Thibault, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online, July. Association for Computational Linguistics.

[Snover et al.2009a] Snover, Matthew, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2009a. Fluency, adequacy, or HTER? Exploring different human judgments with a tunable MT metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 259–268.

[Snover et al.2009b] Snover, Matthew G, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2009b. TER-Plus: paraphrase, semantic, and alignment enhancements to translation edit rate. Machine Translation, 23:117–127.

[Wang et al.2021] Wang, Ke, Yangbin Shi, Jiayi Wang, Yuqi Zhang, Yu Zhao, and Xiaolin Zheng. 2021. Beyond glass-box features: Uncertainty quantification enhanced quality estimation for neural machine translation. arXiv preprint arXiv:2109.07141.

[Wu et al.2022] Wu, Zhanglin, Min Zhang, Ming Zhu, Yinglu Li, Ting Zhu, Hao Yang, Song Peng, and Ying Qin. 2022. KG-BERTScore: Incorporating knowledge graph into BERTScore for reference-free machine translation evaluation. In 11th International Joint Conference on Knowledge Graphs, IJCKG 2022. To be published.

[Zerva et al.2021] Zerva, Chrysoula, Daan Van Stigt, Ricardo Rei, Ana C Farinha, Pedro Ramos, José GC de Souza, Taisiya Glushkova, Miguel Vera, Fabio Kepler, and André FT Martins. 2021. IST-Unbabel 2021 submission for the quality estimation shared task. In Proceedings of the Sixth Conference on Machine Translation, pages 961–972.

[Zerva et al.2022] Zerva, Chrysoula, Taisiya Glushkova, Ricardo Rei, and André F. T. Martins. 2022. Disentangling uncertainty in machine translation evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8622–8641, Abu Dhabi, United Arab Emirates, December. Association for Computational Linguistics.

A Model Implementation and Parameters

Table 8 shows the hyperparameters used to train the following prediction models: COMET, COMET + SL-feat. and COMET + WL-tags.
For the baseline we used the code available at https://github.com/Unbabel/COMET and trained the model on WMT17–WMT20 DA data (in the tables we refer to it as COMET).

For the ENSEMBLE, we tune three weights on the development set with grid search, optimizing Kendall's tau correlations (see Table 6).

Metric | BLEU | CHRF | COMET
Weight | 0.02513 | 0.04523 | 0.92965

Table 6: Tuned weights on the MQM 2021 set for the weighted ensemble.
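A minimal sketch of this grid search is given below; it assumes the standardized metric scores are stacked column-wise alongside dev-set MQM scores, and the grid step is an illustrative choice rather than the setting used for Table 6.

```python
import itertools
import numpy as np
from scipy.stats import kendalltau

def tune_ensemble_weights(z_scores: np.ndarray, mqm: np.ndarray, step=0.05):
    """Grid search over convex (BLEU, CHRF, COMET) weights, maximizing
    Kendall's tau against dev-set MQM scores.

    z_scores: (n_segments, 3) array of standardized metric scores,
    with columns ordered as BLEU, CHRF, COMET.
    """
    best_weights, best_tau = None, -1.0
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    for w_bleu, w_chrf in itertools.product(grid, grid):
        w_comet = 1.0 - w_bleu - w_chrf
        if w_comet < 0:  # keep the weights convex
            continue
        combined = z_scores @ np.array([w_bleu, w_chrf, w_comet])
        tau, _ = kendalltau(combined, mqm)
        if tau > best_tau:
            best_weights, best_tau = (w_bleu, w_chrf, w_comet), tau
    return best_weights, best_tau
```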
Bold numbers indicate the best result for each domain in each\nlanguage pair.\nBLEU CHRF COMET ENSEMBLE COMET + aug + SL-feat. + WL-tagsEN-DEConversation 0.262 0.337 0.401 0.403 0.385 0.404 0.409\nEcommerce 0.235 0.278 0.421 0.411 0.403 0.416 0.417\nNews 0.224 0.273 0.478 0.472 0.438 0.471 0.486\nSocial 0.173 0.222 0.389 0.383 0.361 0.386 0.384EN-RUConversation 0.183 0.230 0.400 0.397 0.427 0.389 0.428\nEcommerce 0.276 0.303 0.502 0.501 0.514 0.499 0.528\nNews 0.171 0.224 0.499 0.492 0.490 0.514 0.495\nSocial 0.212 0.186 0.425 0.423 0.455 0.460 0.483ZH-ENConversation 0.166 0.211 0.375 0.369 0.385 0.370 0.389\nEcommerce 0.241 0.259 0.449 0.448 0.467 0.459 0.487\nNews 0.063 0.057 0.364 0.352 0.393 0.373 0.394\nSocial 0.219 0.256 0.424 0.421 0.418 0.419 0.439\nA VG 0.202 0.236 0.427 0.423 0.428 0.430 0.445\nTable 10: Spearman correlation on high resource language pairs using the MQM annotations for Conversation, Ecommerce,\nNews and Social domains collected for the WMT 2022 Metrics Task. Bold numbers indicate the best result for each domain in\neach language pair.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |