metadata  dict
paper  dict
review  dict
citation_count  int64  0  0
normalized_citation_count  int64  0  0
cited_papers  listlengths  0  0
citing_papers  listlengths  0  0
{ "id": "C3p_Rj0TBq", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=C3p_Rj0TBq", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_P-ooh65kS", "year": null, "venue": "EC2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=_P-ooh65kS", "arxiv_id": null, "doi": null }
{ "title": "Prophet Inequalities for I.I.D. Random Variables from an Unknown Distribution.", "authors": [ "José R. Correa", "Paul Dütting", "Felix A. Fischer", "Kevin Schewior" ], "abstract": "A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables X1, ..., Xn drawn independently from a distribution F, the goal is to choose a stopping time τ so as to maximize α such that for all distributions F we have E [Xτ]≥α• E [maxt Xt]. What makes this problem challenging is that the decision whether τ", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "2fqHDUd9_B", "year": null, "venue": "ECAL2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=2fqHDUd9_B", "arxiv_id": null, "doi": null }
{ "title": "Shepherding with robots that do not compute.", "authors": [ "Anil Özdemir", "Melvin Gauci", "Roderich Gross" ], "abstract": "We examine the problem solving capabilities of swarms of computation- and memory-free agents. Each agent has a single line-of-sight sensor providing two bits of information. The agent maps this information directly onto constant motor commands. In previous work, we showed that such simplistic agents can solve tasks requiring them to organize spatially (multi-robot aggregation and circle formation) and manipulate passive objects (clustering). In the present work, we address the shepherding problem, where the computation- and memory-free agents—the shepherds—are tasked to gather and move a group of dynamic agents—the sheep—towards a pre-defined goal. The shepherds and sheep are modelled as e-puck robots using computer simulations. Our findings show that the shepherding problem does not fundamentally require arithmetic computation or memory to be solved. The obtained controller solution is robust with respect to sensory noise, and copes well with changes in the number of sheep.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "JPAUFf9u4Q", "year": null, "venue": "E2DC@e-Energy 2016", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=JPAUFf9u4Q", "arxiv_id": null, "doi": null }
{ "title": "Learning-based power prediction for data centre operations via deep neural networks", "authors": [ "Yuanlong Li", "Han Hu", "Yonggang Wen", "Jun Zhang" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lri_iAbpn_r", "year": null, "venue": "MIDL 2023 Oral", "pdf_link": "/pdf/ef254aeccd7177afc23382bcbd89f741046b9132.pdf", "forum_link": "https://openreview.net/forum?id=lri_iAbpn_r", "arxiv_id": null, "doi": null }
{ "title": "E(3) x SO(3) - Equivariant Networks for Spherical Deconvolution in Diffusion MRI", "authors": [ "Axel Elaldi", "Guido Gerig", "Neel Dey" ], "abstract": "We present Roto-Translation Equivariant Spherical Deconvolution (RT-ESD), an $E(3)\\times SO(3)$ equivariant framework for sparse deconvolution of volumes where each voxel contains a spherical signal. Such 6D data naturally arises in diffusion MRI (dMRI), a medical imaging modality widely used to measure microstructure and structural connectivity. As each dMRI voxel is typically a mixture of various overlapping structures, there is a need for blind deconvolution to recover crossing anatomical structures such as white matter tracts. Existing dMRI work takes either an iterative or deep learning approach to sparse spherical deconvolution, yet it typically does not account for relationships between neighboring measurements. This work constructs equivariant deep learning layers which respect to symmetries of spatial rotations, reflections, and translations, alongside the symmetries of voxelwise spherical rotations. As a result, RT-ESD improves on previous work across several tasks including fiber recovery on the DiSCo dataset, deconvolution-derived partial volume estimation on real-world in vivo human brain dMRI, and improved downstream reconstruction of fiber tractograms on the Tractometer dataset. Our implementation is available at \\url{https://github.com/AxelElaldi/e3so3_conv}.", "keywords": [ "Equivariance", "Diffusion", "MRI", "fODF", "Geometric Deep Learning", "Spherical Deep Learning" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "iFrMZGUVQfp", "year": null, "venue": "eBISS 2011", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=iFrMZGUVQfp", "arxiv_id": null, "doi": null }
{ "title": "The GoOLAP Fact Retrieval Framework", "authors": [ "Alexander Löser", "Sebastian Arnold", "Tillmann Fiehn" ], "abstract": "We discuss the novel problem of supporting analytical business intelligence queries over web-based textual content, e.g., BI-style reports based on 100.000’s of documents from an ad-hoc web search result. Neither conventional search engines nor conventional Business Intelligence and ETL tools address this problem, which lies at the intersection of their capabilities. Three recent developments have the potential to become key components of such an ad-hoc analysis platform: significant improvements in cloud computing query languages, advances in self-supervised keyword generation techniques and powerful fact extraction frameworks. We will give an informative and practical look at the underlying research challenges in supporting ”Web-Scale Business Analytics” applications that we met when building GoOLAP, a system that already enjoys a broad user base and over 6 million objects and facts.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "en2SEuECnpg", "year": null, "venue": "EAIA 1990", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=en2SEuECnpg", "arxiv_id": null, "doi": null }
{ "title": "Three Lectures on Situation Theoretic Grammar", "authors": [ "Robin Cooper" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "HiHWMiLP035", "year": null, "venue": "ICLR 2022 Submitted", "pdf_link": "/pdf/b83bf8fb27156c92d7d4fa61ebe900d6cb9cd5f7.pdf", "forum_link": "https://openreview.net/forum?id=HiHWMiLP035", "arxiv_id": null, "doi": null }
{ "title": "E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning", "authors": [ "Alperen Gormez", "Erdem Koyuncu" ], "abstract": "State-of-the-art neural networks with early exit mechanisms often need considerable amount of training and fine-tuning to achieve good performance with low computational cost. We propose a novel early exit technique, E$^2$CM, based on the class means of samples. Unlike most existing schemes, E$^2$CM does not require gradient-based training of internal classifiers. This makes it particularly useful for neural network training in low-power devices, as in wireless edge networks. In particular, given a fixed training time budget, E$^2$CM achieves higher accuracy as compared to existing early exit mechanisms. Moreover, if there are no limitations on the training time budget, E$^2$CM can be combined with an existing early exit scheme to boost the latter's performance, achieving a better trade-off between computational cost and network accuracy. We also show that E$^2$CM can be used to decrease the computational cost in unsupervised learning tasks.", "keywords": [ "class means", "early exit", "efficient neural networks" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "zJDHFzNWGCF", "year": null, "venue": "EAMT 2010", "pdf_link": "https://aclanthology.org/2010.eamt-1.17.pdf", "forum_link": "https://openreview.net/forum?id=zJDHFzNWGCF", "arxiv_id": null, "doi": null }
{ "title": "Integration of statistical collocation segmentations in a phrase-based statistical machine translation system", "authors": [ "Marta R. Costa-jussà", "Vidas Daudaravicius", "Rafael E. Banchs" ], "abstract": "Marta R. Costa-jussa, Vidas Daudaravicius, Rafael E. Banchs. Proceedings of the 14th Annual conference of the European Association for Machine Translation. 2010.", "keywords": [], "raw_extracted_content": "Integration of statistical collocation segmentations in a phrase-based\ns\ntatistical machine translationsystem\nMarta R. Costa-juss `a∗, VidasDaudaravicius†and Rafael E. Banchs∗\n∗Barcelona Media Research Center\nAv Diagonal, 177, 9th floor, 08018 Barcelona, Spain\n{marta.ruiz,rafael.banchs}@barcelonamedia.org\n†Faculty of Informatics, Vytautas Magnus University\nVileikos 8, Kaunas, Lithuania\[email protected]\nAbstract\nThis study evaluates the impact of inte-\ngrating two different collocation segmen-\ntationsmethodsinastandardphrase-based\nstatistical machine translation approach.\nThe collocation segmentation techniques\nare implemented simultaneously in the\nsource and target side. Each resulting col-\nlocation segmentation is used to extract\ntranslationunits. Experimentsarereported\nin the English-to-Spanish Bible task and\npromising results (an improvement over\n0.7 BLEU absolute) are achieved in trans-\nlation quality.\n1 Introduction\nMachine Translation (MT) investigates the use of\ncomputer software to translate text or speech from\nonelanguagetoanother. Statisticalmachinetrans-\nlation (SMT) has become one of the most popu-\nlar MT approaches given the combination of sev-\neral factors. Among them, it is relatively straight-\nforward to build an SMT system given the freely\navailable software and, additionally, the system\nconstruction does not require of any language ex-\nperts.\nNowadays, one of the most popular SMT ap-\nproaches is the phrase-based system (Koehn et al.,\n2003) which implements a maximum entropy ap-\nproach based on a combination of feature func-\ntions. The Moses system (Koehn et al., 2007)\nis an implementation of this phrase-based ma-\nchine translation approach. An input sentence\nis first split into sequences of words (so-called\nphrases),whicharethenmappedone-to-onetotar-\nget phrases using a large phrase translation table.\nc/circlecopyrt2010 European Association forMachine Translation.Introducing chunking in the standard phrase-\nbased SMT system is a relatively frequent\nstudy (Zhou et al., 2004; Wang et al., 2002; Ma\net al., 2007). Chunking may be used either to\nimprove reordering or to enhance the translation\ntable. For example, authors in (Zhang et al.,\n2007) present a shallow chunking based on syn-\ntactic information and they use the chunks to re-\norder phrases. Other studies report the impact on\nthe quality of word alignment and in translation\nafter using various types of multi-word expres-\nsions which can be regarded as a type of chunks,\nsee (Lambert and Banchs, 2006) or sub-sentential\nsequences (Macken et al., 2008; Groves and Way,\n2005). Chunking is usually performed on a syn-\ntacticorsemanticbasiswhichforcestohaveatool\nfor parsing or similar. We propose to introduce\nthe collocation segmentation developed by (Dau-\ndaravicius, 2009) which is language independent.\nThis collocation segmentation was applied in key-\nword assigment task and a high classification im-\nprovement was achieved (Daudaravicius, 2010).\nWe use this collocation segmentation technique\nto enrich the phrase translation table. 
The phrase translation table is composed of phrase units which are generally extracted from a word-aligned parallel corpus. Given this word alignment, an extraction of contiguous phrases is carried out (Zens et al., 2002); specifically, all extracted phrases fulfill the following restriction: all source (target) words within a phrase are aligned only to target (source) words within the same phrase.

This paper is organized as follows. First, we detail the different collocation segmentation techniques proposed. Secondly, we give a brief description of the phrase-based SMT system and of how we introduce the collocation segmentation to improve it. Then, we present experiments performed with a standard phrase-based system, comparing the phrase extraction. Finally, we present the conclusions.

2 Collocation segmentation

The Dice score is used to measure the association strength of two words. This score is used, for instance, in the collocation compiler XTract (Smadja, 1993) and in the lexicon extraction system Champollion (Smadja and Hatzivassiloglou, 1996). Dice is defined as follows:

$Dice(x; y) = \frac{2 f(x, y)}{f(x) + f(y)}$

where $f(x, y)$ is the frequency of co-occurrence of $x$ and $y$, and $f(x)$ and $f(y)$ are the frequencies of occurrence of $x$ and $y$ anywhere in the text. If $x$ and $y$ tend to occur in conjunction, their Dice score will be high. The text is seen as a changing curve of word associativity values (see Figure 1 and Figure 2).

Collocation segmentation is the process of detecting the boundaries of collocation segments within a text. A collocation segment is a piece of text between boundaries. The boundaries are set in two steps. First, we set a boundary between two words of a text where the Dice value is lower than a threshold. The threshold value is set manually and is kept at a Dice value of exp(-8) in our experiment CS-1 (i.e., Collocation Segmentation type 1), and at a Dice value of exp(-4) in our experiment CS-2 (i.e., Collocation Segmentation type 2). This decision was based on the shape of the curve found in (Daudaravicius and Marcinkeviciene, 2004). The threshold for CS-1 is kept very low, so many weak word associations are considered. The threshold for CS-2 is high, to keep together only strongly connected words. The higher threshold value makes shorter collocation segments. Shorter collocation segments are more confident collocations, so we might expect better translation results from them. Nevertheless, the results of our study show that longer collocation segments are preferable. Second, we introduce an average minimum law (AML). The average minimum law is applied to three adjacent Dice values (i.e., four words). The law is expressed as follows:

$\frac{Dice(x_{i-2}, x_{i-1}) + Dice(x_i, x_{i+1})}{2} > Dice(x_{i-1}, x_i) \Rightarrow$ a boundary is set between $x_{i-1}$ and $x_i$

The boundary of a segment is thus set at the point where the value of collocability is lower than the average of the preceding and following values of collocability. An example of setting the boundaries for an English sentence is presented in Figure 1, which shows a sentence and the Dice values between word pairs. Almost all values are higher than an arbitrarily chosen threshold level. Most of the boundaries in the example sentence are made by the use of the average minimum law. This law identifies segment or collocation boundaries by the change of the Dice value. This approach is new and different from other widely used statistical methods (Tjong Kim Sang and Buchholz, 2000).
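To make the two-step boundary rule above concrete, the following is a minimal Python sketch of the procedure as described in this section: corpus-level Dice scores, a manual threshold (exp(-8) for CS-1, exp(-4) for CS-2), and the average minimum law. It is an illustration only, not the authors' implementation; all function names, the toy data and the handling of sentence edges are assumptions of this sketch.

```python
import math
from collections import Counter

def collocation_counts(corpus_tokens):
    """Corpus-level unigram and adjacent-bigram frequencies."""
    uni = Counter(corpus_tokens)
    bi = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return uni, bi

def dice(x, y, uni, bi):
    """Dice(x; y) = 2 f(x, y) / (f(x) + f(y))."""
    denom = uni[x] + uni[y]
    return 2.0 * bi[(x, y)] / denom if denom else 0.0

def segment(sentence, uni, bi, threshold=math.exp(-8)):
    """Set a boundary before word i when Dice(w[i-1], w[i]) falls below the
    threshold, or when the average minimum law holds, i.e. the average of the
    two neighbouring Dice values exceeds Dice(w[i-1], w[i])."""
    d = [dice(x, y, uni, bi) for x, y in zip(sentence, sentence[1:])]
    boundaries = []
    for i in range(1, len(sentence)):           # candidate boundary before w[i]
        below = d[i - 1] < threshold
        aml = 2 <= i <= len(d) - 1 and (d[i - 2] + d[i]) / 2.0 > d[i - 1]
        if below or aml:
            boundaries.append(i)
    segments, start = [], 0
    for b in boundaries + [len(sentence)]:
        segments.append(" ".join(sentence[start:b]))
        start = b
    return segments

# Toy usage; in the paper the frequencies come from the full training corpus.
corpus = ("in the beginning god created the heaven and the earth "
          "and god saw that it was good").split()
uni, bi = collocation_counts(corpus)
print(segment("and god saw the heaven and the earth".split(), uni, bi))
```

On this toy data the AML, not the threshold, produces most boundaries, mirroring the observation made above for Figure 1.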
For instance, the general method used by Choueka (Choueka, 1988) is the following: for each length n (1 ≤ n ≤ 6), produce all the word sequences of length n and sort them by frequency; then impose a threshold frequency of 14. Xtract is designed to extract significant bigrams, and then expands 2-grams to n-grams (Smadja, 1993). Lin (Lin, 1998) extends the collocation extraction methods with syntactic dependency triples. Such collocation extraction methods operate at the dictionary level: the result of the process is a dictionary of collocations. Our collocation segmentation is performed within a text, and the result of the process is a segmented text (see Figure 3).

The segmented text could later be used to create a dictionary of collocations. Such a dictionary accepts all collocation segments. The main difference from the Choueka and Smadja methods is that our proposed method accepts all collocations and no significance tests for collocations are performed. The main advantage of this segmentation is the ability to perform collocation segmentation using plain corpora only; no manually segmented corpora or other databases and language processing tools are required. Thus, this approach could be used successfully in many NLP tasks such as statistical machine translation, information extraction, information retrieval, etc.

The disadvantage of collocation segmentation is that the segments do not always conform to correct grammatical and lexical phrases. E.g., in Figure 1 an appropriate segmentation of the consecutive set of words "on the seventh day" would give the segments "on" and "the seventh day", but the collocation segmentation produces the segments "on the" and "seventh day". This happens because we have no extra information about the structure of grammatical phrases.

Figure 1: The segment boundaries of the English sentence.
Figure 2: The segment boundaries of the Spanish sentence.

On the other hand, it is important to notice that the collocation segmentation of the same translated text is similar across languages, even if the word or phrase order is different (Daudaravicius, 2010). Therefore, even if collocation segments are not grammatically well formed, the collocation segments are more or less symmetrical for different languages. The same sentence from the Bible corpus is segmented for both languages and the result is shown in Figures 1 and 2. As future work, it is necessary to make a thorough evaluation of the conformity of the proposed collocation segmentation method to phrase-based segmentation by using parsers.

3 Phrase-based SMT system

The basic idea of phrase-based translation is to segment the given source sentence into units (hereafter called phrases), then translate each phrase and finally compose the target sentence from these phrase translations.

Basically, a bilingual phrase is a pair of m source words and n target words. For extraction from a bilingual word-aligned training corpus, two additional constraints are considered: words are consecutive, and they are consistent with the word alignment matrix.

Given the collected phrase pairs, the phrase translation probability distribution is commonly estimated by relative frequency in both directions.

The translation model is combined with the following six additional feature functions: the target language model, the word and phrase bonuses, the source-to-target and target-to-source lexicon models and the reordering model. These models are optimized in the decoder following the procedure described in http://www.statmt.org/jhuws/ .
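The consistency constraint used for phrase extraction (Zens et al., 2002; Koehn et al., 2003) can be illustrated with the short sketch below, which enumerates contiguous phrase pairs from a single word-aligned sentence pair. This is a simplified illustration, not the Moses extractor: handling of unaligned words is omitted, and the toy alignment is invented for the example.

```python
def extract_phrase_pairs(src, tgt, alignment, max_len=10):
    """Enumerate contiguous phrase pairs consistent with a word alignment:
    every link touching the source span must land inside the projected
    target span, and vice versa (simplified; unaligned words are ignored)."""
    pairs = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(len(src), i1 + max_len)):
            # Target positions linked to the source span [i1, i2].
            linked = [j for (i, j) in alignment if i1 <= i <= i2]
            if not linked:
                continue
            j1, j2 = min(linked), max(linked)
            if j2 - j1 + 1 > max_len:
                continue
            # Consistency: no link from inside the target span leaves the source span.
            if all(i1 <= i <= i2 for (i, j) in alignment if j1 <= j <= j2):
                pairs.append((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
    return pairs

# Toy word-aligned sentence pair (0-based links: source index -> target index).
src = "in the sight of god".split()
tgt = "delante de dios".split()
alignment = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 2)]
for s, t in extract_phrase_pairs(src, tgt, alignment):
    print(s, "|||", t)
```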
4 Integration of the collocation segmentation in the phrase-based SMT system

The collocation segmentation provides a new segmentation of the data. One straightforward approach is to use the collocation segments as words, and to build a new phrase-based SMT system from scratch, so that phrases are composed of collocation segments. However, we have found that this approach does not yield better results. The reason for the worse results could be an insufficient amount of data to build a translation table with reliable statistics. The collocation segmentation increases the size of the dictionary by more than 5 times (Daudaravicius, 2010), and we need a sufficiently large corpus to obtain better results than the baseline. But the size of parallel corpora is limited by the number of texts we are able to gather. Therefore, we propose to integrate collocation segments into a standard SMT system: instead of building a new SMT system from scratch, we enrich the base SMT system with collocation segments.

Figure 3: The collocation segmentation of the beginning of the Bible.

In this work, we integrate the collocation segmentation as follows.

1. First, we build a baseline phrase-based system, which is computed as reported in the section above.

2. Second, we build a collocation-based system which uses collocation segments as words. The main difference of this system is that phrases are composed of collocations instead of words.

3. Third, we convert the set of collocation-based phrases (computed in step 2) into a set of phrases composed of words. For example, the collocation-based phrase "in the sight of ||| delante", in which the source side is a single collocation segment, is converted into the phrase "in the sight of ||| delante", in which the source side consists of four separate words.

4. Fourth, we consider the union of the baseline phrase-based extracted phrases (computed in step 1) and the collocation-based extracted phrases (computed in step 2 and modified in step 3). That is, the set of standard phrases is combined with the set of modified collocation phrases.

5. Finally, the phrase translation table is computed over the concatenated set of extracted phrases. This phrase table contains the standard phrase-based models named in Section 3: relative frequencies, lexical probabilities and phrase bonus. Notice that some pairs of phrases can be generated by both extractions; these phrases will then have a higher score when computing the relative frequencies. The IBM probabilities are computed at the level of words.

Hereinafter, this approach will be referred to as the concatenate-based approach (CONCAT). Figure 4 shows an example of the phrase extraction.

The goal of integrating the collocation segmentation into the base SMT system is to introduce new phrases into the translation table and to smooth the relative frequencies of the translation phrases which appear in both segmentations. Additionally, the concatenation of the two translation tables makes it possible to highlight those translation phrases that are recognized in both translation tables. This allows 'voting' for the better translation phrases by adding a new feature function which is '1' if a phrase appears in both segmentations and '0' in the opposite case.
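Steps 3–5 can be sketched as follows. This is an illustrative outline only: the '_' marker for collocation segments, the function names and the toy phrase pairs are assumptions of this sketch, and the real system additionally estimates lexical probabilities, both translation directions and the phrase bonus within Moses.

```python
from collections import Counter

def split_segments(phrase, sep="_"):
    """Step 3: turn a phrase over collocation segments (e.g. 'in_the_sight_of')
    back into a plain word sequence ('in the sight of')."""
    return " ".join(phrase.replace(sep, " ").split())

def concat_phrase_tables(word_pairs, colloc_pairs):
    """Steps 4-5: take the union (as a multiset) of baseline phrase pairs and
    converted collocation-based pairs, then re-estimate relative frequencies
    p(t|s) over the concatenated set. Pairs extracted by both systems are
    counted twice, which boosts/smooths their relative frequency."""
    converted = [(split_segments(s), split_segments(t)) for s, t in colloc_pairs]
    all_pairs = list(word_pairs) + converted
    pair_counts = Counter(all_pairs)
    src_counts = Counter(s for s, _ in all_pairs)
    table = {(s, t): c / src_counts[s] for (s, t), c in pair_counts.items()}
    # Binary indicator: 1 if the pair was produced by both extractions.
    both = set(word_pairs) & set(converted)
    return table, both

# Toy usage with hypothetical extracted pairs.
baseline = [("in the sight of", "delante"), ("in the sight of", "a la vista de"), ("of", "de")]
colloc = [("in_the_sight_of", "delante"), ("seventh_day", "séptimo día")]
table, both = concat_phrase_tables(baseline, colloc)
print(table)
print(both)  # {('in the sight of', 'delante')}
```

The 'both' set corresponds to the proposed 'voting' feature function, which marks phrase pairs recognized by both segmentations.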
5 Experimental framework

The phrase-based system used in this paper is based on the well-known MOSES toolkit, which is nowadays considered a state-of-the-art SMT system (Koehn et al., 2007). The training and weight tuning procedures are explained in detail in the above-mentioned publication, as well as on the MOSES web page: http://www.statmt.org/moses/ .

5.1 Corpus statistics

Experiments were carried out on the English-to-Spanish Bible task, which has been proven to be a valid NLP resource (Chew et al., 2006). The main advantages of using this corpus are that it is the world's most translated book, with translations in over 2,100 languages (often, multiple translations per language) and easy availability, often in electronic form and in the public domain; it covers a variety of literary styles including narrative, poetry, and correspondence; great care is taken over the translations; it has a standard structure which allows parallel alignment on a verse-by-verse basis; and, perhaps surprisingly, its vocabulary appears to have a high rate of coverage (as much as 85%) of modern-day language. The Bible is small compared to many corpora currently used in computational linguistics research, but it still falls within the range of acceptability, given that other corpora of similar size are in use (see the IWSLT International Evaluation Campaign, http://mastarpj.nict.go.jp/IWSLT2009/).

Figure 4: Example of the phrase extraction process in the CONCAT approach. New phrases added by the collocation-based system are marked with a **.

Table 1 shows the main statistics of the data used, namely the number of sentences, words and vocabulary, for each language.

Table 1: Bible corpus: training, development and test data sets.
                          Spanish    English
Training     Sentences     28,887     28,887
             Tokens       781,113    848,776
             Types         28,178     13,126
Development  Sentences        500        500
             Tokens        13,312     14,562
             Types          2,879      2,156
Test         Sentences        500        500
             Tokens        13,170     14,537
             Types          2,862      2,095

5.2 Collocation segment statistics

Here we analyse the collocation segment statistics. Table 2 shows the number of tokens and types of collocation segments. We see that the number of types of collocation segments is up to around 6 times higher than the number of word types, and the increase is different for Spanish and English. The CS-1 segmentation increased the number of types for the Spanish training set by 4 times, and for English by 6.5 times. Therefore, the dictionaries for Spanish and English become comparable in size. This leads us to expect a better alignment, and that is indeed what we observe in our experiments. The CS-2 segmentation increased the number of types for the Spanish training set by 2 times, and for English by 2.8 times; these dictionaries are still of comparably different size. In Section 5.5 we show that the CS-1 segmentation provides the best results. This result may indicate that the initial number of types before alignment is an important feature: the numbers of types should be comparable in order to achieve the best alignment, and the best translation results afterwards.

Table 2: Tokens and types of collocation segments.
                           Spanish    English
Training     Sentences      28,887     28,887
             Tokens CS-1   407,505    456,608
             Types CS-1    109,521     84,789
             Tokens CS-2   524,916    549,585
             Types CS-2     57,893     37,030

This may explain why the CS-1 segmentation contributes to obtaining higher quality translations than the CS-2 segmentation, as will be shown in Section 5.5.

5.3 Experimental systems

We build four different systems: the phrase-based (PB) system, with two different phrase length limits, and the concatenate-based (CONCAT) SMT system, which has two versions, one for each type of segmentation presented above.

Phrase length is understood as the maximum number of words in either the source or the target part. In our experiments, the CONCAT systems concatenated the baseline system, which used phrases of up to 10 words, together with the units coming from the collocation segmentation, which was also limited to 10 segments. This collocation segmentation limit allowed for translation units of a maximum of 20 words. In order to make a fair comparison, we used two baseline systems, one with a maximum of 10 words (PB-10) and another with a maximum of 20 words (PB-20) per translation unit.

5.4 Translation unit analysis

This section analyses the translation units that were used in the test set (i.e., the highest scoring translation units found by the decoder).

Adding more phrases (in the PB-20 system) without any selection leads to a phrase table of 7M translation units, whereas with our CONCAT-1 proposal the phrase table contains 4.6M translation units and with CONCAT-2 it contains 5.3M translation units. That means a 35% reduction of the total translation unit vocabulary.

Table 3 shows the average and maximum length of the translation units used in the test set. The collocation segmentation influences the length of the translation phrases. Neither the CONCAT-1 nor the CONCAT-2 approach uses longer phrases on average; in fact, the segmentation reduces the average length of the translation unit. This result may be surprising, because a segmentation which uses chunks instead of words might be expected to increase the average length of the translation units. In the next section, we will see that using longer phrases does not improve the translation. Notice that the literature has already shown that using longer phrases does not provide better translation (Koehn et al., 2003).

5.5 Automatic translation evaluation

The translation performance of the four experimental systems is evaluated and shown in Table 4.

An indirect composition of phrases with the help of the segmentation yields better results than a straightforward composition of translation phrases from single words. However, adding phrases using the standard algorithm can lead to slightly worse translations (Koehn et al., 2003).

The best translation results were achieved by integrating collocation segmentation 1, which uses longer collocation segments, into the SMT system. This result shows that shorter collocations, i.e., more confident collocations, do not improve results. This could be due to the ability of the base SMT system to capture collocations in a similar way as collocation segmentation 2 does, whereas collocation segmentation 1 introduces longer collocations that the base SMT system is not able to capture. Thus, longer collocations improve the base SMT system more than shorter collocations.

The results also show that a higher average length of the translation phrases does not necessarily lead to better translations (see Table 3). The improvement in translation quality (when using the collocation segmentation) may indicate that short phrases coming from the collocation segmentation have a better association between words and lead to a better translation.
It is difficult to draw a conclusion about the importance of the average length of the phrases in the translation table. The average phrase length alone is not a reliable feature; it does not give important information and could mislead the conclusions. This is clearly seen in our results: the BLEU scores of PB-10 and CONCAT-2 are very close, but their average phrase lengths are quite different and lie on opposite sides of the CONCAT-1 value. Further studies could show which features can be used to describe the quality of the translation dictionary.

Table 3: Translation unit length statistics used in the test set.
                                PB-10   PB-20   CONCAT-1   CONCAT-2
Source phrase average length     2.51    2.56      2.36       2.27
Source phrase maximum length       10      20        10         16
Target phrase average length     2.32    2.34      2.13       2.05
Target phrase maximum length       10      20        10         10

Collocation segmentation is capable of introducing new translation units that are useful in the final translation system and of smoothing the relative frequencies of those units which were already in the baseline translation table. The improvement is of almost +0.6 BLEU points on the test set. Further experiments could be dedicated to investigating the separate improvements due to (1) new translation units and (2) smoothing (in case they give independent gains). From now on, the comparison is made between the best baseline system (PB-10) and the best CONCAT system (CONCAT-1), which obtained the best results in the automatic evaluation.

We found that a certain number of sentences produced the same output with different segmentation. When comparing the outputs of the best CONCAT and the best baseline (PB-10) systems, 165 sentences produced the same output (in most cases with different segmentation). The last row in Table 4 shows BLEU when evaluating only the sentences which were different (Subset-Test, 335 sentences). In this case, the BLEU improvement reaches +0.75.

Table 4: Translation results in terms of BLEU.
              PB-10   PB-20   CONCAT-1   CONCAT-2
Test          35.68   35.60     36.28      35.82
Subset-Test   33.65     –       34.40        –

5.6 Translation analysis

We performed a manual analysis of the translations, comparing 100 output sentences from the baseline and the CONCAT system.

No significant advantages of the baseline system were found, whereas the collocation segmentation improves translation quality in the following ways (only sentence subsegments are shown):

1. No removal of words.
Bas: llamó su nombre Noé:
+CS: llamó su nombre Noé, diciendo:
REF: llamó su nombre Noé, diciendo:

2. Better choice of prepositions.
Bas: declarará por juramento
+CS: declarará bajo juramento
REF: declarará bajo juramento

3. Better choice of translation units.
Bas: . ||| ;
+CS: . ||| .
REF: .

4. Better preservation of idiomaticity.
Bas: podrás comer pan
+CS: comerás pan
REF: comerás pan

5. Better selection of a phrase structure.
Bas: cuando él conoce
+CS: cuando él llegue a saberlo
REF: cuando él llegue a saberlo

6 Conclusions and further research

This work explored the feasibility of improving a standard phrase-based statistical machine translation system by using a novel collocation segmentation method for translation unit extraction. Experiments were carried out on the English-to-Spanish Bible corpus task. A small but significant gain in translation BLEU was obtained when combining these units with the standard set of phrases.

Future research in this area is envisioned in the following main directions: to study how the collocations learned on the Bible corpus differ from those learned on more general corpora; to improve the collocation segmentation quality in order to obtain more human-like translation unit segmentations; to explore the use of a specific feature function for helping the translation system to select translation units from both categories (collocation segments and conventional phrases) according to their relative importance at each decoding step; and to evaluate the impact of new translation units vs. smoothing.

7 Acknowledgements

This work has been partially funded by the Spanish Department of Education and Science through the Juan de la Cierva fellowship program and the BUCEADOR project (TEC2009-14094-C04-01). The authors also want to thank the Barcelona Media Innovation Centre for its support and permission to publish this research.

References

Chew, P. A., S. J. Verzi, T. L. Bauer, and J. T. McClain. 2006. Evaluation of the Bible as a resource for cross-language information retrieval. In Proceedings of the Workshop on Multilingual Language Resources and Interoperability, pages 68–74.

Choueka, Y. 1988. Looking for needles in a haystack, or locating interesting collocational expressions in large textual databases. In Proceedings of the RIAO Conference on User-Oriented Content-Based Text and Image Handling, pages 21–24, Cambridge, MA.

Daudaravicius, V. and R. Marcinkeviciene. 2004. Gravity counts for the boundaries of collocations. International Journal of Corpus Linguistics, 9(2):321–348.

Daudaravicius, V. 2009. Automatic identification of lexical units. An International Journal of Computing and Informatics. Special Issue Computational Linguistics.

Daudaravicius, V. 2010. The influence of collocation segmentation and top 10 items to keyword assignment performance. In 11th International Conference on Intelligent Text Processing and Computational Linguistics, Springer Verlag, LNCS, page 12, Iasi, Romania.

Groves, D. and A. Way. 2005. Hybrid data-driven models of machine translation. Machine Translation, 19(3):301–323.

Koehn, P., F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of the HLT-NAACL, pages 48–54, Edmonton.

Koehn, P., H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL, pages 177–180, Prague, Czech Republic, June.

Lambert, P. and R. Banchs. 2006. Grouping multi-word expressions according to part-of-speech in statistical machine translation. In Proceedings of the EACL, pages 9–16, Trento.

Lin, D. 1998. Extracting collocations from text corpora. In First Workshop on Computational Terminology, Montreal.

Ma, Y., N. Stroppa, and A. Way. 2007. Alignment-guided chunking. In Proc. of TMI 2007, pages 114–121, Skövde, Sweden.

Macken, L., E. Lefever, and V. Hoste. 2008. Linguistically-based sub-sentential alignment for terminology extraction from a bilingual automotive corpus. In Proceedings of COLING, pages 529–536, Manchester.

Smadja, F., K. R. McKeown, and V. Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1):1–38.

Smadja, F. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143–177.

Tjong Kim Sang, E. and S. Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. of CoNLL-2000 and LLL-2000, pages 127–132, Lisbon, Portugal.

Wang, W., J. Huang, M. Zhou, and C. Huang. 2002. Structure alignment using bilingual chunks. In Proc. of COLING 2002, Taipei.

Zens, R., F. J. Och, and H. Ney. 2002. Phrase-based statistical machine translation. In Jarke, M., J. Koehler, and G. Lakemeyer, editors, KI 2002: Advances in Artificial Intelligence, volume LNAI 2479, pages 18–32. Springer Verlag, September.

Zhang, Y., R. Zens, and H. Ney. 2007. Chunk-level reordering of source language sentences with automatically learned rules for statistical machine translation. In Proc. of the Human Language Technology Conf. (HLT-NAACL'06): Proc. of the Workshop on Syntax and Structure in Statistical Translation (SSST), pages 1–8, Rochester, April.

Zhou, Y., C. Zong, and X. Bo. 2004. Bilingual chunk alignment in statistical machine translation. In IEEE International Conference on Systems, Man and Cybernetics, volume 2, pages 1401–1406, Hague.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "5-G0TKKKDtG", "year": null, "venue": "ECAL2005", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=5-G0TKKKDtG", "arxiv_id": null, "doi": null }
{ "title": "Self-assembly on Demand in a Group of Physical Autonomous Mobile Robots Navigating Rough Terrain.", "authors": [ "Rehan O'Grady", "Roderich Groß", "Francesco Mondada", "Michael Bonani", "Marco Dorigo" ], "abstract": "Consider a group of autonomous, mobile robots with the ability to physically connect to one another (self-assemble). The group is said to exhibit functional self-assembly if the robots can choose to self-assemble in response to the demands of their task and environment [15]. We present the first robotic controller capable of functional self-assembly implemented on a real robotic platform. The task we consider requires a group of robots to navigate over an area of unknown terrain towards a target light source. If possible, the robots should navigate to the target independently. If, however, the terrain proves too difficult for a single robot, the robots should self-assemble into a larger group entity and collectively navigate to the target. We believe this to be one of the most complex tasks carried out to date by a team of physical autonomous robots. We present quantitative results confirming the efficacy of our controller. This puts our robotic system at the cutting edge of autonomous mobile multi-robot research.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "h6phZYPxeK6", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=h6phZYPxeK6", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "K2YM3d4StLF", "year": null, "venue": "EALS 2014", "pdf_link": "https://ieeexplore.ieee.org/iel7/7000192/7009492/07009514.pdf", "forum_link": "https://openreview.net/forum?id=K2YM3d4StLF", "arxiv_id": null, "doi": null }
{ "title": "A recurrent meta-cognitive-based Scaffolding classifier from data streams", "authors": [ "Mahardhika Pratama", "Jie Lu", "Sreenatha G. Anavatti", "José Antonio Iglesias" ], "abstract": "A novel incremental meta-cognitive-based Scaffolding algorithm is proposed in this paper crafted in a recurrent network based on fuzzy inference system termed recurrent classifier (rClass). rClass features a synergy between schema and scaffolding theories in the how-to-learn part, which constitute prominent learning theories of the cognitive psychology. In what-to-learn component, rClass amalgamates the new online active learning concept by virtue of the Bayesian conflict measure and dynamic sampling strategy, whereas the standard sample reserved strategy is incorporated in the when-to-learn constituent. The inference scheme of rClass is managed by the local recurrent network, sustained by the generalized fuzzy rule. Our thorough empirical study has ascertained the efficacy of rClass, which is capable of producing reliable classification accuracies, while retaining the amenable computational and memory burdens.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "HKbEFfD3v0Y", "year": null, "venue": "E2DC 2012", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=HKbEFfD3v0Y", "arxiv_id": null, "doi": null }
{ "title": "Modeling and Simulation of Data Center Energy-Efficiency in CoolEmAll", "authors": [ "Micha vor dem Berge", "Georges Da Costa", "Andreas Kopecki", "Ariel Oleksiak", "Jean-Marc Pierson", "Tomasz Piontek", "Eugen Volk", "Stefan Wesner" ], "abstract": "In this paper we present an overview of the CoolEmAll project which addresses the important problem of data center energy efficiency. To this end, CoolEmAll aims at delivering advanced simulation, visualization and decision support tools along with open models of data center building blocks to be used in simulations. Both building blocks and the toolkit will take into account aspects that have major impact on actual energy consumption such as cooling solutions, properties of applications, and workload and resource management policies. In the paper we describe the CoolEmAll approach, its expected results and an environment for their verification.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "3IbIwIUAxAY", "year": null, "venue": "EBCCSP 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9845455/9845499/09845664.pdf", "forum_link": "https://openreview.net/forum?id=3IbIwIUAxAY", "arxiv_id": null, "doi": null }
{ "title": "A toolbox for neuromorphic perception in robotics", "authors": [ "Julien Dupeyroux", "Stein Stroobants", "Guido C. H. E. de Croon" ], "abstract": "The third generation of artificial intelligence (AI) introduced by neuromorphic computing is revolutionizing the way robots and autonomous systems can sense the world, process the information, and interact with their environment. Research towards fulfilling the promises of high flexibility, energy efficiency, and robustness of neuromorphic systems is widely supported by software tools for simulating spiking neural networks, and hardware integration (neuromorphic processors). Yet, while efforts have been made on neuromorphic vision (event-based cameras), it is worth noting that most of the sensors available for robotics remain inherently incompatible with neuromorphic computing, where information is encoded into spikes. To facilitate the use of traditional sensors, we need to convert the output signals into streams of spikes, i.e., a series of events (+1,-1) along with their corresponding timestamps. In this paper, we propose a review of the coding algorithms from a robotics perspective and further supported by a benchmark to assess their performance. We also introduce a ROS (Robot Operating System) toolbox to encode and decode input signals coming from any type of sensor available on a robot. This initiative is meant to stimulate and facilitate robotic integration of neuromorphic AI, with the opportunity to adapt traditional off-the-shelf sensors to spiking neural nets within one of the most powerful robotic tools, ROS.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "UevSbA0H2_2", "year": null, "venue": "ECAL2003", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=UevSbA0H2_2", "arxiv_id": null, "doi": null }
{ "title": "A Multi-agent Based approach to Modelling and Rendering of 3D Tree Bark Textures.", "authors": [ "Ban Tao", "Changshui Zhang", "Shu Wei" ], "abstract": "Multi-Agent System (MAS) has been a wide used and effective method to solve distributed AI problems. In this paper, we simplify the biological mechanism in tree bark growth and build a MAS model to simulate the generation of tree barks. The epidermis of the bark serves as the environment of the MAS while splits and lenticels are modelled as split agents and lenticel agents. The environment records the geometrics formed by the interactions of the agents during the life cycles. Visualization of the geometrics can result in realistic 3D tree bark textures which can give much fidelity to computer graphics applications.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "RRK0bFrFCJya", "year": null, "venue": "ECAL2003", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=RRK0bFrFCJya", "arxiv_id": null, "doi": null }
{ "title": "Evolving Aggregation Behaviors in a Swarm of Robots.", "authors": [ "Vito Trianni", "Roderich Groß", "Thomas Halva Labella", "Erol Sahin", "Marco Dorigo" ], "abstract": "In this paper, we study aggregation in a swarm of simple robots, called s − bots, having the capability to self-organize and self-assemble to form a robotic system, called a swarm − bot. The aggregation process, observed in many biological systems, is of fundamental importance since it is the prerequisite for other forms of cooperation that involve self-organization and self-assembling. We consider the problem of defining the control system for the swarm − bot using artificial evolution. The results obtained in a simulated 3D environment are presented and analyzed. They show that artificial evolution, exploiting the complex interactions among s − bots and between s − bots and the environment, is able to produce simple but general solutions to the aggregation problem.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "GqmCqI6rbt2A", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122696.pdf", "forum_link": "https://openreview.net/forum?id=GqmCqI6rbt2A", "arxiv_id": null, "doi": null }
{ "title": "Variants of Recursive Consequent Parameters Learning in Evolving Neuro-Fuzzy Systems", "authors": [ "Edwin Lughofer" ], "abstract": "A wide variety of evolving (neuro-)fuzzy systems (E(N)FS) approaches have been proposed during the last 10 to 15 years in order to handle (fast and real-time) data stream mining and modeling processes by dynamically updating the rule structure and antecedents. The current denominator in the update of the consequent parameters is the usage of the recursive (fuzzily weighted) least squares estimator (R(FW)LS), as being applied in almost all E(N)FS approaches. In this paper, we propose and examine alternative variants for consequent parameter updates, namely multi-innovation RFWLS, recursive corr-entropy and especially recursive weighted total least squares. Multi-innovation RLS guarantees more stability in the update, whenever structural changes (i.e. changes in the antecedents) in the E(N)FS are performed, as the rule membership degrees on (a portion of) past samples are actualized before and properly integrated in each update step. Recursive corr-entropy addresses the problematic of outliers by down-weighing the influence of (atypically) higher errors in the parameter updates. Recursive weighted total least squares takes into account also a possible noise level in the input variables (and not solely in the target variable as in RFWLS). The approaches are compared with standard RFWLS i.) on three data stream regression problems from practical applications, affected by (more or less significant) noise levels and one embedding a known drift, and ii.) on a realworld time-series based forecasting problem, also affected by noise. The results based on accumulated prediction error trends over time indicate that RFWLS can be largely outperformed by the proposed alternative variants.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "H1V8G-0Iz", "year": null, "venue": null, "pdf_link": "/pdf/387eeb47ef2b850977cfac2d964057330b755ef8.pdf", "forum_link": "https://openreview.net/forum?id=H1V8G-0Iz", "arxiv_id": null, "doi": null }
{ "title": "eCommerceGAN: A Generative Adversarial Network for e-commerce", "authors": [ "Ashutosh Kumar", "Arijit Biswas", "Subhajit Sanyal" ], "abstract": "E-commerce companies such as Amazon, Alibaba, and Flipkart process billions of orders every year. However, these orders represent only a small fraction of all plausible orders. Exploring the space of all plausible orders could help us better understand the relationships between the various entities in an e-commerce ecosystem, namely the customers and the products they purchase. In this paper, we propose a Generative Adversarial Network (GAN) for e-commerce orders. Our contributions include: (a) creating a dense and low-dimensional representation of e-commerce orders, (b) train an ecommerceGAN (ecGAN) with real orders to show the feasibility of the proposed paradigm, and (c) train an ecommerce-conditional- GAN (ec2GAN) to generate the plausible orders involving a particular product. We evaluate ecGAN qualitatively to demonstrate its effectiveness. The ec2GAN is used for various kinds of characterization of possible orders involving cold-start products.", "keywords": [ "E-commerce", "Generative Adversarial Networks", "Deep Learning", "Order Embedding", "Product Recommendation" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "AGwwR_r9azn", "year": null, "venue": "E2EMON 2007", "pdf_link": "https://ieeexplore.ieee.org/iel5/4261330/4261331/04261338.pdf", "forum_link": "https://openreview.net/forum?id=AGwwR_r9azn", "arxiv_id": null, "doi": null }
{ "title": "Traffic Trace Artifacts due to Monitoring Via Port Mirroring", "authors": [ "Jian Zhang", "Andrew W. Moore" ], "abstract": "Port-mirroring techniques are supported by many of today's medium and high-end Ethernet switches. The ubiquity and low-cost of port mirroring has made it a popular method for collecting packet traces. Despite its wide-spread use little work has been reported on the impacts of this monitoring method upon the measured network traffic. In particular, we focus upon each of delay and jitter (tinting difference), packet-reordering, and packet-loss statistics. We compare the port-mirroring method with inserting a passive TAP (test access point), such as a fibre splitter, into a monitored link. Despite a passive TAP being transparent to monitored traffic, port-mirroring popularity arises from its limited set-up disruption, and (potentially) easier management This paper documents experimental comparison of traffic using the passive TAP and port-mirroring functionality, and shows that port-mirroring will introduce significant changes to the inter-packet timing, packet-reordering, and packet-loss - even at very low levels of utilisation.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "UasCXDalEPZ", "year": null, "venue": "E2EMON 2005", "pdf_link": "https://ieeexplore.ieee.org/iel5/10461/33206/01564465.pdf", "forum_link": "https://openreview.net/forum?id=UasCXDalEPZ", "arxiv_id": null, "doi": null }
{ "title": "Autonomous end to end QoS monitoring", "authors": [ "Constantine Elster", "Danny Raz", "Ran Wolff" ], "abstract": "Verifying that each flow in the network satisfies its QoS requirements is one of the biggest scalability challenges in the current DiffServ architecture. This task is usually performed by a centralized allocation entity that monitors the flows' QoS parameters. Efficient detection of problematic flows is even more challenging when considering aggregated information such as the end to end delay suffered by packets belonging to a specific flow. Known oblivious and reactive monitoring techniques do not scale well when the number of flows and the length of their paths increase, and when the network load increases. This is due both to load on the centralized bandwidth allocation entity and to the excessive number of monitoring and control messages needed. We propose a new monitoring paradigm termed autonomous monitoring, in which the network itself (i.e. the routers along the flow path) is responsible to discover when a violation of the SLA occurs (or is soon to occur). Only in such cases the centralized allocation entity is notified, and can take the required actions. We study the performance of this new distributed algorithm through theoretical analysis and extensive simulations. Our results indicate that in addition to dramatically reducing the load from the centralized allocation entity, the amount of network traffic needed is relatively small and thus the new monitoring scheme scales well.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "oBbSGJLAzC", "year": null, "venue": "E2EMON 2005", "pdf_link": "https://ieeexplore.ieee.org/iel5/10461/33206/01564467.pdf", "forum_link": "https://openreview.net/forum?id=oBbSGJLAzC", "arxiv_id": null, "doi": null }
{ "title": "InTraBase: integrated traffic analysis based on a database management system", "authors": [ "Matti Siekkinen", "Ernst W. Biersack", "Guillaume Urvoy-Keller", "Vera Goebel", "Thomas Plagemann" ], "abstract": "Internet traffic analysis as a research area has attracted lots of interest over the last decade. The traffic data collected for analysis are usually stored in plain files and the analysis tools consist of customized scripts each tailored for a specific task. As data are often collected over a longer period of time or from different vantage points, it is important to keep metadata that describe the data collected. The use of separate files to store the data, the metadata, and the analysis scripts provides an abstraction that is much too primitive. The information that \"glues\" these different files together is not made explicit but is solely in the heads of the people involved in the activity. As a consequence, manipulating the data is very cumbersome, does not scale, and severely limits the way these data can be analyzed. We propose to use a database management system (DBMS) that provides the infrastructure for the analysis and management of data from measurements, related metadata, and obtained results. We discuss the problems and limitations with today's approaches, describe our ideas, and demonstrate how our DBMS-based solution, called InTraBase, addresses these problems and limitations. We present the first version of our prototype and preliminary performance analysis results.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ZSjNviOrwKl", "year": null, "venue": "E2EMON 2006", "pdf_link": "https://ieeexplore.ieee.org/iel5/10987/34624/01651274.pdf", "forum_link": "https://openreview.net/forum?id=ZSjNviOrwKl", "arxiv_id": null, "doi": null }
{ "title": "Object-Relational DBMS for Packet-Level Traffic Analysis: Case Study on Performance Optimization", "authors": [ "Matti Siekkinen", "Ernst W. Biersack", "Vera Goebel" ], "abstract": "Analyzing Internet traffic at packet level involves generally large amounts of raw data, derived data, and results from various analysis tasks. In addition, the analysis often proceeds in an iterative manner and is done using ad-hoc methods and many specialized software tools. These facts together lead to severe management problems that we propose to address using a DBMS-based approach, called In TraBase. The challenge that we address in this paper is to have such a database system (DBS) that allows to perform analysis efficiently. Off-the-shelf DBMSs are often considered too heavy and slow for such usage because of their complex transaction management properties that are crucial for the usage that they were originally designed for. We describe in this paper the design choices for a generic DBS for packet-level traffic analysis that enable good performance and describe how we implement them in the case of the InTraBase. Furthermore, we demonstrate their importance through performance measurements on the InTraBase. These results provide valuable insights for researchers who intend to utilize a DBMS for packet-level traffic analysis.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Cy8maxG_MEI5", "year": null, "venue": "EASE 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Cy8maxG_MEI5", "arxiv_id": null, "doi": null }
{ "title": "Human Values Violations in Stack Overflow: An Exploratory Study", "authors": [ "Sara Krishtul", "Mojtaba Shahin", "Humphrey O. Obie", "Hourieh Khalajzadeh", "Fan Gai", "Ali Rezaei Nasab", "John C. Grundy" ], "abstract": "A growing number of software-intensive systems are being accused of violating or ignoring human values (e.g., privacy, inclusion, and social responsibility), and this poses great difficulties to individuals and society. Such violations often occur due to the solutions employed and decisions made by developers of such systems that are misaligned with user values. Stack Overflow is the most popular Q&A website among developers to share their issues, solutions (e.g., code snippets), and decisions during software development. We conducted an exploratory study to investigate the occurrence of human values violations in Stack Overflow posts. As comments under posts are often used to point out the possible issues and weaknesses of the posts, we analyzed 2,000 Stack Overflow comments and their corresponding posts (1,980 unique questions or answers) to identify the types of human values violations and the reactions of Stack Overflow users to such violations. Our study finds that 315 out of 2,000 comments contain concerns indicating their associated posts (313 unique posts) violate human values. Leveraging Schwartz’s theory of basic human values as the most widely used values model, we show that hedonism and benevolence are the most violated value categories. We also find the reaction of Stack Overflow commenters to perceived human values violations is very quick, yet the majority of posts (76.35%) accused of human values violation do not get downvoted at all. Finally, we find that the original posters rarely react to the concerns of potential human values violations by editing their posts. At the same time, they usually are receptive when responding to these comments in follow-up comments of their own.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "QASfXDoo7Jl", "year": null, "venue": "eBISS 2012", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=QASfXDoo7Jl", "arxiv_id": null, "doi": null }
{ "title": "Machine Learning Strategies for Time Series Forecasting", "authors": [ "Gianluca Bontempi", "Souhaib Ben Taieb", "Yann-Aël Le Borgne" ], "abstract": "The increasing availability of large amounts of historical data and the need of performing accurate forecasting of future behavior in several scientific and applied domains demands the definition of robust and efficient techniques able to infer from observations the stochastic dependency between past and future. The forecasting domain has been influenced, from the 1960s on, by linear statistical methods such as ARIMA models. More recently, machine learning models have drawn attention and have established themselves as serious contenders to classical statistical models in the forecasting community. This chapter presents an overview of machine learning techniques in time series forecasting by focusing on three aspects: the formalization of one-step forecasting problems as supervised learning tasks, the discussion of local learning techniques as an effective tool for dealing with temporal data and the role of the forecasting strategy when we move from one-step to multiple-step forecasting.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
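The abstract above frames one-step time series forecasting as a supervised learning task over lagged observations. As a rough, self-contained illustration of that framing (this is my own sketch, not code from the chapter: the lag-matrix helper, the toy sine-plus-noise series, and the plain least-squares fit are all invented for the example):

```python
import numpy as np

def make_lagged_dataset(series, n_lags):
    """Turn a 1-D series into (X, y) pairs for one-step-ahead forecasting.

    Each row of X holds the previous n_lags observations; y is the next value.
    """
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

# Toy example: a noisy sinusoidal signal (illustrative values only).
rng = np.random.default_rng(0)
t = np.arange(300)
series = np.sin(0.1 * t) + 0.1 * rng.standard_normal(300)

X, y = make_lagged_dataset(series, n_lags=5)
X_train, y_train, X_test, y_test = X[:250], y[:250], X[250:], y[250:]

# Fit a plain linear model by least squares (a bias column is appended).
A = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

pred = np.hstack([X_test, np.ones((len(X_test), 1))]) @ coef
print("test MSE:", float(np.mean((pred - y_test) ** 2)))
```

Multiple-step forecasting, the third aspect discussed in the abstract, then differs mainly in the strategy layered on top of such one-step models (for instance recursively feeding predictions back in, or training direct h-step models).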
{ "id": "F-iLp5bDOLU", "year": null, "venue": "eBISS 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=F-iLp5bDOLU", "arxiv_id": null, "doi": null }
{ "title": "Context-Aware Business Intelligence", "authors": [ "Rafael Berlanga Llavori", "Victoria Nebot" ], "abstract": "Modern business intelligence (BI) is currently shifting the focus from the corporate internal data to external fresh data, which can provide relevant contextual information for decision-making processes. Nowadays, most external data sources are available in the Web presented under different media such as blogs, news feeds, social networks, linked open data, data services, and so on. Selecting and transforming these data into actionable insights that can be integrated with corporate data warehouses are challenging issues that have concerned the BI community during the last decade. Big size, high dynamicity, high heterogeneity, text richness and low quality are some of the properties of these data that make their integration much harder than internal (mostly relational) data sources. In this lecture, we review the major opportunities, challenges, and enabling technologies to accomplish the integration of external and internal data. We also introduce some interesting use case to show how context-aware data can be integrated into corporate decision-making.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "bCDQFHex0EY", "year": null, "venue": "eBISS 2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=bCDQFHex0EY", "arxiv_id": null, "doi": null }
{ "title": "Three Big Data Tools for a Data Scientist's Toolbox", "authors": [ "Toon Calders" ], "abstract": "Sometimes data is generated unboundedly and at such a fast pace that it is no longer possible to store the complete data in a database. The development of techniques for handling and processing such streams of data is very challenging as the streaming context imposes severe constraints on the computation: we are often not able to store the whole data stream and making multiple passes over the data is no longer possible. As the stream is never finished we need to be able to continuously provide, upon request, up-to-date answers to analysis queries. Even problems that are highly trivial in an off-line context, such as: “How many different items are there in my database?” become very hard in a streaming context. Nevertheless, in the past decades several clever algorithms were developed to deal with streaming data. This paper covers several of these indispensable tools that should be present in every big data scientists’ toolbox, including approximate frequency counting of frequent items, cardinality estimation of very large sets, and fast nearest neighbor search in huge data collections.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
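The abstract above lists approximate frequency counting over unbounded streams as one of the indispensable tools. One classic single-pass summary for that task, shown here purely for illustration (the paper may present a different algorithm; the toy stream and the parameter k are invented), is the Misra-Gries sketch:

```python
from collections import Counter

def misra_gries(stream, k):
    """Misra-Gries summary using at most k-1 counters.

    Any item whose true frequency exceeds len(stream)/k is guaranteed to
    survive in the summary; reported counts underestimate true counts by
    at most len(stream)/k.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Counters are full: decrement everything and drop zeros.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Toy stream: 'a' is genuinely frequent, the rest is one-off noise.
stream = ['a'] * 500 + [f'x{i}' for i in range(400)] + ['a'] * 100
summary = misra_gries(stream, k=10)
print("candidate heavy hitters:", summary)
print("true count of 'a':", Counter(stream)['a'])
```

Cardinality estimation and fast nearest-neighbour search, the other two tools mentioned in the abstract, rely on analogous small-memory, single-pass summaries rather than exact bookkeeping.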
{ "id": "mTJxI0G-9tS", "year": null, "venue": "eBISS 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=mTJxI0G-9tS", "arxiv_id": null, "doi": null }
{ "title": "Introduction to Pattern Mining", "authors": [ "Toon Calders" ], "abstract": "We present an overview of data mining techniques for extracting knowledge from large databases with a special emphasis on the unsupervised technique pattern mining. Pattern mining is often defined as the automatic search for interesting patterns and regularities in large databases. In practise this definition most often comes down to listing all patterns that exceed a user-defined threshold for a fixed interestingness measure. The simplest such problem is that of listing all frequent itemsets: given a database of sets, called transactions, list all sets of items that are subset of at least a given number of the transactions. We revisit the two main strategies for mining all frequent itemsets: the breadth-first Apriori algorithm and the depth-first FPGrowth, after which we show what are the main issues when extending to more complex patterns such as listing all frequent subsequences or subgraphs. In the second part of the paper we then look into the pattern explosion problem. Due to redundancy among patterns, most often the list of all patterns satisfying the frequency thresholds is so large that post-processing is required to extract useful information from them. We give an overview of some recent techniques to reduce the redundancy in pattern collections using statistical methods to model the expectation of a user given background knowledge on the one hand, and the minimal description length principle on the other.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
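The abstract above revisits the breadth-first Apriori algorithm and the depth-first FP-Growth for frequent itemset mining. Below is a minimal, unoptimized sketch of the Apriori strategy; the transactions, the support threshold, and the helper name are invented for illustration and this is not the chapter's own code:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Breadth-first (Apriori) enumeration of frequent itemsets.

    transactions: list of sets of items; min_support: absolute count threshold.
    Returns a dict mapping frozenset -> support count.
    """
    # Frequent 1-itemsets.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)

    k = 2
    while frequent:
        # Candidate generation: join frequent (k-1)-itemsets, keep size-k unions.
        prev = list(frequent)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # Prune candidates having an infrequent (k-1)-subset (Apriori property).
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        # Count supports in one pass over the database.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result

transactions = [{'beer', 'chips'}, {'beer', 'chips', 'salsa'},
                {'chips', 'salsa'}, {'beer', 'chips'}]
print(apriori(transactions, min_support=2))
```

FP-Growth reaches the same set of frequent itemsets but avoids repeated candidate generation by compressing the database into a prefix tree, which this sketch does not attempt.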
{ "id": "luGBhGcld1p", "year": null, "venue": "EAIT 2017", "pdf_link": "https://link.springer.com/content/pdf/10.1007/s10639-016-9539-0.pdf", "forum_link": "https://openreview.net/forum?id=luGBhGcld1p", "arxiv_id": null, "doi": null }
{ "title": "A peer assessment approach to project based blended learning course in a Vietnamese higher education", "authors": [ "Viet Anh Nguyen" ], "abstract": "This article presents a model using peer assessment to evaluate students taking part in blended - learning courses (BL). In these courses, teaching activities are carried out in the form of traditional face-to-face (F2F) and learning activities are performed online via the learning management system Moodle. In the model, the topics of courses are built as a set of projects and case studies for the attending students divided into groups. The result of the implementation of projects is evaluated and ranked by all course participants and is one of the course evaluation criteria for lecturers. To assess learners more precisely, we propose a multi-phase assessment model in evaluating all groups and the group members. The result of each student in the group based on himself evaluation, evaluations of the team members, the tearcher and all students in the course. There are 107 students, who participated in the course entitled “web application development”, are divided into 20 groups conducting the course in the field of information technology is deployed in the form of blended learning through peer assessment. The results of student’s feedback suggested that the usage of various peer assessment created positive learning effectiveness and more interesting learning attitude for students. The survey was conducted with the students through the questionnaire, each question with scale 5-point Likert scale that ranged from 1 (very unsatisfied) to 5 (very statisfied) to investigate the factors: Collaboration, Assessment, Technology showed that students were satisfied with our approach.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "HJMRvsAcK7", "year": null, "venue": null, "pdf_link": "/pdf/6a4ff08a9813a460c46f2ceae551e213c432c95b.pdf", "forum_link": "https://openreview.net/forum?id=HJMRvsAcK7", "arxiv_id": null, "doi": null }
{ "title": "Dynamic Pricing on E-commerce Platform with Deep Reinforcement Learning", "authors": [ "Jiaxi Liu", "Yidong Zhang", "Xiaoqing Wang", "Yuming Deng", "Xingyu Wu", "Miaolan Xie" ], "abstract": "In this paper we develop an approach based on deep reinforcement learning (DRL) to address dynamic pricing problem on E-commerce platform. We models real-world E-commerce dynamic pricing problem as Markov Decision Process. Environment state are defined with four groups of different business data. We make several main improvements on the state-of-the-art DRL-based dynamic pricing approaches: 1. We first extend the application of dynamic pricing to a continuous pricing action space. 2. We solve the unknown demand function problem by designing different reward functions. 3. The cold-start problem is addressed by introducing pre-training and evaluation using the historical sales data. Field experiments are designed and conducted on real-world E-commerce platform, pricing thousands of SKUs of products lasting for months. The experiment results shows that, on E-commerce platform, the difference of the revenue conversion rates (DRCR) is a more suitable reward function than the revenue only, which is different from the conclusion from previous researches. Meanwhile, the proposed continuous action model performs better than the discrete one.", "keywords": [ "reinforcement learning", "dynamic pricing", "e-commerce", "revenue management", "field experiment" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "7LRBatZAo5", "year": null, "venue": "EC2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=7LRBatZAo5", "arxiv_id": null, "doi": null }
{ "title": "Simple versus Optimal Contracts.", "authors": [ "Paul Dütting", "Tim Roughgarden", "Inbal Talgam-Cohen" ], "abstract": "We consider the classic principal-agent model of contract theory, in which a principal designs an outcome-dependent compensation scheme to incentivize an agent to take a costly and unobservable action. When all of the model parameters---including the full distribution over principal rewards resulting from each agent action---are known to the designer, an optimal contract can in principle be computed by linear programming. In addition to their demanding informational requirements, however, such optimal contracts are often complex and unintuitive, and do not resemble contracts used in practice. This paper examines contract theory through the theoretical computer science lens, with the goal of developing novel theory to explain and justify the prevalence of relatively simple contracts, such as linear (pure commission) contracts. First, we consider the case where the principal knows only the first moment of each action's reward distribution, and we prove that linear contracts are guaranteed to be worst-case optimal, ranging over all reward distributions consistent with the given moments. Second, we study linear contracts from a worst-case approximation perspective, and prove several tight parameterized approximation bounds.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
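The abstract above analyses linear (pure commission) contracts in the classic principal-agent model: the agent privately chooses a costly action, and a linear contract pays the agent a fixed fraction alpha of the realized reward. The toy computation below is only my own numerical illustration of that model under known first moments, not the paper's construction; all values and function names are invented:

```python
import numpy as np

# Toy principal-agent instance: each action has a cost to the agent and an
# expected reward for the principal (under a linear contract only the first
# moments matter). All numbers are made up for illustration.
costs = np.array([0.0, 1.0, 2.5])             # agent's cost per action
expected_reward = np.array([2.0, 6.0, 10.0])  # expected reward per action

def agent_best_response(alpha):
    """Agent picks the action maximizing alpha * E[reward] - cost."""
    return int(np.argmax(alpha * expected_reward - costs))

def principal_utility(alpha):
    """Principal keeps the (1 - alpha) share of the induced action's reward."""
    a = agent_best_response(alpha)
    return (1 - alpha) * expected_reward[a]

# Grid search over the commission rate alpha in [0, 1].
grid = np.linspace(0.0, 1.0, 1001)
values = [principal_utility(a) for a in grid]
best = int(np.argmax(values))
print(f"best linear contract: alpha = {grid[best]:.3f}, "
      f"principal utility = {values[best]:.3f}, "
      f"induced action = {agent_best_response(grid[best])}")
```

The paper's point, as stated in the abstract, is that such single-parameter contracts are worst-case optimal when only these first moments are known, even though the fully optimal contract for a known reward distribution can be more complex.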
{ "id": "G3gp8s6FObYR", "year": null, "venue": "EC2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=G3gp8s6FObYR", "arxiv_id": null, "doi": null }
{ "title": "Posted Pricing and Prophet Inequalities with Inaccurate Priors.", "authors": [ "Paul Dütting", "Thomas Kesselheim" ], "abstract": "In posted pricing, one defines prices for items (or other outcomes), buyers arrive in some order and take their most preferred bundle among the remaining items. Over the last years, our understanding of such mechanisms has improved considerably. The standard assumption is that the mechanism has exact knowledge of probability distribution the buyers' valuations are drawn from. The prices are then set based on this knowledge. We examine to what extent existing results and techniques are robust to inaccurate prior beliefs. That is, the prices are chosen with respect to similar but different probability distributions. We focus on the question of welfare maximization. We consider all standard distance measures on probability distributions, and derive tight bounds on the welfare guarantees that can be derived for all standard techniques in the various metrics.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "nZDnJsaSn0L", "year": null, "venue": "NeHuAI@ECAI 2020", "pdf_link": "http://ceur-ws.org/Vol-2659/beaudoin.pdf", "forum_link": "https://openreview.net/forum?id=nZDnJsaSn0L", "arxiv_id": null, "doi": null }
{ "title": "Identifying the \"right\" level of explanation in a given situation", "authors": [ "Valérie Beaudouin", "Isabelle Bloch", "David Bounie", "Stéphan Clémençon", "Florence d'Alché-Buc", "James Eagan", "Winston Maxwell", "Pavlo Mozharovskyi", "Jayneel Parekh" ], "abstract": null, "keywords": [], "raw_extracted_content": "IDENTIFYING THE “RIGHT” LEVEL OF\nEXPLANATION IN A GIVEN SITUATION\nVal´erie Beaudouin1and Isabelle Bloch2and David Bounie1and St´ephan Cl ´emenc ¸on2and\nFlorence d’Alch ´e-Buc2and James Eagan2and Winston Maxwell1and\nPavlo Mozharovskyi2and Jayneel Parekh2 1\nAbstract. We present a framework for defining the “right” level of\nexplainability based on technical, legal and economic considerations.\nOur approach involves three logical steps: First , define the main con-\ntextual factors, such as who is the audience of the explanation, the\noperational context, the level of harm that the system could cause,\nand the legal/regulatory framework. This step will help characterize\nthe operational and legal needs for explanation, and the correspond-\ning social benefits. Second , examine the technical tools available,\nincluding post-hoc approaches (input perturbation, saliency maps...)\nand hybrid AI approaches. Third , as function of the first two steps,\nchoose the right levels of global and local explanation outputs, taking\ninto the account the costs involved. We identify seven kinds of costs\nand emphasize that explanations are socially useful only when total\nsocial benefits exceed costs.\n1 INTRODUCTION\nThis paper summarizes the conclusions of a longer paper [1] on\ncontext-specific explanations using a multidisciplinary approach. Ex-\nplainability is both an operational and ethical requirement. The op-\nerational needs for explainability are driven by the need to increase\nrobustness, particularly for safety-critical applications, as well as en-\nhance acceptance by system users. The ethical needs for explainabil-\nity address harms to fundamental rights and other societal interests\nwhich may be insufficiently addressed by the purely operational re-\nquirements. Existing works on explainable AI focus on the computer\nscience angle [18], or on the legal and policy angle [20]. The origi-\nnality of this paper is to integrate technical, legal and economic ap-\nproaches into a single methodology for reaching the optimal level of\nexplainability. The technical dimension helps us understand what ex-\nplanations are possible and what the trade-offs are between explain-\nability and algorithmic performance. However explanations are nec-\nessarily context-dependent, and context depends on the regulatory\nenvironment and a cost-benefit analysis, which we discuss below.\nOur approach involves three logical steps: First , define the main\ncontextual factors, such as who is the audience of the explanation,\nthe operational context, the level of harm that the system could cause,\nand the legal/regulatory framework. This step will help characterize\nthe operational and legal needs for explanation, and the correspond-\ning social benefits. Second , examine the technical tools available,\n1Copyright c\r2020 for this paper by its authors. Use permitted under\nCreative Commons License Attribution 4.0 International (CC BY 4.0).\n1. I3, T ´el´ecom Paris, CNRS, Institut Polytechnique de Paris, France –\n2. LTCI, T ´el´ecom Paris, Institut Polytechnique de Paris, France – email:\[email protected] post-hoc approaches (input perturbation, saliency maps...)\nand hybrid AI approaches. 
Third , as function of the first two steps,\nchoose the right levels of global and local explanation outputs, taking\ninto the account the costs involved.\nThe use of hybrid solutions, combining machine learning and sym-\nbolic AI, is a promising field of research for safety-critical applica-\ntions, and applications such as medicine where important bodies of\ndomain knowledge must be associated with algorithmic decisions.\nAs technical solutions to explainability converge toward hybrid AI\napproaches, we can expect that the trade-off between explainability\nand performance will become less acute. Explainability will become\npart of performance. Also, as explainability becomes a requirement\nfor safety certification, we can expect an alignment between opera-\ntional/safety needs for explainability and ethical/human rights needs\nfor explainability. Some of the solutions for operational explainabil-\nity may serve both purposes.\n2 DEFINITIONS\nAlthough several different definitions exist in the literature [1], we\nhave treated explainability and interpretability as synonyms [16], fo-\ncusing instead on the key difference between “global” and “local”\nexplainability/interpretability. Global explainability means the abil-\nity to explain the functioning of the algorithm in its entirety, whereas\nlocal explainability means the ability to explain a particular algorith-\nmic decision [7]. Local explainability is also known as “post hoc”\nexplainability.\nTransparency is a broader concept than explainability [6], because\ntransparency includes the idea of providing access to raw informa-\ntion whether or not the information is understandable. By contrast,\nexplainability implies a transformation of raw information in order\nto make it understandable by humans. Thus explainability is a value-\nadded component of transparency. Transparency and explainability\ndo not exist for their own sake. Instead, they are enablers of other\nfunctions such as traceability and auditability, which are critical in-\nputs to accountability. In a sense, accountability is the nirvana of al-\ngorithmic governance [15] into which other concepts, including ex-\nplainability, feed.\n3 THREE FACTORS DETERMINING THE\n“RIGHT” LEVEL OF EXPLANATION\nOur approach identifies three considerations that will help lead to\nthe right level of explainability: the contextual factors (an input),\nthe available technical solutions (an input), and the explainability\nchoices regarding the form and detail of explanations (the outputs).\n\n3.1 Contextual factors\nWe have identified four kinds of contextual factors that will help\nidentify the various reasons why we need explanations and choose\nthe most appropriate form of explanation (output) as a function of\nthe technical possibilities and costs. The four contextual factors are:\n\u000fAudience factors: Who is receiving the explanation? What is their\nlevel of expertise? What are their time constraints? These will\nprofoundly impact the level of detail and timing of the explana-\ntion [5, 7].\n\u000fImpact factors: What harms could the algorithm cause and how\nmight explanations help? These will determine the level of social\nbenefits associated with the explanation. Generally speaking, the\nhigher the impact of the algorithm, the higher the benefits flowing\nfrom explanation [8].\n\u000fRegulatory factors: What is the regulatory environment for the ap-\nplication? What fundamental rights are affected? 
These factors are\nexamined in Section 5 and will help characterize the social bene-\nfits associated with an explanation in a given context.\n\u000fOperational factors: To what extent is explanation an operational\nimperative? For safety certification? For user trust? These factors\nmay help identify solutions that serve both operational and ethi-\ncal/legal purposes.\n3.2 Technical solutions\nAnother input factor relates to the technical solutions available\nfor explanations. Post-hoc approaches such as LIME [18], Kernal-\nSHAP [14] and saliency maps [21] generally strive to approximate\nthe functioning of a black-box model by using a separate explanation\nmodel. Hybrid approaches tend to incorporate the need for explana-\ntion into the model itself. These approaches include:\n\u000fModifying objective or predictor function;\n\u000fProducing fuzzy rules, close to natural language;\n\u000fOutput approaches [22];\n\u000fInput approaches, which pre-process the inputs to the machine\nlearning model, making the inputs more meaningful and/or bet-\nter structured [1];\n\u000fGenetic fuzzy logic.\nThe range of potential hybrid approaches, i.e. approaches that com-\nbine machine learning and symbolic or logic-based approaches, is\nalmost unlimited. The examples above represent only a small selec-\ntion. Most of the approaches, whether focused on inputs, outputs, or\nconstraints within the model, can contribute to explainability, albeit\nin different ways. Explainability by design mostly aims at incorpo-\nrating explainability in the predictor model.\n3.3 Explanation output choices\nThe output of explanation will be what is actually shown to the rel-\nevant explanation audience, whether through global explanation of\nthe algorithm’s operation, or through local explanation of a particu-\nlar decision.\nThe output choices for global explanations will include the fol-\nlowing:\n\u000fAdoption of a “user’s manual” approach to present the functioning\nof the algorithm as a whole [10];\n\u000fThe level of detail to include in the user’s manual;\u000fWhether to provide access to source code, taking into account\ntrade secret protection and the sometimes limited utility of source\ncode to the relevant explanation audience [10, 20];\n\u000fInformation on training data, including potentially providing a\ncopy of the training data [10, 13, 17];\n\u000fInformation on the learning algorithm, including its objective\nfunction;\n\u000fInformation on known biases and other inherent weaknesses of the\nalgorithm; identifying use restrictions and warnings.\nThe output choices for local explanations will include the follow-\ning:\n\u000fCounterfactual dashboards, with “what if” experimentation avail-\nable for end-users [20, 24];\n\u000fSaliency maps to show the main factors contributing to decision;\n\u000fDefining the level of detail, including how many factors and rele-\nvant weights to present to end-users;\n\u000fLayered explanation tools, permitting a user to access increasing\nlevels of complexity;\n\u000fAccess to individual decision logs [11, 26];\n\u000fWhat information should be stored in logs, and for how long?\n4 EXPLAINABILITY AS AN OPERATIONAL\nREQUIREMENT\nMuch of the work on explainability in the 1990s, as well as the\nnew industrial interest in explainability today, focus on explanations\nneeded to satisfy users’ operational requirements. 
For example, the\ncustomer may require explanations as part of the safety validation\nand certification process for an AI system, or may ask that the sys-\ntem provide additional information to help the end user (for example,\na radiologist) put the system’s decision into a clinical context.\nThese operational requirements for explainability may be required\nto obtain certifications for safety-critical applications, since the sys-\ntem could not go to market without those certifications. Customers\nmay also insist on explanations in order to make the system more\nuser-friendly and trusted by users. Knowing which factors cause cer-\ntain outcomes increases the system’s utility because the decisions\nare accompanied by actionable insights, which can be much more\nvaluable than simply having highly-accurate but unexplained pre-\ndictions [25]. Understanding causality can also enhance quality by\nmaking models more robust to shifting input domains. Customers\nincreasingly consider explainability as a quality feature for the AI\nsystem. These operational requirements are distinct from regulatory\ndemands for explainability, which we examine in Section 5, but may\nnevertheless lead to a convergence in the tools used to meet the vari-\nous requirements.\nExplainability has an important role in algorithmic quality con-\ntrol, both before the system goes to market and afterwards, because\nit helps bring to light weaknesses in the algorithm such as bias that\nwould otherwise go unnoticed [9]. Explainability contributes to “to-\ntal product lifecycle” [23] or “safety lifecycle” [12] approaches to\nalgorithmic quality and safety.\nThe quality of machine learning models is often judged by the\naverage accuracy rate when analyzing test data. This simple mea-\nsure of quality fails to reflect weaknesses affecting the algorithm’s\nquality, particularly bias and failure to generalize. Explainability so-\nlutions presented can assist in identifying areas of input data where\nthe performance of the algorithm is poor, and identify defects in the\nlearning data that lead to bad predictions. Traditional approaches to\nsoftware verification and validation (V&V) are ill-adapted to neu-\nral networks [3, 17, 23]. The challenges relate to neural networks’\nnon-determinism, which makes it hard to demonstrate the absence\nof unintended functionality, and to the adaptive nature of machine-\nlearning algorithms [3, 23]. Specifying a set of requirements that\ncomprehensively describe the behavior of a neural network is con-\nsidered the most difficult challenge with regard to traditional V&V\nand certification approaches [2, 3]. The absence of complete require-\nments poses a problem because one of the objectives of V&V is to\ncompare the behavior of the software to a document that describes\nprecisely and comprehensively the system’s intended behavior [17].\nFor neural networks, there may remain a degree of uncertainty about\njust what will be the output for a given input.\n5 EXPLAINABILITY AS A LEGAL\nREQUIREMENT\nThe legal approaches to explanation are different for government de-\ncisions and for private sector decisions. The obligation for govern-\nments to give explanations has constitutional underpinnings, for ex-\nample the right to due process under the United States Constitution,\nand the right to challenge administrative decisions under European\nhuman rights instruments. 
These rights require that individuals and\ncourts be able to understand the reasons for algorithmic decisions,\nreplicate the decisions to test for errors, and evaluate the proportion-\nality of systems in light of other affected human rights such as the\nright to privacy. In the United States, the Houston Teachers case2\nillustrates how explainability is linked to the constitutional guaran-\ntee of due process. In Europe, the Hague District Court decision on\nthe SyLI algorithm3shows how explainability is closely linked to\nthe European constitutional principle of proportionality. France has\nenacted a law on government-operated algorithms4, which includes\nparticularly stringent explainability requirements: disclosure of the\ndegree and manner in which the algorithmic processing contributed\nto the decision; the data used for the processing and their source; the\nparameters used and their weights in the individual processing; and\nthe operations effected by the processing.\nFor private entities, a duty of explanation generally arises when\nthe entity becomes subject to a heightened duty of fairness or loyalty,\nwhich can happen when the entity occupies a dominant position un-\nder antitrust law, or when it occupies functions that create a situation\nof trust or dependency vis`a vis users. A number of specific laws im-\npose algorithmic explanations in the private sector. One of the most\nrecent is Europe’s Platform to Business Regulation (EU) 2018/1150,\nwhich imposes a duty of explanation on online intermediaries and\nsearch engines with regard to ranking algorithms. The language in\nthe regulation shows the difficult balance between competing princi-\nples: providing complete information, protecting trade secrets, avoid-\ning giving information that would permit bad faith manipulation of\nranking algorithms by third parties, and making explanations eas-\nily understandable and useful for users. Among other things, online\nintermediaries and search engines must provide a “reasoned descrip-\ntion” of the “main parameters” affecting ranking on the platform,\nincluding the “general criteria, processes, specific signals incorpo-\nrated into algorithms or other adjustment or demotion mechanisms\n2Local 2415 v. Houston Independent School District , 251 F. Supp. 3d 1168\n(S.D. Tex. 2017).\n3NJCM v. the Netherlands , District Court of The Hague, Case n. C-09-\n550982-HA ZA 18-388, February 5, 2020.\n4French Code of Relations between the Public and the Administration, arti-\ncles L. 311-3-1 et seq.used in connection with the ranking.”5These requirements are more\ndetailed than those in Europe’s General Data Protection Regulation\nEU 2016/679 (GDPR), which requires only “meaningful informa-\ntion about the logic involved.”6In the United States, banks already\nhave an obligation to provide the principal reasons for any denial of a\nloan.7A proposed bill in the United States called the Algorithmic Ac-\ncountability Act would impose explainability obligations on certain\nhigh-impact algorithms, including an obligation to provide “detailed\ndescription of the automated decision system, its design, its training,\ndata, and its purpose.”8\n6 THE BENEFITS AND COSTS OF\nEXPLANATIONS\nLaws and regulations generally impose explanations when doing so\nis socially beneficial, that is, when the collective benefits associated\nwith providing explanations exceed the costs. 
When considering al-\ngorithmic explainability, where the law has not yet determined ex-\nactly what form of explainability is required and in which context,\nthe costs and benefits of explanations will help fill the gaps and define\nthe right level of explanation. The cost-benefit analysis will help de-\ntermine when and how explanations should be provided, permitting\nvarious trade-offs to be highlighted and managed. For explanations to\nbe socially useful, benefits should always exceed the costs. The ben-\nefits of explanations are closely linked to the level of impact of the\nalgorithm on individual and collective rights [5, 8]. For algorithms\nwith low impact, such as a music recommendation algorithms, the\nbenefits of explanation will be low. For a high-impact algorithm such\nas the image recognition algorithm of an autonomous vehicle, the\nbenefits of explanation, for example in finding the cause of a crash,\nwill be high.\nExplanations generate many kinds of costs, some of which are not\nobvious. We have identified seven categories of costs:\n\u000fDesign and integration costs, which may be high because explana-\ntion requirements will vary among different applications, contexts\nand geographies, meaning that a one-size-fits-all explanation so-\nlution will rarely be sufficient [9];\n\u000fSacrificing prediction accuracy for the sake of explainability\ncan result in lower performance, thereby generating opportunity\ncosts [5];\n\u000fThe creation and storage of decision logs create operational costs\nbut also tensions with data privacy principles which generally re-\nquire destruction of logs as soon as possible [11, 26];\n\u000fForced disclosure of source code or other algorithmic details may\ninterfere with constitutionally-protected trade secrets [4];\n\u000fDetailed explanations on the functioning of an algorithm can fa-\ncilitate gaming of the system and result in decreased security;\n\u000fExplanations create implicit rules and precedents, which the de-\ncision maker will have to take into account in the future, thereby\nlimiting her decisional flexibility in the future [19];\n\u000fMandating explainability can increase time to market, thereby\nslowing innovation [9].\nFor high-impact algorithmic decisions, these costs will often be\noutweighed by the benefits of explanations. But the costs should nev-\nertheless be considered in each case to ensure that the form and level\n5Regulation 2018/1150, recital 24.\n6Regulation 2016/679, article 13(2)(f).\n712 CFR Part 1002.9.\n8Proposed Algorithmic Accountability Act, H.R. 2231, introduced April 10,\n2019.\nof detail of mandated explanations is adapted to the situation. The net\nsocial benefit (total benefits less total costs) should remain positive.\n7 CONCLUSION: CONTEXT-SPECIFIC AI\nEXPLANATIONS BY DESIGN\nRegulation of AI explainability remains largely unexplored territory,\nthe most ambitious efforts to date being the French law on the ex-\nplainability of government algorithms and the EU regulation on Plat-\nform to Business relations. However, even in those instances, the\nlaw leaves many aspects of explainability open to interpretation. The\nform of explanation and the level of detail will be driven by the four\ncategories of contextual factors described in this paper: audience fac-\ntors, impact factors, regulatory factors, and operational factors. The\nlevel of detail of explanations – global or local – would follow a\nsliding scale depending on the context, and the costs and benefits at\nstake. 
One of the biggest costs of local explanations will relate to\nstorage of individual decision logs. The kind of information stored in\nthe logs, and the duration of storage, will be key questions to address\nwhen determining the right level of explainability. Hybrid solutions\nattempt to create explainability by design, mostly by incorporating\nexplainability in the predictor model. While generally addressing op-\nerational needs, these hybrid approaches may also serve ethical and\nlegal explainability needs. Our three-step method involving contex-\ntual factors, technical solutions, and explainability outputs will help\nlead to the “right” level of explanation in a given situation.\nFuture work aims at instantiating the proposed three steps to re-\nalistic and concrete problems, to give insight in the feasibility and\nvalue of the method to provide the right level of explanation.\nREFERENCES\n[1] Val ´erie Beaudoin, Isabelle Bloch, David Bounie, St ´ephan Cl ´emencon,\nFlorence d’Ach ´e Buc, James Eagan, Maxwell Winston, Pavlo\nMozharovskyi, and Jayneel Parekh, ‘Flexible and context-specific AI\nexplainability: a multidisciplinary approach’, Technical report, ArXiv,\n(2020).\n[2] Siddhartha Bhattacharyya, Darren Cofer, D Musliner, Joseph Mueller,\nand Eric Engstrom, ‘Certification considerations for adaptive systems’,\nin2015 IEEE International Conference on Unmanned Aircraft Systems\n(ICUAS) , pp. 270–279, (2015).\n[3] Markus Borg, Cristofer Englund, Krzysztof Wnuk, Boris Duran,\nChristoffer Levandowski, Shenjian Gao, Yanwen Tan, Henrik Kaijser,\nHenrik L ¨onn, and Jonas T ¨ornqvist, ‘Safely entering the deep: A review\nof verification and validation for machine learning and a challenge elic-\nitation in the automotive industry’, Journal of Automotive Software En-\ngineering ,1(1), 1–19, (2019).\n[4] Jenna Burrell, ‘How the machine ‘thinks’: Understanding opac-\nity in machine learning algorithms’, Big Data & Society ,3(1),\n2053951715622512, (2016).\n[5] Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam\nGershman, David O’Brien, Stuart Schieber, James Waldo, David Wein-\nberger, and Alexandra Wood, ‘Accountability of ai under the law: The\nrole of explanation’, arXiv preprint arXiv:1711.01134 , (2017).\n[6] European Commission, ‘Communication from the Commission to the\nEuropean Parliament, the Council, the European Economic and Social\nCommittee and the Committee of the Regions - Building trust in hu-\nman centric artificial intelligence (com(2019)168)’, Technical report,\n(2019).\n[7] Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini,\nFosca Giannotti, and Dino Pedreschi, ‘A survey of methods for explain-\ning black box models’, ACM Computing Surveys (CSUR) ,51(5), 93,\n(2018).\n[8] AI HLEG, ‘High-level expert group on artificial intelligence’, Ethics\nGuidelines for Trustworthy AI , (2019).\n[9] ICO, ‘Project ExplAIn interim report’, Technical report, Information\nCommissioner’s Office, (2019).[10] IEEE, ‘Ethically aligned design: A vision for prioritizing human well-\nbeing with autonomous and intelligent systems’, IEEE Global Initiative\non Ethics of Autonomous and Intelligent Systems , (2019).\n[11] Joshua A Kroll, Solon Barocas, Edward W Felten, Joel R Reidenberg,\nDavid G Robinson, and Harlan Yu, ‘Accountable algorithms’, U. Pa. 
L.\nRev.,165, 633, (2016).\n[12] Zeshan Kurd and Tim Kelly, ‘Safety lifecycle for developing safety crit-\nical artificial neural networks’, in Computer Safety, Reliability, and Se-\ncurity , eds., Stuart Anderson, Massimo Felici, and Bev Littlewood, pp.\n77–91, Berlin, Heidelberg, (2003). Springer Berlin Heidelberg.\n[13] David Lehr and Paul Ohm, ‘Playing with the data: what legal scholars\nshould learn about machine learning’, UCDL Rev. ,51, 653, (2017).\n[14] Scott M Lundberg and Su-In Lee, ‘A unified approach to interpreting\nmodel predictions’, in Advances in Neural Information Processing Sys-\ntems, pp. 4765–4774, (2017).\n[15] OECD, Artificial Intelligence in Society , 2019.\n[16] OECD, Recommendation of the Council on Artificial Intelligence ,\n2019.\n[17] Gerald E Peterson, ‘Foundation for neural network verification and val-\nidation’, in Science of Artificial Neural Networks II , volume 1966, pp.\n196–207. International Society for Optics and Photonics, (1993).\n[18] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, ‘Why should\nI trust you?: Explaining the predictions of any classifier’, in 22nd ACM\nSIGKDD International Conference on Knowledge Discovery and Data\nMining , pp. 1135–1144, (2016).\n[19] Frederick Schauer, ‘Giving reasons’, Stanford Law Review , 633–659,\n(1995).\n[20] Andrew Selbst and Solon Barocas, ‘The intuitive appeal of explainable\nmachines’, SSRN Electronic Journal ,87, (01 2018).\n[21] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman, ‘Deep in-\nside convolutional networks: Visualising image classification models\nand saliency maps’, arXiv preprint arXiv:1312.6034 , (2013).\n[22] Philip S. Thomas, Bruno Castro da Silva, Andrew G. Barto, Stephen\nGiguere, Yuriy Brun, and Emma Brunskill, ‘Preventing undesirable be-\nhavior of intelligent machines’, Science ,366(6468), 999–1004, (2019).\n[23] US Food and Drug Administration, ‘Proposed regulatory framework for\nmodifications to artificial intelligence/machine learning (AI/ML)-based\nsoftware as a medical device’, Technical report, (2019).\n[24] Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Counterfactual\nexplanations without opening the black box: Automated decisions and\nthe gpdr’, Harv. JL & Tech. ,31, 841, (2017).\n[25] Max Welling, ‘Are ML and statistics complementary?’, in IMS-ISBA\nMeeting on ‘Data Science in the Next 50 Years , (2015).\n[26] Alan FT Winfield and Marina Jirotka, ‘The case for an ethical black\nbox’, in Annual Conference Towards Autonomous Robotic Systems , pp.\n262–273. Springer, (2017).", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "PsNILwmCQ-w", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "https://ceur-ws.org/Vol-1419/paper0116.pdf", "forum_link": "https://openreview.net/forum?id=PsNILwmCQ-w", "arxiv_id": null, "doi": null }
{ "title": "How Metalinguistic Negation Differs from Descriptive Negation: ERP Evidence", "authors": [ "Chungmin Lee" ], "abstract": null, "keywords": [], "raw_extracted_content": "How Metalinguisti c Negation Differs f rom Descriptive Negat ion: ERP Evidence \n \nChungmin Lee ([email protected] ) \nDepartment of Linguistics , Gwanak -ro 1, Gwanak -gu \nSeoul , 151-742, Korea \n \nAbstract \nThis talk explores degree adverbial modifiers licensed \nexclusive ly by metalinguistic negation (MN), and compare s \nthem with those licensed by descriptive negation ( DN) such \nas NPIs. It show s how MN-licensing is more marked than \nDN-licensing in prosody and then attempts to show how \nanomalies arising from misplacing MN -licensed adverbs in \nDN-requiring short form negation sentences elicit the \napproximate N400 but not the P600 in ERPs. This strongly \nsuggests that such anomalies are meaning -related and tends to \nsupport the pragmatic ambiguity position by Horn than the \ncontex tualist or relevance -theoretic approach. \nKeywords: metalinguistic negation; descriptive negation; \nmarkedness; prosody; ERPs; N400; pragmatic ambiguity; \ncontextualist \n1. Markedness of MN Adverbials \nSo far researchers have worked more on negative polarity \narguments and modifiers , which are licensed by descriptive \nnegation (DN) . The NPIs here simply reinforce the \nfalsification of the propositional contents. They are \ntherefore emphatic in general (Potts 2010, Israel 2004). \nCrosslinguistically and diachronical ly, NPIs have typically \ndeveloped from minimizers with ‘even ’ (Lee 1993, Y. Lee \nand Horn 1994, Lee 1999, Lee 2010 a.o.). \n(1) a. amwu -to o-ci anh –ass-ta (Korean = K) anyone -even \ncome -not-PAST -DEC \n‘Not anyone came.’ =b. ∼∃x (x: person ’ (x)) [came (x)] \nb. dare-mo ko-nakat -ta (Japanese = J) \nc. shwei -ye mei-you lai (Chinese = C) \n(2) a. theibul -i tomwuci wumciki -ci anh-nun-ta (K)table –\nNOM at all move -CI not -PRES -DEC ‘The table does not \nmove at all.’ =b. ∼∃x (x: way’ (x)) [move ’ (t)(in x)] \nb. teeburu wa mattaku ugoka -nai (J) \nc. zhuo -zi gen -be budong (C) \nMN, on the other hand, is used to reject, object to or rectify \na previous utterance ‘on any grounds whatever’ ((Horn \n1985 ), (Ducrot 1972) ). In (3), what is negated is not the \nproposition ‘I am happy ’ in its reference or truth but the \ndegree of happiness expressed by the adjective ‘HAPPY ’ in \nthe scale of happiness. The speaker objects to the way how \nit is put by the interpocutor. Typically, the expres sion \n‘HAPPY ’ occurs or is assumed to occur in a previous \nutterance. Because the first clause in (3) does not falsify its \npositive proposition but object to the degree of happiness, \nthe following clarification clause can assert a higher degree \nof happiness – ‘ECSTATIC ’ without creating a \ncontradiction, even though ecstatic entails happy in the \nHorn or entailment scale. \n(3) I’m not HAPPY; I’m ECSTATIC . (No contradiction arises ) \nIn this metalinguistic use o f negation, a negative polarity \nitem such as at al l, which co -occurs with DN, as in (2), cannot intervene. See * I’m not HAPPY at all ; I’m \nECSTATIC . A metalinguistic use of negation cannot be \nreplaced by a prefixal negation, either, as in * I’m unhappy ; \nI’m ECSTATIC . Therefore , we cannot include Geurts ’ \n(1998) ‘propositional ’ denial as one of the MN -like denials. 
\nIrony also has some sense of refutation , based on the \ngeneral or mutual assumption, expectation or hope for ‘a \npicnic day’ as a mental representation or thought , as in (4) \n(‘echoic use’ (Sperbe r and Wilson 1986 ; Carston 1996) . It is \nnegative, although expressed affirmatively . \n(4) It’s a lovely /fine/great day for a picnic! \nMN is an echoic rebuttal of whatever aspect of an \nexpression in a previous utterance to assert a rectifying \nexpression. There fore, the speaker ’s implicit inner \nalternative Q in C ontrastive Focus can be assumed to \nprecede it, as in (3 ’) and its initial reply equivalent to MN \ncan be assumed to be (5a), with the pair of expressions \nconnected by SN but (sino Spanish and sonder n Germ an), \nand its bi -clausal manifestation with no but is (5b), whose \nintonation is the L*(+H) L - H% of incredulity , distinct from \nthe Contrastive Topic intonation L+H* L - H% (Lee 2006, \nConstant 2012) . \n(3’) Are you HAPPY or ECSTATIC ? \n(5) a. I’m not HAPPY but ECSTATIC . \n b. I’m not HAPPY ; I’m ECSTATIC . \nThis paper explores degree modifiers licensed by MN, \nand compare s them with those licensed by DN and show s \nhow MN-licensing is more marked than DN-licensing in \nprosody first. The MN -licensed degree modifier A LITTLE \nin (6) forms a rising high peak of 254Hz after another peak \nof not (MN) in Fig. 1 . This is in sharp contrast with those \nNPI-like minimizers licensed by DN in (7), one of which \nforms the a bit/a little !H downstep with 211.7Hz, preceded \nby a high H* not. Because of the distinct and marked MN \nintonation for (6) and other cases, the rectification or \nclarification clause may not follow; the conveyed meanings \nwhich may be called conventional implicatures, not \ncancellable, seem to be more assertive than ‘implicatures. ’ \nAs a result, the purport of (6) is affirmative whereas that of \n(7) is negative, although their written form is one and the \nsame , creating ambiguity in English. \n(6) She is not A LITTLE upset . (She is VERY upset .) \n(7) She is NOT a little upset . [even a little ] (She is not upset \nat all, is quite composed.) Sentences for our phonetic \nexperiments are modified from Bolinger (1972) . \n \nFig 1 a-little-MN: a double of rising accent peaks \n698\nIn Korean, the marked intonation of the MN -licensed \nadverbial POTHONG ‘commonly, ’ with a high pitch of \n375Hz on the adverb, is sharply contrasted with the \nintonation of the adverb of the same form with the scalar \nmarker –to ‘even ’ [pothong uro–to] attached to function as \nan NPI for DN (as in ‘--- not do well even commonly ’), \nwhich generates a comparatively low pitch of 295Hz on the \nadverb. The MN adverbial is prosodically marked. \nNow turn to the syntactic aspects of Korean negation to \nsee how MN is syntactically marked as well . The MN -\nlicensed stressed degree modifiers POTHONG and YEKAN , \nboth ‘commonly, ’ require external negation , as in (11a), \nlong form negation , as in (11b), or copula negation , as in \n(9c), but they cannot oc cur in a p ositive declarative S, as in \n(9d). In contrast, short form negation is typically for DN in \nKorean. Therefore, if the MN -licensed stressed degree \nmodifiers POTHONG or YEKAN occurs in short form \nnegation sentence, the result is anomalous , as in ( 8). \n(8) a. Mia-ka POTHONG yeppu -n kes-i ani-i -ya. \nM -Nom commonly pretty -PreN COMP -Nom not-Cop-Dec \n[extern -neg] \n‘Mia is not COMMONLY pretty .’ ~> Mia is exceedingl y \npretty. ’ \nb. 
Mia-ka POTHONG /Yekan yeppu -ci anh-a \nM -NOM commonly pretty -CI not -DEC (= a) [long-f \nneg]1 \n c. Mia-ka POTHONG (-i) ani -ya. [cop-neg]2 \n M -NOM common (NOM) not -DEC \n ‘Mia is not COMMON /ORDINARY. ’ ~> Mia is \nextraordinary. \n d. *Mia-ka POTHONG /Yekan yeppu -e. (with no negation) \n M-NOM commonly/relatively pretty -DEC \n(9) * Mia-ka POTHONG an yeppu -e. [short form neg ] \n(K) \nM -NOM commonly not pretty –DEC \nCf. Mia-ka cenhye an yeppu -e [NPI] \n at all \n ‘Mia is not pretty at all. ’ \nIn C, if bu ‘not’ co-occurs with an immediately following \nmain predicate to negate, it is interpreted as DN, not \nallowi ng a rectifying clause, as in (10 ). If it is, however, \nfollowed by the Focus marker shi (from ‘be’) first and then \nthe main predicate, it forms a bi -clausal MN construction \nwith shi in the rectifying clause, as in (11 ). An overt (or \ncovert) modal may replace shi for MN -licensing. The \n \n1 The syntactic form of external negation may favor M N both in Korean \nand English but external negation is not a sufficient condition for MN. \nAn NPI in the complement clause is not happily licensed. \n(a) ??It is not the case that anyone came. (ExtN) \n(b) ?? amu-to o-n key ani -ya (ExtN) (K) \n2 This may be regarded as a variant of external negation, as property \nnegation. negation of (11 ) can be assumed to be external (or cleft) S \nnegation in the Conrastive Focus construction. The MN \nconstruction is crucially connected to the SN ‘but’ \ncoordination in C as in ( 12), anira in Korean, naku in J, ma \nin Vietnamese, etc. (Lee 2010). \n(10) a. Ta bu g ao. #Ta feichang gao. (C) (cf. Wible et al 2000) \n3sg NEG be tall 3sg be extremely tall \nb. Ta bu rang wo qu. #Ta bi wo qu. \n3sg NEG let 1sg go 3sg force 1sg go \n(11) a. Ta bu shi gao. Ta shi feichang gao. \n3sg NEG FOC tall 3sg FOC extremely tall \nb. Ta bu shi rang wo qu. Ta shi bi wo qu. \n3sg NEG FOC let 1sg go 3sg FOC force 1sg go \n c. Ta bu hui rang wo qu. Ta hui bi wo qu. \n3sg NEG able let 1sg go 3sg ab le force1sg go \n(12) a. Wo bu shi xihuan ta, er-shi ai ta. \nI not like her but love her \n ‘I don ’t LIKE CF her but LOVE CF her.’ \n b. Ta bu shi gao, ershi pang. [content also matters] \n 3sg NEG FOC tall SN fat \n‘(S)he is no t tall but fat.’ \nLikewise in Chinese, YIBAN de ‘commonly ’ is an MN -\nlicensed degree adverb and freely occurs in an MN \nsentence, as in (13a), conveying a higher degree expression. \nBut it cannot occur in a positive sentence, as in (13b), nor in \na DN sentence, as in (14). Similarly in Japanese, the degree \nmodifier fuTSUU is typically licensed by MN to convey a \nhigher degree, as in (15). \n(13) a. Ta bu shi yibande piaoliang . (C) \n she MN commonly beautiful \n ‘She i s not COMMONLY beautiful .’ ~> (S )he is very beautiful . \n b. *Ta yiban de piyaoliang.3 \n(14) *Ta bu yiban de piyaoliang . (C) \n (s)he NEG commonly beautiful \n(15) a. fuTSUU -no kawaisa ja -nai [--- ja naku honto -no kawaisa -\nda] (J) \n common –of prettiness not MN much -of prettiness \n ‘(She) is not COMMONLY pretty.’ ~> She is very pretty. \n b. fuTSUU janai [fuTSUU ja naku sugoi ] \n common (Adj) not MN extraordinary \n ‘Not COMMON.’ (EXTRAORDINARY) \nCross linguistically in general, if ds is the echoic standard \ndegree of the predicate, its metalinguistically negated \nutterance generates its positive proposition with a higher \ndegree d > ds of the same predicate. 
The epistemic agent is \nthe speaker in a simple s entence, but it can be the subject in \nan embedded reported speech or complex attitude sentence. \nYEKAN in Korean and YIBAN de in Chinese are fixed as \nMN-licensed modifiers whereas POTHONG (uro) in Korean \nand fuTSUU in Japanese may have their unstressed uses i n \npositive utterances; pothong as an adverb is used in a \n \n3 Sojung Im (pc) brought this to my attention. The string bu yiban de in \n(14) was not found in the Peking University corpus and the anomal y of (14) \nwas confirmed by several native speakers of Chinese. \n699\ndifferent quantificational meaning ‘usually ’ and as a \npredicative noun pothong in K and fuTSUU in J they have \ntheir positive degree meaning of ‘common standard. ’4 \nEnglish has no counterpart of the MN -licensed echoic \nstandard degree modifier ‘common, ’ except the stressed \nMN-licensed below the middle degree modifier ‘A \nLTTLE ’/’A BIT, ’ previously discussed. \nWith those marked prosodic features and /or syntactic \nenvironments, MN -licensed degree modifiers c an take place \ncross -linguistically, as opposed to DN -licensed ones. We \nwill turn now to the next step: ERP studies. \n \n2. ERPs for MN Adverbials \nWe conducted ERP experiments with MN adverbials data \ntwice. In the two experiments, we tried to see what hap pens \nwhen MN -requiring adverbial s are placed in a short form \nnegation (typically exclusively used for DN) in Korean , not \nproperly in an external negation or a long form negation . \nNaturally we presented well -formed MN sentences with MN \nadverbials and ill -formed short form negation sentences \nwith MN adverbials in contrast. In Experiment 1, written \nsentences were presented visually, whereas in Experiment 2, \nspoken sentences were presented auditorily. \nERP E xperiment 1 Data Set A : Well -formed Extern al \nNegation with STRESSED MN adverbial in red color vs . \nill-formed Short Form Negation with STRESSED MN \nadverbial all in red. 10 well -formed (with 5 POTHONG \nsentences and 5 YEKAN sentences) , 10 ill -formed \nsentences (with 5 POTHONG sentences and 5 YEKAN \nsentences) , with 80 fillers , counterbalanced and presented to \neach. \n요즘 │ 아이들은 │ 보통 │큰 게 │ 아니야 \nthese days children commonly tall-Comp not -Cop-Dec \n‘It is not that these days children are COMMONLY tall. ’ \nFig 2 well-formed : MN-licens ed 보통 is in external \nnegation \n저 영화 │ 어제 │ 보통 │안 │ 졸렸어 \nthat movie yesterday commonly not boring \n‘It is not that that movie yesterday was commonly boring. ’ \nFig 3 ill-formed : MN-licens ed 보통 is in short form \nnegation \nProcedure, EEG Measu rement and Analysis \na. Subjects were presented with written sentences visually by E-\nPrime 2.0 our s timulus presentation software . \nb. Ag/AgC1 electrodes and Brainamp were used ;. VEOG and \nHEOG were employed with online filtering at 0.1Hz -70Hz, \nsampling rat e at 500Hz, and the impedance of electrodes under \n10 kΩ. \n \n4 See the degree expressions with a copula in a positive utterance, all \nunstressed: \na. Pothong -i-ya (K) b. FuTSUU –desu (J) Comm\non-COPUL A-DEC Common -COPUL A-DEC ‘That’s commo\nn (ordinary) (in degree/standard) .’ c. To measure individual subjects’ brainwave responses to each \nstimulus, the waves by each stimulus were divided by the time \nunits at which each stimulus was presented. In Experiment 1 \nwith Set A, the averages of the divided waveforms from all the \nelectrodes were measured to get respective significant P -values . 
\nBy targeting the average of all subjects’ ERP responses, we \nproduced the final, grand average curve of ERP responses with \nthe N400 , as shown in Fig 12. \n \nDiscussion of Experiment 1 on Written Visual Data \nWhat do the results of Experiment 1 say? The N400 ERP \nresults on Cz in Fig 12 , the g rand average of four subjects’ \nbrain -wave curves , reveal that some meaning -related \nanomaly occurred from dat a Set A of the contrast between \nthe well -formed external MN sentences with the MN -\nlicensed degree adverbials and the ill -formed short form \nnegation sentences with the same MN -licensed degree \nadverbials. In the Set A experiment, when a subject ’s eyes \nin the external negation condition reach the MN -licensed \ndegree adverb marked in red, (s)he must expect an adjective \nor adverb to be modified by the MN adverb and the \ncomplement clause ending, followed by external negation. \nBut in the short form negation conditi on, when the subject ’s \neyes reach the same MN -licensed degree adverb marked in \nred, (s)he must expect exactly the same external negation \n(or a long form negation) that can license the MN degree \nadverb but in fact (s)he encounters the short form negati on \nin the fourth column, followed by an adjective or adverb to \nbe modified. (S)he would then be in a conflict between the \nMN adverb and the DN. An MN adverb cannot be licensed \nor interpreted by DN, which implies that MN and DN are \ndistinctly used at least in pragmatic meaning. \nThe adverb in red must have been charitably interpreted \nas a stressed MN adverb. Similarly, even without red for the \nadverb in the case of the intended ill -formed unstressed \nadverb condition in the external negation sentence in Set 1 , \nbecause of the forceful MN bias of the external negation, \nparticipants seem to have interpreted the adverb in black \ncharitably as (stressed) MN -licensed degree adverb and that \nseems to be why no results appeared. \n \nExperiment 2: ERP Analysis of MN Adverbi als in \nSpoken Sentences \n \nMethod \nSubjects \n15 undergraduate subjects (4 females and 11 males) with \na mean age of 23.53 years (range: from 20 to 34, \nundergraduate Seoul National University students) \n700\nparticipated for a cash payment of W25, 000 (about \n$25/h our). All were standard (Seoul -Gyeonggi) Korean \nspeakers, right -handed, not weak -sighted, with no history of \nneurological disorders. These conditions were announced \nbeforehand in the internet recruitment and were met in the \nsubjects ’ written experiment protocol in the lab. . \nStimuli \nIn Experiment 2, recorded auditory sentences, unlike the \nwritten sentences in Experiment 1, were presented. The \nmatch (well -formed) condition with the stressed MN -\nlicensed degree adverb in external negation sentence vs. the \nmismatch (ill -formed) condition here with the same stressed \nMN-licensed degree adverb in short form negation sentence \nis the same as in Experiment 1 (Set A). The only difference \nlies in that the MN adverb was in red in written sentences of \nexternal neg ation and short form negation in Experiment 1 \nbut the same MN adverb was heard or auditory in recorded \nsentences of external negation and short form negation in \nExperiment 2. 
In the match (well-formed) condition, 30 external negation sentences (15 with pothong 'commonly' and 15 with yekan 'ordinarily') were prepared, and in the mismatch (ill-formed) condition, 30 short form negation sentences (15 with pothong 'commonly' and 15 with yekan 'ordinarily'); 60 experimental sentences in total were thus prepared, as well as 80 filler sentences, totaling 140 sentences. The MN-licensed degree adverbs were all stressed in the spoken sentences. Each subject heard all these types, but with each sentence randomly assigned to one type.
The Well-formed Condition sentences and the Ill-formed Condition sentences were constructed in the same fashion as for Experiment 1.

Procedure, EEG Measurement and Analysis
In order to keep the participants attentive during the whole session, they were told to press M if the sentence just heard was natural and to press Z if it was not natural, at the end of each sentence heard. From this test, we could distinguish a group of seven participants who made the wrong, opposite responses 11 to 30 times from the rest, who made fewer than six wrong responses. We eliminated the seven ill-behaved subjects from the analysis. Because a last-minute E-Prime programming error (placing a pair of anomalous sentences in a row) was found, one relevant subject was also eliminated, and the total left for analysis was seven (7) subjects.
Significant differences were detected at the five electrode sites near the center (particularly C4) with the N400 effect in Experiment 2. This is slightly different from Experiment 1, where the locus was exactly Cz (center) of the scalp. In order to decrease the noise effect, the ERP signals were downsampled to 30Hz (and the ±200 µV epochs (30-40 out of 115-117) were eliminated).
By employing the t-value of the T-test as the test statistic in a Permutation Test, we obtained the following:
(16) a. From the five electrode sites (C4, CP2, CP5, P4, P7), significant differences between the mismatch (ill-formed) condition (S10 in E-Prime) and the match (well-formed) condition (S20 in E-Prime) were obtained; 5,000 repetitions; α=0.05.
b. ANOVA: The following were examined: (i) subjects (random) x experiment manipulation (repeated measures); (ii) electrodes (random) x experiment manipulation (repeated measures).
An F1 repeated measures ANOVA with hemispheres (2) x ROIs (electrodes) x manipulation is desirable but will be addressed in a later refinement with the total raw data.

Discussion of Experiment 2
As indicated, the N400 effect was elicited from the five electrode sites near the center on both hemispheres, including C4, in Experiment 2 with the spoken sentences in which MN-licensed degree adverbs were placed in the matching external (MN) sentences vs. the mismatching short-form negation (DN) sentences. A certain difference from the results of Experiment 1 with the written sentences lies in that the N400 effect was elicited from channel Cz (center) in Experiment 1. The difference may be due to visual vs. auditory data. The same perspicuous negativity with the N400 effect in Experiment 2, however, should be caused by the same meaning-related anomalies.
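A minimal sketch of the permutation procedure summarized in (16a) above is given below. The two-condition layout, the amplitude arrays, and the function names are illustrative assumptions for one electrode and one time window; they are not the authors' analysis scripts.

import numpy as np

rng = np.random.default_rng(0)

def t_value(a, b):
    """Two-sample t statistic between condition means (Welch-style denominator)."""
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

def permutation_test(match, mismatch, n_perm=5000, alpha=0.05):
    """match, mismatch: 1-D arrays of mean N400-window amplitudes (one value per
    subject). Labels are shuffled n_perm times to build the null distribution."""
    t_obs = t_value(mismatch, match)
    pooled = np.concatenate([match, mismatch])
    n_match = len(match)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                          # exchange condition labels
        if abs(t_value(pooled[n_match:], pooled[:n_match])) >= abs(t_obs):
            count += 1
    p = (count + 1) / (n_perm + 1)
    return t_obs, p, p < alpha

# hypothetical mean amplitudes at C4 for the seven analyzed subjects
match    = rng.normal(-1.0, 1.0, 7)   # well-formed (S20)
mismatch = rng.normal(-3.0, 1.0, 7)   # ill-formed  (S10)
print(permutation_test(match, mismatch))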
The N400 is 'qualitatively distinct' from the P600, which is a reflection of syntactic anomalies such as number and gender agreement, phrase structure, verb subcategorization, verb tense, constituent movement, case, and subject-verb honorification agreement, to be added in this work (see Osterhout et al. (1999) for the distinction: the ERP brain response to semantic/pragmatic anomalies (selection restriction violations etc.) is dominated by a large increase in the N400 component, whereas the response to a disparate set of syntactic anomalies is dominated by a large-amplitude positive shift). See Kutas et al. (2011) for a survey of the ERP N400 and meaning.

Fig 19: The N400 elicited at C4.

3. General Discussion of ERPs for MN Adverbials
The markedness hierarchy of the three different types of S must be:
(17) MN S > DN S > Affirmative S⁵ (DN = descriptive negation)
MN reveals phonetic and/or syntactic prominence in Contrastive Focus (CF), in contrast to DN, in English/Korean. Because the stressed POTHONG/YEKAN in Korean cannot appear in a positive sentence, as in (11d), researchers so far could not distinguish them from NPIs in Korean linguistics (Cho et al. 2002; Whitman et al. 2004). But crucially they cannot co-occur in a negative sentence. A long form negation in Korean can license either an NPI or an MN adverb, but only separately. See (1a) with an NPI and (11b) with an MN adverb, both licensed by long form negation. The same negation cannot, however, license both an NPI and an MN adverb at the same time.⁶ Observe (18).
(18) *amwu yeca-to POTHONG/YEKAN yeppu-ci anha
     any woman-even commonly pretty-Conn not(LF)
     'Not any woman is commonly pretty.' (Intended)
Regarding the distinct functions of MN and DN: unlike scholars such as Russell (1905) and Karttunen & Peters (1979), who advocate the semantic ambiguity position, Horn (1985, 1989) takes the pragmatic ambiguity position. Horn's position is based on the unavailability of the implicated upper bound of weak scalar predicates (e.g. ---we don't like coffee, we love it), which he argues is pragmatic. It is a denying of the assertability or felicity of an utterance or statement rather than a negating of the truth of a proposition. His pragmatic ambiguity must be between the two uses, MN and DN, within his still one-semantic-negation monoguist position. Levinson's (2000) criticism that even a semantically negated statement does not have any implicatures is not tenable. Some more echoic, nonveridical contexts may license MN uses, often rhetorically. I argue that the prosodically frozen MN uses of A LITTLE, POTHONG (K), and fuTSUU (J) and the lexicalized MN uses of YEKAN (K) and YIBANde (C) have their pragmatic meaning associated with MN. On the other hand, the context-driven or relevance-theoretic approach of Sperber & Wilson (1986), Carston (1988, 1998), Noveck et al. (2007), Breheny et al. (2006) and Noh et al. (2013), also as monoguists, argues that there is no pragmatic 'ambiguity' or separate MN use/meaning and that scalar implicature arises by pragmatic enrichment of the scalar term involved. So, the literal form a or b as excluding a and b is, for them, due to contextual enrichment from inclusive ('literal') to exclusive, not by default.
But consider 'not a or b' by DN becoming 'not a and b' = 'neither a nor b.' We need MN to get a and b from 'a or b.' To settle the debate, we need empirical, experimental evidence.
In the case of English and other intonation-based MN languages, a prosodic distinction elicits the MN vs. DN ambiguity (with the frozen MN ∼MN adverb intonation), as in (6) vs. (7). Here semantically weak degree adverbs like 'a little' were involved. In Korean and Japanese, a stress (prosody) distinction (less so in J) elicits the same ambiguity, but on the standard degree adverb such as 'commonly.' Furthermore, some lexicalized MN-licensed degree adverbs developed in K and C, as in yekan 'ordinarily' and yibande 'commonly.' The MN-licensed adverbs placed in a short form negation (DN) sentence, in contrast to those in an external negation (MN) sentence, elicited the N400.
5 Giora (2006) takes the symmetry position between (descriptive) negation and affirmation.
6 A similar phenomenon in English has been indicated: an NPI cannot appear in MN, as in (a) (Karttunen et al. 1979: 46-47).
(a) *Chris didn't manage to solve any of the problems---he managed to solve all of them. (Horn 1989, 374)
Unlike the contradictory pairs with explicit or implicit negation used in past experiments, which often did not elicit any immediate N400 effect and needed prior proper linguistic contexts for due expectations (Staab et al. 2008), the distinction between MN and DN is not necessarily context-dependent, because of MN's marked prosodic and/or lexical features that require MN and the necessarily conveyed implicature or following clarification clause.
I give independent support to my claim that pragmatic meaning anomalies elicit the N400. Sakai's (2013) ERP studies on Japanese honorific processing show that if you address a boy as "Kato-sama" honorifically, it is mismatched with the context and elicits the N400, in contrast with calling him "Kato."
Noh et al. (2013) report, in a rare and valuable psycholinguistic eye-tracking experiment on MN, that the subjects' processing times at the clarification clauses were not different between MN and DN, claiming that their results support the contextualist or relevance theory. As indicated, this theory has no separate use or pragmatic 'meaning' and therefore no ambiguity; MN is also truth-functional for them. But the Korean examples this study employed are dubious; the first "MN" example the authors provided involves the following short form negation an 'not':
(18') (their (7a)) Yuna-nun ton-ul an pel-ess-e; ssule moa-ss-e.
      Yuna-TC money-AC not make-PST-DC; rake in-PST-DC
      "Yuna didn't make money; she raked in money."
As we already explained, the short form negation an 'not' is typically used as DN in Korean. Then, what can we expect from the bi-clausal construction in (18')? Sheer contradiction, and so it is. Native Korean speakers who are not biased will all agree. The English bi-clausal MN construction is prosodically marked and cannot allow for the concessive But/but before the clarification clause.
Therefore, if the combined use condition is met, MN can involve even truth-conditional entailment cases, and that is why Horn's definition contains the expression 'on any grounds whatever.' The following utterance:
(19) I'm not HAPPY; (*but) I'm MISERABLE
is an MN case for Horn even though miserable entails ∼happy, not creating any contradiction. The first clause of (19) objects to the expression HAPPY and asserts the salient alternative clarification clause.⁷ Compare it with (3), where not leads to a contradiction if read descriptively. This is not an MN for contextualists. Of course, there are quite a few researchers who do not adopt this claim and narrow down the range of MN cases. Although this is still debatable, taking such "DN" examples occurring in external negation, which typically licenses MN, is not convincing; for Horn, they are simply other cases of MN. This is particularly true of pairs of expressives or emotion-charged expressions such as wangtaypak 'hit the jackpot' vs. phwungpipaksan 'break into fragments,' occurring in MN-licensing constructions in Korean. Either one of the two expressives may be metalinguistically negated. The participants might have skipped 'non-sensible' MNs quickly 'with a fast effect' (in their sensicality test, the mean sensicality of MNs was significantly lower than that of DNs) and might have read sensible MNs more slowly than DN ones with a slow effect, resulting in 'no difference' between conditions. As the reviewer supposed, this is rather in support of the 'meaning' approach than of their contextualist position. MN-licensing is most optimal in external negation and far less optimal in long form negation. The long form negation tends to lead to DN by default, although it can license MN. The intended MN alternatives in contrast may become more easily non-sensible in long form negation than in external negation, and they are doomed to be non-sensible in short form negation.
7 In German, the SN 'but' is employed for this situation: Ich bin nicht gluecklich, sondern ungluecklich.

4. Concluding Remarks
We made the distinction between two types of modifiers: those licensed exclusively by MN and those licensed by DN. The former are certain MN-licensed degree adverbs, which are prosodically, lexically and syntactically conditioned, and the latter are NPIs, which reinforce negation unlike the former. The distinction suggests that MN and DN have distinct functions and uses, even if we assume that there is one single logical negation, departing from Russell (1905) and Karttunen et al. (1979). Horn's (1985, 1989) pragmatic ambiguity position is in contrast to the context-driven or relevance-theoretic approach of Sperber et al. (1986) and Carston (1988, 1998), who deny that there is pragmatic 'ambiguity' and claim that scalar implicature arises by pragmatic enrichment of the scalar term involved. How can we settle the debate?
We are curious about possible empirical, experimental evidence that may shed light on the debate. A hypothesis can be: if the stressed MN-licensed degree adverb POTHONG/YEKAN co-occurs with short form negation (DN) in a sentence, the adverb will not be licensed by MN, which is absent, and as a result the sentence will be anomalous. But would the anomaly be meaning-based or structure-based?
With this in mind, we conducted two types of ERP experiments on MN, for the first time as far as we know: in Experiment 1 (pilot), pairs of written sentences (with the stressed adverb in red) were presented and, by targeting the average of all four subjects' ERP responses, we produced the final grand average curve of ERP responses with the N400 over Cz, the central site. In Experiment 2, fifteen subjects participated. In the well-formed condition, 30 external negation sentences, with pothong 'commonly' and yekan 'ordinarily,' and in the ill-formed condition, 30 short form negation sentences, with stressed pothong and yekan, as well as 80 fillers, were presented, all in recorded sound. The N400 effect, ranging near 400ms from onset, was elicited from the five electrode sites near the center, including C4, in this experiment with the spoken sentences. Also, a significant negativity signal around 700ms was detected. This is an interesting difference from the results of Experiment 1, where a rather typical N400 effect was observed. However, nothing like the P600 was detected.
We need more data and analyses, but we tentatively claim that the N400 effect was elicited from the two conditions and that, if this turns out to be valid, it shows that the anomaly is meaning-related, though pragmatic. This tends to support the pragmatic ambiguity position rather than the contextualist non-ambiguity approach. This is just the first step in the direction of researching brain responses to anomalies involving MN-licensed degree modifiers.

References (Selected)
Breheny, R., Katsos, N., Williams, J. (2006). Are scalar implicatures generated by default? Cognition 100(3), 434-463.
Burton-Roberts, Noël (1989). On Horn's dilemma: presupposition and negation. Journal of Linguistics 25, 95-125.
Carston, Robyn (1996). Metalinguistic Negation and Echoic Use. Journal of Pragmatics 25, 309-330.
Cho, S. and Lee, H. (2002). Syntactic and Pragmatic Properties of NPI Yekan in Korean. In N. Akatsuka et al. (eds.), Japanese/Korean Linguistics 10. CSLI.
Ducrot, O. (1972). Dire et ne pas dire. Hermann.
Horn, L. (1985). Metalinguistic Negation and Pragmatic Ambiguity. Language 61, 121-174.
Israel, Michael (1996). Polarity Sensitivity as Lexical Semantics. Linguistics & Philosophy 19, 619-666.
Kuno, S. and J. Whitman (2004). Licensing of Multiple Negative Polarity Items. In Studies in Korean Syntax and Semantics. Seoul: Pagijong.
Kutas, Marta and Kara D. Federmeier (2011). Thirty Years and Counting: Finding Meaning in the N400 Component of the Event-Related Brain Potential (ERP). Annual Review of Psychology 62, 14.1-27.
Lee, Chungmin (1993). Frozen expressions and semantic representation. Language Research 29, 301-326.
Lee, Chungmin (2006). Contrastive Topic/Focus and Polarity in Discourse. In K. von Heusinger and K. Turner (eds.), Where Semantics Meets Pragmatics, CRiSPI 15, 381-420. Elsevier.
Lee, Young-Suk and Laurence Horn (1994). Any as indefinite plus even. MS, Yale University.
Levinson, S. (2000). Presumptive Meaning: The Theory of Generalized Conversational Implicature. MIT Press, Cambridge, MA.
Noh, Eun-Ju, Hyeree Choo, Sungryong Koh (2013). Processing metalinguistic negation: Evidence from eye-tracking experiments. Journal of Pragmatics 57, 1-18.
Noveck, Ira and Dan Sperber (2007).
The why and how of experimental pragmatics: The case of 'scalar inferences.' In N. Burton-Roberts (ed.), Advances in Pragmatics. Palgrave.
Osterhout, Lee and Janet Nicol (1999). On the distinctiveness, independence, and time course of the brain responses to syntactic and semantic anomalies. Language and Cognitive Processes 14(3), 283-317.
Potts, Chris (2010). On the negativity of negation. In Nan Li and David Lutz (eds.), Proceedings of SALT 20.
Recanati, Francois (1993). Direct Reference: From Language to Thought. Blackwell.
Russell, B. (1905). On denoting. Mind 14. Blackwell.
Sakai, H. (2013). Computation for Syntactic Dependency at Language Culture Interface: A View from ERP Studies on Japanese Honorific Processing. Hiroshima U., Konkuk U. talk.
Sperber, D. and D. Wilson (2004). Relevance Theory. In G. Ward and L. Horn (eds.), Handbook of Pragmatics, 607-632. Oxford: Blackwell.
Staab, Jenny, Thomas P. Urbach, and Marta Kutas (2008). Negation Processing in Context Is Not (Always) Delayed. In Jamie Alexandre (ed.), Center for Research in Language, UCSD.
Whitman, John and Susumu Kuno (2004). Licensing of Multiple Negative Polarity Items. In Susumu Kuno (ed.), Studies in Korean Syntax and Semantics, 207-228. Seoul: Pagijong.
Wible, David and Eva Chen (2000). Linguistic Limits on Metalinguistic Negation: Evidence from Mandarin and English. Language and Linguistics.

Acknowledgments
I thank Sung-Eun Lee and Sungryong Koh for their technical contributions to the ERP experiments, and Yoonjung Kang and Jeff Holliday for their contributions to the phonetic experiments. I am also grateful to Larry Horn and Michael Israel for their comments on one of the earliest versions and on the CIL19 presentation. This work was supported by the National Research Foundation under (Excellent Scholar) Grant No. 100-20090049 through the Korean Government.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "qPwsT0CDnA", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "https://ceur-ws.org/Vol-1419/paper0026.pdf", "forum_link": "https://openreview.net/forum?id=qPwsT0CDnA", "arxiv_id": null, "doi": null }
{ "title": "On Mental Imagery in Lexical Processing: Computational Modeling of the Visual Load Associated to Concepts", "authors": [ "Daniele Paolo Radicioni", "Francesca Garbarini", "Fabrizio Calzavarini", "Monica Biggio", "Antonio Lieto", "Katiuscia Sacco", "Diego Marconi" ], "abstract": null, "keywords": [], "raw_extracted_content": "On Mental Imagery in Lexical Processing:\nComputational Modeling of the Visual Load Associated to Concepts\nDaniele P. Radicioni\u001f, Francesca Garbarini , Fabrizio Calzavarini\u001e\nMonica Biggio , Antonio Lieto\u001f, Katiuscia Sacco , Diego Marconi\u001e\n([email protected])\n\u001fDepartment of Computer Science, Turin University { Turin, Italy\n\u001eDepartment of Philosophy, Turin University { Turin, Italy\n Department of Psychology, Turin University { Turin, Italy\nAbstract\nThis paper investigates the notion of visual load , an es-\ntimate for a lexical item's e\u000ecacy in activating mental\nimages associated with the concept it refers to. We elab-\norate on the centrality of this notion which is deeply\nand variously connected to lexical processing. A com-\nputational model of the visual load is introduced that\nbuilds on few low level features and on the dependency\nstructure of sentences. The system implementing the\nproposed model has been experimentally assessed and\nshown to reasonably approximate human response.\nKeywords: Visual imagery; Computational modeling;\nNatural Language Processing.\nIntroduction\nOrdinary experience suggests that lexical competence,\ni.e. the ability to use words, includes both the abil-\nity to relate words to the external world as accessed\nthrough perception ( referential tasks) and the ability to\nrelate words to other words in inferential tasks of sev-\neral kinds (Marconi, 1997). There is evidence from both\ntraditional neuropsychology and more recent neuroimag-\ning research that the two aspects of lexical competence\nmay be implemented by partly di\u000berent brain processes.\nHowever, some very recent experiments appear to show\nthat typically visual areas are also engaged by purely\ninferential tasks, not involving visual perception of ob-\njects or pictures (Marconi et al., 2013). The present work\ncan be considered as a preliminary investigation aimed\nat verifying this main hypothesis, by investigating the\nfollowing issues: i)to what extent the visual load asso-\nciated with concepts can be assessed, and which sort of\nagreement exists among humans about the visual load\nassociated to concepts; ii)which features underlie the\nvisual load associated to concepts; and iii)whether the\nnotion of visual load can be grasped and encapsulated\ninto a computational model.\nAs it is widely acknowledged, one main visual cor-\nrelate of language is imageability , that is the property\nof a particular word or sentence to produce an experi-\nence of imagery: in the following, we focus on visual im-\nagery (thus disregarding acoustic ,olfactory and tactile\nimagery), which we denote as visual load . The visual\nload is related to the easiness of producing visual im-\nagery when an external linguistic stimulus is processed.Intuitively, words like `dog' or `apple' refer to concrete\nentities and are associated with a high visual load, im-\nplying that these terms immediately generate a mental\nimage. Conversely, words like `algebra' or `idempotence'\nare hardly accompanied by the production of vivid im-\nages. 
Although the construct of visual load is closely related to that of concreteness, concreteness and visual load can clearly dissociate, in that i) some words have been rated high in visual load but low in concreteness, such as some concrete nouns that have been rated low in visual load (Paivio, Yuille, & Madigan, 1968); and, conversely, ii) abstract words such as 'bisection' are associated with a high visual load.
The notion of visual load is relevant to many disciplines, in that it contributes to shed light on a wide variety of cognitive and linguistic tasks and helps explaining a plethora of phenomena observed in both impaired and normal subjects. In the next Section we survey a multidisciplinary literature showing how mental imagery affects memory, learning and comprehension; we consider how imagery is characterized at the neural level; and we show how visual information is exploited in state-of-the-art Natural Language Processing research. In the subsequent Section we illustrate the proposed computational model for providing concepts with their visual load characterization. We then describe the experiments designed to assess the model through an implemented system, report and discuss the obtained results. The Conclusion will summarize the work done and provide an outlook on future work.

Related Work
As regards linguistic competence, it is generally accepted that visual load facilitates cognitive performance (Bergen, Lindsay, Matlock, & Narayanan, 2007), leading to faster lexical decisions than not-visually loaded concepts (Cortese & Khanna, 2007). For example, nouns with high visual load ratings are remembered better than those with low visual load ratings in long-term memory tests (Paivio et al., 1968). Moreover, visually loaded terms are easier to recognize for subjects with deep dyslexia, and individuals respond
more quickly and accurately when making judgments about visually loaded sentences (Kiran & Tuchtenhagen, 2005). Neuropsychological research has shown that many aphasic patients perform better with linguistic items that more easily elicit visual imagery (Coltheart, 1980), although the opposite pattern has also been documented (Cipolotti & Warrington, 1995).

Figure 1: The (simplified) dependency tree corresponding to the sentence 'The animal that eats bananas on a tree is the Monkey'.

Visual imageability of concepts evoked by words and sentences is commonly known to affect brain activity. While visuosemantic processing regions, such as left inferior temporal gyrus and fusiform gyrus, revealed greater involvement during the comprehension of highly imageable words and sentences (Bookheimer et al., 1998; Mellet, Tzourio, Denis, & Mazoyer, 1998), other semantic brain regions (i.e., superior and middle temporal cortex) are selectively activated by low-imageable sentences (Mellet et al., 1998; Just, Newman, Keller, McEleney, & Carpenter, 2004). Furthermore, a growing number of studies suggests that words encoding different visual properties (such as color, shape, motion, etc.) are processed in cortical areas that overlap with some of the areas that are activated during visual perception of those properties (Kemmerer, 2010).
Investigating the visual features associated to linguistic input can be useful to build semantic resources designed to deal with Natural Language Processing (NLP) problems, such as individuating verbs subcategorization frames (Bergsma & Goebel, 2011), enriching the traditional extraction of distributional semantics from text with a multimodal approach, integrating textual features with visual ones (Bruni, Tran, & Baroni, 2014). Finally, visual attributes are at the base of the development of annotated corpora and resources that can be used to extend text-based distributional semantics by grounding word meanings on visual features, as well (Silberer, Ferrari, & Lapata, 2013).

Model
Although much work has been invested in different areas for investigating imageability in general and visual imagery in particular, to the best of our knowledge no attempt has been carried out to formally characterize visual load, and no computational model has been devised to compute how visually loaded sentences and the lexicalized concepts therein are. We propose a model that relies on a simple hypothesis additively combining few low-level features, refined by exploiting syntactic information.
The notion of visual load, in fact, is used by and large in the literature with different meanings, thus giving rise to different levels of ambiguity.
We define visual load as the concept representing a direct indicator (a numeric value) of the efficacy for a lexical item to activate mental images associated to the concept referred to by the lexical item. We expect that visual load also represents an indirect measure of the probability of activation of brain areas deputed to visual processing.
We conjecture that the visual load is primarily associated to concepts, although lexical phenomena like terms availability (implying that the most frequently used terms are easier to recognize than those seen less often (Tversky & Kahneman, 1973)) can also affect it.
Based on the work by Kemmerer (2010), we explore the hypothesis that a limited number of primitive elements can be used to characterize and evaluate the visual load associated with concepts. Namely, Kemmerer's Simulation Framework allows to grasp information about a wide variety of concepts and properties used to denote objects, events and spatial relations. Three main visual semantic components have been individuated that, in our opinion, are also suitable to be used as different dimensions along which to characterize the concept of visual load. They are: color properties, shape properties, and motion properties. The perception of these properties is expected to occur in an immediate way, such that "during our ordinary observation of the world, these three attributes of objects are tightly bound together in unified conscious images" (Kemmerer, 2010). We added a further perceptual component related to size. More precisely, our assumption is that information about the size of a given concept can also contribute, as an adjoint factor and not as a primitive one, to the computation of a visual load value for the considered concept.
In this setting, we represent each concept/property as a boolean-valued vector of four elements, each encoding the following information: lemma, morphological information on POS (part of speech), and then whether the considered concept/property conveys information about color, shape, motion and size.¹ For example, this piece of information

table, Noun, 1, 1, 0, 1    (1)

can be used to indicate that the concept table (associated with a Noun, and differing, e.g., from that associated with a Verb) conveys information about color, shape and size, but not about motion.
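A minimal sketch of this four-feature encoding in Python is given below; the dictionary layout, the entries and the type name are illustrative assumptions, not the authors' implementation.

from typing import NamedTuple

class VisualFeatures(NamedTuple):
    """Boolean visual features attached to a (lemma, POS) pair, as in (1)."""
    color: int
    shape: int
    motion: int
    size: int

# hypothetical hand-annotated dictionary, keyed by (lemma, POS)
VISUAL_DICT = {
    ("table", "Noun"): VisualFeatures(color=1, shape=1, motion=0, size=1),
    ("eagle", "Noun"): VisualFeatures(color=1, shape=1, motion=1, size=1),
    ("idempotence", "Noun"): VisualFeatures(color=0, shape=0, motion=0, size=0),
}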
¹ We adopt here a simplification, since we are assuming that the pair ⟨lemma, POS⟩ is sufficient to identify a concept/property, and that in general we can access items by disregarding the word sense disambiguation problem, which is known as an open problem in the field of NLP.
In the following, these are
referred to as the visual features φ· associated with the given concept.
We have then built a dictionary by extracting it from a set of stimuli (illustrated hereafter) composed of simple sentences describing a concept; next, we have manually annotated the visual features associated with each concept. The automatic annotation of visual properties associated with concepts is deferred to future work: it can be addressed either through a classical Information Extraction approach building on statistics, or in a more semantically-principled way.
Different weighting schemes w = {α, β, γ} have been tested in order to determine the features' contribution to the visual load associated with a concept c, that results from computing

VL(c, w) = Σᵢ φᵢ = α(φ_col + φ_sha) + β·φ_mot + γ·φ_siz.    (2)

For the experimentation we set α to 1.35, β to 1.1 and γ to .9: these assignments reflect the fact that color and shape information is considered more important in the computation of VL.
To the ends of combining the contribution of concepts in a sentence s to the overall VL score for s, we adopted the following additive schema: VL(s) = Σ_{c∈s} VL(c).
The computation of the VL score also accounts for the dependency structure of the input sentences. The syntactic structure of sentences is computed by the Turin University Parser (TUP) in the dependency format (Lesmo, 2007). Dependency formalisms represent syntactic relations by connecting a dominant word, the head (e.g., the verb 'fly' in the sentence The eagle flies) and a dominated word, the dependent (e.g., the noun 'eagle' in the same sentence). The connection between these two words is usually represented by using labeled directed edges (e.g., subject): the collection of all dependency relations of a sentence forms a tree, rooted in the main verb (see the parse tree illustrated in Figure 1).
The dependency structure is relevant in our approach, because we assume that a reinforcement effect may apply in cases where both a word and its dependent(s) (or governor(s)) are associated with visual features. For example, a phrase such as 'with black stripes' is expected to evoke mental images in a more vivid way than its elements taken in isolation (that is, 'black' and 'stripes'); moreover, its visual load is expected to further grow if we add a coordinated term, as in 'with yellow and black stripes'.
Moreover, the VL would, recursively, grow if we added a governor term (like 'fur with yellow and black stripes'). We then introduced a parameter ξ to control the contribution of the aforementioned features in case the corresponding terms are linked in the parse tree by a modifier/argument relation (denoted as mod and arg in Equation 3).

VL(cᵢ) = ξ·VL(cᵢ) if ∃cⱼ s.t. mod(cᵢ, cⱼ) ∨ arg(cᵢ, cⱼ); VL(cᵢ) otherwise.    (3)

In the experimentation ξ was set to 1.2.
The stimuli in the dataset are pairs consisting of a definition d and a target T (st = ⟨d, T⟩), such as 'The big carnivore with yellow and black stripes is the ... tiger', where the descriptive clause is the definition d and the word to be named ('tiger') is the target T. The visual load associated to the components of st, given the weighting scheme w, is then computed as follows:

VL(d, w) = Σ_{c∈d} VL(c)    (4)
VL(T, w) = VL(T).    (5)

Figure 2: The pipeline to compute the VL score according to the proposed computational model.

The whole pipeline from the input parsing to the computation of the VL for the considered stimulus has been implemented as a computer program; its main steps include the parsing of the stimulus, the extraction of the (lexicalized) concepts by exploiting the output of the morphological analysis, and the tree traversal of the dependency structure resulting from the parsing step. The morphological analyzer has been preliminarily fed with the whole set of stimuli, and its output has been annotated with the visual features and stored into a dictionary. At run time, the dictionary is accessed based on morphological information, then used to retrieve the values of the features associated with the concepts in the stimulus. The output obtained by the proposed model has been compared with the results obtained in a behavioral experimentation as described below.
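The sketch below puts Equations (2) through (5) together. The weights are the ones reported in the text, while the dictionary from the earlier sketch, the reinforcement flag (which in the actual system would come from the TUP dependency parse, not reproduced here) and all function names are illustrative assumptions rather than the authors' code.

ALPHA, BETA, GAMMA, XI = 1.35, 1.1, 0.9, 1.2   # weights reported in the text

def vl_concept(feats):
    """Equation (2): weighted sum of the four boolean visual features."""
    return ALPHA * (feats.color + feats.shape) + BETA * feats.motion + GAMMA * feats.size

def vl_token(lemma, pos, dictionary, reinforced):
    """Equation (3): multiply by XI when the token is linked to its governor or
    dependent by a modifier/argument relation in the parse tree."""
    feats = dictionary.get((lemma, pos))
    if feats is None:
        return 0.0
    score = vl_concept(feats)
    return XI * score if reinforced else score

def vl_definition(tagged_tokens, dictionary):
    """Equation (4): additive score over the concepts of the definition d.
    tagged_tokens: iterable of (lemma, pos, reinforced) triples."""
    return sum(vl_token(lemma, pos, dictionary, r) for lemma, pos, r in tagged_tokens)

def vl_target(lemma, pos, dictionary):
    """Equation (5): the target score is the score of the target concept alone."""
    return vl_token(lemma, pos, dictionary, reinforced=False)

# hypothetical usage with the VISUAL_DICT sketched earlier:
# vl_definition([("carnivore", "Noun", True), ("stripe", "Noun", True)], VISUAL_DICT)
# vl_target("tiger", "Noun", VISUAL_DICT)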
Experimentation
Materials and Methods
Thirty healthy volunteers, native Italian speakers (16 females and 14 males), 19-52 years of age (mean ±sd = 25.7±5.1), were recruited for the experiment. None of the subjects had a history of psychiatric or neurological disorders. All participants gave their written informed consent before participating in the experimental procedure, which was approved by the ethical committee of the University of Turin, in accordance with the Declaration of Helsinki (World Medical Association, 1991). Participants were all naïve to the experimental procedure and to the aims of the study.
Experimental design and procedure. Participants were asked to perform an inferential task, "Naming from definition". During the task a sentence was pronounced and the subjects were instructed to listen to the stimulus given in the headphones and to overtly name, as accurately and as fast as possible, the target word corresponding to the definition, using a microphone connected to a response box. Auditory stimuli were presented through the E-Prime software, which was also used to record data on accuracy and reaction times. Furthermore, at the end of the experimental session, the subjects were administered a questionnaire: they had to rate on a 1-7 Likert scale the intensity of the visual load they perceived as related to each target and to each definition.
The factorial design of the study included two within-subjects factors, in which the visual load of both target and definition was manipulated. The resulting four experimental conditions were as follows:
VV Visual Target, Visual Definition (e.g., 'The bird of prey with great wings flying over the mountains is the ... eagle');
VNV Visual Target, Non-Visual Definition (e.g., 'The hottest of the four elements of the ancients is ... fire');
NVV Non-Visual Target, Visual Definition (e.g., 'The nose of Pinocchio stretched when he told a ... lie');
NVNV Non-Visual Target, Non-Visual Definition (e.g., 'The quality of people that easily solve difficult problems is said ... intelligence').
For each condition, there were 48 sentences, 192 sentences overall. Each trial lasted about 30 minutes. The number of words (nouns and adjectives), their balancing across stimuli, and the (syntactic dependency) structure of the considered sentences were uniform within conditions, so that the most relevant variables were controlled. The same set of stimuli used for the human experiment was given in input to the system implementing the computational model.
Data analysis
The participants' performance in the "Naming from definition" task was evaluated by recording, for each response, the reaction time RT, in milliseconds, and the accuracy AC, computed as the percentage of correct answers. The answers were considered correct if the target word was plausibly matched with the definition. Then, for each subject, both RT and AC were combined in the Inverse Efficiency Score (IES), using the formula IES = (RT / AC) · 100. IES is a metric commonly used to aggregate reaction time and accuracy, and to summarize them (Townsend & Ashby, 1978). The mean IES value was used as the dependent variable and entered in a 2×2 repeated measures ANOVA with 'target' (two levels: 'visual' and 'non-visual') and 'definition' (two levels: 'visual' and 'non-visual') as within-subjects factors. Post hoc comparisons were performed using the Duncan test.
The scores obtained by the participants in the visual load questionnaire were analyzed using unpaired T-tests, two tailed. Two comparisons were performed: for visual and non-visual targets, and for visual and non-visual definitions. The computational model results were analyzed using unpaired T-tests, two tailed. Two comparisons were performed for visual and non-visual targets and for visual and non-visual definitions.
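A small sketch of the IES aggregation just described follows; the per-condition values are a hypothetical example and not the study data.

def ies(mean_rt_ms, accuracy_pct):
    """Inverse Efficiency Score: IES = (RT / AC) * 100, with AC expressed in percent."""
    return mean_rt_ms / accuracy_pct * 100

# hypothetical per-subject means (RT in ms, accuracy in %) for the four conditions
subject = {"VV": (950, 97.9), "VNV": (1020, 95.8), "NVV": (1100, 93.7), "NVNV": (1180, 91.6)}
ies_by_condition = {cond: ies(rt, ac) for cond, (rt, ac) in subject.items()}
print(ies_by_condition)   # lower IES means faster and/or more accurate responding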
In order to verify the consistency of the correlation effects, we also performed linear regressions where we controlled for three covariate variables: the number of words, their balancing across stimuli, and the syntactic dependency structure.

Results
The ANOVA showed a significant effect of the within-subject factor 'target' (F(1,29) = 14.4, p < 0.001), indicating that the IES values were significantly lower for visual than for non-visual targets, and of the factor 'definition' (F(1,29) = 32.78, p < 0.001), indicating that the IES values were significantly lower for visual than for non-visual definitions. This means that, for both the target and the definition, the participants' performance was significantly faster and more accurate in the visual than in the non-visual condition. We also found a significant 'target × definition' interaction (F(1,29) = 7.54, p = 0.01). Based on the Duncan post hoc comparison, we verified that this interaction was explained by the effect of the visual definitions of the visual targets (VV condition), in which the participants' performance was significantly faster and more accurate than in all the other conditions (VNV, NVV, NVNV), as shown in Figure 3.

Figure 3: The graph shows, for each condition, the mean IES with standard error.

By comparing the questionnaire scores for visual (mean ± sd = 5.69 ± 0.55) and non-visual (mean ± sd = 4.73 ± 0.71) definitions we found a significant difference (p < 0.001; unpaired t-test, two-tailed). By comparing the questionnaire scores for visual (mean ± sd = 6.32 ± 0.4) and non-visual (mean ± sd = 4.23 ± 0.9) targets we found a significant difference (p < 0.001). This suggests that our arbitrary categorization of each sentence within the four conditions was supported by the general agreement of the subjects. By comparing the computational model scores for visual (mean ± sd = 4.0 ± 2.4) and non-visual (mean ± sd = 2.9 ± 2.0) definitions we found a significant difference (p < 0.001; unpaired t-test, two-tailed). By comparing the computational model scores for visual (mean ± sd = 2.53 ± 1.29) and non-visual (mean ± sd = 0.26 ± 0.64) targets we found a significant difference (p < 0.001). This suggests that we were able to computationally model the visual load of both targets and definitions, describing it as a linear combination of different low-level features: color, shape, motion and dimension.

Results of the correlations. By using the visual load questionnaire scores as the independent variable we were able to significantly (R² = 0.4, p < 0.001) predict the participants' performance (that is, their IES values), as illustrated in Figure 4. This means that the higher the participants' visual score for a definition, the better the participants' performance in giving the correct response (or, alternatively, the lower the IES value).

Figure 4: Linear regression 'Inverse Efficiency Score (IES) by Visual Load Questionnaire'. The mean score in the Visual Load Questionnaire, reported on a 1-7 Likert scale, was used as an independent variable to predict the subjects' performance, as quantified by the IES.

By using the computational data as the independent variable we were able to significantly (R² = 0.44, p < 0.001) predict the participants' visual load evaluation (their questionnaire scores), as shown in Figure 5.
This means that a correlation exists between the computational prediction of the visual load of the definitions and the participants' visual load evaluation: the higher the computational model result, the higher the participants' visual score in the questionnaire. We also found that these effects were still significant in the regression models where the number of words, their balancing across stimuli and the syntactic dependency structure were controlled for.

Figure 5: Linear regression 'Visual Load Questionnaire by Computational Model'. The mean value obtained by the computational model was used as an independent variable to predict the subjects' scores on the Visual Load Questionnaire, reported on a 1-7 Likert scale.

Conclusions
In the near future we plan to extend the representation of the conceptual information by grounding the conceptual representation on a hybrid representation composed of conceptual spaces and ontologies (Lieto, Minieri, Piana, & Radicioni, 2015; Lieto, Radicioni, & Rho, 2015). Additionally, we plan to integrate the current model in the context of cognitive architectures.

Acknowledgments
This work has been partly supported by the project The Role of the Visual Imagery in Lexical Processing, grant TO-call03-2012-0046, funded by Università degli Studi di Torino and Compagnia di San Paolo.

References
Bergen, B. K., Lindsay, S., Matlock, T., & Narayanan, S. (2007). Spatial and linguistic aspects of visual imagery in sentence comprehension. Cognitive Science, 31(5), 733-764.
Bergsma, S., & Goebel, R. (2011). Using visual information to predict lexical preference. In RANLP (pp. 399-405).
Bookheimer, S., Zeffiro, T., Blaxton, T., Gaillard, W., Malow, B., & Theodore, W. (1998). Regional cerebral blood flow during auditory responsive naming: evidence for cross-modality neural activation. Neuroreport, 9(10), 2409-2413.
Bruni, E., Tran, N.-K., & Baroni, M. (2014). Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49, 1-47.
Cipolotti, L., & Warrington, E. K. (1995). Semantic memory and reading abilities: A case report. Journal of the International Neuropsychological Society, 1(01), 104-110.
Coltheart, M. (1980). Deep dyslexia: A right hemisphere hypothesis. In Deep Dyslexia (pp. 326-380).
Cortese, M. J., & Khanna, M. M. (2007). Age of acquisition predicts naming and lexical-decision performance above and beyond 22 other predictor variables: An analysis of 2,342 words. Quarterly Journal of Experimental Psychology, 60(8), 1072-1082.
Just, M. A., Newman, S. D., Keller, T. A., McEleney, A., & Carpenter, P. A. (2004). Imagery in sentence comprehension: an fMRI study. NeuroImage, 21(1), 112-124.
Kemmerer, D. (2010). How words capture visual experience: The perspective from cognitive neuroscience. In B. Malt & P. Wolff (Eds.), Words and the Mind: How Words Capture Human Experience. Oxford Scholarship Online.
Kiran, S., & Tuchtenhagen, J. (2005). Imageability effects in normal Spanish-English bilingual adults and in aphasia: Evidence from naming to definition and semantic priming tasks. Aphasiology, 19(3-5), 315-327.
Lesmo, L. (2007, June). The Rule-Based Parser of the NLP Group of the University of Torino. Intelligenza Artificiale, 2(4), 46-47.
Lieto, A., Minieri, A., Piana, A., & Radicioni, D. P. (2015). A knowledge-based system for prototypical reasoning. Connection Science, 27(2), 137-152.
Lieto, A., Radicioni, D. P., & Rho, V. (2015, July). A Common-Sense Conceptual Categorization System Integrating Heterogeneous Proxytypes and the Dual Process of Reasoning. In Proceedings of IJCAI 2015. Buenos Aires, Argentina: AAAI Press.
Marconi, D. (1997). Lexical Competence. MIT Press.
Marconi, D., Manenti, R., Catricala, E., Della Rosa, P. A., Siri, S., & Cappa, S. F. (2013). The neural substrates of inferential and referential semantic processing. Cortex, 49(8), 2055-2066.
Mellet, E., Tzourio, N., Denis, M., & Mazoyer, B. (1998). Cortical anatomy of mental imagery of concrete nouns based on their dictionary definition. Neuroreport, 9(5), 803-808.
Paivio, A., Yuille, J. C., & Madigan, S. A. (1968). Concreteness, imagery, and meaningfulness values for 925 nouns. Journal of Experimental Psychology, 76, 1.
Silberer, C., Ferrari, V., & Lapata, M. (2013). Models of semantic representation with visual attributes. In Proceedings of ACL 2013 (pp. 572-582).
Townsend, J. T., & Ashby, F. G. (1978). Methods of modeling capacity in simple processing systems. Cognitive Theory, 3, 200-239.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.
World Medical Association. (1991). Code of Ethics: Declaration of Helsinki. BMJ, 302, 1194.
", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xlIBOB0LTBC", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "http://ceur-ws.org/Vol-1419/paper0020.pdf", "forum_link": "https://openreview.net/forum?id=xlIBOB0LTBC", "arxiv_id": null, "doi": null }
{ "title": "Structure Learning in Bayesian Networks with Parent Divorcing", "authors": [ "Ulrich von Waldow", "Florian Röhrbein" ], "abstract": null, "keywords": [], "raw_extracted_content": "Structure Learning in Bayesian Networks with Parent Divorcing
Ulrich von Waldow ([email protected])
Technische Universität München, Arcisstr. 21, 80333 Munich, Germany
Florian Röhrbein ([email protected])
Technische Universität München, Arcisstr. 21, 80333 Munich, Germany

Abstract
Bayesian networks (BNs) are an essential tool for the modeling of cognitive processes. They represent probabilistic knowledge in an intuitive way and allow drawing inferences based on current evidence and built-in hypotheses. In this paper, a structure learning scheme for BNs is examined that is based on so-called Child-friendly Parent Divorcing (CfPD). This algorithm groups together nodes with similar properties by adding a new node to the existing network. The updating of all affected probabilities is formulated as an optimization problem. The resulting procedure reduces the size of the conditional probability tables (CPTs) significantly and hence improves the efficiency of the network, making it suitable for the larger networks typically encountered in cognitive modelling.

Keywords: Bayesian network; learning; inference; adding nodes; probability computation

Introduction
A BN is a directed acyclic graph (DAG) with a probability distribution P, which is expressed as a CPT on each node (or on each variable, respectively). We assume binary nodes, so a value can be true or false. The CPTs have a size of 2^k, where k is the number of parents of a node. Let p be a node of the BN and let O be the set of parents of p. The aim is to reduce the size of the CPT of node p by splitting the set of parent nodes O into smaller sets O1, O2 with O1 ∪ O2 = O. To do so, we introduce a new node x as a new parent of p and reset the connections of the network. The result is a smaller CPT for node p. Of course there are several rules and net topologies that need to be taken into account. In addition, the probabilities of node p change and new probabilities for x must be computed.

This paper describes the procedure of Child-friendly Parent Divorcing (Röhrbein, Eggert, & Körner, 2009) and the behavior as well as the variation of the considered BN. The findings are explained and illustrated in a small example. The effect of reducing the total number of entries in the CPTs is shown in a simulation, where the CfPD is processed on five random BNs. Furthermore, a cognition scheme is shown that models a BN as a scenario for recognizing objects by providing evidence about their location and shape; the nodes of this network are arranged in different levels.

Child-Friendly Parent Divorcing
Standard Parent Divorcing
The underlying technique for the CfPD learning algorithm is based on a design tool that is explained in (Olesen, Kjaerluff, Jensen, & Jensen, 1989). In that paper, Parent Divorcing is defined as a technique to design more efficient Bayesian models. By introducing a new node as a divisive node, the number of incoming edges of a selected node is reduced. This influences the number of entries of the CPT of the selected node; therefore, the computational efficiency is increased. Figure 1 shows a simple example of standard Parent Divorcing. On the left side, node p has a CPT with size 2^m.
After inserting node x, node p in the resulting graph on the right-hand side has a CPT of size 2^(k+1) < 2^m (Neapolitan, 2003).

Figure 1: Simple example of Parent Divorcing according to (Olesen et al., 1989). The left side represents the initial graph. Inserting x as an intermediate node splits the parent nodes into two sets of nodes.

Modification
In general the following statement holds for a BN: let k be the number of parents of a node x; then the size of the CPT of x is 2^k. Moreover, let n be the total number of nodes x_i; then, if no node x_i has more than k parents, the total network needs at most n · 2^k entries.

The Parent Divorcing mentioned above is now modified in such a way that the resulting sets of parent nodes are not arbitrary, but are similar in some way, i.e. have a feature in common. Hence the aim is to find a type of similarity between those nodes that should be divorced into a set of groups (henceforth called the to-be-divorced parents). Here the structure, or rather the connectivity, comes into consideration, precisely the number of common child nodes.

Let D = (V, E) be a DAG with V as the set of nodes and E as the set of directed edges. The CfPD aims to find a set of nodes O = {o1, ..., om} with O ⊆ V that have at least one child node p1 ∈ V, p1 ∉ O, in common. These nodes are then examined to see whether there is another node p2 ∈ V, p2 ∉ O, that has a set of parents Q with Q ⊂ O.

In summary: p1 has the set of parents O = {o1, ..., om} and p2 has the set of parents Q ⊂ O. Therefore O is the set of the to-be-divorced parents. Next, a node x is inserted in V, which divides the to-be-divorced parents into two sets, namely O1 = {o1, ..., om} \ Q and O2 = Q. Furthermore, Par(p1) = O1 ∪ {x} and Par(p2) = O2 hold. Figure 4 shows this example with O1 = {o1, o2} and O2 = {o3, o4}.

Functionality and Rules
The section above provides an introduction and a small example of the CfPD. This section explains the algorithm in detail. The algorithm is a combination of structure and parameter learning. In the following, let D = (V, E) be a DAG with V as the set of nodes (or random variables) with |V| = n and E as the set of directed edges with |E| ≤ n(n-1)/2. In addition, let P be a joint probability distribution of the variables in V; then B = (D, P) is a BN (Neapolitan, 2003).
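The CPT bookkeeping behind these statements is easy to make concrete. The sketch below is ours (not from the paper): it counts 2^k entries per binary node with k parents and shows the saving obtained when a parent set of size m is split into k kept parents plus one intermediate node that takes over the remaining m - k parents.

def cpt_entries(num_parents):
    # A binary node with k parents has a CPT with 2**k entries.
    return 2 ** num_parents

def total_entries(parents):
    # Total CPT entries of a BN given as {node: iterable_of_parents}.
    return sum(cpt_entries(len(ps)) for ps in parents.values())

def divorce_savings(m, k):
    # Entries saved when a node keeps k of its m parents plus the new node x,
    # while x takes over the remaining m - k parents.
    return cpt_entries(m) - (cpt_entries(k + 1) + cpt_entries(m - k))

# Toy network of Figure 4a: p1 has four parents, p2 shares two of them.
net = {"o1": [], "o2": [], "o3": [], "o4": [],
       "p1": ["o1", "o2", "o3", "o4"], "p2": ["o3", "o4"]}
print(total_entries(net))      # 4*1 + 16 + 4 = 24 entries
print(divorce_savings(4, 2))   # 16 - (8 + 4) = 4 entries saved for p1
print(divorce_savings(19, 7))  # the large node discussed later: 2**19 - (2**8 + 2**12)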
Summing up, the algorithm can be grouped into five main steps:

1. Scanning the network, to find out whether CfPD can be executed.
2. Selecting a node that meets the expectations.
3. Adding a new node and updating the connections, which means adding and deleting some links.
4. Computing the parameters of the new node and the modified nodes.
5. Handling of already learned nodes.

These five steps are explained in detail in the following section.

DoChildFriendlyParentDivorcing(bnet)
0: // n := number of nodes of the Bayesian network bnet
1: if (scanning(bnet) = true) {
2:   forall i ∈ n {
3:     p1 := max(Par(i))
4:     if (∀ j ∈ Child(Par(p1)) \ p1 : |Par(j) ∩ Par(p1)| < 2) {
5:       i := i \ p1 → return to line 3.
6:     }
7:     elseif (∀ j ∈ Child(Par(p1)) : |max(Par(j))| ≥ 2)
8:       p2 := rand(max(Par(j)))
9:     }
10:    else { p2 := max(Par(j)) }
11:  }
12:  O := Par(p1)
13:  O' := Par(p2)
14:  O2 := O ∩ O'
15:  O1 := O \ O2
16:  delete(edges): O2 → p1
17:  insert(x): O2 → x ∧ x → p1
18:  computing(P(x)) ∧ computing(P(p1))
19: }
20: else { Child-friendly Parent Divorcing is not possible }

Figure 2: Pseudocode of Child-friendly Parent Divorcing.

scanning(bnet)
1: forall (i ∈ n) {
2:   if (∃ i : |Par(i)| ≥ 3 ∧ ∃ j ∈ Child(Par(i)), j ≠ i : |Par(i) ∩ Par(j)| ≥ 2) {
3:     return true
4:   }
5:   else { return false }
6: }

Figure 3: Pseudocode of the scanning step.

Scanning: This step scans the graph D to check whether it satisfies the expectations so that CfPD can be used. Figure 3 shows the pseudocode of this step. The following expectations must be met: (1) There must be a node p1 ∈ V that has at least three parent nodes (line 2), otherwise a divorce would be inefficient; assume Par(p1) = O ⊆ V with |O| = m and m ≥ 3. (2) There must be a set of nodes O' ⊆ O with |O'| = k and k ≥ 2 that have another child node p2 ∈ V in common. (3) If p1 is the only node that the to-be-divorced parents have in common, the algorithm should not be processed, because the required feature of having another child node in common is not satisfied. If one of these points is not satisfied, CfPD is not possible or should not be executed (for point 3).

Selecting: If the scanning step returns true, there may be several nodes that meet the requirements. The pseudocode in Figure 2 handles these conditions. With regard to step 1 of the scanning step, we are looking for a node whose number of parents is above a certain threshold t ≥ 3. If more than one node {p1, ..., pk} comes into question because its number of parents is higher than or equal to the threshold (thus |Par(pi)| ≥ t with i ∈ 1, ..., k), we select the node with the maximum number of parents, hence max{|Par(pi)|} (line 3), because this node has the largest CPT, which we are trying to reduce. Possibly there are two or more nodes with the same, maximum number of parent nodes; we call this set M, so ∀ pm ∈ M : |Par(pm)| = max(|Par(pi)|). In that case the algorithm chooses the node for which the intersection between its parent nodes and the parent nodes of another node pj ≠ pm is maximized, hence max{|Par(pm) ∩ Par(pj)|}, where pm ∈ M. Again there may be several nodes that fulfil these conditions; then the algorithm chooses randomly (line 7). We could go even deeper in selecting the best-fitting node, but there may always be more than one node with the same features, so we decided to choose randomly at that point.

After selecting the node that will be used, a second decision has to be made; a rough Python rendering of the scanning and selection logic is given below.
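The following sketch is our own simplified rendering of Figures 2-3: it only returns a candidate pair (p1, p2) instead of modifying a network, prefers nodes with larger parent sets, and breaks remaining ties randomly.

import random

def scan_and_select(parents, threshold=3, seed=0):
    # parents maps each node to the set of its parents.
    # Returns a pair (p1, p2) suitable for CfPD, or None if the network does not qualify.
    rng = random.Random(seed)
    candidates = sorted((v for v, ps in parents.items() if len(ps) >= threshold),
                        key=lambda v: len(parents[v]), reverse=True)
    for p1 in candidates:
        O = parents[p1]
        # Other nodes that share at least two of p1's parents.
        partners = [q for q in parents if q != p1 and len(parents[q] & O) >= 2]
        if partners:
            most = max(len(parents[q]) for q in partners)
            p2 = rng.choice([q for q in partners if len(parents[q]) == most])
            return p1, p2
    return None

# Toy network from Figure 4: p1 has parents o1..o4, p2 shares o3 and o4.
net = {"o1": set(), "o2": set(), "o3": set(), "o4": set(),
       "p1": {"o1", "o2", "o3", "o4"}, "p2": {"o3", "o4"}}
print(scan_and_select(net))   # ('p1', 'p2')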
Let us say we select p1 as the node whose parents are the to-be-divorced parents (set O). Then we have to choose a second node pj ≠ p1 with at least two parents that are in the set of the to-be-divorced parents: |Par(pj) ∩ O| ≥ 2. If there is none except p1, we decide not to split the set O. Otherwise we choose the node pj with the maximum number of parents, max(|Par(pj)|) over all pj ≠ p1. Thus at least two of the to-be-divorced parent nodes have a child, other than p1, in common. Of course it can occur that more than one node pj meets these expectations; in that case, the algorithm again selects randomly.

Adding: The third step is adding a divorcing node, which will be named x, and reordering the connections of the links. That means adding new directed edges into x and one outgoing edge from x. This is done in such a way that the to-be-divorced parents are split into two sets. Assume we chose p1 and p2 in the step above. Then Par(p1) = {o1, ..., om} = O is the set of the to-be-divorced parents and Par(p2) = O'. The intersection of O and O' identifies the splitting of the set O: we split O into two parts O1 and O2 such that O1 = {o1, ..., ok} = O \ (O ∩ O') and O2 = {ok+1, ..., om} = O ∩ O'. Figure 4 shows an example with m = 4.

Figure 4: A simple example of Child-friendly Parent Divorcing. (a) initial graph; (b) p1, getting O; (c) p2, getting O'; (d) inserting x and divorcing O, getting O1 and O2; (e) resulting graph.

Computing: By adding a new node to a BN, some changes must be made to the entries of the CPTs: the CPT of node x must be set up and the CPT of p1 must be adjusted. In the example of Figure 4a, p1 has 2^4 = 16 entries in its CPT; compared to Figure 4e, where the number of entries in the CPT of p1 is 2^3 = 8, this is twice as much. However, CfPD should not change any probabilities; more precisely, it should not change the joint probability distribution P of B. Referring to the small example in Figure 4, we compute the joint probability distribution of p1 in the resulting graph. Let A be the initial graph and B be the resulting graph after adding the new node. Hence the equation

P_A(p1, p2, o1, o2, o3, o4) = Σ_x P_B(p1, p2, x, o1, o2, o3, o4)   (1)

must hold. We can now use the chain rule (Neapolitan, 2003) to compute the joint probability distributions:

P_A(p1, p2, o1, o2, o3, o4) = P(p1 | o1, o2, o3, o4) · P(p2 | o3, o4) · P(o1) · P(o2) · P(o3) · P(o4)

Σ_x P_B(p1, p2, x, o1, o2, o3, o4) = Σ_x P(p1 | x, o1, o2) · P(x | o3, o4) · P(p2 | o3, o4) · P(o1) · P(o2) · P(o3) · P(o4)

Inserting these two expressions into (1) results in:

P(p1 | o1, o2, o3, o4) = Σ_x P(p1 | x, o1, o2) · P(x | o3, o4)
                       = P(p1 | x=T, o1, o2) · P(x=T | o3, o4) + P(p1 | x=F, o1, o2) · P(x=F | o3, o4)

This equation leads us to the following general formula:

P(p1 | o1, ..., om) = P(p1 | x=T, o1, ..., ok) · P(x=T | ok+1, ..., om) + P(p1 | x=F, o1, ..., ok) · P(x=F | ok+1, ..., om)   (2)

Additionally, we can fix the conditional probability of p1 with a little trick, by assuming a relationship. The motivation is, according to (Röhrbein et al., 2009), that the link between x and p1 reflects a kind of taxonomic relationship: variable x represents a subclass for p1, and therefore the probability P(p1 = T) is 1 whenever the newly inserted variable x takes the value true.
Thus

P(p1 = T | x = T, o1, ..., ok) = 1   (3)
P(p1 = F | x = T, o1, ..., ok) = 0   (4)

Using equation (3) in (2) leads to:

P(p1 | o1, ..., om) = 1 - [1 - P(p1 | x = F, o1, ..., ok)] · [1 - P(x = T | ok+1, ..., om)]   (5)

Equations (2) and (5) will subsequently be implemented as an optimization problem to ensure that the probability of occurrence of node p1 in the resulting graph is the same as in the initial graph.

Handling: If new knowledge about a scenario (or a BN, respectively) emerges, a new node and new links must be inserted. Thus it is essential for the algorithm to know which node is a divorcing node and to set the connections appropriately. An example is shown on the graph in Figure 4. Assume there is new knowledge about p1 and p2 expressed in a variable om+1. In the initial case, the new node is inserted and links are set to p1 and p2. However, if the new knowledge emerges after divorcing the parent nodes (i.e. after inserting x), the algorithm must identify the new knowledge om+1 as part of the to-be-divorced set and consequently set the links to x and p2.

Scenarios
There are several net topologies that need to be examined. As a result there are different scenarios of reordering and inserting new nodes, depending on the net topology. Figure 5 shows four different types of scenarios (topologies).

Figure 5: Different topologies that can occur in a BN: (a) not efficient, (b) classical, (c) total connection, (d) more iterations. The left side shows the initial configuration, the right side the result after the CfPD.

a) In this case it would not be efficient to introduce a new node x, because the size of the set of parents that can be divorced is 1 (i.e. ok+1 is the only parent node of the set of the to-be-divorced parents that has two child nodes in common). The insertion of a new node x does not reduce the size of the CPT of node p1, so a divorce is redundant. The algorithm only divides a set with at least two considered nodes.

b) This case describes the classical case of CfPD. The nodes p1 and p2 have a set of parent nodes in common: ok+1, ..., om. The algorithm selects the node with the highest number of incoming edges, which is the node with the most parents; if the numbers are equal, it chooses randomly. The node x is inserted and divides the two sets of parent nodes o1, ..., ok and ok+1, ..., om.

c) Here we have a total connection, meaning that every node o1, ..., om is connected to both p1 and p2. The algorithm inserts a node x that divides the nodes oi from one another. In this case the size of the CPT is not minimized, but the sum of entries is reduced: initially the sum of all entries is m + 2^m + 2^m, and afterwards it is m + 2^m + 2.

d) This case shows a scenario where a node p1 has a number of parent nodes in common with more than one other node. In this case the algorithm can be processed twice, and hence more than one dividing node is inserted.
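As a quick sanity check of equations (2) and (5), the short sketch below (ours, with made-up probabilities) marginalizes the new node x out for a single configuration of the parents and confirms that both expressions agree once constraint (3) is imposed.

# Numerical check of equations (2) and (5) for one fixed parent configuration.
p_x_true = 0.7          # P(x = T | ok+1, ..., om), made-up value
p_p1_given_xF = 0.4     # P(p1 = T | x = F, o1, ..., ok), made-up value
p_p1_given_xT = 1.0     # P(p1 = T | x = T, o1, ..., ok), fixed to 1 by equation (3)

# Equation (2): marginalize x out of the divorced structure.
via_eq2 = p_p1_given_xT * p_x_true + p_p1_given_xF * (1.0 - p_x_true)
# Equation (5): closed form obtained by plugging (3) into (2).
via_eq5 = 1.0 - (1.0 - p_p1_given_xF) * (1.0 - p_x_true)

assert abs(via_eq2 - via_eq5) < 1e-12
print(via_eq2)          # 0.82 for these values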
Value computation as an optimization problem
In this section, the computation of the probabilities of the changed and the newly inserted nodes is formulated as an optimization problem. The aim is to minimize the error between the absolute probability of occurrence derived from P(p1 = T | o1, ..., om) in the initial graph (P_before) and the one derived from P(p1 = T | x, o1, ..., ok) in the net resulting from the CfPD (P_after).

Initial values: Let us define P_before first. Assume that in the initial BN we have chosen p1 in step 2 of the algorithm. Then m is the number of parents of p1 and Par(p1) = {o1, ..., om}. P_before represents the absolute probability of the event p1 = T:

P_before = Σ_{o1} ... Σ_{om} P(o1) · ... · P(om) · P(p1 = T | o1, ..., om)   (6)

Now let us define P_after. Assume we introduced a new node x in the resulting BN. This node splits the original set of parents into {o1, ..., ok} and {ok+1, ..., om}. P_after can then be calculated as follows:

P_after = Σ_{o1} ... Σ_{ok} Σ_x P(o1) · ... · P(ok) · P(x) · P(p1 = T | o1, ..., ok, x)   (7)

The optimization problem: Now we can formulate the optimization problem as a cost-minimizing function, where the cost is the difference between P_before and P_after. There are two sets of decision variables: the entries P(p1 = T | o1, ..., ok, x) and the entries P(x = T | ok+1, ..., om). The optimization problem for the first case, where no correlation between p1 and x is assumed (see equation (2) above), is as follows:

1: minimize  P_before - P_after
subject to
2: P(x = F | ok+1, ..., om) = 1 - P(x = T | ok+1, ..., om)   for all ok+1, ..., om
3: P_before - P_after ≥ 0
4: P(p1 = T | o1, ..., om) ≤ Σ_x P(p1 = T | o1, ..., ok, x) · P(x | ok+1, ..., om)   for all o1, ..., om
5: 0 ≤ P(x = T | ok+1, ..., om) ≤ 1   for all ok+1, ..., om
6: 0 ≤ P(p1 = T | o1, ..., ok, x) ≤ 1   for all o1, ..., ok, x
(8)

Taking into account that there is a correlation between p1 and x, we have to replace line 6 of the optimization problem above, according to (3), by:

7: 0 ≤ P(p1 = T | o1, ..., ok, x = F) ≤ 1   for all o1, ..., ok
8: P(p1 = T | o1, ..., ok, x = T) = 1   for all o1, ..., ok
(9)

Line 1 represents the objective function: the aim is to minimize the difference between P_before and P_after, so we have a minimization problem. Lines 2-8 represent the constraints that must be fulfilled. Line 2 assigns the values of the probability P(x = F | ok+1, ..., om) for all combinations of {ok+1, ..., om}. Line 3 ensures that the new probability P_after is not larger than P_before; in case of an error, i.e. P_before ≠ P_after, the absolute probability of occurrence of p1 after the CfPD is smaller than before, or, in other words, the occurrence is not overestimated. This constraint has been chosen because intuitively it seems more reliable to underestimate the occurrence of a variable than to underestimate its non-occurrence. Sometimes, however, it seems more reliable to overestimate the occurrence of a variable, for example when a variable describes the failure of a part of a system. In that case, the user has to decide whether to use P_before or P_after as minuend or subtrahend in line 3, and accordingly whether to use ≤ for underestimation or ≥ otherwise (see line 4). The fourth line constrains the reconstruction of P(p1 | o1, ..., om) from the entries P(p1 | o1, ..., ok, x) for every parent configuration. The last two lines ensure the non-negativity of the decision variables and that their values are not larger than 1. Assuming the correlation between x and p1, the constraint for P(p1 | o1, ..., ok, x) is fixed by P(p1 = T | o1, ..., ok, x = T) = 1 according to (3); hence line 6 of (8) is replaced by lines 7 and 8 of (9).
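One self-contained way to realize this computation is to treat the free CPT entries of p1 and x as a bounded parameter vector and minimize the squared gap between P_before and P_after, penalizing solutions where P_after exceeds P_before. The sketch below only illustrates that idea for the four-parent example of Figure 4 (the paper itself solved the problem with an LP/MIP solver, FICO Xpress); the parent priors and the original CPT of p1 are invented.

import itertools
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
prior = 0.5                                # P(oi = T) for all four parents (toy choice)
cpt_p1 = rng.uniform(size=(2, 2, 2, 2))    # original P(p1 = T | o1, o2, o3, o4), invented

def weight(cfg):
    # Prior probability of one joint parent configuration.
    return float(np.prod([prior if v else 1 - prior for v in cfg]))

def p_before():
    return sum(weight(cfg) * cpt_p1[cfg] for cfg in itertools.product((0, 1), repeat=4))

def p_after(theta):
    # theta[:4] -> P(p1 = T | o1, o2, x = F), indexed by (o1, o2)
    # theta[4:] -> P(x = T | o3, o4),         indexed by (o3, o4)
    q_p1, q_x = theta[:4].reshape(2, 2), theta[4:].reshape(2, 2)
    total = 0.0
    for o1, o2, o3, o4 in itertools.product((0, 1), repeat=4):
        px = q_x[o3, o4]
        # Equation (5): P(p1 = T | x = T, ...) is fixed to 1 by constraint (3).
        total += weight((o1, o2, o3, o4)) * (px + (1 - px) * q_p1[o1, o2])
    return total

target = p_before()

def objective(theta):
    gap = target - p_after(theta)           # want gap -> 0 with gap >= 0 (line 3 of (8))
    return gap ** 2 + 100.0 * min(gap, 0.0) ** 2

res = minimize(objective, x0=np.full(8, 0.5), bounds=[(0.0, 1.0)] * 8, method="L-BFGS-B")
print(round(float(target), 4), round(float(p_after(res.x)), 4))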
We took the BN B = (G, P) from Figure 4a with its JPD P and P_before = P(p1 | o1, o2) = 0.6202 and performed the CfPD. This yields the network B' = (G', P') of Figure 4e. We calculated the probability tables of p1 and x using the optimization approach on the one hand and the EM algorithm with 10 samples on the other. We can then calculate the values P_opt(p1 | o1, o2) and P_EM^10(p1 | o1, o2), as well as the JPD P'_opt of net B' for the optimization approach and the JPD P'_EM^10 for the EM algorithm. Using the Bayes Net Toolbox for Matlab we obtain the following results: P_opt(p1 | o1, o2) = 0.6202, P_EM^10(p1 | o1, o2) = 0.7554, P'_opt = 0.9915 and P'_EM^10 = 0.8620. As we can see, the optimization approach yields a result that does not change the probability of occurrence of p1, and the JPD shows only a very small deviation, whereas the result of the EM algorithm with sample size 10 deviates by more than 13% in the absolute probability of p1, and the JPD of the complete net deviates by almost 14%. We can also obtain good results with the EM algorithm by increasing the sample size: e.g. we obtain P_EM^50(p1 | o1, o2) = 0.6342 and P'_EM^50 = 0.9534 for a database with 50 samples. Although the sample size could be increased even further, the JPD will not get better than with the optimization approach.

Training Data
We created five different, randomly connected BNs with 30 nodes and 250 edges. Afterwards the CfPD algorithm was executed on each of these networks 10 times in a row. Table 1 shows the total number of CPT entries for the first iterations, and Figure 6 shows the decreasing trend of the number of entries over the iterations.

Table 1: Number of CPT entries of the five random BNs. The first row (start) represents the total number of entries before executing the algorithm.

iteration   BNet 1    BNet 2     BNet 3    BNet 4    BNet 5
0 (start)   339243    1138913    311404    829771    285098
1           210347     93409     181612    313803    156202
2            79405     78081     116844    184907    125514
3            63597     62273      85164    120139     92780
4            47597     54625      69836     88011     76972
5            31597     46451      53486     55755     60654

Already in the first iteration we can observe a significant reduction in size. This is expected, because the node with the highest number of parents, and hence with the largest CPT, is selected and reduced. For example, the number of parents of the chosen node in BNet 5 is 19, so this node has 2^19 = 524288 CPT entries. After executing CfPD on this net, the chosen node has 8 parents and the newly inserted node has 12 parents; hence the total number is reduced by 2^19 - (2^8 + 2^12) = 519936 entries.

Figure 6: Processing the CfPD algorithm on 5 randomly connected BNs with 30 nodes and 250 links.

Figure 6 shows the trend over 10 iterations. For example, BNet 5 then has 40 nodes and only 14666 CPT entries in the resulting graph, which is 1.54% of the initial graph. Taking BNet 5 as an example again, the algorithm can be processed 84 times on this network, which leads to a network with 114 nodes and 760 (0.07%) CPT entries. Thus the size of the reduction decreases with each iteration. The number of edges in a BN is restricted by n(n-1)/2.
On the one hand, a highly connected network with many edges provides the opportunity to execute CfPD several times and leads to a large reduction of the number of CPT entries. On the other hand, if we consider a network that is less connected (hence has only few edges), we can only execute the algorithm a limited number of times, and the reduction in the number of CPT entries will also be smaller. Figure 7 shows the difference over the first five iterations between BNs with 30 nodes and 100 edges on the one side and 350 edges on the other. The improvement in the total number of CPT entries is therefore very high at the beginning, but flattens as the number of iterations grows. If we look at BNet 2 from Figure 6, the number of CPT entries after 14 iterations is 8255, compared to the start, when the network had 412167 entries, which is an improvement of over 98%. Therefore we have to weigh the benefit of reducing the number of CPT entries against the cost of adding a new node to the net.

Figure 7: Decreasing trend of the number of CPT entries after the first 5 iterations for two different random BNs. Left: BN with 30 nodes and 100 edges. Right: BN with 30 nodes and 350 edges.

Cognition
We have shown that grouping nodes with a common feature by introducing a new node can reduce the number of CPT entries significantly. We also pointed out that a high number of iterations will reduce the number of entries continuously, but the improvement decreases with each processing step. Now let us make a little extension to the learning scheme of CfPD by choosing an appropriate node, or more precisely a convenient level, manually. Assume we have a learning scheme that identifies objects by defining the actual place as well as the shape of the object. A simple example can be seen in Figure 8. In this example we have three levels of identifiers: place, shape and object (see Figure 8a). There are four example places in level one (kitchen, living room, garden, bathroom), three different shapes in level two (square, round, triangular) and five example objects in level three (table, cup, pot, ball, towel). By obtaining knowledge about the place and the shape, the observer, also called the agent, can identify a unique object with a suitable probability. CfPD may help us to improve these levels. Figure 8b is the resulting graph after executing CfPD on the second level, which means that the parents of the shape nodes, i.e. level one, become divided.

Figure 8: Example of a cognition scheme. (a) A cognition scheme for identifying objects according to their location and shape; it is a 3-level scheme with 12 nodes, 27 links and 92 CPT entries. (b) Graph after processing CfPD on the second level, named shapes; the result is a cognition scheme with four levels, 13 nodes, 25 links and 80 CPT entries.

Assume the following scenario: the agent recognizes that he is in the kitchen and distinguishes a square-shaped object. Thus we have obtained evidence for the first and the second level. The agent can then assume that this object is a table with high probability and, e.g., a cup with very low probability. CfPD can be used to improve a level of this network: if the level shapes includes forms such as triangular, squared and round, the algorithm can group the first two together by creating a new node, and hence a new level, called for example angled.
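The node, link and entry counts quoted in the Figure 8 caption can be reproduced with a few lines of counting code. The wiring below is our reconstruction of the figure (fully connected between adjacent levels, with the new angled node grouping square and triangular); it happens to reproduce the 12/27/92 and 13/25/80 figures exactly, but it remains an illustration rather than the authors' model.

def stats(parents):
    # (number of nodes, number of links, total CPT entries) for a net given as {node: parent_list}.
    return (len(parents),
            sum(len(ps) for ps in parents.values()),
            sum(2 ** len(ps) for ps in parents.values()))

places = ["kitchen", "living_room", "garden", "bathroom"]
shapes = ["square", "round", "triangular"]
objects = ["table", "cup", "pot", "ball", "towel"]

# Figure 8a: every place feeds every shape, every shape feeds every object.
before = {p: [] for p in places}
before.update({s: list(places) for s in shapes})
before.update({o: list(shapes) for o in objects})

# Figure 8b: a new 'angled' node groups square and triangular under the places.
after = {p: [] for p in places}
after["angled"] = list(places)
after.update({"square": ["angled"], "triangular": ["angled"], "round": list(places)})
after.update({o: list(shapes) for o in objects})

print(stats(before))  # (12, 27, 92), matching Figure 8a
print(stats(after))   # (13, 25, 80), matching Figure 8b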
Summary
By adding a new node to a BN and updating the links in the net, new probabilities must be computed and existing probabilities must be changed so that the probability of occurrence of the observed node does not change. We therefore formulated the problem as an optimization approach and minimized the cost between the probability of occurrence of the observed node before and after the divorce. Other objectives, such as the Kullback-Leibler divergence, are also possible and could be used instead of the probability of occurrence; this will require some testing and might be a subject for future work. We tested the optimization problem on the classical approach applied in this paper, using FICO Xpress IVE 7.6 with random probabilities. The computed probabilities ensured the same probability of occurrence for the observed node in the initial network as well as in the resulting net.

We showed that the size of the CPTs can be reduced in every step by repeatedly applying the algorithm to an existing BN. Furthermore, the improvement during the first iterations is considerable, and particularly in the case of highly connected networks a significant reduction in the total number of CPT entries can be achieved. By reducing the sizes of the CPTs, the efficiency increases distinctly.

We also provided an example of how the CfPD algorithm can be used for the cognition of objects. Here the algorithm is processed on a level of a BN instead of searching for a node with a large CPT. As a result the number of CPT entries is again reduced, and a new level is created that partitions the chosen level more precisely.

References
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
Friedman, N. (n.d.). Learning belief networks in the presence of missing values and hidden variables.
Jensen, F. V. (2007). Bayesian Networks and Decision Graphs. Springer.
Neapolitan, R. E. (2003). Learning Bayesian Networks. Prentice Hall.
Olesen, K. G., Kjaerluff, U., Jensen, F., & Jensen, F. V. (1989). A MUNIN network for the median nerve - a case study on loops. Applied Artificial Intelligence.
Pearl, J. (1986). Fusion, propagation, and structuring in belief networks. Artificial Intelligence.
Röhrbein, F., Eggert, J., & Körner, E. (2009). Child-friendly divorcing: Incremental hierarchy learning in Bayesian networks. In IJCNN 2009.
", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "pWF9dT7zVmm", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "http://ceur-ws.org/Vol-1419/paper0020.pdf", "forum_link": "https://openreview.net/forum?id=pWF9dT7zVmm", "arxiv_id": null, "doi": null }
{ "title": "Structure Learning in Bayesian Networks with Parent Divorcing", "authors": [ "Ulrich von Waldow", "Florian Röhrbein" ], "abstract": null, "keywords": [], "raw_extracted_content": "Structure Learning in Bayesian Networks with Parent Divorcing\nUlrich von Waldow ([email protected])\nTechnische Universit ¨at M ¨unchen, Arcisstr. 21\n80333 Munich, Germany\nFlorian R ¨ohrbein (fl[email protected])\nTechnische Universit ¨at M ¨unchen, Arcisstr. 21\n80333 Munich, Germany\nAbstract\nBayesian networks (BNs) are an essential tool for the model-\ning of cognitive processes. They represent probabilistic knowl-\nedge in an intuitive way and allow to draw inferences based\non current evidence and built-in hypotheses. In this paper, a\nstructure learning scheme for BNs will be examined that is\nbased on so-called Child-friendly Parent Divorcing (CfPD).\nThis algorithm groups together nodes with similar properties\nby adding a new node to the existing network. The updating\nof all affected probabilities is formulated as an optimization\nproblem. The resulting procedure reduces the size of the con-\nditional probability tables (CPT) significantly and hence im-\nproves the efficiency of the network, making it suitable for\nlarger networks typically encountered in cognitive modelling.\nKeywords: Bayesian network; learning; inference; adding\nnodes; probability computation\nIntroduction\nA BN is a directed acyclic graph (DAG) with a probability\ndistribution P, which is expressed as CPT on each node or\non each variable, respectively. We assume binary nodes, thus\na value can be true orf alse . The CPTs have a size of 2k,\nwhere kis the number of parents of a node. Let pbe a node\nof the BN and let Obe the set of parents of p. The aim is\nto reduce the size of the CPT of node p, by splitting the set\nof parent nodes Ointo smaller sets O1;O2withO1[O2=O.\nTherefore, we introduce a new node, x, as a new parent of p\nand reset the connections of the network. The conclusion is a\nsmaller CPT of node p. Of course there are several rules and\nnet topologies, that need to be taken into account. In addition,\nthe probabilities of node pchange and new probabilities for x\nmust be computed.\nThis paper describes the procedure of Child-friendly Parent\nDivorcing (R ¨ohrbein, Eggert, & K ¨orner, 2009) and the behav-\nior as well as the variation of the considered BN. The findings\nare explained and illustrated in a small example. The effect\nof the reducing size of the total number of entries in the CPTs\nis shown in a simulation, where the CfPD is processed on five\nrandom BNs. Furthermore, a cognition scheme is shown, that\nmodels a BN as scenario for recognizing objects by providing\nevidence about their location and shape. Therefore, the nodes\nof the network are arranged in different levels.\nChild-Friendly Parent Divorcing\nStandard Parent Divorcing\nThe underlying technique for the CfPD learning algorithm is\nbased on a design tool that is explained in (Olesen, Kjaerluff,Jensen, & Jensen, 1989). In the paper Parent Divorcing is de-\nfined as a technique to design more efficient Bayesian mod-\nels. By introducing a new node as a divisive node, the number\nof incoming edges of a selected node is reduced. This influ-\nences the number of entries of the CPT of the selected node,\ntherefore, the computational efficiency is increased. Figure 1\nshows a simple example of standard Parent Divorcing. On the\nleft side, node phas a CPT with size 2m. 
After inserting node\nx, the node pin the resulting graph on the right-hand side has\na CPT of size 2k+1<2m(Neapolitan, 2003).\nFigure 1: Simple Example of Parent Divorcing according to\n(Olesen et al., 1989). The left side represents the initial graph. In-\nserting xas an intermediate node splits the parent nodes into two sets\nof nodes.\nModification\nIn general the following statement holds for a BN: Let kbe\nthe number of parents of a node x, then the size of the CPT of\nxis 2k. Moreover let nbe the total number of nodes xiwith\ni2n, then if every node xidoes not have more then kparents,\nthe total network needs less then n\u00012kentries.\nThe Parent Divorcing mentioned above is now modified in\nsuch a way, that the resulting sets of parent nodes are not ar-\nbitrary, but similar in some way, or, that they have a feature\nin common respectively. Hence the aim is to find a type of\nsimilarity between those nodes that should be divorced into a\nset of groups (henceforth called the to-be-divorced parents ).\nHere the structure, or rather the connectivity come into con-\nsideration, precisely the number of common child nodes.\nLetD= (V;E)be a DAG with Vas the set of nodes and E\nas the set of directed edges. The CfPD aims to find a set of\nnodes O=fo1:::omgwithO\u0012Vthat have at least one child\nnode p12Vwith p1=2Oin common. These nodes should\nthen be examined if there is another node p22Vandp2=2O\nthat has a set of parents QwithQ\u001aO.\nIn summary: p1has the set of parents O=fo1;:::;omgand\np2has the set of parents Q\u001aO. Therefore Ois the set of the\nto-be-divorced parents. Next, a node xis inserted in V, which\ndivides the to-be-divorced parents into two sets, which are\nO1=fo1;:::;omgnQandO2=Q. Furthermore, Par(p1) =\n146\nO1\\xandPar(p2) =O2hold. Figure 4 shows this example\nwithO1=fo1;o2gandO2=fo3;o4g.\nFunctionality and Rules\nThe section above provides an introduction and a small exam-\nple of the CfPD. This section will explain the algorithm in de-\ntail. The algorithm is a combination of structure and parame-\nter learning. In the following, let D= (V;E)be a DAG with\nVas the set of nodes or random variables, respectively with\njVj=nandEas the set of directed edges with jEj\u0014n\u0001(n\u00001)\n2.\nIn addition let Pbe a joint probability distribution of the vari-\nables in V, then B= (D;P)is a BN (Neapolitan, 2003). 
Sum-\nming up, the algorithm can be grouped into five main steps:\n1.Scanning the network, to find out if CfPD could be exe-\ncuted.\n2.Selecting a node that meets the expectations.\n3.Adding a new node and updating the connections, which\nmeans adding and deleting some links.\n4.Computing the parameters of the new node and the modi-\nfied nodes.\n5.Handling of already learned nodes.\nThese five steps are explained in detail in the following sec-\ntion.\nDoCihldFriendlyParentDivorcing(bnet)\n0==n:=number of nodes of Bayesian network bnet\n1:if(scanning (bnet) =true)f\n2: forall i2nf\n3: p1:=max(Par(i))\n4: if(8j2Child (Par(p1))np1:jPar(j)\\Par(p1)j<2)f\n5: i:=inp1!return to line 3.\n6:g\n7: elseif (8j2Child (Par(p1)):jmax(Par(j))j\u00152)\n8: p2:=rand(max(Par(j))\n9:g\n10: elsefp2:=max(Par(j))g\n11:g\n12: O:=Par(p1)\n13: O0:=Par(p2)\n14: O2:=O\\O0\n15: O1:=OnO2\n16: delete (edges ):O2!p1\n17: insert (x):O2!x^x!p1\n18: computing (P(x))^computing (P(p1))\n19:g\n20:elsefChild-friendly Parent Divorcing is not possible g\nFigure 2: Pseudocode of Child-firendly Parent Divorcing.\nscanning(bnet)\n1:forall (i2n)f\n2:if(9i:Par(i)\u00153^9j2Child (Par(i))^j6=i:\n3 Par(i)\\Par(j)>=2)f\n3: return true\n4:g\n5:elsefreturn f alseg\n6:g\nFigure 3: Pseudocode of the scanning step.Scanning: This step scans the graph Dto check if it satis-\nfies the expectations so that CfPD could be used. Figure 3\nshows the pseudocode of this step. The following expecta-\ntions must be met: (1) There must be a node p12Vthat has\nat least three parent nodes (line 2), otherwise a divorce will\nbe inefficient. Assume Par(p1) =O\u0012VwithjOj=mand\nm\u00153. (2) There must be a set of nodes O0\u0012OwithjO0j=k\nandk\u00152 that have another child node p22Vin common.\n(3) If p1is the only node that the to-be-divorced parents have\nin common, the algorithm should not be processed because\nthe provided feature of having another child node in common\nis not satisfied. If one of these points is not satisfied, CfPD is\nnot possible or should not be executed (for point 3).\nSelecting If the scanning step returns true, there may be\nseveral nodes that meet the requirements. The pseudocode\nin figure 3 handles these conditions. In regards to step 1. of\nthe scanning step, we are looking for a node whose number\nof parents are above a certain threshold t\u00153. If there is more\nthan one nodefp1;:::;pkgthat comes into question because\ntheir number of parents is higher than or equal to the threshold\n(thus Par(pi)\u0015twith i21;:::;k) we should select the node\nwith the maximum number of parents, hence maxfjPar(pi)jg\n(line 3). This is because the node has the largest CPT, which\nwe are trying to reduce. Probably there are two or more nodes\nwith the same, maximum number of parent nodes, we will\ncall this set M, so8pm2M:jPar(pm)j=max(jPar(pi)j). In\nthat case the algorithm should choose that node which meets\nthe requirements that the intersection of the parent nodes of\nanother node pj6=pmand the parent nodes of pmare max-\nimized, hence maxfjPar(pm)\\Par(pj)jg, where pm2M.\nAgain there may be several nodes that fullfil these conditions.\nThen the algorithm should choose randomly (line 7). We also\ncould go almost deeper in selecting the best fitting node, but\nthere always may be more than one node with the same fea-\ntures. So we decided to choose randomly at that point.\nAfter selecting the node that will be used, a second deci-\nsion has to be made. 
Lets say we select p1as the node\nwhose parents are the to-be-divorced parents (set O). Then\nwe have to choose a second node pj6=p1with at least two\nparents, that are in the set of the to-be-divorced parents:\nPar(pj)\\O\u00152. If there are none except p1, we decide not\nto split the set O. Otherwise we choose the node pj, such that\n8pj6=p1:max(jPar(pj)j). Thus at least two of the sets of\nthe to-be-divorced parent nodes have a child, excluding p1,\nin common. Of course it can occur that not only one node\npjmeets these expectations. If this is the case, the algorithm\nshould randomly select again.\nAdding The third step is adding a divorcing node, which\nwill be named x, and reordering the connections of the links.\nThat means adding new directed edges to xand one outcom-\ning edge from x. This will be executed in such a way that\nthe to-be-divorced parents are split into two sets. Assume\nwe chose p1and p2in the step above. Then Par(p1) =\nfo1;:::;omg=Ois the set of the to-be-divorced parents\n147\nand Par(p2) =O0. The intersection of OandO0identi-\nfies the splitting of the set O. We split the set Oin two\npartsO1andO2, such that the following statement holds:\nO1=fo1;:::;okg2O=OnO\\O0andO2=fok+1;:::;omg=\nO\\O0. Figure 4 shows an example with m=4.\n(a) initial graph\n (b)p1)getting O\n(c)p2)getting O0\n(d) Inserting xand\ndivorce O)get-\ntingO1andO2\n(e) resulting graph\nFigure 4: A simple example of Child-friendly Parent Divorcing\nComputing By adding a new node to a BN, some changes\nmust be made to the entries of the CPTs. The CPT of node\nxmust be set up and the CPT of p1must be adjusted. If\nyou look to the example in figure 4a, p1has 24=16 entries\nin its CPT. Compared to 4e, where the number of entries in\nthe CPT of p1is 23=8, this is twice as much. However\nCfPD should not change any probabilities, more precisely it\nshould not change the joint probability distribution PofB.\nReferring to the small example in figure 4, we compute the\njoint probability distribution of p1in the resulting graph. Let\nAbe the initial graph and Bbe the resulting graph after adding\na new node. Hence the equation\nPA(p1;p2;o1;o2;o3;o4) =PB(p1;p2;x;o1;o2;o3;o4)(1)\nmust hold. We can now use the chain rule (Neapolitan, 2003)\nto compute the joint probability distribution:\nPA(p1;p2;o1;o2;o3;o4) =\n=P(p1jo1;o2;o3;o4)\u0001\n\u0001P(p2jo3;o4)\u0001P(o1)\u0001P(o2)\u0001P(o3)\u0001P(o4)\nPB(p1;p2;x;o1;o2;o3;o4) =\n=å\nxP(p1jx;o1;o2)\u0001P(xjo3;o4)\u0001\n\u0001P(p2jo3;o4)\u0001P(o1)\u0001P(o2)\u0001P(o3)\u0001P(o4)\nNow inserting these two equations into (1) results in:\nP(p1jo1;o2;o3;o4) =å\nxP(p1jx;o1;o2)\u0001P(xjo3;o4)\n=P(p1jx=T;o1;o2)\u0001P(x=Tjo3;o4)+\n+P(p1jx=F;o1;o2)\u0001P(x=Fjo3;o4)This equation lead us to the following general formula:\nP(p1jo1;:::; om) =\n=P(p1jx=T;o1;:::; ok)\u0001P(x=Tjok+1;:::; om)+\n+P(p1jx=F;o1;:::ok)\u0001P(x=Fjok+1;:::; om)(2)\nAdditionally, we can fix the conditional probability of p1with\na little trick, by assuming a relationship. The motivation is,\naccording to (R ¨ohrbein et al., 2009), that the link between x\nandp1reflects a kind of taxonomic relationship. Variable x\nrepresents a subclass for p1and therefore the property P(p1=\nT)is 1, if the value of the newly-inserted variable xistrue\nholds. 
Thus\nP(p1=Tjx=T;o1;:::;ok) =1 (3)\nP(p1=Fjx=T;o1;:::;ok) =0 (4)\nUsing the equation (3) in (2) leads to:\nP(p1jo1;:::;om) =1\u0000[1\u0000P(p1jx=F;o1;:::;ok)]\u0001\n\u0001[1\u0000P(x=Tjok+1;:::;om)](5)\nThe equations (2) and (5) will subsequently be implemented\nas an optimization problem to ensure that the probability of\noccurrence of node p1in the resulting graph is the same as in\nthe initial graph.\nHandling If new knowledge of a scenario or of a BN, re-\nspectively emerges, a new node and new links must be in-\nserted. Thus it is essential for the algorithm to specify which\nnode is a divorcing node and to set the connections appropri-\natly. An example is shown on the graph in figure 4. Assume\nthere is new knowledge about p1andp2expressed in a vari-\nable om+1. In the initial case, the new node is inserted and\nlinks are set to p1andp2. However, if the new knowledge\nemerges after divorcing the parent nodes (i.e. after inserting\nx), the algorithm must identify the new knowledge om+1as\npart of the to-be-divorced set and consequently set the links\ntoxandp2.\nScenarios\nThere are several net topologies that need to be examined.\nAs a result there are different scenarios of reordering and in-\nserting new nodes, depending on the net topology. Figure 5\nshows four different types of scenarios (topologies).\na) In this case, it would not be efficient to introduce a new\nnode, x, because the size of the set of the parents that can\nbe divorced is 1 (i.e. ok+1is the only parent node of the set\nof the to-be-divorced parents that has two children nodes in\ncommon). The insertion of a new node xdoes not reduce\nthe size of the CPT of node p1. Thus a divorce is redun-\ndant. The algorithm only divides a set with at least two\nconsidered nodes.\nb) This case describes the classical case of CfPD. The\nnodes p1and p2have a set of parent nodes in common:\n148\n(a) not efficient\n(b) classical\n(c) total connection\n(d) more Iterations\nFigure 5: Different topologies, that can occur in a BN. The left side\nshows the intitial configuration, the right side shows the result after\ntheCfPD\nok+1;:::om. In this case the algorithm selects the node with\nthe highest number of incoming edges, which is the node\nwith the most parents. If the number is equal, it will choose\nrandomly. The node xis inserted and divides the two sets\nof parent nodes: o1;:::okandok+1;:::om.\nc) Here we have a total connection. This means that every\nnode o1;:::omis connected to the nodes p1and p2. The\nalgorithm inserts a node, x, that divides each node oifrom\nanother. In this case, the size of the CPT is not minimized,\nbut the sum of entries is reduced. Initially the sum of all\nentries before is m+2m+2mand afterwards we have a sum\nofm+2m+2 afterwards.\nd) This case shows a scenario where a node p1has a number\nof parent nodes in common with more than only one other\nnode. In this case the algorithm can be processed two times\nand hence inserts more than one dividing node is inserted.\nValue computation as an optimization problem\nIn this section, the computation of the probabilities of the\nchanging and the newly-inserted nodes are formulated as\nan optimization problem. 
The aim is to minimize the er-\nror between the absolute probability of the occurrence of\nP(p1=Tjo1;:::; om)of the initial graph ( Pbe f ore ) and the ab-\nsolute probability of the occurrence of P(p1=Tjx;o1;:::ok)\nin the net resulting from the CfPD ( Pa fter).\nInitial values Let us define Pbe f ore first: Assume in the ini-\ntial BN we have chosen p1in step 2 of the algorithm. Then\nmis the number of parents of p1andPar(p1) =fo1;:::omg.Pbe f ore represents the absolute probability of the event, that\nP(p1=T):\nPbe f ore =å\no1:::å\nomP(o1)\u0001:::\u0001P(om)\u0001P(p1=Tjo1;:::;om)(6)\nNow let us define Pa fter: Assume we introduced a new node x\nin the resulting BN. This node splits the original set of parents\ninfo1;:::okgandfok+1;:::omg.Pa fter can then be calculated\nas follows:\nPa fter=å\no1:::å\nokå\nxP(o1)\u0001:::\n\u0001P(ok)\u0001P(x)\u0001P(p1=Tjo1;:::ok;x)(7)\nThe optimization problem Now we can formulate the op-\ntimization problem as a cost-minimizing function, where\nthe cost is the difference between Pbe f ore and Pa fter.\nThere are two sets of decision variables: å\no1:::å\nokå\nxP(p1=\nTjo1;:::; ok;x)andå\nok+1:::å\nomP(x=Tjok+1;:::om). The op-\ntimization problem for the first case, where no correlation\nbetween p1andxis assumed (see equation (2) above) is as\nfollows:\n1: minimize Pbe f ore\u0000Pa fter\nsubject to\n2:å\nok+1:::å\nom[P(x=Fjok+1;:::; om) =\n=1\u0000P(x=Tjok+1;:::; om)]\n3:Pbe f ore\u0000Pa fter\u00150\n4:å\no1:::å\nom[P(p1=Tjo1;:::; om)\u0014\n\u0014å\nxP(p1=Tjo1;:::; ok;x)\u0001P(xjok+1;:::; om)]\n5: 0\u0014å\nok+1:::å\nomP(x=Tjok+1;:::; om)\u00141\n6: 0\u0014å\no1:::å\nokå\nxP(p1=Tjo1;:::; ok;x)\u00141(8)\nTaking into account that there is a correlation between p1and\nxwe have to replace line 6 of the optimization problem above\naccording to (3) by:\n7: 0\u0014å\no1:::å\nokP(p1=Tjo1;:::; ok;x=F)\u00141\n8:å\no1:::å\nokP(p1=Tjo1;:::; ok;x=T) =1(9)\nLine 1 represents the objective function. The aim is to\nminimize the difference between Pbe f ore andPa fter, thus we\nhave a minimization problem. Lines 2-8 represent the con-\nstraints that must be fulfilled. Line 2 assigns the values for\nthe probability P(x=Fjok+1;:::; om)for all combinations\noffok+1;:::; omg. In line 3 we assure that the new prob-\nability Pbe f ore is not larger than the probability Pa fter. In\ncase of an error, i.e. Pbe f ore6=Pa fter, the absolute probabil-\nity of the occurrence of p1after the CfPD is less than be-\nfore or in other words the occurrence is not overestimated.\nThis constraint has been chosen because intutitively it seems\n149\nmore reliable to underestimate the occurrence of a variable\nthan predicting an underestimation of a non-occurrence. Al-\nthough, sometimes it seems more reliable to overestimate the\noccurrence of a variable at the time when e.g. a variable de-\nscribes the failure of part of a system for example. In that\ncase, the user has to differentiate and to determine whether\nto use Pbe f ore or rather Pa fter as minuend or subtrahend in\nline 3 and accordingly to use the \u0014for underestimation or\nthe\u0015otherwise (see line 4). The fourth line assures that\nthe probabilities, precisely the JPD of P(p1jo1;:::; ok;x)is\nnot larger than before. The last two lines only ensure the\nnon-negativity of the decision variables, and that their val-\nues are not larger than 1. Assuming the correlation between\nxand p1, the constraint for P(p1jo1;:::; ok;x)is fixed by\nP(p1=Tjo1;:::; ok;x=T) =1 according to (3). 
Hence line\n6 of (8) is replaced by line 7 and 8 of (9). We took the\nBNB= (G;P)from figure 4a with its JPD PandPbe f ore =\nP(p1jo1;o2) =0:6202 and performed the CfPD. This yields\nto the network B0= (G0;P0)in 4e. We calculated the JPT’s of\np1andxusing the optimization approach on the one side and\nthe EM-algorithm with 10 samples on the other. Now we can\ncalculate the values for Popt(p1jo1;o2)andP10\nEM(p1jo1;o2)as\nwell as the JPD P0\noptof net B0for the optimization approach\nand the JPD P010\nEMfor the EM-algorithm respectively. Using\ntheBayes Net Toolbox for Matlab we receive the following\nresults: Popt(p1jo1;o2) =0:6202, P10\nEM(p1jo1;o2) =0:7554,\nP0\nopt=0:9915 and P010\nEM=0:8620. As we can see the opti-\nmization approach yields to a result, that do not change the\nprobability of occurrence of p1. Also the JPD only has a very\nlittle deviation. Whereas the result of the EM-algorithm with\nsample size 10 has a deviation of more then 13 % regarding\nthe absolut probability of p1. Also the JPD of the complete\nnet varied of almost 14%. But we can obtain also good results\nfor the EM-algorithm by incrementing the sample size. E.g.\nwe receive P50\nEM(p1jo1;o2) =0:6342 and P050\nEM=0:9534 for a\ndatabase with 50 samples. Although the sample size could be\nincreased almost further we will not get any better results for\nthe JPD, then using the optimization approach.\nTraining Data\nWe created five different, randomly connected BNs with 30\nnodes and 250 edges. Afterwards the CfPD algorithm was ex-\necuted 10 times on these networks repeatedly. Table 1 shows\nthe total number of CPT entries for the 10 iterations. Figure\n6 represents the decreasing trend of the number of entries for\neach iteration. Already in the first iteration we can observe\niteration BNet 1 BNet 2 BNet 3 BNet 4 BNet 5\n0 (start) 339243 1138913 311404 829771 285098\n1 210347 93409 181612 313803 156202\n2 79405 78081 116844 184907 125514\n3 63597 62273 85164 120139 92780\n4 47597 54625 69836 88011 76972\n5 31597 46451 53486 55755 60654\nTable 1: Number of entries of the five random BNs. The first row\n(start) represents the total number of nodes before executing the al-\ngorithm.\na significant reduction of size. This is evident because thenode with the highest possible number of parents and hence\nwith the largest CPT is selected and reduced. For example\nthe number of parents of the chosen node in BNet 5is 19,\nhence this node has 219=524288 CPT entries. After exe-\ncuting CfPD on this net, the chosen node has 8 parents after-\nwards and the newly-inserted node has 12 parents. Hence the\ntotal number is reduced by 219\u0000(28+212) =431996 entries.\nFigure 6 represents the trend of 10 iterations. For example\n0200000400000600000800000100000012000001400000\n012345678910BNet1BNet2BNet3BNet4BNet5\nFigure 6: Processing the CfPD algorithm on 5 randomly-connected\nBNs with 30 nodes and 250 links.\nBNet 5has 40 nodes and only 14666 CPT entries in the re-\nsulting graph, which is 1.54% according to the initial graph.\nTaking BNet 5as an example again, the algorithm can be pro-\ncessed 84 times on this network, which will lead to a network\nwith 114 nodes and 760 (0.07%) CPT entries. Thus the size\nof reduction is decreasing by each iteration. The number of\nedges in a BN is restricted byn\u0001(n\u00001)\n2. 
On the one hand, looking at a network with a high connectivity, and hence a higher number of edges, provides the opportunity to execute CfPD several times and leads to a large reduction in the number of CPT entries. On the other hand, if we consider a network that is less connected (hence it has only a few edges), we can only execute the algorithm a limited number of times and the reduction in the size of the CPT entries will also be smaller. Figure 7 shows the difference over five iterations between five BNs with 30 nodes and 100 edges on the one side and 350 edges on the other. The improvement regarding the total number of CPTs is therefore very high at the beginning, but flattens out as the number of iterations increases. If we look at BNet 2 from Figure 6, the number of CPT entries after 14 iterations is 8255 compared to the start, when the network had 412167 entries, which is an improvement of over 98%. Therefore we have to weigh the improvement of reducing the number of entries in the CPTs on the one hand against adding a new node to the net on the other.

Cognition
We have shown that summing up nodes with a common feature by introducing a new node can reduce the number of CPT entries significantly. We also pointed out that a high number of iterations will reduce the number of entries continuously, but the improvement decreases after each processing. Now let us make a little extension to the learning scheme of CfPD by choosing an appropriate node, or more precisely a convenient level, manually.

Figure 7: Decreasing trend of the number of CPTs after the first 5 iterations for two different random BNs. Left: BN with 30 nodes and 100 edges. Right: BN with 30 nodes and 350 edges.

Assume we have a learning scheme that identifies objects by defining the actual place as well as the shape of the object. A simple example can be seen in Figure 8. In this example, we have three levels of identifiers: place, shape and object (see Figure 8a). There are four example places in level one, three different shapes in level two and five example objects in level three. By obtaining knowledge about the place and the shape, the observer, also called the agent, can identify a unique object with a suitable probability. CfPD may help us to improve these levels. Figure 8b is the resulting graph after executing CfPD on the second level, which means that the parents of the shape nodes, i.e. level one, became divided.

Figure 8: Example of a cognition scheme. (a) A cognition scheme for identifying objects according to their location and shape; it consists of a 3-level scheme with 12 nodes, 27 links and 92 CPT entries. (b) Graph after processing CfPD on the second level, named shapes; this results in a cognition scheme with four levels, 13 nodes, 25 links and 80 CPT entries.

Assume the following scenario: the agent recognizes the situation that he is in the kitchen and distinguishes a square-shaped object. Thus we have obtained evidence for the first and second level. Then the agent can assume that this object is a table with high probability and, e.g., a cup with very low probability. CfPD can be used to improve a level of this network. If the level shapes includes forms such as triangular, squared and round, the algorithm can group the first two together by creating a new node and hence a new level called, for example, angled.
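To make the kitchen scenario tangible, here is a toy lookup with purely hypothetical probabilities (the real scheme in Figure 8 encodes this as CPTs over three levels) that ranks the candidate objects once evidence for place and shape is available.

```python
# Hypothetical conditional probabilities P(object | place, shape); the real
# network in Figure 8 has 92 CPT entries, this toy table covers one scenario.
p_object_given_evidence = {
    ("kitchen", "square"): {"table": 0.70, "cup": 0.05, "pot": 0.10,
                            "ball": 0.05, "towel": 0.10},
}

def identify(place, shape):
    """Return the candidate objects ranked by probability for the observed evidence."""
    dist = p_object_given_evidence[(place, shape)]
    return sorted(dist.items(), key=lambda kv: kv[1], reverse=True)

print(identify("kitchen", "square"))
# [('table', 0.7), ('pot', 0.1), ('towel', 0.1), ('cup', 0.05), ('ball', 0.05)]
```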
Summary
By adding a new node to a BN and updating the links in the net, new probabilities must be computed and existing probabilities must be changed so that the probability of occurrence of the observed node does not change. Therefore we formulated the problem as an optimization approach and minimized the cost between the probability of occurrence of the observed node before and afterwards. Other objectives, such as the Kullback-Leibler divergence, might also be possible and could be used instead of the probability of occurrence; this will require some testing and might be a subject for future work. We tested the optimization problem on the classical approach applied in this paper, using FICO XPress IVE 7.6 with random probabilities. The computed probabilities ensured the same probability of occurrence for the observed node in the initial network as well as for the node in the resulting net.
We showed that the size of the CPTs can be reduced in every step by repeatedly applying the algorithm to an existing BN. Furthermore, the improvement during the first iterations is quite considerable, and particularly in the case of highly-connected networks a significant reduction in the total number of CPT entries can be achieved. By reducing the sizes of the CPTs the efficiency increases distinctly.
We provided an example of how the CfPD algorithm can be used for the cognition of objects. The algorithm is processed on a level of a BN instead of searching for a node with a large CPT. As a result the number of CPT entries is, of course, reduced, and a new level is created that partitions the chosen level more precisely.

References
Bishop, C. M. (2006). Pattern recognition and machine learning. Springer.
Friedman, N. (n.d.). Learning belief networks in the presence of missing values and hidden variables.
Jensen, F. V. (2007). Bayesian networks and decision graphs. Springer.
Neapolitan, R. E. (2003). Learning Bayesian networks. Prentice Hall.
Olesen, K. G., Kjaerluff, U., Jensen, F., & Jensen, F. V. (1989). A MUNIN network for the median nerve - a case study on loops. Applied Artificial Intelligence.
Pearl, J. (1986). Fusion, propagation, and structuring in belief networks. Artificial Intelligence.
Röhrbein, F., Eggert, J., & Körner, E. (2009). Child-friendly divorcing: Incremental hierarchy learning in Bayesian networks. In IJCNN 2009.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "uLgyS19lC5a", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "http://ceur-ws.org/Vol-1419/paper0078.pdf", "forum_link": "https://openreview.net/forum?id=uLgyS19lC5a", "arxiv_id": null, "doi": null }
{ "title": "Word Similarity Perception: an Explorative Analysis", "authors": [ "Alice Ruggeri", "Loredana Cupi", "Luigi Di Caro" ], "abstract": null, "keywords": [], "raw_extracted_content": "Word Similarity Perception: an Explorative Analysis\nAlice Ruggeri ([email protected])\nCentre for Cognitive Science, University of Turin\nLoredana Cupi ([email protected])\nDepartment of Foreign Languages and Literatures and Modern Cultures, University of Turin\nLuigi Di Caro ([email protected])\nDepartment of Computer Science, University of Turin\nAbstract\nNatural language is a medium for expressing things belonging\nto conceptual and cognitive levels, made of words and gram-\nmar rules used to carry semantics. However, its natural am-\nbiguity is the main critical issue that computational systems\nare generally asked to solve. In this paper, we propose to go\nbeyond the current conceptualization of word similarity , i.e.,\nthe building block of disambiguation at computational level.\nFirst, we analyze the origin of the perceived similarity, study-\ning how conceptual, functional, and syntactic aspects influence\nits strength. We report the results of a two-stages experiment\nshowing clear similarity perception patterns. Then, based on\nthe insights gained in the cognitive tests, we developed a com-\nputational system that automatically predicts word similarity\nreaching high levels of accuracy.\nIntroduction\nWords are symbolic entities referring to something which fills\na portion of an autopoietic space made of conceptual, cog-\nnitive and contextual information. These three aspects are\nfundamental to understand the meaning ascribed to linguistic\nexpressions.\nOne of the most important building block in almost all\nComputational Linguistics tasks is the computation of sim-\nilarity scores between texts at different levels: words, sen-\ntences and discourses. Since manual numeric annotations\nof word-word similarity revealed a low agreement between\nthe annotators, cognitive studies can help improve computa-\ntional systems by discovering what lies behind the perception\nof similairty between words and their referenced concepts.\nThe concept of similarity has been extensively studied in the\nCognitive Science community, since it is fundamental in the\nhuman cognition. We tend to rely on similarity to generate\ninferences and categorize objects into kinds when we do not\nknow exactly what properties are relevant, or when we can-\nnot easily separate an object into separate properties. when\nspecific knowledge is available, then a generic assessment of\nsimilarity is less relevant (G. L. Murphy & Medin,1985).\nSince words co-occur in textual representations (mutually\ninfluencing one each other) it is possible to make experiments\non contextual information to analyze what and how influence\nthe perception of similarity. Let us consider two words as two\nmental representations. The intersection between them can\nbe seen as the context that may also help define the correct\nsimilarity of the words in a text. For example, sugar andsalt\ncan be easily associated to the context kitchen , whereas salt\nandseaintersect in another part of the mental representations.From a computational perspective, being words ambigu-\nous by nature, the disambiguation process (i.e., Word Sense\nDisambiguation) is one of the most studied tasks in Compu-\ntational Linguistics. To make an example, the term count can\nmean many things like nobleman orsum. Using contextual\ninformation, it is often possible to make a choice. 
Again,\nthis choice is done by means of comparisons among contexts,\nthat are still made of words. In other terms, we may state that\nthecomputational part of almost all computational linguistics\nresearch is about the calculus of matching scores between lin-\nguistic items, i.e., words similarity . But what’s behind words\nsimilarity?\nThere exist many annotated data related to similarity and\nrelatedness between words, like wordsim-353 (Finkelstein\net al.,2001) and SimLex-999 (Hill, Reichart, & Korho-\nnen,2014). A large part of the proposed computational sys-\ntems aims at finding relatedness between words, instead of\nsimilarity. Relatedness is more general than similarity, since\nit refers to a generic correlation (like cradle andbaby , that\nare words representing dissimilar concepts which, however,\nshare similar contexts).\nOne problem of similarity, as often faced in literature or\nannotated in datasets, is that it cannot be a static value. In-\ndeed, as the authors of these resources state in their works, the\nagreement between the annotators is usually not high (around\n50-70%). The reason is trivial, however: people can give dif-\nferent degrees of importance with respect to the specific char-\nacteristics of the concepts to compare. If we ask one to say\nhow much dogis similar to cat, the right answer can only be\n“it depends ”. While we can all agree about the fact that the\nconcept dogis quite similar to cat, we cannot say 0.7 rather\nthan 0.9 (in the range [0,1]) with certainty. Different aspects\ncan be taken into account: are we measuring the form of the\nanimal, or its behaviour? In both cases, it depends on which\npart of the animal and which actions we are considering to\nmake a choice. For instance, dogs use to return thrown ob-\njects. From this point of view, dogs and cats are dissimilar.\nIn the light of this, our contribution provides the basis for\nunderstanding what lies behind a similarity between words\nand their referenced concepts. First, we analyze syntactic,\nconceptual and functional aspects of the similarity percep-\ntion; then, we develop a computational system which is able\nto predict similarity by leveraging contextual information.\n482\nThe Cognitive Experiment\nIn this work, we present two tests to analyze how linguistic\nconstructions are perceived by humans in terms of strength of\nsemantic similarity and if there exists a functionality-based\nconnection that has an influence on its perception. The ex-\nperiment was presented to 96 users, having different ages and\nprofessions, without any particular cognitive or linguistic dis-\norder.\nTest on single words\nThe first test of the experiment regards the perception of the\nsimilarity between single words1. In particular, the goal was\nto analyze how the users focus on the functional links be-\ntween the words, and more importantly if such functional-\nbased similarity is a preferential perception channel com-\npared to the conceptual-based one.\nWords are ambigous, and many resources have been re-\nleased with the goal of defining all the possible senses\nof a word (i.e., WordNet). Word Sense Disambiguation\n(Bhattacharyya & Khapra,2012) is the task of resolving the\nambiguity of a word in a given context. 
Notice that, in our\nexperiment, we do not need any disambiguation of the words,\nsince this process is embodied in the human cognition, thus\nthe users of the test will autonomously represent their subjec-\ntive sense to associate to the words under comparison.\nThen, since we wanted to compare conceptual with func-\ntional preferences, we designed the test as a comparison be-\ntween two word pairs, one involving conceptually-related\nwords and one with words linked by direct functionalities.\nTo generalize, let us consider the words a,b, and cwith the\nconceptual word pair a-band the functional word pair a-c.\nThe user is asked to mark the most similar word (among b\nandc) to associate to a, and so the most correlated word pair.\nThe users were not aware of the goal of the test and of the\ndifference between the word pairs.\nSince words and actions present a high variability in terms\nof conceptual range (or their mental representation), we put\nparticular attention to the choice of the word pairs, according\nto the following principles:\nConceptual granularity If we think at the words object and\nthing , we probably do not have enough information to\nmake significant comparisons due to their large and unde-\nfined conceptual boundaries. The same happens in cases\nwhen two words represent very specific concepts such as\nlactose andamino acid . The word pairs of the proposed\ntest have been selected by considering this constraint (and\nso they include words which are not too specific nor too\ngeneral).\nConcreteness Words may have direct links with concrete ob-\njects such as “ table ” and “ dog”. In other cases, words\nsuch as “ justice ” and “ thought ” represent abstract con-\ncepts. Since it is not clear how this may affect the per-\n1Notice that “ similarity between words ” is intended as the simi-\nlarity between the concepts they bring to mind.ception of similarity, we decided to keep concrete words\nonly.\nSemantic coherence Another criterion used for the selection\nof the words was the level of semantic similarity between\nthe word pairs to compare. To better analyze whether the\nfunctional aspect plays a significant role in the similarity\nperception, we extracted conceptual and functional pairs\nof words which had similar semantic closeness according\nto a standard semantic similarity calculation. In the light\nof this, we used a Latent Semantic Space calculated over\nalmost 1 million of documents coming from the collection\nof literary text contained in the project Gutenberg page2.\nThe selected conceptual and functional word pairs had the\nproperty of having a very close semantic similarity (the\nscore differences were less than 0,01 in a [0,1] range).\nThe test was composed by three word pairs, to leave it sim-\nple and to be not affected by users tiredness. Then, instead of\nrandomly selecting three different word pairs, we wanted to\nconsider three cases in which the functional links between\nthe words have distinct levels of importance. Our assumption\nwas that the more the importance of the functional link be-\ntween two words in a pair, the more its perceived similarity\n(and thus the user preferences with respect to the conceptual\nword pair). For this reason we added a final criterion:\nIncreasing relevance of the functional aspect To estimate\nthe importance of the functional aspect that relates two\nwords we analyzed the number of actions (or verbs) in\nwhich they are usually involved with. 
In our test, the func-\ntional word pairs salt-water ,nail-polish , and ring-finger\nhave a functional link of 0.0033, 0.01014, and 0.06255re-\nspectively (see Table 1). These values are calculated in the\nfollowing way: given the total number of existing verbs\nNV(rw) for the root word rwand the number of effective\nusages EU(sw) with the second word of the pair sw, we\ncomputed the functional link Fl(rw, sw) of the functional\npair as EU(sw) / NV(rw) .\nTable 1: The chosen word pairs in the first test.\nRoot word Conceptual Pair Functional Pair [F. link]\nrw rw - sw rw - sw [Fl(rw,sw) ]\nsalt salt - sugar salt - water [0.003]\nnail nail - finger nail - polish [0.0101]\nring ring - necklace ring - finger [0.0625]\nWe then considered a backup test set with random word\npairs matching with the same above-mentioned criteria, col-\nlecting a total of 24 answers on 24 different cases of word\n2http://promo.net/pg/\n3For the verbs “ to put ”, “to add ” and “ to get ”\n4For the verbs “ to apply ” and “ to use ”\n5For the verbs “ to put ” and “ to wear ”\n483\npairs such as the main one of Table 1. This was done to prove\nthe reliability of the test, seeing whether the results and the\nanalyses show a similar trend, being independent from the se-\nlection of the words. The results of the whole test is described\nin the final part of this section.\nTest on Phrases\nThe second test of the experiment concerns the perception of\nthe similarity between phrases, o multi-word linguistic con-\nstructions6. The goal was to analyze how the syntactic con-\ntext of a target word influences the perception of similarity\namong entire phrases. More in detail, we wanted to discover\npossible differences of such perception along different syn-\ntactic roles. We considered a simple syntactic structure of the\ntype subject-verb-object .\nGiven a root sentence such as “ Mario sings the song ”, we\ncreated three variations by changing the subject, the verb, and\nthe direct object. For example, by changing “ Mario ” with\n“The bird ” we obtained “ The bird sings the song ”. The com-\nplete set of replacements are shown in Table 3.\nWe presented to the 96 users a total of 4 sentences (see Ta-\nble 2), that with its 4 variations produce a total of 16 pairs of\nsentences to be analyzed by the users in terms of perceived\nsimilarity, as in the first test. For each sentence, the users had\nto indicate the degree of similarity of the original sentence\nwith one of its variation using a value in the range [0,10]7.\n0 means no semantic similarity between the two phrases and\n10 means total equality. 
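Looking back at the first test for a moment, the functional-link measure Fl(rw, sw) = EU(sw) / NV(rw) defined above can be reproduced in a few lines. In the sketch below the EU counts follow the verbs listed in the footnotes, while the NV counts are hypothetical and were chosen only so that the ratios match the three reported values.

```python
def functional_link(num_verbs_root, num_shared_usages):
    """Fl(rw, sw) = EU(sw) / NV(rw): the share of the root word's verbs that
    are effectively used together with the second word of the pair."""
    return num_shared_usages / num_verbs_root

# Hypothetical NV counts; EU counts taken from the footnoted verb lists.
pairs = {
    ("salt", "water"): (1000, 3),   # "to put", "to add", "to get"
    ("nail", "polish"): (198, 2),   # "to apply", "to use"
    ("ring", "finger"): (32, 2),    # "to put", "to wear"
}
for (rw, sw), (nv, eu) in pairs.items():
    print(rw, sw, round(functional_link(nv, eu), 4))
# salt water 0.003 / nail polish 0.0101 / ring finger 0.0625
```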
The grammatical changes made on\nthe original sentences were chosen maintaining the semantic\nvalidity (i.e., all the sentences represent valid mental repre-\nsentations).\nTable 2: The chosen phrases in the second test.\nPhrase ID Phrase\n(a) Mario sings the song\n(b) Alan drives the car\n(c) Alice writes the book\n(d) Marco does the homeworks\nTable 3: The word replacements for subjects (SC), verbs (VC)\nand direct objects (OC).\nReplacement (a) (b) (c) (d)\nSC bird robot computer software\nVC writes cleans cleans gives\nOC verse band sheet pasta\n6Even in this case, “ similarity ” is intended as the similarity be-\ntween the concepts related to the phrases.\n7We used a [0,10] range instead of a [0,1] range as in the previous\ntest because it represents a more human-understandable and intuitive\nvotation.Interpretation of the results\nIn this section, we give a preliminary interpretation of the\nresults on the collected answers.\nIn the first test on single words, we can state that, gener-\nally, conceptual and functional word pairs are differently per-\nceived according to the importance of the funcional link in\nthe functional word pair. This shows that words and their ref-\nerenced concepts are mainly compared in terms of conceptual\nsimilarity, but when there exists importafnt functionalities be-\ntween them, this influences the users preference towards the\nfunctional word pair ??.\nFor example, the similarity of the sugar-salt pair results to\nbe stronger compared to the water-salt one, since the action to\nadd=put the salt in the water is “a needle in a haystack ” with\nrespect to all the actions related to water andsaltindepen-\ndently. This means that there is no exclusive action between\nwater andsalt(i.e., there are many actions that involve wa-\nter). An opposite example is represented by the word pair\nring-finger , since the action to put=wear the ring on the fin-\ngeris much more exclusive than in the previous case. Such\npreference could be explained by stating that all word pairs,\nespecially with words that underlie actions, have a strong vi-\nsual representation that makes them quickly perceivable.\nTable 4: Results showing the percentage of preferences in the\nchoice of the most (perceived) correlated word pairs of the\nfirst test.\nCase Word pairs N. of preferences %\n1a. salt - sugar 75 78%\n1b. salt - water 21 22%\n2a. nail - finger 44 46%\n2b. nail - polish 52 54%\n3a. ring - necklace 16 17%\n3b. ring - finger 80 83%\nTable 5: Results on the backup of the first test (with 24 differ-\nent cases including 8 low-FL cases, 8 medium-FL cases and\n8 high-FL cases). The results are in line with the ones of the\nmain test shown in Table 4.\nFunct. Pair Pref. w.r.t conceptual pair\nFunct. Pair (low FL) 1 out of 8 ( 12.5% )\nFunct. Pair (medium FL) 5 out of 8 ( 62.5% )\nFunct. Pair (high FL) 7 out of 8 ( 87.5% )\nThis result is also in line with what stated by (Cohen et\nal.,2002), i.e., words that have a functionality-based relation-\nship can have a more complex visual component that makes\nsuch correlation weaker.\nIn Figure 1, we show the users preferences for the second\ntest. In the case of verb replacement (VC) we can notice a\nhigh meaning change in terms of similarity perception (sim-\nilarity values close to 0), so the verb represents the real root\n484\nFigure 1: Results of the second test, showing the change\nscores in terms of word similairty perception after subject,\nverb and object replacements. 
SC stands subject change, VC\nfor verb change, and OC for object change.\nof the mental representations. The case of the subject change\n(SC) shows a less important decrease of similarity perception,\nwhile the object change (OC) resulted to be the less relevant\nsyntactic role influencing the meaning of the whole phrase.\nThe Computational Analysis\nIn the previous section we studied the role of the context (on\ndifferent levels) within the process of word similarity percep-\ntion. Since the results indicated that both functional aspects\nand syntactic roles have an impact on how people perceive\nsimilarity, we experimented a computational approach for the\nautomatic estimation of the similarity based on functional and\nsyntax-aware contextual information.\nIn particular, we used the large and freely-available seman-\ntic resource ConceptNet8. A partial overview of the semantic\nknowledge contained in ConceptNet is illustrated in Table 6.\nConceptNet is a resource based on common-sense rather than\nlinguistic knowledge since it contains much more function-\nbased information (e.g., all the actions an object can or can-\nnot do) contained in even complex syntactic structures. The\nidea is also to exploit users perception of reality (the actual\norigin of ConceptNet) instead of the result of top-down ex-\npert building of ontologies (e.g., WordNet). ConceptNet con-\ntains important semantic problems related to covarage, utility\nof semantic information and coherence, but we used it as a\nblack box due to its largeness and common-sense nature. A\ndeep analysis of this resource is out of the scope of this paper.\nThe experiment started from the transformation of a word-\nword-score similarity dataset into a context-based dataset in\nwhich the words are replaced by sets of semantic information\ntaken from ConceptNet. The aim was to figure out which\nsemantic facts make the similarity between two words per-\nceivable .\nWe used the dataset SimLex-999 (Hill et al.,2014) that con-\ntains one thousand word pairs that were manually annotated\nwith similarity scores. The inter-annotation agreement is 0.67\n8http://conceptnet5.media.mit.edu/Table 6: Some of the existing relations in ConceptNet, with\nexample sentences in English.\nRelation Example sentence\nIsA NP is a kind of NP.\nLocatedNear You are likely to find NP near NP.\nUsedFor NP is used for VP.\nDefinedAs NP is defined as NP.\nHasA NP has NP.\nHasProperty NP is AP.\nCapableOf NP can VP.\nReceivesAction NP can be VP.\nHasPrerequisite NP—VP requires NP—VP.\nMotivatedByGoal You would VP because you want VP.\nMadeOf NP is made of NP.\n... ...\n(Spearman correlation). We leveraged ConceptNet to retrieve\nthe semantic information associated to the words of each pair,\nthen keeping the intersection. 
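A minimal sketch of this feature-construction step is given below; it assumes the per-word relation sets have already been fetched from ConceptNet, and the hard-coded sets are abbreviated versions of the rice and bean listings that follow. The word pair's features are then simply the set intersection.

```python
# Abbreviated "relation-concept" sets for two words; in the paper these come
# from ConceptNet rather than being hard-coded.
semantics = {
    "rice": {"hasproperty-edible", "isa-food", "atlocation-atgrocerystore",
             "isa-domesticateplant", "atlocation-pantry"},
    "bean": {"isa-legume", "usedfor-nutrition", "atlocation-atgrocerystore",
             "isa-domesticateplant", "atlocation-pantry"},
}

def pair_features(word_a, word_b):
    """Features of a word pair: the semantic information shared by both words."""
    return semantics[word_a] & semantics[word_b]

print(sorted(pair_features("rice", "bean")))
# ['atlocation-atgrocerystore', 'atlocation-pantry', 'isa-domesticateplant']
```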
For example, considering the\npair rice-bean , ConceptNet returns the following set of se-\nmantic information for the term rice:\n[hasproperty-edible, isa-starch, memberof-oryza,\natlocation-refrigerator, usedfor-survival, atlocation-\natgrocerystore, isa-food, isa-domesticateplant,\nrelatedto-grain, madeof-sake, isa-grain, receivesaction-\ncook, atlocation-pantry, atlocation-ricecrisp, atlocation-\nsupermarket, ...]\nThen, the semantic information for the word bean are:\n[usedfor-fillbeanbagchair, atlocation-infield,\natlocation-can, usedfor-nutrition, usedfor-cook,\natlocation-atgrocerystore, usedfor-grow, atlocation-\nfoodstore, isa-legume, usedfor-count, isa-\ndomesticateplant, atlocation-cookpot, atlocation-\nbeansoup, atlocation-soup, isa-vegetable, ...]\nFinally, the intersection produces the following set:\n[atlocation-atgrocerystore, isa-domesticateplant, at-\nlocationpantry]\nAt this point, for each non-empty intersection, we created\none instance of the type:\n<semantic information >,<similarity score >\nand computed a standard term-document matrix, where the\nterm is a semantic term within the set of semantic informa-\ntion retrieved from ConceptNet and the document dimension\nrepresents the word pairs of the original dataset. After this\npreprocessing phase, the score attribute is discretized into two\nbins:\n\u000fnon-similar class - range in the dataset [0, 5]\n485\n\u000fsimilar class - range in the dataset [5.1, 10]\nThe splitting of the data into two clusters allowed us to\nexperiment a classic supervised classification system, where\na Machine Learning tool (a Support Vector Machine, in our\ncase) has been used to learn a binary model for automatically\nclassifying similar andnon-similar word pairs. The result of\nthe experiment is shown in Table 7. Noticeably, the classi-\nfier has been able to reach a quite good accuracy (65.38%\nof correctly classified word pairs), considering that the inter-\nannotation agreement of the original data is only 0.67 (Spear-\nman correlation).\nTable 7: Classification results in terms of Precision, Recall,\nand F-measure. The total accuracy is 65.38%.\nPrecision Recall F-Measure Class\n0,697 0,475 0,565 non-similar\n0,633 0,815 0,713 similar\n0,664 0,654 0,643 weighted total\nNotice that similar word pairs are generally easier to iden-\ntify with respect to non-similar ones.\nRelated Work\nThis paper presents an idea which combines linguistic, cogni-\ntive and computational perspectives. In this section, we men-\ntion those theoretical and empirical methods that inspired our\nmotivational basis.\nLinguistic Background\nThe difficulty of defining the meaning of meaning has to do\nwith some tricky issues like lexical ambiguity and polysemy,\nvagueness, contextual variability of word meaning, etc. As a\nmatter of fact, words are organized in lexicon as a complex\nnetwork of semantic relation which are basically subsumed\nwithin Saussure’s paradigmatic (the axis of combination) and\nsyntagmatic (the axis of choice) axes (Saussure,1983).\nSome authors (Chaffin & Herrmann,1984) have already\nsuggested theoretical and empirical taxonomies of semantic\nrelations consisting of some main families of relation (such\nas contrast, similars, class inclusion, part-whole, etc.). As\nMurphy points out (M. L. 
Murphy,2003), lexicon has become\nmore central in linguistic theories and, even if there is no a\nwidely accepted theory on its internal semantic structure and\nhow lexical information are represented in it, the semantic re-\nlations among words are considered in scholarly literature as\nrelevant to the structure of both lexical and conceptual infor-\nmation and it is generally believed that relations among words\ndetermine meaning.\nCognitive Background\nAlthough words perception could seem immediate, the input\nwe perceive is recognized and trasformed mediating back-\nground and contextual information, within a dynamic and co-\noperative process. The well-known semiotic triangle (Ogden,Richards, Malinowski, & Crookshank,1946) introduced by\ndifferent authors over time represents a first reference for\nour study. People use symbols (our words) to communicate\nmeanings (the effective content). The meaning is something\nuntangible, which can be though even without any concrete\npresence. The last point is then the physical reference, i.e.,\nthe object in the reality9. Note that there is no connection be-\ntween symbols and references, since only imagined meanings\ncan allow the two to be linked.\nInteraction is another important aspect that has been inves-\ntigated in literature. Indeed, the actions change the type of\nperception of an object, which models itself to fit with the\ncontext of use. Then, the Gestalt theory (K ¨ohler,1929) con-\ntains different notions about the perception of meaning ac-\ncording to interaction and context. In particular, the core of\nthe model is the complementarity between the figure and the\nground . In our case, a word is the figure and the ground is\nthe context that lets emerge its specific sense. Finally, James\nGibson introduced the concept of affordances as the cognitive\ncues that an object exposes to the external world, indicating\nways of use (Gibson,1977). In cognitive and computational\nlinguistics, this theory can be inherited to model words as ob-\njects and contexts as their interaction with the world.\nComputational Background\nIn this section, we review the main works that are related to\nour contribution from a computational perspective. Natural\nLanguage Processing represents an active research commu-\nnity whose focus is letting machines communicate by under-\nstanding semantics within linguistic expressions. Ontology\nLearning (Cimiano,2006) is the task of automatic extracting\nstructured semantic knowledge from texts, and it well fits the\nscope of this paper. Nevertheless, Word Sense Disambigua-\ntion (WSD) (Stevenson & Wilks,2003) is maybe the most re-\nlated NLP task, whose aim is to capture the correct mean-\ning of a word in a context. Generally speaking, many other\ntasks have the problem of comparing linguistic items in order\nto make choices to pass from syntax to semantics. Named\nEntity Recognition (NER) (Nadeau & Sekine,2007;Marrero,\nUrbano, S ´anchez-Cuadrado, Morato, & G ´omez-Berb ´ıs,2013)\nis the task of identifying entities like people, organizations\nand locations in texts. This is often done by comparing words\nin contexts to some learned patterns. In general, many other\nNLP tasks are based on the evaluation of similarity scores\n(Manning & Sch ¨utze,1999).\nNowadays, there exists a large set of available semantic\nresources that can be used in Natural Language Processing\ntechniques in order to understand the hidden meaning of per-\nceived similarity between two words or concepts. 
For exam-\nple, ConceptNet contains semantic information that are usu-\nally associated with common terms (even if not correctly dis-\nambiguated). By analyzing the relationship betweeen anno-\ntated similarity scores and semantic information it is possi-\n9The existing terminology is quite varying: symbol-\nthought/reference/referent (Aristotele); object-representation-\ninterpretant (Peirce); signified-sign-referent (De Saussure)\n486\nble to create predictive models which automatically deduce\nwords similarity by dynamically weighting words features\nbased on their mutual interaction.\nIf we consider the objects / agents / actions to be terms in\ntext sentences, we can try to extract their meaning and se-\nmantic constraints by using the idea of affordances. For in-\nstance, let us think to the sentence “ The squirrel climbs the\ntree”. In this case, we need to know what kind of subject\n“squirrel ” is to figure out (and visually imagine) how the ac-\ntion will be performed. According to this, no particular issues\ncome out from the reading of this sentence. Let us now con-\nsider the sentence “ The elephant climbs the tree ”. Even if the\ngrammatical structure of the sentence is the same as before,\nthe agent of the action is different, and it obviously creates\nsome semantic problems. In fact, from this case, some con-\nstraints arise; in order to climb a tree, the subject needs to\nfit to our mental model of something that can climb a tree.\nIn addition, this also depends on the mental model of “ tree”.\nMoreover, different agents can be both correct subjects of an\naction whilst they may produce different meanings in terms\nof how the action will be mentally performed. Consider the\nsentences “ The cat opens the door ” and “ The man opens the\ndoor ”. In both cases, some implicit knowledge suggests the\nmanner the action is done: while in the second case we may\nthink at the cat that opens the door leaning to it, in the case\nof the man we probably imagine the use of a door handle. A\nstudy of these language dynamics can be of help for many\nNLP tasks like Part-Of-Speech tagging as well as more com-\nplex operations like dependency parsing and semantic rela-\ntions extraction. Some of these concepts are latently stud-\nied in different disciplines related to statistics. Distributional\nSemantics (DS) (Baroni & Lenci,2010) represents a class of\nstatistical and linguistic analysis of text corpora that tries to\nestimate the validity of connections between subjects, verbs,\nand objects by means of statistical sources of significance.\nConclusions and Future Work\nIn this paper, we proposed a combined analysis of linguis-\ntic, cognitive and computational aspects to assess the nature\nof words similarity. First, we studied how word similarity\nperception is influenced in terms of conceptual, functional\nand syntactic roles. In future work, our aim is to extend the\nsample of users on more specific cases. Still, by changing\nthe language of the users, we can have results that take into\naccount the cultural ground, understanding if and how word\nsimilarity depends on it. On the other side, we stressed the\nimportance of computational understanding of similarity to\nimprove Computational Linguistics tasks which are based on\nit, usually without any analysis of contextual information. In\nparticular, we used the large semantic knowledge contained\nin ConceptNet to create a Support Vector Machine classifier\nto predict word similarity based on an annotated dataset. 
In\nfuture work, we will extend our experimental analysis to val-\nidate existing similarity datasets and to produce predictive\nmodels for the automatic identification of human-readablesimilarity scores.\nReferences\nBaroni, M., & Lenci, A. (2010). Distributional memory: A\ngeneral framework for corpus-based semantics. Com-\nputational Linguistics ,36(4), 673–721.\nBhattacharyya, P., & Khapra, M. (2012). Word sense disam-\nbiguation. Emerging Applications of Natural Language\nProcessing: Concepts and New Research , 22.\nChaffin, R., & Herrmann, D. J. (1984). The similarity and\ndiversity of semantic relations. Memory & Cognition ,\n12(2), 134–141.\nCimiano, P. (2006). Ontology learning from text . Springer.\nCohen, L., Leh ´ericy, S., Chochon, F., Lemer, C., Rivaud, S.,\n& Dehaene, S. (2002). Language-specific tuning of\nvisual cortex? functional properties of the visual word\nform area. Brain ,125(5), 1054–1069.\nFinkelstein, L., Gabrilovich, E., Matias, Y ., Rivlin, E., Solan,\nZ., Wolfman, G., et al. (2001). Placing search in con-\ntext: The concept revisited. In Proceedings of the 10th\ninternational conference on world wide web (pp. 406–\n414).\nGibson, J. J. (1977). The Theory of Affordances. In R. Shaw\n& J. Bransford (Eds.), Perceiving, acting, and knowing:\nToward an ecological psychology. Lawrence Erlbaum.\nHill, F., Reichart, R., & Korhonen, A. (2014). Simlex-999:\nEvaluating semantic models with (genuine) similarity\nestimation. arXiv preprint arXiv:1408.3456 .\nK¨ohler, W. (1929). Gestalt psychology.\nManning, C. D., & Sch ¨utze, H. (1999). Foundations of sta-\ntistical natural language processing . MIT press.\nMarrero, M., Urbano, J., S ´anchez-Cuadrado, S., Morato, J.,\n& G ´omez-Berb ´ıs, J. M. (2013). Named entity recog-\nnition: Fallacies, challenges and opportunities. Com-\nputer Standards & Interfaces ,35(5), 482–489.\nMurphy, G. L., & Medin, D. L. (1985). The role of theories\nin conceptual coherence. Psychological review ,92(3),\n289.\nMurphy, M. L. (2003). Semantic relations and the lexicon .\nCambridge University Press.\nNadeau, D., & Sekine, S. (2007). A survey of named entity\nrecognition and classification. Lingvisticae Investiga-\ntiones ,30(1), 3–26.\nOgden, C. K., Richards, I. A., Malinowski, B., & Crook-\nshank, F. G. (1946). The meaning of meaning . Har-\ncourt, Brace & World New York.\nProffitt, D. (2006). Embodied perception and the economy\nof action. Perspectives on psychological science ,1(2),\n110–122.\nSaussure, F. d. (1983). Course in general linguistics, trans.\nR. Harris, London: Duckworth .\nStevenson, M., & Wilks, Y . (2003). Word-sense disambigua-\ntion. The Oxford Handbook of Comp. Linguistics , 249–\n265.\n487", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "105OVLiWAeT", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "http://ceur-ws.org/Vol-1419/paper0031.pdf", "forum_link": "https://openreview.net/forum?id=105OVLiWAeT", "arxiv_id": null, "doi": null }
{ "title": "Decoding (un)Known Opponent's Game Play, a Real-Life Badminton Eye Tracking Study", "authors": [ "Aditi Mavalankar", "Snigdha Dagar", "Kavita Vemuri" ], "abstract": null, "keywords": [], "raw_extracted_content": "Decoding (un)known opponent's game play, a real-life badminton eye tracking\nstudy\nAditi Mavalankar, Snigdha Dagar, Kavita Vemuri*\nInternational Institute of Information Technology, Hyderabad, India.\nEmail: [email protected] , snigdha.dagar@research .iiit.ac.in, [email protected]\nAbstract\nIs the underlying cognitive processes different when playing\nwith an opponent whose game play is familiar to that of an\nunknown? This study filtered the advance cues extracted by expert\nand amateur players when paired with an opponent whose game\nplay is familiar to that of an unknown opponent. Our data collected\nin a real-life naturalistic game play conditions suggests that at the\nbeginning of the game and for the first serve only the opponent's\ntorso is crucial for cues and as the game progresses the information\nfrom the feet seems to be sufficient for the expert players in\ncontrast to the data from the amateur player. Subsequently the\npreparatory or quiet-eye period for the serves at the beginning of\nthe game play was higher than for later serves for all player sets.\nThe preparation time for known opponents by expert players was\nhigher for the first serve than for the unknown opponents but by\nthe fifth serve the duration was negligible. Analysis of complete\nrallies show that post-serve attention allocation to opponent's\nracket and shuttle-in-flight is paramount in the play for both sets of\nopponents. Taken together the results of this investigation suggests\nthat expert player's visual attention was distinguishable to that of\nan amateur player and expert players quickly decode unknown\nopponent's competence fairly early in the game play and follow\nconsistent pattern of visual search. The results from the\npreliminary experimental data suggest the possibility of\nunderstanding how humans employ dynamic pattern recognition\nmodels in visual-search.\nIntroduction \nData collected from real-life naturalistic conditions\nprovides insight into anticipation, prediction and rapid re-\nadjustments processes applied by players in a sport like\nbadminton. Analysis of data like eye gaze collected from\nplayers engaged in real-time naturalistic game play can\nprovide an accurate reflection of player behavior. Inferences\non the underlying cognitive and motor skills can be derived\nto a certain extent from two main indices, visual search\npatterns and fixation duration in the preparatory,\nanticipatory and execution phases of the game play. In this\nstudy we report scanpath analysis reflecting the visual\nsearch in later two phases with emphasis on the quiet eye\nperiod in the preparatory phase. We do this by comparing\ndata from three players paired against opponents with whom\nthey have played before and others whose game play was\nunknown. A wearable eye tracker (Tobii Glasses I) was\nused to collect saccadic eye movement and the fixation\nduration, which is an estimate of attention allocation at\nparticular regions of interest important for game strategy,\nthough it was shown that other factors like stress can also\ninfluence fixations (Abernethy, 1988, 1990). The quiet-eye\nperiod in our case is the preparatory phase of the player justbefore executing the serve. 
This period is defined as the\ntime taken to access task relevant cues and strategize\nappropriate motor actions (Vickers, 1996). \n Previous studies on badminton which looked at\ndifferences in experts versus novice game play using spatial\nocclusion concept (Abernethy & Russel, 1987a,b) show the\nformer exhibited better anticipatory behavior while the later\nneeded more information for decision making. In the same\nstudy a video recording of game play was shown to novice\nand expert players who wore a mobile eye tracker and the\nfixation duration at five distinct regions – shuttle-in-flight,\nopponent's arm,racket,head,face,legs – revealed that both\ngroups have similar early fixations regions though time on\nthe racket and arm was more for the expert while it was the\nhead area for the novice. Secondly they report that the order\nof the fixation on the cues was not dissimilar. A similar\nstudy on tennis (Goulet et al., 1989) aimed to understand\nvisual search pattern reported that focus was on\nshoulder/trunk of the opponent in the preparation stage and\nthen shifts to the racket during the execution phase while\nnovices depend on more cues by using pre-recorded game\nplay as stimulus. Singer et al., (1996) also used simulated\ntennis play and found differences between skilled and non-\nskilled player in visual-search , reaction time and decision\naccuracy with non-skilled players fixating longer on the\nopponent's head and less systematic in the tracking of the in-\nflight ball. It was also shown that player's ability to\nanticipate opponent's intentions from postural cues is an\nadvantage (Rowe & McKenna, 2001) and a skill that is\nacquired over time by players. \n Using a more sensitive eye tracker, Abernethy (1990)\nconducted experiment with video recorded squash game\nplay projected on a huge screen on the wall of the squash\ncourt. The players were positioned in the court and the data\nshowed that experts fixate on head/arm more than on the\nracket as compared to novice players, from which they\ninferred that experts are capable of eliciting advance cues\njust from posture. The fixation times were not different for\nthe two groups and visual search pattern variation was not\nevident. A meta-analysis of three decades of work (Mann\net al. 2007) which compared the attentional allocation of\nexperts and novices report that the former have fewer long\nduration fixations translating to possible higher information\nextraction ( Williams et al., 1999) quicker. The quiet-eye\ntime , as another gaze behavior index, of experts was found\nto be higher when compared to less-skilled players across\n211\nstudies on a wide range of domains including in sports\n(Mann et al 2007). Other studies on badminton looked at\nthe dynamic patterns from the players mutual spatial\ndisplacement within the court as an important variable for\nspeed scalar product estimates which showed increased\nstroke variability to disrupt stable patterns ( Chow et al.,\n2014).\nThe ability to gauge the expertise levels of the opponent\nand play optimally is a strategy applied by even reasonably\ngood players in elite sports like badminton. As in any\ncompetitive setting the player needs to quickly gauge and\ndecipher the opponent tactics from overt visual cues like\nfacial expressions, body posture and spatial position on the\ncourt and also covert memory models formed from previous\nencounters with the opponent. In the absence of prior\ninformation, the player needs to quickly build the same\nearly on in the game to win. 
An expert player's skill is based\non the ability to analyze opponent’s strength and weakness\nand evolve response strategy accordingly. Of interest to\ncognitive research and to sports personnel is the dynamic\nprocess applied by expert players who are able to quickly\ndecode an unknown opponents game play, which is the\nfocus of the present study. Towards this we collected data\nin a open-to- sky mud badminton court in near natural game\nplay conditions from 3 highly rated badminton players\npitched against six known and unknown opponents. \nMethods \nParticipants\nTwo players (P1, P2) rated 9/10 with at least 6-7 years of\nexperience and having participated in inter-college\ntournaments and a third player (P3) who was rated 7/10 but\nhas not taken part in any serious competitive matches were\npaired with 6 opponents (O1,O2,O3,O4,O5,O6) each. Of the\n6 opponents, O1,O2,O3 were comparable in expertise to P1\nand P2, while the rest of the opponents were fairly good\nplayers. O1 and O2 had played in practice sessions with P1\nand P2 while the other opponents game play was unknown.\nP3 had played with O1 and O3 before. The participants\nwere all in the age group of 18-23 years and right-handed.\nThe known and unknown opponents were mixed to take\ncare of any habituation that might occur in the players. All\nplayers had given their consent before taking part in the\nexperiment. \nProcedure\nThe eye movements were recorded from head-mounted\ntracking device from Tobii (Glasses 1,\nhttp://www.tobii.com/), the recording unit connected to the\nglasses is the size of a smart-phone and hooked onto the\nparticipants track-pants and hence allows for natural play.\nAll the experiments were carried out in the same familiar\nopen-air court in naturalistic conditions and the frame-grab\nin Figure 1 shows the court with heatmap overlaid from a\nrecording. For each pair of participants 10 random servesfrom the three players were collected, as the data was\ncollected from naturalistic real-time game play, utmost\neffort was made to ensure that opponent and player's spatial\nposition in the court was constant through out the forehand-\nserves with minimal variation in the velocity of release of\nthe shuttle. The participants were allowed to continue the\nrally till one of them dropped a shot. After all the sets were\ncompleted the player was asked to rate the opponent's play.\non a scale of 1-10. The rating was taken purely on the basis\nof their game play on that particular day. The fixation\nduration above a threshold of 70ms and the scanpath was\nanalyzed at three phases: a) the preparatory b) actual\nexecution of the serve by the player and c) the complete\nrally.\nFigure 1: The badminton court where the data was collected\nand the heatmap sample from one of the recordings.\nData analysis\nEye movement data was recorded at 30 frames per second. \nThe video recording from the eye tracker was analyzed \nframe by frame using the Tobii's studio. Heat maps were \ngenerated for each serve from preparation time of a serve to\nwhen the rally is dropped by either one of the players. \nThese heat maps provide a relative measure of the duration \nof gaze of the player in the different areas of interest in the \nscene. From the coordinates the fixation duration at each \ngaze position in the scan path was estimated with main \nregions of interest being the opponent's – torso, feet and \nracket and the shuttle for 4 serves out of the 10. 
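The frame-level reduction described here (a 30 fps recording, fixations kept only above the 70 ms threshold, durations accumulated per region of interest) can be sketched as follows. The frame labels at the bottom are hypothetical and stand in for the per-frame annotation produced during the frame-by-frame analysis.

```python
from itertools import groupby

FRAME_MS = 1000 / 30          # recording rate reported in the text: 30 fps
MIN_FIXATION_MS = 70          # fixation threshold used in the analysis

def fixation_durations(roi_per_frame):
    """Collapse consecutive frames labelled with the same region of interest
    into fixations, keep only those at or above the 70 ms threshold, and
    return the total fixation time (ms) per region."""
    totals = {}
    for roi, run in groupby(roi_per_frame):
        duration = sum(1 for _ in run) * FRAME_MS
        if roi is not None and duration >= MIN_FIXATION_MS:
            totals[roi] = totals.get(roi, 0.0) + duration
    return totals

# Hypothetical frame-by-frame labels for one serve (None = off any region).
frames = ["torso"] * 12 + ["feet"] * 3 + [None] * 2 + ["racket"] * 9 + ["shuttle"] * 15
print(fixation_durations(frames))
```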
The selected serves were the first, second, fifth and the eighth for all the sets. The scanpath of the player is represented in the form of state diagrams, wherein each fixation duration at a position of interest is a state and a change in eye movements is the transition between the states.

Results
The scan path before and just after the serve is analyzed from two views: a) the quiet eye during the preparatory phase, and b) the visual search pattern as the game progresses. Four serves (s1, s2, s5, s8) for each pair of players were analyzed to look at differences in the salient cues a player gathers in order to strategize an optimal serve, and the variation across the serves. Table 1 lists the average fixation period of the first and last two serves in the preparatory phase, grouped for known and unknown opponents. The preparatory period is the time just before the serve is executed by the player. For the known opponents the average preparatory time was higher (824 ms) than for the unknown (670 ms) for the first serves, while the time for the second set of serves was slightly higher for the unknown (435 ms) than for the known (332 ms). The preparatory time for the first serves was higher than for the later serves for both sets of opponents.

Table 1: The average fixation duration in milliseconds for the preparatory or quiet-eye period when playing known and unknown opponents.
           Serves s1, s2: time (ms)   Serves s5, s8: time (ms)
Known
P1         940                        515
P2         755                        297
P3         779                        185
Average    824                        332
Unknown
P1         721                        494
P2         464                        387
P3         827                        426
Average    670                        435

The scan path data from the preparation to the execution of the serve gives insights into the visual cues gathered by the player to plan the serve, predict the return and anticipate the response. Figure 2 is the state diagram representation for the 4 serves, for one known (O1) and one unknown (O5) opponent for each of the players. The scanpaths of all opponents were analyzed but are not included in the figure due to size issues. The higher rated player P1's first landing fixation is the opponent's upper body for all the unknowns and for 1 known player of the 6 opponents for the first serve (red, Figure 2a); from the second serve on, the first fixation was the opponent's feet, consistent across all opponents. Attention was also allocated to the opponent's upper body after executing the serve and tracking the shuttle in flight for 4 out of the 6 opponents. The first fixation position for P2 (Figure 2b) was the opponent's upper body for 3 opponents (2 unknown) in the first serve, while for 2 others (both unknown) it was the opponent's feet, and for one it was below the 70 ms threshold, so not considered. As was the case with P1, from the 2nd serve the first fixation point was the opponent's feet. In the case of the amateur player, P3, the first landing position was also the opponent's upper body in 4 cases (one known and the rest unknown) but became random from the second serve, switching between feet, upper body and shuttle. Interestingly, P3 did not shift attention to the opponent's body after executing the serves, whereas P1 and P2 fixate on the opponent for 4 opponents after executing the serve.

Figure 2: State diagram representing the scan path from preparation to execution of a serve and just after. Red: s1 (serve 1), Blue: s2, Green: s5 and Brown: s8. a) P1 with a known opponent (O1) and an unknown one (O5). b) P2's serves with known (O1) and unknown (O5), and c) P3's with known (O1) and unknown (O5). The nodes/states color code: opponent (before serve) – dark grey; opponent's feet (before serve) – light grey; opponent's racket – blank/white; shuttle – lavender; opponent (after serve) – green.

The detailed scan-path diagrams of two competitive rallies of the eighth serve of players P1 and P2 are shown in Figure 3, with opponents O1 and O5. As can be inferred from the sample set of data, at the beginning the player's attention is on the opponent or the opponent's feet but shifts to the opponent's racket and the shuttle during the actual rally, especially when paired with a known and higher rated opponent (O1), a trend that was noticed in the analysis of other rallies with known players. For unknown opponents the attention away from the shuttle or racket depended on the rally duration and the type of shot; hence no consistent pattern was discernible. For the amateur player (P3) the fixations were random, shifting to the opponent and the feet frequently during the rally.
In studies\ncomparing expert versus novice (Mann et al., 2007;\nAbernethy & Russell, 1987b) no difference was found in\nvisual search, but the paradigm applied was to analyze\nplayers response to a serve whereas in our study the interest\nwas visual cognition applied by player to predict opponent's\nresponse to a serve. Further experiments with amateurs\nneed to be conducted to validate our preliminary finding. \nThough the scanpath pattern from known and unknown\nopponents was almost similar the preparatory duration or\nquiet eye period shows that for known and unknown\n214\nopponents the first two serves were higher across all the\nthree players and by the fifth serve the quiet time was\nsignificantly lower. This could be because post the first\nserve the opponent's upper body is not allocated attention.\nAdditionally the time period for the first serves for known\nopponent was higher than for unknown and the difference is\nlower as the game progresses that is, by the fifth serve. The\nlonger eye time for higher rated known players when\nplaying with equally skilled opponents could possibly mean\na structured planning processes applied by these players or\neven anxiety at the beginning of the play ( Williams &\nElliott, 1999). The differences in information processing\nfrom visual cues of the expert players (P1 & P2) and the\namateur (P3) player is similar to the findings reported in\nnovice versus expert comparison studies ((Abernethy &\nRussell, 1987b).\nIn conclusion, the experiment conducted in real-life\nsettings and with very few motor control instructions means\nwe acquired natural actions but it also threw up data\nanalysis challenges and confident assertions were not\nexactly possible. Nevertheless from the current set of data,\nwe can infer that players decode unknown opponents game\nplay fairly early in the game formulate patterns from visual\ncues efficiently. An observation we noted was the\nimmersive play by the participants due to fewer restrictions\nin motor actions or game play mechanisms. Future work\ncould consider more participants of national or international\nstandards to set the baseline for preparatory visual search\npattern and fixation times. Models from the coaches or\nprofessional players can be used by players to optimize\nacquisition of covert and overt information.\nAcknowledgments\nFunding acknowledgment: Partially funded under the\nserious games project of the National Programme on\nPerception Engineering Phase II, Department of Information\ntechnology (DIT)/DIETY, Government of India. \nReferences \nAbernethy, B. (1988). Visual search in sport and\nergonomics: Its relationship to selective attention and\nperformance expertise. Human Performance, 4, 205-235.\nAbernethy, B. (1990). Expertise, visual search, and \ninformation pick-up in squash. Perception, 19, 63–77.\nAbernethy, B., & Russell, D.G. (1987a). Expert-novice\ndifferences in an applied selective attention task. Journal of\nSport Psychology, 9, 326-345.Abernethy, B., & Russell, D.G. (1987b). The relationship\nbetween expertise and visual search strategy in a racquet\nsport. Human Movement Science , 6, 283-319.\nChow, Y.J., Seifert, L., Hérault, R., Chai, J.Y.C.S, Lee,\nC.Y.M. (2014). A dynamical system perspective to\nunderstanding badminton singles game play. Human\nMovement Science 33,70–84\nGoulet, C., Bard, C., & Fleury, M. (1989). Expertise \ndifferences in preparing to return a tennis serve: A visual \ninformation processing approach. 
Journal of Sport & Exercise Psychology, 11, 382-398.
Mann, D. T. Y., Williams, A. M., Ward, P., & Janelle, C. M. (2007). Perceptual-cognitive expertise in sport: A meta-analysis. Journal of Sport & Exercise Psychology, 29, 457-478.
Rowe, R. M., & McKenna, F. P. (2001). Skilled anticipation in real-world tasks: Measurements of attentional demands in the domain of tennis. Journal of Experimental Psychology: Applied, 7, 60-67.
Singer, R. N., Cauraugh, J. H., Chen, D., Steinberg, G. M., & Frehlich, S. G. (1996). Visual search, anticipation, and reactive comparisons between highly-skilled and beginning tennis players. Journal of Applied Sport Psychology, 8, 9-26.
Vickers, J.N. (1996). Visual control while aiming at a far target. Journal of Experimental Psychology: Human Perception and Performance, 22, 342-354.
Williams, A. M., & Elliott, D. (1999). Anxiety, expertise, and visual search strategy in karate. Journal of Sport & Exercise Psychology, 21, 362-375.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Lq5Ko7M5ml5", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "http://ceur-ws.org/Vol-1419/paper0092.pdf", "forum_link": "https://openreview.net/forum?id=Lq5Ko7M5ml5", "arxiv_id": null, "doi": null }
{ "title": "A Vector Representation of Fluid Construction Grammar Using Holographic Reduced Representations", "authors": [ "Yana Knight", "Michael Spranger", "Luc Steels" ], "abstract": null, "keywords": [], "raw_extracted_content": "A vector representation of Fluid Construction Grammar\nusing Holographic Reduced Representations\nYana Knight1and\nMichael Spranger2and\nLuc Steels1\n1Artificial Intelligence Laboratory,\nFree University of Brussels (VUB)\nPleinlaan 2, 1050 Brussels, Belgium\nDr. Aiguader 88, Barcelona 08003, Spain\n2Sony Computer Science Laboratories\n3-14-13 Higashigotanda, 141-0022 Tokyo, Japan\nAbstract\nThe question of how symbol systems can be instantiated in\nneural network-like computation is still open. Many technical\nchallenges remain and most proposals do not scale up to realis-\ntic examples of symbol processing, for example, language un-\nderstanding or language production. Here we use a top-down\napproach. We start from Fluid Construction Grammar, a well-\nworked out framework for language processing that is compat-\nible with recent insights into Construction Grammar and inves-\ntigate how we could build a neural compiler that automatically\ntranslates grammatical constructions and grammatical process-\ning into neural computations. We proceed in two steps. FCG is\ntranslated from symbolic processing to numeric processing us-\ning a vector symbolic architecture, and this numeric processing\nis then translated into neural network computation. Our exper-\niments are still in an early stage but already show promise.\nKeywords: Vector Symbolic Architectures; Fluid Construc-\ntion Grammar; Connectionist Symbol Processing\nIntroduction\nSince the early days of cognitive science in the late nineteen\nfifties, there has been a struggle to reconcile two approaches\nto model intelligence and cognition: a symbolic and a nu-\nmeric one. The symbolic approach postulates an abstract\nlayer with symbols, symbolic structures, and operations over\nthese symbolic structures, so that it is straightforward to im-\nplement the kind of analysis that logicians, linguists, and psy-\nchologists tend to make. AI researchers have built remarkable\ntechnology to support such implementations based on high\nlevel ‘symbolic’ languages like LISP.\nThe numeric approach wants to look at cognitive process-\ning in terms of numeric operations. It is motivated by the fact\nthat biological neuronal networks are dynamical systems and\nthat numeric processing can model self-organizing processes.\nSo the numeric approach tries to get intelligent behavior with-\nout needing to postulate symbolic structures and operations\nexplicitly. There have been several waves exploiting this nu-\nmeric approach under the head of neural networks and most\nrecently deep learning.\nThe symbolic approach has proven its worth in model-\ning very large scale language systems, search engines, prob-\nlem solvers, models of expert knowledge, ontological and\nepisodic memory, etc., but most of these applications rely\nheavily on a human analyst who identifies the relevant sym-\nbols and symbol processing operations. It is usually claimedthat the symbolic approach is unable to deal with learning\nand grounding, but this criticism often ignores work within\nthe large field of (symbolic) machine learning and work on\ngrounding symbolic representations in perception and action\nby physical robots. 
While the numeric approach has proven\nits worth in the domains of pattern recognition which includes\nfeature extraction, category formation, and pattern detection,\nit has not been equally successful in the implementation of\n‘true’ physical symbol systems (Newell & Simon, 1976).\nMore specifically, it turns out to be non-trivial to represent\na group of properties of an object (a feature structure), to\ncompare feature-structures to each other, and to handle vari-\nable binding and feature structure merging - all operations\nwhich many researchers have argued to be necessary for intel-\nligence. We believe the symbolic and the numeric approach\ncan only be reconciled when they are viewed as two levels of\ndescription of the same system whereby the former describes\nand models natural objects at a higher level than the latter.\nEach level has its own abstractions at which regularities are\nrevealed and each own laws of operation. It is necessary and\nhighly interesting to find out how the different levels map to\neach other. This paper sets some small steps in this direc-\ntion. We do not go immediately from the symbol level to\nthe numeric level but rather use a two-step process: mapping\nthe symbolic level to a symbolic vector layer, as suggested\nby several researchers (Hinton, 1990; Neumann, 2001; Plate,\n1994; Gayler, Levy, & Bod, 2010) and then mapping this\nlayer to a possible neural implementation level in terms of\npopulations of neurons, which has also been explored already\nin (Eliasmith, 2013).\nThis paper focuses only on the first step. Experiments have\nalso been done for the second step using the Nengo frame-\nwork (Eliasmith, 2013) but are not reported here. The pa-\nper begins by introducing Fluid Construction Grammar as a\nchallenging test case for studying how to map symbolic pro-\ncessing to numeric processing. It then proceeds to describe\na potential approach for the translation of FCG to vector\nform, namely Holographic Reduced Representations (HRR).\nFinally, it presents the results of experiments using HRR to\nproduce a vector representation of FCG feature structures and\ncore operators.\n560\nFCG and its key operations\nFluid Construction Grammar is a computational platform for\nimplementing construction grammars (Steels, 2011). It is\na typical example of a complex symbol system addressing\na core competence of the human brain, namely the repre-\nsentation and processing (comprehension, production, learn-\ning) of language. FCG was originally designed for modeling\nlanguage learning and language change (Steels, 2012), and\nlanguage-based robot interaction (Steels & Hild, 2012). More\nrecently research has focused on challenging problems in lin-\nguistics and broader coverage grammars. The components of\nFCG are symbols, feature structures, transient structures and\nconstructions.\nSymbols are the elementary units of information. They\nstand in for syntactic categories (like ‘noun’ or ‘plural’),\nsemantic categories (like ‘animate’ or ‘future’), unit-names\n(e.g. ‘noun-phrase-17’), grammatical functions (like ‘sub-\nject’ or ‘head’), ordering relations of words and phrases (e.g.\n‘meets’ or ‘preceeds’), meaning-predicates, etc. A basic\ngrammar of a human language like English would certainly\nfeature thousands of such symbols, and the set of meaning-\npredicates is basically open-ended. Symbols can be bound to\nvariables, which are written as names with a question-mark\nin front as: ?unit, ?gender, ?subject, etc. 
Symbol names are\nchosen to make sense for us but of course the FCG interpreter\nhas no clue what they mean. The meaning of a symbol only\ncomes from its functions in the rest of the system.\nFeature structures are a way to group information about\na particular linguistic unit, for example, a word or a phrase.\nA feature structure has a name to index it (which is again a\nsymbol, possibly a variable) and a set of features and val-\nues. Construction grammars group all features of a unit to-\ngether, whatever the level. So a feature structure has phonetic\nand phonologic features, morphological information, syntac-\ntic and semantic categories, pragmatic information, as well\nas structural information about the many possible relations\nbetween units (constituent structure, functional structure, ar-\ngument structure, information structure, temporal structure,\netc.). All of these are represented explicitly using features\nand values. The values of a feature can be elementary sym-\nbols, sets of symbols (e.g. the constituents of a phrase form\na set), sequences or feature structures, thus allowing a hierar-\nchically structured feature structure.\nFeature structures are used to represent transient struc-\ntures . These are the structures built up during comprehen-\nsion and production. The features are grouped into a semantic\npole, which contains the more semantic oriented features, in-\ncluding pragmatics and semantic categorisations, and a syn-\ntactic pole, which contains the form-oriented features. For\ncomprehension, the initial transient structure contains all the\ninformation that could be gleaned from the form of the ut-\nterance by perceptual processes and then this transient struc-\nture is progressively expanded until it contains enough infor-\nmation to interpret the utterance. For production, the initial\ntransient structure contains the meaning to be expressed andthen this structure is transformed until enough information is\npresent to render a concrete utterance. There are often multi-\nple ways to expand a transient structure so a search space is\nunavoidable.\nConstructions are also represented as feature structures\nand they are more abstract than transient structures. They typ-\nically contain variables that can be bound to the elements of\ntransient structures and they contain less information about\nsome of the units. Constructions have a conditional part\nwhich has to match with the transient structure they try to ex-\npand and a contributing part which they add to the transient\nstructure if the conditional part matches. The conditional part\nis decomposed into a production lock which constrains the\nactivation of a construction in production and a comprehen-\nsion lock which constrains the construction in comprehen-\nsion. When the lock fits with the transient structure, all infor-\nmation from the construction which is not there yet is merged\ninto the transient structure. So match and merge are the most\nbasic fundamental operations of the grammar.\nHere is a simplified example of the double object construc-\ntion (Goldberg, 1995) handling phrases like “she gave him a\nbook”. It has a unit for the clause as a whole (?ditransitive-\nclause) and for the different constituents (?NP-1, ?verb, ?NP-\n2 and ?NP-3). The conditional part is on the right-hand side\nof the arrow and the contributing part on the left-hand side.\nUnits in the conditional part have a comprehension lock (on\ntop) and a production lock (below). 
The \u0014sign between units\nmeans ‘immediately preceeds’.\n2\n64?ditransive-clause\nconstituents:\nf?NP-1, ?verb,\n?NP-2, ?NP-3g3\n752\n6664?verb\nsem-valence:\nfreceiver(?receiver)g\nsyn-valence:\nfind-obj(?NP-2)g3\n7775 \n2\n6666664?ditransive-clause\n# predicates:\nfcause-receive(?event),\ncauser(?event,?causer),\ntransferred(?event,?transferred),\nreceiver(?event ?receiver) g\n/03\n77777752\n66664?NP-1\nsem-function: referring\nreferent: ?causer\nsem-cat:fanimateg\nphrasal-cat: NP\ncase: nominative3\n77775\u0014\n2\n6666666664?verb\nreferent: ?event\nsem-function: predicating\nsem-valence:\nfactor(?causer),\nundergoer(?transferred) g\nsyn-valence:\nfsubj(?NP-1),\ndir-obj(?NP-3)g3\n7777777775\u00142\n66664?NP-2\nsem-function: referring\nsem-cat:fanimateg\nreferent: ?receiver\nphrasal-cat: NP\ncase: not-nominative3\n77775\u0014\n2\n66664?NP-3\nsem-function: referring\nsem-cat:fphysobjg\nreferent: ?transferred\nphrasal-cat: NP\ncase: not-nominative3\n77775\n561\nA regular speaker of a language knows probably something\non the order of half a million constructions. So it is not possi-\nble to simply throw them in one bag and try constructions ran-\ndomly. FCG therefore features various mechanisms to fine-\ntune which construction should be selected and, if more than\none construction matches, which one should be pursued fur-\nther. They include priming networks, organisation of con-\nstructions into sets, partial orderings, a scoring mechanism,\nfootprints preventing some constructions from becoming ac-\ntive, etc.\nObviously Fluid Construction Grammar is a sophisticated\ncomputational formalism but all the mechanisms it proposes\nare absolutely necessary to achieve accurate (as opposed to\napproximate) comprehension and correct production of utter-\nances given a particular meaning. Due to the space limita-\ntions, the reader is referred to the rapidly growing literature\non FCG for more details (see also emergent-languages.org).\nHolographic Reduced Representations\nThe kind of structures used in FCG can be represented us-\ning the AI techniques provided by symbolic programming\nlanguages such as LISP. It is very non-trivial to implement\nFCG but doable and adequate implementations exist. We now\ncome to the key question: Can similar mechanisms also be\nimplemented using a numeric approach? This means that the\nbasic elements of FCG are encoded in a numeric format and\nthe basic FCG-operations are translated into numeric opera-\ntions over them. Various efforts to implement symbolic sys-\ntems in neural terms have already been undertaken (Shastri\n& Ajjanagadde, 1993). The key problem however is scaling:\nThe number of neurons required to represent the millions of\nsymbols in human-scale grammars so far becomes biologi-\ncally totally unrealistic (Eliasmith, 2013).\nVector-based approaches known as Vector Symbolic Ar-\nchitectures (VSA) have demonstrated promising results for\nrepresenting and manipulating symbolic structures using dis-\ntributed representations. Smolensky’s tensor product is one\nof the simplest variants of VSA (Smolensky, 1990). How-\never, the main problem with this approach is that a ten-\nsor binding results in an n2-dimensional vector and in case\nof recursive representations does not scale well (Eliasmith,\n2013). 
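To make the dimensionality contrast concrete, a minimal numpy sketch (illustrative only; the vector size and variable names are our own choices, not the paper's) shows that tensor-product binding produces an n x n object while the circular-convolution binding introduced next stays n-dimensional:

import numpy as np

n = 512
rng = np.random.default_rng(0)

# Random role and filler vectors with elements drawn from N(0, 1/n), as in Plate (1994).
role = rng.normal(0.0, np.sqrt(1.0 / n), n)
filler = rng.normal(0.0, np.sqrt(1.0 / n), n)

# Tensor-product binding (Smolensky): the bound object is an n x n matrix, so nested
# bindings grow as n^2, n^3, ... and recursive structures do not scale.
tensor_binding = np.outer(role, filler)
print(tensor_binding.shape)   # (512, 512)

# Circular-convolution binding (as used by HRR): the bound object stays n-dimensional,
# so arbitrarily deep structures keep a fixed dimensionality.
def cconv(x, y):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

hrr_binding = cconv(role, filler)
print(hrr_binding.shape)      # (512,)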
Alternative approaches have been suggested, such as Binary Spatter Codes (Kanerva, 1997) and Holographic Reduced Representations (HRR) (Plate, 1994), where binding is done with circular convolution, which results in an n-dimensional vector, so that the number of dimensions does not increase. Hinton explored distributed representations based on the idea of reduced representations (Hinton, 1990), and later Neumann (2001) demonstrated that connectionist representational schemes based on the concept of reduced representation and on the functional composition of hierarchical structures can support structure-sensitive processes which show a degree of systematicity. VSAs provide a means for representing structured knowledge using distributed vector representations and as such provide a way to translate symbolic to vector representations (Eliasmith, 2013). Since vectors can be used by many machine learning methods (for example, neural networks, support-vector machines, etc.), once a symbol system has been translated to a vector space architecture, a subsequent implementation of such a system in numeric terms should give us access to the machine learning methods associated with distributed representations.

Given the claims made by these various authors, we decided to explore VSA, more specifically Holographic Reduced Representations (HRR), for implementing Fluid Construction Grammar, and then further translate this representation using existing neural mappings (Eliasmith, 2013). The remainder of this section reflects on what is required for the mapping from FCG to VSA.

Representing FCG entities

Symbols. A symbol in FCG can be mapped to a randomly generated n-dimensional vector. All the elements of the vectors are drawn from a normal distribution N(0, 1/n), following Plate (1994). The symbol and its symbol vector are stored in an error-correction memory, as explained later.

Feature-value pairs. A feature-value pair (the primary component of a feature structure) can be mapped to the circular convolution of two vectors, the feature vector and the value vector. Following Plate (1994), we define convolution as follows:

Z = X \circledast Y    (1)

z_j = \sum_{k=0}^{n-1} x_k y_{j-k}  for j = 0, ..., n-1    (2)

Once we have a combined feature-value vector we can, given the feature, extract the value using circular correlation (Plate, 1994; Neumann, 2001), which convolves the pair with the approximate inverse of the feature vector (Plate, 1994):

X = Z \oslash Y    (3)

x_j = \sum_{k=0}^{n-1} z_k y_{j+k}  for j = 0, ..., n-1    (4)

Feature-set. A feature-set consists of a set of feature-value pairs. This can be mapped to HRR using vector addition (Plate, 1994):

Z = X \circledast Y + T \circledast S    (5)

Feature structures. Feature structures in FCG consist of feature-sets combined into units. Each unit has a unique name which is stored as a symbol in the symbol memory. A feature structure is constructed in the same way as a feature-set, i.e., by convolution and addition, except that, to also include units, we now convolve unit-feature-value triples rather than feature-value pairs. The feature structure is the addition of all triples:

Z = U \circledast X \circledast Y + U \circledast T \circledast S    (6)

Since we can represent feature structures, we can also represent transient structures as well as constructions of arbitrary length.

Parts of the structure (also called a trace) can be retrieved using the correlation operation (Equation 3).
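Equations (1)-(6), together with the clean-up step of Equation (7) below, can be illustrated with a short numpy sketch. This is our own minimal reconstruction under the stated conventions (elements drawn from N(0, 1/n), convolution for binding, correlation for unbinding, dot-product clean-up); the unit, feature and value names are invented for the example, and the sketch is not the authors' implementation:

import numpy as np

rng = np.random.default_rng(1)
N = 4096  # dimensionality; the experiments reported below indicate roughly 3,000 dims for 50-100 features


def vec():
    # Random symbol vector with elements drawn from N(0, 1/N).
    return rng.normal(0.0, np.sqrt(1.0 / N), N)


def cconv(x, y):
    # Circular convolution (binding), computed in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))


def ccorr(z, y):
    # Circular correlation (unbinding): convolve z with the approximate inverse of y.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(y)) * np.fft.fft(z)))


def cosine(a, b):
    # Similarity of Equation (7): dot product of normalised vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


# Error-correction (clean-up) memory: every known symbol and its vector.
memory = {name: vec() for name in
          ["NP-unit", "verb-unit", "phrasal-cat", "lex-cat", "case",
           "NP", "verb", "nominative"]}

# A toy feature structure as a sum of unit-feature-value triples (Equation 6).
fs = (cconv(cconv(memory["NP-unit"], memory["phrasal-cat"]), memory["NP"])
      + cconv(cconv(memory["NP-unit"], memory["case"]), memory["nominative"])
      + cconv(cconv(memory["verb-unit"], memory["lex-cat"]), memory["verb"]))

# Query: the value of 'case' in 'NP-unit'. Unbind twice, then clean up against memory.
noisy = ccorr(ccorr(fs, memory["NP-unit"]), memory["case"])
best = max(memory, key=lambda name: cosine(noisy, memory[name]))
print(best)  # expected: 'nominative' (retrieval is probabilistic and can fail at low N)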
For example, given U \circledast X and the whole structure, we can obtain Y.

However, correlation on traces is noisy. A trace preserves just enough information to recognise the result but not to reconstruct it. Therefore, we need an error-correction memory that stores vectors for possible units, features and values. The memory is used to compare the noisy output of the correlation operation with all vectors known to the system. Various comparison measures can be used; however, the most standard one is the dot product, which for two normalized vectors is equal to the cosine of their angle (Neumann, 2001). We define the following similarity for two vectors:

sim(X, Y) = \frac{X \cdot Y}{\|X\| \, \|Y\|}    (7)

This similarity is used to retrieve the vector stored in the error-correction memory with the highest similarity to the output of correlation. That vector represents the most plausible value of the feature in a particular trace.

Matching and Merging

The promise of distributed representations is that they can do very fast operations over complete feature structures (such as comparing them) without traversing the components as would be done in a symbolic implementation. Let us see how far we might be able to get without decomposition. FCG basically needs (i) the ability to copy a feature structure, (ii) to compare two feature structures (matching) and (iii) to merge a feature structure with another one.

Copying. It is not difficult to copy two feature structures because it means to copy the two vectors. However, we often need to replace all variables in a feature structure either by new variables (e.g. when copying a construction before applying it) or by their bindings (e.g. when creating the expanded transient structure after matching). It has been suggested that copy-with-variation can be done by convolving the current structure A with a transformation vector T (Plate, 1994):

A \circledast T = B    (8)

The transformation vector is first constructed by convolving the new values with the inverse of the current values, then adding up the pairs by vector addition. For example, in order to set the value of the lex-cat feature from its current value ?x to the new value which is the binding of ?x, e.g. noun, the inverse of ?x should be convolved with noun. The full transformation vector is

x' \circledast y + z' \circledast w    (9)

Such vectors can be hand-constructed (Plate, 1994), which is not desirable, or learnt from examples as shown in Neumann (2002).

Matching. In general, matching two feature structures can be done by the same principle that is used in the error-correcting memory, i.e. similarity (Equation 7). Since we use the dot product as our similarity measure, we have a computationally fast operation which is well understood mathematically. Using the dot product provides us with a way to compare a feature structure to every structure in a pool of structures and to find the structure with the highest similarity as the closest match. However, this ignores two problems we have not tackled yet: (i) Match in FCG is an includes rather than a similarity operation: if the source structure is a subset of the target structure, they should still match, even if the similarity between the two structures is low.
In fact, this is very common because the lock of a construction is always a subset of the transient structure; and (ii) this does not yet take variable bindings into account.

Merging. Merging two feature structures is straightforward because their respective vector representations can simply be added. It is possible to deal with variable bindings by first transforming both feature structures, replacing the variables by their bindings as discussed earlier. However, there are also some tricky issues to be resolved (e.g. variables may be bound to other variables, making a variable chain, and then one of these has to be substituted for all the others).

Preliminary implementation experiments

We now report on first steps in implementing the FCG→VSA mapping described above. Experiments were carried out in Python using the mathematical extension numpy.

Feature encoding and retrieval. First, we tested the precision of value retrieval from a feature set and a feature structure. We were particularly interested in the relationship between HRR vector dimensionality, the length of the FCG feature structure, and retrieval accuracy. We therefore tested different lengths of FCG feature sets/structures (5, 50, 500) against different dimensionalities of the HRR vectors (10, 100, 1,000, etc.). We did 100 runs for each combination (results averaged). Each time, HRR vectors for individual features were randomly initialized and combined into a feature-structure-representing HRR vector using convolution and addition. Then we attempted to retrieve all feature values and measured the number of correct retrievals divided by the original FCG feature set's length. Figure 1 (top) illustrates how the precision score increases with vectors of higher dimensionality, consistent with previous experiments with HRR (Neumann, 2001). To encode FCG feature sets with an average length of about 50-100 features, we required around 3,000-dimensional HRR vectors. This figure also illustrates how differences in HRR vector dimensionality are related to the cardinality of the feature-set. For example, in order to represent and successfully retrieve all values from a 5-pair set, around 300 dimensions appears to be sufficient, while a 500-pair feature-set requires just over 30,000. Our feature-value pairs behave in accordance with Plate (1994), which can be described as follows:

n = 3.16 \, (k - 0.25) \, \ln\!\left(\frac{m}{q^{3}}\right)    (10)
Really large grammars of 30\nunits and 30 feature-values pairs in each unit require roughly\n100,000 dimensions.\nFigure 1: Top: The effects of dimensionality on precision scores\nin feature sets and structures of various length. Bottom: Scaling of\nsets and structures (vector dimensionality at which precision scores\nbecome 1.0).\nMatching We numerically investigated if HRR representa-\ntions can be used to implement the FCG match operation in\ntwo phases: We investigated changes in sim (X;Y)under var-ious conditions working with feature sets (rather than feature\nstructures) for simplicity.\nFirst, we investigated how similarity (Equation 7) between\ntwo HRR vectors responds to structural changes vs changes\nin underlying feature values. Figure 2 (top, bottom) shows\nthat changes to feature values result in a greater decrease in\nsimilarity (reaching 0.0 after 102for a 100-pair structure)\nthan structural changes i.e. adding new pairs, which led to\na more gradual similarity degradation (reaching 0.0 after 105\nfor the same structure size). The difference between these\ntwo types of changes is important in FCG, where structures\ncan be structurally different and still match, while structures\nthat for example, differ in feature values, should not. This\nfinding is also in line with previous experiments comparing\nHRR structures using dot product (Plate, 1994), where simi-\nlarity was more sensitive to the content of feature-value pairs\nrather than the quanitiy of pairs.\nFigure 2: Top: Comparison of similarity values for changes in\nstructure vs changes in bindings for a structure of 10,000 pairs.\nBottom: Comparison of similarity as structures of different original\nlength are extended.\nWhen adding or removing new pairs, similarity (Equation\n7) is affected as illustrated in Figure 3. Initially, both struc-\ntures contained 1,000 pairs; the second structure was subse-\nquently changed by an order of magnitude at a time. Thus the\nfirst structure gradually became a subset of the second. The\ngraph illustrates that as the structure becomes extended with\nnew pairs, similarity of the two structures begins to drop, de-\n564\nspite the fact that the structures share the initial 1,000 pairs.\nHowever, this degradation is very gradual, and similarities\nreach 0.0 only after 105new pairs have been added. Further-\nmore, it can be seen that for larger structures such degradation\nis more gradual than for smaller ones (see Figure 2 bottom).\nFor example, for 10-pair structures similarity is almost 0.0\nafter 103new pairs have been added. This is still, however,\nfairly gradual, considering that a 10-pair structure had 1,000\npairs added to it before becoming dissimilar to the original.\nThe drop in sim (X;Y)appears to be asymmetrical: removing\npairs gives lower similarity than adding the same number of\npairs. This is expected since removing pairs results in less\nshared information between the structures than adding pairs.\nThese findings are good and bad news at the same time.\nOn the one hand it is good that feature value changes have\na more drastic effect on sim (X;Y)than structure changes.\nOn the other hand, the system will have to be able to au-\ntonomously find out whether two HRR vectors represent fea-\nture sets which differ structurally or in terms of feature values.\nPossibly this distinction can be learnt. But finding a solution\nthat is invariant to HRR vector dimension size and feature set\ncardinality is likely not an easy task. 
Another problem is the\ncommutative nature of sim (X;Y)which essentially does not\nallow to determine which is the more inclusive feature set.\nFigure 3: Gradual changes to a feature set of 1,000 feature-\nvalue pairs and their effects on similarity values.\nConclusions\nThis paper has speculated how a linguistically and computa-\ntionally adequate formalism for language, namely Fluid Con-\nstruction Grammar, could be represented in a Vector Sym-\nbolic Architecture, more specifically Holographic Reduced\nRepresentations, as a step towards a neural implementation.\nWe proposed a number of steps and reported some prelim-\ninary implementation experiments with promising results.\nThe main conclusion so far is that a number of fundamen-\ntal issues remain to be solved to make the FCG !VGA map-\nping fully operational, particularly the issue of implementing\na matching operation that uses a binding list and possibly ex-\ntends it while matching takes place.Acknowledgments\nResearch reported in this paper was funded by the Marie\nCurie ESSENCE ITN and carried out at the AI lab, Vrije Uni-\nversiteit Brussel and the Institut de Biologia Evolutiva (UPF-\nCSIC), Barcelona, financed by the FET OPEN Insight Project\nand the Marie Curie Integration Grant EVOLAN. We are in-\ndebted to comments from Johan Loeckx and Emilia Garcia\nCasademont.\nReferences\nEliasmith, C. (2013). How to build a brain: A neural ar-\nchitecture for biological cognition . New York, NY: Oxford\nUniversity Press.\nGayler, R. W., Levy, S. D., & Bod, R. (2010). Explanatory\naspirations and the scandal of cognitive neuroscience. In\nProceedings of the first annual meeting of the bica society\n(pp. 42–51). Amsterdam: IOS Press.\nGoldberg, A. (1995). Constructions: A construction gram-\nmar approach to argument structure . Chicago: University\nof Chicago Press.\nHinton, G. (1990). Mapping part-whole hierarchies into con-\nnectionist networks. Artificial Intelligence ,46, 47–75.\nKanerva, P. (1997). Fully distributed representation. In Proc.\nreal world computing symposium (pp. 358–365). Tsukuba-\ncity, Japan: Real World Computing Partnership.\nNeumann, J. (2001). Holistic processing of hierarchical\nstructures in connectionist networks . Doctoral dissertation,\nSchool of Informatics,The University of Edinburgh UK.\nNeumann, J. (2002). Learning the systematic transformation\nof holographic reduced representations. Cognitive Systems\nResearch ,3, 227–235.\nNewell, A., & Simon, H. A. (1976). Computer science as\nempirical inquiry: Symbols and search. Communications\nof the ACM ,19, 113–126.\nOhlsson, S., & Langley, P. (1985). Identifying solution paths\nin cognitive diagnosis (Tech. Rep. No. CMU-RI-TR-85-2).\nPittsburgh, PA: Carnegie Mellon University, The Robotics\nInstitute.\nPlate, T. A. (1994). Distributed representations and nested\ncompositional structure . Doctoral dissertation, Graduate\nDepartment of Computer Science, University of Toronto,.\nShastri, L., & Ajjanagadde, V . (1993). From simple associa-\ntions to systematic reasoning: A connectionist representa-\ntion of rules, variables, and dynamic bindings. Behavioral\nand Brain Sciences ,16, 417?494.\nSmolensky, P. (1990). Tensor product variable binding and\nthe representation of symbolic structures in connectionist\nsystems. Artificial Intelligence ,46, 159–216.\nSteels, L. (Ed.). (2011). Design patterns in Fluid Construc-\ntion Grammar . Amsterdam: John Benjamins.\nSteels, L. (Ed.). (2012). Experiments in cultural language\nevolution . 
Amsterdam: John Benjamins.
Steels, L., & Hild, M. (Eds.). (2012). Language grounding in robots. New York: Springer-Verlag.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NuBVOoSnlTh", "year": null, "venue": null, "pdf_link": "https://www.imperial.ac.uk/media/imperial-college/medicine/mrc-gida/2020-05-21-COVID19-Report-23.pdf", "forum_link": "https://openreview.net/forum?id=NuBVOoSnlTh", "arxiv_id": null, "doi": null }
{ "title": "State-level tracking of COVID-19 in the United States", "authors": [ "H Juliette T Unwin", "Swapnil Mishra", "Valerie C Bradley", "Axel Gandy", "Michaela Vollmer", "Thomas Mellan", "Helen Coupland", "Kylie Ainslie", "Charles Whittaker", "Jonathan Ish-Horowicz", "Sarah Lucie Filippi", "Xiaoyue Xi", "Melodie Monod", "Oliver Ratmann", "Michael Hutchinson", "Fabian Valka", "Harrison Zhu", "Iwona Hawryluk", "Philip Milton", "Marc Baguelin", "Adhiratha Boonyasiri", "Nick Brazeau", "Lorenzo Cattarino", "Giovanni Charles", "Laura V Cooper", "Zulma Cucunuba", "Gina Cuomo-Dannenburg", "Bimandra Djaafara", "Ilaria Dorigatti", "Oliver J Eales", "Jeff Eaton", "Sabine van Elsland", "Richard FitzJohn", "Katy Gaythorpe", "William Green", "Timothy Hallett", "Wes Hinsley", "Natsuko Imai", "Ben Jeffrey", "Edward Knock", "Daniel Laydon", "John Lees", "Gemma Nedjati-Gilani", "Pierre Nouvellet", "Lucy Okell", "Alison Ower", "Kris V Parag", "Hayley A Thompson", "Robert Verity", "Patrick Walker", "Caroline Walters", "Yuanrong Wang", "Oliver J Watson", "Lilith Whittles", "Azra Ghani", "Neil M Ferguson", "Steven Riley", "Christl Donnelly", "Samir Bhatt", "Seth Flaxman" ], "abstract": "As of 20 May 2020, the US Centers for Disease Control and Prevention reported 91,664 confirmed or probable COVID- 19-related deaths, more than twice the number of deaths reported in the next most severely impacted country. In order to control the spread of the epidemic and prevent health care systems from being overwhelmed, US states have imple- mented a suite of non-pharmaceutical interventions (NPIs), including “stay-at-home” orders, bans on gatherings5.7*, and business and school closures.\nWe model the epidemics in the US at the state-level, using publicly available death data within a Bayesian hierarchical semi-mechanistic framework. For each state, we estimate the time-varying reproduction number (the average number of secondary infections caused by an infected person), the number of individuals that have been infected and the number of individuals that are currently infectious. We use changes in mobility as a proxy for the impact that NPIs and other behaviour changes have on the rate of transmission of SARS-CoV-2. We project the impact of future increases in mobility, assuming that the relationship between mobility and disease transmission remains constant. We do not address the potential effect of additional behavioural changes or interventions, such as increased mask-wearing or testing and tracing strategies.\nNationally, our estimates show that the percentage of individuals that have been infected is 4.1% [3.7%-4.5%], with wide variation between states. For all states, even for the worst affected states, we estimate that less than a quarter of the population has been infected; in New York, for example, we estimate that 16.6% [12.8%-21.6%] of individuals have been infected to date. 
Our attack rates for New York are in line with those from recent serological studies [1] broadly supporting our modelling choices.\nThere is variation in the initial reproduction number, which is likely due to a range of factors; we find a strong association between the initial reproduction number with both population density (measured at the state level) and the chronological date when 10 cumulative deaths occurred (a crude estimate of the date of locally sustained transmission).\nOur estimates suggest that the epidemic is not under control in much of the US: as of 17 May 2020, the reproduction number is above the critical threshold (1.0) in 24 [95% CI: 20-30] states. Higher reproduction numbers are geographically clustered in the South and Midwest, where epidemics are still developing, while we estimate lower reproduction numbers in states that have already suffered high COVID-19 mortality (such as the Northeast). These estimates suggest that caution must be taken in loosening current restrictions if effective additional measures are not put in place.\nWe predict that increased mobility following relaxation of social distancing will lead to resurgence of transmission, keep- ing all else constant. We predict that deaths over the next two-month period could exceed current cumulative deaths by greater than two-fold, if the relationship between mobility and transmission remains unchanged. Our results suggest that factors modulating transmission such as rapid testing, contact tracing and behavioural precautions are crucial to offset the rise of transmission associated with loosening of social distancing.\nOverall, we show that while all US states have substantially reduced their reproduction numbers, we find no evidence that any state is approaching herd immunity or that its epidemic is close to over.", "keywords": [], "raw_extracted_content": "24May2020 ImperialCollegeCOVID-19ResponseTeam\nDOI:https://doi.org/10.25561/79231 Page1Report 23: State-level tracking of COVID-19 in the United States\nH Juliette T Unwin\u0003, Swapnil Mishra\u00032, Valerie C Bradley\u0003, Axel Gandy\u0003, Michaela A C Vollmer, Thomas Mellan, Helen \nCou-pland, Kylie Ainslie, Charlie Whittaker, Jonathan Ish-Horowicz, Sarah Filippi, Xiaoyue Xi, Melodie Monod, Oliver \nRatmann, Michael Hutchinson, Fabian Valka, Harrison Zhu, Iwona Hawryluk, Philip Milton, Marc Baguelin, Adhiratha \nBoonyasiri, Nick Brazeau, Lorenzo Cattarino, Giovanni Charles, Laura V Cooper, Zulma Cucunuba, Gina Cuomo-\nDannenburg, Bimandra Djaafara, Ilaria Dorigatti, Oliver J Eales, Jeff Eaton, Sabine van Elsland, Richard FitzJohn, Katy \nGaythorpe, William Green, Timothy Hallett, Wes Hinsley, Natsuko Imai, Ben Jeffrey, Edward Knock, Daniel Laydon, John \nLees, Gemma Nedjati-Gilani, Pierre Nouvellet, Lucy Okell, Alison Ower, Kris V Parag, Igor Siveroni, Hayley A Thompson, \nRobert Verity, Patrick Walker, Caroline Walters, Yuanrong Wang, Oliver J Watson, Lilith Whittles, Azra Ghani, Neil M \nFerguson, Steven Riley, Christl A. Donnelly, Samir Bhatt1;\u0003 and Seth Flaxman\u0003\nDepartment of Infectious Disease Epidemiology, Imperial College London \nDepartment of Mathematics, Imperial College London\nWHO Collaborating Centre for Infectious Disease Modelling\nMRC Centre for Global Infectious Disease Analytics\nAbdul Latif Jameel Institute for Disease and Emergency Analytics, Imperial College London\nDepartment of Statistics, University of Oxford \n\u0003Contributed equally. 
\n1Correspondence: [email protected] correspondence: [email protected]\nSUGGESTED CITATION \nH Juliette T Unwin, Swapnil Mishra, Valerie C Bradley et al. Report 23: State-level tracking of COVID-19 in \nthe United States (21-05-2020), doi: https://doi.org/10.25561/79231.\nThis work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 \nInternational License.\n\n24May2020 ImperialCollegeCOVID-19ResponseTeam\nSummary\nAs of 20 May 2020, the US Centers for Disease Control and Prevention reported 91,664 confirmed or probable COVID-\n19-related deaths, more than twice the number of deaths reported in the next most severely impacted country. In\norder to control the spread of the epidemic and prevent health care systems from being overwhelmed, US states have\nimplemented a suite of non-pharmaceutical interventions (NPIs), including “stay-at-home” orders, bans on gatherings,\nandbusinessandschoolclosures.\nWe model the epidemics in the US at the state-level, using publicly available death data within a Bayesian hierarchical\nsemi-mechanisticframework. Foreachstate,weestimatethetime-varyingreproductionnumber(theaveragenumberof\nsecondaryinfectionscausedbyaninfectedperson),thenumberofindividualsthathavebeeninfectedandthenumber\nof individuals that are currently infectious. We use changes in mobility as a proxy for the impact that NPIs and other\nbehaviourchangeshaveontherateoftransmissionofSARS-CoV-2. Weprojecttheimpactoffutureincreasesinmobility,\nassuming that the relationship between mobility and disease transmission remains constant. We do not address the\npotentialeffectofadditionalbehaviouralchangesorinterventions,suchasincreasedmask-wearingortestingandtracing\nstrategies.\nNationally,ourestimatesshowthatthepercentageofindividualsthathavebeeninfectedis4.1%[3.7%-4.5%],withwide\nvariation between states. For all states, even for the worst affected states, we estimate that less than a quarter of the\npopulationhasbeeninfected;inNewYork,forexample,weestimatethat16.6%[12.8%-21.6%]ofindividualshavebeen\ninfectedtodate. OurattackratesforNewYorkareinlinewiththosefromrecentserologicalstudies[1]broadlysupporting\nourmodellingchoices.\nThereisvariationintheinitialreproductionnumber,whichislikelyduetoarangeoffactors;wefindastrongassociation\nbetweentheinitialreproductionnumberwithbothpopulationdensity(measuredatthestatelevel)andthechronological\ndatewhen10cumulativedeathsoccurred(acrudeestimateofthedateoflocallysustainedtransmission).\nOur estimates suggest that the epidemic is not under control in much of the US: as of 17 May 2020, the reproduction\nnumberisabovethecriticalthreshold(1.0)in24[95%CI:20-30]states. Higherreproductionnumbersaregeographically\nclusteredintheSouthandMidwest,whereepidemicsarestilldeveloping,whileweestimatelowerreproductionnumbers\ninstatesthathavealreadysufferedhighCOVID-19mortality(suchastheNortheast). Theseestimatessuggestthatcaution\nmustbetakeninlooseningcurrentrestrictionsifeffectiveadditionalmeasuresarenotputinplace.\nWepredictthatincreasedmobilityfollowingrelaxationofsocialdistancingwillleadtoresurgenceoftransmission,keep-\ning all else constant. We predict that deaths over the next two-month period could exceed current cumulative deaths\nbygreaterthantwo-fold, iftherelationshipbetweenmobilityandtransmissionremainsunchanged. 
Ourresultssuggest\nthat factors modulating transmission such as rapid testing, contact tracing and behavioural precautions are crucial to\noffsettheriseoftransmissionassociatedwithlooseningofsocialdistancing.\nOverall, we show that while all US states have substantially reduced their reproduction numbers, we find no evidence\nthatanystateisapproachingherdimmunityorthatitsepidemicisclosetoover.\nWeinvitescientificpeerreviewshere: https://openreview.net/group?id=-Agora/COVID-19\nDOI:https://doi.org/10.25561/79231 Page2\n24May2020 ImperialCollegeCOVID-19ResponseTeam\n1 Introduction\nThefirstdeathcausedbyCOVID-19intheUnitedStatesiscurrentlybelievedtohaveoccurredinSantaClara,Californiaon\nthe6thFebruary[2]. InApril2020,thenumberofdeathsattributedtoCOVID-19intheUnitedStates(US)surpassedthat\nofItaly[3]. ThroughoutMarch2020,USstategovernmentsimplementedavarietyofnon-pharmaceuticalinterventions\n(NPIs),suchasschoolclosuresandstay-at-homeorders,tolimitthespreadofSARS-CoV-2andhelpmaintainthecapacity\nof health systems to treat as many severe cases of COVID-19 as possible. Courtemanche et al.[4] use an event-study\nmodel to determine that such NPIs were successful in reducing the growth rate of COVID-19 cases across US counties.\nWesimilarlyseektoestimatetheimpactofNPIsonCOVID-19transmission,butdosowithasemi-mechanisticBayesian\nmodel that reflects the underlying process of disease transmission and relies on mobility data released by companies\nsuchasGoogle[5]. Mobilitymeasuresrevealstarkchangesinbehaviourfollowinglarge-scalegovernmentinterventions,\nwith individuals spending more time at home and correspondingly less time at work, at leisure centres, shopping, and\non public transit. Some state governments, like the Colorado Department of Public Health, have already begun to use\nsimilar mobility data to adjust guidelines over social distancing [6]. As more and more states ease the stringency of\ntheirNPIs,futurepolicydecisionswillrelyontheinteractionbetweenmobilityandNPIsandtheirsubsequentimpacton\ntransmission.\nInapreviousreport[7],weintroducedanewBayesianstatisticalframeworkforestimatingtherateoftransmissionand\nattackratesforCOVID-19. Ourapproachinfersthetime-varyingreproductionnumber, Rt,whichmeasurestransmission\nintensity. Wecalculatethenumberofnewinfectionsthroughcombiningpreviousinfectionswiththegenerationinterval\n(the distribution of times between infections). The number of deaths is then a function of the number of infections\nand the infection fatality rate (IFR). We estimate the posterior probability of our parameters given the observed data,\nwhileincorporatingprioruncertainty. This makesour approachempiricallydrivenwhile incorporatingas manysources\nofuncertaintyaspossible. Inthisreport,similarto[8,9],weadaptouroriginalframeworktomodeltransmissioninthe\nUSatthestatelevel. Inourformulationweparameterise Rtasafunctionofseveralmobilitytypes. Ourparameterisation\nofRtmakestheexplicitassumptionthatchangesintransmissionarereflectedthroughmobility. Whilewedoattemptto\naccountforresidualvariation,wenotethattransmissionwillalsobeinfluencebyadditionalfactorsandsomeoftheseare\nconfoundedcausallywithmobility. Weutilisepartialpoolingofparameters,whereinformationissharedacrossallstates\ntoleverageasmuchsignalaspossible,butindividualeffectsarealsoincludedforstate-andregion-specificidiosyncrasies.\nOur partial pooling model requires only one state to provide a signal for the impact of mobility, and then this effect is\nshared across all states. 
While this sharing can potentially lead to initial over or under estimation effect sizes, it also\nmeansthataconsistentsignalforallstatescanbeestimatedbeforethatsignalispresentedinanindividualstateswith\nlittledata.\nWe infer plausible upper and lower bounds (Bayesian credible interval summaries of our posterior distribution) of the\ntotal population that have been infected by COVID-19 (also called the cumulative attack rate or attack rate). We also\nestimatethe effectivenumber ofindividuals currentlyinfectiousgivenour generationdistribution. Weinvestigatehow\nthereproductionnumberhaschangedovertimeandstudytheheterogeneityinstartingandendingratesbystate,date,\nand population density. We assess whether there is evidence that changes in mobility have so far been successful at\nreducingRtto less than 1. To assess the risk of resurgence when interventions are eased, we use simple scenarios of\nincreased mobility and simulate forwards in time. From these simulations we study how sensitive individual states are\nDOI:https://doi.org/10.25561/79231 Page3\n24May2020 ImperialCollegeCOVID-19ResponseTeam\ntoresurgence,andtheplausiblemagnitudeofthisresurgence.\nDetails of the data sources and a technical description of our model and are found in Sections 4 and 5 respectively.\nGenerallimitationsofourapproacharepresentedbelowintheconclusions.\n2 Results\n2.1 Mobilitytrends,interventionsandeffectsizes\nMobility dataprovidea proxy for the behavioural changes thatoccur in response to non-pharmaceutical interventions.\nFigure 1 shows trends in mobility for the 50 states and the District of Columbia (see Section 4 for a description of the\nmobilitydimensions). RegionsarebasedonUSCensusDivisions,modifiedtoaccountforcoordinationbetweengroups\nofstategovernments[10]. Thesetrendsarerelativetoastate-dependentbaseline,whichwascalculatedshortlybefore\nthe COVID-19 epidemic. For example, a value of \u000020% in the transit station trend means that individuals, on average,\nare visiting and spending 20% less time in transit hubs than before the epidemic. In Figure 1, we overlay the timing of\ntwomajorstate-wideNPIs(stayathomeandemergencydecree)(see[11]fordetails). Wealsonoteintuitivechangesin\nmobilitysuchasthespikeon11thand12thAprilforEaster. Inourmodel,weusethetimespentatone’sresidenceand\ntheaverageoftimespentatgrocerystores,pharmacies,recreationcentres,andworkplaces. Forstatesinwhichthe2018\nAmericanCommunitySurveyreportsthatmorethan20%oftheworkingpopulationcommutesonpublictransportation,\nwealsousethetimespentattransithubs(includinggasstationsetc.)[12].\nTo justify the use of mobility as a proxy for behaviour, we regress average mobility against the timings of major NPIs\n(represented as step functions). The median correlation between the observed average mobility and the linear predic-\ntionsfromNPIswasapproximately89%(seeAppendixA).Weobservedreducedcorrelationwhenlagging(forwardand\nbackwards)thetimingofNPIssuggestingimmediateimpactonmobility. WemakenoexplicitcausallinkbetweenNPIs\nandmobility,however,thisrelationshipisplausiblycausallylinkedbutisconfoundedbyotherfactors.\nThe mobility trends data suggests that the United States’ national focus on the New York epidemic may have led to\nsubstantial changes in mobility in nearby states, like Connecticut, prior to any mandated interventions in those states.\nThis observation adds support to the hypothesis that mobility can act as a suitable proxy for the changes in behaviour\ninducedbytheimplementationofthemajorNPIs. 
Infurthercorroboration,apollconductedbyMorningConsult/Politico\non26thMarch2020foundthat81%ofrespondentsagreedthat“Americansshouldcontinuetosocialdistanceforaslong\nasisneededtocurbthespreadofcoronavirus,evenifitmeanscontinueddamagetotheeconomy”[13]. Whilesupport\nforstrongsocial distancinghas sinceerodedslightly(70%agreein thesamepollconductedlateron10May2020), the\noverall high support for social distancing suggests strong compliance with NPIs, and that the changes to mobility that\nwe observe over the same time period are driven by adherence to those policy recommendations. However, we note\nthatmobilityalonecannotcapturealltheheterogeneityintransmissionrisk. Inparticular,itcannotcapturetheimpact\nof case-based interventions (such as testing and tracing). To account for this residual variation missed by mobility we\nuseasecond-order,weekly,autoregressiveprocess. Thisautoregressiveprocessisanadditionalterminourparametric\nequationfor Rtandaccountsforresidualnoisebycapturingacorrelationstructurewherecurrent Rtiscorrelatedwith\npreviousweeks Rt(seeFigures12).\nDOI:https://doi.org/10.25561/79231 Page4\n24May2020 ImperialCollegeCOVID-19ResponseTeam\nFigure 2 shows the average global effect sizes for the mobility types used in our model. Estimates for the regional and\nstate-leveleffectsizesareincludedinAppendixB.Wefindthatincreasedtimespentinresidencesreducestransmission\nby54.3%[17.8%-80.8%],andthatdecreasesinoverallaveragemobilityreducedtransmissionby62.7%[43.1%-74.5%].\nThesetwoeffectsarelikelyrelated-aspeoplespendlesstimeinpublicspaces,capturedbyouraveragemobilitymetric,\ntheyconverselyspendmoretimeathome. Overall,thisdecreasesthenumberofpeoplewithwhomtheaverageindivid-\nualcomesintocontact,thusslowingtransmission,evenifmoretimeathomemayincreasetransmissionwithinasingle\nresidence. We find time spent in transit hubs does not have a significant effect on transmission. The impact of transit\nmobility is in contrast to what we observed in Italy [8], and likely reflects higher reliance on cars and less use of public\ntransitintheUSthanEurope[14].\nThe learnt random effects from the autoregressive process are shown in Appendix C. These results show that mobility\nexplains most of the changes in transmission in places without advanced epidemics, as evidenced by the flat residual\nvariation. However,forregionswithadvancedepidemics,suchasNewYorkorNewJersey,thereisevidenceofadditional\ndecreases in transmission that cannot be explained by mobility alone. These may capture the impact of other control\nmeasures, such as increased testing, as well as behavioural responses not captured by mobility, like increased mask-\nwearingandhand-washing.\n2.2 Impactofinterventionsonreproductionnumbers\nWe estimate a national average initial reproduction number ( Rt=0) of 2.2 [0.3 Montana - 5.0 New York] and find that,\nsimilartoinfluenzatransmissionincities(seeDalziel etal.[15]),Rt=0iscorrelatedwithpopulationdensity(Figure3)1.\nDalzieletal.hypothesizethatmorepersonalcontactoccursinmoredenselypopulatedareas, thusresultinginalarger\nRt=0.\nRt=0isalsonegativelycorrelatedwithwhenastateobservedcumulative10deaths(Figure3). Thisnegativecorrelation\nimpliesthatstatesbeganlocallysustainedtransmissionlaterhadalower Rt=0. Apossiblehypothesisforthiseffectisthe\nonsetofbehaviouralchangesinresponsetootherepidemicsintheUS.Analternativeexplanationisthattheestimatesof\ntheearlygrowthratesoftheepidemicsinthestatesaffectedearliestarebiasedupwardsbytheearlynationalramp-up\nofsurveillanceandtesting. 
Despite Rt=0 being highly variable, in part due to the factors discussed above, the majority of states have generally decreased their Rt since the first 10 deaths were observed (Figure 4). We estimate that 26 states have a posterior mean Rt of less than one, but only 8 have 95% credible intervals that are completely below one. A posterior mean Rt below one with a credible interval that includes one suggests that the epidemic is likely under control in that state, but the potential for increasing transmission cannot be ruled out. Therefore, our results show that very few states have conclusively controlled their epidemics. Of the ten states with the highest current Rt, half are in the Great Lakes region (Illinois, Ohio, Minnesota, Indiana, and Wisconsin). In Figure 5 we show the geographical variation in the posterior probability that Rt is less than 1; green states are those where the probability that Rt is below 1 is high, and pink states are those with a low probability. The closer a value is to 100%, the more certain we are that the rate of transmission is below 1 and that new infections are not increasing at present. This is in contrast to many European countries that have conclusively reduced their Rt to less than one at present [7].

¹ We also considered the relationship of Rt with a population density weighted by the proportion of the total population of the state in each census tract. This was less strongly correlated with Rt=0.

Figure 1: Comparison of mobility data from Google with government interventions for the 50 states and the District of Columbia. The solid lines show average mobility (across categories “retail & recreation”, “grocery & pharmacy”, “workplaces”), the dashed lines show “transit stations” and the dotted lines show “residential”. Intervention dates are indicated by shapes as shown in the legend; see Section 4 for more information about the interventions implemented. There is a strong correlation between the onset of interventions and reductions in mobility.

Figure 2: Covariate effect sizes. Average mobility combines “retail & recreation”, “grocery & pharmacy”, “workplaces”. Transit stations is only used as a covariate for states in which more than 20% of the working population commutes using public transportation. We plot estimates of the posterior mean effect sizes and 95% credible intervals for each mobility category. The relative % reduction in Rt metric is interpreted as follows: the larger the percentage, the more Rt decreases, meaning the disease spreads less; a 100% relative reduction ends disease transmissibility entirely. The smaller the percentage, the less effect the covariate has on transmissibility. A 0% relative reduction has no effect on Rt and thus no effect on the transmissibility of the disease, while a negative percent reduction implies an increase in transmissibility.

[Scatter panels not recoverable from the extracted text; points are coloured by region: Great Lakes, Great Plains, Mountain, Northeast Corridor, Pacific, South Atlantic, Southern Appalachia, TOLA.]

Figure 3: Comparison of initial Rt=0 with population density (a) and date of 10 cumulative deaths (b). R-squared values are 0.466 and 0.449 respectively.

Figure 4: State-level estimates of initial Rt and the current average Rt over the past week. The colours indicate regional grouping as shown in Figure 1.

Figure 5 shows that while we are confident that some states have controlled transmission, we are similarly confident that many states have not. Specifically, we are more than 50% sure that Rt > 1 in 25 states. There is substantial geographical clustering; most states in the Midwest and the South have rates of transmission that suggest the epidemic is not yet under control. We do note here that many states with Rt < 1 are still in the early epidemic phase with few deaths so far.

2.3 Trends in COVID-19 transmission

In this section we focus on five states: Washington, New York, Massachusetts, Florida, and California. These states represent a variety of COVID-19 government responses and outbreaks that have dominated the national discussion of COVID-19.

Figure 5: Our estimates of the probability that Rt is less than one (epidemic control) for each state.

Figure 6² shows the trends for these states (trends for all other states can be found in Appendix D). Regressing average mobility against the timing of NPIs yielded an average correlation of around 97%. Along with the strong visual correspondence, these results suggest that interventions have had a very strong effect on mobility, which, given our modelling assumptions, translates into effects on transmission intensity. We also note that there are clear day-of-the-
On February 29th 2020, Washington state announced the nation's first COVID-related death and became the first state to declare a state of emergency. Despite observing its first COVID-19 death only a day after Washington state, New York did not declare a state of emergency until 7 March 2020. We estimate that R_t began to decline in Washington state before it did in New York, likely due to earlier intervention, but that stay-at-home orders in both states successfully reduced R_t to less than one. However, we estimate that R_t in Washington has increased in recent weeks and is currently above one, while it remains below one in New York (New York 0.7 [0.4-1.1] and Washington 0.9 [0.6-1.3]). Approximately one week after New York, Massachusetts issued a stay-at-home order, but the mean R_t remains about one (1.1 [0.7-1.4]). In Florida, R_t reduced noticeably before the stay-at-home order, suggesting that behaviour change started before the stay-at-home order. However, increases in mobility appear to have driven transmission up recently (1.2 [0.8-1.6]). California implemented early interventions in San Francisco [16], and was the first state to issue a stay-at-home order [17], but the mean R_t still remains greater than one (1.0 [0.7-1.4]). For all five states shown here there is considerable uncertainty around the current value of R_t.

Figure 6: State-level estimates of infections, deaths, and R_t for Washington, New York, Massachusetts, Florida, and California. Left: daily number of deaths; brown bars are reported deaths, blue bands are predicted deaths (dark blue 50% credible interval (CI), light blue 95% CI). Middle: daily number of infections; brown bars are reported confirmed cases, blue bands are predicted infections, CIs as on the left. Afterwards, if R_t is above 1, the number of infections will start growing again. Right: time-varying reproduction number R_t; dark green 50% CI, light green 95% CI. Icons are interventions shown at the time they occurred.

[Footnote 2: Death data until 17 May 2020 is included in our model and displayed in the plots; infections and R_t are displayed consistent with the availability of Google mobility data, until 9th May 2020.]

2.4 Attack rates

We show the percentage of total population infected, or cumulative attack rate, in Table 1 for all 50 states and the District of Columbia. In general, the attack rates across states remain low; we estimate that the average percentage of people that have been infected by COVID-19 is 4.1% [3.7%-4.5%]. However, this low national average masks a stark heterogeneity across states. New York and New Jersey have the highest estimated attack rates, of 16.6% [12.8%-21.6%] and 16.1% [11.9%-21.7%] respectively, and Connecticut, Massachusetts, and Washington, D.C. all have attack rates over 10%. Conversely, other states that have drawn attention for early outbreaks, such as California, Washington, and Florida, have attack rates of around 1%, and other states where the epidemic is still early, like Maine, have estimated attack rates of less than 1%. We note here that there is the possibility of under-reporting of deaths in these states. Under-reporting of COVID-19 attributable deaths will result in an underestimate of the attack rates. We note that we have found our estimates to be reasonably robust in settings where there is significant under-reporting (e.g. Brazil [9]).

Table 1: Posterior model estimates of the percentage of the total population infected as of 17 May 2020. Each entry is the mean [95% credible interval].

Alabama 1.9% [1.2%-3.0%] | Montana 0.2% [0.0%-0.4%]
Alaska 0.2% [0.0%-0.7%] | Nebraska 1.2% [0.7%-2.0%]
Arizona 2.3% [1.4%-4.0%] | Nevada 1.8% [1.3%-2.7%]
Arkansas 0.5% [0.3%-0.8%] | New Hampshire 2.2% [1.3%-3.6%]
California 1.6% [1.1%-2.5%] | New Jersey 16.1% [11.9%-21.7%]
Colorado 4.6% [3.1%-7.3%] | New Mexico 2.6% [1.6%-4.3%]
Connecticut 13.3% [9.7%-18.3%] | New York 16.6% [12.8%-21.6%]
Delaware 5.4% [3.5%-8.7%] | North Carolina 1.1% [0.7%-1.7%]
District of Columbia 10.8% [7.6%-15.4%] | North Dakota 0.9% [0.5%-1.6%]
Florida 1.3% [0.9%-2.0%] | Ohio 2.6% [1.7%-4.0%]
Georgia 2.7% [1.9%-3.8%] | Oklahoma 1.0% [0.7%-1.4%]
Hawaii 0.1% [0.0%-0.3%] | Oregon 0.4% [0.2%-0.6%]
Idaho 0.6% [0.3%-0.8%] | Pennsylvania 5.5% [3.7%-8.6%]
Illinois 7.1% [4.5%-11.2%] | Rhode Island 6.8% [4.8%-9.9%]
Indiana 5.0% [3.2%-7.9%] | South Carolina 1.2% [0.8%-1.8%]
Iowa 2.5% [1.5%-4.3%] | South Dakota 1.0% [0.5%-1.9%]
Kansas 0.9% [0.6%-1.3%] | Tennessee 0.7% [0.5%-1.2%]
Kentucky 1.0% [0.7%-1.4%] | Texas 1.4% [0.8%-2.4%]
Louisiana 8.0% [6.0%-11.0%] | Utah 0.5% [0.3%-0.9%]
Maine 0.5% [0.3%-0.8%] | Vermont 0.8% [0.5%-1.3%]
Maryland 5.6% [3.9%-8.3%] | Virginia 2.2% [1.4%-3.4%]
Massachusetts 13.0% [9.3%-18.3%] | Washington 1.9% [1.4%-2.7%]
Michigan 5.9% [4.5%-7.8%] | West Virginia 0.5% [0.3%-0.7%]
Minnesota 3.1% [1.8%-5.2%] | Wisconsin 1.2% [0.8%-1.8%]
Mississippi 3.8% [2.4%-6.1%] | Wyoming 0.3% [0.1%-0.6%]
Missouri 1.7% [1.1%-2.7%] | National 4.1% [3.7%-4.5%]

Figure 7 shows the effective number of infectious individuals and the number of newly infected individuals on any given day for each of the 8 regions in our model. The effective number of infectious individuals is calculated using the generation time distribution, where individuals are weighted by how infectious they are over time. The fully infectious average includes asymptomatic and symptomatic individuals. Currently, we estimate that there are 1,344,000 [368,000-3,320,000] infectious individuals across the whole of the US, which corresponds to 0.42% [0.11%-1.03%] of the population. Table 2 shows that the number currently infected across different states is highly heterogeneous. Figure 7 shows that despite new infections being in a steep decline, the number of people still infectious, and therefore able to sustain onward transmission, can still be large. This discrepancy underscores the importance of testing and case-based isolation as a means to control transmission. We note that the expanding cone of uncertainty is in part due to uncertainties arising from the lag between infections and deaths, but also from trends in mobility. State-level estimates of the total number of infectious individuals over time are given in Appendix E and the current number of infectious individuals is given in Table 2.

2.5 Scenarios

The relationship between mobility and transmission is the principal mechanism affecting values of R_t in our model. Therefore, we illustrate the impact of likely near-term scenarios for R_t over the next 8 weeks, under assumptions of relaxations of interventions leading to increased mobility. We note that mobility is acting here as a proxy for the number of potentially infectious contacts. Our mobility scenarios [18] do not account for additional interventions that may be implemented, such as mass testing and contact tracing. It is also likely that when interventions are lifted, behaviour may modify the effect sizes of mobility and reduce the impact of mobility on transmission. Factors such as increased use of masks and increased adherence to social distancing are examples. Given these factors, we caution the reader to regard our scenarios as pessimistic, but illustrative of the potential risks.

We define scenarios based on percent return to baseline mobility, which is by definition 0. As an example, say that mobility is currently 50% lower than baseline, or -50%, perhaps due to the introduction of social-distancing NPIs. Then a 20% increase of mobility from its current level is −50% × (1 − 20%) = −40%. Similarly, if mobility in residences increased by 10% following a stay-at-home order, our 20% scenario reduces this to an 8% increase over baseline. This assumes that people have begun to resume pre-lockdown behaviour, but have not yet returned to baseline mobility. We hold this 20% return to baseline constant for the duration of the 8-week scenario.

We present three scenarios: (a) constant mobility (mobility remains at current levels for 8 weeks), (b) a 20% return to pre-stay-at-home mobility from current levels, and (c) a 40% return to pre-stay-at-home mobility from current levels.
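A minimal sketch of this scenario arithmetic is given below. It simply applies the "percent return to baseline" rule described above to a relative mobility change; the example values are the ones used in the worked example in the text, not model output.

```python
# Sketch: "percent return to baseline" scenario rule from Section 2.5.
# Mobility is encoded relative to baseline (0 = baseline), so a partial return
# to baseline scales the current deviation towards zero.

def scenario_mobility(current_change, percent_return):
    """Scale a relative mobility change towards baseline.

    current_change: current deviation from baseline, e.g. -0.50 for -50%.
    percent_return: fraction of the deviation that is undone, e.g. 0.20 for 20%.
    """
    return current_change * (1.0 - percent_return)

# Worked examples from the text:
print(scenario_mobility(-0.50, 0.20))  # -0.40: a 20% return turns -50% into -40%
print(scenario_mobility(+0.10, 0.20))  #  0.08: a +10% residential change becomes +8%
```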
We justify the selection of these scenarios by examining how mobility has changed in states that have already begun to relax social distancing guidelines. For example, Colorado's stay-at-home order expired on the 26th of April, and activity levels reported by the Colorado Department of Public Health have recovered approximately 30% of the decrease observed following the initial implementation of NPIs [6]. Figure 8 shows the estimated number of deaths for each scenario in the five states discussed above: Washington, New York, Massachusetts, Florida, and California. Results for all the states modelled are included in Appendix F. These estimates are certainly not forecasts and are based on multiple assumptions, but they illustrate the potential consequences of increasing mobility across the general population: in almost all cases, after 8 weeks, a 40% return to baseline leads to an epidemic larger than the current wave.

Figure 7: Estimates of the effective number of infectious individuals on a given day, in purple (light purple 95% CI, dark purple 50% CI), and of newly infected people per day, in blue (light blue 95% CI, dark blue 50% CI), for each of the 8 regions (TOLA, South Atlantic, Mountain, Great Lakes, Southern Appalachia, Pacific, Great Plains, Northeast Corridor).

Table 2: Posterior model estimates of the number of currently infectious individuals as of 17 May 2020. Each entry is the mean [95% credible interval].

Alabama 15,000 [4,000-37,000] | Montana 0 [0-1,000]
Alaska 0 [0-2,000] | Nebraska 3,000 [0-10,000]
Arizona 40,000 [12,000-93,000] | Nevada 5,000 [0-14,000]
Arkansas 1,000 [0-4,000] | New Hampshire 4,000 [0-12,000]
California 92,000 [26,000-228,000] | New Jersey 94,000 [26,000-227,000]
Colorado 47,000 [15,000-110,000] | New Mexico 10,000 [2,000-24,000]
Connecticut 40,000 [11,000-93,000] | New York 84,000 [13,000-246,000]
Delaware 9,000 [2,000-22,000] | North Carolina 14,000 [3,000-35,000]
District of Columbia 7,000 [1,000-18,000] | North Dakota 0 [0-2,000]
Florida 39,000 [10,000-95,000] | Ohio 54,000 [17,000-125,000]
Georgia 28,000 [6,000-72,000] | Oklahoma 1,000 [0-5,000]
Hawaii 0 [0-1,000] | Oregon 1,000 [0-4,000]
Idaho 0 [0-1,000] | Pennsylvania 96,000 [23,000-251,000]
Illinois 176,000 [54,000-395,000] | Rhode Island 6,000 [1,000-16,000]
Indiana 52,000 [12,000-134,000] | South Carolina 7,000 [1,000-19,000]
Iowa 18,000 [5,000-41,000] | South Dakota 1,000 [0-5,000]
Kansas 1,000 [0-4,000] | Tennessee 6,000 [1,000-17,000]
Kentucky 2,000 [0-7,000] | Texas 90,000 [27,000-218,000]
Louisiana 29,000 [6,000-75,000] | Utah 1,000 [0-5,000]
Maine 0 [0-1,000] | Vermont 0 [0-1,000]
Maryland 37,000 [9,000-91,000] | Virginia 27,000 [6,000-66,000]
Massachusetts 96,000 [27,000-232,000] | Washington 9,000 [1,000-26,000]
Michigan 21,000 [4,000-59,000] | West Virginia 0 [0-1,000]
Minnesota 36,000 [10,000-88,000] | Wisconsin 7,000 [1,000-22,000]
Mississippi 22,000 [6,000-51,000] | Wyoming 0 [0-1,000]
Missouri 16,000 [4,000-41,000] | National 1,344,000 [368,000-3,320,000]

Figure 8: State-level scenario estimates of deaths for Washington, New York, Massachusetts, Florida, and California. The ribbon shows the 95% credible intervals (CIs) for each scenario. The first column of plots shows the results of scenario (a), where mobility is kept constant at pre-stay-at-home levels, the middle column shows results for scenario (b), where there is a 20% return to pre-epidemic mobility, and the right column shows scenario (c), where there is a 40% return to pre-epidemic mobility.

3 Conclusions

In this report we use a Bayesian semi-mechanistic model to investigate the impact of these NPIs via changes in mobility. Our model uses mobility to predict the rate of transmission, neglecting the potential effect of additional behavioural changes or interventions such as testing and tracing. While mobility will explain a large amount of the variance in R_t, there is likely to be substantial residual variation which will be geographically heterogeneous. We attempt to account for this residual variation through a second-order, weekly, autoregressive process. This stochastic process is able to pick up variation driven by the data but is unable to determine associations or causal mechanisms. Figure 12 shows the residual variation captured by the autoregressive process, and given these lines are flat for the majority of states, we can conclude that much of the variation we see in the observed death data can be attributed to mobility. However, there are states, such as New York, where this residual effect is large, which suggests that additional factors have contributed to the reduction in R_t. We hypothesise these could be behavioural changes, but testing this hypothesis will require additional data.

We find that the starting reproduction number is associated with population density and the chronological date of epidemic onset. These two relationships suggest two dimensions which may influence the starting reproduction number and underscore the variability between states. We are cautious to draw any causal relationships from these associations; our results highlight that additional studies of these factors are needed at finer spatial scales.

We find that the posterior mean of the current reproduction number is above 1 in 9 states, with 95% confidence, and above 1 in 25 states with 50% confidence.
These current reproduction numbers suggest that in many states the US epidemic is not under control, and caution must be taken in loosening current interventions without additional measures in place. The high reproduction numbers are geographically clustered in the southern US and Great Plains region, while lower reproduction numbers are observed in areas that have suffered high COVID-19 mortality (such as the Northeast Corridor).

We simulate forwards in time a partial return of mobility back to pre-COVID levels, while keeping all else constant, and find that substantial resurgence is expected. In the majority of states, the deaths expected over a two-month period would exceed current levels by more than two-fold. This increase in mobility is modest and held constant for 8 weeks. However, these results must be heavily caveated: our results do not account for additional interventions that may be introduced, such as mass testing, contact tracing and changing workplace/transit practices. Our results also do not account for behavioural changes that may occur, such as increased mask wearing or changes in age-specific movement. Therefore, our scenarios are pessimistic in nature and should be interpreted as such. Given these caveats, we conjecture at the present time that, in the absence of additional interventions (such as mass testing), additional behavioural modifications are unlikely to substantially reduce R_t on their own.

We estimate the number of individuals that have been infected by SARS-CoV-2 to date. Our attack rates are sensitive to the assumed values of the infection fatality rate (IFR). We account for each individual state's age structure, and further adjust for contact mixing patterns [19]. To ensure assumptions about the IFR do not have undue influence on our conclusions, we incorporate prior noise in the estimate, and perform a sensitivity analysis using different contact matrices. Also, our attack rates for New York are in line with those from recent serological studies [1]. We show that while reductions in the daily infections continue, the reservoir of infectious individuals remains large. This reservoir also implies that interventions should remain in place longer than the daily case count implies, as trends in the number of infectious individuals lag behind. The magnitude of the difference between newly infected and currently infected individuals suggests that mass testing and isolation could be an effective intervention.

Our results suggest that while the US has substantially reduced its reproduction numbers in all states, there is little evidence that the epidemic is under control in the majority of states. Without changes in behaviour that result in reduced transmission, or interventions such as increased testing that limit transmission, new infections of COVID-19 are likely to persist and, in the majority of states, grow.

4 Data

Our model uses daily real-time state-level aggregated data published by the New York Times (NYT) [20] for New York State and Johns Hopkins University (JHU) [3] for the remaining states. There is no single source of consistent and reliable data for all 50 states. We acknowledge that data issues such as under-reporting and time lags can influence our results. In previous reports [8, 9, 7] we have shown our modelling methodology is generally robust to these data issues due to pooling. However, we recognise that no modelling methodology will be able to surmount all data issues; therefore these results should be interpreted as the best estimates based on current data, and are subject to change with future data consolidation.
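Both sources typically publish cumulative counts per state per day, so the daily death series used by the model can be obtained by differencing. The sketch below assumes a generic long-format table of cumulative deaths; the column names and the clipping of negative increments are choices of this sketch, not the report's actual pipeline.

```python
# Sketch: turn cumulative reported deaths into a daily series D_{t,m}.
# Assumes a table with columns (date, state, cumulative_deaths); names are illustrative.
import pandas as pd

def daily_deaths(cumulative: pd.DataFrame) -> pd.DataFrame:
    df = cumulative.sort_values(["state", "date"]).copy()
    df["daily_deaths"] = (
        df.groupby("state")["cumulative_deaths"].diff().fillna(df["cumulative_deaths"])
    )
    # Reporting corrections occasionally make cumulative counts decrease; here we
    # simply clip negative daily values to zero (a choice of this sketch).
    df["daily_deaths"] = df["daily_deaths"].clip(lower=0).astype(int)
    return df[["date", "state", "daily_deaths"]]

# Example with a toy table:
toy = pd.DataFrame({
    "date": pd.to_datetime(["2020-03-01", "2020-03-02", "2020-03-03"] * 2),
    "state": ["Washington"] * 3 + ["New York"] * 3,
    "cumulative_deaths": [1, 3, 6, 0, 1, 4],
})
print(daily_deaths(toy))
```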
JHU and NYT provide information on confirmed cases and deaths attributable to COVID-19; however, the case data are highly unrepresentative of the incidence of infections due to under-reporting and systematic, state-specific changes in testing. We therefore use only deaths attributable to COVID-19 in our model. While the observed deaths still have some degree of unreliability, again due to changes in reporting and testing, we believe the data are of sufficient fidelity to model. For age-specific population counts we use data from the U.S. Census Bureau in 2018 [21]. The timing of social distancing measures was collated by the University of Washington [11].

We use the Google Mobility Report [5], which provides data on movement in the USA by state and highlights the percent change in visits to:

• Grocery & pharmacy: mobility trends for places like grocery markets, food warehouses, farmers markets, speciality food shops, drug stores, and pharmacies.
• Parks: mobility trends for places like local parks, national parks, public beaches, marinas, dog parks, plazas, and public gardens.
• Transit stations: mobility trends for places like public transport hubs such as subway, bus, and train stations.
• Retail & recreation: mobility trends for places like restaurants, cafes, shopping centres, theme parks, museums, libraries, and movie theatres.
• Residential: mobility trends for places of residence.
• Workplaces: mobility trends for places of work.

The mobility data show length of stay at different places compared to a baseline. It is therefore relative, i.e. mobility of -20% means that, compared to normal circumstances, individuals are engaging in a given activity 20% less.

[Footnote 3: We use mobility data from Google, which was last updated on 9th May 2020. For dates after 9th May 2020, we impute the mobility data with the median of the last seven days.]
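The sketch below illustrates how a mobility series like these might be prepared for the model: converting Google's percent-change values into the 0-to-1 "reduction" encoding used in Section 5.1, and extending the series past the last available date with the median of the previous seven days, as described in the footnote. The function names and example values are assumptions for illustration.

```python
# Sketch: prepare a Google-style mobility series for the model.
# Google reports percent change from baseline (e.g. -20 means 20% less activity);
# the model encodes mobility so that 0 is baseline and 1 is a full reduction.
import numpy as np

def encode_reduction(percent_change):
    """Map percent change from baseline (e.g. -20) to a reduction (e.g. 0.2).
    Increases above baseline become negative reductions."""
    return -np.asarray(percent_change, dtype=float) / 100.0

def impute_tail(values, n_missing, window=7):
    """Extend a daily series by `n_missing` days using the median of the last
    `window` observed days (as described in footnote 3)."""
    values = list(values)
    fill = float(np.median(values[-window:]))
    return values + [fill] * n_missing

# Example: percent change for the last 10 observed days, extended by 3 days
# and converted to the model's encoding.
observed = [-42, -40, -38, -41, -39, -37, -36, -35, -34, -33]
extended = impute_tail(observed, n_missing=3)
print(encode_reduction(extended))
```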
5 Methods

We introduced a new Bayesian framework for estimating the transmission intensity and attack rate (percentage of the population that has been infected) of COVID-19 from the reported number of deaths in a previous report [7]. This framework uses the time-varying reproduction number R_t to inform a latent function for infections, and then these infections, together with probabilistic lags, are calibrated against observed deaths. Observed deaths, while still susceptible to under-reporting and delays, are more reliable than the reported number of confirmed cases, although the early focus of most surveillance systems on cases with reported travel histories to China may have missed some early deaths. Changes in testing strategies during the epidemic mean that the severity of confirmed cases as well as the reporting probabilities changed in time and may thus have introduced bias in the data.

[Footnote 4: Similar to our previous report [7], we seed each epidemic for 5 consecutive days starting 30 days before the state reached ten cumulative deaths. Wyoming did not report more than 10 deaths in our dataset, so we use a threshold of five instead.]

In this report, we adapt our original Bayesian semi-mechanistic model of the infection cycle to the states in the USA. We infer plausible upper and lower bounds (Bayesian credible intervals) of the total populations infected (attack rates) and the reproduction number over time (R_t). In our framework we parametrize R_t as a function of Google mobility data. We fit the model jointly to COVID-19 data from all regions to assess whether there is evidence that changes in mobility have so far been successful at reducing R_t below 1. Our model is a partial pooling model, where the effect of mobility is shared, but region- and state-specific modifiers can capture differences and idiosyncrasies among the regions.

We note that future directions should focus on embedding mobility in realistic contact mechanisms to establish a closer relationship to transmission.

5.1 Model specifics

We observe daily deaths D_{t,m} for days t ∈ {1, ..., n} and states m ∈ {1, ..., M}. These daily deaths are modelled using a positive real-valued function d_{t,m} = E[D_{t,m}] that represents the expected number of deaths attributed to COVID-19. The daily deaths D_{t,m} are assumed to follow a negative binomial distribution with mean d_{t,m} and variance d_{t,m} + d_{t,m}^2/ψ, where ψ follows a positive half-normal distribution, i.e.

D_{t,m} \sim \text{NegativeBinomial}\!\left(d_{t,m},\ d_{t,m} + \frac{d_{t,m}^2}{\psi}\right), \qquad \psi \sim N^{+}(0, 5).

Here, N(μ, σ) denotes a normal distribution with mean μ and standard deviation σ. We say that X follows a positive half-normal distribution N^{+}(0, σ) if X ∼ |Y|, where Y ∼ N(0, σ).

To mechanistically link our function for deaths to our latent function for infected cases, we use a previously estimated COVID-19 infection fatality ratio (IFR, probability of death given infection) together with a distribution of times from infection to death π. Details of this calculation can be found in [22, 23]. From the above, every region has a specific mean infection fatality ratio ifr_m (see Appendix G). To incorporate the uncertainty inherent in this estimate we allow the ifr_m for every state to have additional noise around the mean. Specifically we assume

\text{ifr}^{*}_{m} \sim \text{ifr}_{m} \cdot N(1, 0.1).

We believe a large-scale contact survey similar to POLYMOD [19] has not been collated for the USA, so we assume the contact patterns are similar to those in the UK. We conducted a sensitivity analysis, shown in Appendix G, and found that the IFR calculated using the contact matrices of other European countries lay within the posterior of ifr*_m.

Using estimated epidemiological information from previous studies [22, 23], we assume the distribution of times from infection to death π (infection-to-death) to be

\pi \sim \text{Gamma}(5.1, 0.86) + \text{Gamma}(17.8, 0.45).

The expected number of deaths d_{t,m}, on a given day t, for state m is given by the following discrete sum:

d_{t,m} = \text{ifr}^{*}_{m} \sum_{\tau=0}^{t-1} c_{\tau,m}\, \pi_{t-\tau},

where c_{τ,m} is the number of new infections on day τ in state m and where π is discretized via \pi_s = \int_{s-0.5}^{s+0.5} \pi(\tau)\, d\tau for s = 2, 3, \ldots, and \pi_1 = \int_{0}^{1.5} \pi(\tau)\, d\tau, where π(τ) is the density of π.

The true number of infected individuals, c, is modelled using a discrete renewal process. We specify a generation distribution g with density g(τ) as

g \sim \text{Gamma}(6.5, 0.62).

Given the generation distribution, the number of infections c_{t,m} on a given day t and state m is given by the following discrete convolution function:

c_{t,m} = S_{t,m}\, R_{t,m} \sum_{\tau=0}^{t-1} c_{\tau,m}\, g_{t-\tau}, \qquad S_{t,m} = 1 - \frac{\sum_{i=0}^{t-1} c_{i,m}}{N_m}, \qquad (1)

where, similar to the probability of death function, the generation distribution is discretized by g_s = \int_{s-0.5}^{s+0.5} g(\tau)\, d\tau for s = 2, 3, \ldots, and g_1 = \int_{0}^{1.5} g(\tau)\, d\tau. The population of state m is denoted by N_m. We include the adjustment factor S_{t,m} to account for the number of susceptible individuals left in the population.
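The sketch below puts these pieces together numerically for a single state: it discretizes the infection-to-death and generation distributions on daily bins as defined above, propagates infections with the susceptible-adjusted renewal equation (1), and maps infections to expected deaths through the IFR. It is an illustrative forward simulation under an assumed R_t path, not the report's Stan model, and it assumes the Gamma parameters are given as (mean, coefficient of variation), following the team's earlier reports.

```python
# Sketch: forward-simulate the renewal model of Section 5.1 for one state.
# Illustrative NumPy/SciPy re-implementation, not the report's Stan code.
import numpy as np
from scipy.stats import gamma

def gamma_mean_cv(mean, cv):
    # Assumption: (mean, coefficient of variation) parametrisation of the Gammas.
    shape = 1.0 / cv**2
    return gamma(a=shape, scale=mean * cv**2)

def daily_bins(n_days):
    # pi_1 integrates over [0, 1.5]; pi_s integrates over [s-0.5, s+0.5] for s >= 2.
    return np.concatenate([[0.0, 1.5], np.arange(2.5, n_days + 1.5)])

def discretize(dist, n_days):
    return np.diff(dist.cdf(daily_bins(n_days)))

def discretize_sum(dist_a, dist_b, n_days, n_draws=200_000, seed=0):
    # Discretize the sum of two delays (infection-to-onset + onset-to-death) by sampling.
    rng = np.random.default_rng(seed)
    draws = dist_a.rvs(n_draws, random_state=rng) + dist_b.rvs(n_draws, random_state=rng)
    counts, _ = np.histogram(draws, bins=daily_bins(n_days))
    return counts / n_draws

def simulate_one_state(Rt, N, ifr, seed_infections, n_days):
    g = discretize(gamma_mean_cv(6.5, 0.62), n_days)                      # generation interval g_s
    pi = discretize_sum(gamma_mean_cv(5.1, 0.86), gamma_mean_cv(17.8, 0.45), n_days)
    c = np.zeros(n_days)                                                  # new infections c_{t,m}
    d = np.zeros(n_days)                                                  # expected deaths d_{t,m}
    c[: len(seed_infections)] = seed_infections
    for t in range(n_days):
        if t >= len(seed_infections):
            S = 1.0 - c[:t].sum() / N                                     # susceptible adjustment S_{t,m}
            c[t] = S * Rt[t] * np.dot(c[:t], g[:t][::-1])                 # renewal equation (1)
        d[t] = ifr * np.dot(c[:t], pi[:t][::-1])                          # infection-to-death convolution
    return c, d

# Illustrative run: R_t drops from 3.0 to 0.8 on day 40 of a 100-day horizon.
Rt = np.where(np.arange(100) < 40, 3.0, 0.8)
infections, deaths = simulate_one_state(Rt, N=7.6e6, ifr=0.01,
                                        seed_infections=[100.0] * 6, n_days=100)
print(int(infections.sum()), int(deaths.sum()))
```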
We parametrise R_{t,m} as a linear function of the relative change in time spent (from a baseline):

R_{t,m} = R_{0,m}\, f\!\left(-\sum_{k=1}^{3} X_{t,m,k}\,\alpha_k - Y_{t,m}\,\alpha^{\text{region}}_{r(m)} - Z_{t,m}\,\alpha^{\text{state}}_{m} - \epsilon_{m,w_m(t)}\right), \qquad (2)

where f(x) = 2\exp(x)/(1+\exp(x)) is twice the inverse logit function. X_{t,m,k} are covariates that have the same effect for all states, Y_{t,m} is a covariate that also has a region-specific effect, r(m) ∈ {1, ..., R} is the region a state is in (see Figure 1), Z_{t,m} is a covariate that has a state-specific effect, and ε_{m,w_m(t)} is a weekly AR(2) process, centred around 0, that captures variation between states that is not explained by the covariates.

The prior distribution for R_{0,m} [24] was chosen to be

R_{0,m} \sim N(3.28, \kappa) \quad \text{with} \quad \kappa \sim N^{+}(0, 0.5),

where κ is the same among all states.

In the analysis of this paper we chose the following covariates: X_{t,m,1} = M^{average}_{t,m}, X_{t,m,2} = M^{transit}_{t,m}, X_{t,m,3} = M^{residential}_{t,m}, Y_{t,m} = M^{average}_{t,m}, and Z_{t,m} = M^{transit}_{t,m}, where the mobility variables are from [5] and defined as follows (all are encoded so that 0 is the baseline and 1 is a full reduction of the mobility in this dimension):

• M^{average}_{t,m} is an average of retail and recreation, groceries and pharmacies, and workplaces. An average is taken as these dimensions are strongly collinear.
• M^{transit}_{t,m} encodes mobility for public transport hubs. For states where less than 20% of the working population aged 16 and over uses public transportation, we set M^{transit}_{t,m} = 0, i.e. this mobility has no effect on transmission. For states in which more than 20% of the working population commutes using public transportation, M^{transit}_{t,m} is the mobility on transit.
• M^{residential}_{t,m} is the mobility trend for places of residence.

The weekly, state-specific effect is modelled as a weekly AR(2) process, centred around 0, with stationary standard deviation σ_w, that starts on the day after the emergency decree in a state. Before the emergency decree there is no random weekly effect, so ε_{1,m} = 0. Afterwards, the AR(2) process starts with ε_{2,m} ∼ N(0, σ*_w) and

\epsilon_{w,m} \sim N(\rho_1 \epsilon_{w-1,m} + \rho_2 \epsilon_{w-2,m},\ \sigma^{*}_{w}) \quad \text{for } w = 3, 4, \ldots, \qquad (3)

with independent priors on ρ_1 and ρ_2 that are normal distributions conditioned to be in [0, 1]: the prior for ρ_1 is a N(0.8, 0.05) distribution conditioned to be in [0, 1], and the prior for ρ_2 is a N(0.1, 0.05) distribution conditioned to be in [0, 1]. The prior for σ_w, the standard deviation of the stationary distribution of ε_w, is chosen as σ_w ∼ N^{+}(0, 0.2). The standard deviation of the weekly updates needed to achieve this standard deviation of the stationary distribution is

\sigma^{*}_{w} = \sigma_w \sqrt{1 - \rho_1^2 - \rho_2^2 - 2\rho_1^2\rho_2/(1-\rho_2)}.

The conversion from days to weeks is encoded in w_m(t). We set w_m(t) = 1 for all t ≤ t^{emergency}_m, which is the day of the emergency decree in that state. Then, every 7 days, w_m is incremented, i.e. w_m(t) = \lfloor \max(t - t^{\text{emergency}}_{m} - 1, 0)/7 \rfloor + 2 for t > t^{emergency}_m. Due to the lag between infection and death, our estimates of R_t in the last two weeks before the end of our observations are (almost) not informed by corresponding death data. Therefore, we assume that the last two weeks have the same random weekly effect as the week three weeks before the end of observation.

The prior distributions for the shared coefficients were chosen to be

\alpha_k \sim N(0, 0.5), \quad k = 1, \ldots, 3,

and the prior distributions for the pooled coefficients were chosen to be

\alpha^{\text{region}}_{r} \sim N(0, \gamma_r), \quad r = 1, \ldots, R, \quad \text{with } \gamma_r \sim N^{+}(0, 0.5),
\alpha^{\text{state}}_{m} \sim N(0, \gamma_s), \quad m = 1, \ldots, M, \quad \text{with } \gamma_s \sim N^{+}(0, 0.5).
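The sketch below shows how equation (2) and the weekly AR(2) effect of equation (3) combine numerically: given mobility covariates (in the 0-to-1 reduction encoding), effect sizes, and a weekly effect drawn from the AR(2) process with the stationary-standard-deviation scaling above, it returns R_t for one state. All parameter values are made up for illustration; this is not the fitted Stan model.

```python
# Sketch: R_t link function (eq. 2) and weekly AR(2) effect (eq. 3) for one state.
import numpy as np

def two_inv_logit(x):
    # f(x) = 2 exp(x) / (1 + exp(x)); f(0) = 1, so R_t = R_0 when all terms are zero.
    return 2.0 * np.exp(x) / (1.0 + np.exp(x))

def weekly_ar2(n_weeks, rho1, rho2, sigma_w, rng):
    # epsilon_1 = 0 (before the emergency decree), then AR(2) with innovation sd
    # sigma_star chosen so the stationary sd equals sigma_w (formula from the text).
    sigma_star = sigma_w * np.sqrt(1 - rho1**2 - rho2**2 - 2 * rho1**2 * rho2 / (1 - rho2))
    eps = np.zeros(n_weeks)
    if n_weeks > 1:
        eps[1] = rng.normal(0.0, sigma_star)
    for w in range(2, n_weeks):
        eps[w] = rng.normal(rho1 * eps[w - 1] + rho2 * eps[w - 2], sigma_star)
    return eps

def rt_series(R0, M_avg, M_transit, M_res, alpha, alpha_region, alpha_state, eps, week_index):
    # Eq. (2): R_t = R_0 * f( -(sum_k X_k alpha_k) - Y alpha_region - Z alpha_state - eps_w(t) )
    lin = (alpha[0] * M_avg + alpha[1] * M_transit + alpha[2] * M_res
           + alpha_region * M_avg + alpha_state * M_transit + eps[week_index])
    return R0 * two_inv_logit(-lin)

# Illustrative inputs: 70 days, emergency decree on day 20, mobility reduction ramping to 60%.
rng = np.random.default_rng(1)
days = np.arange(70)
M_avg = np.clip((days - 15) / 30.0, 0.0, 0.6)   # 0 = baseline, 1 = full reduction
M_transit = M_avg                               # state assumed to pass the 20% transit threshold
M_res = np.zeros_like(M_avg)                    # residential kept at baseline for simplicity
week_index = np.where(days <= 20, 0, (np.maximum(days - 21, 0) // 7) + 1)
eps = weekly_ar2(int(week_index.max()) + 1, rho1=0.8, rho2=0.1, sigma_w=0.2, rng=rng)
Rt = rt_series(3.0, M_avg, M_transit, M_res, alpha=[0.7, 0.3, 0.2],
               alpha_region=0.1, alpha_state=0.05, eps=eps, week_index=week_index)
print(Rt[0], Rt[-1])
```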
We assume that seeding of new infections begins 30 days before the day after a state has cumulatively observed 10 deaths. From this date, we seed our model with 6 sequential days of an equal number of infections: c_{1,m} = \cdots = c_{6,m} \sim \text{Exponential}(1/\tau), where \tau \sim \text{Exponential}(0.03). These seed infections are inferred in our Bayesian posterior distribution.

We estimated parameters jointly for all states in a single hierarchical model. Fitting was done in the probabilistic programming language Stan [25] using an adaptive Hamiltonian Monte Carlo (HMC) sampler.

6 Acknowledgements

We would like to thank Amazon AWS and Microsoft Azure for computational credits and we would like to thank the Stan development team for their ongoing assistance. This work was supported by Centre funding from the UK Medical Research Council under a concordat with the UK Department for International Development, the NIHR Health Protection Research Unit in Modelling Methodology and Community Jameel.

References

[1] S. M. Kissler et al. "Reductions in commuting mobility predict geographic differences in SARS-CoV-2 prevalence in New York City". (2020). URL: http://nrs.harvard.edu/urn-3:HUL.InstRepos:42665370.
[2] Santa Clara County Public Health. County of Santa Clara Identifies Three Additional Early COVID-19 Deaths. 2020. URL: https://www.sccgov.org/sites/covid19/Pages/press-release-04-21-20-early.aspx.
[3] E. Dong, H. Du, and L. Gardner. "An interactive web-based dashboard to track COVID-19 in real time". In: The Lancet Infectious Diseases (Feb. 2020). ISSN: 1474-4457. URL: https://pubmed.ncbi.nlm.nih.gov/32087114; https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7159018/.
[4] C. Courtemanche et al. "Strong Social Distancing Measures In The United States Reduced The COVID-19 Growth Rate". In: Health Affairs 39.7 (2020).
[5] A. Aktay et al. "Google COVID-19 Community Mobility Reports: Anonymization Process Description (version 1.0)". In: arXiv abs/2004.0 (2020).
[6] J. Bayham et al. Colorado Mobility Patterns During the COVID-19 Response. 2020. URL: http://www.ucdenver.edu/academics/colleges/PublicHealth/coronavirus/Documents/Mobility%20Report_final.pdf.
[7] S. Flaxman et al. Report 13: Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries. 2020.
[8] M. Vollmer et al. Report 20: Using mobility to estimate the transmission intensity of COVID-19 in Italy: A subnational analysis with future scenarios. 2020.
[9] T. A. Mellan et al. Report 21: Estimating COVID-19 cases and reproduction number in Brazil. 2020.
[10] M. Reston, K. Sgueglia, and C. Mossburg. Governors on East and West coasts form pacts to decide when to reopen economies. 2020. URL: https://edition.cnn.com/2020/04/13/politics/states-band-together-reopening-plans/index.html.
[11] N. Fullman et al. State-level social distancing policies in response to COVID-19 in the US. 2020. URL: http://www.covid19statepolicy.org.
[12] United States Census Bureau. Explore Census Data. 2020. URL: https://data.census.gov/cedsci/.
[13] Morning Consult. How the Coronavirus Outbreak Is Impacting Public Opinion. Accessed on 15/05/2020. 2020. URL: https://morningconsult.com/form/coronavirus-outbreak-tracker/.
[14] J. Stromberg. The real reason American public transportation is such a disaster. 2015. URL: https://www.vox.com/2015/8/10/9118199/public-transportation-subway-buses.
[15] B. D. Dalziel et al. "Urbanization and humidity shape the intensity of influenza epidemics in U.S. cities". In: Science 362.6410 (2018), pp. 75-79. ISSN: 0036-8075. URL: https://science.sciencemag.org/content/362/6410/75.
[16] City and County of San Francisco Department of Public Health. ORDER OF THE HEALTH OFFICER No. C19-07c. 2020. URL: https://sf.gov/sites/default/files/2020-04/2020.04.29%20FINAL%20%28signed%29%20Health%20Officer%20Order%20C19-07c-%20Shelter%20in%20Place.pdf.
[17] L. Gamio, S. Mervosh, J. Lee, and N. Popovich. See Which States Are Reopening and Which Are Still Shut Down. 2020. URL: https://www.nytimes.com/interactive/2020/us/states-reopen-map-coronavirus.html.
[18] K. E. C. Ainslie et al. "Evidence of initial success for China exiting COVID-19 social distancing policy after achieving containment". In: Wellcome Open Research 5.81 (2020).
[19] J. Mossong et al. "Social Contacts and Mixing Patterns Relevant to the Spread of Infectious Diseases". In: PLOS Medicine 5.3 (Mar. 2008). URL: https://doi.org/10.1371/journal.pmed.0050074.
[20] M. Smith et al. Coronavirus (Covid-19) Data in the United States. 2020. URL: https://github.com/nytimes/covid-19-data.
[21] Census Reporter. Census Reporter. 2020. URL: https://censusreporter.org.
[22] R. Verity et al. "Estimates of the severity of COVID-19 disease". In: Lancet Infectious Diseases (2020).
[23] P. Walker et al. Report 12: The Global Impact of COVID-19 and Strategies for Mitigation and Suppression. 2020. URL: https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/news--wuhan-coronavirus/.
[24] Y. Liu et al. "The reproductive number of COVID-19 is higher compared to SARS coronavirus". In: Journal of Travel Medicine (2020). ISSN: 1708-8305.
[25] B. Carpenter et al. "Stan: A Probabilistic Programming Language". In: Journal of Statistical Software 76.1 (2017), pp. 1-32. ISSN: 1548-7660. URL: http://www.jstatsoft.org/v76/i01/.

A Mobility regression analysis

In Figure 9 we regress NPIs against average mobility. We parameterise NPIs as piece-wise constant functions that are zero when the intervention has not been implemented and one when it has. We evaluate the correlation between the predictions from the linear model and the actual average mobility. We also lag the timing of interventions and investigate its impact on the predicted correlation.

Figure 9: Mobility regression analysis. [Two panels: a histogram of states by correlation between fitted and observed mobility, and correlation as a function of the lag applied to intervention timing.]

B Effect sizes

Figure 10: Regional covariate effect size plots (relative % reduction in R_t due to average mobility, for each of the 8 regions).

Figure 11: State-level covariate effect size plots (relative % reduction in R_t due to transit mobility, for Washington, New York, New Jersey, Massachusetts, Maryland, Illinois, the District of Columbia, and California).

C State-specific weekly effects after emergency decree

Our model includes a state-specific weekly effect ε_{w,m} (see equations 2 and 3) for every week w after the emergency decree of that state. As described in Section 5, we assign an autoregressive process with mean 0 as the prior for this effect. This weekly effect is held constant for the 4 weeks up to the present week. Figure 12 shows the posterior of this effect on the same scale as in Figure 2, that is, the percent reduction in R_t with the mobility variables held constant. Values above 0 have the interpretation that the state-specific weekly effect lowers the reproduction number R_{t,m}, i.e. transmission in that week for state m is lower than what is explained by the mobility covariates.

[Footnote 5: Draws from the posterior are transformed with 1 − f(−ε_{m,w_m(t)}), where f(x) = 2exp(x)/(1 + exp(x)) is twice the inverse logit function.]

Figure 12: Percent reduction in R_t due to the weekly, state-level autoregressive effect after the emergency decree. [One panel per state, grouped by region.]
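Footnote 5 describes how the posterior draws of the weekly effect are put on the same "relative % reduction in R_t" scale as Figures 2 and 12. A minimal sketch of that transformation, applied to made-up draws rather than actual posterior samples, is shown below.

```python
# Sketch: transform weekly-effect draws onto the "relative % reduction in R_t" scale,
# following footnote 5: reduction = 1 - f(-epsilon), with f twice the inverse logit.
import numpy as np

def two_inv_logit(x):
    return 2.0 * np.exp(x) / (1.0 + np.exp(x))

def percent_reduction(epsilon_draws):
    """epsilon_draws: posterior draws of the weekly effect for one state and week."""
    return 100.0 * (1.0 - two_inv_logit(-np.asarray(epsilon_draws)))

# Made-up draws: a positive weekly effect maps to a positive % reduction in R_t.
draws = np.random.default_rng(2).normal(loc=0.2, scale=0.1, size=1000)
print(np.percentile(percent_reduction(draws), [2.5, 50, 97.5]))
```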
D Model predictions for all states

State-level estimates of infections, deaths and R_t. Left: daily number of deaths; brown bars are reported deaths, blue bands are predicted deaths (dark blue 50% credible interval (CI), light blue 95% CI). Middle: daily number of infections; brown bars are reported infections, blue bands are predicted infections, CIs as on the left. The number of daily infections estimated by our model drops immediately after an intervention, as we assume that all infected people become immediately less infectious through the intervention. Afterwards, if R_t is above 1, the number of infections will start growing again. Right: time-varying reproduction number R_t; dark green 50% CI, light green 95% CI. Icons are interventions shown at the time they occurred.

[Figure panels: one set of plots (daily deaths, daily infections, and R_t, with intervention timings and 50%/95% credible intervals) for each of the 50 states and the District of Columbia, covering February to May 2020.]

E Effective number of infectious individuals for all states

The effective number of infectious individuals, c*, on a given day is calculated by weighting how infectious a previously infected individual is on that day. The fully infectious average includes asymptomatic and symptomatic individuals. Estimates of the effective number of infectious individuals for all states can be found in Figure 13.

[Figure 13: one panel per state, grouped by region, showing the effective number of infectious individuals over time from February to May 2020.]
[Figure 13 grid: one panel per state (Hawaii through Maine), showing the total number of infectious people from 24 February to 18 May.]

Figure 13: Estimates for the effective number of infectious individuals over time. The light purple region shows the 95% credible intervals and the dark purple region shows the 50% credible intervals.

To be more precise, the effective number of infectious individuals, $c^*$, is calculated by first rescaling the generation distribution by its maximum, i.e. $g^*_{\tau} = g_{\tau} / \max_t g_t$. Based on (1), the number of infectious individuals is then calculated from the number of previously infected individuals, $c$, using the following:

\[ c^*_{t,m} = \sum_{\tau=0}^{t-1} c_{\tau,m}\, g^*_{t-\tau}, \]

where $c_{t,m}$ is the number of new infections on day $t$ in state $m$.
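As a purely illustrative sketch of this convolution (the daily infection counts and the generation distribution below are made-up numbers, not values from the report):

```python
import numpy as np

c_daily = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 15.0])  # hypothetical new infections per day in one state
g = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])    # hypothetical generation distribution g_tau

g_star = g / g.max()  # rescale the generation distribution by its maximum

# c*_t = sum_{tau=0}^{t-1} c_tau * g*_{t-tau}; terms beyond the support of g* contribute zero
c_star = np.array([
    sum(c_daily[tau] * g_star[t - tau] for tau in range(t) if (t - tau) < len(g_star))
    for t in range(len(c_daily))
])
print(c_star)  # effective number of infectious individuals on each day
```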
A plot of $g^*_{\tau}$ can be found in Figure 14.

[Figure 14: infectiousness $g^*_{\tau}$ plotted against the number of days $\tau$ since infection.]

Figure 14: Infectiousness $g^*_{\tau}$ of an infected individual over time.

F Scenario results for all states

We show here state-level scenario plots of an increase of mobility of 20% and 40% of current levels.

[Figure 15 grid: one panel per state, showing the projected daily number of deaths from 1 March to 1 July under constant mobility, a 20% increase in mobility, and a 40% increase in mobility.]

Figure 15: State-level scenario estimates for deaths. The blue ribbon shows the 95% credible intervals (CIs) for scenario (a), where mobility is kept constant at pre-lockdown levels; the yellow ribbon shows the same CIs for scenario (b), where there is a 20% return to pre-epidemic mobility, and scenario (c), where there is a 40% return to pre-epidemic mobility.

G Sensitivity analysis to infection fatality ratio

Geography-specific contact surveys are important for calculating the weighted IFR values according to the methods in [22, 23]. There is no large-scale cross-generational contact survey, similar to the POLYMOD survey [19], implemented in the USA. Therefore, it was important to understand whether the model was robust to changes in the underlying contact survey. We calculated the IFRs using three different contact matrices: UK, France and the Netherlands. We believe that the USA is culturally closest to the UK out of those countries for which we had contact matrices, but we also considered France, where we saw the greatest mixing of the elderly, and the Netherlands, which showed the average behaviour of the European studies used in [23]. We found that the IFR, calculated for each state using the three contact matrices, lay within the posterior of the IFR in our model (Figure 16). We also noted that our results remained approximately constant when using the IFR calculated from the three different contact matrices as the mean of the prior IFR in our model; see Section 5.

Since we are using the same contact matrix across all the states, the differences in IFR are due to the population demographics and not due to differential contacts. The low IFR in Texas and Utah reflects the younger population there, whereas the higher IFR in Florida and Maine is due to the older population.
This is a limitation of our methods.

[Figure 16: per-state IFR estimates (x-axis: IFR, roughly 0.006–0.012) with posterior means and credible intervals, alongside the values computed from the French, Dutch and UK contact matrices.]

Figure 16: Sensitivity analysis for IFR. The red, green and blue dots show the IFR values calculated according to [22, 23] using the French, Dutch and UK contact matrices respectively. The purple dot shows the mean of our posterior estimates for the IFR run using the UK contact matrix estimate, and the purple error bars show the 95% credible intervals of the distribution.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "p75nPPnFgI", "year": null, "venue": "ECAI2016", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-586", "forum_link": "https://openreview.net/forum?id=p75nPPnFgI", "arxiv_id": null, "doi": null }
{ "title": "Multi-Class Probabilistic Active Learning.", "authors": [ "Daniel Kottke", "Georg Krempl", "Dominik Lang", "Johannes Teschner", "Myra Spiliopoulou" ], "abstract": "This work addresses active learning for multi-class classification. Active learning algorithms optimize classifier performance by successively selecting the most beneficial instances from a pool of unlabeled instances to be labeled by an oracle. In this work, we study the influence of the following factors for active learning: (1) an instance's impact, (2) its posterior, and (3) the reliability of this posterior. To do so, we propose a new decision-theoretic approach, called multi-class probabilistic active learning (McPAL). Building on a probabilistic active learning framework, our approach is non-myopic, fast, and optimizes a performance measure (like accuracy) directly. Considering all influence factors, McPAL determines the expected gain in performance to compare the usefulness of instances. For this purpose, it calculates the density weighted expectation over the true posterior and over all possible labeling combinations in a closed-form solution. Thus, in contrast to other multi-class algorithms, it considers the posterior's reliability which improved the performance. In our experimental evaluation, we show that the combination of the selected influence factors works best and that McPAL is superior in comparison to various other multi-class active learning algorithms on six datasets.", "keywords": [], "raw_extracted_content": "Multi-Class Probabilistic Active Learning\nDaniel Kottke1and Georg Krempl1and\nDominik Lang2and Johannes Teschner2and Myra Spiliopoulou3\nAbstract. This work addresses active learning for multi-class\nclassification. Active learning algorithms optimize classifier perfor-\nmance by successively selecting the most beneficial instances froma pool of unlabeled instances to be labeled by an oracle. In thiswork, we study the influence of the following factors for active learn-ing: (1) an instance’s impact, (2) its posterior, and (3) the reliabil-ity of this posterior. To do so, we propose a new decision-theoreticapproach, called multi-class probabilistic active learning (McPAL).\nBuilding on a probabilistic active learning framework, our approach\nis non-myopic, fast, and optimizes a performance measure (like accu-racy) directly. Considering all influence factors, McPAL determinesthe expected gain in performance to compare the usefulness of in-stances. For this purpose, it calculates the density weighted expecta-tion over the true posterior and over all possible labeling combina-tions in a closed-form solution. Thus, in contrast to other multi-classalgorithms, it considers the posterior’s reliability which improved theperformance. In our experimental evaluation, we show that the com-bination of the selected influence factors works best and that McPALis superior in comparison to various other multi-class active learningalgorithms on six datasets.\n1 INTRODUCTION\nIn supervised classification, prediction models are learned from la-beled training data. In some applications, unlabeled data is avail-able or easy to collect but the labeling (annotation) of this data isexpensive, time-consuming or exhausting. For such applications, ac-tive learning methods provide solutions that optimize the labelingprocess by selecting the most useful unlabeled instances to be passed\nto an oracle for labeling. 
Thereby, active learning aims to achieve\nhigh performance with as few labeled instances as possible [23].\nA particular and little researched challenge [26] in active learning\nis its generalization to multi-class settings, with multinomial ratherthan binary labels. The few works that have addressed this task sofar mostly use either uncertainty sampling for active learning withsupport vector machines, thereby concentrating on instances close tothe anticipated decision boundary [6, 12, 29], optionally extendedby information about density or diversity [4, 14]. Others use ex-pected error reduction by simulating the impact of a label acquisitionon the whole dataset to determine the expected performance [13].Both approaches have known limitations [7, 15]: the former fast,information-theoretic heuristic often fails in exploring the dataspace,the latter decision-theoretic method has high computation time.\n1Knowledge Management and Discovery Lab, Otto von Guericke University,\nMagdeburg, Germany, email: {daniel.kottke, georg.krempl}@ovgu.de\n2Faculty of Computer Science, Otto von Guericke University, Magdeburg,\nGermany, email: {dominik.lang, johannes.teschner}@st.ovgu.de\n3Knowledge Management and Discovery Lab, Otto von Guericke University,\nMagdeburg, Germany, email: [email protected] contribute a multi-class active learning approach that com-\nbines the advantages of the approaches mentioned above, i.e. opti-mizing expected performance directly while being nearly as fast asuncertainty sampling. Following the recently proposed probabilisticactive learning framework [17], the key idea is to compute the expec-tation over the true posterior by incorporating the number of labels ina neighborhood of the label candidate as a proxy for the posterior’sreliability. The resulting score is weighted with the density which weuse as a proxy for the new label’s impact on the whole dataset. We\ncompare our approach with the most relevant state-of-the art methods\nfrom the literature and present experiments on six datasets.\nIn addition, we expose the three influence factors that are used in\nour method: the posterior, the reliability of that posterior, and the im-\npact of a labeling candidate. We explain their role in active learningand evaluate their effect experimentally. To the best of our knowl-edge, we are the first that use the number of labels inside a candi-date’s neighborhood for multi-class active learning, which we showto has a strong impact on the learner’s performance. Furthermore,by adding another decision-theoretic method to propositions in thecomparative study of [14], we contribute to the important researchquestion on how to combine the posteriors of many classes into onecomparable score.\nThe next section summarizes the related work by introducing the\nbasic approaches of multi-class active learning. The main sectionpresents our new approach including an analysis of its characteris-\ntics, and is followed by our experimental evaluation. The paper is\nconcluded with a summarizing discussion.\n2 RELATED WORK\nActive learning aims to optimize the annotation of unlabeled in-stances (candidates), by selecting the ones that improve a given clas-sifier’s performance the most [23]. As active learning in general is farmore researched than multi-class active learning, we concentrate onthe most relevant work before summarizing multi-class approaches.\nMost active learning techniques define a usefulness score for each\nlabel candidate. 
A simple but common information-theoretic heuris-tic is to use the instances with highest uncertainty [18]. This uncer-\ntainty sampling method chooses instances near the classifier’s current\ndecision boundary, i.e. instances with a posterior probability near the\ndecision threshold (for binary cases 0.5). Related approaches like us-\ning the posteriors’ entropy have been addressed in [23]. In contrast,\nthe decision-theoretic expected error reduction approach estimates acandidate’s usefulness by simulating its label’s realizations and mea-suring the resulting model’s performance on a representative set ofevaluation instances [21]. This computationally expensive calcula-tion of the expected performance over all possible labels and the in-stances of the representative set builds the usefulness score [3].ECAI 2016\nG.A. Kaminka et al. (Eds.)\n© 2016 The Authors and IOS Press.\nThis article is published online with Open Access by IOS Press and distributed under the terms\nof the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).\ndoi:10.3233/978-1-61499-672-9-586586\nKrempl et al. [16] argue that using posterior estimates directly in\nthe expectation step leads to inaccuracies. They observed that these\nposterior estimates are highly unreliable especially having only fewlabeled instances. Probabilistic active learning [17] therefore tries toovercome these difficulties by introducing label statistics that includethe posterior of the positive class (they only consider binary classi-fication tasks) and the number of nearby labels as a proxy for reli-ability. The usefulness score is calculated with the expectation over\nthe true posterior as well as over the possibly appearing labels. Other\napproaches aim to reduce the classification variance by using an en-\nsemble of classifiers and request instances where the ensemble’s dis-\nagreement is high [24].\nFor active learning with multiple classes, the main challenge is\nthe mapping of posterior values into a comparable score to select\nthe most useful labeling candidate. K ¨orner and Wrobel [14] ana-\nlyzed different heuristics that have been also used by other papers:(1) usual confidence-based uncertainty sampling chooses the in-stance with the lowest posterior for the best decision, which is com-parable to selecting the instances near the decision boundary (seealso [5, 11, 28, 29]), (2) entropy-based sampling chooses the in-stance with highest posterior entropy (see also [30]), (3) Best-vs-Second-Best (BvsSB) sampling (also called margin-based) uses thedifference between the posterior of the best and the second best class(see also [5, 12]), and (4) sampling using a specific disagreement thatcombines margin-based disagreement with the maximal probability\n4.\nExpected error reduction-based methods have also been consid-\nered for multi-class active learning. Joshi et al. [13] proposed an al-gorithm called V alue of Information (V oI) that estimates the expected\nmisclassification costs plus the expected labeling costs. They com-pare the performance of the current classifier and each hypotheticalclassifier which are evaluated for each labeling candidate and eachclass on an evaluation set. As these algorithms take long for execu-tion, the authors propose three approximations for speedup. For mu-\nsic annotation applications, Chen et al. 
[4] developed a method that\nfinds a set of instances to be labeled based on a volume criterion (sim-ilar to SVM volume reduction [25]), a density score that favors denseregions and a diversity score that enforces diversity among instancesfrom the labeling set. More recently, Guo and Wang [6] developeda stepwise method consisting of an initial selection of instances tobe labeled (via random, clustering or discrepancy), followed by anactive learning step. This is based on the characteristics of One-versus-Rest (OvR) Support V ector Machines (SVMs) where a label-ing candidate can belong to one class with support from zero, one ormore than one OvR SVMs. To choose the next instance for labeling,they define a rejection score, a compatibility score and an uncertaintyscore, and propose rules on how these score have to be considered.Wang et al. [26] propose an ambiguity-based multi-class approachthat uses possibilistic membership from One-vs-Rest SVMs. Thesemembership values are between 0 and 1 but do not necessarily sum\nup to one like posteriors. Their ambiguity measure is based on fuzzy\nlogic operations and has a parameter γwhich has to be optimized and\nis not known in advance. A more theoretical work on cost-sensitivemulti-class active learning is given by [1]. He analyzed the regretand label complexity for data with labels that are generated with ageneralized linear model.\nSome approaches consider settings with different costs for mis-\nclassifying an instance of a specific class [5, 13]. Additionally, [13]also includes annotation cost, i.e. the cost of labeling one instance.\n4Note, that the selection of instance based on confidence and BvsSB would\nbe exactly the same in a two-class problem but is different for multiple\nclasses (see [23]).The acquisition of instances can be done in a successive manner or\nin form of instance batches. Most approaches choose to acquire in-stances one-by-one, except for [4, 30]. Besides SVMs (often usedwith a probabilistic version), [14] used an ensemble of trees, [11]proposed a probabilistic version of the k-nearest-neighbor (pKNN)classifier, [5] tested their algorithms on a random forest, and [30]used random walks over a markov chain.\n3 OUR METHOD\nIn this section, we propose probabilistic active learning for mul-tiple classes, an extension of the binary version proposed in[16]. In the first subsection, we present the active learningframework and explain our influence factors. Next, we proposeour Multi-class Probabilistic Active Learning (McPAL) approach,\nfollowed by the derivation of a closed-form solution. Finally, we con-clude our results and compare its behavior to existing approaches inan analytical way.\n3.1 AL framework and influence factors\nIn an active, multi-class classification tasks with Cdifferent classes,\neach instance has a feature vector /vector xand a label y∈{1,...,C},\nwhich is unknown at the beginning. As shown in Fig. 1, the activelearner successively selects the most useful instances /vector x\n∗from the\ncandidate pool Uand requests its label yfrom the oracle. After re-\ntraining the classifier with the new labeled set L∪(/vector x∗,y), this pro-\ncedure is repeated until the budget bis consumed. 
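The selection loop just described (and given as pseudocode in Figure 1 below) can be rendered as a minimal, self-contained Python sketch; the classifier, oracle and query strategy are placeholders chosen for illustration, not the authors' implementation:

```python
import numpy as np

def active_learning_loop(X_pool, oracle, budget, init_classifier, query):
    """Pool-based active learning: repeatedly pick a candidate, ask the oracle for
    its label, and retrain, until the label budget is used up (cf. Figure 1)."""
    labelled_X, labelled_y = [], []
    pool = list(range(len(X_pool)))  # indices of still-unlabelled candidates
    clf = init_classifier()
    for _ in range(budget):
        idx = query(pool, X_pool, clf, labelled_X, labelled_y)  # e.g. argmax of a usefulness score
        labelled_X.append(X_pool[idx])
        labelled_y.append(oracle(X_pool[idx]))
        pool.remove(idx)
        clf = init_classifier().fit(np.array(labelled_X), np.array(labelled_y))
    return clf, labelled_X, labelled_y
```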
In our setting, the active component's decision is based on outputs (posteriors and distribution of labeled instances) of a generative probabilistic classifier [19], which is updated according to the contents of L.

function al_framework(U){
  L = { }
  cl = init_classifier()
  for(i=1; i<=b; i++){
    x* = active_learning(U, cl, L)
    y = ask_oracle(x*)
    U = remove(U, {x*})
    L = append(L, {x*, y})
    cl = train_classifier(L)
  }
}

Figure 1. Pseudocode of the active learning framework

Throughout our research on active learning, we identified different influence factors that affect active learning positively. The labeling candidate's class posterior $\hat{P}(y \mid \vec{x})$ is the most commonly used one, as it indicates the probability of an instance $\vec{x}$ to be classified as $y$. For simplicity, we denote $\hat{\vec{p}}$ as the vector of estimated posterior probabilities, i.e. $\hat{p}_i = \hat{P}(y=i \mid \vec{x})$, $1 \le i \le C$. If the posteriors for all classes are similar, this indicates a high uncertainty of the classifier at the instance's location $\vec{x}$. Here, we have to distinguish between the aleatoric uncertainty that is caused by high Bayesian error, and the epistemic uncertainty, which is caused by a lack of information [22]. We are not able to reduce the aleatoric uncertainty, but we can acquire more labels to reduce the epistemic uncertainty in the currently considered neighborhood.

Measuring the number of nearby labels $n$ as a proxy for the reliability of the class posterior enables the separation of the aleatoric and the epistemic uncertainty. The higher this number is, the more likely it is for the observed posterior $\hat{\vec{p}}$ to be close to the unknown true posterior.

The third influence factor is the impact on the whole dataset. Weighting the usefulness score by the instances' density as a proxy for its impact prefers instances in dense regions over those in sparse ones. We assume that it is more beneficial to focus on regions with high density, as more future classification decisions benefit from the information increment there.

One of the most important questions in multi-class active learning is how to combine the different posteriors into one comparable score [14]. In binary situations, this function $\hat{p} \mapsto \mathbb{R}$ is only one-dimensional, as $\hat{p}_2 = 1 - \hat{p}_1$, and can be easily visualized. Three-class problems typically are visualized with ternary plots (see also [14, 23]). In Fig. 2, we show a ternary heatmap plot where the darker shades indicate higher usefulness. This is a barycentric coordinate system, where each position stands for one specific posterior probability. The figure shows the usefulness values for confidence-based sampling (Conf), and for the Best-vs-Second-Best (BvsSB) approach. The entropy-based score has a more circular shape (not shown here) [23].

[Figure 2: two ternary heatmaps over $(\hat{p}_1, \hat{p}_2, \hat{p}_3)$, one for Conf and one for BvsSB.]

Figure 2. Ternary heatmap plot of the usefulness of confidence-based (Conf) and Best-vs-Second-Best (BvsSB) sampling. Dark color indicates high usefulness of a posterior in that barycentric coordinate system.

In the next section, we propose our method, which combines all three influence factors in a decision-theoretic way.
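For concreteness, the combination heuristics contrasted in Fig. 2 can be sketched as follows. The scores are written so that larger values mean "more useful"; the exact monotone transformations are one common convention, not the paper's code:

```python
import numpy as np

def usefulness_scores(p_hat):
    """Multi-class uncertainty heuristics from Sec. 2 for a posterior vector p_hat."""
    p = np.asarray(p_hat, dtype=float)
    p_sorted = np.sort(p)[::-1]
    return {
        "confidence": 1.0 - p_sorted[0],                    # low posterior of the best class
        "entropy": float(-np.sum(p * np.log(p + 1e-12))),   # high posterior entropy
        "bvssb": 1.0 - (p_sorted[0] - p_sorted[1]),         # small best-vs-second-best margin
    }

print(usefulness_scores([0.4, 0.35, 0.25]))  # ambiguous candidate -> high scores
print(usefulness_scores([0.9, 0.05, 0.05]))  # confident candidate -> low scores
```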
Then, we visualize the behavior of McPAL (without the density weight) with ternary plots as well, and evaluate our theory of influence factors experimentally, comparing their effects on active learning performance in Sec. 4.2. Our mathematical symbols are summarized in Tab. 1. (All unspecified iterators start at $i=1$ and end at $C$.)

$C$ - Number of classes
$Y = \{1, \ldots, C\}$ - Vector of all possible labels
$L$ - Set of labeled instances $(x, y)$
$U$ - Set of unlabeled instances $(x, \cdot)$
$\vec{p} = (p_1, \ldots, p_C)$ - Vector of true posteriors
$\vec{k} = (k_1, \ldots, k_C)$ - Vector of frequency estimates
$n = \sum_i k_i$ - Number of observed labels (reliability)
$\hat{\vec{p}} = \vec{k}/n$ - Vector of observed posteriors
$\vec{d} = (d_1, \ldots, d_C)$ - Decision vector (see Eq. 8)
$m \in \mathbb{N}$ - Number of hypothetically considered labels
$\vec{l} = (l_1, \ldots, l_C) \in \mathbb{N}^C$ - Vector representing the number of hypothetical labels per class ($\sum_i l_i = m$)

Table 1. Overview of used mathematical symbols.

3.2 Multi-class probabilistic active learning

In probabilistic active learning for two classes, it is assumed that the appearance of a label of class $y$ is a Bernoulli experiment [17]. A label of class $i$ in the neighborhood of an instance $\vec{x}$ appears with a probability of $P(y=i \mid \vec{x}) =: p_i$, building the vector of true posteriors $\vec{p}$. For multiple classes, we naturally generalize the two-class Binomial distribution to a Multinomial one. The probability of observing a specific labeling situation $\vec{k}$ given the true posterior $\vec{p}$ is then calculated according to Eq. 1. Each entry $k_i$ in the vector $\vec{k}$ represents the number of instances with label $i$, $1 \le i \le C$, in the neighborhood of $\vec{x}$. This vector also indicates the number of observed labels $n = \sum_i k_i$, which is used as the reliability proxy ($\vec{k} = n \cdot \hat{\vec{p}}$). We use the generalized multinomial coefficient for non-integer arguments containing the $\Gamma$ function by Legendre [20].

\[ P(\vec{k} \mid \vec{p}) = \mathrm{Multinomial}_{\vec{p}}(\vec{k}) = \binom{\sum_i k_i}{k_1, \ldots, k_C} \cdot \prod_i p_i^{k_i} \quad (1) \]
\[ = \frac{\Gamma\big((\sum_i k_i)+1\big)}{\prod_i \Gamma(k_i+1)} \cdot \prod_i p_i^{k_i} \quad (2) \]

In the active learning setting, we do not know the true posteriors $\vec{p}$, but we are able to estimate the number of observations $\vec{k}$. To determine a probability distribution for the true posterior, we take the normalized likelihood function [16] as given in Eq. 3-5.

\[ \mathcal{L}(\vec{p} \mid \vec{k}) = P(\vec{k} \mid \vec{p}) \quad (3) \]
\[ P(\vec{p} \mid \vec{k}) = \frac{\mathcal{L}(\vec{p} \mid \vec{k})}{\int_{\vec{p}\,'} \mathcal{L}(\vec{p}\,' \mid \vec{k}) \, d\vec{p}\,'} = \frac{\Gamma\big(\sum_i (k_i+1)\big)}{\Gamma\big((\sum_i k_i)+1\big)} \cdot \mathcal{L}(\vec{p} \mid \vec{k}) \quad (4) \]
\[ = \frac{\Gamma\big(\sum_i (k_i+1)\big)}{\prod_i \Gamma(k_i+1)} \cdot \prod_i p_i^{k_i} \quad (5) \]

The density function $P(\vec{p} \mid \vec{k})$ has its maximum for $\vec{p} = \hat{\vec{p}}$, and the variance decreases with increasing $n = \sum_i k_i$.

Given a performance measure like accuracy, a Bayesian optimal classifier [16] selects the most probable class $\hat{y}$ (based on its observed frequency $k_{\hat{y}}$) according to Eq. 6. The true posterior $p_{\hat{y}}$ of this selected class corresponds to the resulting accuracy, as expressed by the performance function in Eq. 7.
\[ \hat{y} = \arg\max_{y \in \{1, \ldots, C\}} (k_y) \quad (6) \]
\[ \mathrm{perf}(\vec{k} \mid \vec{p}) = p_{\hat{y}} \quad (7) \]
\[ = \prod_i p_i^{d_i}, \qquad d_i = \begin{cases} 1 & \text{if } i = \hat{y} \\ 0 & \text{if } i \neq \hat{y} \end{cases} \quad (8) \]

Given such a performance function, we calculate the expected current performance for the neighborhood around $\vec{x}$ with observed frequencies in $\vec{k}$:

\[ \mathrm{expCurPerf}(\vec{k}) = \mathbb{E}_{\vec{p}}\big[ \mathrm{perf}(\vec{k} \mid \vec{p}) \big] \quad (9) \]
\[ = \int_{\vec{p}} P(\vec{p} \mid \vec{k}) \cdot \mathrm{perf}(\vec{k} \mid \vec{p}) \, d\vec{p} \quad (10) \]

The goal of our approach is (1) to estimate the gain in performance resulting from an upcoming label, based on the set of unlabeled data U and of labeled data L, and (2) to choose the candidate with the maximal gain (see Eq. 11). Having chosen a generative, probabilistic classifier cl like the Parzen window classifier [3] or the probabilistic k-nearest-neighbor [11], we are able to count the number of labeled occurrences per class given a kernel function K (see Eq. 12). The kernel function is a similarity score with $K(\vec{x}, \vec{x}) = 1$. Finally, we define our active learning score as the density-weighted performance gain given in Eq. 13.

\[ \vec{x}^{\,*} = \arg\max_{\vec{x} \in U} \big( \mathrm{alScore}(\vec{x} \mid L, U) \big) \quad (11) \]
\[ \vec{k} = \mathrm{cl}(\vec{x} \mid L); \qquad k_i = \sum_{\{(\vec{x}', y') \in L :\, y' = i\}} K(\vec{x}, \vec{x}') \quad (12) \]
\[ \mathrm{alScore}(\vec{x} \mid L, U) = P(\vec{x} \mid L \cup U) \cdot \mathrm{perfGain}\big( \mathrm{cl}(\vec{x} \mid L) \big) \quad (13) \]

We determine the performance gain in Eq. 14 by the difference between the expected performance considering $m$ new labels and the expected current performance. The latter is simply calculated as in Eq. 9; the more general expected performance (see Eq. 15) considers multiple possibilities of a labeling. Therefore, we additionally calculate the expectation value over these possible labelings $\vec{l} = (l_1, \ldots, l_C) \in \mathbb{N}^C$. Given a number $m \in \mathbb{N}$ of hypothetical labels that are allowed to be acquired in one step, with $\sum_i l_i = m$, the labeling vector represents the change of observations that would be added to the $\vec{k}$ vector if this labeling were obtained. Hence, after receiving a labeling $\vec{l}$, the classifier output changes to $\vec{k} + \vec{l}$. Note that this calculation is exact for $m=1$, but only an approximation for $m>1$, as it is unlikely to have another instance $\vec{x}'$ at exactly the same location as the current label candidate $\vec{x}$ (the similarity of $\vec{x}$ and $\vec{x}'$ would have to be 1 for it to be exact). However, as we only select one instance for labeling at each step, this effect is negligible.
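Eqs. (12) and (13) can be sketched in Python as follows. The Gaussian kernel and $\sigma = 0.7$ mirror the experimental setup in Sec. 4.1, and `perf_gain` stands for Eq. (14); a closed-form sketch of it appears after the derivation in Sec. 3.3. This is an illustration, not the authors' companion code:

```python
import numpy as np

def kernel_frequencies(x, X_lab, y_lab, n_classes, sigma=0.7):
    """Eq. (12): per-class kernel frequency estimates k_i around candidate x."""
    k = np.zeros(n_classes)
    for x_prime, y_prime in zip(X_lab, y_lab):
        sim = np.exp(-np.sum((np.asarray(x) - np.asarray(x_prime)) ** 2) / (2.0 * sigma ** 2))
        k[y_prime] += sim
    return k

def al_score(x, X_lab, y_lab, density_at_x, n_classes, perf_gain):
    """Eq. (13): density-weighted performance gain for one labeling candidate."""
    k = kernel_frequencies(x, X_lab, y_lab, n_classes)
    return density_at_x * perf_gain(k)
```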
Finally, we divide the gain by $m$ to obtain the average gain per label acquisition.

\[ \mathrm{perfGain}(\vec{k}) = \max_{m \le M} \left( \frac{1}{m} \big( \mathrm{expPerf}(\vec{k}, m) - \mathrm{expCurPerf}(\vec{k}) \big) \right) \quad (14) \]
\[ \mathrm{expPerf}(\vec{k}, m) = \mathbb{E}_{\vec{p}} \Big[ \mathbb{E}_{\vec{l}} \big[ \mathrm{perf}(\vec{k}+\vec{l} \mid \vec{p}) \big] \Big] \quad (15) \]

The labeling $\vec{l}$ is multinomially distributed given the true posterior:

\[ P(\vec{l} \mid \vec{p}) = \mathrm{Multinomial}_{\vec{p}}(\vec{l}) = \frac{\Gamma\big((\sum_i l_i)+1\big)}{\prod_i \Gamma(l_i+1)} \cdot \prod_i p_i^{l_i} \quad (16) \]

With the help of these equations it is possible to determine the next best instance for labeling, as given in Eq. 13, numerically. Achieving a good numerical accuracy would be computationally expensive and highly dependent on the number of classes $C$ as well as on the step width for integrating the true posterior $\vec{p}$.

Hence, we propose a closed-form solution for this approach in the following section that reduces the computational cost considerably.

3.3 Fast closed-form solution

To get rid of numerical integration, it is sufficient to simplify the expected performance, as the expected current performance is a special case of the former (see Eq. 17ff.).

\[ \mathrm{expCurPerf}(\vec{k}) = \mathrm{expPerf}(\vec{k}, 0) \quad (17) \]
\[ \mathrm{expPerf}(\vec{k}, m) = \mathbb{E}_{\vec{p}} \Big[ \mathbb{E}_{\vec{l}} \big[ \mathrm{perf}(\vec{k}+\vec{l} \mid \vec{p}) \big] \Big] \quad (18) \]
\[ = \int_{\vec{p}} P(\vec{p} \mid \vec{k}) \cdot \sum_{\vec{l}} P(\vec{l} \mid \vec{p}) \cdot \mathrm{perf}(\vec{k}+\vec{l} \mid \vec{p}) \, d\vec{p} \quad (19) \]
\[ = \sum_{\vec{l}} \int_{\vec{p}} P(\vec{p} \mid \vec{k}) \cdot P(\vec{l} \mid \vec{p}) \cdot \mathrm{perf}(\vec{k}+\vec{l} \mid \vec{p}) \, d\vec{p} \quad (20) \]
\[ = \sum_{\vec{l}} \int_{\vec{p}} \frac{\Gamma\big(\sum_i(k_i+1)\big)}{\prod_i \Gamma(k_i+1)} \prod_i p_i^{k_i} \cdot \frac{\Gamma\big((\sum_i l_i)+1\big)}{\prod_i \Gamma(l_i+1)} \prod_i p_i^{l_i} \cdot \mathrm{perf}(\vec{k}+\vec{l} \mid \vec{p}) \, d\vec{p} \quad (21) \]
\[ = \sum_{\vec{l}} \frac{\Gamma\big(\sum_i(k_i+1)\big)}{\prod_i \Gamma(k_i+1)} \cdot \frac{\Gamma\big((\sum_i l_i)+1\big)}{\prod_i \Gamma(l_i+1)} \cdot \int_{\vec{p}} \prod_i p_i^{k_i+l_i} \cdot \mathrm{perf}(\vec{k}+\vec{l} \mid \vec{p}) \, d\vec{p} \quad (22) \]

After separating the normalization factors from the integral, we simplify the integral by inserting the performance from Eq. 8 and by calculating the definite integral as above in Eq. 4.

\[ \int_{\vec{p}} \prod_i p_i^{k_i+l_i} \cdot \mathrm{perf}(\vec{k}+\vec{l} \mid \vec{p}) \, d\vec{p} \quad (23) \]
\[ = \int_{\vec{p}} \prod_i p_i^{k_i+l_i} \cdot \prod_i p_i^{d_i} \, d\vec{p} \quad (24) \]
\[ = \int_{\vec{p}} \prod_i p_i^{k_i+l_i+d_i} \, d\vec{p} = \frac{\prod_i \Gamma(k_i+l_i+d_i+1)}{\Gamma\big(\sum_i (k_i+l_i+d_i+1)\big)} \quad (25) \]

Reinserting the integral into Eq.
22 and sorting the terms yields the following equations.

\[ \mathrm{expPerf}(\vec{k}, m) = \sum_{\vec{l}} \frac{\Gamma\big(\sum_i(k_i+1)\big)}{\prod_i \Gamma(k_i+1)} \cdot \frac{\Gamma\big((\sum_i l_i)+1\big)}{\prod_i \Gamma(l_i+1)} \cdot \frac{\prod_i \Gamma(k_i+l_i+d_i+1)}{\Gamma\big(\sum_i(k_i+l_i+d_i+1)\big)} \quad (26) \]
\[ = \sum_{\vec{l}} \frac{\Gamma\big(\sum_i(k_i+1)\big)}{\Gamma\big(\sum_i(k_i+l_i+d_i+1)\big)} \cdot \frac{\prod_i \Gamma(k_i+l_i+d_i+1)}{\prod_i \Gamma(k_i+1)} \cdot \frac{\Gamma\big((\sum_i l_i)+1\big)}{\prod_i \Gamma(l_i+1)} \quad (27) \]

The first and second factors are simplified as follows.

\[ \frac{\Gamma\big(\sum_i(k_i+1)\big)}{\Gamma\big(\sum_i(k_i+l_i+d_i+1)\big)} \quad (28) \]
\[ = \frac{\Gamma\big(\sum_i(k_i+1)\big)}{\Gamma\big(\sum_i(k_i+1) + (\sum_i l_i) + (\sum_i d_i)\big)} \quad (29) \]
\[ = \left( \prod_{j=\sum_i(k_i+1)}^{(\sum_i(k_i+l_i+d_i+1))-1} \frac{1}{j} \right) \frac{\Gamma\big(\sum_i(k_i+1)\big)}{\Gamma\big(\sum_i(k_i+1)\big)} \quad (30) \]
\[ = \prod_{j=\sum_i(k_i+1)}^{(\sum_i(k_i+l_i+d_i+1))-1} \frac{1}{j} \quad (31) \]
\[ \frac{\prod_i \Gamma(k_i+l_i+d_i+1)}{\prod_i \Gamma(k_i+1)} = \prod_i \frac{\Gamma(k_i+l_i+d_i+1)}{\Gamma(k_i+1)} \quad (32) \]
\[ = \prod_i \frac{\big(\prod_{j=k_i+1}^{k_i+l_i+d_i} j\big)\, \Gamma(k_i+1)}{\Gamma(k_i+1)} = \prod_i \left( \prod_{j=k_i+1}^{k_i+l_i+d_i} j \right) \quad (33) \]

Using Eq. 27, 31 and 33, we get the fast version of the expected performance, a value within [0, 1].

\[ \mathrm{expPerf}(\vec{k}, m) = \sum_{\vec{l}} \left( \prod_{j=\sum_i(k_i+1)}^{(\sum_i(k_i+l_i+d_i+1))-1} \frac{1}{j} \right) \cdot \prod_i \left( \prod_{j=k_i+1}^{k_i+l_i+d_i} j \right) \cdot \frac{\Gamma\big((\sum_i l_i)+1\big)}{\prod_i \Gamma(l_i+1)} \quad (34) \]

Now, the final McPAL usefulness score from Eq. 13 is calculated using Eq. 14 and Eq. 34.

As an example, we calculate the expected performance for $m=0$, which is equivalent to the expected current performance. As mentioned before, $\hat{y} = \arg\max_{y \in \{1, \ldots, C\}} (k_y)$.

\[ \mathrm{expPerf}(\vec{k}, 0) = \sum_{\vec{l}} \left( \prod_{j=\sum_i(k_i+1)}^{(\sum_i(k_i+1)+(\sum_i l_i)+(\sum_i d_i))-1} \frac{1}{j} \right) \cdot \prod_i \left( \prod_{j=k_i+1}^{k_i+l_i+d_i} j \right) \cdot \frac{\Gamma\big((\sum_i l_i)+1\big)}{\prod_i \Gamma(l_i+1)} \quad (35) \]
\[ = \left( \prod_{j=\sum_i(k_i+1)}^{\sum_i(k_i+1)} \frac{1}{j} \right) \cdot (k_{\hat{y}}+1) \cdot 1 = \frac{k_{\hat{y}}+1}{\sum_i(k_i+1)} \quad (36) \]
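A direct, unoptimized sketch of Eqs. (14) and (26) in Python is given below. It uses the Γ-function form via log-gamma rather than the product form of Eq. (34), enumerates all labelings with $\sum_i l_i = m$, and sets $M=2$ as in the experiments; it is an illustration under these assumptions, not the authors' released code:

```python
import itertools
import math

def exp_perf(k, m):
    """Expected accuracy in the candidate's neighbourhood after m hypothetical labels (Eq. 26)."""
    C = len(k)
    total = 0.0
    for l in itertools.product(range(m + 1), repeat=C):
        if sum(l) != m:
            continue
        y_hat = max(range(C), key=lambda i: k[i] + l[i])   # Bayesian-optimal decision, Eq. (6)
        d = [1 if i == y_hat else 0 for i in range(C)]
        log_term = (
            math.lgamma(sum(ki + 1 for ki in k))
            - sum(math.lgamma(ki + 1) for ki in k)
            + math.lgamma(m + 1)
            - sum(math.lgamma(li + 1) for li in l)
            + sum(math.lgamma(k[i] + l[i] + d[i] + 1) for i in range(C))
            - math.lgamma(sum(k[i] + l[i] + d[i] + 1 for i in range(C)))
        )
        total += math.exp(log_term)
    return total

def perf_gain(k, M=2):
    """Eq. (14): best average gain per hypothetical label for m = 1..M."""
    current = exp_perf(k, 0)   # equals (k_yhat + 1) / sum(k_i + 1), cf. Eq. (36)
    return max((exp_perf(k, m) - current) / m for m in range(1, M + 1))

print(perf_gain([2.0, 1.0, 0.0]))  # higher for uncertain, weakly supported neighbourhoods
```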
3.4 Characteristics of McPAL

As briefly discussed in Sec. 3.1, there are different ways to combine the posterior estimates $\hat{\vec{p}}$ from the classifier into a usefulness score. The examples in Fig. 2 show different shapes that lead to different behavior, which is evaluated in Sec. 4.

Fig. 3 shows the ternary heatmap plots for the performance gain function of the McPAL algorithm, i.e. the active learning score without the density weight. In contrast to all other multi-class active learning approaches, McPAL does not only consider the observed probability $\hat{\vec{p}}$ but also includes the reliability $n = \sum_i k_i$, which is summarized in the frequency vector $\vec{k} = n \cdot \hat{\vec{p}}$. This extends the ternary plot by an additional degree of freedom. Therefore, we provide two exemplary figures, one showing the behavior for $n=1$, and one for $n=2$.

[Figure 3: two ternary heatmaps of the performance gain.]

Figure 3. Ternary plot of the performance gain for situations with $n = \sum_i k_i = 1$ (left) and $n=2$ (right).

The left plot of Fig. 3 shows a similar but not identical shape as the confidence-based one (Conf in Fig. 2). While contour lines for confidence-based sampling are linear, those of McPAL are slightly concave. The highest gain is in the center, which represents regions of absolute uncertainty, as the posteriors are equal. The lowest gains are in the corners of the triangle. An increase of the reliability $n$ decreases the gain (see right plot), as the epistemic uncertainty (caused by a lack of information) decreases. This means that there are situations where instances with a non-equal posterior vector are preferred over those with equal posteriors, if there is more evidence that the equal posteriors are likely to be correct.

The number of hypothetical label acquisitions $M$ in the neighborhood of a labeling candidate is bounded by the globally available budget. In the beginning, it is sufficient to have $M=1$, as one instance has the highest average benefit for the classification task. Over time, we need more hypothetical labels to achieve this benefit. In our experiments, it was sufficient to set $M=2$. Applications with more labels should adjust $M$ to greater values accordingly.

From a decision-theoretic view, it is more reasonable to prefer confidence-based active learning over entropy or best-vs-second-best, but the reliability makes a huge difference in the performance, as the next section will show.

4 EVALUATION

The goals of our evaluation are twofold: on the one hand, we show the advantage of combining our previously defined influence factors, and on the other hand we compare our multi-class probabilistic active learning approach with state-of-the-art methods. All experiments are conducted based on the setup explained in the following subsection.

4.1 Experimental setup

The proposed method and several other active learning strategies are tested on six datasets, labeling instances successively until the available budget of b=60 label acquisitions has been exhausted. This is done on multiple, seed-based splits of the datasets into independent training and test subsets (training 67%, test 33% of the data), where the number of different training-test splits for the smaller datasets (ecoli, glass, iris, wine) is 100 and for the large datasets (vehicle, yeast) is set to 50 due to execution time. All experiments are reported by the mean and standard deviation of misclassification cost across all splits. Additionally, we compared each algorithm on all datasets against our method McPAL to determine if our method is significantly better. Therefore, we used a Wilcoxon signed rank test [27] at a p-value of 0.05 and performed the Hommel procedure [10] to prevent the results from errors induced by multiple testing.

The most used visualization of evaluation results are learning curves, which plot the performance against the number of acquired labels. Our learning curves in Fig. 4 and 5 show the classification error of each active learner on the y-axis, the standard deviation of the error across all splits indicated as an error bar, and the number of instances sampled for the labeled set on the x-axis. In addition to these plots, the results are given in Tab. 4, showing the error and standard deviation of the different active learning methods for all used datasets. The tables show the learner's performance at three different steps, i.e. after 20, 40 and 60 labels have been acquired. Since 60 is the maximum number of sampled instances in the experiments, these steps show the performance in the beginning, intermediate and end phase of the learning process. All results are reported separately for each classifier and dataset.
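The significance testing described above can be sketched with SciPy and statsmodels as follows; the per-split cost arrays below are synthetic stand-ins for the real results in Tab. 4, and the two competitor names are only examples:

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
mcpal = rng.normal(22.7, 4.4, size=100)                       # hypothetical per-split costs of McPAL
competitors = {"Conf": rng.normal(24.6, 7.1, size=100),       # hypothetical competitor results
               "Entr": rng.normal(26.9, 8.0, size=100)}

# Paired Wilcoxon signed-rank tests, then Hommel correction for multiple testing
p_values = [wilcoxon(mcpal, costs).pvalue for costs in competitors.values()]
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="hommel")
for name, p_raw, p_h, sig in zip(competitors, p_values, p_adj, reject):
    print(f"{name}: raw p={p_raw:.4f}, Hommel-adjusted p={p_h:.4f}, significant={sig}")
```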
We computed our experiments on acomputer cluster running the Neurodebian [8] system.\nBesides the proposed method of this paper, six other active\nlearning strategies are used. The McPAL method is executed withM=2 , as higher M just increased the execution time but\ndid not change the performance. As a standard baseline, we usea randomly sampling method (Rand). Confidence-based sampling\n(Conf) selects the instance with the lowest maximal posterior ( x\n∗=\nD. Kottke et al. / Multi-Class Probabilistic Active Learning 590\narg minx∈Umaxy∈Yˆpy) [11]. The next approach uses the shan-\nnon entropy to model the uncertainty of an instance (Entr) [13].\nBest-vs-Second-Best (BvsSB) samples this instance of the unlabeled\nset that minimizes the difference of the posterior probabilities ofthe most probable and the second most probable class [12, 13, 14].Maximum-Expected-Cost (MaxECost) determines the value of an\ninstance based on the expected cost associated with the misclassifica-\ntion of that instance. Consequently, the learner samples the instance\ntied to this score [5]. The last strategy belongs to the expected error\nreduction based methods. The original V alue of Information (VoI)\ncriterion as suggested by Joshi et al. [13] selects the instance /vector xthat\nminimizes a risk measure defined by them. It has to be mentionedthat the computational effort of this algorithm forced us to excludeit from the experiments on the vehicle and yeast datasets, since theypossess a large number of instances and/or classes, leading to infea-sible execution times.\nActive learning algorithms require robust classifiers for robust use-\nfulness estimation. Therefore, we choose generative classifiers [19],namely the Parzen window classifier (PWC) [3], and a probabilis-tic variant of the k-nearest-neighbor classifier (pKNN, with k=9 ;\nreceived good results for our classification tasks (between 3 and 9classes)) proposed by Jain and Kapoor [11]. These classifiers can be\nused with any arbitrary similarity function. As the optimization of the\noverall performance level is not the scope of this paper, we choose to\nsimply standardize each attribute (z-standardization) and use an uni-\nvariate Gaussian kernel with fixed standard deviation of σ=0.7for\nall datasets and active learning algorithms. This ensures fair compa-\nrability that is independent of a classifier bias.\nTable 2. Datasets with the number of instances, the number of attributes\nand the class frequencies.\nDataset #Inst. #Attr. #Instances per class\nEcoli 336 8 143, 77, 52, 35, 20, 5, 2, 2\nGlass 214 10 70, 76, 17, 13, 9, 29Iris 150 4 50, 50, 50V ehicle 846 18 212, 217, 218, 199Wine 178 13 59, 71, 48\nY east 1484 8 463, 429, 244, 163, 51, 44, 35, 30, 20\nWe evaluate our algorithm on six multi-class datasets from the\nUCI repository [2]. The distribution of classes and the number of\ninstances and attributes are summarized in Tab. 2. The ecoli dataset\nwas originally used for predicting protein localization sites in eu-karyotic cells. The attributes describe properties of proteins. Glass\nwas originally generated for classification of types of glass left ata crime scene. The attributes describe chemical ingredients to pre-dict for example whether the glass is from a car window or a win-dow of a building. The iris dataset classifies the type of an iris plant,\nthe features describe measures of the plant. V ehicle contains features\nof car models for predicting the manufacturer. 
The attributes of the\nwine dataset describe the chemical ingredients of a wine instance.\nThe class values are derived from three different cultivars. The yeast\ndataset is also used for predicting the localization site of protein in\nbacteria. The first column, which held the sequence name, was re-moved.\nThe complete results together with an implementation are avail-\nable at our companion website\n6.\n6http://kmd.cs.ovgu.de/res/mcpal/4.2 Impact of influence factors\nIn Sec. 3.1, we introduced three different influence factors that areconsidered in McPAL. Fig. 4 shows learning curves on selecteddatasets and classifiers of McPAL variants with different input pa-rameters using the previously described experimental setup. Thereby,we aim to measure the importance of the different influence factorsposterior, reliability, and impact. In addition to the original McPAL\nalgorithm, we show variants that exclude information either (1) aboutthe reliability by normalizing the /vectorkvector to/summationtext/vectork=n=1 (denoted\nw/o reliability), or (2) about the posterior by replacing thekernel frequency estimate with a uniform one k\ni=n/C,1≤i≤C\n(denoted w/o posterior), or (3) about the density by setting it\nto a constant (denoted w/o impact).\nOur selection in Fig. 4 shows that the combination of all influence\nfactors works best. In some cases, the variant without impact is betterthan the McPAL method. We explain this behavior with the fact thatthe density, which is used as a proxy for the impact of a label on thecomplete dataset, gets inaccurate. Especially when there are manylabels added to the dataset, this estimate gets worse as the influence\nalso depends on the explicit label situation on the dataset. Neverthe-\nless, the density improved the overall performance although leaving\nit out is less critical than leaving out one of the other factors.\nEspecially the results on yeast with the PWC are interesting. Here,\nleaving out the reliability or the posterior leads to no performance\nimprovement, but unifying these approaches (McPAL) achieves thelowest error.\n4.3 Competitiveness of our method\nFig. 5 shows the learning curves of the experiment results with thepKNN classifier, Tab. 4 shows the results using the PWC. As shownin Tab. 4 the McPAL algorithm outperforms its competitors consis-tently on 4 of the 6 datasets (best performance highlighted in boldtext), for the first 20 sampled instances even on 5 out of 6. Using thePWC, our method is only the second best by a close margin after 40and 60 samples on the vehicle data. After 20 samples random sam-pling performed best. On the wine dataset, our method scores bestat 20 sampled instances but falls behind Entr later. As wine data is\neasy to learn, it is important to mention that the performance almost\nconverged at 30 labels. In general the BvsSB andEntr algorithms\nseem to be the most consistent competitors to McPAL in the experi-\nments, the former being the best scoring on the vehicle dataset after40 samples and the latter outperforming McPAL on the wine datasetafter 40 samples.\nA good active learning algorithm is characterized by a fast con-\nvergence to a good final performance. As can be seen in Fig. 5, ourproposed method manages to reduce the classification error quickerthan its competitors, in some cases even starting out with a lowererror (e.g. ecoli, glass, yeast). Over all datasets, McPAL reduces theerror quicker than the other algorithms in the early steps. 
On top ofthat, the McPAL algorithm shows a lower standard deviation acrossall trials compared its competitors (indicated by the error bars in theplots and the brackets in Tab. 4), making it not only the best perform-ing but also the most stable method in the experiments.\nFor another perspective on the results, the performance of the al-\ngorithms in comparison to randomly sampling instances (Rand, greydotted line) should be considered. In case of both the vehicle andyeast dataset McPAL’s competitors surpass random instance sam-pling only late in the learning process in terms of classification error.Even on the iris dataset Conf, BvsSB andVoI struggle to perform\nbetter than random selection.D.Kottkeetal./Multi-Class Probabilistic Active Learning 591\nFigure 4. Learning curves of mean misclassification cost (including standard deviation as error bars) different variants of the McPAL algorithm on all six\ndatasets. The upper plots show results from the pKNN classifier, the lower ones with the PWC.\nTable 3. Mean execution time for each algorithm for choosing one\ninstance for labeling on the specified dataset in s(sorted by dataset size)\nDataset McPAL BvsSB MaxEC. Conf Entr V oI Rand\nIris 0.363 0.085 0.083 0.097 0.092 15.94 0.001\nWine 0.584 0.145 0.148 0.153 0.147 36.22 0.001Glass 1.794 0.200 0.205 0.204 0.204 136.1 0.001Ecoli 4.590 0.306 0.317 0.313 0.308 518.5 0.001V ehicle 2.128 0.389 0.394 0.385 0.386 NA 0.001Y east 28.06 1.175 1.207 1.171 1.186 NA 0.001\nIn Tab. 3, we summarized the mean execution time of all algo-\nrithms on every dataset. Our proposed method does require moretime to sample an instance than its competitors with exception of theV oI algorithm, which takes much longer than any other algorithmused in the experiments. Due to the higher complexity of the McPALmethod in comparison to more simple methods like uncertainty-\nbased ones, a longer execution time is to be expected. Considering\nthe performance and stability of McPAL mentioned before, the in-creased time requirement is still a good trade off. In contrast to thefast methods, McPAL has an additional factor which is the sum overeach labeling that is dependent on the mvalue.\n5 CONCLUSION\nThis paper addresses active learning for multiple classes. This chal-lenging topic opens up different aspects like the combination ofthe posterior vector into one comparable score. In this paper, we\nproposed a new multi-class probabilistic active learning method\n(McPAL) that addresses this problem in a decision-theoretic way. To\nthis end, we developed a generalized probabilistic model that com-bines all of our mentioned influence factors impact, posterior, andthe reliability of the posterior. Our approach directly optimizes a per-formance measure like accuracy, is non-myopic and fast. We showedhow the influence factors depend on each other in our probabilisticframework and evaluated their behavior in multiple experiments. Es-pecially the combination of the posterior and its reliability makes ahuge difference. Our experimental comparison with the most relevantmulti-class active learning approaches shows that McPAL is superiorin most cases or at least comparable. We suggest that our approachcan still be optimized by replacing the proxies of our influence fac-tors by even more appropriate ones, which will be part of our futureresearch. 
The complete results together with an implementation are\navailable at our companion website\n7.\nACKNOWLEDGEMENTS\nWe would like to thank the reviewers, Michael Hanke, Alex Waite\nand our colleague Pawel Matuszyk for all discussions. Ternary plotsare generated with python-ternary [9].\n7http://kmd.cs.ovgu.de/res/mcpal/D.Kottkeetal./Multi-Class Probabilistic Active Learning 592\nFigure 5. Learning curves of mean misclassification cost (including standard deviation as error bars) of McPAL and its competitors on all six datasets using\nthe pKNN classifier.\nTable 4. Mean misclassification cost and its standard deviation of the all algorithms on all six datasets using the Parzen window classifier. We report the\nresults after 20, 40, and 60 acquired labels. The best method is printed in bold numbers. Results showing significant superiority of McPAL against other\nalgorithms are indicated with *.\n20 samples ecoli glass iris vehicle wine yeast\nMcPAL 22.70 (±4.45) 30.17 (±4.22) 3.94 (±1.97 ) 149.14 (± 11.94) 2.66 (±1.43) 275.24 (±26.35)\nBvsSB 24.75 (± 4.84) * 35.95 (± 5.57 ) * 12.63 (± 7.06 ) * 148.68 (± 18.25) 2.80 (± 1.67 ) 289.90 (± 23.13)*\nMaxECost 25.42 (± 6.63) * 33.33 (± 5.09) * 8.23 (± 6.24) * 155.98 (± 17.71) 2.95 (± 1.88) 294.20 (± 32.95)*\nConf 24.64 (± 7.07 ) * 33.93 (± 5.02) * 12.48 (± 7.32) * 156.52 (± 17.19) 2.90 (± 1.79) 292.92 (± 34.42)*\nEntr 26.94 (± 8.01) * 33.04 (± 5.50) * 14.61 (± 3.17 ) * 153.44 (± 18.82) 3.41 (± 1.76 ) * 298.60 (± 32.63)*\nVoI 40.14 (± 9.59) * 38.20 (± 3.98) * 16.55 (± 2.67 ) * NA 2.89 (± 2.68)N A\nRand 32.52 (± 7.89) * 36.69 (± 5.00) * 9.91 (± 4.47 )* 145.38 (±13.27 ) 4.35 (± 3.04) * 300.12 (± 23.56 )*\n40 samples ecoli glass iris vehicle wine yeast\nMcPAL 19.15 (±4.06 ) 29.14 (±4.22) 2.85 (±1.58) 125.88 (± 8.99) 1.78 (± 1.06 ) 258.36 (±24.40)\nBvsSB 21.02 (± 4.42) * 32.28 (± 4.36 ) * 11.78 (± 7.78)* 122.90 (±14.43) 1.92 (± 1.26 ) 273.52 (± 22.95)*\nMaxECost 20.80 (± 4.10) * 29.70 (± 4.46 ) 7.70 (± 6.44) * 131.82 (± 14.44) 1.90 (± 1.16 ) 274.54 (± 30.65)*\nConf 19.60 (± 4.30) 29.79 (± 4.87 ) 11.69 (± 7.79) * 133.56 (± 14.90) * 1.94 (± 1.19) 276.36 (± 32.40)*\nEntr 23.55 (± 4.80) * 30.64 (± 4.61) * 13.88 (± 3.49) * 139.02 (± 18.57 )* 1.77 (±1.14) 284.38 (± 28.05)*\nV oI 41.46 (± 7.22) * 38.06 (± 3.78) * 16.74 (± 2.58) * NA 1.92 (± 1.89)N A\nRand 29.80 (± 6.57 ) * 34.57 (± 5.18) * 8.28 (± 4.03) * 129.88 (± 13.31) 2.65 (± 1.61) * 281.84 (± 25.48)*\n60 samples ecoli glass iris vehicle wine yeast\nMcPAL 18.41 (±3.69) 27.08 (±3.95) 5.81 (±2.54) 115.26 (± 7.60) 1.63 (± 1.06 ) 244.12 (±20.71)\nBvsSB 19.69 (± 4.44) * 29.71 (± 4.22) * 12.71 (± 7.64)* 113.42 (±9.95) 1.76 (± 1.13) 259.68 (± 22.66 )*\nMaxECost 20.29 (± 4.55) * 27.99 (± 4.25) 8.12 (± 5.62) * 120.06 (± 12.42) * 1.66 (± 1.03) 257.60 (± 26.75)*\nConf 19.91 (± 4.29) * 28.46 (± 4.59) * 12.40 (± 7.59) * 122.34 (± 13.39) * 1.62 (± 1.12) 259.98 (± 25.76 )*\nEntr 22.54 (± 4.55) * 31.65 (± 4.91) * 11.94 (± 4.07 ) * 126.06 (± 14.60)* 1.53 (±1.00) 272.44 (± 24.93)*\nV oI 34.20 (± 5.78) * 37.22 (± 4.72) * 15.06 (± 3.49) * NA 1.54 (± 1.22)N A\nRand 28.32 (± 5.65) * 33.55 (± 5.17 ) * 6.92 (± 2.76 ) * 123.28 (± 13.26 ) * 2.30 (± 1.43) * 276.42 (± 26.98)*D.Kottkeetal./Multi-Class Probabilistic Active Learning 593\nREFERENCES\n[1] Alekh Agarwal, ‘Selective sampling algorithms for cost-sensitive mul-\nticlass prediction’, in Proceedings of the 30th International Conference\non Machine Learning, pp. 1220–1228, (2013).\n[2] Arthur Asuncion and David J. Newman. 
UCI machine learning reposi-\ntory, 2015.\n[3] Olivier Chapelle, ‘Active learning for parzen window classifier’, in Pro-\nceedings of the Tenth International Workshop on Artificial Intelligence\nand Statistics, pp. 49–56, (2005).\n[4] Gang Chen, Tian-jiang Wang, Li-yu Gong, and Perfecto Herrera,\n‘Multi-class support vector machine active learning for music annota-tion’, International Journal of Innovative Computing, Information and\nControl, 6(3), 921–930, (2010).\n[5] Po-Lung Chen and Hsuan-Tien Lin, ‘Active learning for multiclass\ncost-sensitive classification using probabilistic models’, in Conference\non Technologies and Applications of Artificial Intelligence (TAAI) , pp.\n13–18, (2013).\n[6] Husheng Guo and Wenjian Wang, ‘An active learning-based svm multi-\nclass classification model’, Pattern Recognition, 48(5), 1577–1597,\n(2015).\n[7] Isabelle Guyon, Gavin Cawley, Gideon Dror, Vincent Lemaire, and\nAlexander Statnikov, ‘Active learning challenge’, Challenges in ma-\nchine learning, 6, (2012).\n[8] Y aroslav O Halchenko and Michael Hanke, ‘Open is not enough. let’s\ntake the next step: an integrated, community-driven computing platformfor neuroscience’, Frontiers in neuroinformatics, 6, (2012).\n[9] Marc Harper, Bryan Weinstein, Cory Simon, chebee7i, Nick Swanson-\nHysell, The Gitter Badger, Maximiliano Greco, and Guido Zuidhof.python-ternary: Ternary plots in python, December 2015.\n[10] Gerhard Hommel, ‘A stage-wise rejective multiple test procedure based\non a modified bonferroni test’, Biometrika, 75, 383–386, (2010).\n[11] Paril Jain and Ajay Kapoor, ‘Active learning for large multi-class prob-\nlems’, in IEEE Conference on Computer Vision and Pattern Recogni-\ntion (CVPR), pp. 762–769. IEEE, (2009).\n[12] Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos, ‘Multi-\nclass active learning for image classification’, in IEEE Conference on\nComputer Vision and Pattern Recognition (CVPR), pp. 2372–2379,(June 2009).\n[13] Ajay J Joshi, Fatih Porikli, and Nikolaos P Papanikolopoulos, ‘Scal-\nable active learning for multiclass image classification’, IEEE Trans-\nactions on Pattern Analysis and Machine Intelligence, 34(11), 2259–\n2273, (2012).\n[14] Christine K ¨orner and Stefan Wrobel, ‘Multi-class ensemble-based ac-\ntive learning’, in European Conference on Machine Learning (ECML),\n687–694, Springer, (2006).\n[15] Daniel Kottke, Georg Krempl, and Myra Spiliopoulou, ‘Probabilistic\nactive learning in data streams’, in Advances in Intelligent Data Analy-\nsis XIV - 14th Int. Symposium, IDA 2015, St. Etienne, France , eds., Tijl\nDe Bie and Elisa Fromont, volume 9385 of Lecture Notes in Computer\nScience, pp. 145–157. Springer, (2015).\n[16] Georg Krempl, Daniel Kottke, and Vincent Lemaire, ‘Optimised prob-\nabilistic active learning (OPAL) for fast, non-myopic, cost-sensitive ac-tive classification’, Machine Learning (Special Issue of ECML PKDD\n2015), 100(2), 449–476, (2015).\n[17] Georg Krempl, Daniel Kottke, and Myra Spiliopoulou, ‘Probabilistic\nactive learning: A short proposition’, in Proceedings of the 21st Eu-\nropean Conference on Artificial Intelligence (ECAI2014), August 18\n– 22, 2014, Prague, Czech Republic, eds., Torsten Schaub, GerhardFriedrich, and Barry O’Sullivan, volume 263 of Frontiers in Artificial\nIntelligence and Applications, pp. 1049–1050. IOS Press, (2014).\n[18] David D. Lewis and William A. 
Gale, ‘A sequential algorithm for train-\ning text classifiers’, in Proceedings of the 17th Annual International\nACM SIGIR Conference on Research and Development in InformationRetrieval, SIGIR ’94, pp. 3–12, New Y ork, NY , USA, (1994). Springer-V erlag New Y ork, Inc.\n[19] Andrew Y . Ng and Michael I. Jordan, ‘On discriminative vs. genera-\ntive classifiers: A comparison of logistic regression and naive bayes’,inAdvances in Neural Information Processing Systems 14 (NIPS), pp.\n841–848, (2001).\n[20] William H. Press, Brian P . Flannery, Saul A. Teukolsky, and William T.\nV etterling, Numerical Recipes in F ortran 77: The Art of Scientific Com-\nputing, Cambridge University Press, 2 edn., 1992.\n[21] Nicholas Roy and Andrew McCallum, ‘Toward optimal active learningthrough sampling estimation of error reduction’, in Proc. of the 18th\nInt. Conf. on Machine Learning, ICML 2001, Williamstown, MA, USA,pp. 441–448, San Francisco, CA, USA, (2001). Morgan Kaufmann.\n[22] Robin Senge, Stefan B ¨osner, Krzysztof Dembczy ´nski, J ¨org Haasenrit-\nter, Oliver Hirsch, Norbert Donner-Banzhoff, and Eyke H ¨ullermeier,\n‘Reliable classification: Learning classifiers that distinguish aleatoricand epistemic uncertainty’, Information Sciences, 255, 16–29, (January\n2014).\n[23] Burr Settles, Active Learning, number 18 in Synthesis Lectures on Ar-\ntificial Intelligence and Machine Learning, Morgan and Claypool Pub-lishers, 2012.\n[24] Katrin Tomanek and Udo Hahn, ‘Reducing class imbalance during ac-\ntive learning for named entity annotation’, in Proceedings of the 5th\nInternational Conference on Knowledge Capture (K-CAP), September\n1-4, 2009, Redondo Beach, California, USA, eds., Y olanda Gil and\nNatasha Fridman Noy, pp. 105–112. ACM, (2009).\n[25] Simon Tong and Edward Chang, ‘Support vector machine active learn-\ning for image retrieval’, in Proceedings of the Ninth ACM International\nConference on Multimedia, pp. 107–118. ACM, (2001).\n[26] R. Wang, C. Y . Chow, and S. Kwong, ‘Ambiguity-based multiclass ac-\ntive learning’, IEEE Transactions on Fuzzy Systems, 24(1), 242–248,\n(Feb 2016).\n[27] Frank Wilcoxon, ‘Individual comparisons by ranking methods’, Bio-\nmetrics bulletin, 1(6), 80–83, (1945).\n[28] Rong Y an and Alexander Hauptmann, ‘Multi-class active learning for\nvideo semantic feature extraction’, in IEEE International Conference\non Multimedia and Expo (ICME), volume 1, pp. 69–72. IEEE, (2004).\n[29] Rong Y an, Jie Y ang, and Alexander Hauptmann, ‘Automatically label-\ning video data using multi-class active learning’, in Proceedings of the\nNinth IEEE International Conference on Computer Vision, pp. 516–\n523. IEEE, (2003).\n[30] Yi Y ang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G\nHauptmann, ‘Multi-class active learning by uncertainty sampling withdiversity maximization’, International Journal of Computer Vision,\n113(2), 113–127, (2014).D.Kottkeetal./Multi-Class Probabilistic Active Learning 594", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "MwkFoZMPsPK", "year": null, "venue": "ECAI2012", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-098-7-786", "forum_link": "https://openreview.net/forum?id=MwkFoZMPsPK", "arxiv_id": null, "doi": null }
{ "title": "ArvandHerd: Parallel Planning with a Portfolio.", "authors": [ "Richard Anthony Valenzano", "Hootan Nakhost", "Martin Müller", "Jonathan Schaeffer", "Nathan R. Sturtevant" ], "abstract": "ArvandHerd is a parallel planner that won the multicore sequential satisficing track of the 2011 International Planning Competition (IPC 2011). It assigns processors to run different members of an algorithm portfolio which contains several configurations of each of two different planners: LAMA-2008 and Arvand. In this paper, we demonstrate that simple techniques for using different planner configurations can significantly improve the coverage of both of these planners. We then show that these two planners, when using multiple configurations, can be combined to construct a high performance parallel planner. In particular, we will show that ArvandHerd can solve more IPC benchmark problems than even a perfect parallelization of LAMA-2011, which won the satisficing track at IPC 2011. We will also show that the coverage of ArvandHerd can be further improved if LAMA-2008 is replaced by LAMA-2011 in the portfolio.", "keywords": [], "raw_extracted_content": "ArvandHerd: Parallel Planning with a Portfolio\nRichard Valenzano, Hootan Nakhost, Martin M ¨uller, Jonathan Schaeffer1and Nathan Sturtevant2\nAbstract. ArvandHerd is a parallel planner that won the multi-\ncore sequential satisficing track of the 2011 International Planning\nCompetition (IPC 2011). It assigns processors to run different mem-bers of an algorithm portfolio which contains several configurationsof each of two different planners: LAMA-2008 andArvand.I n\nthis paper, we demonstrate that simple techniques for using differ-ent planner configurations can significantly improve the coverage\nof both of these planners. We then show that these two planners,\nwhen using multiple configurations, can be combined to construct\na high performance parallel planner. In particular, we will show\nthatArvandHerd can solve more IPC benchmark problems than\neven a perfect parallelization of LAMA-2011, which won the sat-\nisficing track at IPC 2011. We will also show that the coverage of\nArvandHerd can be further improved if LAMA-2008 is replaced\nbyLAMA-2011 in the portfolio.\n1 Introduction\nIn recent years, the rate at which processor speed is increasing hascurtailed, while the proliferation and availability of multi-core tech-nology has substantially increased. This development suggests that tobest utilize modern hardware when constructing automated satisfic-ing planning systems, it is necessary to consider parallel approaches.\nPast work on building parallel planning systems, such as PBNF\n[2] and HDA* [12], has generally focused on parallelizing a singleheuristic search algorithm. While these approaches have successfully\nimproved run-time, satisficing planners that use these or similar tech-\nniques on shared memory machines should not be expected to solve\nmany more problems than their single-core counterparts. This would\nbe true for even a perfect parallelization of LAMA-2011 [18], the\nwinner of the single-core sequential satisficing track of IPC 2011,that runs exactly ktimes faster than the single-core version when run\nonkcores. Given a time limit T, the performance of such a k-core\nsystem can be simulated by running LAMA-2011 fork·Ttime and\ncounting any problem solved within this time limit as having beensolved by the k-core parallelization in time T. 
This simulation indi-\ncates that even with such a speedup, coverage only increases slightly.For example, when given a 6GB memory limit and a 30minute\ntime limit, even the 8-core version of this perfect parallelization of\nLAMA-2011 would solve only 6more problems than the 721 solved\nby the standard single-core version when tested on all 790 problems\nfrom the 2006, 2008, and 2011 IPC competitions.\nAn alternative to parallelizing a single algorithm is to run mem-\nbers of an algorithm portfolio in parallel. This involves tackling each\nproblem using a set of strategies that differ in either their configura-\ntion (ie. different parameter values or other settings) or in the under-\n1University of Alberta, Canada, email: {valenzan, nakhost, mmueller,\njonathan}@cs.ualberta.ca\n2University of Denver, USA, email: [email protected] algorithm, and running these strategies simultaneously on dif-ferent cores. This technique is inspired by two considerations. First,\nplanners are expected to solve problems from a diverse set of do-\nmains, and no single algorithm can be expected to dominate all oth-ers on all domains. Second, this approach offers a simple alternativeto the difficult process of parallelizing a single-core algorithm and itmostly avoids overhead from communication and synchronization.\nParallelizing a single memory-heavy algorithm in a shared-\nmemory environment can also be problematic as it is often the avail-able memory that limits coverage. In these cases, any speedup seen\nonly causes memory to be exhausted more quickly. This behaviour\nis seen in the simulated LAMA-2011 parallelization, as the 8-core\nsimulation ran out of memory on 52problems. This means that re-\ngardless of how many more cores are used, at most 738 of the 790\nproblems can be solved using LAMA-2011 without an increase in\nmemory. However, running a low-memory planner alongside a high-memory planner in a parallel portfolio can increase the coverage ifthe portfolio is selected properly.\nArvandHerd is the first parallel planning system to success-\nfully combine disparate planning approaches to create a state-of-the-art parallel planner for shared memory machines. It won the multi-core sequential satisficing track of IPC 2011 [4] and was designed\nspecifically to avoid the inherent limitations of parallelizing a sin-\ngle memory-heavy planning algorithm that were described above.Planners competing in this track were run on a 4-core machine with\na maximum of 30 minutes of run-time and 6 GB of memory. In\nArvandHerd, three cores were used to run a set of configurations ofthe linear-space random-walk-based planner Arvand, and the final\nprocessor was used to run the W A*-based LAMA-2008 planner.\nThis paper extends the description of ArvandHerd that was\ngiven in [20] by providing full documentation of the design choicesmade with respect to coverage, and then by analyzing the effective-\nness of these choices. We will demonstrate that each of the Arvand\nandLAMA-2008 planners can be enhanced through the use of mul-\ntiple configurations and restarts. While these techniques have been\nsuccessfully applied in the satisfiability community, we demonstratethat they are similarly successful in planning. In Arvand, we will\nalso describe an online learning configuration selection system which\neffectively speeds up the search. 
Finally, we will show that com-\nbining these two planners in a parallel portfolio solves more IPC\nbenchmark problems than several state-of-the-art planners, even ifthey could be effectively parallelized.\n2 Related Work\nWork in the area of parallel planning has typically focused on theparallelization of heuristic search algorithms. This includes HDA*[12] and PBNF [2], two recent and successful parallelizations of A*which exhibit impressive speedups in distributed and shared memoryECAI 2012\nLuc De Raedt et al. (Eds.)\n© 2012 The Author(s).\nThis article is published online with Open Access by IOS Press and distributed under the terms\nof the Creative Commons Attribution Non-Commercial License.\ndoi:10.3233/978-1-61499-098-7-786786\nsystems, respectively. As both algorithms involve parallelizing A*,\nthey will also have the same memory limitations as A* on shared-memory machines. As such, these algorithms cannot be expected toimprove the coverage of A* except where search time is limited.\nNote, there is nothing precluding the use of the parallel algorithms\nin a portfolio. If a parallelized algorithm is included, it can be allottedseveral cores on which to run while the remaining cores will run the\nrest of the portfolio. Alternatively, the parallelized algorithm can be\nrun until it hits some resource limit, at which point the remainder of\nthe portfolio will be run. This approach would benefit from both the\nspeedups seen with the parallelized algorithm and the coverage im-\nprovements seen with a portfolio. As such, research into parallelizingindividual search algorithms remains important work.\nUsing multiple configurations has also previously been shown to\neffectively improve the coverage and run-time of several single-agentsearch algorithms [21]. These ideas were central in the constructionofArvandHerd which takes the idea a step further by using multi-\nple planners in addition to multiple configurations.\nThe portfolio approach was initially shown to effectively trade-\noff run-time and risk for heuristic algorithms for computa-tionally hard problems [11]. At IPC 2011, this approach wasalso used successfully in the single-core satisficing track byFast-Downward Stone Soup (FDSS) [9], which solved the\nsecond most problems of all competing planners [4]. This planner\nemploys a variety of planning configurations in sequence, by giving\neach a time limit and restarting upon failure with a new configura-\ntion. The time limit assigned to each configuration is determined off-\nline based upon the coverage that these configurations collectivelyachieved on a training set. However, while FDSS configurations dif-\nfer in the heuristics used and the search enhancements employed,they do not differ in their operator ordering, which will be shown toimprove coverage in Section 5.1. All of the configurations are also ei-ther W A* or hill-climbing based and so this portfolio is not as diverseas that of ArvandHerd. As a result, it lags behind ArvandHerd\nin coverage as we will show in Section 6.1.\nBelow we also discuss the use of restarts in W A*-based plan-\nners as a way to improve coverage. While LAMA-2008 already uses\nrestarts, it does so in a very different way: it restarts with a less greedy\nconfiguration whenever a solution is found in an effort to find better\nquality solutions [16]. The restarts we consider have been shown to\nsignificantly improve coverage in SA T-solvers due to the long-tailed\ndistribution of the problem-solving time [5]. 
The results below suggest that planning domains exhibit a similar property.

In IPC 2011, ArvandHerd solved 236 out of 280 problems [4]. As our focus is on coverage and no other multi-core planner solved more than 184 problems (achieved by AyAlsoPlan [3]), we do not compare with any other competition planners below.

3 Parallel Planning with a Portfolio

In order to maximize the coverage of a planning portfolio, the portfolio members should be complementary in terms of their strengths and weaknesses. This requires diversity in the set of planners selected. If the portfolio is to be used on a shared-memory machine, then another important design decision relates to the way in which memory is partitioned between the planners. For example, if a two-planner portfolio contains two approaches for which a large amount of memory is essential, the collective coverage may suffer if each planner is assigned only half of the available memory. Avoiding this behaviour is therefore integral to properly selecting a portfolio and was an important consideration when building ArvandHerd.

3.1 The ArvandHerd Portfolio

LAMA-2008 [17], the winner of the sequential satisficing track of IPC 2008, was a state-of-the-art planner prior to IPC 2011, making it a natural selection as a member of the ArvandHerd portfolio. LAMA-2008 is WA*-based and can be memory-heavy. As such, although the ArvandHerd portfolio contains several configurations of LAMA-2008, it avoids the memory-partitioning issues mentioned above by running only a single LAMA-2008 configuration at a time. The additional LAMA-2008 configurations are only used if the first runs out of memory, in which case the planner restarts with another configuration. In Section 5, this planner is described in more detail and this restarting procedure is evaluated.

The ArvandHerd portfolio also contains several configurations of Arvand [15]. This planner uses a random-walk-based search, which makes it ideal for use alongside LAMA-2008 in a portfolio for several reasons. First, this approach is very different from WA*, and it can solve some problems that the systematic search of WA* is unable to handle. Secondly, domains in which Arvand exhibits poor behaviour are often successfully tackled by WA*-based approaches. Finally, Arvand has low memory requirements, and so when it is run alongside LAMA-2008 in a shared-memory system, the majority of the memory can be assigned to LAMA-2008, thereby avoiding the memory-partitioning issue described above. Arvand is described in further detail in Section 4.

3.2 ArvandHerd Architecture

As both Arvand and LAMA-2008 are built on top of Fast Downward [6], ArvandHerd is run from a single binary. ArvandHerd uses Fast Downward's preprocessor to translate the PDDL problem description to an SAS+-like representation [7]. This preprocessor has not been parallelized; while doing so would speed up ArvandHerd, we consider it an orthogonal problem to that of parallelizing the search component and leave it as future work. As all planners tested in this paper require this preprocessing step, it is not counted against the time limits used in the experiments below. When ArvandHerd begins its search, separate threads are spawned to run different portfolio members.
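A minimal sketch of this top-level dispatch is given below. It is purely illustrative: ArvandHerd itself is a C++ system built on Fast Downward, and the function names (run_arvand_worker, run_lama_with_restarts), the memory split, the first-plan-wins handling, and the use of Python threads are hypothetical placeholders rather than the actual implementation.

import threading

def run_arvand_worker(task, time_limit_s):
    # Placeholder for one thread of the Arvand parallelization (Section 4.4).
    return None

def run_lama_with_restarts(task, time_limit_s, memory_limit_mb):
    # Placeholder for the WA*-based LAMA runs, which restart with a new
    # configuration whenever an internal memory limit is reached (Section 5).
    return None

def run_portfolio(task, num_cores, time_limit_s, memory_limit_mb):
    solution = {"plan": None}
    lock = threading.Lock()

    def report(plan):
        # Keep the first plan found by any portfolio member.
        with lock:
            if plan is not None and solution["plan"] is None:
                solution["plan"] = plan

    # k-1 threads run Arvand workers; they need very little memory.
    threads = [threading.Thread(
        target=lambda: report(run_arvand_worker(task, time_limit_s)))
        for _ in range(num_cores - 1)]
    # The remaining thread runs LAMA and receives most of the memory budget,
    # since only one memory-heavy configuration is active at a time.
    threads.append(threading.Thread(
        target=lambda: report(run_lama_with_restarts(
            task, time_limit_s, int(0.9 * memory_limit_mb)))))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return solution["plan"]

The structural point the sketch is meant to convey is that the memory-heavy planner is never duplicated across cores, so almost the entire memory budget can be dedicated to it.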
In a k-core machine setting, k−1 threads will be running a parallelization of Arvand while the remaining thread runs LAMA-2008.

4 The Arvand Planner

Arvand is a sequential satisficing planner that uses heuristically-evaluated random walks as the basis for its search [15]. The execution of Arvand consists of a series of search episodes. In the simplest version of Arvand, each search episode begins with n random walks, each walk being a sequence of m legal random actions originating from the initial state s_i, where n and m are parameters. The heuristic value of the final state reached by each random walk is computed using the FF heuristic function [10]. Once all n walks have been performed, the search jumps to the end of the walk whose final state, s, has the lowest heuristic value. Arvand then runs a new set of n random walks, only this time the walks originate from state s. This is followed by another jump to the end of the most promising walk from this new set of walks. This process repeats until either a goal state is found or some number of jumps have been made without any improvement in the observed heuristic values. In the latter case, the current search episode is terminated, and the planner restarts with a new episode that again begins with random walks originating from s_i.
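The following sketch mirrors the episode structure just described. It is a simplified illustration: the real planner is implemented in C++ on top of Fast Downward and evaluates endpoints with the FF heuristic, whereas here successors, heuristic, and is_goal are placeholder callables, the parameter defaults are arbitrary, and the online walk-length adaptation and MDA/MHA action biasing discussed below are omitted.

import random

def search_episode(s_init, successors, heuristic, is_goal,
                   n_walks=10, walk_length=10, max_stalled_jumps=7):
    # One Arvand-style search episode: repeatedly run n random walks from the
    # current state, evaluate only each walk's endpoint, and jump to the best.
    current, path = s_init, []
    best_h, stalled = heuristic(s_init), 0
    while not is_goal(current):
        best_walk, best_walk_h = None, float("inf")
        for _ in range(n_walks):
            state, walk = current, []
            for _ in range(walk_length):      # m legal random actions
                options = successors(state)   # list of (action, next_state) pairs
                if not options:               # dead end: cut this walk short
                    break
                action, state = random.choice(options)
                walk.append((action, state))
            if walk:
                h = heuristic(state)          # only the endpoint is evaluated
                if h < best_walk_h:
                    best_walk, best_walk_h = walk, h
        if not best_walk:
            return None                       # every walk failed immediately
        if best_walk_h < best_h:
            best_h, stalled = best_walk_h, 0
        else:
            stalled += 1
            if stalled > max_stalled_jumps:   # no heuristic progress for a while:
                return None                   # end the episode; the caller restarts
        path += best_walk                     # jump to the most promising endpoint
        current = best_walk[-1][1]
    return [action for action, _ in path]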
Arvand has also been enhanced with smart restarts [14]. For this technique, the trajectories found during the most effective search episodes are stored in a walk pool. When a new episode begins, it then starts from a state randomly selected from a trajectory that was itself randomly selected from the walk pool, instead of from the initial state. This technique has been shown to improve Arvand's coverage [14]. Note that smart restarts have been disabled in the experiments in Sections 4.1 and 4.3, where we experiment with different ways to use multiple configurations. This was done so as to evaluate these techniques in isolation from the communication between configurations that a walk pool allows. However, smart restarts are employed in the more complete systems evaluated in Sections 4.4 and 6.

4.1 Arvand Configurations

There are a number of parameters in Arvand that can greatly affect its performance on a domain-by-domain basis. Perhaps the most important of these relates to the biasing of the random action selection. Arvand allows random action selection to be biased either to avoid actions that have previously led to dead ends (referred to as MDA) or towards using preferred operators (referred to as MHA). These biasing strategies have each been shown to be useful for different domains [15].

A set of parameters related to the random walk length can also greatly affect performance. In Arvand, this length is adjusted online if little progress is being made in the heuristic values seen during a set of random walks. The initial walk length, the frequency with which walks are lengthened, and the factor by which they are lengthened (called the extending rate) are all parameters affecting this process.

The average performance of six different configurations over 5 runs on each of 480 problems is shown in Table 1. Configurations are given a maximum of 30 minutes and 2 GB per run. Configurations 1 and 4 correspond to the default configurations that use MDA and MHA, respectively. The remaining four configurations are constructed by modifying one of these default configurations in either its initial walk length or its extending rate, but not both.

Table 1. Performance of different Arvand configurations.
Config  Bias Type  Initial Walk Length  Extending Rate  Av. Num Solved
1       MDA        1                    1.5             400.8
2       MDA        1                    2.0             414.2
3       MDA        10                   1.5             397.8
4       MHA        1                    1.5             338.8
5       MHA        1                    2.0             348.0
6       MHA        10                   1.5             386.0

These experiments were performed on a cluster of machines, each with two 2.19 GHz AMD Opteron 248 processors with 1 MB of L2 cache. (While three different clusters were used in the experiments presented in this paper, any comparisons made are between planners/techniques that were tested on the same cluster.) The problem set consists of all problems in the 2006 and 2008 planning competitions, except for sokoban from IPC 2008, which was omitted as previous testing has indicated that Arvand performs poorly in this domain regardless of how it is configured.

While the MDA configurations outperform the MHA configurations overall, this is not true in all domains [15]. For example, configuration 1 (MDA) only solves an average of 16.2 problems in the IPC 2006 pathways domain, while configuration 4 (MHA) solves all 30 problems. This suggests that a combination of configurations is needed.

4.2 Combining Different Arvand Configurations

A simple way to combine k configurations in a single-core version of Arvand is to run each for 1800/k seconds. This technique will be referred to as uniform time partitioning. We can evaluate this approach by calculating the expected coverage based on the run-time of each configuration on its own. To do so, let P(p, c, t) denote the probability that Arvand with configuration c solves problem p in at most t seconds. Now let P_k(p, C) denote the probability that p is solved when using uniform time partitioning with a set of configurations C = {c_1, c_2, ..., c_k}. As the searches performed by the different configurations are independent, the following holds:

P_k(p, C) = 1 − (1 − P(p, c_1, 1800/k)) · ... · (1 − P(p, c_k, 1800/k)).

Given a problem set, the expected number of problems solved is then the sum of these probabilities over all problems. For the values of P(p, c, t), we use the empirically determined probability that p is solved within time limit t, as observed in the experiments summarized in Table 1.
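To make this estimate concrete, the sketch below computes the expected coverage of a configuration set under uniform time partitioning. Only the formula itself is taken from the text; the configuration names and the probabilities in solve_prob are invented placeholders standing in for the empirically measured values of P(p, c, 1800/k).

from itertools import combinations

def expected_coverage(config_set, problems, solve_prob):
    # Sum of P_k(p, C) = 1 - prod_c (1 - P(p, c, 1800/k)) over all problems p.
    total = 0.0
    for p in problems:
        miss = 1.0
        for c in config_set:                  # searches are assumed independent
            miss *= 1.0 - solve_prob[c][p]    # probability that c misses p
        total += 1.0 - miss
    return total

# Hypothetical empirical estimates for three configurations on three problems.
solve_prob = {
    "mda_default": {"p1": 0.9, "p2": 0.1, "p3": 0.0},
    "mda_ext_2.0": {"p1": 0.8, "p2": 0.3, "p3": 0.1},
    "mha_default": {"p1": 0.4, "p2": 0.0, "p3": 0.8},
}
problems = ["p1", "p2", "p3"]

# Pick the 2-configuration portfolio with the highest expected coverage.
best = max(combinations(solve_prob, 2),
           key=lambda cs: expected_coverage(cs, problems, solve_prob))
print(best, expected_coverage(best, problems, solve_prob))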
Given the six configurations tested in Table 1, there are (6 choose k) possible portfolios for any portfolio size k with 1 ≤ k ≤ 6. When k = 2, the best configuration set of all 15 possible sets is expected to solve 436.3 problems, an increase of 22.1 over the average number solved by the single best configuration alone. In fact, all but 2 of these 15 configuration sets improved over the best configuration in their own set. For k = 4 and k = 6, the expected coverage of the best sets is 434.4 and 431.4, respectively. These diminishing returns are to be expected, since an increase in k decreases the amount of time any individual configuration will run. In contrast, the coverage of the worst set for any k reaches its highest point at k = 6, and surpasses the performance of the single best configuration alone when k = 4.

In practice, instead of starting with a new configuration every 1800/k seconds, we alternate amongst the configurations in a round-robin fashion. For each of k = 2, k = 4, and k = 6, we tested this approach with the configuration set of size k that had the best expected uniform time partitioning performance. The coverage of alternation is slightly better, as it averages 435.4, 439.8, and 439.6 for k values of 2, 4, and 6, respectively, over 5 runs per problem. This occurs because alternation spends more time using the best configuration on any problem. For example, if two configurations, c_1 and c_2, are used on a single problem p, and c_1 is less effective on p than c_2, then search episodes using c_1 will stop making progress and restart more quickly than those using c_2. The available run-time will therefore skew towards the longer, more effective c_2 episodes rather than the shorter, quickly-restarting c_1 episodes.

4.3 Configuration Selection as a Bandit Problem

Arvand was also enhanced with an online configuration selection system which, while not increasing coverage, did decrease run-time. Given a set of configurations C, the system selects a configuration for the next search episode from C based on the performance of the configurations during previous episodes. This system views configuration selection as an instance of the multi-armed bandit problem. This paradigm requires the definition of a payoff function for search episodes. For this system, the reward given to a search episode e performed with configuration c is

max(0, 1 − h(s)/h(s_i)),

where h(v) is the heuristic value of state v, s is the state on the trajectory of e that achieved the lowest heuristic value, and s_i is the initial state.

Using this reformulation of configuration selection, configurations can be selected online using any multi-armed bandit algorithm. Arvand uses UCB [1], which begins by performing a single search episode with every configuration. After this initial stage, the configuration selected for the next episode is given by

argmax_{c ∈ C}  r(c) + q · sqrt(ln t / t(c)),

where r(c) is the average reward seen thus far for configuration c, t(c) is the number of search episodes performed with configuration c so far, t is the total number of search episodes, and q is a parameter called the UCB constant. This algorithm has been shown to have strong theoretical guarantees on its ability to balance between exploiting effective selections and exploring the alternatives [1].
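A minimal sketch of such a selector is given below. The class and method names are hypothetical; the reward follows the definition above, the exploration term uses the standard UCB1 form with constant q, and the gradually decreasing episode-restart frequency described next is not modelled.

import math

class UCBConfigSelector:
    def __init__(self, configs, q=1.0):
        self.q = q
        self.total_reward = {c: 0.0 for c in configs}
        self.episodes = {c: 0 for c in configs}
        self.t = 0                              # total number of episodes so far

    @staticmethod
    def episode_reward(h_init, h_best):
        # h_best is the lowest heuristic value reached on the episode's trajectory.
        return max(0.0, 1.0 - h_best / h_init)

    def select(self):
        # Initial stage: run every configuration once.
        for c, n in self.episodes.items():
            if n == 0:
                return c
        def ucb(c):
            r = self.total_reward[c] / self.episodes[c]   # average reward r(c)
            return r + self.q * math.sqrt(math.log(self.t) / self.episodes[c])
        return max(self.episodes, key=ucb)

    def update(self, config, reward):
        self.total_reward[config] += reward
        self.episodes[config] += 1
        self.t += 1

A caller would invoke select() before each search episode and, when the episode ends, pass episode_reward(h(s_i), h(s)) to update().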
In order to give the UCB system some quick search episodes, and thereby identify the more useful configurations sooner, the frequency with which episodes restart was initially set high and then gradually decreased. The resulting system was then tested on a selection of sets of the configurations in Table 1. In general, the UCB system did not significantly change the coverage of Arvand when compared to round-robin configuration selection, but it did improve run-time. For example, four different values of the UCB constant were tested on the configuration set of size 2 with the best expected uniform time partitioning performance. Recall that round-robin selection solved 435.4 problems when applied to this same set. Of the UCB constant values tested (0.1, 0.5, 1.0, and 5.0), the most problems solved when using the UCB selector was 439.4 and the least was 437.2. However, if we only consider the 399 problems solved on all five runs per problem by either a UCB system or round-robin, we see that even the UCB constant resulting in the longest run-time yields a 2.75 times speedup, while the value with the shortest run-time sees a 3.90 times speedup.

4.4 Parallelizing Arvand

For ArvandHerd, a simple parallelization of Arvand was developed in which each core runs an independent search episode. The only communication between cores is through the use of a shared walk pool and a shared UCB configuration selector. When a core has completed a search episode, it submits the corresponding trajectory to the shared walk pool and gets a trajectory in return. The core also submits the reward for its current configuration to the shared UCB system and in return is given a configuration to use in its next search episode. The correctness of the walk pool and UCB learner is maintained by limiting access to each to only one thread at a time. As the search episodes dominate execution time, there is little synchronization or contention overhead caused by sharing these resources.

Parallel Arvand was tested with different numbers of cores on the 790 problems from IPCs 2006, 2008, and 2011. These experiments were performed on a cluster of machines, each with two 4-core 2.8 GHz Intel Xeon E546s processors with 6 MB of L2 cache. The configuration set used is identical to the set used in IPC 2011. It includes configurations 1, 4, and 6 from Table 1 and another MDA configuration with an extending rate of 1.5 and an initial walk length of 3. This configuration set was selected manually prior to IPC 2011 based on familiarity with Arvand, and also before the expected coverage analysis described above had been performed. We use this set in our experiments below so as to evaluate how parallel Arvand contributed to ArvandHerd's success.

Table 2 shows the average number of problems solved over five runs per problem when using different numbers of cores. In addition, the table shows how much faster the multi-core versions were in comparison to the single-core version on the 639 problems that were solved on all five runs regardless of the number of cores used.

Table 2. The performance of parallel Arvand.
                 Number of Cores
                 1      2      3      4      8
Coverage         660.4  668.0  671.4  677.8  679.6
Speedup Factor   1.0    1.9    2.5    3.0    3.4

While 8-core Arvand solved 19.2 more problems on average than the 1-core version, parallel Arvand is still unable to solve as many as the 721 problems that LAMA-2011 solved. Domain-by-domain analysis also indicates that domains in which the single-core version exhibits poor performance are often also difficult for the multi-core versions. For example, neither the single-core nor the 8-core version of Arvand can solve even one of the 20 barman problems from IPC 2011. This suggests that there is a limit to the coverage that can be achieved in this domain through parallelizing Arvand. However, LAMA-2008 can solve 15 of these problems, thus making the case for the use of a portfolio.

5 The LAMA-2008 Planner

LAMA-2008 is a WA*-based planner that won the sequential satisficing track of IPC 2008 [8]. It uses a number of planning techniques, including multiple heuristic functions, preferred operators, and deferred heuristic evaluation [17]. 
Below, we briefly describe the set of\nheuristics that we used in this system and show that LAMA-2008’s\ncoverage can be improved through the use of restarts.\nLAMA-2008 can use several heuristics to guide search through a\nprocess called multi-heuristic best-first search [6]. Previous analysis\nof this planner has shown that strong performance can be achieved if\nthe set of heuristics used includes both the landmark-count heuristic\n(LM ) and a version of the FF Heuristic which ignores action costs\n(FFs) [17]. For details on these heuristics, see [10] and [17]. Wefound that if we also included a third heuristic function, a versionof FF which did adjust for action costs ( FFc), then LAMA-2008\ncould solve 464 of the 550 problems taken from the 2008 and 2011\ncompetitions, as opposed to just 449 when this heuristic is omitted.\nNote, before IPC 2011 the LAMA-2008 code was re-factored so\nas to make it thread-safe. Subsequent experimentation has shown thatthese changes do not significantly impact the coverage of the systemwhen run on a single core. As such, though the experiments use ournewer version of LAMA-2008, the results shown below are expected\nto hold if the techniques considered were implemented in the origi-\nnal. Also of note, as FFs and FFc are identical in unit cost domains,\nwe only use one of these heuristics in the IPC 2006 domains (which\nhave no action costs).\n5.1 Randomizing Operator Order\nThe use of multiple operator operator orderings has previously beenshown to yield an effective W A*-based parallel planning system [21].Below, we will show that this is also true when using a complete plan-ning system like LAMA-2008 even though it already uses multiple\nheuristics to introduce diversity.\nRecall that LAMA-2008 uses deferred heuristic evaluation. When\nusing this technique, the heuristic value of a state sis not calculated\nuntil sis expanded. When sis generated, the heuristic value used for\nsis actually the heuristic value of the parent of s. This technique can\noften improve search time by decreasing the number of expensiveR.Valenzano etal./ArvandHer d:ParallelPlanning with aPortfolio 789\nheuristic evaluations. It will also increase the number of ties, as any\ntwo children c1andc2with the same parent pthat are achieved with\nequal cost actions will have the same f-cost. A unit cost domain rep-\nresents an extreme case of this phenomenon; all children of the same\nstate will be assigned the same f-cost. This increase in the number\nof ties can increase the variance in coverage found with different op-\nerator orderings.\nThis variance can be seen when experimenting with random op-\nerator ordering inLAMA-2008. This technique involves randomly\nre-ordering the list of children of an expanded state before those newstates are added to any open list. For this experiment, LAMA-2008\nwas configured to use random operator ordering in greedy best-firstsearch (GBFS) with the FFs and LM heuristics and preferred opera-\ntors. This system was tested on all 510 problems from the 2006 and\n2008 competitions for 5runs per problem with a 30 minute time limit\nand a 2 GB memory limit. These experiments were run on a clusterof machines each with two AMD Opteron 250 2.4GHz processors,\neach with 1 MB of L2 cache. 
While the average number of prob-lems solved is 431.8 , if the best random seed had been selected on\na problem-by-problem basis, 448 problems would have been solved.\nIf the worst seed had been selected on a problem-by-problem basis,only 415 problems would have been solved.\nThis variance suggests the use of restarts, whereby if some re-\nsource limit has been reached without a solution having been found,\nthe search starts fresh with a new random seed. The expected effec-\ntiveness of using restarts can be calculated using the technique de-scribed in Section 4.2. Table 3 shows this data for different types\nof search and different numbers of restarts. In these experiments,\nLAMA-2008 uses FFs and LM when running GBFS, and FFc and\nLM when using W A* (ie. w=7 represents a weight 7 search).\nTable 3. Expected performance of LAMA-2008 using restarts.\nSearch Number of Restarts\nType 01248 1 6\nGBFS 431.8 437.0 438.5 440.3 437.3 427.8\nw= 7 403.6 408.2 409.1 409.0 405.4 397.7\nw= 1 207.2 209.1 209.8 207.3 205.4 194.9\nTable 3 shows that a small number of restarts can help improve the\nexpected number of problems solved, though too many restarts candegrade performance. This is true for all configurations tested includ-ing weights 10and5which are not shown in the table. The estima-\ntion technique also indicates that if LAMA-2008 is set to restart not\njust with a new random seed but also with a different configuration,the additional diversity would help to further improve coverage. For\nexample, when restarting 4times such that each of GBFS, w=1 0 ,\nw=7 ,w=5 , and w=1 are run for a maximum of six minutes,\nthe expected coverage is 448.4 . This result motivates the inclusion of\nseveral LAMA-2008 configurations in the portfolio.\nThe version of LAMA-2008 used in the competition was set to\nrestart on a memory limit instead of on a time limit. This limit was\nenforced through the use of an internal memory estimator. The closedlist was also saved in between restarts so as to avoid recomputing theheuristic values of states seen in previous iterations. Subsequent ex-periments suggest that this was not necessarily the best approach.\nWhen restarting with a 2GB memory limit and using the 5configu-\nrations from above, an average of 440.2 problems was solved. While\nthis is competitive with restarting on a time limit with GBFS alone,\nit trails behind the expected performance of restarting with differ-ent configurations. This is at least partially due to inaccuracies in\nour memory estimator which could only provide rough estimates. Amore in-depth consideration is beyond the scope of this paper.\n6 Putting the Portfolio Together\nIn this section, we evaluate ArvandHerd on IPC benchmark do-\nmains and show that it outperforms several state-of-the-art planners,\nand would do so even if these planners could be efficiently paral-lelized. Before doing so, we begin by describing the Arvand and\nLAMA-2008 configurations used in the ArvandHerd portfolio.\nArvandHerd runs multiple configurations of Arvand using the\nArvand parallelization described in Section 4.4, and a single in-\nstance of LAMA-2008. The configurations of Arvand used in both\nthe IPC competition and in the experiments below are the same asthose used in that section.\nThe version of LAMA-2008 used below differs slightly from that\nused in IPC 2011 in terms of its restart-inducing memory limit. Inthe competition, the memory limit was set at 2.7GB even though the\nsystems were allowed a maximum of 6GB. 
This limit was selected\nso as to accommodate the memory needs of the plan improvementsystem used. As the focus of this paper is coverage and the plan im-provement system (a description of which is beyond the scope ofthis paper) has been updated to allow for a larger limit, this restart-inducing limit has been increased to 4GB in the experiments below.\nA second difference between the competition version of\nLAMA-2008 and that used in the experiments below relates to the\nconfigurations used. In the experiments, the first iteration performedis GBFS, which is followed by a set of W A* searches that use thefollowing weights in the order given: 10,5,2, and 1. If the weight\n1search restarts due to the memory limit being reached, this cycle\nof searches is repeated indefinitely (starting back at GBFS) until the\ntime limit is hit or a solution is found. In the competition version, the\nweight 1search was followed by several more low-weight searches\nwhich were included for plan improvement, and have therefore beenremoved. A final difference is that in the competition version, onceallLAMA-2008 configurations were tried once, the thread running\nit would join the other 3 in running parallel Arvand. This is not\ndone in the experiments below in the interest of evaluating the gen-\neral portfolio technique, as the switch from LAMA-2008 toArvand\nassumes the portfolio members can all be run from a single binary.\n6.1 ArvandHerd on IPC Benchmarks\nArvandHerd was run 5times on each of the 790 problems in the\n2006, 2008, and 2011 planning competitions on the same cluster de-scribed in Section 4.4 (as were all planners considered in this sec-\ntion). The performance of this system can be seen in Table 4, whichshows that ArvandHerd ’s coverage is significantly better than that\nof the Arvand parallelization and the perfectly linear paralleliza-\ntions of LAMA-2011 andFDSS. ArvandHerd achieves its high\ncoverage in the expected way, with Arvand andLAMA-2008 can-\ncelling out each others weaknesses. For example, recall that Arvand\nis unable to solve even a single barman problem. With LAMA-2008\nin the portfolio, 2-core ArvandHerd solves an average of 15.4 of\nthe20problems (similar to the 16solved by LAMA-2008 when\nrun on its own). Similarly, while LAMA-2008 only solves 19of30\nproblems in storage (IPC 2006), 2-core ArvandHerd solves an\naverage of 29.4 (similar to the 30thatArvand solves when run\non its own). In this way, ArvandHerd combines two planners in\nLAMA-2008 andArvand whose performance lag significantly be-\nhindLAMA-2011 when used on their own to surpass even a per-\nfectly linear parallelization of LAMA-2011.R.Valenzano etal./ArvandHer d:ParallelPlanning with aPortfolio 790\nTable 4. Performance of parallel planners.\nPlannerNumber of Cores\n1248\nLAMA-2008 Simulation 639.0 641.0 643.0 NA\nLAMA-2011 Simulation 721.0 724.0 726.0 727.0\nFDSS-1 Simulation 720.0 724.0 726.0 727.0\nParallel Arvand 660.4 668.0 677.8 679.6\nArvandHerd NA 737.2 743.2 741.8\nArvandHerd +LAMA-2011 NA 750.4 754.2 755.2\n6.2 Using LAMA-2011 inArvandHerd\nLAMA-2008 was included in the portfolio instead of LAMA-2011\nbecause Arvand had originally been built into the LAMA-2008\nframework. Table 4 shows that performance would further im-\nprove if LAMA-2011 had been used instead (see the row labelled\n“ArvandHerd +LAMA-2011”). For testing the k-core perfor-\nmance of this portfolio, parallel Arvand was run with k−1cores on\nall problems that LAMA-2011 could not solve with a 4 GB memory\nlimit. 
The table shows the sum of the number of problems solved by\nLAMA-2011 and the average number of problems solved by (k−1)-\ncores running parallel Arvand.\nThe fact that the new portfolio successfully solves even more prob-\nlems than LAMA-2011 by itself reflects the importance of Arvand\nin the ArvandHerd portfolio. Arvand is not simply covering the\nweaknesses in LAMA-2008 that have been addressed with the re-\nlease of LAMA-2011. It is also handling problems that this state-of-\nthe art planner cannot.\nWhile coverage does not increase substantially with additional\ncores regardless of whether LAMA-2008 orLAMA-2011 is used in\nthe portfolio, this is largely due to a “glass ceiling” effect. For exam-\nple, the 2-core portfolio containing LAMA-2011 is already solving\n95% of the test problems.\n7 Conclusion\nIn this paper, we began by demonstrating that parallelizing a singleplanning algorithm is not necessarily the best way to use a multi-core shared memory machine if the goal is to maximize coverage.This occurs because while the parallelized algorithm may be faster,it will have similar limitations as the original single-core algorithm interms of both resource usage and the domains it handles well. Insteadof parallelizing a single algorithm, we used an algorithm portfolioapproach to parallel planning in the development of ArvandHerd,\nwhich won the multi-core sequential satisficing track at IPC 2011.\nThis paper contains a full description of ArvandHerd and an\nanalysis of how several design decisions contributed to its success.In particular, we have shown that the use of multiple configurationsand restarts can improve the coverage of each of the two plannersused in the portfolio, namely LAMA-2008 andArvand, even when\nused on only a single core. While these techniques have previouslybeen used in the SA T-solving community, we have shown that theirsuccess extends into automated planning. The combination of theseplanners in ArvandHerd is then shown to outperform even the sim-\nulated performance of perfect parallelizations of two state-of-the-artsingle-core planners. It is also shown that the coverage can be fur-ther improved by replacing LAMA-2008 withLAMA-2011in the\nArvandHerd portfolio.\nMore generally, we have demonstrated through the construction\nofArvandHerd that the use of a portfolio is a powerful approach\nfor building general parallel planners due to its ability to combinethe strengths of different planners. While the ArvandHerd port-\nfolio only contains two planners, others may be included as well.\nIn particular, we suspect that planners that use approaches other than\nthe W A*-based and random-walk-based approaches already includedwill offer the most potential for further improving coverage. 
Thesemay include SA T-based planners [19] and probe-based planners [13].However, an evaluation of portfolios containing these approaches ontop of those already in ArvandHerd is left as future work.\nAcknowledgments\nThis research was supported by NSERC, iCore, and Alberta Ingenu-\nity.\nREFERENCES\n[1] Peter Auer, Nicol `o Cesa-Bianchi, and Paul Fischer, ‘Finite-time Anal-\nysis of the Multiarmed Bandit Problem’, Machine Learning, 47(2-3),\n235–256, (2002).\n[2] Ethan Burns, Wheeler Ruml, Sofia Lemons, and Rong Zhou, ‘Best-First\nHeuristic Search for Multicore Machines’, JAIR, 39, 689–743, (2010).\n[3] Juhan Ernits, Charles Gretton, and Richard Deardon, ‘AyAlsoPlan: Bit-\nstate Pruning for State-Based Planning on Massively Parallel Compute\nClusters’, in IPC 2011 Deterministic Track, pp. 117–124, (2011).\n[4] ´Angel Garc ´ıa-Olaya, Sergio Jim ´enez, and Carlos Linares L ´opez. IPC\n2011 Deterministic Track, 2011. http://ipc.icaps-conference.org.\n[5] Carla Gomes, Bart Selman, and Nuno Crato, ‘Heavy-tailed distribu-\ntions in combinatorial search’, in CP, pp. 121–135, (1997).\n[6] Malte Helmert, ‘The Fast Downward Planning System’, JAIR, 26, 191–\n246, (2006).\n[7] Malte Helmert, ‘Concise finite-domain representations for pddl plan-\nning tasks’, Artificial Intelligence, 173(5-6), 503–535, (2009).\n[8] Malte Helmert, Minh Do, and Ioannis Refanidis. IPC 2008 Determin-\nistic Track, 2008. http://ipc.informatik.uni-freiburg.de.\n[9] Malte Helmert and Gabriele R ¨oger, ‘Fast Downward Stone Soup: A\nBaseline for Building Planner Portfolios.’, in ICAPS-2011 Workshop\non Planning and Learning, pp. 28–35, (2011).\n[10] J ¨org Hoffmann and Bernhard Nebel, ‘The FF Planning System: Fast\nPlan Generation Through Heuristic Search’, JAIR, 14, 253–302, (2001).\n[11] Bernardo Huberman, Rajan Lukose, and Tad Hogg, ‘An Economics\nApproach to Hard Computational Problems’, Science, 275, 51–54,\n(January 3 1997).\n[12] Akihiro Kishimoto, Alex Fukunaga, and Adi Botea, ‘Scalable, Parallel\nBest-First Search for Optimal Sequential Planning’, in ICAPS, (2009).\n[13] Nir Lipovetzky and Hector Geffner, ‘Searching for Plans with Carefully\nDesigned Probes’, in ICAPS, (2011).\n[14] Hootan Nakhost, J ¨org Hoffmann, and Martin M ¨uller, ‘Resource-\nConstrained Planning: A Monte Carlo Random Walk Approach’, in\nICAPS, (2012).\n[15] Hootan Nakhost and Martin M ¨uller, ‘Monte-Carlo Exploration for De-\nterministic Planning’, in IJCAI, pp. 1766–1771, (2009).\n[16] Silvia Richter, Jordan Thayer, and Wheeler Ruml, ‘The Joy of Forget-\nting: Faster Anytime Search via Restarting’, in ICAPS, pp. 137–144,\n(2010).\n[17] Silvia Richter and Matthias Westphal, ‘The LAMA Planner: Guiding\nCost-Based Anytime Planning with Landmarks’, JAIR, 39, 127–177,\n(2010).\n[18] Sylvia Richter, Matthias Westphal, and Malte Helmert, ‘LAMA 2008\nand 2011’, in IPC 2011 Deterministic Track, pp. 117–124, (2011).\n[19] Jussi Rintanen, ‘Heuristics for Planning with SA T and Expressive Ac-\ntion Definitions’, in ICAPS, (2011).\n[20] Richard V alenzano, Hootan Nakhost, Martin M ¨uller, Nathan Sturtevant,\nand Jonathan Schaeffer, ‘ArvandHerd: Parallel Planning with a Portfo-\nlio’, in IPC 2011 Deterministic Track, pp. 113–116, (2011).\n[21] Richard V alenzano, Nathan Sturtevant, Jonathan Schaeffer, Karen\nBuro, and Akihiro Kishimoto, ‘Simultaneously Searching with Mul-\ntiple Settings: An Alternative to Parameter Tuning for Suboptimal\nSingle-Agent Search Algorithms’, in ICAPS, pp. 
177–184, (2010).", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "UUYGJxBDUUhe", "year": null, "venue": "ECAI2006", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=UUYGJxBDUUhe", "arxiv_id": null, "doi": null }
{ "title": "Verification of Medical Guidelines Using Task Execution with Background Knowledge.", "authors": [ "Arjen Hommersom", "Perry Groot", "Peter J. F. Lucas", "Michael Balser", "Jonathan Schmitt" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "TM6MidWdo_JU", "year": null, "venue": "ECAI1992", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=TM6MidWdo_JU", "arxiv_id": null, "doi": null }
{ "title": "Collision-Free Movement of an Autonomous Vehicle Using Reinforcement Learning.", "authors": [ "D. Kontoravdis", "Aristidis Likas", "Andreas Stafylopatis" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "bXMHrSPMB1", "year": null, "venue": "ECAI2002", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=bXMHrSPMB1", "arxiv_id": null, "doi": null }
{ "title": "Mining maximal frequent itemsets by a boolean approach.", "authors": [ "Ansaf Salleb", "Zahir Maazouzi", "Christel Vrain" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "VVtpE6tbjgk", "year": null, "venue": "EAAMO 2022", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3551624.3555305", "forum_link": "https://openreview.net/forum?id=VVtpE6tbjgk", "arxiv_id": null, "doi": null }
{ "title": "On Meritocracy in Optimal Set Selection", "authors": [ "Thomas Kleine Buening", "Meirav Segal", "Debabrota Basu", "Anne-Marie George", "Christos Dimitrakakis" ], "abstract": "Typically, merit is defined with respect to some intrinsic measure of worth. We instead consider a setting where an individual’s worth is relative: when a decision maker (DM) selects a set of individuals from a population to maximise expected utility, it is natural to consider the expected marginal contribution (EMC) of each person to the utility. We show that this notion satisfies an axiomatic definition of fairness for this setting. We also show that for certain policy structures, this notion of fairness is aligned with maximising expected utility, while for linear utility functions it is identical to the Shapley value. However, for certain natural policies, such as those that select individuals with a specific set of attributes (e.g. high enough test scores for college admissions), there is a trade-off between meritocracy and utility maximisation. We analyse the effect of constraints on the policy on both utility and fairness in an extensive experiments based on college admissions and outcomes in Norwegian universities.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "N6jKNa2yxOZ", "year": null, "venue": "EAAMO 2021", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3465416.3483296", "forum_link": "https://openreview.net/forum?id=N6jKNa2yxOZ", "arxiv_id": null, "doi": null }
{ "title": "When Efficiency meets Equity in Congestion Pricing and Revenue Refunding Schemes", "authors": [ "Devansh Jalota", "Kiril Solovey", "Karthik Gopalakrishnan", "Stephen Zoepf", "Hamsa Balakrishnan", "Marco Pavone" ], "abstract": "Congestion pricing has long been hailed as a means to mitigate traffic congestion; however, its practical adoption has been limited due to social inequity issues, e.g., low-income users are priced out off certain roads. This issue has spurred interest in the design of equitable mechanisms that refund the collected toll revenues to users. Although revenue refunding has been extensively studied, there has been no characterization of how such schemes can be designed to simultaneously achieve system efficiency and equity objectives. In this work, we bridge this gap through the study of congestion pricing and revenue refunding (CPRR) schemes in non-atomic congestion games. We first develop CPRR schemes, which in comparison to the untolled case, simultaneously (i) increase system efficiency and (ii) decrease wealth inequality, while being (iii) user-favorable: irrespective of their initial wealth or values-of-time (which may differ across users) users would experience a lower travel cost after the implementation of the proposed scheme. We then characterize the set of optimal user-favorable CPRR schemes that simultaneously maximize system efficiency and minimize wealth inequality. These results assume a well-studied behavior model of users minimizing a linear function of their travel times and tolls, without considering refunds. Overall, our work demonstrates that through appropriate refunding policies we can achieve system efficiency while reducing wealth inequality.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "p-Zc7ax0QuU", "year": null, "venue": "ECAL 2011", "pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap26.pdf", "forum_link": "https://openreview.net/forum?id=p-Zc7ax0QuU", "arxiv_id": null, "doi": null }
{ "title": "Natural selection fails to optimize mutation rates for long-term adaptation on rugged fitness landscapes", "authors": [ "Jeff Clune", "Dusan Misevic", "Charles Ofria", "Richard E. Lenski", "Santiago F. Elena", "Rafael Sanjuán" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "udxBh0sZLF", "year": null, "venue": "ECAL 2003", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=udxBh0sZLF", "arxiv_id": null, "doi": null }
{ "title": "The Learning and Emergence of Mildly Context Sensitive Languages", "authors": [ "Edward P. Stabler", "Travis C. Collier", "Gregory M. Kobele", "Yoosook Lee", "Ying Lin", "Jason Riggle", "Yuan Yao", "Charles E. Taylor" ], "abstract": "This paper describes a framework for studies of the adaptive acquisition and evolution of language, with the following components: language learning begins by associating words with cognitively salient representations (“grounding”); the sentences of each language are determined by properties of lexical items, and so only these need to be transmitted by learning; the learnable languages allow multiple agreements, multiple crossing agreements, and reduplication, as mildly context sensitive and human languages do; infinitely many different languages are learnable; many of the learnable languages include infinitely many sentences; in each language, inferential processes can be defined over succinct representations of the derivations themselves; the languages can be extended by innovative responses to communicative demands. Preliminary analytic results and a robotic implementation are described.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "KqLfGdEGh-k", "year": null, "venue": "ECAL 2005", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=KqLfGdEGh-k", "arxiv_id": null, "doi": null }
{ "title": "Grammar Structure and the Dynamics of Language Evolution", "authors": [ "Yoosook Lee", "Travis C. Collier", "Gregory M. Kobele", "Edward P. Stabler", "Charles E. Taylor" ], "abstract": "The complexity, variation, and change of languages make evident the importance of representation and learning in the acquisition and evolution of language. For example, analytic studies of simple language in unstructured populations have shown complex dynamics, depending on the fidelity of language transmission. In this study we extend these analysis of evolutionary dynamics to include grammars inspired by the principles and parameters paradigm. In particular, the space of languages is structured so that some pairs of languages are more similar than others, and mutations tend to change languages to nearby variants. We found that coherence emerges with lower learning fidelity than predicted by earlier work with an unstructured language space.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Y8oFeEoBcID", "year": null, "venue": "ECAL 1999", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Y8oFeEoBcID", "arxiv_id": null, "doi": null }
{ "title": "Compression and Adaptation", "authors": [ "Tracy K. Teal", "Daniel Albro", "Edward P. Stabler", "Charles E. Taylor" ], "abstract": "What permits some systems to evolve and adapt more effectively than others? Gell-Mann [3] has stressed the importance of “compression” for adaptive complex systems. Information about the environment is not simply recorded as a look-up table, but is rather compressed in a theory or schema. Several conjectures are proposed: (I) compression aids in generalization; (II) compression occurs more easily in a “smooth”, as opposed to a “rugged”, string space; and (III) constraints from compression make it likely that natural languages evolve towards smooth string spaces. We have been examining the role of such compression for learning and evolution of formal languages by artificial agents. Our system does seem to conform generally to these expectations, but the tradeoffs between compression and the errors that sometimes accompany it need careful consideration.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LUlu_ymLzSz", "year": null, "venue": "Auton. Robots 2010", "pdf_link": "https://link.springer.com/content/pdf/10.1007/s10514-009-9148-5.pdf", "forum_link": "https://openreview.net/forum?id=LUlu_ymLzSz", "arxiv_id": null, "doi": null }
{ "title": "EL-E: an assistive mobile manipulator that autonomously fetches objects from flat surfaces", "authors": [ "Advait Jain", "Charles C. Kemp" ], "abstract": "Assistive mobile robots that autonomously manipulate objects within everyday settings have the potential to improve the lives of the elderly, injured, and disabled. Within this paper, we present the most recent version of the assistive mobile manipulator EL-E with a focus on the subsystem that enables the robot to retrieve objects from and deliver objects to flat surfaces. Once provided with a 3D location via brief illumination with a laser pointer, the robot autonomously approaches the location and then either grasps the nearest object or places an object. We describe our implementation in detail, while highlighting design principles and themes, including the use of specialized behaviors, task-relevant features, and low-dimensional representations.We also present evaluations of EL-E’s performance relative to common forms of variation. We tested EL-E’s ability to approach and grasp objects from the 25 object categories that were ranked most important for robotic retrieval by motor-impaired patients from the Emory ALS Center. Although reliability varied, EL-E succeeded at least once with objects from 21 out of 25 of these categories. EL-E also approached and grasped a cordless telephone on 12 different surfaces including floors, tables, and counter tops with 100% success. The same test using a vitamin pill (ca. 15 mm × 5 mm × 5 mm) resulted in 58% success.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Eg64YrnkLjf", "year": null, "venue": "EC 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Eg64YrnkLjf", "arxiv_id": null, "doi": null }
{ "title": "Computing Large Market Equilibria using Abstractions", "authors": [ "Christian Kroer", "Alexander Peysakhovich", "Eric Sodomka", "Nicolás E. Stier Moses" ], "abstract": "Computing market equilibria is an important practical problem for market design (e.g. fair division, item allocation). However, computing equilibria requires large amounts of information (e.g. all valuations for all buyers for all items) and compute power. We consider ameliorating these issues by applying a method used for solving complex games: constructing a coarsened abstraction of a given market, solving for the equilibrium in the abstraction, and lifting the prices and allocations back to the original market. We show how to bound important quantities such as regret, envy, Nash social welfare, Pareto optimality, and maximin share when the abstracted prices and allocations are used in place of the real equilibrium. We then study two abstraction methods of interest for practitioners: 1) filling in unknown valuations using techniques from matrix completion, 2) reducing the problem size by aggregating groups of buyers/items into smaller numbers of representative buyers/items and solving for equilibrium in this coarsened market. We find that in real data allocations/prices that are relatively close to equilibria can be computed from even very coarse abstractions.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LmWkL5o7my", "year": null, "venue": "E4MAS 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=LmWkL5o7my", "arxiv_id": null, "doi": null }
{ "title": "Infrastructures to Engineer Open Agent Environments by Means of Electronic Institutions", "authors": [ "Dave de Jonge", "Juan A. Rodríguez-Aguilar", "Bruno Rosell i Gui", "Carles Sierra" ], "abstract": "Electronic institutions provide a computational analogue of human institutions to engineer open environments in which agents can interact in an autonomous way while complying with the norms of an institution. The purpose of this paper is twofold. On the one hand, we lightly survey our research on coordination infrastructures for electronic institutions in the last ten years. On the other hand, we highlight the research challenges in environment engineering that we have tackled during this journey as well as promising research paths for future research on the engineering of open environments for multi-agent systems.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "iTQFSRbVy-O", "year": null, "venue": "E4MAS 2004", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=iTQFSRbVy-O", "arxiv_id": null, "doi": null }
{ "title": "Environment-Based Coordination Through Coordination Artifacts", "authors": [ "Alessandro Ricci", "Mirko Viroli", "Andrea Omicini" ], "abstract": "In the context of human organisations, environment plays a fundamental role for supporting cooperative work and, more generally, complex coordination activities. Support is realised through services, tools, artifacts shared and exploited by the collectivity of individuals for achieving individual as well as global objectives. The conceptual framework of coordination artifacts is meant to bring the same sort of approach to multiagent systems (MAS). Coordination artifacts are the entities used to instrument the environment so as to fruitfully support cooperative and social activities of agent ensembles. Here, infrastructures play a key role by providing services for artifact use and management. In this work we describe this framework, by defining a model for the coordination artifact abstraction, and discussing the infrastructures and technologies currently available for engineering MAS application with coordination artifacts.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "QmefiYDZUcx", "year": null, "venue": "E4MAS 2004", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=QmefiYDZUcx", "arxiv_id": null, "doi": null }
{ "title": "\"Exhibitionists\" and \"Voyeurs\" Do It Better: A Shared Environment for Flexible Coordination with Tacit Messages", "authors": [ "Luca Tummolini", "Cristiano Castelfranchi", "Alessandro Ricci", "Mirko Viroli", "Andrea Omicini" ], "abstract": "Coordination between multiple autonomous agents is a major issue for open multi-agent systems. This paper proposes the notion of Behavioural Implicit Communication (BIC) originally devised in human and animal societies as a new and critical coordination mechanism also for artificial agents. BIC is a parasitical form of communication that exploits both some environmental properties and the agents’ capacity to interpret their actions. In this paper we abstract from the agents’ architecture to focus on the interaction mediated by the environment. Observability of the environment – and in particular of agents’ actions – is crucial for implementing BIC-based form of coordination in artificial societies. Accordingly in this paper we introduce an abstract model of environment providing services to enhance observation power of agents, enabling BIC and other form of observation-based coordination. Also, we describe a typology of environments and examples of observation based coordination with and without implicit communication.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "SyPr8jgNPoR", "year": null, "venue": "E4MAS 2006", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=SyPr8jgNPoR", "arxiv_id": null, "doi": null }
{ "title": "Designing Self-organising MAS Environments: The Collective Sort Case", "authors": [ "Luca Gardelli", "Mirko Viroli", "Matteo Casadei", "Andrea Omicini" ], "abstract": "Self-organisation is being recognised as an effective conceptual framework to deal with the complexity inherent to modern artificial systems. In this article, we explore the applicability of self-organisation principles to the development of multi-agent system (MAS) environments. First, we discuss a methodological approach for the engineering of complex systems, which features emergent properties: this is based on formal modelling and stochastic simulation, used to analyse global system dynamics and tune system parameters at the early stages of design. Then, as a suitable target for this approach, we describe an architecture for self-organising environments featuring artifacts and environmental agents as fundamental entities. As an example, we analyse a MAS distributed environment made of tuple spaces, where environmental agents are assigned the task of moving tuples across tuples spaces in background and according to local criteria, making complete clustering an emergent property achieved through self-organisation.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Bpim1V6JAsL", "year": null, "venue": "E4MAS 2006", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Bpim1V6JAsL", "arxiv_id": null, "doi": null }
{ "title": "Cognitive Stigmergy: Towards a Framework Based on Agents and Artifacts", "authors": [ "Alessandro Ricci", "Andrea Omicini", "Mirko Viroli", "Luca Gardelli", "Enrico Oliva" ], "abstract": "Stigmergy has been adopted in MAS (multi-agent systems) and in other fields as a technique for realising forms of emergent coordination in societies composed by a large amount of ant-like, non-rational agents. In this paper we discuss a conceptual (and engineering) framework for exploring the use of stigmergy in the context of societies composed by cognitive / rational agents, as a means for supporting high-level, knowledge-based social activities.multi-agent We refer to this kind of stigmergy as cognitive stigmergy. Cognitive stigmergy is based on the use of artifacts as tools populating and structuring the agent working environment, and which agents perceive, share and rationally use for their individual goals. Artifacts are environment abstractions that mediate agent interaction and enable emergent coordination: as such, they can be used to encapsulate and enact the stigmergic mechanisms and the shared knowledge upon which emergent coordination processes are based. In this paper, we start exploring this scenario introducing an agent-based framework for cognitive stigmergy based on artifacts. After discussing the main conceptual issues—the notion of cognitive stigmergy and the role of artifacts—, we sketch an abstract architecture for cognitive stigmergy, and outline its implementation upon the TuCS oN agent coordination infrastructure.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "yteosr4L9w7", "year": null, "venue": "E4MAS 2006", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=yteosr4L9w7", "arxiv_id": null, "doi": null }
{ "title": "CArtA gO : A Framework for Prototyping Artifact-Based Environments in MAS", "authors": [ "Alessandro Ricci", "Mirko Viroli", "Andrea Omicini" ], "abstract": "This paper describes CArtA gO, a framework for developing artifact-based working environments for multiagent systems (MAS). The framework is based on the notion of artifact, as a basic abstraction to model and engineer objects, resources and tools designed to be used and manipulated by agents at run-time to support their working activities, in particular the cooperative ones. CArtA gO enables MAS engineers to design and develop suitable artifacts, and to extend existing agent platforms with the possibility to create artifact-based working environments, programming agents to exploit them. In this paper, first the abstract model and architecture of CArtA gO is described, then a first Java-based prototype technology is discussed.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "cnJdifI0D8O", "year": null, "venue": "E4MAS 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=cnJdifI0D8O", "arxiv_id": null, "doi": null }
{ "title": "Reconciling Event- and Agent-Based Paradigms in the Engineering of Complex Systems: The Role of Environment Abstractions", "authors": [ "Andrea Omicini", "Stefano Mariani" ], "abstract": "In spite of the growing influence of agent-based models and technologies, the event-based architectural style is still prevalent in the design of large-scale distributed applications. In this paper we discuss the role of environment in both EBS and MAS, and show how it could be used as a starting point for reconciling agent-based and event-based abstractions and techniques within a conceptually-coherent framework that could work as the foundation of a principled discipline for the engineering of complex software systems.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "5EmaED5TrvO", "year": null, "venue": "E4MAS 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=5EmaED5TrvO", "arxiv_id": null, "doi": null }
{ "title": "Towards a 'Smart' Collaborative Virtual Environment and Multi-agent Approach to Designing an Intelligent Virtual Agent", "authors": [ "Nader Hanna", "Deborah Richards" ], "abstract": "Increasing interest in Collaborative Virtual Environments (CVEs) in different applications has imposed new requirements on the design of the CVEs and the resident Intelligent Virtual Agents (IVAs). In addition to cognitive abilities, IVAs in CVEs require social and communication behaviours. The use of a Multi-Agent System (MAS) has been a successful approach to address the variety of evolving abilities needed by an IVA. In this paper, a model of a ‘smart’ CVE is presented. This CVE model publicizes the properties and the possible events of each entity located in the sensory range of the nearby IVAs. Additionally, this CVE model offers a level of abstraction for the IVAs to interact with the entities in the CVE. This level of the abstraction is distributed within the design of the resident IVAs. Moreover, this paper presents a MAS-based IVA design. This IVA is able to collaborate with humans in CVEs. The proposed model simulates humans by including input, output and processing modules. In addition, the model coordinates the IVA’s verbal and non-verbal communication to convey its internal state while achieving a collaborative task.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "paepxrWfKDP", "year": null, "venue": "E4MAS 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=paepxrWfKDP", "arxiv_id": null, "doi": null }
{ "title": "Mixed Environments for MAS: Bringing Humans in the Loop", "authors": [ "Alessandro Ricci", "Juan A. Rodríguez-Aguilar", "Ander Pijoan", "Franco Zambonelli" ], "abstract": "In many application domains for agents and MAS, the interaction between the systems and human users is a main element. In some cases, the interaction occurs behind a traditional computing device, such as a computer desktop or a smartphone. In other cases, the interaction occurs through the physical world. This is the case, for instance, of smart/intelligent environment applications, and more generally in the wide context of Internet-of-Things based apps. Can the concept of agent environment for MAS play a role in the design of such systems, where humans are in the loop? In this position paper we further develop this question, providing some reflections and suggestions for future works.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "L98zLmmD0_", "year": null, "venue": "E4MAS 2006", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=L98zLmmD0_", "arxiv_id": null, "doi": null }
{ "title": "E4MAS Through Electronic Institutions", "authors": [ "Josep Lluís Arcos", "Pablo Noriega", "Juan A. Rodríguez-Aguilar", "Carles Sierra" ], "abstract": "Today, the concept of an environment for multi-agent systems is in its pioneering phase. Consequently, the development of supporting software technologies is still rather primitive and environment technologies reflecting a specific world-of-interest to the agent systems are yet to be developed in full. In contrast, environment technologies that focus on the agent system itself have been in the agenda of MAS research from its very start. Electronic institutions are prominent in this respect for they have been conceived as a type of restricted MAS environment and have had an engineering technology developed around them. In this paper we explore how the restrictions currently imposed by electronic institutions may be overcome when they are seen as a part of a larger environment where agents act. In particular, we focus on situating electronic institutions by connecting them to a world-of-interest and how this process can facilitate full-fledged environment engineering.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "od-qxNj-ffy", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122766.pdf", "forum_link": "https://openreview.net/forum?id=od-qxNj-ffy", "arxiv_id": null, "doi": null }
{ "title": "Anomaly Detection in Video Data Based on Probabilistic Latent Space Models", "authors": [ "Giulia Slavic", "Damian Campo", "Mohamad Baydoun", "Pablo Marin", "David Martín", "Lucio Marcenaro", "Carlo S. Regazzoni" ], "abstract": "This paper proposes a method for detecting anomalies in video data. A Variational Autoencoder (VAE) is used for reducing the dimensionality of video frames, generating latent space information that is comparable to low-dimensional sensory data (e.g., positioning, steering angle), making feasible the development of a consistent multi-modal architecture for autonomous vehicles. An Adapted Markov Jump Particle Filter defined by discrete and continuous inference levels is employed to predict the following frames and detecting anomalies in new video sequences. Our method is evaluated on different video scenarios where a semi-autonomous vehicle performs a set of tasks in a closed environment.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "RA4k0EA7CU", "year": null, "venue": "EACL 2023", "pdf_link": "https://aclanthology.org/2023.eacl-main.280.pdf", "forum_link": "https://openreview.net/forum?id=RA4k0EA7CU", "arxiv_id": null, "doi": null }
{ "title": "Meeting the Needs of Low-Resource Languages: The Value of Automatic Alignments via Pretrained Models", "authors": [ "Abteen Ebrahimi", "Arya D. McCarthy", "Arturo Oncevay", "John E. Ortega", "Luis Chiruzzo", "Gustavo Giménez Lugo", "Rolando A. Coto Solano", "Katharina Kann" ], "abstract": "Abteen Ebrahimi, Arya D. McCarthy, Arturo Oncevay, John E. Ortega, Luis Chiruzzo, Gustavo Giménez-Lugo, Rolando Coto-Solano, Katharina Kann. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023.", "keywords": [], "raw_extracted_content": "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics , pages 3912–3926\nMay 2-6, 2023 ©2023 Association for Computational Linguistics\nMeeting the Needs of Low-Resource Languages:\nThe Value of Automatic Alignments via Pretrained Models\nAbteen Ebrahimi♢Arya D. McCarthy∇Arturo Oncevay♡\nLuis Chiruzzo△John E. OrtegaΩGustavo A. Giménez-Lugo♣\nRolando Coto-SolanoϕKatharina Kann♢\n♢University of Colorado Boulder∇Johns Hopkins University♡University of Edinburgh\n△Universidad de la República, UruguayΩNortheastern University\n♣Universidade Tecnológica Federal do ParanáϕDartmouth College\nAbstract\nLarge multilingual models have inspired a new\nclass of word alignment methods, which work\nwell for the model’s pretraining languages.\nHowever, the languages most in need of\nautomatic alignment are low-resource and,\nthus, not typically included in the pretraining\ndata. In this work, we ask: How do modern\naligners perform on unseen languages, and\nare they better than traditional methods? We\ncontribute gold-standard alignments for Bribri–\nSpanish, Guarani–Spanish, Quechua–Spanish,\nand Shipibo-Konibo–Spanish. With these,\nwe evaluate state-of-the-art aligners with and\nwithout model adaptation to the target lan-\nguage. Finally, we also evaluate the resulting\nalignments extrinsically through two down-\nstream tasks: named entity recognition and\npart-of-speech tagging. We find that although\ntransformer-based methods generally outper-\nform traditional models, the two classes of\napproach remain competitive with each other.\n1 Introduction\nWord alignment is a valuable tool for extending\nthe coverage of natural language processing (NLP)\napplications to low-resource languages through,\ne.g., statistical machine translation (SMT; Koehn\nand Knowles, 2017; Duh et al., 2020) or anno-\ntation projection (Yarowsky et al., 2001; Smith\nand Smith, 2004; Nicolai et al., 2020; Eskan-\nder et al., 2020). The traditional approach for\ngenerating alignments has been with statistical\nmethods such as Giza++ (Och and Ney, 2003)\nand FastAlign (Dyer et al., 2013), which provide\nstrong alignment quality while remaining quick\nand lightweight to run. Recently, new methods\nhave been proposed which extract alignments from\nmassive pretrained multilingual models , and out-\nperform these longstanding methods (Dou and\nNeubig, 2021).\nOur code and data can be found at https://github.\ncom/abteen/alignment .\nCIA ××××\n×××\n××\n×××nisqam\npelícula\nnisqata\nuraykachirqa\nhinaspam\nNaciones\nUnidasman\npaqarintanta\naparurqa\n.\nLa\nCIA\ndescar gó\nla\npelícula\ny\nla\nllevó\na\nlas\nNaciones\nUnidas\nal\ndía\nsiguiente\n.×××Figure 1: A word alignment between Quechua and\nSpanish (shaded), as well as mBERT+TLM’s predicted\nalignment (marked by ×’s). FastAlign and Giza++ can-\nnot take advantage of surface features of proper names\nand borrowings. 
However, results on other NLP tasks, such as part-of-speech (POS) tagging and named-entity recognition (NER), have shown that, while pretrained models generally work well out-of-the-box for high-resource languages, performance is far lower for low-resource ones, particularly those which are unseen during pretraining (Pires et al., 2019; Wu and Dredze, 2020; Muller et al., 2021; Lee et al., 2022). Models can be adapted (Gururangan et al., 2020; Chau et al., 2020) to improve performance, but this comes with a large computational cost. Given these two considerations, for unseen low-resource languages it remains unclear (1) whether modern neural approaches based on adapted pretrained models generate higher-quality alignments than traditional approaches and (2) if so, whether the quality difference is severe enough to justify the additional computational cost.
We investigate this by collecting gold-standard alignments for Bribri, Guarani, Quechua, and Shipibo-Konibo. These languages are low-resource and unrepresented in the pretraining data of popular models—a relevant real-world scenario. In addition to intrinsically evaluating alignment quality, we measure the downstream utility of each method for training POS-tagging and NER models by annotation projection.
We find traditional and neural methods to be competitive, but pretrained models result in slightly lower alignment error rates and stronger downstream task performance, even for initially unseen languages. Through further analysis, we also find that adaptation may be a more reliable approach given minimally available resources. Taken together, these results indicate that alignment from multilingual models can indeed be a valuable tool for low-resource languages, but traditional approaches continue to be a strong option and should still be considered for practical applications.
2 Related Work
Alignment Word alignment is a long-studied task, with origins in the IBM models for statistical machine translation (Brown et al., 1993), which are the basis of Giza++ (Och and Ney, 2003) and FastAlign (Dyer et al., 2013). As these approaches can only generate one-to-many alignments, models are trained in both forward and reverse directions (reversing the role of source and target), and final alignments are created via symmetrization heuristics (Och and Ney, 2000; Koehn et al., 2005); other approaches explicitly symmetrize during training (Matusov et al., 2004; Liang et al., 2006).1 While these models rely on only position and word identity information, subword information can be integrated without requiring costly inference (Berg-Kirkpatrick et al., 2010), leading to better parameter estimation for rare words. Alignments can also be extracted from neural translation models (Chen et al., 2020; Zenkel et al., 2020). Word alignment also enables annotation projection (Yarowsky and Ngai, 2001; Yarowsky et al., 2001) which can offer strong performance, particularly for low-resource languages (Buys and Botha, 2016; Ortega and Pillaipakkamnatt, 2018; Nicolai and Yarowsky, 2019; Nicolai et al., 2020; Eskander et al., 2020).
1 The poor estimation of rare words' translation parameters also motivates symmetrization; without this, rarely observed words become garbage collector words (Moore, 2004).
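The symmetrization step described above can be made concrete with a small sketch. This is an illustrative example only, not the paper's or FastAlign's implementation: it assumes alignments in the common Pharaoh "i-j" format and shows the two simplest heuristics, union and intersection; grow-diag-final-and and related heuristics build on the intersection in a similar spirit.

```python
# Illustrative sketch: combining a forward (source->target) and a reverse
# (target->source) one-to-many alignment with the two simplest symmetrization
# heuristics. Alignments use the common Pharaoh format, e.g. "0-0 1-2 2-1".

def parse_pharaoh(line: str) -> set[tuple[int, int]]:
    """Parse 'i-j' pairs into a set of (source_index, target_index) links."""
    return {tuple(map(int, pair.split("-"))) for pair in line.split()}

def symmetrize(forward: set, reverse: set, heuristic: str = "union") -> set:
    """Combine forward and reverse alignments.

    `reverse` is assumed to already be expressed as (source, target) pairs,
    i.e. the reverse-direction output with its indices swapped back.
    """
    if heuristic == "union":
        return forward | reverse
    if heuristic == "intersection":
        return forward & reverse
    raise ValueError(f"unknown heuristic: {heuristic}")

# Toy forward and reverse alignments for a 3-word sentence pair.
fwd = parse_pharaoh("0-0 1-1 1-2")
rev = parse_pharaoh("0-0 1-1 2-2")
print(sorted(symmetrize(fwd, rev, "union")))         # [(0, 0), (1, 1), (1, 2), (2, 2)]
print(sorted(symmetrize(fwd, rev, "intersection")))  # [(0, 0), (1, 1)]
```

The experiments later in this paper report the union heuristic, which gave the best development-set results for all four languages.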
Multilingual Transformer Models Pretrained multilingual models (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020; Xue et al., 2021) have become the de facto standard approach for cross-lingual transfer. In general, these models are an extension of their monolingual variants, created by including data from many languages in their pretraining. They rely on a subword vocabulary (Kudo and Richardson, 2018) which jointly spans all of the pretraining languages. Models are pretrained using a masked language modeling (MLM) objective and a translation language modeling (TLM; Conneau and Lample, 2019) objective that uses parallel sentences. Outside of continued pretraining (Gururangan et al., 2020), models can be adapted using Adapters (Pfeiffer et al., 2020) or through vocabulary adaptation (Wang et al., 2020; Hong et al., 2021). Word alignment methods which depend on these models have also been proposed (Jalili Sabet et al., 2020; Nagata et al., 2020); we focus on AWESoME align (Dou and Neubig, 2021) because it outperforms other unsupervised methods.
3 Experiment 1: Intrinsic Evaluation
3.1 Experimental Setup
Languages We focus on four Indigenous languages spoken in the Americas for our experiments. Bribri (bzd) is a tonal language in the Chibchan family spoken by approximately 7000 people in Costa Rica. Guarani (gn) is a polysynthetic language in the Tupi–Guarani family spoken by around 6 million people across South America. Quechua (quy) is a family of Indigenous languages—from which we study Quechua Chanka—spoken across the Peruvian Andes by over 6 million people, and Shipibo-Konibo (shp) is a language spoken by around 30,000 people in Peru, Bolivia, and Brazil (Cardenas and Zeman, 2018). The latter three languages are agglutinative.
Training Data For training, we use the parallel data between Spanish and our languages described by Mager et al. (2021).2 We note that there is a distinct difference in the amount of unlabeled data available within the four languages: Guarani and Quechua have considerably more data available. These two languages also have monolingual text available in Wikipedia, which we extract using WikiExtractor (Attardi, 2015). The exact number of parallel and monolingual sentences for all languages is shown in Table B.1.
2 Although parallel, digitized Bibles exist for over 1600 languages (McCarthy et al., 2020), groups may object to annotating the Bible for historical or cultural reasons.
Evaluation Data To create gold standard alignments for evaluation, we sample multi-way parallel examples from AmericasNLI (Ebrahimi et al., 2022), allowing for multi-parallel alignments (Xia and Yarowsky, 2017) across all languages. Samples for the development and test sets are taken from their respective splits in the AmericasNLI dataset. Development examples were collected first, manually checked, and corrected. Examples with misalignments in punctuation, numbers, or named entities were not used. After a period of development with this data, the test set of 50 examples was collected and manually verified. Annotations were collected using JHU's open-source Turkle platform.3 We ask annotators to only mark sure alignments. Additional discussion on data collection and the test set can be found in §6.
3 https://github.com/hltcoe/turkle
Metrics We evaluate automatic alignments via alignment error rate (AER; Och and Ney, 2000). Because we only collect sure alignments, this is equivalent to the balanced F-measure (Fraser and Marcu, 2007). We give additional metrics in Table C.3.
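To make the metric concrete, the following is a minimal sketch of AER over sure-only gold links; with no "possible" links it reduces to 1 minus F1, which is the equivalence to balanced F-measure noted above. The example links are invented for illustration.

```python
# Minimal sketch of alignment error rate (AER) with sure-only gold links.
# With no "possible" links, AER reduces to 1 - F1 (Och and Ney, 2000;
# Fraser and Marcu, 2007).

def aer(predicted: set, sure: set) -> float:
    """AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|), here with P = S."""
    if not predicted and not sure:
        return 0.0
    overlap = len(predicted & sure)
    return 1.0 - (2 * overlap) / (len(predicted) + len(sure))

def f1(predicted: set, sure: set) -> float:
    overlap = len(predicted & sure)
    precision = overlap / len(predicted) if predicted else 0.0
    recall = overlap / len(sure) if sure else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = {(0, 0), (1, 1), (2, 3), (3, 2)}
pred = {(0, 0), (1, 1), (2, 2), (3, 2)}
print(round(aer(pred, gold), 4))      # 0.25
print(round(1 - f1(pred, gold), 4))   # 0.25, the same value, as expected
```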
3.2 Models
Traditional Aligners We use Giza++ (Och and Ney, 2003) and FastAlign (Dyer et al., 2013) as our traditional aligners. Giza++ is based on IBM Models 1–5 (Brown et al., 1993). FastAlign (Dyer et al., 2013) is a re-parameterization of IBM Model 2. We use the implementation and hyperparameters of Zenkel et al. (2020), which relies on MGiza++ (Gao and Vogel, 2008) and the standard FastAlign package. Both approaches run on CPUs, and their training time ranges from 6 seconds to 3 minutes for FastAlign, and from 43 seconds to 22 minutes for Giza++. We use the union of the forward and reverse alignments, as this symmetrization heuristic offers the best result for all languages on the development set. We show the performance of other heuristics in Table C.2.
Neural Aligners AWESoME (Dou and Neubig, 2021) identifies alignment links by considering cosine similarities between hidden layer representations of tokens in a neural encoder. We consider two such encoders: mBERT (Devlin et al., 2019) and XLM-R (Liu et al., 2019), and we use the default AWESoME configuration to extract alignments. We give layer-by-layer alignment performance in Figure C.1.
Table 1: AER, in percentages, for each language and method. The best overall result for each language is bolded, while the best model within each method is underlined. We separate results which use Wikipedia, as they are not directly comparable.
Model              Method    BZD    GN     QUY    SHP    Avg.
AWESoME (mBERT)    BL        70.03  63.13  67.02  60.41  65.15
AWESoME (mBERT)    +MLM-T    68.95  49.68  46.59  58.17  55.85
AWESoME (mBERT)    +MLM-ST   70.63  50.25  42.52  58.66  55.52
AWESoME (mBERT)    +TLM      58.43  43.10  36.96  52.34  47.71
AWESoME (XLM-R)    BL        80.15  73.11  75.24  69.21  74.43
AWESoME (XLM-R)    +MLM-T    76.89  65.44  53.65  65.16  65.29
AWESoME (XLM-R)    +MLM-ST   77.53  64.55  52.90  66.56  65.39
AWESoME (XLM-R)    +TLM      74.90  58.84  43.25  63.48  60.12
FastAlign          Union     51.40  43.52  54.06  54.67  50.91
Giza++             Union     55.61  49.92  66.01  60.84  58.10
mBERT              +MLM-WT   -      40.00  46.00  -      43.00
XLM-R              +MLM-WT   -      52.27  48.83  -      50.55
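The following is a deliberately simplified sketch of the idea behind similarity-based neural aligners: encode both sentences with mBERT, average subword states into word vectors, and keep mutual-nearest-neighbour pairs under cosine similarity. The actual AWESoME procedure differs (it thresholds softmax-normalized similarities in both directions and can fine-tune the encoder), and the hidden-layer index used here is a common heuristic rather than the paper's exact configuration.

```python
# Simplified sketch of similarity-based alignment extraction from a multilingual
# encoder (mutual argmax over cosine similarities). Not the AWESoME implementation.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased", output_hidden_states=True)
model.eval()

def word_vectors(words: list[str], layer: int = 8) -> torch.Tensor:
    """Encode a pre-tokenized sentence and average subword states per word."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]  # (num_subwords, dim)
    word_ids = enc.word_ids()
    vecs = []
    for idx in range(len(words)):
        rows = [i for i, w in enumerate(word_ids) if w == idx]
        vecs.append(hidden[rows].mean(dim=0))
    return torch.stack(vecs)

def align(src_words: list[str], tgt_words: list[str]) -> set[tuple[int, int]]:
    src, tgt = word_vectors(src_words), word_vectors(tgt_words)
    sim = torch.nn.functional.cosine_similarity(src.unsqueeze(1), tgt.unsqueeze(0), dim=-1)
    forward = sim.argmax(dim=1)   # best target word for each source word
    backward = sim.argmax(dim=0)  # best source word for each target word
    # Keep a link only if the two words are each other's nearest neighbour.
    return {(i, int(j)) for i, j in enumerate(forward) if int(backward[int(j)]) == i}

# Example sentence pair taken from Figure 1.
print(align("La CIA descargó la película".split(),
            "CIA nisqam película nisqata uraykachirqa".split()))
```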
Model Adaptation We experiment with three adaptation schemes based on continued pretraining (+TLM, +MLM-T, and +MLM-ST) which rely on unlabeled data and further train the model using MLM (Gururangan et al., 2020) before alignments are extracted. We focus on these objectives as they have been used by prior work for general model adaptation, and they work well in situations with limited resources (Ebrahimi and Kann, 2021). As we have access to bitext between Spanish and the target languages, for the +TLM scheme each example is the concatenation of a Spanish sentence with its translation. For +MLM-T we adapt using solely the target side of the available data, and for +MLM-ST we adapt on both the source and target; however, this data is treated as monolingual data and not explicitly aligned. +MLM-WT denotes target-language adaptation which includes Wikipedia data. The duration of adaptation depends on the GPU and method used; it ranges from around 6 minutes for Bribri to 4 hours for Quechua. We provide additional training details in Appendix A.
3.3 Results
Traditional vs. Neural Aligners We present results in Table 1. The best traditional method is FastAlign, and the best neural approach is with mBERT+TLM. Comparing the two, we see that the lowest error rate is achieved with the neural approach for all languages except for Bribri, where FastAlign offers 7.03% absolute improvement. Of the other three languages, the performance for two is close: the difference in performance for Guarani is only 0.42% and 2.33% for Shipibo-Konibo. For Quechua, +TLM improves over FastAlign by 17.10%.
Comparing Adaptation Strategies With mBERT, +MLM-T improves performance over the non-adapted baseline by 9.30% on average, with +MLM-ST increasing this gain to 9.63% and +TLM offering the highest improvement of 17.44%, consistent with prior work on seen languages (Dou and Neubig, 2021). Per language, the largest and smallest gains are for Quechua (30.06%) and for Shipibo-Konibo (8.07%); intuitively, gains from adaptation are proportional to the size of the adaptation data. For XLM-R, we again see relative gains from adaptation, with +TLM offering the highest performance increase.
Additional Monolingual Data Neural approaches can easily benefit from additional monolingual data. Adding Wikipedia data results in the highest performance for Guarani, outperforming the previous best approach by 3.1%. In contrast, while the additional data for Quechua does help relative to +MLM-T, it does not outperform +TLM. This difference in performance may be due to the relative sizes of the additional data; the Guarani Wikipedia has 1.3× as many tokens as the target-side parallel data, while the Quechua Wikipedia only has 0.5× as many.
4 Experiment 2: Extrinsic Evaluation
We further compare aligner performance extrinsically by evaluating downstream task performance when using a projected training set. We consider two tasks: NER and POS tagging.
4.1 Experimental Setup
Data Due to the limited availability and quality of evaluation datasets, we focus on Guarani for this experiment. We use the test set provided by Rahimi et al. (2019) for NER and Universal Dependencies (Nivre et al., 2020) for POS. For experiments where we finetune directly on English or Spanish, we use the provided training data.
Table 2: POS tagging (accuracy) and NER results (F1) for Guarani. Model denotes if baseline or adapted mBERT is used. Train Source defines the training data used for finetuning; language codes indicate training on original data, while alignment methods denote how a projected training set was created.
Model      Train Source   POS     NER
mBERT      en             10.36   46.64
mBERT      es             19.82   49.18
+TLM       es             36.94   49.62
+MLM-T     es             34.69   55.25
+MLM-ST    es             33.78   52.34
mBERT      mBERT          31.53   47.54
mBERT      +MLM-T         38.29   47.97
mBERT      +MLM-ST        42.34   49.80
mBERT      +TLM           40.99   49.80
mBERT      FastAlign      37.84   46.55
mBERT      Giza++         39.19   48.33
Annotation Projection To create the projected training sets, we first annotate the (unlabeled) Spanish parallel data with Stanza (Qi et al., 2020) and generate bidirectional alignments using each method. We then project the tags from Spanish to Guarani using type and token constraints as described by Buys and Botha (2016).
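A minimal sketch of token-level projection across alignment links is shown below. It only majority-votes over aligned source tags and does not implement the type and token constraints of Buys and Botha (2016) that the paper uses; the tags, sentence length, and links are invented for illustration.

```python
# Minimal sketch of token-level annotation projection across word alignments.
# A much cruder stand-in for the constrained projection used in the paper.
from collections import Counter

def project_tags(src_tags: list[str],
                 alignment: set[tuple[int, int]],
                 tgt_len: int,
                 unknown: str = "X") -> list[str]:
    """Copy tags from aligned source tokens onto each target token."""
    projected = []
    for j in range(tgt_len):
        candidates = [src_tags[i] for (i, jj) in alignment if jj == j]
        projected.append(Counter(candidates).most_common(1)[0][0] if candidates else unknown)
    return projected

# Spanish side annotated with a tagger (e.g., Stanza), aligned to a 5-token
# target sentence via links produced by any of the aligners above.
spanish_tags = ["DET", "PROPN", "VERB", "DET", "NOUN"]
links = {(1, 0), (2, 4), (4, 2)}
print(project_tags(spanish_tags, links, tgt_len=5))
# ['PROPN', 'X', 'NOUN', 'X', 'VERB']
```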
Models For baseline performance, we finetune mBERT on the provided English and Spanish training sets for each task. Additionally, we also finetune adapted versions of mBERT on Spanish training data – English is omitted as performance is worse and adaptation data is in Spanish. Finally, we evaluate performance when finetuning mBERT on the training sets created through projection.
4.2 Results
We present results for both tasks in Table 2.
POS For POS tagging, the baseline zero-shot performance is extremely poor, and we see a minimum increase of 11.71% accuracy when using any projection method. Giza++ outperforms FastAlign, as well as projection with +MLM-T; however, the best performance is achieved with +MLM-ST, with +TLM offering the second-best result. While the ordering of methods changes, the best performance is still achieved with the neural approaches, consistent with the results of Experiment 1.
NER For NER, baseline performance is high: inspecting the data shows that many entities have English or Spanish names, and as multilingual models already have knowledge of these two languages, standard aligners with projection may not effectively leverage surface word-form clues. However, they remain a valuable indication of alignment quality. Among the projection-based approaches, we find that using Giza++ again outperforms +MLM-T and FastAlign but falls short of +MLM-ST and +TLM.
Overall, considering what both downstream tasks indicate regarding alignment quality, neural models adapted using Spanish and target-language data—either sentence-aligned or unaligned—consistently outperform traditional methods.
5 Analysis
As data for low-resource languages often varies considerably in both amount and length, we consider two additional analysis experiments which control for these factors. We focus solely on Quechua, as it has the most parallel data available. Results are presented in Figure 2 with numerical results in Tables C.4 and C.5.
[Figure 2 (plots omitted): Plots for data analysis. (a) Subset Analysis. (b) Length Analysis. In Figure 2b, a vertical line denotes the average example length for Bribri.]
Subset Analysis For this analysis, we ask how the performance of neural alignment depends on the amount of data and with how much data it surpasses traditional approaches. We subsample the adaptation data, and use this to extract alignments using both FastAlign and AWESoME. Results for this experiment can be seen in Figure 2a. For reference, we also plot the AER obtained when using FastAlign on all the available training data as an upper bound for the performance of the traditional approaches. In the smallest extreme, all methods are roughly equivalent. However, as the number of examples increases, adaptation using +TLM and +MLM-WT improves at a faster rate than other approaches: with only 6400 sentence pairs, these approaches overtake the best expected performance of FastAlign.
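The subset-analysis protocol can be sketched as follows. The adapt_and_score callback is a hypothetical placeholder for "adapt or train the aligner on the given subsample and return its AER on the gold evaluation set"; the doubling schedule and random seed are illustrative choices, not the paper's exact settings.

```python
# Sketch of the subset-analysis protocol: score each aligner on nested random
# subsamples of the parallel data of increasing size.
import random
from collections.abc import Callable, Sequence

def subset_curve(sentence_pairs: Sequence[tuple[str, str]],
                 adapt_and_score: Callable[[Sequence[tuple[str, str]]], float],
                 start: int = 100,
                 seed: int = 0) -> list[tuple[int, float]]:
    """Return (subsample size, score) pairs, e.g. AER as a function of data size."""
    rng = random.Random(seed)
    shuffled = list(sentence_pairs)
    rng.shuffle(shuffled)
    curve, size = [], start
    while size < len(shuffled):
        curve.append((size, adapt_and_score(shuffled[:size])))  # nested subsamples
        size *= 2
    curve.append((len(shuffled), adapt_and_score(shuffled)))    # full-data reference point
    return curve

# Usage with hypothetical callbacks standing in for the real adaptation + evaluation:
# fastalign_curve = subset_curve(bitext, score_fastalign)
# tlm_curve = subset_curve(bitext, score_awesome_tlm)
```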
Length Analysis Aligner performance may not only be affected by the total number of examples available, but also by the length of these examples. This is doubly relevant for low-resource languages, as resources may be limited to sources which do not contain long (or even complete) sentences. To see how the performance of each method may vary when faced with examples of different lengths, we sort the unlabeled data by the number of characters, and partition the examples into groups of 7508, the total number of examples available for Bribri. We choose this amount as it is representative of how much data may be available for other low-resource languages. As before, the expected upper-bound FastAlign performance is denoted. For the shortest group, all methods are similar; however, AWESoME alignments improve with longer sequences, with +TLM showing the quickest decrease in error rate. We attribute the improved AER when adapting using longer sequences to the increased number of tokens available for adaptation. For Quechua, the performance of AWESoME align is sensitive to both the number of examples and sequence length. In contrast, FastAlign only shows a small improvement as example length increases.
6 Conclusion
In this work, we have investigated the performance of modern word aligners versus classical approaches for languages unseen to pretrained models. While classical methods remain competitive, the lowest AER on average is achieved by modern neural approaches. However, using these models comes with a larger computational cost. Therefore, the trade-off between training requirements and overall performance must be considered. If access to computing resources is limited or training time is a factor, classical approaches remain a viable option which should not be discounted.
Ethics and Limitations
Ethics Statement
When collecting data in an Indigenous language, it becomes vital that the process does not exploit any member of the community or commodify the language (Schwartz, 2022). Further, it is important that members of the community benefit from the dataset. While the creation of a word alignment dataset will not directly impact community members, we believe that it can contribute to the development of tools, such as translation systems, that can be directly beneficial, and that increasing the visibility of these languages within the research community will further spur the creation of useful systems. Our annotations were created by either co-authors of the paper or by native speakers of the languages, who were compensated at a rate chosen with the minimum hourly salary in their respective countries taken into account.
Limitations
Test Set Size One limitation of our work is the size of the evaluation set used for our main results. This arises from the general difficulty in collecting annotations and data for low-resource, and particularly Indigenous, languages. The size of the test set was chosen to balance the trade-off between the cost of annotation collection and experimental validity.
Fortunately, for the task of\nword alignment the main metric used to summa-\nrize performance—alignment error rate—does not\ndepend directly on the number of examples in the\nevaluation set, but the total number of alignments,\nof which there are a sufficiently high number in\nour evaluation set. However, even when only con-\nsidering the number of examples, our test set is\nstill within the same order of magnitude as other\nwidely used word alignment evaluation sets, such\nas the Romanian–English test set which consists of\n248 examples (Mihalcea and Pedersen, 2003), and\nthe English–Inuktitut and English–Hindi test sets\nwhich have 75 and 90 examples each, respectively\n(Martin et al., 2005).\nWe run a small experiment to gain insight into\nhow much precision is lost when using a test set\nof size 50, versus 248, which we choose as this is\nthe size of the widely used Romanian–English test\nset mentioned above. We take 100 independent\nsamples without replacement from the Romanian-\nEnglish test set, each of size 50, and evaluate the\nperformance of FastAlign and AWESoME align.For FastAlign, we use the training data defined by\nMihalcea and Pedersen (2003), and for Awesome,\nwe use mBERT with no additional finetuning. The\ndistributions of AER are shown in Figure A.1, with\nsummary statistics in Table A.1. We can see that\nthe standard deviation of both distributions is rela-\ntively low, around 2%. At the extremes, we see a\ndifference of −4.70% and +4.90%, and −4.28%\nand+6.4% for FastAlign and AWESoME align\nrespectively, between the min/max values of our\ndistribution as compared to the whole set AER.\nConsidering these points, we believe that the size\nof our evaluation set does not invalidate our experi-\nmental results and main conclusions; however, we\nnote that additional care must be taken when com-\nparing specific models whose performances are\nclose together, particularly when this performance\nis low or close to random.\nTest Set Domain Other limitations of our work\narise from the sources of data used. Annotations\nwere done using sentences sampled from Ameri-\ncasNLI, which itself is a translation of XNLI. As\nsuch, any errors from the original XNLI dataset,\nwhich may have propagated through translation,\nwill persist in our dataset as well (annotators were\ngiven the option to modify target language sen-\ntences to correct any errors). Furthermore, due to\ntranslation, the sentences may not be directly rep-\nresentative of a natural utterance which would be\nspoken by members of the communities.\nLanguage Selection The languages we high-\nlight in this work are true low-resource languages,\nand present challenges commonly faced by other\nlow-resource languages. Namely, these languages\nhave a relatively small amount of easily avail-\nable and clean unlabeled data, are typically un-\nseen from most released pretrained models, and\nare morphologically different from typically used\nsource languages. However, one feature of these\nlanguages which may inflate aligner performance\nis the language script: all of our target languages\nshare the same script with the two source lan-\nguages which we use. This may lead to higher\noccurrences of shared words or entities, making\nalignment easier. 
As such, our results may not\ngeneralize fully to other low-resource languages\nwhich have a different script from the source lan-\nguages, or which may have a script which is un-\nseen to the underlying pretrained model.3917\nAcknowledgements\nWe would like to thank Roque Helmer Luna-\nMontoya (Academia Mayor de la Lengua Quechua\nin Cuzco, Perú) and Richard Castro Mamani\n(Universidad Nacional de San Antonio Abad\nand Hinantin Software) for their help in anno-\ntating and verifying the Quechua–Spanish align-\nments. We would also like to thank Liz Karen\nChavez Sanchez for annotating the Shipibo-\nKonibo–Spanish alignments. A.D.M. is supported\nby an Amazon Fellowship and a Frederick Jelinek\nFellowship.\nReferences\nGiusepppe Attardi. 2015. Wikiextractor. https://\ngithub.com/attardi/wikiextractor .\nTaylor Berg-Kirkpatrick, Alexandre Bouchard-Côté,\nJohn DeNero, and Dan Klein. 2010. Painless un-\nsupervised learning with features. In Human Lan-\nguage Technologies: The 2010 Annual Conference\nof the North American Chapter of the Association\nfor Computational Linguistics , pages 582–590, Los\nAngeles, California. Association for Computational\nLinguistics.\nPeter F. Brown, Stephen A. Della Pietra, Vincent J.\nDella Pietra, and Robert L. Mercer. 1993. The math-\nematics of statistical machine translation: Parameter\nestimation. Computational Linguistics , 19(2):263–\n311.\nJan Buys and Jan A. Botha. 2016. Cross-lingual mor-\nphological tagging for low-resource languages. In\nProceedings of the 54th Annual Meeting of the As-\nsociation for Computational Linguistics (Volume 1:\nLong Papers) , pages 1954–1964, Berlin, Germany.\nAssociation for Computational Linguistics.\nRonald Cardenas and Daniel Zeman. 2018. A mor-\nphological analyzer for Shipibo-konibo. In Proceed-\nings of the Fifteenth Workshop on Computational\nResearch in Phonetics, Phonology, and Morphology ,\npages 131–139, Brussels, Belgium. Association for\nComputational Linguistics.\nEthan C. Chau, Lucy H. Lin, and Noah A. Smith. 2020.\nParsing with multilingual BERT, a small corpus, and\na small treebank. In Findings of the Association\nfor Computational Linguistics: EMNLP 2020 , pages\n1324–1334, Online. Association for Computational\nLinguistics.\nYun Chen, Yang Liu, Guanhua Chen, Xin Jiang, and\nQun Liu. 2020. Accurate word alignment induction\nfrom neural machine translation. In Proceedings of\nthe 2020 Conference on Empirical Methods in Natu-\nral Language Processing (EMNLP) , pages 566–576,\nOnline. Association for Computational Linguistics.Alexis Conneau, Kartikay Khandelwal, Naman Goyal,\nVishrav Chaudhary, Guillaume Wenzek, Francisco\nGuzmán, Edouard Grave, Myle Ott, Luke Zettle-\nmoyer, and Veselin Stoyanov. 2020. Unsupervised\ncross-lingual representation learning at scale. In Pro-\nceedings of the 58th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 8440–\n8451, Online. Association for Computational Lin-\nguistics.\nAlexis Conneau and Guillaume Lample. 2019. Cross-\nlingual language model pretraining. In Advances in\nNeural Information Processing Systems , volume 32.\nCurran Associates, Inc.\nJacob Devlin, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019. BERT: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. In Proceedings of the 2019 Conference of\nthe North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers) , pages\n4171–4186, Minneapolis, Minnesota. 
Association\nfor Computational Linguistics.\nZi-Yi Dou and Graham Neubig. 2021. Word alignment\nby fine-tuning embeddings on parallel corpora. In\nConference of the European Chapter of the Associa-\ntion for Computational Linguistics (EACL) .\nKevin Duh, Paul McNamee, Matt Post, and Brian\nThompson. 2020. Benchmarking neural and statis-\ntical machine translation on low-resource African\nlanguages. In Proceedings of the 12th Language\nResources and Evaluation Conference , pages 2667–\n2675, Marseille, France. European Language Re-\nsources Association.\nChris Dyer, Victor Chahuneau, and Noah A. Smith.\n2013. A simple, fast, and effective reparameteriza-\ntion of IBM model 2. In Proceedings of the 2013\nConference of the North American Chapter of the\nAssociation for Computational Linguistics: Human\nLanguage Technologies , pages 644–648, Atlanta,\nGeorgia. Association for Computational Linguistics.\nAbteen Ebrahimi and Katharina Kann. 2021. How to\nadapt your pretrained multilingual model to 1600\nlanguages. In Proceedings of the 59th Annual Meet-\ning of the Association for Computational Linguistics\nand the 11th International Joint Conference on Nat-\nural Language Processing (Volume 1: Long Papers) ,\npages 4555–4567, Online. Association for Computa-\ntional Linguistics.\nAbteen Ebrahimi, Manuel Mager, Arturo Once-\nvay, Vishrav Chaudhary, Luis Chiruzzo, Angela\nFan, John Ortega, Ricardo Ramos, Annette Rios,\nIvan Vladimir Meza Ruiz, Gustavo Giménez-Lugo,\nElisabeth Mager, Graham Neubig, Alexis Palmer,\nRolando Coto-Solano, Thang Vu, and Katharina\nKann. 2022. AmericasNLI: Evaluating zero-shot\nnatural language understanding of pretrained multi-\nlingual models in truly low-resource languages. In3918\nProceedings of the 60th Annual Meeting of the As-\nsociation for Computational Linguistics (Volume 1:\nLong Papers) , pages 6279–6299, Dublin, Ireland.\nAssociation for Computational Linguistics.\nRamy Eskander, Smaranda Muresan, and Michael\nCollins. 2020. Unsupervised cross-lingual part-of-\nspeech tagging for truly low-resource scenarios. In\nProceedings of the 2020 Conference on Empirical\nMethods in Natural Language Processing (EMNLP) ,\npages 4820–4831, Online. Association for Computa-\ntional Linguistics.\nAlexander Fraser and Daniel Marcu. 2007. Squibs and\ndiscussions: Measuring word alignment quality for\nstatistical machine translation. Computational Lin-\nguistics , 33(3):293–303.\nQin Gao and Stephan V ogel. 2008. Parallel implemen-\ntations of word alignment tool. In Software Engi-\nneering, Testing, and Quality Assurance for Natu-\nral Language Processing , pages 49–57, Columbus,\nOhio. Association for Computational Linguistics.\nSuchin Gururangan, Ana Marasovi ´c, Swabha\nSwayamdipta, Kyle Lo, Iz Beltagy, Doug Downey,\nand Noah A. Smith. 2020. Don’t stop pretraining:\nAdapt language models to domains and tasks. In\nProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics , pages\n8342–8360, Online. Association for Computational\nLinguistics.\nJimin Hong, TaeHee Kim, Hyesu Lim, and Jaegul Choo.\n2021. A V ocaDo: Strategy for adapting vocabulary\nto downstream domain. In Proceedings of the 2021\nConference on Empirical Methods in Natural Lan-\nguage Processing , pages 4692–4700, Online and\nPunta Cana, Dominican Republic. Association for\nComputational Linguistics.\nMasoud Jalili Sabet, Philipp Dufter, François Yvon,\nand Hinrich Schütze. 2020. 
SimAlign: High quality\nword alignments without parallel training data using\nstatic and contextualized embeddings. In Findings\nof the Association for Computational Linguistics:\nEMNLP 2020 , pages 1627–1643, Online. Associa-\ntion for Computational Linguistics.\nPhilipp Koehn, Amittai Axelrod, Alexandra\nBirch Mayne, Chris Callison-Burch, Miles\nOsborne, and David Talbot. 2005. Edinburgh\nsystem description for the 2005 IWSLT speech\ntranslation evaluation. In Proceedings of the Second\nInternational Workshop on Spoken Language\nTranslation , Pittsburgh, Pennsylvania, USA.\nPhilipp Koehn and Rebecca Knowles. 2017. Six chal-\nlenges for neural machine translation. In Proceed-\nings of the First Workshop on Neural Machine Trans-\nlation , pages 28–39, Vancouver. Association for\nComputational Linguistics.\nTaku Kudo and John Richardson. 2018. SentencePiece:\nA simple and language independent subword tok-\nenizer and detokenizer for neural text processing. InProceedings of the 2018 Conference on Empirical\nMethods in Natural Language Processing: System\nDemonstrations , pages 66–71, Brussels, Belgium.\nAssociation for Computational Linguistics.\nEn-Shiun Lee, Sarubi Thillainathan, Shravan Nayak,\nSurangika Ranathunga, David Adelani, Ruisi Su,\nand Arya McCarthy. 2022. Pre-trained multilin-\ngual sequence-to-sequence models: A hope for low-\nresource language translation? In Findings of the As-\nsociation for Computational Linguistics: ACL 2022 ,\npages 58–67, Dublin, Ireland. Association for Com-\nputational Linguistics.\nPercy Liang, Ben Taskar, and Dan Klein. 2006. Align-\nment by agreement. In Proceedings of the Main\nConference on Human Language Technology Con-\nference of the North American Chapter of the Asso-\nciation of Computational Linguistics , HLT-NAACL\n’06, page 104–111, USA. Association for Computa-\ntional Linguistics.\nYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-\ndar Joshi, Danqi Chen, Omer Levy, Mike Lewis,\nLuke Zettlemoyer, and Veselin Stoyanov. 2019.\nRoberta: A robustly optimized bert pretraining ap-\nproach. ArXiv , abs/1907.11692.\nManuel Mager, Arturo Oncevay, Abteen Ebrahimi,\nJohn Ortega, Annette Rios, Angela Fan, Xi-\nmena Gutierrez-Vasques, Luis Chiruzzo, Gustavo\nGiménez-Lugo, Ricardo Ramos, Ivan Vladimir\nMeza Ruiz, Rolando Coto-Solano, Alexis Palmer,\nElisabeth Mager-Hois, Vishrav Chaudhary, Graham\nNeubig, Ngoc Thang Vu, and Katharina Kann. 2021.\nFindings of the AmericasNLP 2021 shared task on\nopen machine translation for indigenous languages\nof the Americas. In Proceedings of the First Work-\nshop on Natural Language Processing for Indige-\nnous Languages of the Americas , pages 202–217,\nOnline. Association for Computational Linguistics.\nJoel Martin, Rada Mihalcea, and Ted Pedersen. 2005.\nWord alignment for languages with scarce resources.\nInProceedings of the ACL Workshop on Building\nand Using Parallel Texts , pages 65–74, Ann Arbor,\nMichigan. Association for Computational Linguis-\ntics.\nEvgeny Matusov, Richard Zens, and Hermann Ney.\n2004. Symmetric word alignments for statistical ma-\nchine translation. In COLING 2004: Proceedings\nof the 20th International Conference on Computa-\ntional Linguistics , pages 219–225, Geneva, Switzer-\nland. COLING.\nArya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron\nMueller, Winston Wu, Oliver Adams, Garrett Nico-\nlai, Matt Post, and David Yarowsky. 2020. The\nJohns Hopkins University Bible corpus: 1600+\ntongues for typological exploration. 
In Proceedings\nof the Twelfth Language Resources and Evaluation\nConference , pages 2884–2892, Marseille, France.\nEuropean Language Resources Association.3919\nRada Mihalcea and Ted Pedersen. 2003. An evaluation\nexercise for word alignment. In Proceedings of the\nHLT-NAACL 2003 Workshop on Building and Using\nParallel Texts: Data Driven Machine Translation\nand Beyond , pages 1–10.\nRobert C. Moore. 2004. Improving IBM word align-\nment model 1. In Proceedings of the 42nd An-\nnual Meeting of the Association for Computational\nLinguistics (ACL-04) , pages 518–525, Barcelona,\nSpain.\nBenjamin Muller, Antonios Anastasopoulos, Benoît\nSagot, and Djamé Seddah. 2021. When being un-\nseen from mBERT is just the beginning: Handling\nnew languages with multilingual language models.\nInProceedings of the 2021 Conference of the North\nAmerican Chapter of the Association for Computa-\ntional Linguistics: Human Language Technologies ,\npages 448–462, Online. Association for Computa-\ntional Linguistics.\nMasaaki Nagata, Katsuki Chousa, and Masaaki\nNishino. 2020. A supervised word alignment\nmethod based on cross-language span prediction us-\ning multilingual BERT. In Proceedings of the 2020\nConference on Empirical Methods in Natural Lan-\nguage Processing (EMNLP) , pages 555–565, Online.\nAssociation for Computational Linguistics.\nGarrett Nicolai, Dylan Lewis, Arya D. McCarthy,\nAaron Mueller, Winston Wu, and David Yarowsky.\n2020. Fine-grained morphosyntactic analysis and\ngeneration tools for more than one thousand lan-\nguages. In Proceedings of the Twelfth Language\nResources and Evaluation Conference , pages 3963–\n3972, Marseille, France. European Language Re-\nsources Association.\nGarrett Nicolai and David Yarowsky. 2019. Learning\nmorphosyntactic analyzers from the Bible via itera-\ntive annotation projection across 26 languages. In\nProceedings of the 57th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 1765–\n1774, Florence, Italy. Association for Computational\nLinguistics.\nJoakim Nivre, Marie-Catherine de Marneffe, Filip Gin-\nter, Jan Haji ˇc, Christopher D. Manning, Sampo\nPyysalo, Sebastian Schuster, Francis Tyers, and\nDaniel Zeman. 2020. Universal Dependencies v2:\nAn evergrowing multilingual treebank collection. In\nProceedings of the 12th Language Resources and\nEvaluation Conference , pages 4034–4043, Marseille,\nFrance. European Language Resources Association.\nFranz Josef Och and Hermann Ney. 2000. Improved\nstatistical alignment models. In Proceedings of the\n38th Annual Meeting of the Association for Com-\nputational Linguistics , pages 440–447, Hong Kong.\nAssociation for Computational Linguistics.\nFranz Josef Och and Hermann Ney. 2003. A systematic\ncomparison of various statistical alignment models.\nComputational Linguistics , 29(1):19–51.John Ortega and Krishnan Pillaipakkamnatt. 2018. Us-\ning morphemes from agglutinative languages like\nQuechua and Finnish to aid in low-resource transla-\ntion. In Proceedings of the AMTA 2018 Workshop\non Technologies for MT of Low Resource Languages\n(LoResMT 2018) , pages 1–11, Boston, MA. Associ-\nation for Machine Translation in the Americas.\nJonas Pfeiffer, Ivan Vuli ´c, Iryna Gurevych, and Se-\nbastian Ruder. 2020. MAD-X: An Adapter-Based\nFramework for Multi-Task Cross-Lingual Transfer.\nInProceedings of the 2020 Conference on Empirical\nMethods in Natural Language Processing (EMNLP) ,\npages 7654–7673, Online. 
Association for Computa-\ntional Linguistics.\nTelmo Pires, Eva Schlinger, and Dan Garrette. 2019.\nHow multilingual is multilingual BERT? In Pro-\nceedings of the 57th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 4996–\n5001, Florence, Italy. Association for Computational\nLinguistics.\nPeng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton,\nand Christopher D. Manning. 2020. Stanza: A\nPython natural language processing toolkit for many\nhuman languages. In Proceedings of the 58th An-\nnual Meeting of the Association for Computational\nLinguistics: System Demonstrations .\nAfshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Mas-\nsively multilingual transfer for NER. In Proceed-\nings of the 57th Annual Meeting of the Association\nfor Computational Linguistics , pages 151–164, Flo-\nrence, Italy. Association for Computational Linguis-\ntics.\nLane Schwartz. 2022. Primum Non Nocere: Before\nworking with Indigenous data, the ACL must con-\nfront ongoing colonialism. In Proceedings of the\n60th Annual Meeting of the Association for Compu-\ntational Linguistics (Volume 2: Short Papers) , pages\n724–731, Dublin, Ireland. Association for Computa-\ntional Linguistics.\nDavid A. Smith and Noah A. Smith. 2004. Bilingual\nparsing with factored estimation: Using English to\nparse Korean. In Proceedings of the 2004 Confer-\nence on Empirical Methods in Natural Language\nProcessing , pages 49–56, Barcelona, Spain. Associ-\nation for Computational Linguistics.\nZihan Wang, Karthikeyan K, Stephen Mayhew, and\nDan Roth. 2020. Extending multilingual BERT to\nlow-resource languages. In Findings of the Associ-\nation for Computational Linguistics: EMNLP 2020 ,\npages 2649–2656, Online. Association for Computa-\ntional Linguistics.\nThomas Wolf, Lysandre Debut, Victor Sanh, Julien\nChaumond, Clement Delangue, Anthony Moi, Pier-\nric Cistac, Tim Rault, Rémi Louf, Morgan Funtow-\nicz, Joe Davison, Sam Shleifer, Patrick von Platen,\nClara Ma, Yacine Jernite, Julien Plu, Canwen Xu,\nTeven Le Scao, Sylvain Gugger, Mariama Drame,3920\nQuentin Lhoest, and Alexander M. Rush. 2020.\nTransformers: State-of-the-art natural language pro-\ncessing. In Proceedings of the 2020 Conference on\nEmpirical Methods in Natural Language Process-\ning: System Demonstrations , pages 38–45, Online.\nAssociation for Computational Linguistics.\nShijie Wu and Mark Dredze. 2020. Are all languages\ncreated equal in multilingual BERT? In Proceedings\nof the 5th Workshop on Representation Learning for\nNLP, pages 120–130, Online. Association for Com-\nputational Linguistics.\nPatrick Xia and David Yarowsky. 2017. Deriving con-\nsensus for multi-parallel corpora: an English Bible\nstudy. In Proceedings of the Eighth International\nJoint Conference on Natural Language Processing\n(Volume 2: Short Papers) , pages 448–453, Taipei,\nTaiwan. Asian Federation of Natural Language Pro-\ncessing.\nLinting Xue, Noah Constant, Adam Roberts, Mihir\nKale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua,\nand Colin Raffel. 2021. mT5: A massively multilin-\ngual pre-trained text-to-text transformer. In Proceed-\nings of the 2021 Conference of the North American\nChapter of the Association for Computational Lin-\nguistics: Human Language Technologies , pages 483–\n498, Online. Association for Computational Linguis-\ntics.\nDavid Yarowsky and Grace Ngai. 2001. Inducing mul-\ntilingual POS taggers and NP bracketers via robust\nprojection across aligned corpora. 
In Second Meeting of the North American Chapter of the Association for Computational Linguistics.
David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research.
Thomas Zenkel, Joern Wuebker, and John DeNero. 2020. End-to-end neural word alignment outperforms GIZA++. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1605–1617, Online. Association for Computational Linguistics.
A Training Details and Hyperparameters
We compare two data loading strategies for adaptation: a naïve approach where each example in the dataset is loaded as a single training example, and a packing strategy following the FULL-SENTENCES approach of Liu et al. (2019). We use the hyperparameters described by Ebrahimi et al. (2022) – a learning rate of 2e-5, a batch size of 32, and a warmup ratio of 1% – but because of the different loading strategy we tune the total amount of training time. We experiment with 40 and 80 epochs of training, using the alignment development set to select the final hyperparameters. For both MLM-T and MLM-ST we find that packing sequences yields better results; for +TLM we use the naïve strategy to preserve sentence alignment. We use packing by default for Wikipedia data, due to the length of the extracted documents. For all adaptation methods we find that training for 80 epochs is best, except for +MLM-ST, which we train for 40. We train with 1 Nvidia A100 or 2 V100 GPUs. Due to the computational cost associated with pretraining, we only conduct one model run for each language and method. We pretrain our models using Huggingface (Wolf et al., 2020).
Training Time As mentioned in Section 3.2, the training duration for adaptation depends on the GPU and method used, with times ranging from around 6 minutes for Bribri to 4 hours for Quechua. The statistical approaches both run solely on CPUs; their training time ranges from 6 seconds to 3 minutes for FastAlign, and from 43 seconds to 22 minutes for Giza++. However, GPU availability is not always certain – to roughly compare training times in a more restricted setting, we run our adaptation experiments without access to any GPUs and estimate the total training time using only CPUs at approximately 2 weeks.
Aligner | Whole Set AER | Avg. AER | AER Std. | Min AER | 25% | 50% | 75% | Max AER
FastAlign | 35.00 | 35.09 | 1.94 | 30.30 | 33.70 | 35.00 | 36.23 | 39.90
Awesome | 28.23 | 28.26 | 2.04 | 23.95 | 26.66 | 28.12 | 29.71 | 34.63
Table A.1: Summary statistics for the subsample AER distribution.
[Violin plots of the subsample AER distributions for FastAlign and Awesome; x-axis: AER, roughly 22.5–40.0.]
Figure A.1: Distribution of AER when using FastAlign and AWESoME align to evaluate subsets of size 50 taken from a complete evaluation set of size 248. 
Quartiles are displayed using dashed lines, while inverted colors\nrepresent the AER calculated when evaluating on the complete set.3922\nB Dataset Features\nFeature BZD GN QUY SHP\nNumber of examples (Parallel) 7,508 26,032 121,064 14,592\nNumber of examples (Wiki) - 4721 22610 -\nNumber of tokens - MLM-T 123,992 1,104,645 3,912,582 179,451\nNumber of tokens - TLM 194,798 2,006,996 6,697,771 328,427\nNumber of tokens - Wiki - 1,460,240 2,023,297 -\nNumber of dev examples 50 48 45 46\nNumber of test examples 50 50 50 50\nTable B.1: Features of the data used for our experiments.\nC Supplementary Results\nModel Method BZD GN QUY SHP\nAWESoME BL 65.38 58.51 63.98 62.80\n+mBERT +MLM-T 64.26 43.29 39.10 66.44\n+MLM-ST 65.43 43.46 37.20 65.63\n+TLM 54.25 34.62 30.38 62.10\nAWESoME BL 76.29 71.85 71.53 73.96\n+XLM-R +MLM-T 72.73 57.50 43.30 69.25\n+MLM-ST 73.08 60.28 44.88 70.48\n+TLM 71.88 49.76 36.11 69.23\nFastAlign Union 47.39 39.78 58.37 57.91\nGiza++ Union 51.03 62.07 47.18 64.98\nTable C.1: Development AER for each language and method.3923\nMethod Heuristic BZD GN QUY SHP\nFastAlign grow-diagonal-final 54.56 49.64 60.51 56.11\ngrow-diagonal 55.36 50.41 63.81 56.87\nintersection 57.11 52.89 66.92 61.67\nunion 51.40 43.52 54.06 54.67\nreverse 52.21 51.51 61.27 58.41\nGiza++ grow-diagonal-final 55.51 53.38 75.29 62.72\ngrow-diagonal 59.33 58.41 80.06 69.53\nintersection 63.71 64.95 82.55 77.41\nunion 55.61 49.92 66.01 60.84\nreverse 56.43 62.20 76.05 72.39\nTable C.2: AER results on the test set for various growing heuristics.\nBZD GN QUY SHP\nModel Method P R F P R F P R F P R F\nAWESoME BL 41.8 23.4 30.0 50.6 29.0 36.9 49.4 24.7 33.0 64.0 28.7 39.6\n(mBERT) +MLM-T 42.7 24.4 31.1 68.6 39.7 50.3 69.1 43.5 53.4 66.0 30.6 41.8\n+MLM-ST 42.9 22.3 29.4 67.2 39.5 49.8 73.8 47.1 57.5 67.6 29.8 41.3\n+TLM 62.1 31.2 41.6 76.3 45.4 56.9 79.0 52.5 63.0 79.4 34.0 47.7\nAWESoME BL 48.0 12.5 19.9 48.4 18.6 26.9 49.7 16.5 24.8 64.5 20.2 30.8\n(XLM-R) +MLM-T 38.9 16.4 23.1 63.0 23.8 34.6 70.2 34.6 46.4 57.2 25.0 34.8\n+MLM-ST 40.8 15.5 22.5 65.7 24.3 35.4 74.8 34.4 47.1 56.5 23.7 33.4\n+TLM 50.0 16.8 25.1 76.9 28.1 41.2 83.2 43.1 56.8 77.0 23.9 36.5\nFastAlign Union 46.4 51.0 48.6 55.4 57.6 56.5 44.3 47.7 45.9 48.0 43.0 45.3\nGiza++ Union 39.9 49.8 44.3 48.3 52.0 50.1 32.0 36.3 34.0 37.2 41.4 39.2\nmBERT +MLM-WT - - - 76.3 49.4 60.0 70.8 43.6 54.0 - - -\nXLM-R +MLM-WT - - - 66.4 31.5 42.7 75.0 38.8 51.2 - - -\nTable C.3: Precision, recall, and F-measure for main test set results. All metrics are on a 0–100 scale (larger is\nbetter).\nNum. Examples +TLM +MLM-WT +MLM-ST FastAlign\n50 67.58 66.89 67.53 67.26\n100 67.74 63.87 65.79 66.97\n200 68.42 65.28 65.75 66.91\n400 65.61 66.43 65.31 66.80\n800 63.43 62.84 63.76 66.26\n1600 61.81 59.82 63.34 65.91\n3200 56.93 57.41 62.99 64.75\n6400 50.59 52.24 61.83 64.84\n12800 43.98 52.14 61.18 63.06\n25600 39.87 48.69 56.80 59.72\nTable C.4: AER for each method and subset used in the Subset Analysis.3924\n+MLM-WT +TLM FastAlign\nAvg. Char AER Avg. Char AER Avg. 
Char AER
13.20 64.13 14.31 68.97 14.31 65.99
30.29 63.13 31.20 61.61 31.20 64.89
41.49 63.39 42.51 55.95 42.51 63.89
50.19 62.19 51.45 54.73 51.45 64.19
57.47 61.01 59.23 53.70 59.23 63.89
64.20 59.07 66.44 49.12 66.44 62.15
70.83 59.24 73.30 50.04 73.30 63.38
77.12 57.06 80.09 48.12 80.09 63.56
82.63 57.63 87.02 48.10 87.02 63.15
89.31 55.77 94.31 47.63 94.31 62.96
96.66 55.54 102.30 46.76 102.30 63.78
104.76 54.40 111.48 46.07 111.48 62.99
113.76 53.24 122.29 45.56 122.29 62.43
124.33 51.07 135.93 45.62 135.93 61.87
137.03 51.36 154.86 44.35 154.86 63.31
152.70 50.43 195.18 42.55 195.18 62.23
174.88 50.25 - - - -
212.44 51.10 - - - -
319.76 49.22 - - - -
Table C.5: AER for each method and length group used in the Length Analysis. Avg. Char represents the average number of characters per example in each group, reported separately for +MLM-WT, +TLM, and FastAlign.
[Figure C.1: AER using the development set, per layer, per language, for both mBERT and XLM-R. Eight heatmap panels (mBERT and XLM-R for BZD, GN, QUY, SHP), rows = layers 1–12, columns = baseline, MLM-T, MLM-ST, TLM; the per-cell AER values are omitted here.]
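As an aside, the subsample analysis behind Table A.1 and Figure A.1 can be made concrete with a short sketch. This is illustrative only, not the code used for the paper: the function and variable names are hypothetical, and it assumes the standard sure/possible-link definition of AER (if the gold annotations mark only sure links, the possible set can simply be set equal to the sure set).

```python
# Illustrative sketch of the Appendix A subsample analysis (hypothetical names, not the paper's code).
# pred, sure, poss are parallel lists over sentence pairs: sets of predicted links,
# sure gold links, and possible gold links (sure is a subset of possible).
import random
import statistics

def corpus_aer(pred, sure, poss):
    # Standard AER: 1 - (|A & S| + |A & P|) / (|A| + |S|), summed over the corpus.
    a_s = sum(len(a & s) for a, s in zip(pred, sure))
    a_p = sum(len(a & p) for a, p in zip(pred, poss))
    total_a = sum(len(a) for a in pred)
    total_s = sum(len(s) for s in sure)
    return 1.0 - (a_s + a_p) / (total_a + total_s)

def subsample_aers(pred, sure, poss, subset_size=50, n_draws=1000, seed=0):
    # Repeatedly draw subsets of sentence pairs (e.g. 50 out of 248) and score each subset.
    rng = random.Random(seed)
    indices = list(range(len(pred)))
    scores = []
    for _ in range(n_draws):
        chosen = rng.sample(indices, subset_size)
        scores.append(corpus_aer([pred[i] for i in chosen],
                                 [sure[i] for i in chosen],
                                 [poss[i] for i in chosen]))
    return scores

# Summary statistics in the spirit of Table A.1 (mean, std, min, quartiles, max):
# scores = subsample_aers(pred, sure, poss)
# print(statistics.mean(scores), statistics.stdev(scores), min(scores), max(scores))
```

The violin plots in Figure A.1 are then simply the distributions of these per-subset scores for each aligner.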
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "V0RwzhM-ns", "year": null, "venue": "EACL2014", "pdf_link": "https://aclanthology.org/E14-1014.pdf", "forum_link": "https://openreview.net/forum?id=V0RwzhM-ns", "arxiv_id": null, "doi": null }
{ "title": "Generalizing a Strongly Lexicalized Parser using Unlabeled Data.", "authors": [ "Tejaswini Deoskar", "Christos Christodoulopoulos", "Alexandra Birch", "Mark Steedman" ], "abstract": "Tejaswini Deoskar, Christos Christodoulopoulos, Alexandra Birch, Mark Steedman. Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. 2014.", "keywords": [], "raw_extracted_content": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics , pages 126–134,\nGothenburg, Sweden, April 26-30 2014. c\r2014 Association for Computational Linguistics\nGeneralizing a Strongly Lexicalized Parser using Unlabeled Data\nTejaswini Deoskar1, Christos Christodoulopoulos2, Alexandra Birch1, Mark Steedman1\n1School of Informatics, University of Edinburgh, Edinburgh, EH8 9AB\n2University of Illinois, Urbana-Champaign, Urbana, IL 61801\n{tdeoskar,abmayne,steedman }@inf.ed.ac.uk, [email protected]\nAbstract\nStatistical parsers trained on labeled data\nsuffer from sparsity, both grammatical and\nlexical. For parsers based on strongly\nlexicalized grammar formalisms (such as\nCCG, which has complex lexical cate-\ngories but simple combinatory rules), the\nproblem of sparsity can be isolated to\nthe lexicon. In this paper, we show that\nsemi-supervised Viterbi-EM can be used\nto extend the lexicon of a generative CCG\nparser. By learning complex lexical entries\nfor low-frequency and unseen words from\nunlabeled data, we obtain improvements\nover our supervised model for both in-\ndomain (WSJ) and out-of-domain (ques-\ntions and Wikipedia) data. Our learnt\nlexicons when used with a discriminative\nparser such as C&C also significantly im-\nprove its performance on unseen words.\n1 Introduction\nAn important open problem in natural language\nparsing is to generalize supervised parsers, which\nare trained on hand-labeled data, using unlabeled\ndata. The problem arises because further hand-\nlabeled data in the amounts necessary to signif-\nicantly improve supervised parsers are very un-\nlikely to be made available. Generalization is also\nnecessary in order to achieve good performance on\nparsing in textual domains other than the domain\nof the available labeled data. For example, parsers\ntrained on Wall Street Journal (WSJ) data suffer a\nfall in accuracy on other domains (Gildea, 2001).\nIn this paper, we use self-training to generalize\nthe lexicon of a Combinatory Categorial Gram-\nmar ( CCG) (Steedman, 2000) parser. CCG is a\nstrongly lexicalized formalism, in which every\nword is associated with a syntactic category (sim-\nilar to an elementary syntactic structure) indicat-ing its subcategorization potential. Lexical en-\ntries are fine-grained and expressive, and contain\na large amount of language-specific grammatical\ninformation. For parsers based on strongly lexical-\nized formalisms, the problem of grammar general-\nization can be cast largely as a problem of lexical\nextension.\nThe present paper focuses on learning lexi-\ncal categories for words that are unseen orlow-\nfrequency in labeled data, from unlabeled data.\nSince lexical categories in a strongly lexicalized\nformalism are complex, fine-grained (and far more\nnumerous than simple part-of-speech tags), they\nare relatively sparse in labeled data. 
Despite per-\nforming at state-of-the-art levels, a major source\nof error made by CCG parsers is related to unseen\nand low-frequency words (Hockenmaier, 2003;\nClark and Curran, 2007; Thomforde and Steed-\nman, 2011). The unseen words for which we learn\ncategories are surprisingly commonplace words of\nEnglish; examples are conquered, apprehended,\nsubdivided, scoring, denotes, hunted, obsessed,\nresiding, migrated (Wikipedia). Correctly learn-\ning to parse the predicate-argument structures as-\nsociated with such words (expressed as lexical cat-\negories in the case of CCG), is important for open-\ndomain parsing, not only for CCG but indeed for\nany parser.\nWe show that a simple self-training method,\nViterbi-EM (Neal and Hinton, 1998) when used\nto enhance the lexicon of a strongly-lexicalized\nparser can be an effective strategy for self-training\nand domain-adaptation. Our learnt lexicons im-\nprove on the lexical category accuracy of two su-\npervised CCG parsers (Hockenmaier (2003) and\nthe Clark and Curran (2007) parser, C&C) on\nwithin-domain (WSJ) and out-of-domain test sets\n(a question corpus and a Wikipedia corpus).\nIn most prior work, when EM was initialized\nbased on labeled data, its performance did not im-\nprove over the supervised model (Merialdo, 1994;126\nCharniak, 1993). We found that in order for per-\nformance to improve, unlabeled data should be\nused only for parameters which are not well cov-\nered by the labeled data, while those that are well\ncovered should remain fixed.\nIn an additional contribution, we compare two\nstrategies for treating unseen words (a smoothing-\nbased, and a part-of-speech back-off method) and\nfind that a smoothing-based strategy for treat-\ning unseen words is more effective for semi-\nsupervised learning than part-of-speech back-off.\n2 Combinatory Categorial Grammar\nCombinatory Categorial Grammar ( CCG) (Steed-\nman, 2000) is a strongly lexicalized grammar\nformalism, in which the lexicon contains all\nlanguage-specific grammatical information. The\nlexical entry of a word consists of a syntactic cat-\negory which expresses the subcategorization po-\ntential of the word, and a semantic interpretation\nwhich defines the compositional semantics (Lewis\nand Steedman, 2013). A small number of combi-\nnatory rules are used to combine constituents, and\nit is straightforward to map syntactic categories to\na logical form for semantic interpretation.\nFor statistical CCG parsers, the lexicon is learnt\nfrom labeled data, and is subject to sparsity due\nto the fine-grained nature of the categories. Fig-\nure 1 illustrates this with a simple CCG deriva-\ntion. In this sentence, bake is used as a ditransi-\ntive verb and is assigned the ditransitive category\nS\\NP/NP/NP. This category defines the verb syn-\ntactically as mapping three NParguments to a sen-\ntence S, and semantically as a ternary relation be-\ntween its three arguments, thus providing a com-\nplete analysis of the sentence.\n[NNP John ] [ V BD baked ] [ NNP Mary] [ DTa ] [NN cake]\nNP S \\NP/NP/NP NP NP /N N\n> >\nS\\NP/NP NP\n>\nS\\NP\n<S\n‘John baked Mary a cake’\nFigure 1: Example CCG derivation\nFor a CCG parser to obtain the correct deriva-\ntion above, its lexicon must include the ditransitive\ncategory S\\NP/NP/NPfor the verb bake . It is not\nsufficient to have simply seen the verb in another\ncontext (say a transitive context like “John baked a\ncake”, which is a more common context). 
This is\nin contrast to standard treebank parsers where theverbal category is simply VBD (past tense verb)\nand a ditransitive analysis of the sentence is not\nruled out as a result of the lexical category.\nIn addition to sparsity related to open-class\nwords like verbs as in the above example, there are\nalso missing categories in labeled data for closed-\nclass words like question words, due to the small\nnumber of questions in the Penn Treebank. In gen-\neral, lexical sparsity for a statistical CCG parser\ncan be broken down into three types: (i)where a\nword is unseen in training data but is present in\ntest data, (ii)where a word is seen in the train-\ning data but not with the category type required\nin the test data (but the category type is seen with\nother words) and (iii)where a word bears a cate-\ngory type required in the test data but the category\ntype is completely unseen in the training data.\nIn this paper, we deal with the first two kinds.\nThe third kind is more prevalent when the size\nof labeled data is comparatively small (although,\neven in the case of the English WSJ CCG tree-\nbank, there are several attested category types that\nare entirely missing from the lexicon, Clark et al.,\n2004). We make the assumption here that all cat-\negory types in the language have been seen in the\nlabeled data. In principle new category types may\nbe introduced independently without affecting our\nsemi-supervised process (for instance, manually,\nor via a method that predicts new category types\nfrom those seen in labeled data).\n3 Related Work\nPrevious attempts at harnessing unlabeled data to\nimprove supervised CCG models using methods\nlike self-training or co-training have been unsat-\nisfactory (Steedman et al., 2003, 43-44). Steed-\nman et al. (2003) experimented with self-training\na generative CCG parser, and co-training a genera-\ntive parser with an HMM-based supertagger. Co-\ntraining (but not self-training) improved the results\nof the parser when the seed labeled data was small.\nWhen the seed data was large (the full treebank),\ni.e., the supervised baseline was high, co-training\nand self-training both failed to improve the parser.\nMore recently, Honnibal et al. (2009) improved\nthe performance of the C&C parser on a domain-\nadaptation task (adaptation to Wikipedia text) us-\ning self-training. Instead of self-training the pars-\ning model, they re-train the supertagging model,\nwhich in turn affects parsing accuracy. They\nobtained an improvement of 1.09% (dependency127\nscore) on supertagger accuracy on Wikipedia (al-\nthough performance on WSJ text dropped) but did\nnot attempt to re-train the parsing model.\nAn orthogonal approach for extending a CCG\nlexicon using unlabeled data is that of Thomforde\nand Steedman (2011), in which a CCG category for\nan unknown word is derived from partial parses\nof sentences with just that one word unknown.\nThe method is capable of inducing unseen cate-\ngories types (the third kind of sparsity mentioned\nin§2.1), but due to algorithmic and efficiency is-\nsues, it did not achieve the broad-coverage needed\nfor grammar generalisation of a high-end parser. It\nis more relevant for low-resource languages which\ndo not have substantial labeled data and category\ntype discovery is important.\nSome notable positive results for non- CCG\nparsers are McClosky et al. (2006) who use a\nparser-reranker combination. Koo et al. (2008)\nand Suzuki et al. 
(2009) use unsupervised word-\nclusters as features in a dependency parser to get\nlexical dependencies. This has some notional sim-\nilarity to categories, since, like categories, clus-\nters are less fine-grained than words but more fine-\ngrained than POS-tags.\n4 Supervised Parser\nThe CCG parser used in this paper is a re-\nimplementation of the generative parser of Hock-\nenmaier and Steedman (2002) and Hockenmaier\n(2003)1, except for the treatment of unseen and\nlow-frequency words.\nWe use a model (the LexCat model in Hock-\nenmaier (2003)) that conditions the generation of\nconstituents in the parse tree on the lexical cate-\ngory of the head word of the constituent, but not on\nthe head word itself. While fully-lexicalized mod-\nels that condition on words (and thus model word-\nto-word dependencies) are more accurate than un-\nlexicalized ones like the LexCat model, we use\nan unlexicalized model2for two reasons: first,\n1These generative models are similar to the Collins’ head-\nbased models (Collins, 1997), where for every node, a head is\ngenerated first, and then a sister conditioned on the head. De-\ntails of the models are in Hockenmaier and Steedman (2002)\nand Hockenmaier 2003:pg 166.\n2A terminological clarification: unlexicalized here refers\nto the model, in the sense that head-word information is\nnot used for rule-expansion. The formalism itself ( CCG)\nis referred to as strongly-lexicalized , as used in the title of\nthe paper. Formalisms like CCG and LTAG are consid-\nered strongly-lexicalized since linguistic knowledge (func-\ntions mapping words to syntactic structures/semantic inter-\npretations) is included in the lexicon.our lexicon smoothing procedure (described in the\nnext section) introduces new words and new cat-\negories for words into the lexicon. Lexical cate-\ngories are added to the lexicon for seen and un-\nseen words, but no new category types are intro-\nduced. Since the LexCat model conditions rule ex-\npansions on lexical categories, but not on words, it\nis still able to produce parses for sentences with\nnew words. In contrast, a fully lexicalized model\nwould need all components of the grammar to be\nsmoothed, a task that is far from trivial due to the\nresulting explosion in grammar size (and one that\nwe leave for future work).\nSecond, although lexicalized models perform\nbetter on in-domain WSJ data (the LexCat model\nhas an accuracy of 87.9% on Section 23, as op-\nposed to 91.03% for the head-lexicalized model\nin Hockenmaier (2003) and 91.9% for the C&C\nparser), our parser is more accurate on a question\ncorpus, with a lexical category accuracy of 82.3%,\nas opposed to 71.6% and 78.6% for the C&C and\nHockenmaier (2003) respectively.\n4.1 Handling rare and unseen words\nExisting CCG parsers (Hockenmaier (2003) and\nClark and Curran (2007)) back-off rare and unseen\nwords to their POS tag. The POS-backoff strategy\nis essentially a pipeline approach, where words\nare first tagged with coarse tags ( POS tags) and\nfiner tags ( CCG categories) are later assigned, by\nthe parser (Hockenmaier, 2003) or the supertag-\nger (Clark and Curran, 2007). As POS-taggers\nare much more accurate than parsers, this strat-\negy has given good performance in general for\nCCG parsers, but it has the disadvantage that POS-\ntagging errors are propagated. 
The parser can never recover from a tagging error, a problem that is serious for words in the Zipfian tail, where these words might also be unseen for the POS tagger and hence more likely to be tagged incorrectly. This issue is in fact more generally relevant than for CCG parsers alone: the dependence of parsers on POS-taggers was cited as one of the problems in domain-adaptation of parsers in the NAACL-2012 shared task on parsing the web (Petrov and McDonald, 2012). Lease and Charniak (2005) obtained an improvement in the accuracy of the Charniak (2000) parser on a biomedical domain simply by training a new POS-tagger model.
In the following section, we describe an alternative smoothing-based approach to handling unseen and rare words. This method is less sensitive to POS tagging errors, as described below. In this approach, in a pre-processing step prior to parsing, categories are introduced into the lexicon for unseen and rare words from the data to be parsed. Some probability mass is taken from seen words/categories and given to unseen word and category pairs. Thus, at parse time, no word is unseen for the parser.
4.1.1 Smoothing
In our approach, we introduce lexical entries for words from the unlabeled corpus that are unseen in the labeled data, and also add categories to existing entries for rarely seen words. The most general case of this would be to assign all known categories to a word. However, doing this reduces the lexical category accuracy (footnote 3). A second option, chosen here, is to limit the number of categories assigned to the word by using some information about the word (for instance, its part-of-speech). Based on the part-of-speech of an unseen word in the unlabeled or test corpus, we add an entry to the lexicon of the word with the top n categories that have been seen with that part-of-speech in the labeled data. Each new entry of (w, cat), where w is a word and cat is a CCG category, is associated with a count c(w, cat), obtained as described below. Once all (w, cat) entries are added to the lexicon along with their counts, a probability model P(w | cat) is calculated over the entire lexicon.
Our smoothing method is based on a method used in Deoskar (2008) for smoothing a PCFG lexicon. Eq. 1 and 2 apply it to CCG entries for unseen and rare words. In the first step, an out-of-the-box POS tagger is used to tag the unlabeled or test corpus (we use the C&C tagger). Counts of words and POS-tags c_corpus(w, T) are obtained from the tagged corpus. For the CCG lexicon, we ultimately need a count for a word w and a CCG category cat. To get this count, we split the count of a word and POS-tag amongst all categories seen with that tag in the supervised data, in the same ratio as the ratio of the categories in the supervised data. In Eq. 1, this ratio is c_tb(cat_T)/c_tb(T), where c_tb(cat_T) is the treebank count of a category cat_T seen with a POS-tag T, and c_tb(T) is the marginal count of the tag T in the treebank. This ratio makes a more frequent category type more likely than a rarer one for an unseen word. For example, for unseen verbs, it would make the transitive category more likely than a ditransitive one (since transitives are more frequent than ditransitives).
Footnote 3: For instance, we find that assigning all categories to unseen verbs gives a lexical category accuracy of 52.25%, as opposed to an accuracy of 65.4% by using the top 15 categories, which gave us the best results, as reported later in Table 3.
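To make the count-splitting step just described concrete (it is formalized as Eq. 1 below), here is a minimal sketch. It is not the implementation used in this work; the dictionary names are hypothetical, and the top-n restriction follows the description above (n = 15 gave the best results for verbs, per footnote 3).

```python
# Illustrative sketch of the lexicon-smoothing count split (hypothetical data structures).
# c_corpus_word_tag[(w, T)]: count of word w with POS-tag T in the POS-tagged unlabeled/test corpus.
# c_tb_cat_tag[(cat, T)]:    treebank count of CCG category cat observed with POS-tag T.
# c_tb_tag[T]:               marginal treebank count of POS-tag T.
from collections import defaultdict

def split_counts(c_corpus_word_tag, c_tb_cat_tag, c_tb_tag, top_n=15):
    c_corpus_word_cat = defaultdict(float)
    for (word, tag), count in c_corpus_word_tag.items():
        if tag not in c_tb_tag:
            continue  # skip tags unseen in the treebank
        # Top-n categories seen with this POS-tag in the treebank, most frequent first.
        cats = sorted(((cat, n) for (cat, t), n in c_tb_cat_tag.items() if t == tag),
                      key=lambda item: -item[1])[:top_n]
        for cat, cat_count in cats:
            # Eq. 1: c_corpus(w, cat) = (c_tb(cat_T) / c_tb(T)) * c_corpus(w, T)
            c_corpus_word_cat[(word, cat)] += count * cat_count / c_tb_tag[tag]
    return c_corpus_word_cat
```

The interpolation with existing treebank entries (Eq. 2 below) then combines these counts with c_tb(w, cat) for words that were already seen.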
There is an underlying assumption here that relative frequencies of categories and POS-tags in the labeled data are maintained in the unlabeled data, which in fact can be thought of as a prior while estimating from unlabeled data (Deoskar et al., 2012).

c_corpus(w, cat) = ( c_tb(cat_T) / c_tb(T) ) · c_corpus(w, T)    (1)

Additionally, for seen but low-frequency words, we make use of the existing entry in the lexicon. Thus in a second step, we interpolate the count c_corpus(w, cat) of a word and category with the supervised count of the same, c_tb(w, cat) (if it exists), to give the final smoothed count of a word and category, c_smooth(w, cat) (Eq. 2).

c_smooth(w, cat) = λ · c_tb(w, cat) + (1 − λ) · c_corpus(w, cat)    (2)

When this smoothed lexicon is used with a parser, POS-backoff is not necessary since all needed words are now in the lexicon. Lexical entries for words in the parse are determined not by the POS-tag from a tagger, but directly by the parsing model, thus making the parse less susceptible to tagging errors.
5 Semi-supervised Learning
We use Viterbi-EM (Neal and Hinton, 1998) as the self-training method. Viterbi-EM is an alternative to EM where, instead of using the model parameters to find a true posterior from unlabeled data, a posterior based on the single maximum-probability (Viterbi) parse is used. Viterbi-EM has been used in various NLP tasks before and often performs better than classic EM (Cohen and Smith, 2010; Goldwater and Johnson, 2005; Spitkovsky et al., 2010). In practice, a given parsing model is used to obtain Viterbi parses of unlabeled sentences. The Viterbi parses are then treated as training data for a new model. This process is iterated until convergence.
Since we are interested in learning the lexicon, we only consider lexical counts from Viterbi parses of the unlabeled sentences. Other parameters of the model are held at their supervised values. We conducted some experiments where we
In practise, our experiments\nturned out to be fairly insensitive to the value of\nthis parameter, on evaluations over rare or un-\nseen verbs. However, overall accuracy would drop\nslightly if this cut-off was increased. We experi-\nmented with cut-offs of 5, 10 and 15, and found\nthat the most conservative value (of 5) gave the\nbest results on in-domain WSJ experiments, and a\nhigher value of 10 gave the best results for out-of-\ndomain experiments.\nWe also conducted some limited experiments\nwith classical semi-supervised EM, with similar\nsettings of weighting labeled counts, and using un-\nlabeled counts only for rare/unseen events. Since\nit is a much more computationally expensive pro-\ncedure, and most of the results did not come close\nto the results of Viterbi-EM, we did not pursue it.\n4The labeled count is weighted in order to scale up the la-\nbeled data which is usually smaller in size than the unlabeled\ndata, to avoid swamping the labeled counts with much larger\nunlabeled counts.5.1 Data\nLabeled : Sec. 02-21 of CCGbank (Hockenmaier\nand Steedman, 2007). In one experiment, we used\nSec. 02-21 minus 1575 sentences that were held\nout to simulate test data containing unseen verbs—\nsee§6.2 for details.\nUnlabeled : For in-domain experiments, we used\nsentences from the unlabeled WSJ portion of the\nACL/DCI corpus (LDC93T1, 1993), and the WSJ\nportion of the ANC corpus (Reppen et al., 2005),\nlimited to sentences containing 20 words or less,\ncreating datasets of approximately 10, 20 and 40\nmillion words each. Additionally, we have a\ndataset of 140 million words – 40M WSJ words\nplus an additional 100M from the New York\nTimes.\nFor domain-adaptation experiments, we use\ntwo different datasets. The first one consists\nof question-sentences – 1328 unlabeled ques-\ntions, obtained by removing the manual annota-\ntion of the question corpus from Rimell and Clark\n(2008). The second out-of-domain dataset con-\nsists of Wikipedia data, approximately 40 million\nwords in size, with sentence length <20 words.\n5.2 Experimental setup\nWe ran our semi-supervised method using our\nparser with a smoothed lexicon (from §4.1.1) as\nthe initial model, on unlabeled data of different\nsizes/domains. For comparison, we also ran ex-\nperiments using a POS-backed off parser (the orig-\ninal Hockenmaier and Steedman (2002) LexCat\nmodel) as the initial model. Viterbi-EM converged\nat 4-5 iterations. We then parsed various test sets\nusing the semi-supervised lexicons thus obtained.\nIn all experiments, the labeled data was scaled to\nmatch the size of the unlabeled data. Thus, the\nscaling factor of labeled data was 10 for unlabeled\ndata of 10M words, 20 for 20M words, etc.\n5.3 Evaluation\nWe focused our evaluations on unseen and low-\nfrequency verbs, since verbs are the most impor-\ntant open-class lexical entries and the most am-\nbiguous to learn from unlabeled data (approx. 600\ncategories, versus 150 for nouns). We report lexi-\ncal category accuracy in parses produced using our\nsemi-supervised lexicon, since it is a direct mea-\nsure of the effect of the lexicon.5We discuss four\n5Dependency recovery accuracy is also used to evaluate\nperformance of CCG parsers and is correlated with lexical130\nAll words All Verbs Unseen\nVerbs\nSUP 87.76 78.10 52.54\nSEMISUP 88.14 78.46 **57.28\nSUPbkoff 87.91 76.08 54.14\nSEMISUPbkoff 87.79 75.68 54.60\nTable 1: Lexical category accuracy on TEST -4SEC\n**: p<0.004, McNemar test\nexperiments below. The first two are on in-domain\n(WSJ) data. 
The last two are on out-of-domain\ndata – a question corpus and a Wikipedia corpus.\n6 Results\n6.1 In-domain: WSJ unseen verbs\nOur first testset consists of a concatenation of 4\nsections of CCGbank (01, 22, 24, 23), a total of\n7417 sentences, to form a testset called TEST -\n4SEC. We use all these sections in order to get\na reasonable token count of unseen verbs, which\nwas not possible with Sec. 23 alone.\nTable 1 shows the performance of the smoothed\nsupervised model (S UP) and the semi-supervised\nmodel (S EMISUP) on this testset. There is a sig-\nnificant improvement in performance on unseen\nverbs, showing that the semi-supervised model\nlearns good entries for unseen verbs over and\nabove the smoothed entry in the supervised lexi-\ncon. This results in an improvement in the over-\nall lexical category accuracy of the parser on all\nwords, and all verbs.\nWe also performed semi-supervised training us-\ning a supervised model that treated unseen words\nwith a POS-backoff strategy S UPbkoff . We used\nthe same settings of cut-off and the same scal-\ning of labeled counts as before. The supervised\nbacked-off model performs somewhat better than\nthe supervised smoothed model. However, it did\nnot improve as much as the smoothed one from\nunlabeled data. Additionally, the overall accuracy\nof S EMISUPbkoff fell below the supervised level,\nin contrast to the smoothed model, where overall\nnumbers improved. This could indicate that the\naccuracy of a POS tagger on unseen words, es-\npecially verbs, may be an important bottleneck in\nsemi-supervised learning.\nLow-frequency verbs We also obtain improve-\nments on verbs that are seen but with a low fre-\nquency in the labeled data (Table 2). We divided\ncategory accuracy, but a dependency evaluation is more rele-\nvant when comparing performance with parsers in other for-\nmalisms and does not have much utility here.Freq. Bin 1-5 6-10 11-20\nSUP 64.13 75.19 77.6\nSEMISUP 66.72 76.21 79.8\nTable 2: Seen but rare verbs, TEST -4SEC\nverbs occurring in TEST -4SEC into different bins\naccording to their occurrence frequency in the la-\nbeled data (bins of frequency 1-5, 6-10 and 11-20).\nSemi-supervised training improves over the super-\nvised baseline for all bins of low-frequency verbs.\nNote that our cut-off frequency for using unlabeled\ndata is 5, but there are improvements in the 6-10\nand 11-20 bins as well, suggesting that learning\nbetter categories for rare words (below the cut-off)\nimpacts the accuracy of words above the cut-off as\nwell, by affecting the rest of the parse positively.\n6.2 In-domain : heldout unseen verbs\nThe previous section showed significant improve-\nment in learning categories for verbs that are un-\nseen in the training sections of CCGbank. How-\never, these verbs are in the Zipfian tail, and for this\nreason have fairly low occurrence frequencies in\nthe unlabeled corpus. In order to estimate whether\nour method will give further improvements in the\nlexical categories for these verbs, we would need\nunlabeled data of a much larger size. We there-\nfore designed an experimental scenario in which\nwe would be able to get high counts of unseen\nverbs from a similar size of unlabeled data. We\nfirst made a list of Nverbs from the treebank and\nthen extracted all sentences containing them (ei-\nther as verbs or otherwise) from CCGbank training\nsections. 
These sentences form a testset of 1575\nsentences, called TEST -HOV (forheld out verbs ).\nThe verbs in the list were chosen based on occur-\nrence frequency fin the treebank, choosing all\nverbs that occurred with a frequency of f= 11 .\nThis number gave us a large enough set and a\ngood type/token ratio to reliably evaluate and ana-\nlyze our semi-supervised models—112 verb types,\nwith 1115 token occurrences6. Since these verbs\nare actually mid-frequency verbs in the supervised\ndata, they have a correspondingly large occurrence\nfrequency in the unlabeled data, occurring much\nmore often than true unseen verbs. Thus, the un-\nlabeled data size is effectively magnified—as far\nas these verbs are concerned, the unlabeled data is\napproximately 11 times larger than it actually is.\nTable 3 shows lexical category accuracy on\n6Selecting a different but close value of fsuch as f= 10\norf= 12 would have also served this purpose.131\nAll Words All Verbs Unseen\nVerbs\nSUP 87.26 74.55 65.49\nSEMISUP 87.78 75.30 *** 70.43\nSUPbkoff 87.58 73.06 67.25\nSEMISUPbkoff 87.52 72.89 68.05\nTable 3: Lexical category accuracy in TEST -HOV.\n***p<0.0001, McNemar test\n55 60 65 70\n0102040140\nSize of Unlabelled Data (in millions of words)Lexical Category Accuracy for Unseen VerbsTest:HOV\nTest:4Sec\nFigure 2: Increasing accuracy on unseen verbs\nwith increasing amounts of unlabeled data.\nthis testset. The baseline accuracy of the parser\non these verbs is much higher than that on the\ntruly unseen verbs.7The semi-supervised model\n(SEMISUP) improves over the supervised model\nSUPvery significantly on these unseen verbs. We\nalso see an overall improvement on all verbs (seen\nand unseen) in the test data, and in the over-\nall lexical category accuracy as well. Again, the\nbacked-off model does not improve as much as\nthe smoothed model, and moreover, overall per-\nformance falls below the supervised level.\nFigure 2 shows the effect of different sizes of\nunlabeled data on accuracy of unseen verbs for\nthe two testsets TEST -HOV and TEST -4SEC . Im-\nprovements are monotonic with increasing unla-\nbeled data sizes, up to 40M words. The additional\n100M words of NYT also improve the models but\nto a lesser degree, possibly due to the difference in\ndomain. The graphs indicate that the method will\nlead to more improvements as more unlabeled data\n(especially WSJ data) is added.\n7This could be because verbs in the Zipfian tail have more\nidiosyncratic subcategorization patterns than mid-frequency\nverbs, and thus are harder for a parser. Another reason is that\nthey may have been seen as nouns or other parts of speech,\nleading to greater ambiguity in their case.QUESTIONS WIKIPEDIA\nAll wh All Unseen\nwords words words words\nSUP 82.36 61.77 84.31 79.5\nSEMISUP *83.21 63.22 *85.6 80.25\nTable 4: Out-of-domain: Questions and\nWikipedia, *p<0.05, McNemar test\n6.2.1 Out-of-Domain\nQuestions The question corpus is not strictly a\ndifferent domain (since questions form a differ-\nent kind of construction rather than a different do-\nmain), but it is an interesting case of adaptation\nfor several reasons: WSJ parsers perform poorly\non questions due to the small number of questions\nin the Penn Treebank/ CCGbank. 
Secondly, unsu-\npervised adaptation to questions has not been at-\ntempted before for CCG (Rimell and Clark (2008)\ndid supervised adaptation of their supertagger).\nThe supervised model S UPalready performs\nat state-of-the-art on this corpus, on both overall\nscores and on wh(question)-words alone. C&C\nand Hockenmaier (2003) get 71.6 and 78.6% over-\nall accuracies respectively, and only 33.6 and 50.7\non wh-words alone. To our original unlabeled\nWSJ data (40M words), we add 1328 unlabeled\nquestion-sentences from Rimell and Clark, 2008,\nscaled by ten, so that each is counted ten times. We\nthen evaluated on a testset containing questions\n(500 question sentences, from Rimell and Clark\n(2008)). The overall lexical category accuracy on\nthis testset improves significantly as a result of the\nsemi-supervised learning (Table 4). The accuracy\non the question words alone (who, what, where,\nwhen, which, how, whose, whom) also improves\nnumerically, but by a small amount (the number\nof tokens that improve are only 7). This could be\nan effect of the small size of the testset (500 sen-\ntences, i.e. 500 wh-words).\nWikipedia We obtain statistically significant im-\nprovements in overall scores over a testset consist-\ning of Wikipedia sentences hand-annotated with\nCCG categories (from Honnibal et al. (2009)) (Ta-\nble 4). We also obtained improvements in lexical\ncategory accuracy on unseen words, and on un-\nseen verbs alone (not shown), but could not prove\nsignificance. This testset contains only 200 sen-\ntences, and counts for unseen words are too small\nfor significance tests, although there are numeric\nimprovements. However, the overall improvement\nis statistically significantly, showing that adapting\nthe lexicon alone is effective for a new domain.132\n6.3 Using semi-supervised lexicons with the\nC&C parser\nTo show that the learnt lexical entries may be use-\nful to parsers other than our own, we incorpo-\nrate our semi-supervised lexical entries into the\nC&C parser to see if it benefits performance. We\ndo this in a naive manner, as a proof of concept,\nmaking no attempt to optimize the performance\nof the C&C parser (since we do not have access\nto its internal workings). We take all entries of\nunseen words from our best semi-supervised lex-\nicon (word, category and count) and add them to\nthe dictionary of the C&C supertagger (tagdict).\nThe C&C is a discriminative, lexicalized model\nthat is more accurate than an unlexicalized model.\nEven so, the lexical entries that we learn improve\nthe C&C parsers performance over and above its\nback-off strategy for unseen words. Table 5 shows\nthe results on WSJ data TEST -4SEC and TEST -\nHOV. There were numeric improvements on the\nTEST -4SEC test set as shown in Table 58. We ob-\ntain significance on the TEST -HOV testset which\nhas a larger number of tokens of unseen verbs and\nentries that were learnt from effectively larger un-\nlabeled data. We tested two cases: when these\nverbs were seen for the POS tagger used to tag\nthe test data, and when they were unseen for the\nPOS tagger, and found statistically significant im-\nprovement for the case when the verbs were un-\nseen for the POS tagger9, indicating sensitivity to\nPOS-tagger errors.\n6.4 Entropy and KL-divergence\nWe also evaluated the quality of the semi-\nsupervised lexical entries by measuring the over-\nall entropy and the average Kullback-Leibler (KL)\ndivergence of the learnt entries of unseen verbs\nfrom entries in the gold testset. 
The gold entry\nfor each verb from the TEST -HOV testset was ob-\ntained from the heldout gold treebank trees. Su-\npervised (smoothed) and semi-supervised entries\nwere obtained from the respective lexicons. These\nmetrics use the conditional probability of a cate-\ngory given a word, which is not a factor in the\ngenerative model (which considers probabilities of\n8There were also improvements on the question and\nWikipedia testsets (not shown) (8 and 6 tokens each) but the\nsize of these testsets is too small for significance.\n9Note that for this testset TEST -HOV, the numbers are the\nsupertagger’s accuracy, and not the parser’s. We were only\nable to retrain the supertagger on training data with TEST -\nHOV sentences heldout, but could not retrain the parser, de-\nspite consultation with the authors.TEST -4SEC TEST -HOV\nPOS-seen POS-unseen\n(590) (1134) (1134)\nC&C 62.03 (366) 76.71 (870) 72.39 (821)\nC&C\n(enhanced) 63.89 (377) 77.34 (877) *73.98 (839)\nTable 5: TEST -4SEC: Lexical category accuracy of\nC&C parser on unseen verbs. Numbers in brackets\nare the number of tokens.*p <0.05, McNemar test\nwords given categories), but provide a good mea-\nsure of how close the learnt lexicons are to the gold\nlexicon. We find that the average KL divergence\nreduces from 2.17 for the baseline supervised en-\ntries to 1.40 for the semi-supervised entries. The\noverall entropy for unseen verb distributions also\ngoes down from 2.23 (supervised) to 1.37 (semi-\nsupervised), showing that semi-supervised distri-\nbutions are more peaked, and bringing them closer\nto the true entropy of the gold distribution ( 0.93).\n7 Conclusions\nWe have shown that it is possible to learn CCG lex-\nical entries for unseen and low-frequency words\nfrom unlabeled data. When restricted to learning\nonly lexical entries, Viterbi-EM improved the per-\nformance of the supervised parser (both in-domain\nand out-of-domain). Updating all parameters of\nthe parsing model resulted in a decrease in the ac-\ncuracy of the parser. We showed that the entries\nwe learnt with an unlexicalized model were accu-\nrate enough to also be useful to a highly-accurate\nlexicalized parser. It is likely that a lexicalized\nparser will provide even better lexical entries. The\nlexical entries continued to improve with increas-\ning size of unlabeled data. For the out-of-domain\ntestsets, we obtained statistically significant over-\nall improvements, but we were hampered by the\nsmall sizes of the testsets in evaluating unseen/wh\nwords.\nIn future work, we would like to add unseen but\npredicted category types to the initial lexicon using\nan independent method, and then apply the same\nsemi-supervised learning to words of these types.\nAcknowledgements\nWe thank Mike Lewis, Shay Cohen and the three\nanonymous EACL reviewers for helpful com-\nments. This work was supported by the ERC Ad-\nvanced Fellowship 249520 GRAMPLUS.133\nReferences\nMichiel Bacchiani, Michael Riley, Brian Roark, and Richard\nSproat. 2006. MAP adaptation of stochastic grammars.\nComputer Speech and Language , 20(1):41–68.\nEugene Charniak. 1993. Statistical Language Learning . MIT\nPress.\nStephen Clark and James R. Curran. 2007. Wide-Coverage\nEfficient Statistical Parsing with CCG and Log-Linear\nModels. Computational Linguistics , 33(4):493–552.\nStephen Clark, Mark Steedman, and James Curran. 2004.\nObject-extraction and question-parsing using CCG. In\nProceedings of EMNLP 2004 .\nShay Cohen and Noah Smith. 2010. 
Viterbi Training for\nPCFGs: Hardness Results and Competitiveness of Uni-\nform Initialization. In Proceedings of ACL 2010 .\nMichael Collins. 1997. Three generative, lexicalised models\nfor statistical parsing. In Proceedings of the 35th ACL .\nTejaswini Deoskar. 2008. Re-estimation of Lexical Param-\neters for Treebank PCFGs. In Proceedings of COLING\n2008 .\nTejaswini Deoskar, Markos Mylonakis, and Khalil Sima’an.\n2012. Learning Structural Dependencies of Words in the\nZipfian Tail. Journal of Logic and Computation .\nDaniel Gildea. 2001. Corpus Variation and Parser Perfor-\nmance. In Proceedings of EMNLP 2001 .\nSharon Goldwater and Mark Johnson. 2005. Bias in learning\nsyllable structure. In Proceedings of CoNLL05 .\nJulia Hockenmaier. 2003. Data and Models for Statistical\nParsing with Combinatory Categorial Grammar . Ph.D.\nthesis, School of Informatics, University of Edinburgh.\nJulia Hockenmaier and Mark Steedman. 2002. Generative\nModels for Statistical Parsing with Combinatory Catego-\nrial Grammar. In ACL40 .\nJulia Hockenmaier and Mark Steedman. 2007. CCGbank: A\nCorpus of CCG Derivations and Dependency Structures\nExtracted from the Penn Treebank. Computational Lin-\nguistics , 33:355–396.\nMatthew Honnibal, Joel Nothman, and James R. Curran.\n2009. Evaluating a Statistial CCG Parser on Wikipedia.\nInProceedings of the 2009 Workshop on the People’s Web\nMeets NLP , ACL-IJCNLP .\nTerry Koo, Xavier Carreras, and Michael Collins. 2008. Sim-\nple Semi-supervised Dependency Parsing. In Proceedings\nof ACL-08: HLT , pages 595–603. Association for Com-\nputational Linguistics, Columbus, Ohio.\nLDC93T1. 1993. LDC93T1. Linguistic Data Consortium,\nPhiladelphia .\nMatthew Lease and Eugene Charniak. 2005. Parsing Biomed-\nical Literature. In R. Dale, K.-F. Wong, J. Su, and\nO. Kwong, eds., Proceedings of the 2nd International\nJoint Conference on Natural Language Processing (IJC-\nNLP’05) , vol. 3651 of Lecture Notes in Computer Science ,\npages 58 – 69. Springer-Verlag, Jeju Island, Korea.\nMike Lewis and Mark Steedman. 2013. Combined Distribu-\ntional and Logical Semantics. Transactions of the Associ-\nation for Computational Linguistics .\nDavid McClosky, Eugene Charniak, and Mark Johnson.\n2006. Effective Self-Training for Parsing. In Proceedings\nof HLT-NAACL 2006 .\nBernard Merialdo. 1994. Tagging English Text with a Prob-\nabilistic Model. Computational Linguistics , 20(2):155–\n171.Radford M. Neal and Geoffrey E. Hinton. 1998. A view of\nthe EM algorithm that justifies incremental, sparse, and\nother variants. In Learning and Graphical Models , pages\n355 – 368. Kluwer Academic Publishers.\nSlav Petrov and Ryan McDonald. 2012. Overview of the\n2012 Shared Task on Parsing the Web. In First Work-\nshop on Syntactic Analysis of Non-Canonical Language\n(SANCL) Workshop at NAACL 2012 .\nRandi Reppen, Nancy Ide, and Keith Suderman. 2005.\nLDC2005T35, American National Corpus (ANC) Second\nRelease. Linguistic Data Consortium, Philadelphia .\nLaura Rimell and Stephen Clark. 2008. Adapting a\nLexicalized-Grammar Parser to Contrasting Domains. In\nProceedings of the Conference on Empirical Methods in\nNatural Language Processing (EMNLP-08) .\nValentin I. Spitkovsky, Hiyan Alshawi, Daniel Jurafsky, and\nChristopher D. Manning. 2010. Viterbi Training Improves\nUnsupervised Dependency Parsing. In Proceedings of\nCoNLL-2010 .\nMark Steedman. 2000. The Syntactic Process . 
MIT\nPress/Bradford Books.\nMark Steedman, Steven Baker, Jeremiah Crim, Stephen\nClark, Julia Hockenmaier, Rebecca Hwa, Miles Osbornn,\nPaul Ruhlen, and Anoop Sarkar. 2003. Semi-Supervised\nTraining for Statistical Parsing. Tech. rep., CLSP WS-02.\nJun Suzuki, Hideki Isozaki, Xavier Carreras, and Michael\nCollins. 2009. An Empirical Study of Semi-supervised\nStructured Conditional Models for Dependency Parsing.\nInProceedings of the 2009 Conference on Empirical\nMethods in Natural Language Processing , pages 551–\n560. Association for Computational Linguistics, Singa-\npore.\nEmily Thomforde and Mark Steedman. 2011. Semi-\nsupervised CCG Lexicon Extension. In Proceedings of the\nConference on Empirical Methods in Natural Language\nProcessing, Edinburgh UK .134", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "8yJn4x8mFr", "year": null, "venue": "CoRR 2022", "pdf_link": "http://arxiv.org/pdf/2210.02438v3", "forum_link": "https://openreview.net/forum?id=8yJn4x8mFr", "arxiv_id": null, "doi": null }
{ "title": "DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics", "authors": [ "Ivan Kapelyukh", "Vitalis Vosylius", "Edward Johns" ], "abstract": "We introduce the first work to explore web-scale diffusion models for robotics. DALL-E-Bot enables a robot to rearrange objects in a scene, by first inferring a text description of those objects, then generating an image representing a natural, human-like arrangement of those objects, and finally physically arranging the objects according to that goal image. We show that this is possible zero-shot using DALL-E, without needing any further example arrangements, data collection, or training. DALL-E-Bot is fully autonomous and is not restricted to a pre-defined set of objects or scenes, thanks to DALL-E's web-scale pre-training. Encouraging real-world results, with both human studies and objective metrics, show that integrating web-scale diffusion models into robotics pipelines is a promising direction for scalable, unsupervised robot learning.", "keywords": [], "raw_extracted_content": "IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. ACCEPTED APRIL, 2023 1\nDALL-E-Bot: Introducing Web-Scale\nDiffusion Models to Robotics\nIvan Kapelyukh\u00031;2, Vitalis V osylius\u00031, Edward Johns1\nAbstract —We introduce the first work to explore web-scale\ndiffusion models for robotics. DALL-E-Bot enables a robot to\nrearrange objects in a scene, by first inferring a text description\nof those objects, then generating an image representing a natural,\nhuman-like arrangement of those objects, and finally physically\narranging the objects according to that goal image. We show\nthat this is possible zero-shot using DALL-E, without needing\nany further example arrangements, data collection, or training.\nDALL-E-Bot is fully autonomous and is not restricted to a pre-\ndefined set of objects or scenes, thanks to DALL-E’s web-scale\npre-training. Encouraging real-world results, with both human\nstudies and objective metrics, show that integrating web-scale\ndiffusion models into robotics pipelines is a promising direction\nfor scalable, unsupervised robot learning. Videos are available\non our webpage at: https://www.robot-learning.uk/dall-e-bot.\nIndex Terms —AI-Based Methods, Big Data in Robotics and\nAutomation, Deep Learning in Grasping and Manipulation\nI. I NTRODUCTION\nMANY everyday tasks, such as setting a dining table,\ntidying an office, or packing groceries, can be ex-\npressed as an object rearrangement problem [1] in robotics:\ngiven a set of objects, determine a goal pose for each and\nthen physically move the objects accordingly. However, cal-\nculating these goal poses is a challenging problem, due to the\ndiversity of factors that should be considered. For example,\nwhen tidying a room, the created arrangement should be\nsemantically appropriate, aesthetically appealing, physically\nstable, and convenient for a human to use.\nMost prior approaches for predicting goal states (i.e. a goal\npose for each object) rely on a training dataset of example\narrangements, where objects are placed into desirable poses\neither manually [2], [3], [4], [5], or in simulation using a hand-\ncrafted function [6], [7], [8]. 
At test time, the robot can then\nrearrange a given set of objects into a similar arrangement.\nThis approach can be effective if the training and testing\nscenes are similar, but it is challenging to scale to unstructured\nenvironments such as homes, due to the sheer diversity of ob-\njects present, and the combinatorial complexity of acceptable\nManuscript received: February, 24, 2023; Accepted April 11, 2023.\nThis paper was recommended for publication by Editor Aleksandra Faust\nupon evaluation of the Associate Editor and Reviewers’ comments. This\nwork was supported by Dyson Technology Ltd, and the Royal Academy of\nEngineering under the Research Fellowship Scheme.\n\u0003Ivan Kapelyukh and Vitalis V osylius are co-first authors.\n1Ivan Kapelyukh, Vitalis V osylius and Edward Johns are\nwith the Robot Learning Lab at Imperial College London.\nfik517,vv19,e.johns [email protected]\n2Ivan Kapelyukh is also with the Dyson Robotics Lab at Imperial College\nLondon.\nDigital Object Identifier (DOI): 10.1109/LRA.2023.3272516. © 2023 IEEE.\nA fork, a knife, a plate, \nand a spoon, top-down Generate \nimage \nInitial scene \nDALL-E image Final scene \nFig. 1. In DALL-E-Bot, the robot prompts DALL-E with a list of objects it\nhas detected, which then generates an image of a human-like arrangement of\nthose objects. The robot then rearranges the real objects via pick-and-place\nto match the generated goal image.\narrangements. Today’s methods often still require hundreds or\nthousands of examples to be provided [4], [6], [7], [8].\nAs an alternative direction to manually collecting datasets\nof desirable object arrangements, we observe that human\npreferences for object arrangement are implicit in images\nof human-arranged scenes, which are abundant at scale on\nthe Web. Extracting arrangement preferences from this web-\nscale data is therefore an attractive research direction, as this\ncould enable generalisation to a broad set of objects and\nscenes. Recently, diffusion models such as OpenAI’s DALL-\nE 2 [9] have been trained on hundreds of millions of image-\ncaption pairs from the Web. These models learn a language-\nconditioned distribution over natural images, from which new\nimages can be generated given a text prompt. In our work, we\nshow for the first time how these web-scale diffusion models\ncan be used directly in robotics pipelines, without requiring\nany further training, thus offering an exciting direction towards\nscalable learning of object rearrangement.\nOur framework, called DALL-E-Bot, uses these pre-trained\ndiffusion models as “imagination engines” for robots. As\nshown in Fig. 1, DALL-E-Bot enables a robot to generate an\nimage of a natural, human-like goal state for a rearrangement\ntask, requiring as input only the image that the robot initially\nobserves of the scene. Consequently, the robot can then\ndetermine the goal pose for each object, and execute the\nrearrangement with pick-and-place actions. The contributions\npresented in this paper include:arXiv:2210.02438v3 [cs.RO] 4 May 2023\n2 IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. ACCEPTED APRIL, 2023\n(1)A modular pipeline for performing rearrangement tasks,\nfrom visual perception to real-world robot execution, incor-\nporating a web-scale diffusion model. This involves creating\nan object-centric representation of the scene, without limiting\nthe method to a pre-defined set of classes. 
We also show em-\npirically that a sample-and-filter strategy when using diffusion\nmodels is crucial for performance. (2)Techniques for crossing\nthe domain gap between real images and diffusion-generated\nimages. Our algorithm combines cross-instance Hungarian\nmatching based on CLIP features, ICP alignment on segmen-\ntation masks, and photometric loss on semantic feature maps.\n(3)Leveraging the inpainting capability of diffusion models\nto allow the robot to take into account the poses of objects\npre-placed by the human, enabling collaborative human-robot\nrearrangement. (4)Experiments using both subjective and\nobjective metrics, including a user study collecting 3000 user\nratings, where we evaluate our method on several useful\neveryday rearrangement tasks.\nThe DALL-E-Bot framework has several useful properties.\n(1) Zero-shot: it uses only pre-trained models like DALL-\nE, with no demonstrations or training required. This scalable\napproach greatly eases the burden on both researchers and\nusers. (2) Open-set: it is not restricted to a specific set of\nobjects or scenes, since these diffusion models are pre-trained\non web-scale data. (3) Autonomous: it does not require any\nuser-provided goal state specification or supervision.\nTo the best of our knowledge, this is the first work to\ninvestigate web-scale diffusion models for robotics and unlock\nthese advantageous properties.\nII. R ELATED WORK\nA. Predicting Goal Arrangements\nWe now highlight prior approaches to predicting goal poses\nfor rearrangement tasks. Some methods view the prediction\nof goal poses as a classification problem, by choosing from a\nset of discrete options for an object’s placement. For house-\nscale rearrangement, a pre-trained language model can be used\nto predict goal receptacles such as tables [4], and out-of-\nplace objects can be detected automatically [10]. At a room\nlevel, the correct drawer or shelf can be classified [11], taking\npreferences into account [12]. Lower-level prediction from\na dense set of goal poses can be achieved with a graph\nneural network [13] or a preference-aware transformer [8]. Our\nframework generates high-resolution images of where objects\nshould be placed, thus does not require a set of discrete options\nto be pre-defined, and can predict more precise poses than is\npossible with language.\nMethods for predicting continuous object poses typically\nuse a dataset of example arrangements. They can learn spatial\npreferences with a graph V AE [3]. For language-conditioned\nrearrangement, an autoregressive transformer [6] can be used,\nor a diffusion model over poses can be combined with learned\ndiscriminators to avoid collisions [7]. For furniture layout\ngeneration, there are methods which predict goal states via\niterative denoising [14], or avoid collisions during the rear-\nrangement process using gradient fields [15]. Other rearrange-\nment approaches use full demonstrations [16], [17], or applypriors such as human pose context [2]. Unlike all of these\nworks, our proposed framework does not require collecting\nand training on a dataset of rearrangement examples, which\noften restricts these methods to a specific set of objects and\nscenes. Instead, we show that exploiting existing web-scale\ndiffusion models enables zero-shot rearrangement.\nB. Goal Images for Task Specification\nMany manipulation methods use a goal image to specify\nthe goal state. 
This includes flow-based rearrangement which generalises to novel objects [18], as well as manipulation policies which are learned from demonstration [19] or through reinforcement learning [20]. Requiring a provided goal image often places a burden on the user to complete the task themselves in order to show the goal state. Instead, our proposed framework for automatically generating realistic goal images can be used together with all these existing methods to make them truly autonomous by avoiding manual goal specification.
C. Diffusion Models
Web-scale image diffusion models such as DALL-E are at the heart of our framework. A diffusion model [21] is trained to remove added noise from a data sample, e.g. an image. By starting from random noise and iteratively applying many learned denoising steps, a new sample can be generated from the learned distribution. In robotics, diffusion models have been trained to learn the distribution over actions for trajectory planning [22] and for visuomotor control [23]. In our work, we show how pre-trained image diffusion models can be used zero-shot. We use DALL-E 2 [9], but our framework can also be used with other text-to-image models [24].
D. Image Generation in Robot Manipulation
In our work, we study how to generate a goal image using a general-purpose web-scale diffusion model (DALL-E). But image generation models have previously been used for robotics in various other ways. For example, learning an image dynamics model can then be used for visual control [25], [26]. And whilst our work is the first to study web-scale diffusion models for robotics, other recent work [27] also uses these diffusion models to add distractor objects during training as a form of data augmentation. Subsequent work uses augmentation to generalise pick-and-place policies to novel objects and environments [28], and to automatically select regions for augmentation using text guidance [29]. However, these augmentation methods aim to make existing learned controllers more robust and general, whilst our work aims for zero-shot object rearrangement by generating goal images, without requiring prior learned controllers. Recently, text-to-video diffusion models have also been used in robotics by fine-tuning on robot demonstration videos [30].
III. METHOD
A. Overview
We address the problem of predicting a goal pose for each object in a scene, such that objects can then be rearranged in
[Fig. 2 diagram: pipeline from Initial Observation to Object-Level Representation (Segmentation Masks, Object Captions such as "a fork with a black handle on a wooden table", Visual-Semantic Features, produced by Mask R-CNN, CLIP and Image Captioning), then Prompt Generation ("A fork, a knife, a plate, and a spoon, top-down"), Web-Scale Diffusion Model (e.g. DALL-E), Generated Image, Object Matching & Pose Estimation, Target Poses, Robot Execution, and the Created Arrangement.]
Fig. 2. DALL-E-Bot creates a human-like arrangement of objects in the scene using a modular approach. First, the initial observation image is converted into a per-object description consisting of a segmentation mask, an object caption, and a CLIP visual feature vector.
Next, a text prompt is constructed describing\nthe objects in the scene and is passed into DALL-E to create a goal image for the rearrangement task, where the objects are arranged in a human-like way.\nThen, the objects in the initial and generated images are matched using their CLIP visual features, and their poses are estimated by aligning their segmentation\nmasks. Finally, a robot rearranges the scene based on the estimated poses to create the generated arrangement.\na human-like way. We propose to predict goal poses zero-shot\nfrom a single RGB image IIof the initial scene.\nTo achieve this, we propose a modular pipeline shown in\nFig. 2. At the heart of our method is a web-scale image\ndiffusion model DALL-E 2 [9], which, given a text description\n`of the objects in a scene, can generate a goal image IG,\ndepicting a human-like arrangement of those objects. We can\nsample many such images for a given text description. We\nconvert an initial RGB observation into a more relevant object-\nlevel representation to individually reason about the objects\nin the scene. This representation consists of text captions\nciof crops of individual objects (used to construct a text\nprompt`) together with their segmentation masks Mi, and\nvisual-semantic feature vector viacquired using the CLIP\nmodel [31]. We also convert generated images into object-level\nrepresentations and select the image that has the same number\nof objects as the initial scene, and best matches the objects in\nthe initial scene semantically. Using an Iterative Closest Point\n(ICP) [32] algorithm in image space, we then register corre-\nsponding segmentation masks to obtain transformations, which\nare applied to each object to achieve the desired arrangement.\nFinally, we convert these transformations from image space to\nCartesian space using a depth camera observation, and deploy\na real Franka Emika Panda robot equipped with a compliant\nsuction gripper to rearrange the scene. Since this method is\nmodular, it will improve as the individual components (e.g.\nsegmentation) improve in the future.\nB. Object-Level Representation\nTo reason about the poses of individual objects in the\nobserved scene, we need to convert the initial RGB observation\ninto a more functional, object-level representation. We use the\nMask R-CNN model [33] from the Detectron2 library [34] to\ndetect objects in an image and generate segmentation masksMi. This model was pre-trained on the LVIS dataset [35],\nwhich has 1200 object classes, being more than sufficient\nfor many rearrangement tasks. For each object, Mask R-CNN\nprovides us with a bounding box, a segmentation mask, and\na class label. However, we found that whilst the bounding\nbox and segmentation mask predictions are usually high\nquality and can be used for pose estimation (described in\nSection III-E), the predicted class labels are often incorrect\ndue to the large number of classes in the training dataset.\nAs we are using labels of objects in the scene (described\nin Section III-C) to construct a prompt for an image diffusion\nmodel, it is crucial for these labels to be accurate. Therefore,\ninstead of using Mask R-CNN’s predicted class labels, we pass\nRGB crops around each object’s bounding box through an\nOFA image-to-text captioning model [36], to get text descrip-\ntionsciof the objects in the initial scene image. 
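As a rough illustration of this step, the sketch below builds the per-object representation with a Detectron2 Mask R-CNN predictor and the OpenAI CLIP package, both of which the paper names. The specific LVIS config string is only an assumed example, and `caption_crop` is a hypothetical stand-in for the OFA captioning model rather than its real interface; this is a sketch, not the authors' implementation.

```python
import torch
import clip
from PIL import Image
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Mask R-CNN pre-trained on LVIS (any LVIS instance-segmentation config would do).
cfg = get_cfg()
cfg_name = "LVISv0.5-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml"  # assumed config name
cfg.merge_from_file(model_zoo.get_config_file(cfg_name))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(cfg_name)
predictor = DefaultPredictor(cfg)

clip_model, clip_preprocess = clip.load("ViT-B/32", device=device)

def caption_crop(crop: Image.Image) -> str:
    """Hypothetical placeholder for the OFA image-to-text captioning model."""
    raise NotImplementedError

def object_level_representation(image_bgr, image_rgb: Image.Image):
    """Return one (mask, caption, clip_feature) triple per detected object."""
    instances = predictor(image_bgr)["instances"].to("cpu")
    objects = []
    for mask, box in zip(instances.pred_masks, instances.pred_boxes.tensor):
        x0, y0, x1, y1 = [int(v) for v in box]
        crop = image_rgb.crop((x0, y0, x1, y1))
        caption = caption_crop(crop)
        with torch.no_grad():
            feat = clip_model.encode_image(clip_preprocess(crop).unsqueeze(0).to(device))
        feat = feat / feat.norm(dim=-1, keepdim=True)  # unit-normalise for cosine similarity
        objects.append((mask.numpy(), caption, feat.squeeze(0).cpu()))
    return objects
```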
Generally,\nthis approach allows us to more accurately predict object class\nlabels and go beyond the objects in Mask R-CNN’s training\ndistribution, and even obtain their visual characteristics such\nas colour or shape. Finally, we also pass each object crop\nthrough a CLIP visual model [31], giving each object a 512-\ndimensional visual-semantic feature vector vi. These features\nwill be used later for matching objects between the initial\nscene image and the generated image. An alternative approach\nwould be to also infer captions for generated objects and\nuse them for matching, but this would rely on the captioning\nmodel’s ability to recognise generated objects.\nIn summary, by the end of this stage, we have converted\nan RGB observation IIinto an object-level representation\n(Mi;ci;vi), which represents each object by a segmentation\nmask, a text caption, and a semantic feature vector.\n4 IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. ACCEPTED APRIL, 2023\nC. Goal Image Generation\nWe wish to generate images of natural and human-like\narrangements, given their text descriptions. To this end, we\nexploit recent advances in text-to-image generation and web-\nscale diffusion models, by using the publicly-available DALL-\nE 2 [9] model from OpenAI. This has been trained on a vast\nnumber of image-caption pairs from the Web, and represents\nthe conditional distribution p\u0012(IGj`;IM). Here,IGis an image\ngenerated by the model, `is a text prompt, and IMis an image\nmask that can be used to prevent the model from changing the\nvalues of certain pixels. Distribution p\u0012includes many images\nwith scenes arranged by humans in a natural and usable way.\nTherefore, by sampling from this distribution, we can generate\nimages depicting human-like arrangements and create those\narrangements in the real world by moving objects to the same\nposes as in the generated images. The ability to condition this\ndistribution on an image mask IMlets us handle cases where\nsome objects in the scene should not be moved by the robot.\nTo generate an image using DALL-E, we must first construct\na text prompt `describing the scene. To this end, we use\nobject captions from our object-level representation. Although\nfull captions, including visual characteristics, could be used to\ngenerate images with objects closely resembling the observed\nones, in this work, we only use the nouns describing the\nobject’s class and leave including visual characteristics for\nfuture work. Thus, we extract the class of each object from the\ncaption of its object crop, i.e. we extract “apple” from “a red\napple on a wooden table”. We do this by passing the object\ncaptions through the Part-of-Speech tagging model [37] from\nthe Flair NLP library [38], which tags each word as a noun, a\nverb, etc. From this list of classes, we construct a prompt that\nmakes minimal assumptions about the scene, to allow DALL-\nE to arrange it in the most natural way. In this work, our\nexperiments use tabletop scenes, with observations captured\nby a camera mounted on a robot’s wrist pointing downwards\ntowards the table. Therefore, we added a “top-down” phrase to\nthe prompt to better align the initial and generated images. As\nsuch, an example prompt we use would be “A fork, a knife,\na plate, and a spoon, top-down” (as in Fig. 2).\nWe use DALL-E’s ability to condition distribution p\u0012on\nimage masks in three ways. First, if there are objects in the\nscene that a robot is not allowed to move, we add their\ncontours toIM. 
This prevents DALL-E from generating these\nobjects in different poses while still allowing for other objects\nto be placed on top or in them (e.g. a basket cannot be moved,\nbut other objects can be placed inside it). Second, we add a\nmask of the tabletop’s edges in our scene to IMto visually\nground the generated images. This prevents objects from being\nplaced on the edge of the generated image. Additionally, we\nfound empirically that this makes the generated objects have\nmore appropriate sizes. Third, we subtract segmentation masks\nof all the movable objects from IM, with enlarged masks to\nremove their shadows. Removing these shadows from IMis\nhelpful, as if DALL-E sees shadows of objects in their original\nposes, it often generates objects in the same poses to fit with\nthose shadows, making it harder to generate novel goal poses.\nUsing the prompt `and the image mask IM, we sam-ple a batch of images from the conditional distribution\np\u0012(IGj`;IM), representing the text-to-image model. We do so\nusing an automated script and OpenAI’s web API. Examples\nof generated images are shown in Fig. 3.\nExamples of generated goal images Prompt to DALL-E \n(automatically inferred \nfrom initial image) Initial image \nA fork, a knife, a plate, \nand a spoon, top-down \nA keyboard, a mouse, \nand a mug, top-down \nTwo apples, and an \norange, top-down \nFig. 3. The robot automatically infers a text prompt `from the initial disorgan-\nised scene and uses it to generate candidate goal images. DALL-E generates\na diverse set of high-quality images depicting human-like arrangements.\nD. Image Selection & Object Matching\nIn the batch of images generated by DALL-E, not all will be\ndesirable for the rearrangement task; some may have artefacts\nhindering object detection, others may include extra objects\nthat were not part of the text prompt, etc. Therefore, we need\nto select the generated image IGwhose objects best match\nthose in the real-world initial image II.\nFor each generated image, we obtain segmentation masks\nand a CLIP semantic feature vector for each object, using\nthe procedure in Section III-B. Then, we filter out generated\nimages where the number of objects is different to the initial\nscene, or where movable objects overlap. If there are no\ngenerated images which pass these checks, we sample another\nbatch. We then match objects between the generated image and\ninitial image. This is non-trivial since the generated objects\nare different instances to the real objects, often with a very\ndifferent appearance. Inspired by [39], we compute a similarity\nscore between any two objects (one from II, and one from IG)\nusing the cosine similarity between their CLIP visual feature\nvectors. Since greedy matching is not guaranteed to yield\noptimal results in general, we use the Hungarian Matching\nalgorithm [40] to compute an assignment of each object in\nthe initial image to an object in the generated image, such\nthat the total similarity score is maximised. Then, we select\nthe generated image IGwhich has the best overall score with\nthe initial image II. This image depicts the most similar set\nof objects to the real scene, and therefore gives the best\nopportunity for rearranging the real scene.\nE. Object Pose Estimation\nFor each object in the initial image, we now know its\nsegmentation mask in the initial image and the corresponding\nsegmentation mask in the generated image. 
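Before the pose estimation continues below, the image-selection and matching step just described can be sketched with SciPy's Hungarian solver. The sketch assumes unit-normalised CLIP features for every object and omits the overlap check on movable objects; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(real_feats, gen_feats):
    """Hungarian matching between real and generated objects.

    real_feats, gen_feats: (N, D) arrays of unit-normalised CLIP features.
    Returns (pairs, total_similarity), where pairs[i] = (real_idx, gen_idx).
    """
    sim = real_feats @ gen_feats.T              # cosine similarities S_ij
    rows, cols = linear_sum_assignment(-sim)    # negate to maximise total similarity
    return list(zip(rows, cols)), float(sim[rows, cols].sum())

def select_goal_image(real_feats, candidates):
    """Keep only candidates with the same object count as the real scene,
    then pick the one whose optimal matching has the highest total score."""
    best = None
    for idx, gen_feats in enumerate(candidates):
        if len(gen_feats) != len(real_feats):
            continue                            # wrong number of objects: reject
        pairs, score = match_objects(np.asarray(real_feats), np.asarray(gen_feats))
        if best is None or score > best[2]:
            best = (idx, pairs, score)
    return best                                 # None means: sample another batch
```

Negating the similarity matrix turns the maximisation into the minimisation form that `linear_sum_assignment` expects.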
By aligning these\nmasks, we can estimate a transformation from the initial pose\n(in the initial image) to the goal pose (in the generated image).\nWe rescale each initial segmentation mask, such that the\ndimensions of its bounding box equal those in the generated\nKAPELYUKH, VOSYLIUS, JOHNS: DALL-E-BOT 5\nimage, and then use the Iterative Closest Point (ICP) algorithm\n[32] to align the two masks, taking each pixel to be a point.\nThis gives us a 3-DoF (x;y;\u0012 )transformationTin pixel space\nbetween the initial and goal pose. We run ICP from many\nrandom initial poses, due to local optima. For objects with\nnearly symmetric binary masks such as knives, aligning masks\nwith ICP leads to multiple candidate solutions (for knives, they\ndiffer by 180 degrees). To select the correct solution (handle\naligned with handle, blade aligned with blade), we pass the\ngenerated object image oGand the transformed real object\nimageT(oI)through a semantic feature map extractor fS\n(an ImageNet-trained ResNet [41], [42]). We select the ICP\nsolutionTwhich minimises the photometric loss between the\nsemantic feature maps: LS= (fS(oG)\u0000fS(T(oI)))2.\nThe generated image can depict objects of a different scale\nthan the real objects. Naively moving objects to estimated\nposes can lead to collisions (if generated objects are smaller) or\nunnaturally spaced-out arrangements (if generated objects are\nlarger). Therefore, we move objects closer together or further\napart based on the mismatch in size, ensuring this does not\nintroduce collisions by moving colliding objects further apart.\nNext, we use a depth camera to project the pixel-space poses\ninto 3D space on the tabletop, to obtain a transformation for\neach object which would move it from the initial real-world\npose to the goal real-world pose. Finally, the robot executes\nthese transformations by performing a sequence of pick-and-\nplace operations. We also designed a simple planner which first\nmoves objects that would cause collisions into intermediate\nposes to the side, before later moving them to their goal poses.\nMore details about the robot execution and hardware used in\nour experiments can be found in Section IV-A. Putting these\nsteps together, we summarise our DALL-E-Bot method for\nautonomous rearrangement in Algorithm 1.\nIV. E XPERIMENTS\nIn our experiments, we evaluate the ability of our method\nto create human-like arrangements using both subjective (Sec-\ntion IV-B) and objective (Section IV-C) metrics.\nA. Experiment Setup\nIn real-world applications of DALL-E-Bot, users would\nonly see the outcome of the real-world rearrangement, which\nwould include all the errors that might accumulate through\nthe pipeline. To simulate this experience for our evaluation,\nthe predicted arrangements were autonomously created in a\n54x54 cm tabletop environment using a 7-DoF Franka Emika\nrobot equipped with a compliant suction gripper to execute a\nseries of top-down pick-and-place operations. For each object,\nthe robot grasps the object in its initial pose and moves it to its\ntarget pose, performing a rotation of \u0012in between to achieve\nthe target orientation. The robot’s motion is calculated using\nInverse Kinematics and interpolating Cartesian end-effector\nposes between a series of waypoints. We use hand-designed\ngrasping primitives to calculate grasping poses for each object,\nbut it is possible to swap in another grasping module such as\n[43], [44] into our pipeline if required. 
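Stepping back to the pose-estimation step of Section III-E above, the ICP alignment of two binary masks can be sketched as below. This is a generic 2-D point-set ICP restarted from several initial rotations, not the authors' implementation; it omits the rescaling of the initial mask and the semantic-feature loss used to break near-symmetric ties.

```python
import numpy as np

def mask_points(mask, max_points=400, seed=0):
    """(N, 2) array of (x, y) pixel coordinates of a binary mask, subsampled
    so the brute-force nearest-neighbour search below stays cheap."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    if len(pts) > max_points:
        rng = np.random.default_rng(seed)
        pts = pts[rng.choice(len(pts), max_points, replace=False)]
    return pts

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src points onto dst points."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp_2d(src, dst, init_angle=0.0, iters=30):
    """Nearest-neighbour ICP between two 2-D point sets from one initial rotation."""
    c, s = np.cos(init_angle), np.sin(init_angle)
    R = np.array([[c, -s], [s, c]])
    t = dst.mean(0) - R @ src.mean(0)
    for _ in range(iters):
        moved = src @ R.T + t
        d = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nearest = dst[np.argmin(d, axis=1)]
        R, t = best_rigid_transform(src, nearest)
    moved = src @ R.T + t
    d = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    residual = float(np.sqrt(d.min(axis=1)).mean())
    return R, t, residual

def align_masks(mask_init, mask_goal, n_starts=8):
    """Run ICP from several initial rotations and keep the lowest-residual alignment."""
    src, dst = mask_points(mask_init), mask_points(mask_goal)
    starts = np.linspace(0.0, 2 * np.pi, n_starts, endpoint=False)
    return min((icp_2d(src, dst, a) for a in starts), key=lambda r: r[2])
```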
The suction gripper\nis connected to a commercial vacuum device and controlledAlgorithm 1: DALL-E-Bot Autonomous Rearrangement\n1Capture image IIof initial, disorganised scene\n2Get(MIi;cIi;vIi)for each obj. oIifound inII\n#Mfrom MASKRCNN ,cfrom OFA,vfrom CLIP\n3Construct scene-level text description `from allcIi\n4Sample goal image batch fIGg\u0018p\u0012(IGj`;IM)\n5foreach generated goal image IGdo\n6Get(MGj;cGj;vGj)for each obj. oGjfound inIG\n7Skip thisIGif fails checks (e.g. wrong obj. count)\n# Match objects between IIand current IG:\n8Fill in similarity matrix for each pair (oIi;oGj):\nSij cos(vIi;vGj)\n9Compute optimal matching using HUNGARIAN (S)\n10Select goal image IGwith max-similarity matches\n11foreach object oIiand its match oGjdo\n12 foreach ICP initialisation do\n13 Align initial and goal masks: Tk ICP(MIi;MGj)\n# Get pixelwise loss between semantic feature maps:\n14LSk (fS(oGj)\u0000fS(Tk(oIi)))2\n15 Select pose transform for object: T argminTkLSk\n16foreach object owith computed transform Tdo\n17 Move other objects away if collision predicted\n18 Find grasp and place gripper poses differing by T\n19 Execute on robot: PICKANDPLACE(o;T)\nvia an integrated microcontroller. In our experiments, we use\na wrist-mounted Intel Realsense D435i RGBD camera and\ncrop images to a 700x700 resolution. After the arrangement is\ncompleted, we record the outcome as a top-down RGB image.\nB. Zero-Shot Autonomous Rearrangement\nFirst, we explore the following question: Can DALL-E-Bot\narrange a set of objects in a human-preferred way? We\nevaluate on 3 everyday tabletop rearrangement tasks: dining\nscene ,office scene , and fruit scene (Fig. 4). The dining scene\ncontains four objects: a knife, a fork, a spoon, and a plate.\nThe office scene contains a stationary iPad which the robot is\nnot allowed to move, and three movable objects: a keyboard,\na mouse, and a mug. The fruit scene contains a stationary\nbasket, and three movable objects: two apples and an orange.\nSince DALL-E-Bot is the first method to predict precise\ngoal poses for rearrangement in a way which is zero-shot\n(requiring no training arrangements), it cannot be directly com-\npared against methods that require collecting large datasets of\nexample object arrangements, such as [6], [7]. These existing\nmethods are not designed for the zero-shot setting. Instead, we\ndesigned two heuristic-based baselines, which are also zero-\nshot for a fair comparison. The Rand-No-Coll baseline places\nobjects randomly in the environment while ensuring they do\nnot overlap. The Geometric baseline puts all the objects evenly\nin a straight line such that they are not colliding, and aligns the\nobjects so that they are parallel using their bounding boxes. In\naddition, we compare our method to two variants. DALL-E-\nBot-AR creates an arrangement in an auto-regressive way, with\na sequence of goal images rather than a single image, where\n6 IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. ACCEPTED APRIL, 2023\nDining \nScene Office \nScene Fruit \nScene DALL-E-Bot DALL-E-Bot-NF Geometric Rand-No-Coll \n DALL-E-Bot-AR \nFig. 4. Examples of scenes rearranged by the robot using different methods. Columns for the methods that use DALL-E include the generated image (left)\nand the final arrangement (right). For DALL-E-Bot-AR , images are from the last step.\neach placed object is treated as a stationary object for the next\ngenerated image (and thus its contours are added to IM). 
Here,\nwe do not adjust the poses of the objects based on the size\nmismatch and do not reject generated images with the wrong\nnumber of objects. Finally, DALL-E-Bot-NF (no filtering) does\nnot filter generated images and always uses the first image. If\nthis image has fewer objects than in the real scene, unmatched\nobjects are placed randomly, whilst avoiding collisions.\nSince we aim to create arrangements which are appealing\nto humans, the most direct evaluation is to ask humans for\nfeedback. This follows related work which also evaluates with\nhuman feedback [2], [3], [4], [9], [24], since contrived metrics\nfor arrangement quality may not correlate with what users\nactually desire. We showed human users images of the final\nreal-world scene created by the robot, and asked them the\nfollowing question: “If the robot made this arrangement for\nyou at home, how happy would you be?” . The user provided a\nscore for each method on a Likert Scale from 1 (very unhappy)\nto 10 (very happy), while being shown arrangements made by\neach method side-by-side in a web-based questionnaire. We\nrecruited 40 users representing 18 nationalities, both male and\nfemale, with ages ranging from 22 to 71. Each rated the results\nof 5 methods on 5 random initialisations of 3 scenes, for a total\nof 3000 ratings. Initialisations were roughly matched for all\nthe methods and all users were shown the same images.\nMethod Dining Scene Office Scene Fruit Scene Mean\nRand-No-Coll 2.03\u00061.34 3.56\u00062.01 2.94\u00062.01 2.84\nGeometric 4.08\u00062.27 3.36\u00062.01 3.13\u00061.82 3.52\nDALL-E-Bot-NF 3.87\u00062.78 6.54\u00062.34 7.45\u00063.19 5.95\nDALL-E-Bot-AR 4.88\u00062.61 7.37\u00062.05 9.59\u00060.90 7.28\nDALL-E-Bot 8.01\u00062.03 7.56\u00062.02 9.81\u00060.52 8.46\nTABLE I\nUSER RATINGS FOR ARRANGEMENTS BY EACH METHOD . EACH CELL\nSHOWS THE MEAN AND STANDARD DEVIATION ACROSS ALL USERS AND\nSCENE INITIALISATIONS ,WITH THE BEST IN BOLD .\nThe results of this user study are in Table I. Example\narrangements are shown in Fig. 4, and videos are avail-able at: https://www.robot-learning.uk/dall-e-bot .DALL-E-\nBotreceives high user scores, showing that it can create satis-\nfactory arrangements zero-shot, without requiring task-specific\ntraining. It beats the heuristic baselines, showing that users\nvalue semantic correctness for arranging scenes beyond simple\ngeometric alignment, which justifies the use of web-scale\nlearning of these semantic rules. This is especially evident\nin the dining scene, where DALL-E recognises the semantic\nstructure which can be created from those objects. The DALL-\nE-Bot-NF ablation performs the worst out of the DALL-E-\nBot variants on all scenes. This justifies our sample-and-filter\napproach for using these web-scale models, which ensures\nthat the robot can feasibly create the generated arrangement,\nrather than naively using the first generated image, which can\noccasionally be unnatural (see Fig. 5). The DALL-E-Bot-AR\nvariant performs well generally but struggles in the dining\nscene, where the thin cutlery may slip, leading to accumulating\nerror since the method auto-regressively conditions on the\nobjects placed so far. DALL-E-Bot avoids this issue by jointly\npredicting all object poses.\nDining \nScene Office \nScene Fruit \nScene \nFig. 5. Diffusion models occasionally produce surreal or unnatural images,\nsuch as those shown here. As diffusion models improve, this will happen less\nfrequently. 
Our sample-and-filter approach removes these and instead selects\na suitable goal image, such as those shown in Fig. 3.\nKAPELYUKH, VOSYLIUS, JOHNS: DALL-E-BOT 7\nC. Placing Missing Objects with Inpainting\nIn the next experiment, we use objective metrics to answer\nthe following question: Can DALL-E-Bot precisely complete\na partial arrangement made by a human? We ask DALL-E-\nBot to find a suitable pose for an object that has been masked\nout from a human-made scene, while the other objects are\nkept fixed. We study this using the dining scene, because it\nhas the most semantically rigid structure, lending itself well to\nquantitative, objective evaluation. To create these scenes, we\nasked ten users (both left and right-handed) the following:\n“Imagine you are sitting down here for dinner. Can you\nplease arrange these objects so that you are happy with the\narrangement? ”. As there can be multiple suitable poses for any\nobject, for each of the objects we asked the users to provide\nany alternative poses that they would also be happy with, while\nkeeping other objects in their original poses.\nGiven the image of the arrangement made by a user, we\nmask out everything except the fixed objects. This means\nthat DALL-E cannot change the pixels belonging to fixed\nobjects. The method must then predict the pose of the missing\nobject. DALL-E-Bot does this by inpainting the missing object\nsomewhere in the image. For a given user, the predicted pose\nfor the missing object is compared against the actual pose in\ntheir arrangement. This is done by aligning two segmentation\nmasks of the missing object, one from the actual scene and one\nat the predicted pose. Since this is for two poses of exactly the\nsame object instance, we find the alignment is highly accurate\nand can be used to estimate the error between the actual\nand predicted pose. From this transformation, we take the\norientation and distance errors projected into the workspace\nas our metrics. This is repeated for every object individually\nas the missing object, and across all the users.\nWe compare our method to two zero-shot heuristic base-\nlines, Rand-No-Coll andGeometric .Rand-No-Coll places the\nmissing object randomly within the bounds of the image,\nensuring it does not collide with the fixed objects. Geometric\nfirst finds a line defined by centroids of segmentation maps of\ntwo fixed objects. Then it places the considered object on that\nline such that it is as close to the fixed objects as possible,\ndoes not collide with them, and its orientation is aligned with\nthe orientation of the closest object.\nFork Plate Spoon Knife\nMethod cm / deg cm / deg cm / deg cm / deg\nRand-No-Coll 25.85 / 70.32 10.78 / - 27.47 / 42.56 23.51 / 99.32\nGeometric 15.59 / 40.57 2.29 / - 23.83 / 86.11 11.58 / 1.47\nDALL-E-Bot 4.95 /1.26 1.28 / - 2.13 /2.72 2.1/ 3.27\nTABLE II\nPOSITION AND ORIENTATION ERRORS BETWEEN PREDICTED AND USER\nPREFERRED OBJECT POSES . EACH CELL SHOWS THE MEDIAN ACROSS ALL\nUSERS ,WITH THE BEST IN BOLD .\nWe compare the predicted pose against each of the accept-\nable poses provided by the user, and report the position and\norientation errors from the closest acceptable pose in Table II.\nThe distribution of acceptable poses is multimodal. Therefore,\nwe present the median error across all users, which is less\ndominated by outliers than the mean and is a more informative\nrepresentation of the aggregate performance. 
DALL-E-Botoutperforms the baselines, and is able to accurately place\nthe missing objects for different users. This implies that it\nis successfully conditioning on the poses of the other objects\nin the scene using inpainting, and that the human and robot\ncan create an arrangement collaboratively.\nV. D ISCUSSION\nA. Limitations\nTop-down pick-and-place . Our experiments focus on 3-\nDoF rearrangement, which is sufficient for many everyday\ntasks. However, future work can extend to 6-DoF object poses\nwith more complex interactions, e.g. to stack shelves. This\ncould draw from recent works on collision-aware manipulation\n[45] and learning of skills beyond grasping [46].\nOverlap between objects . Currently, our method assumes\nthat movable objects cannot overlap, e.g. the fork cannot go\non top of the plate. In future, the robot could plan an order\nfor stacking objects. At the start of the rearrangement, the\nrobot could spread out all the objects on the table to reduce\nocclusions as it detects all the objects it needs to arrange.\nRobustness of cross-domain object alignment. We use\nImageNet semantic features, inspired by [47], to align real\nand generated objects. However, the generated objects are\nsometimes difficult to align, e.g. the generated keyboards lack\nlegible text. As diffusion models improve and with techniques\nsuch as [48], this issue will be mitigated.\nB. Future Work\nPersonal preferences . If objects placed by users are visible\nin the inpainting mask, DALL-E may implicitly condition\nimages on inferred preferences (e.g. left/right-handedness).\nFuture work could extend to conditioning on preferences\ninferred in previous scenes arranged by users [3].\nPrompt engineering . Adding terms such as “neat, precise,\nordered, geometric” for the dining scene improved the appar-\nent neatness of the generated image. As found in other works\n[49], there is significant scope to explore this further.\nLanguage-conditioned rearrangement . User instructions\ncan easily be added to the text prompt, e.g. “plates stacked”\nvs “plates laid out”. Prior work shows that following spatial\nrelations such as “inside of” is difficult for some diffusion\nmodels [50], but future work could overcome this.\nC. Conclusions\nIn this paper, we show for the first time that web-scale dif-\nfusion models like DALL-E can act as “imagination engines”\nfor robots, acting like an aesthetic prior for arranging scenes\nin a human-like way. This allows for zero-shot, open-set, and\nautonomous rearrangement, using DALL-E without requiring\nany further data collection or training. In other words, our\nsystem gives web-scale diffusion models an embodiment to\nactualise the scenes that they imagine. Studies with human\nusers showed that they are happy with the results for everyday\nrearrangement tasks, and that the inpainting feature of diffu-\nsion models is useful for conditioning on pre-placed objects.\nWe believe that this is an exciting direction for the future of\nrobot learning, as diffusion models continue to impress and\ninspire complementary research communities.\n8 IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. ACCEPTED APRIL, 2023\nACKNOWLEDGMENTS\nThe authors thank Andrew Davison, Ignacio Alzugaray,\nKamil Dreczkowski, Kirill Mazur, Alexander Nielsen, Eric\nDexheimer, and Tristan Laidlow for helpful discussions, and\nKentaro Wada for developing some of the robot control\ninfrastructure which was used in the experiments.\nREFERENCES\n[1] D. Batra, A. X. Chang, S. Chernova, A. J. 
Davison, J. Deng, V . Koltun,\nS. Levine, J. Malik, I. Mordatch, R. Mottaghi, M. Savva, and H. Su,\n“Rearrangement: A challenge for embodied AI,” arXiv , 2020.\n[2] Y . Jiang, M. Lim, and A. Saxena, “Learning object arrangements in\n3D scenes using human context,” International Conference on Machine\nLearning, ICML , 2012.\n[3] I. Kapelyukh and E. Johns, “My house, my rules: Learning tidying\npreferences with graph neural networks,” in Conference on Robot\nLearning (CoRL) , 2021.\n[4] Y . Kant, A. Ramachandran, S. Yenamandra, I. Gilitschenski, D. Batra,\nA. Szot, and H. Agrawal, “Housekeep: Tidying virtual households using\ncommonsense reasoning,” arXiv , 2022.\n[5] M. Kang, Y . Kwon, and S.-E. Yoon, “Automated task planning using\nobject arrangement optimization,” in International Conference on Ubiq-\nuitous Robots (UR) , 2018.\n[6] W. Liu, C. Paxton, T. Hermans, and D. Fox, “StructFormer: Learning\nspatial structure for language-guided semantic rearrangement of novel\nobjects,” International Conference on Robotics and Automation , 2022.\n[7] W. Liu, T. Hermans, S. Chernova, and C. Paxton, “StructDiffusion:\nObject-centric diffusion for semantic rearrangement of novel objects,”\narXiv , 2022.\n[8] V . Jain, Y . Lin, E. Undersander, Y . Bisk, and A. Rai, “Transformers are\nadaptable task planners,” in Conference on Robot Learning , 2022.\n[9] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, “Hierarchical\ntext-conditional image generation with CLIP latents,” arXiv , 2022.\n[10] G. Sarch, Z. Fang, A. W. Harley, P. Schydlo, M. J. Tarr, S. Gupta, and\nK. Fragkiadaki, “TIDEE: Tidying up novel rooms using visuo-semantic\ncommonsense priors,” in European Conference on Computer Vision ,\n2022.\n[11] M. J. Schuster, D. Jain, M. Tenorth, and M. Beetz, “Learning organiza-\ntional principles in human environments,” in International Conference\non Robotics and Automation , 2012, pp. 3867–3874.\n[12] N. Abdo, C. Stachniss, L. Spinello, and W. Burgard, “Robot, organize\nmy shelves! Tidying up objects by predicting user preferences,” in\nInternational Conference on Robotics and Automation , 2015.\n[13] Y . Lin, A. S. Wang, E. Undersander, and A. Rai, “Efficient and\ninterpretable robot manipulation with graph neural networks,” IEEE\nRobotics and Automation Letters , vol. 7, pp. 2740–2747, 2022.\n[14] Q. A. Wei, S. Ding, J. J. Park, R. Sajnani, A. Poulenard, S. Sridhar,\nand L. Guibas, “Lego-net: Learning regular rearrangements of objects\nin rooms,” arXiv , 2023.\n[15] M. Wu, fangwei zhong, Y . Xia, and H. Dong, “TarGF: Learning\ntarget gradient field for object rearrangement,” in Conference on Neural\nInformation Processing Systems , 2022.\n[16] M. Shridhar, L. Manuelli, and D. Fox, “CLIPort: What and where\npathways for robotic manipulation,” in Conference on Robot Learning\n(CoRL) , 2021.\n[17] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian,\nT. Armstrong, I. Krasin, D. Duong, V . Sindhwani, and J. Lee, “Trans-\nporter networks: Rearranging the visual world for robotic manipulation,”\nConference on Robot Learning (CoRL) , 2020.\n[18] A. Goyal, A. Mousavian, C. Paxton, Y .-W. Chao, B. Okorn, J. Deng,\nand D. Fox, “IFOR: Iterative flow minimization for robotic object rear-\nrangement,” in Conference on Computer Vision and Pattern Recognition\n(CVPR) , 2022.\n[19] C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and\nP. Sermanet, “Learning latent plans from play,” Conference on Robot\nLearning (CoRL) , 2019.\n[20] A. Nair, V . Pong, M. Dalal, S. Bahl, S. 
Lin, and S. Levine, “Visual\nreinforcement learning with imagined goals,” in nternational Conference\non Neural Information Processing Systems , 2018.\n[21] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli,\n“Deep unsupervised learning using nonequilibrium thermodynamics,”\ninInternational Conference on Machine Learning , 2015.[22] M. Janner, Y . Du, J. Tenenbaum, and S. Levine, “Planning with diffusion\nfor flexible behavior synthesis,” in International Conference on Machine\nLearning , 2022.\n[23] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song,\n“Diffusion policy: Visuomotor policy learning via action diffusion,”\narXiv , 2023.\n[24] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-\nresolution image synthesis with latent diffusion models,” arXiv , 2021.\n[25] C. Finn and S. Levine, “Deep visual foresight for planning robot\nmotion,” in International Conference on Robotics and Automation , 2017.\n[26] A. S. Chen, S. Nair, and C. Finn, “Learning generalizable robotic reward\nfunctions from “in-the-wild” human videos,” Robotics: Science and\nSystems , 2021.\n[27] Z. Mandi, H. Bharadhwaj, V . Moens, S. Song, A. Rajeswaran, and\nV . Kumar, “CACTI: A framework for scalable multi-task multi-scene\nvisual imitation learning,” arXiv , 2022.\n[28] Z. Chen, S. Kiami, A. Gupta, and V . Kumar, “GenAug: Retargeting\nbehaviors to unseen situations via generative augmentation,” arXiv ,\n2023.\n[29] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh,\nC. Tan, D. M, J. Peralta, B. Ichter, K. Hausman, and F. Xia, “Scaling\nrobot learning with semantically imagined experience,” arXiv , 2023.\n[30] Y . Du, M. Yang, B. Dai, H. Dai, O. Nachum, J. B. Tenenbaum,\nD. Schuurmans, and P. Abbeel, “Learning universal policies via text-\nguided video generation,” arXiv , 2023.\n[31] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal,\nG. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever,\n“Learning transferable visual models from natural language supervi-\nsion,” in International Conference on Machine Learning, ICML , 2021.\n[32] P. Besl and N. D. McKay, “A method for registration of 3-d shapes,”\nTransactions on Pattern Analysis and Machine Intelligence , 1992.\n[33] K. He, G. Gkioxari, P. Doll ´ar, and R. Girshick, “Mask R-CNN,” in\nInternational Conference on Computer Vision (ICCV) , 2017.\n[34] Y . Wu, A. Kirillov, F. Massa, W.-Y . Lo, and R. Girshick, “Detectron2,”\nhttps://github.com/facebookresearch/detectron2, 2019.\n[35] A. Gupta, P. Dollar, and R. Girshick, “LVIS: A dataset for large vocab-\nulary instance segmentation,” in Proceedings of the IEEE Conference\non Computer Vision and Pattern Recognition , 2019.\n[36] P. Wang, A. Yang, R. Men, J. Lin, S. Bai, Z. Li, J. Ma, C. Zhou,\nJ. Zhou, and H. Yang, “OFA: Unifying architectures, tasks, and modal-\nities through a simple sequence-to-sequence learning framework,” in\nInternational Conference on Machine Learning , 2022.\n[37] A. Akbik, D. Blythe, and R. V ollgraf, “Contextual string embeddings\nfor sequence labeling,” in COLING , 2018.\n[38] A. Akbik, T. Bergmann, D. Blythe, K. Rasul, S. Schweter, and R. V oll-\ngraf, “FLAIR: An easy-to-use framework for state-of-the-art NLP,” in\nNAACL , 2019.\n[39] W. Goodwin, S. Vaze, I. Havoutis, and I. Posner, “Semantically grounded\nobject matching for robust robotic scene rearrangement,” in International\nConference on Robotics and Automation , 2022.\n[40] H. W. Kuhn and B. 
Yaw, “The Hungarian method for the assignment\nproblem,” Naval Res. Logist. Quart , 1955.\n[41] T. Ridnik, E. Ben-Baruch, A. Noy, and L. Zelnik-Manor, “ImageNet-\n21K pretraining for the masses,” arXiv , 2021.\n[42] T. Ridnik, H. Lawen, A. Noy, E. Ben, B. G. Sharir, and I. Friedman,\n“TResNet: High performance gpu-dedicated architecture,” in Winter\nConference on Applications of Computer Vision (WACV) , 2021.\n[43] A. Mousavian, C. Eppner, and D. Fox, “6-DOF GraspNet: Variational\ngrasp generation for object manipulation,” in International Conference\non Computer Vision (ICCV) , 2019.\n[44] E. Johns, S. Leutenegger, and A. J. Davison, “Deep learning a grasp\nfunction for grasping under gripper pose uncertainty,” in International\nConference on Intelligent Robots and Systems (IROS) , 2016.\n[45] V . V osylius and E. Johns, “Where to start? Transferring simple skills to\ncomplex environments,” in Conference on Robot Learning , 2022.\n[46] E. Johns, “Coarse-to-fine imitation learning: Robot manipulation from\na single demonstration,” in International Conference on Robotics and\nAutomation , 2021.\n[47] K. Mazur, E. Sucar, and A. J. Davison, “Feature-realistic neural fusion\nfor real-time, open set scene understanding,” arXiv , 2022.\n[48] R. Gal, Y . Alaluf, Y . Atzmon, O. Patashnik, A. H. Bermano, G. Chechik,\nand D. Cohen-Or, “An image is worth one word: Personalizing text-to-\nimage generation using textual inversion,” arXiv , 2022.\n[49] T. Kojima, S. S. Gu, M. Reid, Y . Matsuo, and Y . Iwasawa, “Large\nlanguage models are zero-shot reasoners,” ArXiv , 2022.\n[50] C. Conwell and T. Ullman, “Testing relational understanding in text-\nguided image generation,” arXiv , 2022.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "9TQ6vGrgaH", "year": null, "venue": "EAIS 2011", "pdf_link": "https://ieeexplore.ieee.org/iel5/5936949/5945904/05945912.pdf", "forum_link": "https://openreview.net/forum?id=9TQ6vGrgaH", "arxiv_id": null, "doi": null }
{ "title": "Case studies with evolving fuzzy grammars", "authors": [ "Trevor Martin", "Nurfadhlina Mohd Sharef" ], "abstract": "Evolving fuzzy grammars have been introduced as a way of identifying meaningful text fragments such as addresses, names, times, dates, as well as finding phrases that indicate complaints, questions, answers, general sentiment, etc. Once tagged in this way, the fragments can undergo further processing e.g. text mining. Fuzziness arises because we do not require a complete match between text and the grammar patterns, and the evolving aspect is necessary because it is rarely possible to specify all patterns in advance. In this paper we briefly describe the evolving fuzzy grammar (EFG) approach and present two experiments: (i) to compare its performance to named-entity recognition systems and (ii) to highlight the importance of evolving new grammars as novel text fragment patterns are seen. In both cases, the EFG system performs well.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "FrpouUC7-n", "year": null, "venue": "EAIS 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9787685/9787686/09787690.pdf", "forum_link": "https://openreview.net/forum?id=FrpouUC7-n", "arxiv_id": null, "doi": null }
{ "title": "EFNN-Gen - a uni-nullneuron-based evolving fuzzy neural network with generalist rules", "authors": [ "Paulo Vitor de Campos Souza", "Edwin Lughofer" ], "abstract": "The evolving fuzzy neural networks have a high degree of interpretability and a high capacity for solving pattern classification problems. However, their accuracy could deteriorate when there are few samples for particular classes available, e.g., when new classes appear in the data stream. One way to improve these models’ performance is to include a priori knowledge in their data-driven architectural structure. This article proposes the new concept of generalist rules based on assessing the specificity of Gaussian functions that make up the neurons of the first layer of the evolving fuzzy neural network (EFNN-Gen). These rules can be seen as general (expert) knowledge about the classification problem. Tests performed with real binary pattern classification bases proved that integrating generalist rules can increase the accuracy of an evolving system and that there is a specific limit on how many rules can be used for this improvement.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "5DEHnzIUQQ", "year": null, "venue": "ECAI2010", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-60750-606-5-655", "forum_link": "https://openreview.net/forum?id=5DEHnzIUQQ", "arxiv_id": null, "doi": null }
{ "title": "Automating Layouts of Sewers in Subdivisions.", "authors": [ "Neil Burch", "Robert C. Holte", "Martin Müller", "David O'Connell", "Jonathan Schaeffer" ], "abstract": "An important part of the creation of a housing subdivision is the design and layout of sewers underneath the road. This is a challenging cost optimization problem in a continuous threedimensional space. In this paper, heuristic-search-based techniques are proposed for tackling this problem. The result is new algorithms that can quickly find near optimal solutions that offer important reductions in the cost of design and construction.", "keywords": [], "raw_extracted_content": "Automating Layouts of Sewers in Subdivisions\nNeil Burch and Robert Holte and Martin M ¨uller and David O’Connell and Jonathan Schaeffer1\nAbstract. An important part of the creation of a housing subdivi-\nsion is the design and layout of sewers underneath the road. This\nis a challenging cost optimization problem in a continuous three-dimensional space. In this paper, heuristic-search-based techniques\nare proposed for tackling this problem. The result is new algorithms\nthat can quickly find near optimal solutions that offer important re-ductions in the cost of design and construction.\n1 INTRODUCTION\nThe design of a housing subdivision is a complex civil engineeringtask. In the concept phase, the designer decides on how a tract of landwill be subdivided into housing lots connected by roads. In the design\nphase, the core infrastructure is planned, including the analysis of the\nterrain, grading of the land (elevation), sewers (water, sanitary), and\nconduits (power, telephone). Finally, in the layout phase, all the de-sign considerations are mapped to an implementation. The design is\ntypically done as part of the tendering process by a civil engineering\nfirm. If they are the successful bidder, then they do the layout.\nMost of the design and layout of a subdivision is manually done,\nwith a professional civil engineer required to verify its adequacy andcompliance with regulations. One of the labor-intensive aspects is the\ndesign and layout of the sewers. This is a difficult problem for engi-\nneers. It is a three-dimensional optimization problem in a continuous\nsearch space. Typically, engineers adopt a least-effort solution, such\nas laying the sewers down underneath the middle of the road. Thesesolutions are demonstrably suboptimal, adding to the subdivision de-\nveloper’s construction costs.\nThis paper poses the problem of automating the design and layout\nof sewers in subdivisions as an interesting AI application. Althoughthe sewer problem is a small part of the overall process of creating asubdivision, automating this process has cost advantages. First, it will\nreduce the manual effort required for the design and layout phases.\nSecond, a (near) optimal solution will reduce the construction costs.\nAutomation of the sewer problem has been tackled by several re-\nsearchers in the engineering community. Their solutions are typicallygenetic algorithm or simulated annealing based. In contrast, we take\na heuristic search approach.\nThe contributions of this paper are as follows:\n1. An understanding of an important real-world engineering applica-\ntion. Given the difficult cost function and numerous special cases,\nan elegant all-encompassing solution is not possible.\n2. A new algorithm for determining and placing the minimum num-\nber of manholes (ground connections to the sewer) on a singleroad. 
Dynamic programming is used to obtain an optimal globallayout cost, subject to a restriction on the location of manholes in\nintersections.\n1Department of Computing Science, University of Alberta, Canada, email:\n{rholte,jonathan }@ualberta.ca3. The road layout, expressed as a graph, has cycles. The sewer sys-\ntem cannot have cycles. We present a new approach for “cutting”\nthe graph into a tree and exploring the search space of possiblespanning trees to find a high-quality solution.\n4. A complete working system that can produce an anytime solution.\nIt scales to larger networks than have been solved previously.\nAlthough this research deals with sewer system planning, these\nideas are applicable to other types of pipe networks (e.g., water dis-\ntribution networks [1]).\n2 SEWER SYSTEMS\nA sewer system is a subterranean system used to convey waste to one\nor more collection points (outfalls). There are two types of sewer sys-\ntems: sanitary, to convey industrial and household waste, and storm,\nto prevent flooding by draining surface water. In this paper, without\nloss of generality we limit the discussion to storm sewers.\nEach segment of pipe in a sewer system is connected by a man-\nhole. A sewer design is a list of manholes and pipes. Each manhole\nhas a location and depth. Every pipe has an upstream and down-\nstream manhole, a diameter, and an upstream and downstream depth.\nThe difficulties in designing a sewer arise from a large set of con-\nstraints. For example, the sewer must have the capacity to handle the\nload it is expected to experience. All roads in a subdivision must havea pipe, and all pipes must go under the road. The slope of all pipes\nmust be over some minimum value so that they drain (we only con-sider systems which use gravity to properly drain) and under some\nmaximum so that the pipes do not wear too quickly. There are only\ncertain fixed diameters of pipes available. Pipes must be buried to aminimum depth. Segments of pipe must not exceed some fixed maxi-\nmum length. We consider all of these in our work, but there are a few\nother parameters not included that would be needed for a production\nsystem (e.g., curved pipe segments, and constraints on the angle at\nwhich two pipes can meet).\nAn important issue that needs to be considered when burying pipes\nis the slope. Pipe slope affects both the flow velocity and the pressurewithin the pipes. The goal of design is to keep the flow velocity above\nthe minimum self cleansing velocity. If the velocity of the flow istoo low, deposits may build up within the pipe, obstructing the flow.\nHowever, if the velocity is high enough these deposits are prevented,\nand thus the system is self cleansing. Sewer systems where the slope\nof the pipes alone convey the sewage are known as gravity sewer\nsystems; more complex systems may need pumps. In this paper, weonly consider gravity-based systems.\nThe rules for the design and layout of a sewer system are governed\nby the regulations of the local municipality in which the housingsubdivision is located. There are numerous constraints imposed by\nmunicipalities, with no standardization across jurisdictions. A pro-\nduction quality sewer design tool needs to handle the plethora of pa-rameter settings.ECAI 2010\nH. Coelho et al. (Eds.)\nIOS Press, 2010\n© 2010 The authors and IOS Press. All rights reserved.\ndoi:10.3233/978-1-60750-606-5-655655\nFor storm sewers, the flow of water that will go through the pipes\nis estimated using hydrological analysis. 
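The design constraints listed above (a slope window so that pipes drain by gravity without wearing too quickly, a fixed catalogue of diameters, a minimum burial depth, a maximum distance between manholes) are what the later optimisation has to respect. A minimal sketch of how a candidate segment could be represented and checked follows; the class names, field names and numeric limits are illustrative assumptions only, not values taken from the paper or from any municipal regulation.

from dataclasses import dataclass
import math

@dataclass
class Manhole:
    x: float
    y: float
    depth: float                    # metres below the road surface

@dataclass
class Pipe:
    upstream: Manhole
    downstream: Manhole
    diameter: float                 # metres, must come from a fixed catalogue
    up_depth: float                 # invert depth at the upstream end
    down_depth: float               # invert depth at the downstream end

ALLOWED_DIAMETERS = [0.25, 0.30, 0.375, 0.45, 0.525, 0.60]   # placeholder catalogue

def pipe_is_feasible(p, surface_drop, min_slope=0.003, max_slope=0.10,
                     min_depth=1.5, max_length=60.0):
    """Check one segment against the constraints described in the text: a slope
    window, a discrete set of available diameters, a minimum burial depth, and
    a maximum distance between manholes. All numeric limits are placeholders."""
    length = math.hypot(p.upstream.x - p.downstream.x,
                        p.upstream.y - p.downstream.y)
    if length > max_length or p.diameter not in ALLOWED_DIAMETERS:
        return False
    if min(p.up_depth, p.down_depth) < min_depth:
        return False
    drop = surface_drop + (p.down_depth - p.up_depth)    # invert elevation drop
    return min_slope <= drop / length <= max_slope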
This is a mathematical ap-\nproximation based on evaporation, precipitation, snow meltage, othertypes of natural water movement, and the historical record.\nWhen creating a sewer system plan, the engineer must consider the\nhorizontal and vertical alignments. The horizontal alignment spec-ifies where the pipes and manholes are placed within the subdivi-\nsion. Standard engineering practice defines a corridor underneath the\nroadway where pipes are allowed to be placed. To allow for propermaintenance of the sewer system, manholes must be placed at speci-\nfied intervals along the pipe. The maximum distance differs amongst\nthe different diameters of pipe. Manholes are also typically placed\nwherever the pipe changes direction. For roads with a high degree of\ncurvature, this can result in many manholes.\nVertical alignment is concerned with the depth of sewer pipes.\nThere are several factors that must be considered when determin-\ning these depths. First, the pipes must be buried deep enough to pre-\nvent damage from the surface, including pressure from traffic andweather. Second, there is usually a minimum required separation be-\ntween the sewer and pipes/cables from other utilities. For example,underground power cables and clean water distribution systems may\nalso lie underneath the same road corridors as the sewer system.\nThe optimization process for sewer system planning can be sepa-\nrated into the two interrelated problems of layout (horizontal align-ment) and implementation (vertical alignment). The main goal is to\nproduce the plan with the lowest cost, while meeting the specified en-gineering constraints defined for the project. Cost considerations typ-\nically includes the manholes (number and depth) and pipes (number,\nlength, diameter, and depth). For more technical information consulta civil engineering text such as [6].\n3 RELATED WORK\nMuch of the engineering literature assumes fixed pipe locations and\ntries to optimize the pipe diameters and slopes. “ Most pipe optimiza-\ntion methods have not considered the layout optimization along with\nthe cost due to the extreme complexity involved ...” [7]. These solversmake simplifying assumptions, such as assuming any pipe diame-\nter is possible. By considering the diameter as a continuous variable,\nthe problem is easier to solve by linear/non-linear programming tech-niques. Such solvers have been built based on dynamic programming\n[12] and genetic algorithms [8, 11]. An alternative is a knowledge-\nbased blackboard architecture approach [2].\nDespite the promising results reported in the literature, none of\nthese approaches are used by the engineering community. Some ofthese solutions work well on small subdivisions, but do not scale to\nlarge ones. Further, the simplifying assumptions (e.g., pipe diameter)\nresult in illegal solutions.\nThe graph of roads has to be converted to a tree data structure\n(cycles are not allowed). Cuts are made in the road graph to create\na tree. When it comes to selecting where to make the cuts, many\nauthors optimize the sewer system layout considering only the high-level connectivity [4, 9]. An intuitive approach to cut selection is\nto apply standard spanning tree algorithms to the multigraph. One\napproach is to use the cuts that yield a shortest path spanning tree\nrooted at the outfall [10]. Evolution-based algorithms are a popularalternative [4, 9]. The difficulty with all of the above approaches is\nthat the cost for each element in the graph is not fixed. 
For example,for an edge the connectivity of the remainder of the network affects\nthe cost. Modifying the connectivity might change the diameter of\nthe pipe represented by the edge. To be effective, a cut selection al-\nFigure 1. Small subdivision (left); multigraph (middle left); augmented\ngraph (middle right) with two edges marked by arrows for removal to form a\nspanning tree; cut graph (right).\nFigure 2. Sewer planner architecture.\ngorithm must address these issues.\nFew researchers consider the entire sewer problem. There have\nbeen several attempts to simultaneously optimize two or more com-\nponents of the problem [3, 5]. Recent work has tried multi-objective\noptimization [1, 7]. All of these systems were tested with either small\nsubdivisions or a few artificial graphs. The largest problem reportedsolved is half that of the large subdivision reported in this paper.\nMost previous research disregards the placement of manholes. A\nclever manhole placement scheme may result in pipelines with fewer\nmanholes, further reducing the cost of the solution. The reduction of\neven a few manholes in a sewer system plan may save tens or hun-\ndreds of thousands of dollars in construction costs. Few researchers\nhave addressed this issue. [12] presents a method for doing this, how-ever this method requires an initial layout to be provided and is not a\ngeneral solution. [3] introduces a system architecture that considers\nthis level of optimization, but provide little detail.\n4 SEWER PLACEMENT SOLVER\nConsider the example subdivision shown in Figure 1 (left). This is\ntransformed into an equivalent multigraph representation, Figure 1\n(middle left), where each straight-line road segment becomes an edge\nand each intersection and cul de sac becomes a vertex. Cut selection\nalgorithms are used to convert this to a cycle-free graph representing\na set of pipes which cover each road in the subdivision. Figure 1\n(right) is a potential cut selection result.\nOur proposed solution architecture is presented in Figure 2. Given\na road layout and hydrology information, the design and implemen-\ntation of a sewer system can be broken down into four problems:\n•Cut selection: take a network of roads, which may contain cycles,and produce a graph of roads and intersections with no cycles;\n•Global layout: choose the manhole locations at intersectionswhere multiple pipes meet;N.Burchetal./Automating Layouts ofSewersinSubdivisions 656\nFigure 3. Test subdivisions (large, medium, and small).\n•Local layout: lay down a pipeline along a single road;\n•Vertical layout: resolve the third dimension by figuring out pipe\ndiameters and depths.\nWith the exception of the vertical layout algorithm, which finds alocal minimum, suboptimality comes from sampling and discretiza-\ntion. The program can increase these, allowing for improved quality\nsolutions with increased investment of computation time.\nOur system supports iterating on a solution or parts of a solution.\nAs stated previously, the cost for each element in the graph is notfixed; for an edge the connectivity of the remainder of the network\naffects the cost. The vertical solver might need to get a solution, re-\nvise the edge costs, and iterate until a stable result is achieved.\nAll our tests used three real-world subdivisions shown in Figure 3,\nlarge (38 intersections and 55 roads), medium (18 intersections and26 roads), and small (15 intersections and 16 roads). All three are\nconstructed from actual curb-line survey data. 
They are not shown to\na common scale. Note that the large subdivision has twice as manyroads and intersections as the largest real-world subdivision that has\nbeen previously solved.\n4.1 Cut Selection\nThe general problem we consider starts with nothing more than an\noutflow and a description of the road curb-lines. There is no informa-\ntion about the direction of flow, and there are almost certainly cycles.Before doing the layout we must generate a tree from the multigraph\ndescribing the roads. We cannot just remove a road from each cy-\ncle, as all roads must still be serviced. For each cycle we choose a\nroad and introduce a cut: a small gap in the pipeline and some newmanholes at the cut ends.\nThis cut could be placed anywhere along a road, but we only con-\nsider cutting the road right by an intersection. This only requires one\nadditional manhole, rather than one manhole for each of two new\nsegments of road. While this might prevent us from discovering a\ntruly optimal solution, the cost of having an extra manhole makes\nthis simplification seem quite reasonable.\n2The result of this process\nis a new graph, possibly with new vertices, which is a tree rooted\nat the outfall. We call this graph a cut graph, and it satisfies all theconditions necessary for running the global layout algorithm.\nWe found it unnecessarily complicated dealing directly with the\noriginal multigraph describing the roads. Instead we use an aug-mented graph, with a vertex for each intersection and road. There\nis an edge between road and intersection vertices if the road runs\ninto the intersection. An example is shown in Figure 1. The smallsubdivision (left) has 15 intersections and 16 roads. The augmented\n2This may be suboptimal in the case where a road segment has a hill in the\nmiddle. Here it might be better to cut at the top of the hill.graph (middle right) has 31 vertices representing both intersections\nand roads. There are 32 edges, one for each side of every road.\nGenerating cuts in the original multigraph is now just a matter of\ngenerating a spanning tree in the augmented graph. If a road vertex isconnected to both intersection vertices in the spanning tree, the road\nis unmodified in the original road graph. On the other hand, if theedge to one of the intersection vertices is missing, a cut is placed on\nthe road beside that intersection. It is not possible for both edges to\nbe missing, as there are only two edges connected to a road vertexand all vertices will be reachable in the spanning tree. An example of\nthis process is shown in Figure 1.\nThis allows us to use standard spanning tree algorithms, but it does\nnothing to reduce the space of possible arrangements of cuts. On the\nreal-world test data we used, the small subdivision had only 24 pos-\nsible trees, which is quite manageable. The medium sized graph hadroughly ten million arrangements, and the largest had a billion. At\nthis time, these numbers are large enough that exhaustive enumera-\ntion is not reasonable, so we randomly sample the spanning trees.\nIn our experiments, we tried two different sampling methods. One\nmethod was to sample uniformly at random from all spanning trees,and the other was to choose spanning trees that have a small maxi-mum distance to the outfall. The second method finds minimum cost\nspanning trees where the weight of both road edges in the augmentedgraph is the shortest path length to the upstream intersection from the\noutfall in the original graph. 
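The cut-selection step just described lends itself to a short sketch: roads and intersections become the two vertex classes of the augmented graph, a spanning tree is sampled, and any road vertex that lost one of its two edges in the tree receives a cut beside the corresponding intersection. The code below is an illustration with made-up function names, not the authors' implementation; in particular, the randomised-Kruskal sampling shown here is not uniform over spanning trees, whereas the paper also samples uniformly.

import random

def augmented_graph(roads):
    """roads: list of (road_id, intersection_a, intersection_b)."""
    edges = []
    for rid, a, b in roads:
        edges.append((("road", rid), ("int", a)))
        edges.append((("road", rid), ("int", b)))
    return edges

def sample_spanning_tree(edges):
    """Randomised Kruskal: shuffle the edges and keep those that join two
    different components (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree, shuffled = [], edges[:]
    random.shuffle(shuffled)
    for u, v in shuffled:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def cuts_from_tree(roads, tree):
    """A road keeps only the tree edges incident to its vertex; the missing
    side, if any, is where the cut (and the one extra manhole) goes."""
    kept = {}
    for u, v in tree:
        road, inter = (u, v) if u[0] == "road" else (v, u)
        kept.setdefault(road[1], set()).add(inter[1])
    cuts = []
    for rid, a, b in roads:
        for inter in {a, b} - kept.get(rid, set()):
            cuts.append((rid, inter))   # cut road rid beside intersection inter
    return cuts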
It is not guaranteed to produce only the\ncut graphs with the shortest maximum pipe length, but it will producea much lower average maximum length. The rationale for trying to\nfind a spanning tree with a low maximum path length is that one\nlong pipeline is likely to be more expensive than two short pipelinesthat have the same total length but run in parallel (roughly speaking),\nbecause the depth, and corresponding amount of dirt that must be\nexcavated, increases over the length of the pipe.\n4.2 Global Layout\nGiven a cycle-free pipe graph, we can consider the problem of plac-\ning the manholes within intersections (the vertices in the cut graph.)We make the simplifying assumption that we only want to minimize\npipe length and number of manholes. This will minimize cost locally\nfor any single road, but ignores global effects of flow constraint. We\nalso ignore any constraints about the angles at which pipelines meet.\nThe motivating idea is to consider every possible arrangement of\nlocations for every intersection manhole. For every arrangement, we\nrun a local and vertical layout algorithm to get the cost. After con-\nsidering every arrangement, we would know the best global layout.\nThis is an intractable search within a non-convex continuous space,\nbut it can be approximated by discretizing the space. We used four\ndifferent subsets of the candidate manhole locations shown in Fig-\nure 4, with a spacing of 4m. The first set only contains point 1, in the\ncenter of the intersection. This provides a zero-effort baseline. The\nsecond set contains points 1 to 5, the third contains points 1 to 9, and\nthe fourth set contains all 13 points.\nUsing a small set of locations makes the space finite, but the naive\nidea of trying all combinations of locations is still exponential in thenumber of intersections, with the base being determined by the num-\nber of positions per intersection. Since we assumed that all the costsare determined by local factors, we can do much better.\nThe network of pipes is free of cycles, and can thus be described as\na tree rooted at the outfall. Combined with the costs being local, thismeans we can use dynamic programming to find the optimal set of\nlocations efficiently. We visit the vertices using a postfix depth-firstN.Burchetal./Automating Layouts ofSewersinSubdivisions 657\n12\n3\n456\n7\n8910 11\n12 13\nFigure 4. Global layout intersection positions.\nFigure 5. Reducing the number of manholes needed.\nordering, which guarantees that we will know the cost of all possible\nchild subtrees when considering an intersection. If intersection iis\na leaf, we let cost (i, p)=0 for each position pof intersection i.\nOtherwise, cost (i, p) is\n/summationdisplay\nc∈children (i)min\npc(cost (c, p c)+local (i, p, c, p c))\nwhere local (i, p, c, p c)is the cost of the road from intersection ito\nintersection cas determined by running a local layout algorithm on\nthat road from ptopc.\nOnce the costs are known, the optimal positions for every intersec-\ntion can be recovered by making a second pass through the intersec-\ntions, choosing positions for the children which achieves this cost.\n4.3 Local Layout\nGiven a road with fixed endpoints, a local layout algorithm chooses\nthe best locations for manholes when laying a pipe down under that\nroad. The segments of pipe can vary in length, but must remain undera maximum length and stay between the curbs of the road. 
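For concreteness, the Section 4.2 dynamic program can be written down directly from the stated recurrence cost(i, p) = sum over children c of min over pc of (cost(c, pc) + local(i, p, c, pc)). In the sketch below, local_cost and the per-intersection candidate position sets stand in for the precomputed local-layout results; they are assumptions of this sketch rather than code from the paper.

def global_layout(children, positions, local_cost, root):
    """children[i]  -> child intersections of i in the cut tree (rooted at the outfall)
       positions[i] -> candidate manhole positions for intersection i
       local_cost(i, p, c, pc) -> local-layout cost of the road between i and c
                                  for positions p and pc (assumed precomputed)"""
    cost, best_child_pos = {}, {}

    def solve(i, p):                      # memoised post-order evaluation
        if (i, p) in cost:
            return cost[(i, p)]
        total = 0.0
        for c in children.get(i, []):     # leaves contribute cost 0
            value, pc_star = min(((solve(c, pc) + local_cost(i, p, c, pc), pc)
                                  for pc in positions[c]), key=lambda t: t[0])
            total += value
            best_child_pos[(i, p, c)] = pc_star
        cost[(i, p)] = total
        return total

    root_pos = min(positions[root], key=lambda p: solve(root, p))

    chosen, stack = {root: root_pos}, [root]    # second pass: recover positions
    while stack:
        i = stack.pop()
        for c in children.get(i, []):
            chosen[c] = best_child_pos[(i, chosen[i], c)]
            stack.append(c)
    return chosen, cost[(root, root_pos)]

The memoised bottom-up evaluation and the top-down recovery pass mirror the two passes described in the text.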
There are\npotential savings to be found by doing a more complicated design\nthan just running the pipeline down the center of the road. Considerthe roadway in Figure 5, shown with thick lines. When the pipeline\nruns under the center of the road, shown by the solid line, two inter-\nmediate manholes are needed. By choosing off-centre points, as inthe dashed line, only one intermediate manhole is needed. As with\nglobal layout, we use the simplifying assumption that we always\nwant to minimize pipe length and the number of manholes.\nThe idea is to repeatedly find the positions that are reachable with\none more pipe. For example, given one pipe of length l, we can draw\nan arc between the two curbs that are reachable in ≤ldistance and\nreachable from the start point. All points on the inside of this frontier\nare reachable using just one pipe and an extra manhole. The next\nfrontier is generated by drawing arcs around all points within thefirst frontier, and taking the union of all these spaces. This process\nof generating new frontiers continues until the end point lies withina frontier. This is demonstrated in Figure 6 (left).\nAt this point we can generate the layout by tracing backwards\nthrough the frontiers. For any point outside the first frontier, therewill be some point in a previous frontier which minimizes the dis-\ntance from the start to the given point. This new point will be a man-\nhole location, and we continue tracing backwards from there. This\nwill produce an optimal solution.123 4 5 6\n7\nFigure 6. A simple roadway with successive frontiers of reachable points\n(left); two strips along the roadway with dashed lines indicating allowed\nmanhole locations (right).\n# f is the frontier to be updated\n# p is the point we are expanding from# p_distance is the distance to reach pUPDATE_FRONTIER( f, p, p_distance )\nfor each segment s in strips\nq := farthest reachable point on s from p\nif q is a valid point\nd := DISTANCE( p, q ) + p_distancei fsi si nf\n<q’, p, d’> := f[ s ]if q farther along s than q’ or\nq=q ’a n dd<d ’\nf [s]: =< q ,p ,d >\nelse f[s]: =< q ,p ,d >\nLOCAL_LAYOUT( start, end )\nif end->start crossing a curb\nreturn []\n# generate new frontiers until# we reach the end of the roadcreate frontier fnum_frontiers := 1frontiers[ num_frontiers ] := ffinished := {}UPDATE_FRONTIER( f, start, 0 )\nwhile no point in f can reach end\ncreate frontier new_ffor each segment s in f\nif s not in finished\n<p, parent, d> := f[ s ]UPDATE_FRONTIER( new_f, p, d )if p is at the end of s\nadd s to finished\nnum_frontiers := num_frontiers + 1frontiers[ num_frontiers ] := new_ff := new_f\n# build the final paths := segment in f with shortest path to endpoints := []for each f from num_frontiers down to 1\nf := frontiers[ f ]<p, parent, d> := f[ s ]add p to front of pointss := segment containing parent\nFigure 7. Local layout algorithm.\nThe above is not feasible for a continuous space of manhole lo-\ncations, as bends or corners in the road preclude a closed form de-\nscription of the frontiers. Instead, we discretize the space by dividing\nevery straight section of the road lengthwise into strips, and only al-\nlow points along the middle of the strips. Figure 6 (right) shows a\nsimple roadway divided into strips. A frontier is now a description of\nhow far along a strip we can reach, for every strip. Note that runningthe pipe along the center of the road is equivalent to using a single\nstrip. 
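The Figure 7 pseudocode above is damaged by PDF extraction (spaces inserted mid-word, comparison operators split across fragments). The sketch below is an interpretation rather than a transliteration: the continuous frontier construction is replaced by a discrete set of candidate manhole points sampled along the strips, and the layout with the fewest manholes (ties broken by total pipe length) is found with a lexicographic shortest-path search. The predicate segment_ok, which should test that a pipe stays between the curbs, is deliberately left abstract, and all names are assumptions.

import heapq, math

def local_layout(start, end, candidates, max_pipe_len, segment_ok):
    """candidates: list of (x, y) points sampled along the strips.
    Returns the intermediate manhole locations in order, or None if
    the end point cannot be reached."""
    nodes = [start] + list(candidates) + [end]
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Cost of a state is (number of pipes used, total length); minimising this
    # lexicographically reproduces "fewest manholes, then shortest pipe".
    best = {0: (0, 0.0)}
    heap = [(0, 0.0, 0, [])]
    while heap:
        hops, length, i, path = heapq.heappop(heap)
        if (hops, length) > best.get(i, (float("inf"), float("inf"))):
            continue                        # stale queue entry
        if i == len(nodes) - 1:
            return path
        for j, q in enumerate(nodes):
            if j == i:
                continue
            d = dist(nodes[i], q)
            if d <= max_pipe_len and segment_ok(nodes[i], q):
                cand = (hops + 1, length + d)
                if cand < best.get(j, (float("inf"), float("inf"))):
                    best[j] = cand
                    new_path = path + ([q] if j != len(nodes) - 1 else [])
                    heapq.heappush(heap, (hops + 1, length + d, j, new_path))
    return None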
Figure 7 shows pseudocode for the local layout algorithm.\n4.4 Vertical Layout\nA complete sewer layout does not just specify the two-dimensional\nlocation of pipes and manholes. Depths must also be specified, asN.Burchetal./Automating Layouts ofSewersinSubdivisions 658\nwell as selecting from available pipe diameters for each segment.\nThe design must also satisfy various additional constraints, the most\nobvious being the capability of handling the expected load on thesewer. There exist commercial packages for automating this process\n(like StormCAD) that will suggest depths and pipe diameters given\na description of the pipeline and the loading.\nWe developed a program which solves this problem for storm sew-\ners. The cost function is messy, reflecting the real-world nature of\nthe application. The most important factors are the number of man-holes, the pipe depths (piecewise non-linear) and the pipe diameters\n(roughly quadratic). The solver iterates through successive mixed in-\nteger programs. Each iteration starts with the two-dimensional layout\nof the pipeline and the expected input to the pipeline. The output is aminimal cost set of depths and pipe diameters, given any piecewise\nlinear approximation of a cost function.\nMultiple iterations are necessary because the expected load, as de-\ntermined by a commonly used prediction called the rational method,depends on hydrological information and travel times through thepipes. The times, calculated using the Manning formula, depend on\nthe slope and diameter of the pipes, which are decided during an iter-ation. In the rational method, shorter times result in a larger expected\nload. Under-estimating the times always produces a valid design, in\nthat the resulting sewer will handle the real load estimate given theactual times, but the pipes may be larger or steeper than necessary,\nwhich increases the cost.\nThe initial time estimates are set low enough to be a guaranteed\nunderestimate, so the first proposed layout is a valid design. Actual\ntravel times are computed for this design, and we then have times\nwhich produce a valid design as well as some new time estimates.We now proceed as follows. Using the new times, generate a new\ndesign. If this design is valid, we save the time estimates for this\ndesign, compute the new actual travel times, and continue. If the newcost is worse, we stop, and use the previous valid design. Finally, if\nthe new layout is not valid, we generate new time estimates by taking\nthe average of the current estimates and the times that last produceda valid layout. If no time changes by more than a minimum amount\nwhen doing this, we stop, and just use the last valid layout.\nIterating in this fashion is not guaranteed to be globally optimal,\nbecause we do not explore the entire space of flow time estimates.This code, along with a real world cost model, was used to generate\nthe total cost estimates in the results section. The model provided acost per manhole and a per length pipe cost over varying depths for\neach of 23 different pipe diameters.\nWhile this component is similar to that in other proposed systems,\nours differs by 1) using a piecewise linear approximation of the cost\nfunction, and 2) iterating starting with a lower bound.\n4.5 Complete System\nThe information that is available initially is a collection of curb lines,\neach of which is a sequential set of three dimensional positions all theway along one continuous piece of curb. 
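The iteration scheme described for the vertical layout solver (start from guaranteed travel-time underestimates, re-solve with updated times, and back off by averaging when a solve becomes invalid) can be summarised as the loop below. This is one reading of the prose above; solve_mip, actual_travel_times, is_valid and cost are stand-ins for the mixed-integer solve, the Manning-formula travel times, the load check and the cost model, none of which is shown here.

def vertical_layout(time_lower_bounds, solve_mip, actual_travel_times,
                    is_valid, cost, eps=1e-3):
    # The first solve uses guaranteed underestimates, so its design is valid.
    valid_times = dict(time_lower_bounds)
    best_design = solve_mip(valid_times)
    estimates = actual_travel_times(best_design)     # candidate new estimates
    while True:
        design = solve_mip(estimates)
        if is_valid(design) and cost(design) < cost(best_design):
            valid_times, best_design = estimates, design
            estimates = actual_travel_times(design)
        elif is_valid(design):
            return best_design                       # valid but no cheaper: keep previous
        else:
            # Move the estimates halfway back toward the last valid times.
            new_estimates = {k: 0.5 * (estimates[k] + valid_times[k])
                             for k in estimates}
            if max(abs(new_estimates[k] - estimates[k]) for k in estimates) < eps:
                return best_design
            estimates = new_estimates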
This gives the outline of the\nroads, as seen in the real-world data shown in Figure 3. The other\npiece of information needed is the hydrology information.\nWe use the process shown in Figure 8. Because local layout only\nconsiders local properties, we save time by precomputing local lay-out results for all positions of both intersections and all three possi-\nbilities on every road (no cut, or a cut at either end). After this, werepeatedly generate new cut graphs. For each cut graph, we run the\nglobal layout algorithm using the precomputed local layout costs.\nThis gives a complete two-dimensional layout. At this point, oneSEWER_LAYOUT()\nfor each road r\nfor each pair u,d of upstream and\nand downstream manhole locations\nsave results of LOCAL_LAYOUT(u,d) on r\ncut r by upstream manholesave results of LOCAL_LAYOUT(u,d) on rmove cut beside downstream manhole\nsave results of LOCAL_LAYOUT(u,d) on rremove in cut r\nfor some desired number of cut graphs\nCUT_SELECTION()GLOBAL_LAYOUT( saved local layout costs )compute storm input for each pipeVERTICAL_LAYOUT()\nif new design has lowest cost so far\nsave current design\nreturn saved design\nFigure 8. Sewer system layout process.\nwould use the topology and hydrology information to figure out how\nmuch water is expected to enter every pipe. In our experiments, wewere forced to just use fixed estimates, as we did not have sufficient\ninformation available to compute this for arbitrary manhole loca-\ntions. With these estimates, we can run the vertical layout algorithm\nto produce a complete sewer design and a dollar cost estimate.\n5 RESULTS\nIt is difficult to do a fully realistic assessment of our system. Thereare no available test sets, other than a few artificial problems. Further,\ncommercial data is not public—the final layout is public but the data\nused to compute the answer (e.g., hydrology) is not. Thus compar-\nisons to deployed sewer systems are not possible. We are fortunate\nto have three real-world subdivisions for use in our testing. Even so,\nwe do not have access to all the data needed to do a fair comparisonbetween the computer-generated and human-generated solutions.\nIn the following, system performance is expressed in terms of\nnumber of manholes placed. A typical manhole costs between $20K\nand $50K (the cost grows nonlinearly with depth). The cost of the\npipe is usually secondary.\nThe experiments were run on the three subdivisions shown in Fig-\nure 3: large (L), medium (M) and small (S). For all three subdivi-\nsions, we consider a number of randomly generated cut graphs and\nrecord the best result. The first two experiments consider the numberof manholes used in the layout. The final experiment uses the vertical\nlayout code to provide an estimated total cost of a layout.\nThe results for the first set of experiments, shown in Table 1, are\ngenerated from 100,000 random cut graphs.\n3We ran tests using 1,\n5, and 20 strips with the local layout algorithm. For each of these\ncases, we use 1, 5, 9, or 13 intersection manhole positions for the\nglobal layout algorithm. Finally, we used both 30m and 60m for the\nmaximum length of a pipe segment. The best results are in bold.\nThe most substantial improvement can be seen with the global lay-\nout algorithm. In the case of the large subdivision with shorter pipes,\nusing 13 positions produced a layout with 16 fewer manholes than\nusing a single manhole location. 
Using more strips for the local lay-\nout algorithm only reduced the manhole count by one when longer\npipes were allowed. This is obviously a much more modest savings,\nbut even a single manhole can be a significant cost.\nThe use of one strip and 13 positions at intersections is a close\n3Experiments with artificially-constructed subdivisions show similar trends.N.Burchetal./Automating Layouts ofSewersinSubdivisions 659\nStrips Manhole 30m pipes 60m pipes\nPositions L M S L M S\n1 1 210 98 62 101 47 31\n1 5 202 93 62 94 46 30\n1 9 196 90 60 89 42 28\n1 13 195 90 60 88 42 27\n5 1 207 97 62 98 45 31\n5 5 199 92 62 93 44 29\n5 9 192 88 59 88 41 28\n5 13 191 88 59 87 41 27\n20 1 206 97 62 98 45 30\n20 5 199 92 62 93 44 29\n20 9 191 88 59 88 41 28\n20 13 190 88 59 87 41 26\nTable 1. Manholes (random).\n30m pipes 60m pipes\nL M S L M S\n1 position 210 98 62 101 47 30\n5 positions 202 94 62 94 44 29\n9 positions 193 90 60 90 42 28\n13 positions 193 90 60 90 42 27\nTable 2. Manholes (random; low maximum path length).\napproximation to what a civil engineer might do (center of the road).\nGiven this assumption, then the computer outperforms the human onthe 30m pipes by 5, 2, and 1 manholes for the large, medium and\nsmall subdivisions, respectively, and by 1 in each case for the 60m\npipes. These savings are in addition to the many hours of human\nlabor required to obtain these numbers.\nThe results show diminishing returns as the number of strips and\ncandidate manhole locations are increased. A typical run on the largesubdivision took 23 minutes, suggesting that high-quality results can\nbe obtained in a reasonable amount of time. Further, we can easilysolve much larger problems (artificially constructed), but in this pa-\nper we limit ourselves to only real-world scenarios.\nThe second series of experiments, shown in Table 2, uses 100,000\nrandom cut graphs with a low maximum path length. We only con-\nsider the case where we use 20 strips for local layout. The results\nare very similar to using random cut graphs with only a single stripfor local layout. The decrease in performance when using low path\nlength cut graphs is not entirely unexpected, as it is limiting the space\nof cut graphs being searched in an attempt to minimize a cost whichmanhole count does not reflect.\nTrying to reduce the maximum pipeline length was done because\nwe hoped that it would generate lower cost solutions. To look for thiseffect, we did a final series of experiments using the vertical layout\ncode to generate a total cost. We first used a baseline of 1 strip and 1position, running the pipe down the center of the road. We then used\n20 strips for local layout, 13 positions for global layout, and a 30m\nmaximum length for pipe segments. The solution was constrained to\nuse a combination of 23 commerial pipe sizes. Results are shown inTable 3. Each data point represents the average of the minimum cost\nin millions of dollars over 10 runs with 2,000 cut graphs. For the\nlarge subdivision, a roughly $240,000 (7%) reduction is possible.\nWhile the results for the minimum path length cut graphs are\nslightly negative for the medium and small subdivisions, the differ-ence is smaller than the difference in the manhole counts multipliedType L M S\nBaseline $3.459 $1.565 $0.983\nRandom cuts $3.239 $1.459 $0.957\nRandom low depth cuts $3.220 $1.462 $0.960\nTable 3. Solution costs, in millions of dollars.\nby the $10,000 per manhole cost. 
Encouragingly, the largest subdivi-\nsion result is positive despite this, suggesting that maximum pipeline\nlength is a factor, but perhaps not as significant as hoped.\nOn a standard 2.83 GHz quad core PC, the computation times for\nthe random cuts line in Table 3 were 120.8 (large), 26.3 (medium)\nand 11.5 (small) minutes. For quick prototyping, the linear relaxation\nof the vertical layout problem is generally around only 0.1% error,\nand is about 6 times faster on the largest problem.\n6 CONCLUSIONS\nIn this paper, we demonstrated the first system for addressing the\ncomplete cycle of design and layout for a storm sewer system. The\nsystem can produce high-quality solutions that reduce the numberof manholes and the cost of construction. The system is scalable to\nmuch larger subdivisions than has been attempted before.\nThis research is just the first step. The goal is to produce high-\nquality solutions for the problem of laying down both storm and san-\nitary sewers. With two sets of pipes, there will be conflicts as each\nsystem has additional connections (e.g., all houses have a connection\nto the sanitary sewers). The “easy” solution is to have one set of pipes\nat a lower elevation than the other, but this can dramatically increase\nthe cost. Automating the two-pipe problem is one of the long-sought-\nafter dreams of civil engineering.\nREFERENCES\n[1] M. Afshar, M. Akbari, and M. Marino, ‘Simultaneous layout and size\noptimization of water distribution networks: Engineering approach’,\nJournal of Infrastructure Systems, 11(4), 221–230, (2005).\n[2] K. Chau and C. Cheung, ‘Knowledge representation on design of storm\ndrainage system’, in Innovations in Applied Artificial Intelligence ,n u m -\nber 3029 in LNCS, 886–894, (2004).\n[3] F. Diogo, G. Walters, E. de Sousa, and V . Graveto, ‘Three-dimensional\noptimization of urban drainage systems’, Computer-Aided Civil and In-\nfrastructure Engineering ,15, 409–42, (2000).\n[4] Z. Geem, T. Kim, and J. Kim, ‘Optimal layout of pipe networks us-\ning harmony search’, in International Conference on HydroScience and\nEngineering , (2000).\n[5] A. Hassanli and G. Dandy, ‘Optimal layout and hydraulic design of\nbranched networks using genetic algorithms’, Applied Engineering in\nAgriculture, 21(1), 55–62, (2005).\n[6] H. Methods and S. Durrans, Stormwater Conveyance Modeling and De-\nsign, Haestad Press, 2003.\n[7] T. Prasad and N. Park, ‘Multiobjective genetic algorithms for design of\nwater distribution networks’, Water Resources Planning and Manage-\nment, 73–82, (2004).\n[8] D. Savic and G. Walters, ‘Genetic operators and constraint handling for\npipe network optimization’, in Evolutionary Computing, number 933 in\nLNCS, 154–165, (1995).\n[9] D. Smith and G. Walters, ‘An evolutionary approach for finding op-\ntimal trees in unidirected networks’, European Journal of Operations\nResearch, 120, 593–602, (2000).\n[10] S. Tekeli and H. Belkaya, ‘Computerized layout generation for sanitary\nsewers’, Water Resources and Management ,112(4), 500–515, (1986).\n[11] K. Vairavamoorthy and M. Ali, ‘Optimal design of water distribution\nsystems using genetic algorithms’, Computer-Aided Civil and Infras-\ntructure Engineering ,15, 374–382, (2000).\n[12] G. Walters, ‘The design of the optimal layout for a sewer network’,\nEngineering Optimization ,9, 37–50, (1985).N.Burchetal./Automating Layouts ofSewersinSubdivisions 660", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Ji-m3hQuzEcw", "year": null, "venue": "ECAI2010", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-60750-606-5-1111", "forum_link": "https://openreview.net/forum?id=Ji-m3hQuzEcw", "arxiv_id": null, "doi": null }
{ "title": "Towards Argumentation-based Multiagent Induction.", "authors": [ "Santiago Ontañón", "Enric Plaza" ], "abstract": "In this paper we propose an argumentation-based framework for multiagent induction, where two agents learn separately from individual training sets, and then engage in an argumentation process in order to converge to a common hypothesis about the data. The result is a multiagent induction strategy in which the agents minimize the set of cases that they have to exchange (using argumentation) in order to converge to a shared hypothesis. The proposed strategy works for any induction algorithm which expresses the hypothesis as a collection of rules. We show that the strategy converges to a hypothesis indistinguishable in training set accuracy from that learned by a centralized strategy.", "keywords": [], "raw_extracted_content": "Towards Argumentation-based Multiagent Induction\nSantiago Onta ˜n´onand Enric Plaza1\nAbstract. In this paper we propose an argumentation-based frame-\nwork for multiagent induction, where two agents learn separately\nfrom individual training sets, and then engage in an argumentationprocess in order to converge to a common hypothesis about the data.The result is a multiagent induction strategy in which the agents min-imize the set of cases that they have to exchange (using argumen-tation) in order to converge to a shared hypothesis. The proposed\nstrategy works for any induction algorithm which expresses the hy-\npothesis as a collection of rules. We show that the strategy converges\nto a hypothesis indistinguishable in training set accuracy from that\nlearned by a centralized strategy.\n1 Introduction\nMultiagent induction is the problem of learning a hypothesis (such\nas a set of rules, or a decision true) from data when the data is dis-tributed among different agents. Some real-life domains involve suchforms of distributed data, where data cannot be centralized for sev-\neral reasons. In this paper we will propose a framework in which\nagents will use a limited form of argumentation in order to arrive to ahypothesis of all the data while minimizing the communication, andspecially minimizing the amount of examples exchanged, and ensur-ing that the hypothesis found is as good as if centralized inductionwith all the data was used.\nPrevious work [5] has shown how argumentation can be used by\nagents that use lazy learning or case-based reasoning (CBR) tech-niques. In this paper we will introduce a framework where agentsthat use inductive learning together with CBR to argue about learnt\nhypotheses. In this framework, agents will generate hypotheses lo-\ncally, and then argue about them until both agents agree. During theargumentation process, agents might exchange a small number of ex-amples. Formalizing agent communication as argumentation allowsus to abstract away from the induction algorithm used by the agents.Thus, all the strategies presented in this paper can work with anyinduction algorithm that learns sets of rules.\n1.1 Agents, Examples, and Arguments\nLetA1andA2be two agents who are completely autonomous and\nhave access only to their individual training sets T1, andT2. A train-\ning set Ti={e1, ..., e n}is a collection of examples. We will restrict\nourselves to classification tasks. Thus, an example e=/angbracketleftP,S/angbracketrightis a\npair containing a problem Pand a solution S∈S.\nOur framework is restricted to hypotheses Hthat can be repre-\nsented as a set of rules: H={r1, ..., r m}. 
A rule r=/angbracketleftD,S /angbracketright\nis composed of a bodyr.D, and a solution, r.S. When a problem\n1IIIA-CSIC, Artificial Intelligence Research Institute Campus UAB, 08193\nBellaterra, Catalonia (Spain), {santi,enric}@iiia.csic.esmatches the body of a rule r.B, we say that the rule subsumes the\nproblem: r.D/subsetsqequalP.\nIn order to use argumentation, two elements must be defined: the\nargument language (that defines the set of arguments that can be gen-\nerated), and a preference relation (that determines which arguments\nare stronger than others). In our framework, the argument language\nis composed of two kinds of arguments:\n•Arule argument α=/angbracketleftA, r/angbracketright, is an argument generated by an agent\nAstating that the rule ris true.\n•Acounterexample argument β=/angbracketleftA, e, α/angbracketright, is an argument gen-\nerated by an agent Astating that eis a counterexample of (an\nexample contradicting) argument α.\nIncluding additional types of arguments, such as “rule counterargu-\nments” is part of our future work.\nAn agent sees two rule arguments αandβasconflicting if there\nare examples in the training set of the agent, which are classifieddifferently by the two rule arguments. In our framework, we assumethat a counterexample cannot be defeated, but a rule argument αcan\nbe defeated by counterexample argument β,i fα subsumes βbutβ\nhas a different solution than α.\n2 Argumentation-based Multiagent Induction\nIn this section we will present two strategies, AMAI (Argumentation-\nbased Multiagent Induction) and RAMAI (Reduced Argumentation-\nbased Multiagent Induction). Both strategies are based on the sameidea, and share the same high level structure.\n1.A\n1andA2use induction locally with their respective training sets,\nT1andT2, and obtain initial hypotheses H1andH2respectively.\n2.A1andA2argue about H1, obtaining a new H∗\n1derived from H1\nthat is consistent with both A1andA2’s data.\n3.A1andA2argue about H2, obtaining a new H∗\n2derived from H2\nthat is consistent with both A1andA2’s data.\n4.A1andA2obtain a final hypothesis H∗=H∗\n1∪H∗\n2. Remove all\nthe rules that are subsumed by any other rule.\nThus, both agents perform induction individually in step 1 and\nthen, in steps 2 and 3 (which are symmetric), the agents use argumen-tation to refine the individually obtained hypotheses and make themcompatible with the data known to both agents. Finally, when bothhypotheses are compatible, a final global hypothesis H\n∗is obtained\nas the union of all the rules learned by both agents while removingredundant rules. AMAI and RAMAI only differ in the way steps 2 and\n3 are performed. Step 2 in AMAI works as follows\n2.a Let H\n0\n1=H1, andt=0.\n2.b If there is any rule r∈Ht\n1that has not yet been accepted byA2,\nthen send the argument α=/angbracketleftA1,r/angbracketrighttoA2. Otherwise (all the\nrules in Ht\n1have been accepted) the protocol goes to step 2.e.ECAI 2010\nH. Coelho et al. (Eds.)\nIOS Press, 2010\n© 2010 The authors and IOS Press. All rights reserved.\ndoi:10.3233/978-1-60750-606-5-11111111\n2.cA2analyzes α.rand tries to find a counterexample that defeats\nit.A2sends the counterargument β=/angbracketleftA2,e ,α/angbracketrighttoA1if a\ncounterexample eis found; otherwise ris accepted and the pro-\ntocol goes back to step 2.b.\n2.d When A1receives a counterexample argument β,β.eis added\nto the training set T1, andA1updates its hypothesis obtaining\nHt+1\n1. 
The protocol goes back to step 2.b, and t=t+1.\n2.e The protocol returns Ht\n1.\nThe main idea is that A1infers rules according to its individual\ntraining set T1, andA2evaluates them, trying to generate counterar-\nguments to the rules that do not agree with its own individual train-\ning set T2. Step 3 in AMAI is the dual situation where A2’s rules are\nattacked byA 1’s counterexamples. Notice that only one counterex-\nample is exchanged at a time in AMAI. The second strategy, RAMAI,\nimproves over AMAI in trying to minimize the number of times the\nhypothesis has to be updated while trying to keep a low number ofexchanged counterexamples. Step 2 in RAMAI works as follows:\n2.a Let H\n0\n1=H1, andt=0.\n2.b Let Rt⊆Ht\n1be the set of rules in the hypothesis of A1not\nyet accepted byA2. If empty, then the protocol goes to step 2.e,\notherwise A1sends the Rt={/angbracketleftA1,r/angbracketright|r∈Rt}toA2.\n2.c For each α∈Rt,A2determines the set of examples Cα\nin its training set that are defeating counterexamples of α.r:\nCα={e∈T2|α.r.D /subsetsqequale.P∧α.r.S /negationslash=e.S}. For each ar-\ngument α∈Rtsuch that Cα=∅,A2accepts rule α.r. Let\nIt⊆Rtbe the subset of arguments for which A2could find\ndefeating counterexamples. A2computes the minimum set of\ncounterexamples Btsuch that ∀α∈It,Cα∩Bt/negationslash=∅.A2\nsends the set of counterexample arguments Btconsisting of a\ncounterexample argument β=/angbracketleftA2,e ,α/angbracketrightfor each pair e,αsuch\nthate∈Bt,α∈It, andβdefeats α.\n2.d When A1receives a set of counterexample arguments Bt, it adds\ntheir counterexamples to its training set T1, and updates its in-\nductive hypothesis, obtaining Ht+1\n1. The protocol goes back to\nstep 2.b, and t=t+1.\n2.e The protocol returns Ht\n1.\nStep 3 in RAMAI is just the dual of Step 2. The idea behind RA-\nMAI is that an example can be a defeating counterexample of more\nthan one rule at the same time. RAMAI computes the minimum set\nof examples that defeat all the rules in Itand sends them all at once.\n3 Experimental Evaluation\nWe tested AMAI and RAMAI in four different data sets from the\nIrvine machine learning repository: soybean, zoology, cars and de-mospongiae. Moreover, we tested it using three different inductionalgorithms: ID3 [7], CN2 [2] and INDIE (a relational inductivelearner [1]). We compared against four strategies: Individual (whereagents just do induction individually), Union (where agents do induc-\ntion individually, and then they put together all the rules they learn\ninto one common hypothesis), DAGGER [3], and Centralized induc-tion (one sole agent having all data). All the results presented are the\naverage of 10 fold cross validation runs.\nWe ran each combination of induction algorithm with multiagent\ninduction strategy (except the combination of INDIE-DAGGER, that\nis not possible, since DAGGER assumes propositional data sets, andINDIE requires them in relational form). The training set accuracyresults confirm is that the hypotheses learnt by AMAI and RAMAI\nare indistinguishable in training set accuracy from those learnt byusing Centralized induction, achieving a 100% accuracy every timewhere Centralized induction also does. When agents perform Indi-vidual induction, having less data, accuracy diminishes; agents using\nthe Union strategy improve their accuracy with respect to an indi-\nvidual strategy, but still it is not guaranteed to be as good as that\nof Centralized accuracy. 
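Step 2.c of RAMAI above asks A2 for a minimum set of counterexamples Bt that hits every attacked rule. The paper does not say how that set is computed; the sketch below uses the standard greedy cover as an approximation, purely for illustration, and its data-structure names are assumptions.

def counterexample_cover(defeaters):
    """defeaters: dict mapping each attacked rule to the set of example ids in
    the local training set that defeat it. Returns (chosen example ids,
    mapping from rule to the counterexample sent for it)."""
    uncovered = {r for r, exs in defeaters.items() if exs}
    chosen, assignment = set(), {}
    while uncovered:
        # Pick the example defeating the largest number of still-uncovered rules.
        counts = {}
        for r in uncovered:
            for e in defeaters[r]:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)
        chosen.add(best)
        for r in list(uncovered):
            if best in defeaters[r]:
                assignment[r] = best
                uncovered.remove(r)
    return chosen, assignment

For instance, with defeaters = {"r1": {"e1", "e2"}, "r2": {"e2"}, "r3": {"e3"}}, the greedy pass picks e2 first and then e3, covering all three attacked rules with two exchanged examples.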
DAGGER shows good accuracy (although\nnot guaranteeing that of Centralized induction). Concerning test set\naccuracy, we observe that, except in one case (demospongiae with\nCN2) where DAGGER achieves higher accuracy, AMAI and RAMAI\nachieve same or higher accuracy than any other strategy, includingthe Centralized approach. The explanation is that when agents useAMAI or RAMAI, two different hypothesis of the data are learnt\n(one per agent), and then merged. Therefore, the resulting hypoth-\nesis has rules derived from different training sets (thus having differ-\nent biases). This, alleviates overfitting, increasing classification accu-racy in unseen problems. Finally, among all the multiagent inductionstrategies, DAGGER is the one that requires exchanging the highestpercentage of examples, 68.56%, while AMAI and RAMAI exchange\nonly 19.04% and 21.52% respectively.\n4 Conclusions and Future Work\nIn this paper we have presented AMAI and RAMAI, two different\nmultiagent induction strategies that can be used on top of any in-duction algorithm capable of learning hypotheses represented usingsets of rules. AMAI and RAMAI ensure that the hypothesis learnt\nwill be undistinguishable in terms of training set accuracy from thatproduced by a centralized approach. The main idea behind AMAI\nand RAMAI is to let each agent perform induction individually, then\nargue about the learnt hypotheses to remove inconsistencies, and fi-\nnally merge both hypotheses.\nAMAI and RAMAI use counterexamples as the only form of coun-\nterargument. However, we have been investigating more complex ar-\ngumentation protocols that let agents use rules also generalizations ascounterarguments[6]. The problem of that, is that the base learningalgorithms have to be modified to be able to take rules into account.This is related to the research by Mo ˇzina et al. [4] where they modify\nthe CN2 algorithm to take into account specific rules (arguments) inaddition to examples for learning purposes.\nACKNOWLEDGEMENTS\nThis research was partially supported by projects Next-CBR\n(TIN2009-13692-C03-01) and Agreement Technologies CON-SOLIDER CSD2007-0022.\nREFERENCES\n[1] E. Armengol and E. Plaza, ‘Bottom-up induction of feature terms’, Ma-\nchine Learning Journal, 41(1), 259–294, (2000).\n[2] Peter Clark and Tim Niblett, ‘The CN2 induction algorithm’, in Machine\nLearning, pp. 261–283, (1989).\n[3] Winston H. E. Davies, The Communication of Inductive Inference, Ph.D.\ndissertation, University of Aberdeen, 2001.\n[4] Martin Mo ˇzina, Jure ˇZabkar, and Ivan Bratko, ‘Argument based machine\nlearning’, Artificial Intelligence, 171(10-15), 922–937, (2007).\n[5] Santiago Onta ˜n´on and Enric Plaza, ‘Learning and joint deliberation\nthrough argumentation in multiagent systems’, in AAMAS, p. 159,\n(2007).\n[6] Santiago Onta ˜n´on and Enric Plaza, ‘Multiagent inductive learning: an\nargumentation-based approach’, in ICML-2010, p. to appear, (2010).\n[7] J. R. Quinlan, ‘Induction of decision trees’, Machine Learning, 1(1), 81–\n106, (1986).S.Ontañón andE.Plaza /TowardsArgumentation-Based MultiagentInduction 1112", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "JyV9b6d5qGQv", "year": null, "venue": "ECAI2010", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-60750-606-5-401", "forum_link": "https://openreview.net/forum?id=JyV9b6d5qGQv", "arxiv_id": null, "doi": null }
{ "title": "Using Bayesian Networks in an Industrial Setting: Making Printing Systems Adaptive.", "authors": [ "Arjen Hommersom", "Peter J. F. Lucas" ], "abstract": "Control engineering is a field of major industrial importance as it offers principles for engineering controllable physical devices, such as cell phones, television sets, and printing systems. Control engineering techniques assume that a physical system's dynamic behaviour can be completely described by means of a set of equations. However, as modern systems are often of high complexity, drafting such equations has become more and more difficult. Moreover, to dynamically adapt the system's behaviour to a changing environment, observations obtained from sensors at runtime need to be taken into account. However, such observations give an incomplete picture of the system's behaviour; when combined with the incompletely understood complexity of the device, control engineering solutions increasingly fall short. Probabilistic reasoning would allow one to deal with these sources of incompleteness, yet in the area of control engineering such AI solutions are rare. When using a Bayesian network in this context the required model can be learnt, and tuned, from data, uncertainty can be handled, and the model can be subsequently used for stochastic control of the system's behaviour. In this paper we discuss industrial research in which Bayesian networks were successfully used to control complex printing systems.", "keywords": [], "raw_extracted_content": "Using Bayesian Networks in an Industrial Setting:\nMaking Printing Systems Adaptive\nArjen Hommersom and Peter J.F. Lucas1\nAbstract. Control engineering is a field of major industrial im-\nportance as it offers principles for engineering controllable physical\ndevices, such as cell phones, television sets, and printing systems.\nControl engineering techniques assume that a physical system’s dy-\nnamic behaviour can be completely described by means of a set of\nequations. However, as modern systems are often of high complexity,\ndrafting such equations has become more and more difficult. More-\nover, to dynamically adapt the system’s behaviour to a changing en-\nvironment, observations obtained from sensors at runtime need to\nbe taken into account. However, such observations give an incom-plete picture of the system’s behaviour; when combined with the in-\ncompletely understood complexity of the device, control engineer-ing solutions increasingly fall short. Probabilistic reasoning would\nallow one to deal with these sources of incompleteness, yet in the\narea of control engineering such AI solutions are rare. When using\na Bayesian network in this context the required model can be learnt,\nand tuned, from data, uncertainty can be handled, and the model can\nbe subsequently used for stochastic control of the system’s behaviour.In this paper we discuss industrial research in which Bayesian net-\nworks were successfully used to control complex printing systems.\n1 INTRODUCTION\nMany complex physical systems are required to make dynamic trade-\noffs between the various characteristics of operation, which can be\nviewed as the capability to adapt to a changing environment. For\nexample, in printing systems such characteristics include power di-\nvision and consumption, the speed of printing, and the quality of the\nprint product. 
Such trade-offs heavily depend on the system’s envi-\nronment determined by humidity, temperature, and available power.Failure to adapt adequately to the environment may result in faults\nor suboptimal behaviour, resulting, for example, in low quality print\nproducts or low throughput of paper.\nThe problem of adaptability concerns taking actions based on\navailable runtime information, which we call making decisions.A s\ndefined above it has two main features. First, making decisions is\ntypically required at a low frequency: it is not necessary and not evendesirable to change the speed or energy usage of an engine many\ntimes per second. Second, there is a lot of uncertainty involved whenmaking decisions, in particular about the environment, the state of\nthe machine, and also about the dynamics of the system. Complex\nsystems usually cannot be modelled accurately, whereas adaptability\nrequires one to make system-wide, complex, decisions. In order to\ndeal with these uncertainties, techniques where probability distribu-tions can be learnt from available data seem most appropriate.\n1Radboud University Nijmegen, Institute for Computing and Information\nSciences, The Netherlands, email: {arjenh,peterl}@cs.ru.nlIn this paper, we propose to use Bayesian networks [17] to deal\nwith the control of such complex systems. The formalism possesses\nthe unique quality of being both an AI-like and statistical knowledge-\nrepresentation formalism. Nowadays, Bayesian networks take a cen-\ntral role for dealing with uncertainty in AI and have been successfully\napplied in many fields, such as medicine and finance. The control of\nphysical systems, on the other hand, is largely done using traditionalmethods from control theory.\nOne of the attractive features of Bayesian networks is that they\ncontain a qualitative part, which can be constructed using expert\nknowledge, normally yielding an understandable, white-box model.\nMoreover, the quantitative parameters of a Bayesian network can be\nlearnt from data. Other AI learning techniques, such as neural net-\nworks, resist providing insight into why a machine changes its be-\nhaviour, as they are black-box models. Furthermore, rules—possibly\nfuzzy—are difficult to obtain and require extensive testing in orderto check whether they handle all the relevant situations.\nThe present paper summarises our successful effort in using\nBayesian-network based controllers in the industrial design of adap-tive printing systems, which can be looked upon as special stochastic\ncontrollers. In our view, as systems get more and more complex, theembedded software will need to be equipped with such AI reason-\ning capabilities to render the design of adaptive industrial systems\nfeasible.\n2 BAYESIAN NETWORKS FOR CONTROL\nWe first offer some background about Bayesian networks and discuss\nneeded assumptions for modelling and reasoning about dynamic sys-\ntems using Bayesian networks.\n2.1 Background\nABayesian network B=(G, P )consists of a directed acyclic graph\nG=(V,E ),w h e r eV is a set of vertices and E⊆V×Vis a set of\ndirected edges or arcs, and Pis a joint probability distribution asso-\nciated with a set of random variables Xthat correspond one-to-one\nto the vertices of G, i.e., to each vertex v∈Vcorresponds exactly\none random variable Xvand vice versa. 
As the joint probability distribution P of the Bayesian network is always factored in accordance to the structure of the graph G, it holds that:

P(X) = ∏_{v ∈ V} P(X_v | X_π(v)),

where π(v) is the set of parents of v. Thus, P can be defined as a family of local conditional probability distributions P(X_v | X_π(v)), for each vertex v ∈ V.

[Figure 1: A temporal Bayesian network structure for modelling dynamic systems. Two time slices, t and t+1, each contain hidden states (H), sensor information (S), control variables (C) and target variables (T).]

Bayesian networks can encode various probability distributions. Most often the variables are either all discrete or all continuous. Hybrid Bayesian networks, however, containing both discrete and continuous conditional probability distributions are also possible. A commonly used type of hybrid Bayesian network is the conditional linear Gaussian model [3, 8]. Efficient exact and approximate algorithms have been developed to infer probabilities from such networks [10, 2, 12]. Also important in the context of embedded systems is that real-time probabilistic inference can be done using anytime algorithms [6].

A Bayesian network can be constructed with the help of one or more domain experts. However, building Bayesian networks using expert knowledge, although by now known to be feasible for some domains, can be very tedious and time consuming. Learning a Bayesian network from data is also possible, a task which can be separated into two subtasks: (1) structure learning, i.e., identifying the topology of the network, and (2) parameter learning, i.e., determining the associated joint probability distribution, P, for a given network topology. In this paper, we employ parameter learning. This is typically done by computing the maximum likelihood estimates of the parameters, i.e., the conditional probability distributions, associated to the network structure given data [9].

Temporal Bayesian networks are Bayesian networks where the vertices of the graph are indexed with (discrete) time. All vertices with the same time index form a so-called time slice. Each time slice consists of a static Bayesian network, and the time slices are linked to represent the relationships between states in time. If the structure and parameters of the static Bayesian network are the same at every time slice (with the exception of the first), one speaks of a dynamic Bayesian network, as such networks can be unrolled (cf. [16] for an overview).

2.2 Bayesian-network modelling of a dynamic system

Common assumptions in the modelling of dynamic physical systems are that the system is Markovian and stationary (e.g., [11]), i.e., the system state at time t+1 is only dependent on the system state at time t, and the probabilistic dependencies are the same at each time t. Stationarity is an assumption too strong for the work discussed below; however, it is assumed that the network structure is the same for every t.
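To make the per-slice factorisation and the two-slice temporal structure concrete, the following small Python sketch builds a toy two-slice model and answers a filtering query by brute-force enumeration. The variable names, graph and probability tables are invented for illustration only; they are not the printer model used in this paper.

from itertools import product

# A toy two-slice temporal model with binary variables.
# H = hidden state, S = sensor reading; edges: H1 -> S1, H1 -> H2, H2 -> S2.
parents = {
    "H1": (),        # prior over the hidden state in slice 1
    "S1": ("H1",),   # sensor depends on the hidden state
    "H2": ("H1",),   # transition between the two slices
    "S2": ("H2",),
}

# Conditional probability tables: cpt[node][parent_values][value]
cpt = {
    "H1": {(): {0: 0.7, 1: 0.3}},
    "S1": {(0,): {0: 0.9, 1: 0.1}, (1,): {0: 0.2, 1: 0.8}},
    "H2": {(0,): {0: 0.8, 1: 0.2}, (1,): {0: 0.3, 1: 0.7}},
    "S2": {(0,): {0: 0.9, 1: 0.1}, (1,): {0: 0.2, 1: 0.8}},
}

def joint(assignment):
    """P(X) = prod_v P(X_v | X_pa(v)) for a full assignment of all variables."""
    p = 1.0
    for node, pa in parents.items():
        pa_vals = tuple(assignment[q] for q in pa)
        p *= cpt[node][pa_vals][assignment[node]]
    return p

def posterior_h2(s1, s2):
    """Filtering by enumeration: P(H2 | S1=s1, S2=s2), summing out H1."""
    weights = {0: 0.0, 1: 0.0}
    for h1, h2 in product((0, 1), repeat=2):
        weights[h2] += joint({"H1": h1, "S1": s1, "H2": h2, "S2": s2})
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

print(posterior_h2(1, 1))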
In case a particular dependence is absent in a time slice, such independence will be reflected in the conditional probability distributions rather than in the structure.

Four different types of vertices were distinguished in developing Bayesian networks for stochastic control:

• Control variables C act as input to the physical system's control system, such as the car engine's throttle position.
• Hidden state variables H determine the unobservable state of the system, such as the engine's speed.
• Sensor information S provides observations about the (unobservable) state of the machine (here engine), for example by a measurement of the speed of the car.
• Target variables T act as reference values or set-points of the system. It is the purpose of a Bayesian network to control these variables.

A schematic representation of such a network is shown in Figure 1. Given n time slices, a Bayesian network will have an associated joint probability distribution of the following form:

P(S_1, C_1, H_1, T_1, ..., S_n, C_n, H_n, T_n)

The chosen representation closely fits the concepts of traditional control theory. A typical feedback controller influences the system (H) through a system's input (C); it does so by comparing the sensed data (S) with a reference value (T). A feed-forward controller is similar in this view, except that the sensor variables are missing or cannot be observed.

After t time steps, the probability distribution can be updated with the observations S_1, ..., S_t and earlier control choices C_1, ..., C_t to a probability distribution over the remaining variables:

P(H_1, ..., H_n, T_1, ..., T_n, S_{t+1}, ..., S_n, C_{t+1}, ..., C_n | S_1, ..., S_t, C_1, ..., C_t)

In the following, this conditional probability distribution is abbreviated to

P_t(H_1, ..., H_n, T_1, ..., T_n, S_{t+1}, ..., S_n, C_{t+1}, ..., C_n)    (1)

A common question in control is to provide an estimation of the target variable, i.e., to compute P_t(T_k) for some k from the conditional probability distribution (1). If this can be done reliably, it offers the possibility to exploit the network for control. The controller is able to decide what to do in the future by reasoning about the target of control in the future, T_f = T_{t+1}, ..., T_{t+m}, t+m ≤ n, given a possible choice of control C_f = C_{t+1}, ..., C_{t+p}, t+p ≤ n. Both m as well as p can be tuned to domain-specific requirements.

Let U: T_f → R be a utility function defined for the target variables T_f. The expected utility for controlling the machine by C_f = c_f, eu(c_f), is then equal to:

eu(c_f) = Σ_{t_f} P_t(t_f | c_f) U(t_f)

This approach can also be adapted to continuous variables by integrating over the domain of T_f. A control strategy c*_f with maximal expected utility yields a maximal value for eu(c_f):

c*_f = argmax_{c_f} eu(c_f)

In Sections 3 and 4, we present research in which we have explored the theory summarised above with the aim of making an industrial printing system adaptive to its environment.

[Figure 2: Classification accuracy of the Bayesian network (20-fold cross-validation) plotted as a function of running time since standby (s), i.e., time after the start of a print job.]

3 ESTIMATION OF MEDIA TYPE

Printing systems contain many challenging control problems.
As a\nfirst study, we consider the problem of establishing the media weight\nduring a run, which can be exploited during the control of many dif-ferent parts of the printer. For example, if the media weight, here\ninterpreted as paper type, is known, then we can (i)avoid bad print\nquality, (ii)avoid engine pollution in several parts of the printer, and\n(iii)help service technician at service call diagnosis, i.e., to avoid\nblocking for specific media under specific circumstances. Given the\nmechanical space limitations in printers, it is non-trivial to designa printer that measures paper properties directly nor is it desirable\nto ask the user to supply this information. We therefore investigated\nwhether the available sensor data could be used to estimate these\nproperties. This sensor data mainly consists of temperatures and volt-\nages that are required during the regular control of the printer and is\navailable without any extra cost.\nThe data that is available consists of logging data of runs from\nstand-by position with a warm machine. In order to vary different\nconditions, the duration of stand-by prior to the run was deliberately\nvaried. This ensures a realistic variation of temperatures inside theengine. Moreover, the paper type was varied, namely the data con-\ntains runs with 70 gsm ( n=3 5 ), 100 gsm (n =1 0 ), 120 gsm\n(n=2 4 ), 140 gsm (n =1 0 ), and 200 gsm (n =2 9 ) paper.\nWith the help of the domain experts we designed a Bayesian net-\nwork structure with 8 vertices at each time-slice. Logging data of an3 6 9 12 15050100150200250300\ntime (s)prediction (GSM)\n \nFigure 3. The estimation of paper weight plotted in time. The solid line\ndenotes the mean media weight estimation and the gray area visualises three\nstandard deviations from the mean.\nindustrial printing system were obtained at a frequency of 2Hz for\n15 second, which would yield a total of 240 vertices in the temporal\nBayesian network if represented explicitly. All variables were mod-\nelled as Gaussian random variables. To convert the Bayesian network\ninto a classifier, the estimations of the model were mapped to a num-\nber of classes, corresponding to the distinguished media types.\nThe plot shown in Figure 2 indicates that it takes some time be-\nfore the Bayesian network is able to reliably distinguish between themedia types based on the sensor information available. After about 6seconds the classification reaches a performance that is seen as suffi-\nciently reliable for many applications. However, for high-speed print-\ning systems a higher reliability may be required. As the plot shows,\non the whole there is a gradual, and robust increase in performance\nwith a form that looks like a sigmoid learning curve. However, note\nthat the only thing that changes in time are the data: the nature of the\ndata is such that in time it becomes easier for the Bayesian network\nto distinguish between various media types. Further evidence of the\nrobustness of the approach is obtained by computing the confidenceintervals of the weight estimates. As shown in Figure 3, the confi-\ndence intervals become smaller in time, and conclusions about media\ntype are therefore also more reliable. 
Hence, it is fair to conclude that\nthe model was able to derive useful information about media type byusing sensor information, here about temperature and voltage usage,\nthat is not immediately related to media type.\nIn case there is reasonable confidence in the estimation, decisions\ncan be made to adapt the system’s behaviour. Our work on such adap-tation is presented in the next section.\n4 CONTROL OF ENGINE SPEED\n4.1 Description of the problem\nThe productivity of printers is limited by the amount of power avail-\nable, in particular in countries or regions with weak mains. If there\nis insufficient power available, then temperature setpoints cannot bereached, which causes bad print quality. To overcome this problem,\nit is either possible to decide to always print at lower speeds or to\nadapt to the available power dynamically. In this section, we explorethe latter option by a dynamic speed adjustment using a Bayesian\nnetwork.A.Hommer somandP.J.F.Lucas /Using Bayesian Networks inanIndustrial Setting: Making Printing Systems Adaptive 403\n4.2 Approach\nThe block diagram in Figure 4 offers an overview of this approach. In\nthis schema, ‘sensors’ are put on low-level controllers and signal the\nhigh-level controller with requests. The network then reasons aboutappropriate setpoints of the low-level controller. In this problem set-\nting, the high-level controller decides on a specific velocity of the\nengine based on requested power by a lower-level controller.\nFor this problem, we look at the model of a part of the printer in\nmore detail. The structure of the model at each time slice is shown\nin Figure 5. The requested power available is an observable variablethat depends on low-level controllers that aim at maintaining the right\nsetpoint for reaching a good print quality. The error variable models\nthe deviation of the actual temperature from the ideal temperature,\nwhich can be established in a laboratory situation, but not during run-\ntime. If this exceeds a certain threshold, then the print quality will bebelow a norm that has been determined by the printer manufacturer.\nBoth velocity and available power influence the power that is or\ncan be requested by the low-level controllers. Furthermore, the com-\nbination of the available power and the requested power is a good\npredictor of the error according to the domain experts. To model the\ndynamics, we use two time slices with the interconnections between\nthe available power – which models that the power supply on dif-ferent time slices is not independent – and requested power, which\nmodels the state of the machine that influences the requested power.\nWe again considered to model all the variables as Gaussian dis-\ntributed random variables. This seemed reasonable, as most variableswere Gaussian distributed, however with the exception of the avail-\nable power (see Figure 6). Fitting a Gaussian distribution to such a\ndistribution will typically lead to insufficient accuracy. To improve\nthis, this variable was modelled as a mixture of two Gaussian dis-\ntributed variables, one with mean μ\nlow\nPower and one with mean μhigh\nPower\nwith a small variance. Such a distribution can be modelled using a\nhybrid network as follows. The network is augmented with an addi-tional (binary) parent vertex Swith values ‘high’ and ‘low’ for the\nrequested power variable. For both states of this node, a normal dis-\ntribution is associated to this variable. 
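As an illustration of this construction, the sketch below represents a bimodal "requested power" variable as a two-component Gaussian mixture selected by a binary parent S. All weights, means and variances are made up for illustration and do not come from the printer data.

import math
import random

# Hypothetical two-component mixture for a bimodal "requested power":
# a binary parent S in {"low", "high"} selects one of two narrow Gaussians.
mix = {
    "low":  {"weight": 0.4, "mu": 200.0, "sigma": 15.0},
    "high": {"weight": 0.6, "mu": 750.0, "sigma": 20.0},
}

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def requested_power_density(x):
    """Marginal density: sum over S of P(S) * N(x; mu_S, sigma_S)."""
    return sum(c["weight"] * normal_pdf(x, c["mu"], c["sigma"]) for c in mix.values())

def sample_requested_power(rng=random):
    """Ancestral sampling: first draw S, then draw the power from the chosen Gaussian."""
    s = "low" if rng.random() < mix["low"]["weight"] else "high"
    c = mix[s]
    return rng.gauss(c["mu"], c["sigma"])

print(requested_power_density(210.0))
print([round(sample_requested_power(), 1) for _ in range(5)])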
The marginal distribution of requested power is then obtained by basic probability theory:

P(P_req) = Σ_S P(P_req | S) P(S).

4.3 Error estimation

The main reasoning task of the network is to estimate the error, i.e., the deviation from the ideal temperature, given a certain velocity and observations. This problem could be considered as a classification task, i.e., the print quality is bad or good. The advantage is that this provides means to compare different models and see how well each performs at distinguishing between these two possibilities. A standard method to visualise and quantify this is by means of a Receiver Operating Characteristic (ROC) curve, which shows the relation between the false positive ratio and the true positive ratio (sensitivity). The area under the curve is a measure of classification performance.

[Figure 4: Architecture of an adaptive controller using a Bayesian network. The Bayesian network acts as a decision engine on behaviour: it receives observations and requirements and supplies control parameters (setpoints) to the conventional controller of the process.]

[Figure 5: Structure of the Bayesian network of each time slice, with vertices for available power, requested power, velocity and error.]

We have compared three models, i.e., a discrete model, a fully continuous model, and a hybrid model that represents the distribution of the requested power with two normally distributed random variables. The classification performance is outlined in Figure 7. As expected, the fully continuous model performs worse, whereas the hybrid and discrete models show a similar trend. The advantage of the discrete version is that the probability distribution can easily be inspected and it has no underlying assumptions about the distribution, which makes it easier to use in practice. The hybrid version, however, allows for more efficient computation, as we would need a large number of discrete values to describe the conditional distributions. For this reason, we have used the hybrid version in the following.

[Figure 6: Distribution of requested power.]

[Figure 7: ROC curves of the three Bayesian networks. The hybrid and discrete versions show the best classification performance.]

[Figure 8: In the centre panel, the (fluctuating) available power is plotted. At the top, we compare the velocity of the engine as controlled by a rule-based system and by a Bayesian network. Below, we present the error that the controller based on the Bayesian network yields, which is within the required limits.]

4.4 Decision making for control

As the error information is not observable during runtime, the marginal probability distribution of the error in the next time slice is computed using the information about the power available and power requested. This error is a normal random variable with mean μ and standard deviation σ. The maximum error that we allow in this domain is denoted by E_max, and we define a random variable for print quality Q_k, which is true if μ + kσ < E_max, where k is a constant.
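To make the resulting decision rule concrete, here is a minimal sketch of choosing the highest velocity whose predicted error satisfies μ + kσ < E_max. The error-prediction function below is only a stand-in: in the actual system the mean and standard deviation would come from inference in the hybrid Bayesian network, and all numbers are invented.

# Hypothetical stand-in for the network's error prediction; in the real
# system mu and sigma would be inferred from available and requested power.
def predict_error(velocity, available_power):
    mu = 0.002 * velocity - 0.01 * available_power   # invented linear trend
    sigma = 0.05 + 0.0001 * velocity
    return max(mu, 0.0), sigma

def best_velocity(candidates, available_power, e_max, k=3.0):
    """Highest velocity whose predicted error satisfies mu + k*sigma < E_max."""
    feasible = []
    for v in sorted(candidates):
        mu, sigma = predict_error(v, available_power)
        if mu + k * sigma < e_max:
            feasible.append(v)
    return max(feasible) if feasible else min(candidates)  # fall back to the slowest speed

velocities = [30, 45, 60, 75, 90]   # arbitrary candidate engine speeds
print(best_velocity(velocities, available_power=40.0, e_max=0.5))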
Different values of k correspond to different points on the ROC curve as depicted in Figure 7. For a normal random variable, more than 99.73% of the probability mass lies within three standard deviations of the mean, so for example k = 3 would imply that P(Error_{t+1} < E_max) > 99.87%. The target variables of our control are the print quality, modelled by Q_k, and the velocity V. Define a utility function U: {Q_k, V} → R as

U(q, v) = −1 if q = ⊥, and U(q, v) = v otherwise,

and apply the maximal expected utility criterion as discussed in Section 2. This implies that the expected utility of a velocity v, eu(v), equals −1 or v depending on the risk of having bad print quality. In effect, we choose the highest velocity v such that Q_k = ⊤.

In order to evaluate the approach, we compared the productivity of the resulting network with the rule-based method (implemented in terms of repetitive and conditional statements of a programming language for use in the actual control engine) that incorporates some heuristics for choosing the right velocity. The productivity is defined here simply as ∫_0^τ v(t) dt, where τ is the simulation time. In order to smooth the signal that the network produces, we employ a FIR (Finite Impulse Response) filter that takes a moving average over 10 decisions. The resulting behaviour was simulated and is presented in Figure 8 (with k = 3). Compared to the rule-based approach, we improve roughly 9% in productivity while keeping the error within an acceptable range. While it could certainly be the case that the rules could be improved and optimised, the point is that the logic underlying the controller does not have to be designed. What is required is a qualitative model, data, and a probabilistic criterion that can be inferred.

5 RELATED WORK

Bayesian inference is well-known for the inference of hidden states in a dynamic model. Typical applications are filtering – i.e., inferring the current hidden state given the observations in the past – and smoothing, where past states are inferred. For example, the Kalman filter [7] is well-known in stochastic control theory (see e.g., [1]) and is a special case of a dynamic Bayesian network, where the model is the linear Gaussian variant of a hidden Markov model, i.e., it describes a Markov process with noise parameters on the input and output variables. Non-linear variants, such as the extended Kalman filter or the unscented Kalman filter (see e.g., [15]), are approximate inference algorithms for non-linear Gaussian models by linearisation of the model. More recently, particle filters [13] have been proposed as an alternative, which relies on sampling to approximate the posterior distribution.

The difference with these filtering approaches is that for Bayesian networks there is an underlying domain model which is understandable. As Bayesian networks are general formalisms, they could also be used or re-used for diagnostic purposes, where it is typically required that a diagnosis can be represented in a human-understandable way so that proper action can be taken (e.g., [19] in the printing domain). Furthermore, it is well-known that the structure of the graphical part of a Bayesian network facilitates the assessment of probabilities, even to the extent that reliable probabilistic information can be obtained from experts (see [14]).
One other advantage compared to\nblack-box models is that the modelled probability distribution can be\nexploited for decision making using decision theory. This is particu-\nlarly important if one wants to make real trade-offs such as between\nproductivity and energy consumption.\nWith respect to decision making, adaptive controllers using ex-\nplicit Bayesian networks have not been extensively investigated. The\nmost closely related work is by Deventer [4], who investigated the\nuse of dynamic Bayesian networks for controlling linear and non-linear systems. The premise of this work is that the parameters of\na Bayesian network can be estimated from a deterministic physicalmodel of the system. In contrast, we aim at using models that were\nlearnt from data. Such data can be obtained from measurements dur-\ning design time or during runtime of the system.\nSeveral approaches for traditional adaptive control already exists.\nFirst, model-reference adaptive control uses a reference model thatreflects the desired behaviour of the system. On the basis of the ob-served output and of the reference model, the system is tuned. The\nsecond type of adaptive controllers are so called self-tuning con-\ntrollers, which estimate the correct parameters of the system basedon observations and tunes the control accordingly. Our approach em-\nploys a mixture of the two, where a reference model is given by a\nBayesian network and tunes other parts of the system accordingly.In the last few decades, also techniques from the area of artificial in-\ntelligence, such as rule-based systems, fuzzy logic, neural networks,\nevolutionary algorithms, etc. have been used in order to determineoptimal values for control parameters (see e.g. [5]). The work pre-\nsented in this paper extends these approaches using human-readable\nBayesian networks.\n6 CONCLUSIONS\nIn embedded software, there is an increasing trend to apply and verify\nnew software methods in an industrial context, i.e., the industry-as-\nlaboratory paradigm [18]. This means that concrete cases are stud-\nied in their industrial context to promote the applicability and scal-\nability of solution strategies under the relevant practical constraints.\nMuch of the current AI research, on the other hand, is done in theory\nusing standard benchmark problems and data sets. It poses a num-\nber of challenges if one wishes to apply an AI technique such as\nBayesian networks to industrial practice. First, there is little support\nfor modelling systems in an industrial context. Bayesian networksare expressive formalisms and little guidance is given to the construc-\ntion of networks that can be employed in such an industrial setting.\nMoreover, there seems to be little theory of using Bayesian networks\nin these areas. For example, while there is a lot of research in the\narea of stochastic control, it is unclear how these results carry over\nto Bayesian networks. Similarly, techniques developed in context of\nBayesian networks do not carry over to the problem of control.\nBayesian networks have drawn attention in many different re-\nsearch areas, such as AI, mathematics and statistics. In this paper, wehave explored the use of Bayesian networks for designing an adapt-able printing systems. We have shown that the approach is feasible\nand can act as a basis for designing an intelligent printing system.\nThis suggests that Bayesian networks can have a much wider appli-cation in the engineering sciences, in particular for control and fault\ndetection. 
With the increasing complexity of systems, there is little\ndoubt that these AI techniques will play a pivotal role in industry.\nACKNOWLEDGEMENTS\nThis work has been carried out as part of the OCTOPUS project\nunder the responsibility of the Embedded Systems Institute. This\nproject is partially supported by the Netherlands Ministry of Eco-\nnomic Affairs under the Embedded Systems Institute program. We\nthank Marcel van Gerven for making his Bayesian network toolbox\navailable to us.\nREFERENCES\n[1] K.J. ˚Astr¨om, Introduction to Stochastic Control Theory, Academic\nPress, 1970.\n[2] G. Casella and C. Robert, Monte Carlo Statistical Methods , Springer-\nVerlag, 1999.\n[3] R.G. Cowell, A.P. Dawid, S.L. Lauritzen, and D.J. Spiegelhalter, Prob-\nabilistic Networks and Expert Systems , Springer, 1999.\n[4] R. Deventer, Modeling and Control of Static and Dynamic Systems with\nBayesian Networks, Ph.D. dissertation, University Erlangen-N ¨urnberg,\nChair for Pattern recognition, 2004.\n[5] J.A. Farrell and M.M. Polycarpou, Adaptive Approximation Based Con-\ntrol: Unifying Neural, Fuzzy and Traditional Adaptive Approximation\nApproaches , Adaptive and Learning Systems for Signal Processing,\nCommunications and Control Series, Wiley-Interscience, 2006.\n[6] H. Guo and W.H. Hsu, ‘A survey of algorithms for real-time Bayesian\nnetwork inference’, in AAAI/KDD/UAI02 Joint Workshop on Real-\nTime Decision Support and Diagnosis Systems, eds., A. Darwiche andN. Friedman, Edmonton, Canada, (2002).\n[7] R.E. Kalman, ‘A new approach to linear filtering and prediction prob-\nlems’, Journal of Basic Engineering, 82(1), 35–45, (1960).\n[8] S.L. Lauritzen, ‘Propagation of probabilities, means and variances in\nmixed graphical association models’, Journal of the American Statisti-\ncal Association, 87, 1098–1108, (1992).\n[9] S.L. Lauritzen, ‘The EM algorithm for graphical association models\nwith missing data’, Computational Statistics and Analysis ,19,1 9 1 –\n201, (1995).\n[10] S.L. Lauritzen and D.J. Spiegelhalter, ‘Local computations with proba-\nbilities on graphical structures and their application to expert systems’,Journal of the Royal Statistical Society ,50, 157–224, (1988).\n[11] U. Lerner, B. Moses, M. Scott, S. Mcilraith, and S. Koller, ‘Monitor-\ning a complex physical system using a hybrid dynamic Bayes net’, inProceedings of the UAI, (2002).\n[12] U. Lerner and R. Parr, ‘Inference in hybrid networks: theoretical limits\nand practical algorithms’, in Proceedings of the UAI, eds., J. Breese and\nD. Koller, volume 17, pp. 310–318, San Francisco, CA, (2001). MorganKaufmann.\n[13] J.S. Liu and R. Chen, ‘Sequential Monte Carlo methods for dynamic\nsystems’, Journal of the American Statistical Association ,93, 1032–\n1044, (1998).\n[14] P.J.F. Lucas, H. Boot, and B.G. Taal, ‘Computer-based decision-support\nin the management of primary gastric non-Hodgkin lymphoma’, Meth\nInform Med ,37, 206–219, (1998).\n[15] P.S. Maybeck, Stochastic models, estimation, and control , Academic\nPress, 1979.\n[16] K.P. Murphy, Dynamic Bayesian Networks: Representation, Inference\nand Learning, Ph.D. dissertation, UC Berkeley, 2002.\n[17] J. Pearl, Probabilistic Reasoning in Inteligent Systems: Networks of\nPlausible Inference , Morgan Kaufmann, 1988.\n[18] C. Potts, ‘Software-engineering research revisited’, IEEE Software ,\n19(9), 19–28, (1993).\n[19] Claus Skaanning, Finn V . Jensen, and Uffe Kjærulff, ‘Printer trou-\nbleshooting using Bayesian networks’, in Proceedings of IEA/AIE, pp.\n367–379. 
Springer-Verlag New York, Inc., (2000).", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "TDbbUk8lwsA", "year": null, "venue": "ECAI2010", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-60750-606-5-823", "forum_link": "https://openreview.net/forum?id=TDbbUk8lwsA", "arxiv_id": null, "doi": null }
{ "title": "Multi Grain Sentiment Analysis using Collective Classification.", "authors": [ "S. Shivashankar", "Balaraman Ravindran" ], "abstract": "Multi grain sentiment analysis is the task of simultaneously classifying sentiment expressed at different levels of granularity, as opposed to single level at a time. Models built for multi grain sentiment analysis assume fully labeled corpus at fine grained level or coarse grained level or both. Huge amount of online reviews are not fully labeled at any of the levels, but are partially labeled at both the levels. We propose a multi grain collective classification framework to not only exploit the information available at all the levels but also use intra dependencies at each level and inter dependencies between the levels. We demonstrate empirically that the proposed framework enables better performance at both the levels compared to baseline approaches.", "keywords": [], "raw_extracted_content": "Multi Grain Sentiment Analysis using Collective\nClassification\nS.Shivashankar1and B.Ravindran2\nAbstract. Multi grain sentiment analysis is the task of simultane-\nously classifying sentiment expressed at different levels of granular-\nity, as opposed to single level at a time. Models built for multi grainsentiment analysis assume fully labeled corpus at fine grained levelor coarse grained level or both. Huge amount of online reviews arenot fully labeled at any of the levels, but are partially labeled at boththe levels. We propose a multi grain collective classification frame-work to not only exploit the information available at all the levelsbut also use intra dependencies at each level and inter dependenciesbetween the levels. We demonstrate empirically that the proposed\nframework enables better performance at both the levels compared\nto baseline approaches.\n1 Introduction\nSentiment analysis is one of the tasks which benefits from increasing\nweb usage and reach in the form of forums, blogs and other web-sites that hold product reviews and discussions. In general, sentimentanalysis is the task of identifying the sentiment expressed in the givenpiece of text about the target entity discussed. Depending on the tar-get entity, the granularity of the analysis varies. The target entity can\nbe the product itself, for example “Canon digital camera”, in which\ncase it is called as coarse-grained analysis. On the other hand the tar-\nget entity can also be at finer granularity, capturing various features\nof a product, for example “clarity of the camera”, in which case it\nis called as fine-grained analysis. This multi grain sentiment analysiscan be achieved by performing analysis at various levels of granu-larity — document level, paragraph level/sentence level/phrase level— which basically captures product level, sub topic level or fea-ture level target sentiments. The former refers to physical structureof the text taken for analysis, while the latter corresponds to logicallevel. We use this notion of physical and logical levels in granularity\nthroughout the paper. There are models built at each level individ-\nually [1, 2, 3, 4, 5, 6] which are called as independent models [7].Considering the nature of information available, the emphasis of re-cent approaches is towards exploring the intra dependencies at eachlevel and inter dependencies between the levels. Intuition behind in-\ntra dependency between entities at a single level is explained using\nthe following example. 
In the fragment of automobile review text\ngiven below :\n“The manual gear shifter is rubbery. ”\nIf the sentiment about manual gear shifter is unknown, then the sen-\ntiment about similar features such as driving experience in “..has an\n1Department of Computer Science and Engineering, Indian Institute of Tech-\nnology Madras, Chennai, India, email: [email protected]\n2Department of Computer Science and Engineering, Indian Institute of Tech-\nnology Madras, Chennai, India, email: [email protected] driving experience.. ” can help in disambiguating the sen-\ntiment.\nInter dependency between coarser and finer levels are also useful inpredicting the unknown sentiments [7, 8]. For instance, if the senti-ment of a document is known, then a majority of the sentences shouldhave the same sentiment as the document and vice versa. This forms\nthe basis of the proposed model, since we use these dependencies in\nthe multi grain collective classification framework. Since the above\nmentioned approaches [7, 8] assume fully labeled corpus, which is\nnot naturally available and huge amount of web data is partially la-\nbeled, we propose a multi grain collective classification algorithm for\nthe semi-supervised environment. We focus on document level andsentence level analysis in this work.\n2 Related Works\nPang and Lee [8], used the local dependencies between sentimentlabels on sentences to classify sentences as subjective or not, andthe top subjective sentences are used to predict the document levelsentiment. It can be seen as a cascaded fine to coarse model andwas shown to be better than other document level models. Follow-ing this work which gives the motivation to study these dependen-cies, McDonald, Ryan et al.,[7] proposed a joint structured modelfor sentence and document level sentiment analysis. In this work,\nthey model document level sentiment using sentences level senti-\nment, sentence level sentiment using other sentences in the local con-text and document-level sentiment. This model was proven to be bet-\nter than cascaded models (both fine to coarse and coarse to fine) and\nindependent models at both the levels. Both the approaches assumefully labeled data. Minimum cut based approach [8] uses labeled sen-tences to predict document level sentiment, and the structured model[7] uses labeled documents and sentences to build the joint model. Asthe data available in web is naturally partially labeled, and it wouldinvolve human annotation to get fully labeled corpus, we do not as-sume the data to be fully labeled at any level. Secondly the above\nmentioned approaches capture the structural cohesion between sen-\ntences i.e., sentences occurring in physical proximity (next to eachother) and not the logical cohesion, i.e., sentences discussing aboutsimilar features. The approach utilizing dependency between sen-tences based on logical cohesion captured using anaphora and dis-course relations has been shown to perform better than other ap-proaches [9]. Also it is a sentence level approach and not a multi\ngrain approach as the proposed model. The issues with the exist-\ning approaches and the way proposed model differs from those ap-proaches is briefly explained below:\n•To the best of our knowledge, no framework has been proposed formulti grain sentiment analysis in a semi-supervised environment,ECAI 2010\nH. Coelho et al. (Eds.)\nIOS Press, 2010\n© 2010 The authors and IOS Press. All rights reserved.\ndoi:10.3233/978-1-60750-606-5-823823\ni.e., data is not fully labeled at any of the levels. 
Only a subset of\ndocuments are labeled with document level sentiment, and againonly a subset of sentences in a document are labeled in the formof ‘pros and cons’.\n•In the above mentioned approaches, dependencies captured eitherat structural or logical level, expect the text to be written in anideal manner for better performance. Structural cohesion based\napproaches expect sentences discussing related features to be writ-\nten next to each other, and the logical cohesion based approaches\ncaptured using discourse graph [9] expect the sentences to have\nexplicit anaphoric and discourse relations.\n•Another disadvantage of the discourse graph based approach is\nthat it needs anaphora resolution and discourse structure identi-fication to be performed for all input documents. It also ignores\nbackground domain knowledge available in the form of domain\ntaxonomy, knowledge bases like Wordnet, and fully relies on dis-course graph construction methods. Also with the availability ofhuge amount of text it is possible to perform knowledge engineer-ing and build a domain knowledge base which captures featuresof a domain and similarity between the features.\nIn this paper we propose an iterative classification algorithm whichperforms multi grain classification in a semi-supervised environ-ment. Intra-dependencies at sentence level are captured using domainknowledge base, i.e., relation between features of a domain. The ad-\nvantage is that it can be prebuilt, and instantiated for each document.\nThis can be seen as adding domain knowledge to avoid sparsity thatmight arise in discourse graph based techniques. Since the construc-\ntion of domain knowledge is not the focus of this paper, we do not\ndiscuss it in detail apart from briefly explaining the knowledge baseused for this work in the experimental section.\n3 Proposed Approach\nWe define the problem scenario and notation as follows. Let Cbe\na corpus of web based review documents. A subset of review doc-uments have a ‘pros and cons’ section, which has positive and neg-ative sentences. The ‘pros and cons’ sentences have phrases which\nare generally comma or semi-colon separated. An example sentence\ntaken from pros section of a laptop review in CNET website is givenbelow\n“Slim design; easy-to-use Intel Wireless Display built-in;speedy Core i5 processor . ”\nSince not all the sentences in the document are labeled, those docu-\nments are referred to as sentence level partially labeled data, S. Also\na subset of documents contain overall sentiment label of the product,\nin the form of star rating, or scores between 1 and 10, or binary labelssuch as YES or NO. In this paper only binary labels are considered.Since only a subset of documents have document level sentiment la-bel, they are referred to as document level partially labeled data, G.\nDocuments that either have ‘pros and cons’ section or document levelsentiment label or both are referred to as multi grain partially labeleddata,M. Documents that do not have any labels are called as unla-\nbeled data, U. In general, a document level sentiment label, can be\nseen as a function, Y\ndof sentence level sentiment labels. It is stated\nformally as follows: Ddenotes a document containing a set of sen-\ntences, D={s1,s2,s3, ..., s n}. Sentiment label is denoted by Ω,\nthis yields the below formulation\nΩ(D)=Yd(Ω(s i)),∀siwhere si/epsilon1DEachsjin turn can be seen as a set of sentiment terms Oj, where\nOj/epsilon1sj. 
Thus sentence level sentiment label, Ω(s j)can be seen as\na function, Yusof sentiment term level labels. Sentiment terms are\nwords that carry polarity, and the most commonly used class of words\nare adjectives. We refer to this as unigram based classification.I ti s\nstated as below\nΩ(s j)=Yus(Ω(O j)),∀O jwhere Oj/epsilon1sj\nPrecision issues in unigram based classification is due to the fact that\nlexicon Wis not domain and context specific [3].\n•Domain adaptation : This includes polarity mismatch of terms\nin the lexicon for different domains, and also expansion ofthe lexicon for evolving domain oriented sentiment terms. Forexample, it is not possible to predict the sentiment of the term\n“rough” in the sentence “...surface is rough... ”, independent of\nthe domain. It will be negative in the case of products like camera,\nand positive in the case of tyre. General lexicon might fail in\nthis case depending on the domain. Also, we pose the problem\nof dealing with missing sentiment terms as one of improvingprecision, by adding the sentiment terms found in the corpus tothe lexicon and assigning arbitrary labels to it. So the polarity hasto be relearned using the evidences in multi grain partially labeleddataset M.\n•Context specificity issues : Set of sentiment terms have differ-\nent sentiments in different context, i.e., when they modify differ-ent target features. For example, the same sentiment term “huge”\nwill be positive in “huge win”, and negative in “huge loss”. Since\nit is not possible to capture the context using the sentiment termalone, the unigram based classification faces precision issues inthese cases.\nAlternatively, s\njcan be seen as a collection of constituent target\nfeature term and sentiment term pairs, thus sentiment label at sen-tence level Ω(s\nj)can be seen as a function, Ytsof tuple’s labels\nΩ(F j,O j). We refer to it as tuple based sentence classification.I ti s\ngiven formally as below\nΩ(s j)=Yts(Ω(F j,O j)),∀(F j,O j)where (Fj,O j)/epsilon1s j\nNote that “tuple” and “phrase” have the same meaning. Tuple is usedin the proposed model for convenient usage of the aforementionedpair notation. The same examples, “huge win” and “huge loss”,h a v e\ndifferent tuples (win, huge) and(loss, huge). And the classifica-\ntion is based on these tuples. Kaji and Kitsuregawa [4] extract ‘prosand cons’ phrases from huge collection of Japanese HTML docu-\nments whose polarity is computed using chi-square and PMI values.\nThe phrases are used to classify sentences, and the method showed\nhigh precision but low recall because of sparsity, since it is tough to\ncollect data for all possible phrases. Thus the proposed model must\nsolve the domain adaptation, context specificity, and sparsity issuesin order to build a framework which has proper balance between pre-cision and recall. The proposed model employs tuple based sentenceclassification as the base classifier, and iteratively improves its recallwithout compromising much of the precision. It uses unigram evi-dence for sentiment classification when there is no evidence at tuplelevel. The model can be seen as an undirected graph which containsdocument (D), sentence(s) and tuple level(t) nodes, in which the tu-ple classification model combines evidences at tuple and unigram\nlevel. This is given by the backoff inference procedure in Algorithm\n1. Henceforth, the notation of (F\nj,Oj)is used to mention a tupleS.Shivashankar andB.Ravindr an/MultiGrainSentiment Analysis using Collective Classification 824\ntj. 
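As a rough illustration of this tuple representation, the following sketch pairs each sentiment term with the nearest preceding feature term and attaches a "NOT" prefix when a negating word intervenes. The word lists and the adjacency heuristic are invented stand-ins for illustration, not the extraction method used by the authors.

NEGATIONS = {"not", "never", "hardly", "except"}      # illustrative hand-compiled list
FEATURE_TERMS = {"steering", "engine", "sound clarity", "battery life"}   # e.g. from a domain knowledge base
SENTIMENT_TERMS = {"good", "bad", "quick", "firm", "rubbery", "comfy"}    # e.g. from a general lexicon

def extract_tuples(sentence):
    """Very rough (feature term, sentiment term) pairing with simple negation handling."""
    tokens = sentence.lower().replace(".", "").split()
    tuples, current_feature, negated = [], None, False
    for i, tok in enumerate(tokens):
        two_word = " ".join(tokens[i:i + 2])
        if two_word in FEATURE_TERMS:
            current_feature, negated = two_word, False
        elif tok in FEATURE_TERMS:
            current_feature, negated = tok, False
        elif tok in NEGATIONS:
            negated = True
        elif tok in SENTIMENT_TERMS and current_feature is not None:
            label = ("NOT " if negated else "") + tok
            tuples.append((current_feature, label))
            negated = False
    return tuples

print(extract_tuples("Sound clarity is not good."))   # [('sound clarity', 'NOT good')]
print(extract_tuples("The steering is firm."))        # [('steering', 'firm')]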
The following section 3.1 explains the observations and intuitions\nbehind the proposed model, which helps in bridging the gap between\nunigram and tuple based sentence classification techniques.\n3.1 Intuition\nThe following are the observations that act as the base for proposedmodel. Note that we assume to have the background domain knowl-edge that has the set of features and similarity between the features.\nWe denote this knowledge base as K\nb, and the lexicon of sentiment\nterms (say for instance General Inquirer) is denoted as W. The effect\nof negating terms such as “not”, “except” is handled using a hand\ncompiled list. For example “sound clarity is not good” is captured\nas(sound clarity, NOT good)and the label is reversed accord-\ningly.Observation 1 - Simple propagation This observation shows how\nmore tuples can be learned compared to Kaji and Kitsuregawa’s ap-proach [4], which just uses phrases from ‘pros and cons’ sentences.A feature has the same polarity in a review. Due to the fact that thesame feature can be mentioned in multiple sentences, this leads to\nmore interesting cases where more tuples can be learned than just\nusing ‘pros and cons’ sentences. Those cases are given below\n•Case 1: In a document’s ‘pros and cons’ section, the sentimentterm used to modify a feature term will not always be identi-cal to the one used in detailed review section. For example tuplein pros section is (steering, quick ), while the review contains\n(steering, firm ). This helps to infer that (steering, firm )is\npositive.\n•Case 2: The tuples in ‘pros and cons’ section may not be al-ways complete. It can have only the feature term specified. Forexample (touchscreen interface, NULL) in pros section, the\nreview part has (touchscreen interface, comfy ). From this\n(touchscreen interface, comfy )can be learned as positive.\nNULL denotes the absence of sentiment term in the tuple.\nObservation 2 : Intra tuple label propagation Tuples will\nhave same sentiment in a domain across review documents. So\na tuple already seen as positive or negative can be used to label\nfuture occurrences of the tuple in the corpus. Not just identical\ntuples, but also the ones that share similar features. For example,(navigation system, reachable )can be labeled if the tuple\n(center stack, reachable) is already labeled. The similarity is\nnot just based on parent-child relationships between features, italso includes synonyms or substitutable terms, such as “looks” and\n“appearance”. This propagation is stated formally in the backoff\nmodel based inferencing procedure, which is referred to as backoffinference procedure. The lexicon with labeled tuples in the corpusis denoted as L\nt, also note that the lexicon is updated through the\nprocess when labels of more tuples are learned.Observation3:T op-down label propagation Not all sentiment\nterms require domain and context information for classification.Terms like “good”, “wonderful” are positive independent of domain.So the general lexicon W, has a mix of domain independent and\ndomain dependent sentiment terms. In order to avoid domain\nadaptation issues, the words’ label must be relearned for the target\ndomain using the multi grain partially labeled dataset M. The\npredominant label of the sentences and documents it occurs in, is\ntaken as the sentiment terms’ label. 
The evidence at sentence and document level is combined using linear interpolation, as given below:

Ω(Ok) = argmax_l ( w_s · P_s(Ok, l) + w_d · P_d(Ok, l) )

P_s(Ok, l) = N_s(Ok, l) / N_s(Ok)

P_d(Ok, l) = N_d(Ok, l) / N_d(Ok)

where l denotes the label space, l ∈ {+, −}, and w_s, w_d denote the weights for sentence and document level evidence, with w_s + w_d = 1. P_s(Ok, l) is the sentence level evidence that the sentiment term Ok has the label l; N_s(Ok, l) denotes the number of sentences containing Ok that are labeled l, and N_s(Ok) the number of sentences containing Ok. Similarly, P_d(Ok, l), N_d(Ok, l) and N_d(Ok) are defined at document level. As the number of sentences required to reliably predict a sentiment term's label is smaller than the number of documents required, we treat sentence level evidence as more reliable than document level evidence. So the weighting is formulated as follows:

N_s(Ok, l) > 0 → { w_s = 1, w_d = 0 }

N_s(Ok, l) = 0 → { w_s = 0, w_d = 1 }

This can be seen as top-down label propagation, where sentences and documents are used to relearn the polarity of sentiment terms. Note that relearning the polarity of words in W using M does not fully solve the domain adaptation issue, since it depends on the amount of labeled data available. We denote the lexicon with labeled unigrams, which is built using W and M, as Lu. Thus the unknown label of a tuple (Fq, Oq) can be inferred based on Lt and Lu using a backoff model. The intuition is that Lu can be used to infer the sentiment of a tuple when it cannot be inferred using Lt. In Algorithm 1, Llt denotes the label l given by Lt for the input tuple (tuple based classification); similarly, Llu denotes the label l given by Lu for the input sentiment term (unigram based classification). If the tuple has evidence to be classified, then its label is committed. Also, it is obvious that a label identified using committed labels of tuples is more reliable than one identified using uncommitted labels of tuples or prediction using unigrams. Thus we assign a confidence flag to each prediction. If the prediction is not confident, then the labels are further refined using label smoothing, which is given in the next observation.

Observation 4 : Label Smoothing The intuition is that similar features tend to have the same sentiment label in a single review document. Feature terms used in 'pros and cons' need not be identical to the ones given in the detailed review section; it might have similar features. For example, (performance, good) in the pros section would imply that the tuple (engine, high-revving) in the detailed review section is positive, as engine and performance are similar features. This also applies to any labeled tuple tk in a document, where the tuples ti whose target feature terms Fi are similar to Fk will have more probability for tk's label. This, along with observation 2, defines the intra-dependency at the fine-grained level, where the neighborhood is defined by the domain knowledge base Kb and is instantiated for each document. Note that the difference between the intra tuple label propagation discussed in observation 2 and label smoothing is that observation 2 expects Fi and Fk to be similar and Oi and Ok to be identical, whereas observation 4 only expects Fi and Fk to be similar. Also, observation 2 applies to neighborhoods between features within a document and between documents, while the neighborhood structure in observation 4 applies to a single document only.
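Before moving on, the top-down relearning rule of Observation 3 can be illustrated with a small sketch; the counts are hypothetical and merely show how the weighting switches between sentence-level and document-level evidence.

def relearn_polarity(counts_sent, counts_doc):
    """Relearn a sentiment term's label from corpus counts (Observation 3).
    counts_sent / counts_doc map a label to the number of labeled sentences /
    documents containing the term with that label; sentence evidence is preferred."""
    n_sent = sum(counts_sent.values())
    n_doc = sum(counts_doc.values())
    w_s, w_d = (1.0, 0.0) if n_sent > 0 else (0.0, 1.0)
    def score(label):
        p_s = counts_sent.get(label, 0) / n_sent if n_sent else 0.0
        p_d = counts_doc.get(label, 0) / n_doc if n_doc else 0.0
        return w_s * p_s + w_d * p_d
    return max(("+", "-"), key=score)

# Hypothetical counts for a term such as "rough" in a tyre-review corpus:
print(relearn_polarity({"+": 7, "-": 2}, {"+": 40, "-": 35}))   # '+', sentence evidence wins
print(relearn_polarity({}, {"+": 3, "-": 12}))                  # '-', falls back to documents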
Also, observation 1 can be called a special case of observation 4, where the identity or repetition of a feature term is used to propagate labels.

Observation 5 : Bottom-up label propagation An unknown document level sentiment label is disambiguated using known sentence level labels. This follows from the intuition that a product or overall topic of discussion should carry the same sentiment as the majority of its features. For example, if every aspect of a camera has positive feedback, then it is obvious that the camera should also have positive feedback.

Algorithm 1 Backoff Inference Procedure(Fq, Oq)
Input: Lt, Lu, Kb
Output: Label of (Fq, Oq)
  Label ← UNK
  Label_l ← 0   // count of neighbors with label l whose labels are not committed
  CLabel_l ← 0  // count of neighbors with label l whose labels are committed
  Confidence ← Low
  if (Fq, Oq) in Lt AND committed then
    Label ← Llt((Fq, Oq))
    Confidence ← High
  else
    Fk ← Neighbors(Fq, Kb)
    for (Fk, Ok) in Lt do
      if Oq == Ok then
        if (Fk, Ok) is committed then
          CLabel_l ← CLabel_l + Llt((Fk, Ok))
        else
          Label_l ← Label_l + Llt((Fk, Ok))
        end if
      end if
    end for
    if CLabel_l is NOT the zero vector then
      Label ← argmax_l(CLabel_l)
      Confidence ← High
    else
      Label ← argmax_l(Label_l)
    end if
  end if
  if Label == UNK then
    Label ← Llu(Oq)
  end if
  Return Label, Confidence

Observations 3 and 5 define the inter-dependency between the finer and coarser levels. The formulation given in section 3 is rewritten incorporating the above mentioned observations. It is stated as follows: the unknown sentence level sentiment for sj is given as a function of tuple level labels

Ω(sj) = Yts(Ω(Fj, Oj)), ∀(Fj, Oj) where (Fj, Oj) ∈ sj

where the tuple level label is predicted using the backoff inferencing procedure and the labels of tuples with similar features in the same review document (label smoothing). The neighbors of the tuple tj within a document are referred to as its local context, denoted by Context(tj). Since the backoff model relies on M to build both tuple based and unigram based classification, the tuple level label Ω(tj) is defined as a function Yc of M and the labels of Context(tj):

Ω(tj) = Yc(M, Ω(Context(tj)))

Note that this Context given by neighbors is different from the context specificity issues discussed earlier. And the document label is a function of sentence level labels, as given in section 3.

3.2 Collective Classification Framework

Given: Semi-supervised environment where the datapoints are partially labeled at both coarse and fine grained levels - document and sentence level respectively.
Target: Propagate labels and make M fully labeled at both coarse and fine grained levels.
Procedure: A document is seen as a function of sentence level labels, and a sentence can in turn be seen as a function of tuple level labels. So the entire corpus can be posed as an undirected graph (V, E), where E is the set of edges and the nodes in V correspond to different entities - document, sentence and tuples. A tuple has target features and sentiment terms, and with the domain knowledge base a neighborhood structure is formed within a document and between documents as mentioned above. Since not all the nodes are labeled, the goal is to utilize the information available in the overall graph structure and fully label the nodes. An example neighborhood structure for two documents D1 and D2 is given in Figure 1; we do not explain the example in detail, since the idea is to show the graph structure.
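A rough Python rendering of the backoff idea in Algorithm 1 above, using plain dictionaries for Lt, Lu and the knowledge-base neighborhood, is given below. The data structures and the toy entries are simplifications for illustration, not the authors' implementation.

def backoff_inference(feature, sentiment, tuple_lex, unigram_lex, neighbors):
    """Simplified backoff inference in the spirit of Algorithm 1.
    tuple_lex:   (feature, sentiment) -> (label, committed?)   (plays the role of Lt)
    unigram_lex: sentiment term       -> label                 (plays the role of Lu)
    neighbors:   feature -> set of similar features            (from the knowledge base Kb)."""
    entry = tuple_lex.get((feature, sentiment))
    if entry and entry[1]:                      # committed tuple: highest confidence
        return entry[0], "high"

    committed_votes, votes = {}, {}
    for f in neighbors.get(feature, set()):
        e = tuple_lex.get((f, sentiment))       # same sentiment term, similar feature
        if e is None:
            continue
        bucket = committed_votes if e[1] else votes
        bucket[e[0]] = bucket.get(e[0], 0) + 1
    if committed_votes:
        return max(committed_votes, key=committed_votes.get), "high"
    if votes:
        return max(votes, key=votes.get), "low"

    # Back off to the unigram lexicon when no tuple-level evidence exists.
    return unigram_lex.get(sentiment, "UNK"), "low"

# Toy data (all entries hypothetical):
tuple_lex = {("center stack", "reachable"): ("+", True)}
unigram_lex = {"reachable": "+", "rubbery": "-"}
neighbors = {"navigation system": {"center stack"}}
print(backoff_inference("navigation system", "reachable", tuple_lex, unigram_lex, neighbors))  # ('+', 'high')
print(backoff_inference("gear shifter", "rubbery", tuple_lex, unigram_lex, neighbors))         # ('-', 'low')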
Classical learn-ing algorithms train a classifier on the labeled datapoints, and use itto classify unlabeled datapoints. But they ignore the valuable neigh-\nborhood structure available. Collective classification is a frameworkwhich combines both classical learning method and neighborhood\nstructure based classification in an iterative procedure [11]. One ofthe common inferencing procedures is relaxation labeling, which isa popular technique in image processing algorithms. The intuition\nbehind the method is that pixel’s probability to be assigned a label\nincreases, given that its neighbors are assigned the same label. Asimilar intuition is employed here.\nFigure 1. Example showing neighborhood for two documents\nThe notations for the iterative procedure are re-established and\nsimplified as follows\nl- Sentiment label space, which is {+,−}\nNi(s)- Number of subjective sentences in the document Di\nNil(s)- Number of subjective sentences with label linDi\nNj(t)- Number of tuples in the subjective sentence sj\nNjl(t)- Number of tuples in the subjective sentence sjwith label l\nLu- Lexicon with labeled sentiment terms from the lexicon W,\nwhich are relearned for the domain using M.Luhas the structure\nwith 5 elements [O, Flag, Label, c +,c−], where O denotes the sen-\ntiment term, Flag has commit and not-commit status which meanswhether the class label is committed or not. Label gives the class la-bel,c\n+andc−denote the count of occurrence of tuple in positive and\nnegative contexts during the iteration i.e., number of times classifiedas positive and negative respectively. The querying that was men-tioned in backoff model, L\nlu(O)which denotes the label returnedS.Shivashankar andB.Ravindr an/MultiGrainSentiment Analysis using Collective Classification 826\nusingLufor the sentiment term Ois done as follows.\nIf(Flag ==commit) →Return Label\nElseif (Flag ==uncommit )→Return argmax l(cl)\nLt- Lexicon with labeled tuples initialized using ‘pros and\ncons’ section in M.Lthas the structure with 6 elements\n[F, O, Flag, Label, c +,c−], where F denotes the feature term,\nand other 5 tuples are similar to those in Lu. The querying procedure\nin backoff model and the way counts are updated during iterative\nprocedure is similar to what is done for Lu.\nΩ(t) - Sentiment label of a tuple\n(Ω(t), Confidence )- Pair denotes sentiment label of a tuple and\nConfidence of inference given by backoff inference procedureΩ(s) - Sentiment label of a sentence is defined as the function\nof labels of constituent tuples. The function used in this work ismaxlabel (x), where xis a collection of labels. maxlabel (x)\nreturns the label which occurs predominantly in the input x, i.e., a\nsentence is labeled with the maximum occurring label of its tuplesas given below\nΩ(s\nj)=maxlabel (Ω(F j,O j)),∀(F j,O j)\nwhere (Fj,O j)/epsilon1s j,Ω(F j,O j)/epsilon1l.\nNote that any function can be used in place of maxlabel , for\ninstance a linear classification model. Since the focus of this task\nis to show how the proposed model improves the accuracy of anyclassical learning algorithm and not to propose a new fine-grainedor coarse-grained classifier, the simplest classification function ischosen.\nΩ(D)- Sentiment label of a document is given by bottom up label\npropagation of the constituent sentences. 
maxlabel is the function\nemployed again, thus giving the following formulation\nΩ(D)=maxlabel (Ω(s\ni)),∀siwhere si/epsilon1D\nγtli- Number of neighboring tuples for a tuple tin a document Di,\nwith the sentiment label l.\n3.3 Multi Grain Iterative Classification Algorithm\nMulti grain iterative classification is a joint model that helps in pre-\ndicting the unknown sentiment of sentences and documents to getcompletely labeled dataset, and also acquire more evidence for uni-\ngrams and tuples to update the lexicon L\nuandLt.\nInitialization :LuandLtare initialized. Only subjective sentences\nin a document are taken for analysis, and the presence of adjectives is\ntaken as an indicator for subjectivity [10]. Since the iterative proce-dure is shown to have same performance for any arbitrary ordering ofnodes [11, 12], we choose an arbitrary ordering for document nodesand the natural order of occurrence within a document for sentences.\nIterative procedure is given in Algorithm 2\n4 Experimental results\nThe review articles are taken from websites like CNET, Epinions and\nEdmunds which contain Automobile reviews. CNET’s and Epinions’articles have both pros, cons section and document level label. Ed-munds’ test drives articles contain pros, cons section only and notdocument level label. 100 articles were chosen arbitrarily from eachwebsite , thus forming a dataset of 300 articles. The class distribu-tion of document labels is — 120 positive and 80 negative documentsAlgorithm 2 Iterative procedure\nrepeat\nforDocument DiinMdo\nforSentence sjinDido\nforTuple tkinsjthat is NOT committed do\n(Ω(t k), Confidence )←Backoff Model(t k)\nifConfidence ==High ORFkis labeled in Dithen\nPropagate and commit labels, Lt←Lt∪(tk,Ω(t k))\nend if\nend for\nend forrepeat\nforSentence s\njinDido\nforTuple tkinsjthat is NOT committed do\nΩ(t k)←argmax lγtli\nend for\nend for\nuntil Convergence of labels\nUpdate counts in Lu,Lt\nforSentence sjinDido\nP(l|sj)=Njl(t)\nNj(t)\nΩ(s j)=argmax lP(l|sj)\nend for\nP(l|D i)=Nil(s)\nNi(s)\nΩ(D i)=argmax lP(l|D i)\nend for\nuntil Convergence\nCommit the current labels in Lu,Lt\namong 200 documents which have document level label, and the re-\nmaining 100 have unknown labels. We do not use the tuple basedapproach using the ‘pros and cons’ phrases as a baseline approach,since Kaji and Kitsuregawa [4] have shown that the method has highprecision but low recall. We define the baseline approaches belowTrivial Baseline : For any document D\ni, which has document level\nlabel, propagate the label top-down such that all unknown labels ofthe sentences are labeled with the document’s label. It is given as\nΩ(s\nj)=Ω ( Di),∀sjwhere sj/epsilon1D i\nIt was tested on 200 labeled documents among the 300 input docu-ments. Since other baseline approaches and the proposed model aretested on 300 documents, the results of trivial baseline approach ispresented separately. It can be seen as unrestricted top-down labelpropagation. The results are given in Table 1.\nClass P R F1\n+ 0.313 0.272 0.291\n- 0.247 0.214 0.229\nTable 1. Trivial classifier\nLexical classifier at document level : Using the words in W, clas-\nsify documents as positive or negative depending on the count. 
4 Experimental results

The review articles are taken from websites like CNET, Epinions and Edmunds which contain automobile reviews. CNET's and Epinions' articles have both a pros and cons section and a document-level label. Edmunds' test-drive articles contain a pros and cons section only and no document-level label. 100 articles were chosen arbitrarily from each website, thus forming a dataset of 300 articles. The class distribution of document labels is 120 positive and 80 negative documents among the 200 documents which have a document-level label; the remaining 100 have unknown labels. We do not use the tuple-based approach using the 'pros and cons' phrases as a baseline approach, since Kaji and Kitsuregawa [4] have shown that the method has high precision but low recall. We define the baseline approaches below.

Trivial Baseline: For any document D_i which has a document-level label, propagate the label top-down such that all unknown labels of the sentences are labeled with the document's label. It is given as
  Ω(s_j) = Ω(D_i), ∀s_j with s_j ∈ D_i
It was tested on the 200 labeled documents among the 300 input documents. Since the other baseline approaches and the proposed model are tested on 300 documents, the results of the trivial baseline approach are presented separately. It can be seen as unrestricted top-down label propagation. The results are given in Table 1.

Table 1. Trivial classifier
Class  P      R      F1
+      0.313  0.272  0.291
-      0.247  0.214  0.229

Lexical classifier at document level: Using the words in W, classify documents as positive or negative depending on the count. Let C_p be the count of positive terms in the document and C_n be the count of negative terms in the document; the documents are then classified using argmax_i(C_i).

Lexical classifier at sentence level: Using the words in W, classify sentences as positive or negative depending on the count, using a method similar to the one described above for the document level.

Proposed model: The multi grain collective classification model is applied on the documents, after initializing L_u and L_t. Here we briefly describe the knowledge base used in this work. It has a domain terms list built using a collection of review articles. Nouns and noun phrases are considered as domain terms and are put into a partial hierarchy using WordNet; since WordNet does not cover all the domain terms and jargon, the remaining terms are inserted at some node in the hierarchy based on distributional similarity computed using PMI of the second-order co-occurrence vector.

The results comparing the methods for sentence and document classification are given in Table 2 and Table 3. P denotes precision, R denotes recall and F1 denotes the F measure, which is the harmonic mean of precision and recall. The experimental evaluation is based on comparison of predicted labels against human-labeled documents and sentences.

Table 2. Sentence level classification
Class  P      R      F1     Method
+      0.381  0.334  0.355  Lexical classifier
-      0.278  0.245  0.260  Lexical classifier
+      0.73   0.61   0.664  Proposed model
-      0.63   0.56   0.593  Proposed model

Table 3. Document level classification
Class  P      R      F1     Method
+      0.55   0.39   0.456  Lexical classifier
-      0.225  0.30   0.257  Lexical classifier
+      0.78   0.734  0.756  Proposed model
-      0.56   0.498  0.527  Proposed model

The following are the observations from the results:

• The main contribution of the proposed model is the use of the neighborhood structure for tuples within and between documents, by which the accuracy of any ordinary classifier can be improved. The classifier chosen in this work is an ordinary lexical classifier which takes the maximum label of the tuples. The cases where sentence-level classification failed in the proposed model are those that needed other contextual evidence to be taken into account. An example case is given below:
  "The question is whether the car itself is as good as the wrapper it comes in"
This indicates that with a better, state-of-the-art classifier in place, the additional strength provided by the framework should enable better performance.

• From the results, where the proposed framework outperforms the baseline classifier, it is clear that it can improve any local classifier's performance when used within this framework. Though the final results obtained are on par with state-of-the-art methods [6, 13], it has to be noted that this method is not a fully labeled approach and uses only a small amount of labeled data at the sentence level.

5 Conclusion and Future Work

In this paper, we have proposed a multi grain collective classification framework which uses partially labeled data at the document and sentence level and converts it into a fully labeled dataset. Though it is transductive, the unigram and tuple lexicons learned in the iterative procedure can be used for inductive labeling on a test set.
The key contributions of the work include the following: a) a collective classification framework for multi grain sentiment analysis which requires very little labeled data and improves the performance of any local classifier used in the iterative procedure; b) the use of the sentiment term lexicon and the domain knowledge base to handle sparsity and improve recall without compromising precision. Future work is as follows: a) In this work a 0-1 neighborhood is used, where all the neighbors of a node are considered equally important, while practical knowledge about any domain shows that not all neighboring nodes influence a node equally. For example, "novelty of story" influences "quality of movie" the most in a movie review. So, the potentials of the edges must be automatically learned and used in the iterative procedure. b) In this work a prebuilt domain knowledge base was used. In order to handle new domains at scale, an automated way to build the domain knowledge base must be investigated.

ACKNOWLEDGEMENTS

This work was partially funded by GM R&D PO 910000121410.

REFERENCES

[1] Peter D. Turney. 2002. Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. In Proc. ACL.
[2] V. Sindhwani and P. Melville. 2008. Document-word co-regularization for semi-supervised sentiment analysis. In Proc. ICDM.
[3] Peter D. Turney and Michael L. Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems.
[4] Nobuhiro Kaji and Masaru Kitsuregawa. 2007. Building Lexicon for Sentiment Analysis from Massive Collection of HTML Documents. In Proc. EMNLP.
[5] B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proc. EMNLP.
[6] P. Melville, W. Gryc, and R. D. Lawrence. 2009. Sentiment analysis of blogs by combining lexical knowledge with text classification. In Proc. SIGKDD.
[7] Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeffrey C. Reynar. 2007. Structured Models for Fine-to-Coarse Sentiment Analysis. In Proc. ACL.
[8] B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proc. ACL.
[9] S. Somasundaran, G. Namata, J. Wiebe, and L. Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In Proc. EMNLP.
[10] Bing Liu. 2006. Web Data Mining. Springer.
[11] J. Neville and D. Jensen. 2000. Iterative Classification in Relational Data. In Proc. AAAI.
[12] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, and Lise Getoor. Collective Classification in Network Data. Technical Report CS-TR-4905 and UMIACS-TR-2008-04.
[13] G. Qiu, B. Liu, J. Bu, and C. Chen. 2009. Expanding domain sentiment lexicon through double propagation. In Proc. IJCAI.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "3PGx6mEF0-y", "year": null, "venue": "Bull. EATCS 2001", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=3PGx6mEF0-y", "arxiv_id": null, "doi": null }
{ "title": "Hardness Results and Efficient Appromixations for Frequency Assignment Problems and the Radio Coloring Problem", "authors": [ "Dimitris Fotakis", "Sotiris E. Nikoletseas", "Vicky G. Papadopoulou", "Paul G. Spirakis" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "5XznFusPRf6", "year": null, "venue": "E-Commerce Agents 2001", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=5XznFusPRf6", "arxiv_id": null, "doi": null }
{ "title": "Security Issues in M-Commerce: A Usage-Based Taxonomy", "authors": [ "Suresh Chari", "Parviz Kermani", "Sean W. Smith", "Leandros Tassiulas" ], "abstract": "M—commerce is a new area arising from the marriage of electronic commerce with emerging mobile and pervasive computing technology. The newness of this area—and the rapidness with which it is emerging—makes it difficult to analyze the technological problems that m–commerce introduces—and, in particular, the security and privacy issues. This situation is not good, since history has shown that security is very difficult to retro—fit into deployed technology, and pervasive m– commerce promises (threatens?) to permeate and transform even more aspects of life than e–commerce and the Internet has. In this paper, we try to begin to rectify this situation: we offer a preliminary taxonomy that unifies many proposed m–commerce usage scenarios into a single framework, and then use this framework to analyze security issues.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xI_dcaaBNA", "year": null, "venue": "EC2012", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=xI_dcaaBNA", "arxiv_id": null, "doi": null }
{ "title": "Optimization with demand oracles.", "authors": [ "Ashwinkumar Badanidiyuru", "Shahar Dobzinski", "Sigal Oren" ], "abstract": "We study combinatorial procurement auctions, where a buyer with a valuation function v and budget B wishes to buy a set of items. Each item i has a cost ci and the buyer is interested in a set S that maximizes v(S) subject to ∑i∈Sci ≤ β. Special cases of combinatorial procurement auctions are well-studied problems from submodular optimization. In particular, when the costs are all equal (cardinality constraint), a classic result by Nemhauser et al shows that the greedy algorithm provides an e/e-1 approximation. Motivated by many papers that utilize demand queries to elicit the preferences of agents in economic settings, we develop algorithms that guarantee improved approximation ratios in the presence of demand oracles. We are able to break the e/e-1 barrier: we present algorithms that use only polynomially many demand queries and have approximation ratios of 9/8+∈ for the general problem and 9/8 for maximization subject to a cardinality constraint. We also consider the more general class of subadditive valuations. We present algorithms that obtain an approximation ratio of 2+∈ for the general problem and 2 for maximization subject to a cardinality constraint. We guarantee these approximation ratios even when the valuations are non-monotone. We show that these ratios are essentially optimal, in the sense that for any constant ∈>0, obtaining an approximation ratio of 2-∈ requires exponentially many demand queries.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "8R3aw24opTn", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=8R3aw24opTn", "arxiv_id": null, "doi": null }
{ "title": "Baseline algorithm", "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "2V2qmAt0WSk", "year": null, "venue": "ECAI2014", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-419-0-1049", "forum_link": "https://openreview.net/forum?id=2V2qmAt0WSk", "arxiv_id": null, "doi": null }
{ "title": "Probabilistic Active Learning: A Short Proposition.", "authors": [ "Georg Krempl", "Daniel Kottke", "Myra Spiliopoulou" ], "abstract": "Active Mining of Big Data requires fast approaches that ideally select for a user-specified performance measure and arbitrary classifier the optimal instance for improving the classification performance. Existing generic approaches are either slow, like error reduction, or heuristics, like uncertainty sampling. We propose a novel, fast yet versatile approach that directly optimises any user-specified performance measure: Probabilistic Active Learning (PAL). PAL follows a smoothness assumption and models for a candidate instance both the true posterior in its neighbourhood and its label as random variables. By computing for each candidate its expected gain in classification performance over both variables, PAL selects the candidate for labelling that is optimal in expectation. PAL shows comparable or better classification performance than error reduction and uncertainty sampling, has the same asymptotic linear time complexity as uncertainty sampling, and is faster than error reduction.", "keywords": [], "raw_extracted_content": "Probabilistic Active Learning: A Short Proposition\nGeorg Krempl1and Daniel Kottke1and Myra Spiliopoulou1\nAbstract. Active Mining of Big Data requires fast approaches that\nideally select for a user-specified performance measure and arbitrary\nclassifier the optimal instance for improving the classification per-formance. Existing generic approaches are either slow, like error re-duction, or heuristics, like uncertainty sampling. We propose a novel,fast yet versatile approach that directly optimises any user-specifiedperformance measure: Probabilistic Active Learning (PAL).\nPAL follows a smoothness assumption and models for a candidate\ninstance both the true posterior in its neighbourhood and its labelas random variables. By computing for each candidate its expected\ngain in classification performance over both variables, PAL selects\nthe candidate for labelling that is optimal in expectation. PAL showscomparable or better classification performance than error reductionand uncertainty sampling, has the same asymptotic linear time com-plexity as uncertainty sampling, and is faster than error reduction.\n1 INTRODUCTION\nIn some applications of machine learning to large data pools and fastdata streams, features are cheap but labels are costly, for example dueto human annotation efforts [5]. This motivates active learning (AL)\n[8] approaches that actively select the instance, which –once incor-porated into the training set– will yield the highest gain in terms of aclassification performance measure. Ideally, such an approach allowsa) optimisation of an arbitrary, user-defined performance measure, b)is fast and scalable , and c) is usable with any classifier technology.\nEach of the existing approaches offers some of the above quali-\nties, but not a combination of them in a single approach. We propose\na novel, probabilistic active learning (PAL) approach\n2that fills this\ngap. As expected error reduction (ER) [7], PAL is not limited to aparticular classifier technology or performance measure. Like fastuncertainty sampling (US) [6], PAL requires only linear asymptotictime for selecting the best instance from a pool of labelling candi-dates. 
We present PAL in Section 2, before relating it to existing approaches in Section 3 and evaluating it in Section 4.

¹ Knowledge Management & Discovery Lab, Univ. Magdeburg, Germany, email: [georg.krempl|daniel.kottke|myra]@iti.cs.uni-magdeburg.de
² See the companion website: http://kmd.cs.ovgu.de/res/pal/

2 PROBABILISTIC ACTIVE LEARNING

We address the pool-based [9] active learning scenario for binary classifiers, where an active classifier has access to a pool of unlabelled instances U = {(x, .)}. Repeatedly, the best instance (x*, .) ∈ U is selected, its label y* is requested from an oracle, and it is moved from U to the set of labelled instances L.

Following the common smoothness assumption [3], we consider that an instance x influences the classification the most in its neighbourhood. Thus, the impact of an additional label primarily depends on the already obtained labels in its neighbourhood. We summarise these by their absolute number n and the share of positives p̂ therein, yielding the label statistics ls = (n, p̂). Here, n is obtained by counting the similar labelled instances for pre-clustered or categorical data, or approximated by frequency estimates such as kernel frequency estimates for smooth, continuous data. Thus, in x's neighbourhood, n expresses the absolute quantity of labelled information, whereas the density d_x of unlabelled instances quantifies the importance of this neighbourhood, i.e. the share of future classifications that will take place therein compared to other regions of the feature space.

Given a candidate instance (x, .) with ls and d_x, we want to compute the overall gain in classification performance if requesting its label. This gain also depends on the realisation of the candidate's label y and on the true posterior probability p of the positive class within the neighbourhood. Both values are unknown, so we use a probabilistic approach and model the candidate's label Y and the true posterior of the positive class P as random variables. This allows us to compute the expected value of the gain in performance over all different true posteriors and label realisations, which we denote as the probabilistic gain³ (pgain). Weighting the latter with d_x, we obtain an estimate of the impact of x's label on the overall classification performance. Subsequently, we select among all instances the one with the highest density-weighted probabilistic gain.

The pseudo-code below summarises PAL. Iterating over the candidate pool U (lines 2-6), for each candidate x one computes its label statistics ls_x = (n_x, p̂_x), its density weight d_x, and its probabilistic gain by numerical integration, which is then weighted by the density weight to obtain g_x. Finally, the candidate with the highest density-weighted probabilistic gain is selected (line 7).

1: function POOLBASEDPAL(U, L)
2:   for x ∈ U do
3:     (n_x, p̂_x) <- labelstatistics(x, L)
4:     d_x <- densityweight(x, L ∪ U)
5:     g_x <- pgain((n_x, p̂_x)) · d_x
6:   end for
7:   return x* <- argmax_{x ∈ U}(g_x)
8: end function

We propose to precompute d_x, as U ∪ L is static, and to use probabilistic classifiers to compute the absolute frequency estimates needed for ls.
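The selection loop is straightforward to express in code. The following sketch mirrors the pseudo-code above under assumptions of our own: label statistics come from Gaussian kernel frequency estimates over the labelled set, the density weight is a kernel density estimate over U ∪ L, and pgain is passed in as a function (one possible implementation is sketched after Eq. (4) below). None of these concrete choices are prescribed by the paper.

import numpy as np

def label_statistics(x, X_lab, y_lab, bandwidth=1.0):
    """Kernel frequency estimates (n, p_hat) in x's neighbourhood (our choice)."""
    if len(X_lab) == 0:
        return 0.0, 0.5
    w = np.exp(-0.5 * np.sum((X_lab - x) ** 2, axis=1) / bandwidth ** 2)
    n = float(w.sum())
    p_hat = float((w * y_lab).sum() / n) if n > 0 else 0.5
    return n, p_hat

def density_weight(x, X_all, bandwidth=1.0):
    """Kernel density estimate of x over the whole pool U ∪ L."""
    return float(np.exp(-0.5 * np.sum((X_all - x) ** 2, axis=1) / bandwidth ** 2).mean())

def select_candidate(X_pool, X_lab, y_lab, pgain):
    """Index of the pool instance with the highest density-weighted
    probabilistic gain (lines 2-7 of the pseudo-code)."""
    X_all = np.vstack([X_pool, X_lab]) if len(X_lab) else X_pool
    scores = [pgain(*label_statistics(x, X_lab, y_lab)) * density_weight(x, X_all)
              for x in X_pool]
    return int(np.argmax(scores))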
Thus, lines 3-4 are constant-time operations, but the probabilistic gain (pgain) computation deserves further discussion:

  pgain(ls) = E_p[ E_y[ gain_p(ls, y) ] ]                                        (1)
            = ∫_0^1 Beta_{α,β}(p) · Σ_{y ∈ {0,1}} Ber_p(y) · gain_p(ls, y) dp    (2)

³ We do this to differentiate it from the expected gain as in expected error reduction methods like [2], where expectation is solely over label outcomes.

Here, gain_p(ls, y) is the candidate's (x, .) performance gain given its label realisation y and the neighbourhood's true posterior p:

  gain_p(ls, y) = perf_p( (n·p̂ + y) / (n + 1) ) - perf_p(p̂)                     (3)

perf_p(p̂) is an arbitrary point-performance measure (e.g. accuracy), indicating the classification performance within the neighbourhood, given the true posterior p and a posterior estimate p̂ by the classifier.

Ber_p(y) is the probability of the Bernoulli-distributed random variable Y producing the label realisation y ∈ {0,1} (1 corresponding to a positive label), whose parameter p corresponds to the true posterior, which itself is the realisation of the Beta-distributed random variable P with parameters α = n·p̂ + 1 and β = n·(1 - p̂) + 1 and the resulting probability density function Beta_{α,β}(p). Note that this Beta-distribution and its particular parameters are the result of using a Bayesian approach that assumes a uniform prior g(p) for the true posterior probability and computes the normalised likelihood ω_ls(p) of p given the data in ls, that is:

  ω_ls(p) = L(p|ls) g(p) / ∫_0^1 L(ψ|ls) g(ψ) dψ = Beta_{α,β}(p)                 (4)

Thus the parameters α and β of the normalised likelihood correspond to the absolute numbers of positive and negative labels (plus one).
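To make Eqs. (1)-(4) concrete, here is a small sketch of the probabilistic gain computed by q-step numerical integration over p, with expected accuracy as the point-performance measure. The midpoint discretisation and the accuracy-based perf_p are our own illustrative choices; the paper fixes only the general form of the equations.

import numpy as np
from scipy.stats import beta as beta_dist

def perf_accuracy(p_true, p_est):
    """Expected accuracy in the neighbourhood when predicting the label
    suggested by the estimate p_est while p_true is the true posterior."""
    return p_true if p_est >= 0.5 else 1.0 - p_true

def pgain(n, p_hat, q=50, perf=perf_accuracy):
    """Probabilistic gain of Eq. (2): expectation over the true posterior
    p ~ Beta(n*p_hat + 1, n*(1 - p_hat) + 1) and the label y ~ Bernoulli(p)."""
    a, b = n * p_hat + 1.0, n * (1.0 - p_hat) + 1.0
    ps = (np.arange(q) + 0.5) / q          # midpoints of q integration steps
    total = 0.0
    for p in ps:
        w = beta_dist.pdf(p, a, b) / q     # Beta density times step width
        for y in (0, 1):
            p_y = p if y == 1 else 1.0 - p           # Ber_p(y)
            p_new = (n * p_hat + y) / (n + 1.0)      # updated estimate, Eq. (3)
            total += w * p_y * (perf(p, p_new) - perf(p, p_hat))
    return total

A function of this signature can be plugged in directly as the pgain argument of the select_candidate sketch shown earlier.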
3 DISCUSSION AND RELATED WORK

Our approach is related to expected error reduction (ER), first proposed by [4], where for each labelling candidate the expected reduction in classification error is computed. While in [4] closed-form solutions are derived for optimal data selection for two specific learning methods, [7] proposed a generic ER approach, both with respect to arbitrary performance measures and classifiers: using a Monte Carlo sampling approach, it estimates the performance on a labelled validation sample V, rather than integrating over the full feature distribution Pr(x) as in [4]. Furthermore, it uses the posterior estimate p̂ = P̂r(y|x) provided by the current classifier as a proxy for the true posterior Pr(y|x) that is required for the expectation over the label realisations y. However, [2] noted that this proxy is not reliable if only few labels are available (as is common in active learning) and requires regularisation approaches such as using Beta priors.

In contrast to ER, expectation in PAL is also over the true posterior p, and evaluation is done using the label statistics within an instance's neighbourhood, rather than simulating classifier updates and evaluating them on a validation sample. The latter makes ER prohibitively slow [8], as even for incremental classifiers its asymptotic time complexity is O(|V| · |U|). PAL's time complexity is O(|U| · q · 2) = O(|U|), as the probabilistic gain computation for each candidate in U according to Eq. (2) requires a constant number of q numerical integration steps (q = 50 was used in our experiments) and a summation over the two potential label outcomes {0,1}.

This is identical to the asymptotic time complexity of uncertainty sampling (US), proposed in [6]. US uses simple uncertainty measures [9], like sample margin, confidence, or entropy, as proxies for a candidate's value, and selects the candidate with maximal uncertainty. However, these proxies do not consider the number of similar instances, nor does US directly optimise a performance measure.

4 EXPERIMENTAL EVALUATION

We compare PAL to the error reduction approach proposed in [2], to uncertainty sampling proposed in [6] (using confidence [9]), and to random sampling. We use the synthetic Checkerboard dataset from [2], the Mammographic mass dataset from [1], and a synthetic dataset consisting of a Gaussian mixture model in 2d with varying training set sizes for speed-testing. For comparison with [2], we use a Parzen window classifier with pre-tuned bandwidth (0.1, 0.1, and 0.08, resp.). Evaluation was done by averaging the performance over 100 randomly generated partitionings into training and test subsets.

The results are shown in the figure below, where a) and b) are plots of the approaches' learning curves, and c) is a plot of the execution time relative to the pool size. Overall, PAL yields better classification performance than all other approaches, while its runtime only increases linearly in the pool size.

[Figure (plots not reproduced): (a) learning curves, accuracy vs. requested labels, on Checkerboard and (b) on Mammographic mass, for PAL, Chapelle, Uncertainty and Random; (c) execution time in seconds vs. training set size for PAL, Chapelle and Uncertainty, with a linear extrapolation. Caption: top left and right (a, b): learning curves on two datasets, early convergence to high values is favourable; bottom left (c): PAL's runtimes on a synthetic data set show a linear increase with dataset size.]

ACKNOWLEDGEMENTS

We thank Vincent Lemaire from Orange Labs, France, for the insightful discussion on this approach.

REFERENCES

[1] Arthur Asuncion and David J. Newman. UCI ML repository, 2013.
[2] Olivier Chapelle, 'Active learning for Parzen window classifier', in Proc. 10th Int. Workshop on AI and Statistics, pp. 49-56, (2005).
[3] Semi-Supervised Learning, eds. Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, MIT Press, 2006.
[4] David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan, 'Active learning with statistical models', J. of AI Research, 4, 129-145, (1996).
[5] Vivekanand Gopalkrishnan, David Steier, Harvey Lewis, and James Guszcza, 'Big data, big business: Bridging the gap', in Proc. 1st Int. Workshop on Big Data, Streams and Heterogeneous Source Mining, BigMine 2012, pp. 7-11. ACM, (2012).
[6] David D. Lewis and William A. Gale, 'A sequential algorithm for training text classifiers', in Proc. 17th ann. int. ACM SIGIR conf. on research and development in information retrieval, pp. 3-12, (1994).
[7] Nicholas Roy and Andrew McCallum, 'Toward optimal active learning through sampling estimation of error reduction', in Proc. 18th Int. Conf. on ML, ICML 2001, pp. 441-448,
Morgan Kaufmann, (2001).
[8] Burr Settles, 'Active learning literature survey', Computer Sciences Technical Report 1648, University of Wisconsin, USA, (2009).
[9] Burr Settles, Active Learning, number 18 in Synthesis Lectures on AI and ML, Morgan and Claypool Publishers, 2012.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "fNr1l0jbwZZ", "year": null, "venue": "ECAI2014", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-419-0-1109", "forum_link": "https://openreview.net/forum?id=fNr1l0jbwZZ", "arxiv_id": null, "doi": null }
{ "title": "Probabilistic Two-Level Anomaly Detection for Correlated Systems.", "authors": [ "Bin Tong", "Tetsuro Morimura", "Einoshin Suzuki", "Tsuyoshi Idé" ], "abstract": "We propose a novel probabilistic semi-supervised anomaly detection framework for multi-dimensional systems with high correlation among variables. Our method is able to identify both abnormal instances and abnormal variables of an instance.", "keywords": [], "raw_extracted_content": "Probabilistic Two-Level Anomaly Detection for\nCorrelated Systems\nBin T ong1and T etsuro Morimura2and Einoshin Suzuki3and Tsuyoshi Id ´e4\nAbstract. We propose a novel probabilistic semi-supervised\nanomaly detection framework for multi-dimensional systems with\nhigh correlation among variables. Our method is able to identify bothabnormal instances and abnormal variables of an instance.\n1 Introduction\nAnomaly detection is one of the most practical artificial intelligentproblems. It aims at recognizing unusual patterns within normal be-haviors. Unlike traditional anomaly detection whose task is to iden-tify the anomalous samples, we invent an anomaly detection frame-work that is capable of detecting the abnormal at both the variable\nand instance levels.\n5\nOne of pioneering studies on variable-level anomaly detection is\npresented in [2], in which a sparse Graphical Gaussian Model (GGM)[5] was shown to be effective. However, we found that GGM may failto achieve a fair performance for high correlated data. Most anomalydetection methods based on Principal Component Analysis (PCA)implicitly assume that abnormal patterns rarely span the normal sub-\nspace, which is generally referred to as a subspace with main vari-ances [3]. However, this kind of assumption does not always hold forthe high correlated data, since most of abnormal patterns are wrappedby normal patterns that lie along the direction of the main variance.\nIn this paper, by clarifying the relationship between GGM and\nProbabilistic PCA (PPCA), we propose a probabilistic model foranomaly detection at both the variable and instance levels. We cal-\nculate anomaly scores for both the variables and the instances.\n2 Problem Setting\nWe are given Nobserved samples represented by a centered matrix\nX=[x1,x2,..., xN]∈RD×N. Each sample xi(i=1,2,...,N )\nis denoted by a D-dimensional vector [x1,x2,...,x D]T. A label\nvector for the samples is defined as p=[p1,p2,...,p N], where\npiis the label of xi. In the practical setting of anomaly detection,\npi∈{1,2,3}in which 1,2and3represent normal label, abnormal\nlabel, and unknown labels, respectively. For the variables, we alsodefine a label matrix Vthe same size as X, in which the ij-th entry is\nset to be 1,2or3, if the corresponding variable is normal, abnormal,\nor in an unknown state, respectively. Our task is to identify whichvariables and which samples are in abnormal states.\n1Central Research Laboratory, Hitachi, email: [email protected]\n2IBM Research - Tokyo, email: [email protected]\n3Kyushu University, email: [email protected]\n4IBM T.J. Watson Research Center, email: [email protected]\n5This work was mainly done during Bin Tong’s internship at IBM Research\n- Tokyo.3 Relationship between GGM and PPCA\nIn GGM, D-dimensional random variables are modeled by a Gaus-\nsian distribution, which is associated with a graph with Dnodes\n(variables) and a set of edges. Two variables without an edge indi-cates the two variables are conditionally independent given the othervariables. 
The edge connections among nodes can be represented by a precision matrix. The logarithm of likelihood for the Gaussian distribution is written as:

  J_GGM(Λ) = ln det Λ - tr(SΛ) + const.                                 (1)

where Λ denotes the precision matrix, S represents the empirical estimate of covariance matrix, which is calculated as S = N^{-1} X X^T, tr denotes the trace operator, and det represents the determinant of a matrix. In PPCA [4], a linear mapping for each observed data x_n that is corrupted by noise is defined as:

  x_n = W z_n + η_n                                                      (2)

where the mapping matrix is W ∈ R^{D×D} if all dimensions are kept, z_n ∈ R^D is a latent vector having a Gaussian distribution N(0, I) where I is an identity matrix, and η_n is a noise vector having a Gaussian distribution N(0, β²I) where β² is a variance. By imposing a Gaussian prior for the latent data, the logarithm of likelihood of W after marginalizing over z_n is written as:

  J_PPCA(W) = -ln det C - tr(C^{-1} S) + const.                          (3)

where C = W W^T + β²I. Through the equality ln det C = -ln det C^{-1}, we see that the precision matrix Λ in Eq. (1) corresponds to C^{-1} in Eq. (3).

From the viewpoint of optimizing C, PPCA can be considered as a parameterized version of GGM, since C is parameterized into the form W W^T + β²I. The relationship between GGM and PPCA provides a novel perspective on the transformation matrix W to understand the precision matrix.
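The correspondence Λ = C^{-1} with C = W W^T + β²I is easy to check numerically. The following small sketch (our own illustration, not code from the paper) evaluates both log-likelihood expressions on random data and confirms that J_GGM(Λ) with Λ = C^{-1} equals J_PPCA(W) for the same C, constants aside.

import numpy as np

rng = np.random.default_rng(0)
D, N = 5, 200

# Random data and its empirical covariance S = X X^T / N
X = rng.standard_normal((D, N))
S = X @ X.T / N

# A random PPCA parameterisation C = W W^T + beta^2 I
W = rng.standard_normal((D, D))
beta2 = 0.1
C = W @ W.T + beta2 * np.eye(D)

def j_ggm(Lmbda):          # Eq. (1), without the constant
    return np.linalg.slogdet(Lmbda)[1] - np.trace(S @ Lmbda)

def j_ppca(C):             # Eq. (3), without the constant
    return -np.linalg.slogdet(C)[1] - np.trace(np.linalg.inv(C) @ S)

# With Lambda = C^{-1}, the two objectives coincide
assert np.isclose(j_ggm(np.linalg.inv(C)), j_ppca(C))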
4 Probabilistic Two-Level Anomaly Detection

In order to integrate the supervised information on variables and instances, we naturally extend PPCA to a matrix-variate linear model and derive anomaly scores by using the relation between GGM and PPCA.

4.1 Probabilistic Model

Starting from a linear model, we can write Eq. (2) into a matrix form:

  X = Y + Ψ                                                              (4)

where Y = W Z, X is the data matrix, W ∈ R^{D×D} is the mapping matrix, Z = [z_1, z_2, ..., z_N] ∈ R^{D×N} is the latent data matrix for X where each z_i (i = 1, 2, ..., N) is a D-dimensional vector [z_1, z_2, ..., z_D]^T, and Ψ is a noise matrix. We assume that Z is drawn from a matrix-variate normal distribution [1], a matrix extension of Gaussian for vectors, Z ~ N(0, I_D, K_N), where I_D ∈ R^{D×D} represents the row covariance matrix which encodes the relationships among the variables, and K_N ∈ R^{N×N} denotes the column covariance matrix which describes the relationships among the instances. The distribution of Z is with the mean of a zero matrix, each row independent of each other, and each column correlated with K_N.

We start to build a generative model. In anomaly detection, we attempt to learn a generative model for generating the normal data with high probabilities and the abnormal data with low probabilities. In addition, it is often the case that, even if an instance is labeled as abnormal, some variables of the instance may be normal. Inspired by this observation, we define a vector a = [α²_1, α²_2, α²_3]^T where α²_1, α²_2, and α²_3 represent the variances of the Gaussian noise for normal variables, abnormal variables, and variables with unknown labels, respectively. In a general setting, it follows α²_1 ≤ α²_3 ≤ α²_2, since a generative model tends to generate the data with high probabilities, which have low degrees of noise. Therefore, using the label matrix V for the variables, the conditional probability of the observed data X can be defined as:

  p(X | Y, V, a) = Π_{i=1}^D Π_{j=1}^N N(X_ij | (Y)_ij, α²_{V_ij})        (5)

In order to integrate the supervised information on the instances, Gaussian Random Field (GRF) is utilized. Following the idea in [6], we can write the distribution of Z as follows:

  p(Z) = (1/F') exp{ -(τ/2) tr(Z L Z^T) }                                 (6)

where F' denotes a constant, τ is a scale parameter, and L is seen as a Laplacian matrix for a similarity matrix G that encodes the supervised information on the instances. The entries of G are defined as:

  G_ij = 1,  if x_i and x_j (i ≠ j) ∈ normal class
         θ,  if x_i and x_j (i ≠ j) ∈ unlabeled class
         δ,  if x_i ∈ normal class, x_j ∈ unlabeled class
         0,  otherwise                                                    (7)

such that L = D - G, where D is a diagonal matrix with D_ii = Σ_j G_ij, and θ and δ ∈ [0, 1]. From the definition of G, we can see that the larger the value of G_ij is, the closer x_i and x_j are to each other. In Eq. (6), we interpret that Z follows a Gaussian distribution with precision matrix L. Compared with the definition for Z ~ N(0, I_D, K_N), we have L = K_N^{-1}. According to the Lemma on pp. 64 of [1], we can define a prior for Y, which is a matrix variate normal distribution on W Z, as follows:

  Y = W Z ~ N_{D,N}(0, W W^T, K_N)                                        (8)

With the prior in Eq. (8) and the likelihood in Eq. (5), the posterior distribution of Y is defined below:

  p(Y | X) ∝ p(X | Y) p(Y)                                                (9)

The MAP estimate of Y can be obtained by minimizing the negative logarithm of Eq. (9). For details of the optimization on both W and Z, refer to Section 2 of the supplementary document⁶.

⁶ http://ide-research.net/papers/ecai14_doc.pdf

4.2 Anomaly Score

After obtaining W through the optimization, the precision matrix Λ for the distribution on X is calculated as (W W^T + β²I)^{-1}. Given an instance x, the abnormal scores s = [s_1, s_2, ..., s_D] for all variables are calculated as:

  s ≡ s_0 + (1/2) diag(Λ x x^T Λ P^{-1})                                  (10)

where diag(·) represents a vector in which the elements correspond to the diagonal elements of a matrix. The matrix P = diag2(Λ), where diag2(·) denotes a matrix with the diagonal elements of a matrix and zero off-diagonal elements. The vector s_0 is defined so that (s_0)_i = (1/2) ln(2π / Λ_{i,i}).

With respect to the anomaly scores for the instances, we first normalize s, which is denoted by b = [b_1, b_2, ..., b_D]. Given an instance x, its abnormal score, which is derived from Rényi entropy of order λ, is defined as:

  t ≡ (1/(λ-1)) ln( Σ_{i=1}^D b_i^λ )                                     (11)

For the detailed discussion on anomaly score, refer to Section 3 of the supplementary document⁶.
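The two scores are cheap to compute once Λ is available. The sketch below implements Eqs. (10) and (11) directly; the normalization of s into b (here a simple shift-and-scale onto a probability vector) is our own assumption, since the paper defers that detail to its supplementary document.

import numpy as np

def variable_scores(x, Lmbda):
    """Per-variable anomaly scores of Eq. (10)."""
    d = np.diag(Lmbda)                       # Lambda_{i,i}
    s0 = 0.5 * np.log(2.0 * np.pi / d)
    P_inv = np.diag(1.0 / d)                 # P = diag2(Lambda)
    M = Lmbda @ np.outer(x, x) @ Lmbda @ P_inv
    return s0 + 0.5 * np.diag(M)

def instance_score(s, lam=2.0):
    """Instance-level score of Eq. (11): Renyi entropy of order lam of the
    normalized variable scores b (normalization scheme assumed by us)."""
    b = s - s.min()
    b = b / b.sum() if b.sum() > 0 else np.full_like(s, 1.0 / len(s))
    return np.log(np.sum(b ** lam)) / (lam - 1.0)

With Λ = (W W^T + β²I)^{-1} obtained from the optimization, variable_scores gives s and instance_score gives t for a given instance x.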
5 Experiment

As a case study, we made an experiment on the highly correlated data from a train sensor system. We denote our method, Probabilistic Two-Level Anomaly Detection, as PTLAD, an extension of GLasso [2] as EGlasso, and a supervised extension of GLasso as SEGlasso(k), where the first k (k = 1, ..., D) directions of main variance of the data are removed. We utilize the Signal to Noise Ratio (SNR) to evaluate the differences between the anomaly scores for normal and abnormal data. Table 1 presents SNRs for the instances and the average values over variables, showing that PTLAD outperforms the other methods. For the detailed discussion on the experiment, refer to Section 4 of the supplementary document⁶.

Table 1: SNRs for variables and instances
Type           PTLAD  SEGlasso (k=1)  EGlasso  JSPCA [3]
Ave. Variable  25.58  3.22            3.22     9.30
Instance       4.14   1.49            0.39     0.60

6 Conclusion

We clarified the relationship between GGM and PPCA, and proposed a novel anomaly detection framework at both the variable and instance levels for highly correlated data.

REFERENCES

[1] A. K. Gupta, Matrix Variate Distributions, Chapman & Hall/CRC, October 1999.
[2] T. Idé, A. C. Lozano, N. Abe, and Y. Liu, 'Proximity-Based Anomaly Detection Using Sparse Structure Learning', in SDM, pp. 97-108, (2009).
[3] R. Jiang, H. Fei, and J. Huan, 'Anomaly Localization for Network Data Streams with Graph Joint Sparse PCA', in KDD, pp. 886-894, (2011).
[4] M. E. Tipping and C. M. Bishop, 'Probabilistic Principal Component Analysis', Journal of the Royal Statistical Society, 61, 611-622, (1999).
[5] N. Meinshausen, P. Bühlmann, and E. Zürich, 'High Dimensional Graphs and Variable Selection with the Lasso', Annals of Statistics, 34, 1436-1462, (2006).
[6] G. Zhong, W. Li, D. Yeung, X. Hou, and C.-L. Liu, 'Gaussian Process Latent Random Field', in AAAI, pp. 679-684, (2010).", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "5AhOdtlZIg4N", "year": null, "venue": "ECAI2014", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-419-0-483", "forum_link": "https://openreview.net/forum?id=5AhOdtlZIg4N", "arxiv_id": null, "doi": null }
{ "title": "Learning Pruning Rules for Heuristic Search Planning.", "authors": [ "Michal Krajnanský", "Jörg Hoffmann", "Olivier Buffet", "Alan Fern" ], "abstract": "When it comes to learning control knowledge for planning, most works focus on “how to do it” knowledge which is then used to make decisions regarding which actions should be applied in which state. We pursue the opposite approach of learning “how to not do it” knowledge, used to make decisions regarding which actions should not be applied in which state. Our intuition is that “bad actions” are often easier to characterize than “good” ones. An obvious application, which has not been considered by the few prior works on learning bad actions, is to use such learned knowledge as action pruning rules in heuristic search planning. Fixing a canonical rule language and an off-the-shelf learning tool, we explore a novel method for generating training data, and implement rule evaluators in state-of-the-art planners. The experiments show that the learned rules can yield dramatic savings, even when the native pruning rules of these planners, i.e., preferred operators, are already switched on.", "keywords": [], "raw_extracted_content": "Learning Pruning Rules for Heuristic Search Planning\nMichal Kraj ˇnansk ´y1and J¨org Hoffmann1and Olivier Buffet2and Alan Fern3\nAbstract. When it comes to learning control knowledge for plan-\nning, most works focus on “how to do it” knowledge which is then\nused to make decisions regarding which actions should be applied inwhich state. We pursue the opposite approach of learning “how tonotdo it” knowledge, used to make decisions regarding which ac-\ntions should notbe applied in which state. Our intuition is that “bad\nactions” are often easier to characterize than “good” ones. An ob-vious application, which has not been considered by the few priorworks on learning bad actions, is to use such learned knowledge as\naction pruning rules in heuristic search planning. Fixing a canonical\nrule language and an off-the-shelf learning tool, we explore a novelmethod for generating training data, and implement rule evaluatorsin state-of-the-art planners. The experiments show that the learnedrules can yield dramatic savings, even when the native pruning rulesof these planners, i.e., preferred operators, are already switched on.\n1 Introduction\nLearning can be applied to planning in manifold ways. To name afew, existing approaches include learning to predict planner perfor-mance (e.g., [16]), learning macro actions (e.g., [2, 3]), learning toimprove a heuristic (e.g., [20]), learning which heuristic to use when[6], and learning portfolio configurations (e.g., [17]).\nThe approach we pursue here is the venerable (i.e., old) idea of\nlearning control knowledge, in the sense of “domain-dependent in-\nformation about the structure of plans”. That approach has a longtradition, focusing almost entirely on “how to do it” knowledge,\nmostly learning representations of closed-loop action-selection poli-\ncies or open-loop macro actions. Learned policies are often used forsearch-free plan generation (e.g., [12, 7, 8, 19, 4]). Recent work hasalso used learned policies for macro generation during search (e.g.,[20, 4]).\nIn this work, we pursue an alternative approach of learning “how\ntonotdo it” knowledge. Consider, e.g., Sokoban. Finding the “good”\nactions in many critical states is very hard to do, as it effectively en-tails search or already knowing what the solution is. 
In contrast, witha bit of practice it is often easy to avoid clearly “bad” actions (like,blocking an exit) based on simple features of the state. A plausiblehypothesis therefore is that it may be easier to learn a representationthat is able to reliably identify some of the bad actions in a state,\ncompared to learning to reliably select a good action.\n4\n1Saarland University, Saarbr ¨ucken, Germany, {krajnansky,hoffmann}@cs.\nuni-saarland.de\n2INRIA / Universit ´e de Lorraine,Nancy, France, [email protected]\n3Oregon State University, Corvallis, USA, [email protected]\n4Note the “some” here: learning to reliably identify allbad actions is equiv-\nalent to learning to identify all good actions. Our focus is on learning a\nsubset of the bad actions. From a machine learning perspective, this cor-\nresponds to the precision-recall tradeoff. We are willing to sacrifice recall(the percentage of bad actions that are pruned), in favor of precision (theIndeed, in the literature on search, pruning rules – conditions un-\nder which the search discards an applicable action – play a promi-\nnent role. Temporal logic pruning rules are highly successful inhand-tailored planning with TLPlan [1] and TALPlanner [13]. Prun-ing rules derived as a side effect of computing a heuristic func-tion, commonly referred to as helpful actions orpreferred opera-\ntors, are of paramount importance to the performance of domain-independent heuristic search planners like FF [10], Fast Downward[9], and LAMA [15]. In fact, it has been found that such pruning typ-\nically is more important to performance than the differences between\nmany of the heuristic functions that have been developed [14].\nDespite the prominence of pruning from a search perspective,\nhardly any research has been done on learning to characterize badactions (presumably due to the traditional focus on learning stand-alone knowledge as opposed to helping a search algorithm). To thebest of our knowledge, there are exactly two such prior works. Con-sidering SAT-based planning, Huang et al. [11] learn simple datalog-style conjunctive pruning rules, conveniently expressed in the formof additional clauses. They find this method to be very effective em-pirically, with speed-ups of up to two orders of magnitude on a col-lection of mostly transport-type domains (although, from today’s per-spective, it should be mentioned that the original planner, but not theone using the pruning rules, is time-step optimal). More recently,de la Rosa and McIlraith [5] tackled the long-standing question ofhow to automatically derive the control knowledge for TLPlan and\nTALPlanner. Accordingly, their pruning rules are formulated in lin-\near temporal logic (LTL); they introduce techniques to automaticallygenerate derived predicates to expand the feature space for theserules. Experiments in three domains show that these rules providefor performance competitive with that of hand-written ones.\nAgainst this background, our work is easy to describe: Like de la\nRosa and McIlraith, we hook onto the search literature in attemptingto learn a prominent form of pruning; while de la Rosa and McIl-raith considered TLPlan, we consider action pruning ( `a la preferred\noperators) in heuristic search planning. 
The idea is to let that pow-erful search framework do the job of finding the “good” actions, re-ducing our job to helping out with quickly discarding the bad ones.Like Huang et al., we concentrate on simple datalog-style conjunc-tive pruning rules, the motivation being to determine first how far\nsuch a simple framework carries. (More complex frameworks, and\nin particular the application of de la Rosa and McIlraith’s rules in\nheuristic search planning, are left open as future topics.) We also di-\nverge from prior work in the generation of training data, which wederive comprehensively from all optimal states as opposed to just thestates visited by one (or a subset of) solutions.\nAs it turns out, our simple approach is quite promising. Experi-\nmenting with the IPC’11 learning track benchmarks, we obtain dra-\npercentage of pruned actions that are bad). This makes sense as it avoids\nremoving solutions from the search space.ECAI 2014\nT. Schaub et al. (Eds.)© 2014 The Authors and IOS Press.This article is published online with Open Access by IOS Press and distributed under the termsof the Creative Commons Attribution Non-Commercial License.\ndoi:10.3233/978-1-61499-419-0-483483\nmatic speed-ups over standard search configurations in Fast Down-\nward, on several domains. The speed-ups are counter-balanced byequally dramatic losses on other domains, but a straightforward port-folio approach suffices to combine the complementary strengths ofthe different configurations involved.\nWe next introduce our notations. We then detail our features for\nlearning, the generation of training data, our formulation of pruningrules and how they are being learned, as well as their usage during\nthe search. We present our experiments and conclude.\n2 Preliminaries\nOur approach requires that states be represented as sets of instanti-\nated first-order atoms (so we can learn first-order conjunctive prun-ing conditions), that actions are instantiated action schemas (so thepruning conditions can be interpreted as rules disallowing particularschema instantiations in a given state), and that the first-order pred-icates and the action schemas are shared across the entire planningdomain (so the rules can be transferred across instances of the do-main). Apart from this, we don’t need to make any assumptions, inparticular as to how exactly action schemas are represented and how\ntheir semantics is defined.\nOur assumptions are obviously satisfied by sequential planning\nin all variants of deterministic non-metric non-temporal PDDL. Our\npruning rules are designed for use during a forward search. In ourconcrete implementation, we build on FF [10] and Fast Downward(FD) [9]. In what follows, we introduce minimal notation as will beneeded to describe our techniques and their use in forward search.\nWe presume a fixed planning domain D, associated with a set P\nof first-order predicates, each p∈Pwith arity arity\np; we identify\npwith a string (its “name”). Dis furthermore associated with a set\nAofaction schemas, each of which has the form a[X]whereais\nthe schema’s name and Xis a tuple of variables; we will sometimes\nidentifyXwith the set of variables it contains.\nAfirst-order atom has the form p[X]wherep∈PandXis an\narityp-tuple of variables; like for action schemas, we will sometimes\nidentifyXwith the set of variables it contains. 
A first-order literal\nl[X]is either a first-order atom p[X](apositive literal ), or a negated\nfirst-order atom ¬p[X](anegative literal ).\nAn instanceΠof the domain Dcomes with a set Oofobjects.A\nground atom then has the form p[o1,...,o k]wherep∈P,oi∈O,\nandk=arityp.Ground literals are defined in the obvious manner. A\nground action has the form a[o1,...,o k]wherea[X]∈A,oi∈O,\nandk=|X|; we will often denote ground actions simply with “a”.\nAstatesis a set of ground atoms.\nEach domain instance Πis furthermore associated with a state I\ncalled the initial state, and with a set Gof ground atoms called the\ngoal. A state sis a goal state ifG⊆s.\nIfsis a state and ais a ground action, then we assume that there\nis some criterion stating whether aisapplicable tos, and what the\nresulting state of applying atosis. A solution (or plan) for a domain\ninstance is a sequence of ground actions that is iteratively applicabletoI, and whose iterated application results in a goal state. The so-\nlution is optimal if its length is minimal among all solutions. (For\nsimplicity, we do not consider more general action costs, althoughour approach is applicable to these in principle.)\n3 Features\nA basic decision is which features to use as input for the learningalgorithm. Many previous works on learning control knowledge forstates (e.g., [20, 19, 4, 5]) used features different from the state itself,or in addition to the state itself. We did not do that for now, as thesimpler approach already led to good results. However, of course,whether an action is “good” or “bad” often depends on the goal .\nAs the goal is not reflected in the states during a forward search, weneed to augment the states with that information.\nGiven a domain instance Πand a predicate p, denote by Goal(p)\nsome new predicate unique to p(in our implementation, Goal(p)\nprefixesp’s name with the string “Goal -”), and with the same arity\nasp. The augmented predicates are obtained as P∪{Goal(p)|\np∈P}. Given a state sinΠ, the augmented state is obtained as\ns∪{Goal(p)[o\n1,...,o k]|p[o1,...,o k]∈G}whereGis the in-\nstance’s goal. In words, we make goal-indicator copies of the predi-cates, and introduce the respective ground atoms into the states. Weassume from now on that this operation has been performed, with-out explicitly using the word “augmented”. The input to the learn-ing algorithm are (augmented) states, the learned rules employ (aug-mented) predicates, and the rule usage is based on evaluating these(augmented) predicates against (augmented) states during the search.\nFor example, in a transportation domain with predicate at[x,y],\nwe introduce the augmented predicate Goal -at[x,y].I fat[o\n1,c2]∈\nGis a goal atom, we augment all states with Goal -at[o 1,c2].I no u r\nexperiments, the majority of the learned rules ( ≥70% in 5 of 9 do-\nmains) contain at least one augmented predicate in the rule condition.\n4 Generating the Training Data\nThe pruning rules we wish to learn are supposed to represent, givena states, what are the “bad action choices”, i.e., which applicable\nground actions should not be expanded by the search. But when is an\naction “bad” in a state? How should we design the training data?\nAlmost all prior approaches to learning control knowledge (e.g.,\n[12, 7, 20, 19]) answer that question by choosing a set of training\nproblem instances, generating a single plan for each, and extractingthe training data from that plan. 
In case of learning which actions\nshould be applied in which kinds of states, in particular, it is basicallyassumed that the choices made by the plan – the action aapplied in\nany state the plan svisits – are “good”, and every other action a\n/prime\napplicable to these states sis “bad”. Intuitively, the “good” part is\njustified as the training plan works for its instance, but the “bad” partignores the fact that other plans might have worked just as well, re-sulting in noisy training data. Some prior approaches partly counter-act this by removing unnecessary ordering constraints from the plan,thus effectively considering a subset of equally good plans. How-\never, those approaches are incomplete and can still mislabel “good”\nactions as “bad”. Herein, we employ a more radical approach basedon generating alloptimal plans.\nWe assume any planning tool that parses domain Dand an in-\nstanceΠ, that provides the machinery to run forward state space\nsearch, and that provides an admissible heuristic function h. To gen-\nerate the training data, we use A\n∗with small modifications. Precisely,\nour base algorithm is the standard one for admissible (but potentiallyinconsistent) heuristics: best-first search on g+hwheregis path\nlength; maintaining a pointer to the parent node in each search node;duplicate pruning against all generated states, updating the parentpointer (and re-opening the node if it was closed already) if the newpath is cheaper. We modify two aspects of this algorithm, namely (a)the termination condition and (b) the maintenance of parent pointers.\nFor (a), instead of terminating when the first solution is found, we\nstop the search only when the best node in the open list has g(s)+\nh(s)>g\n∗whereg∗is the length of the optimal solution (which weM.Krajˇnanský etal./Learning Pruning Rules forHeuristic SearchPlanning 484\nfound beforehand). For (b), instead of maintaining just one pointer to\nthe best parent found so far, we maintain a list of pointers to all suchparents. Thanks to (a), as g(s)+h(s)is a lower bound on the cost\nof any solution through s, and as all other open nodes have at least\nvalueg+h, upon termination we must have generated all optimal\nsolutions. Thanks to (b), at that time we can find the set S\n∗of all\nstates on optimal plans very easily: Simply start at the goal statesand backchain over all parent pointers, collecting all states along theway until reaching the initial state. The training data then is:\n•Good examples E\n+:Every pair (s,a) of states∈S∗and ground\nactionaapplicable to swhere the outcome state s/primeof applying a\ntosis a member of S∗.\n•Bad examples E−:Every pair (s,a) of states∈S∗and ground\nactionaapplicable to swhere the outcome state s/primeof applying a\ntosisnota member of S∗.\nGiven several training instances, E+, respectively E−, are obtained\nsimply as the union of E+, respectively E−, over all those instances.\nTo our knowledge, the only prior work taking a similar direction\nis that of de la Rosa et al. [4]. They generate all optimal plans usinga depth-first branch and bound search with no duplicate pruning. Asubset of these plans is then selected according to a ranking crite-rion, and the training data is generated from that subset. The latterstep, i.e. the training data read off the solutions, is similar to ours,corresponding basically to a subset of S\n∗(we did not investigate yet\nwhether such subset selection could be beneficial for our approach aswell). The search step employed by de la Rosa et al. 
is unnecessar-ily ineffective as the same training data could be generated using ourA\n∗-based method, which does include duplicate pruning (a crucial\nadvantage for search performance in many planning domains).\nWe will refer to the above as the\n•conservative training data (i.e.based on all optimal plans), con-\ntrasted with what we call\n•greedy training data.\nThe latter is oriented closely at the bulk of previous approaches: For\nthe greedy data we take S∗to be the states along a single optimal\nplan only, otherwise applying the same definition of E+andE−.\nIn other words, in the greedy training data, (s,a) is “good” if the\noptimal plan used applies atos, and is “bad” if the optimal plan\npassed through sbut applied an action a/prime/negationslash=a.\nNote that above all actions in each state of S∗are included in either\nE+orE−. We refer to this as the\n•all-operators training data, contrasted with what we call\n•preferred-operators training data.\nIn the latter, E+andE−are defined as above, but are restricted to\nthe subset of state/action pairs (s,a) wheres∈S∗, and ground ac-\ntionais applicable to sandis a helpful action for s(according to the\nrelaxed plan heuristic hFF[10]). Knowledge learned using this mod-\nified training data will be used only within searches that already em-\nploy this kind of action pruning: The idea is to focus the rule learningon those aspects missed by this native pruning rule.\nSimilarly to de la Rosa et al. [4], in our implementation the training\ndata generation is approximate in the sense that we use the relaxedplan heuristic h\nFFas our heuristic h.hFFis not in general admissible,\nbut in practice it typically does not over-estimate. Hence this config-uration is viable in terms of runtime and scalability, and in terms ofthe typical quality of the training data generated.\nThere is an apparent mismatch between the distribution of states\nused to create the training data (only states on optimal plans) and\nthe distribution of states that will be encountered during search (bothoptimal and sub-optimal states). Why then should we expect the rules\nto generalize properly when used in the context of search?\nIn general, there is no reason for that expectation, beyond the in-\ntuition that bad actions on optimal states will typically be bad alsoon sub-optimal ones sharing the relevant state features. It would cer-tainly be worthwhile to try training on intelligently selected subop-timal states. Note though that, as long as the pruning on the opti-mal states retains the optimal plans (which is what we are trying toachieve when learning from conservative data), even arbitrary prun-ing decisions at suboptimal states do not impact the availability ofoptimal plans in the search space.\n5 Learning the Pruning Rules\nOur objective is to learn some representation R, in a form that gener-\nalizes across instances of the same domain D, so thatRcovers a large\nfraction of bad examples in E−without covering any of the good ex-\namplesE+. We want to use Rfor pruning during search, where on\nany search state s, an applicable ground action awill not be expanded\nin case(s,a) is covered by R. It remains to define what kind of rep-\nresentation will underlie R, what it means to “cover” a state/action\npair(s,a) , and how Rwill be learned. We consider these in turn.\nAs previously advertized, we choose to represent Rin the form of\na set of pruning rules. 
5 Learning the Pruning Rules

Our objective is to learn some representation R, in a form that generalizes across instances of the same domain D, so that R covers a large fraction of bad examples in E− without covering any of the good examples E+. We want to use R for pruning during search, where on any search state s, an applicable ground action a will not be expanded in case (s,a) is covered by R. It remains to define what kind of representation will underlie R, what it means to "cover" a state/action pair (s,a), and how R will be learned. We consider these in turn.

As previously advertised, we choose to represent R in the form of a set of pruning rules. Each rule r[Y] ∈ R takes the form

  r[Y] = ¬a[X] ⇐ l_1[X_1] ∧ ··· ∧ l_n[X_n]

where a[X] is an action schema from the domain D, the l_i[X_i] are first-order literals, and Y = X ∪ ⋃_i X_i is the set of all variables occurring in the rule. In other words, we associate each action schema with conjunctive conditions identifying circumstances under which the schema is to be considered "bad" and should be pruned. As usual, we will sometimes refer to ¬a[X] as the rule's head and to the condition l_1[X_1] ∧ ··· ∧ l_n[X_n] as its body.

We choose this simple representation for precisely that virtue: simplicity. Our approach is (relatively) simple to implement and use, and as we shall see can yield excellent results.

Given a domain instance with object set O, and a pruning rule r[Y] ∈ R, a grounding of r[Y] takes the form

  r = ¬a[o_1,...,o_k] ⇐ l_1[o_1^1,...,o_1^{k_1}] ∧ ··· ∧ l_n[o_n^1,...,o_n^{k_n}]

where o_j = o_{i′}^{j′} whenever X and X_{i′} share the same variable at position j respectively j′, and o_i^j = o_{i′}^{j′} whenever X_i and X_{i′} share the same variable at position j respectively j′. We refer to such r as a ground pruning rule. In other words, ground pruning rules are obtained by substituting the variables of pruning rules with the objects of the domain instance under consideration.

Assume now a state s and a ground action a applicable to s. A ground pruning rule r = [¬a′ ⇐ l_1 ∧ ··· ∧ l_n] covers (s,a) if a′ = a and s |= l_1 ∧ ··· ∧ l_n. A pruning rule r[Y] covers (s,a) if there exists a grounding of r[Y] that covers (s,a). A set R of pruning rules covers (s,a) if one of its member rules does.

With these definitions in hand, our learning task – learn a set of pruning rules R which covers as many bad examples in E− as possible without covering any of the good examples E+ – is a typical inductive logic programming (ILP) problem: we need to learn a set of logic programming rules that explains the observations as given by our training data examples. It is thus viable to use off-the-shelf tool support. We chose the well-known Aleph toolbox [18]. (Exploring application-specific ILP algorithms for our setting is an open topic.) In a nutshell, in our context, Aleph proceeds as follows (a schematic sketch of this loop is given after the list):

1. If E− = ∅, stop. Else, select an example (s,a) ∈ E−.
2. Construct the "bottom clause", i.e., the most specific conjunction of literals that covers (s,a) and is within the language restrictions imposed. (See below for the restrictions we applied.)
3. Search for a subset of the bottom clause yielding a rule r[Y] which covers (s,a), does not cover any example from E+, and has maximal score (covers many examples from E−).
4. Add r[Y] to the rule set, and remove all examples from E− covered by it. Goto 1.
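The following Python sketch mirrors that covering loop schematically. It is not Aleph itself: bottom-clause construction and clause search are abstracted into placeholder callables, since these are exactly the parts Aleph supplies; only the bookkeeping of the loop is shown, and the returned rule object is assumed to offer a covers(example) test.

def learn_pruning_rules(e_minus, e_plus, build_bottom_clause, best_generalisation):
    # Greedy covering over the bad examples, in the spirit of the loop above.
    rules = []
    remaining = list(e_minus)
    while remaining:
        seed = remaining[0]                                    # step 1: pick an uncovered bad example
        bottom = build_bottom_clause(seed)                     # step 2: most specific covering clause
        rule = best_generalisation(bottom, e_plus, remaining)  # step 3: best clause consistent with E+
        if rule is None:                                       # guard (not part of the 4-step loop):
            remaining.pop(0)                                   # no consistent rule for this seed
            continue
        rules.append(rule)                                     # step 4: keep rule, drop covered examples
        remaining = [ex for ex in remaining if not rule.covers(ex)]
    return rules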
Note that our form of ILP is simple in that there is no recursion. The rule heads (the action schemas) are from a fixed and known set separate from the predicates to be used in the rule bodies. Aleph offers support for this simply by separate lists of potential rule heads respectively potential body literals. These lists also allow experimentation with different language variants for the rule bodies:

• Positive vs. mixed conditions: We optionally restrict the rule conditions to contain only positive literals, referring to the respective variant as "positive" respectively "mixed". The intuition is that negative condition literals sometimes allow more concise representations of situations, but their presence also has the potential to unnecessarily blow up the search space for learning.
• With vs. without inequality constraints: As specified above, equal variables in a rule will always be instantiated with the same object. But, per default, different variables also may be instantiated with the same object. Aleph allows "x ≠ y" body literals to prevent this from happening. Similarly to the above, such inequality constraints may sometimes help, but may also increase the difficulty of Aleph's search for good rules.

As the two options can be independently switched on or off, we have a total of four condition language variants. We will refer to these by P, M, P≠, and M≠ in the obvious manner.

We restrict negative condition literals, including literals of the form x ≠ y, to use bound variables only: in any rule r[Y] learned, whenever variable x occurs in a negative condition literal, then x must also occur in either a positive condition literal or in the rule's head.⁵ Intuitively, this prevents negative literals from having excessive coverage by instantiating an unbound variable with all values that do not occur in a state (e.g., "¬at[x,y]" collects all but one city y for every object x). Note that, in our context, head variables are considered to be bound as their instantiation will come from the ground action a whose "bad" or "good" nature we will be checking.

Aleph furthermore allows various forms of fine-grained control over its search algorithm. We used the default setting for all except two parameters. First, the rule length bound restricts the search space to conditions with at most L literals. We empirically found this parameter to be of paramount importance for the runtime performance of learning. Furthermore, we found that L = 6 was an almost universally good "magic" setting of this parameter in our context: L > 6 rarely ever led to better-performing rules, i.e., to rules with more pruning power than those learned for L = 6; and L < 6 very frequently led to much worse-performing rules. We thus fixed L = 6, and use this setting throughout the experiments reported. Second, minimum coverage restricts the search space to rules that cover at least C examples from E−. We did not run extensive experiments examining this parameter, and fixed it to C = 2 to allow for a maximally fine-grained representation of the training examples (refraining only from inserting a rule for the sake of a single state/action pair).

⁵ We implemented this restriction via the "input/output" tags Aleph allows in the lists of potential rule heads and body literals. We did not use these tags for any other purpose than the one described, so we omit a description of their more general syntax and semantics.
6 Using the Pruning Rules

Given a domain instance Π, a state s during forward search on Π, and an action a applicable to s, we need to test whether R covers (s,a). If the answer is "no", proceed as usual; if the answer is "yes", prune a, i.e., do not generate the resulting state.

The issue here is computational efficiency: we have to pose the question "does R cover (s,a)?" not only for every state s during a combinatorial search, but even for every action a applicable to s. So it is of paramount importance for that test to be fast. Indeed, we must avoid the infamous utility problem, identified in early work on learning for planning, where the overhead of evaluating the learned knowledge would often dominate the potential gains.

Unfortunately, the problem underlying the test is NP-complete: for rule heads with no variables, and rule bodies with only positive literals, we are facing the well-known problem of evaluating a Boolean conjunctive query (the rule body) against a database (the state). More precisely, the problem is NP-complete when considering arbitrary-size rule bodies ("combined complexity" in database theory). When fixing the rule body size, as we do in our work (remember that L = 6), the problem becomes polynomial-time solvable ("data complexity"), i.e., exponential in the fixed bound. For our bound 6, this is of course still way too costly with a naïve solution enumerating all rule groundings. We employ backtracking in the space of partial groundings, using unification to generate only partial groundings that match the state and ground action in question. In particular, a key advantage in practice is that, typically, many of the rule variables occur in the head and will thus be fixed by the ground action a already, substantially narrowing down the search space.

For the sake of clarity, let us fill in a few details. Say that s is a state, a[o_1,...,o_k] is a ground action, and ¬a[x_1,...,x_k] ⇐ l_1[X_1] ∧ ··· ∧ l_n[X_n] is a pruning rule for the respective action schema. We view the positive respectively negative body literals as sets of atoms, denoted LP respectively LN. With α := {(x_1,o_1),...,(x_k,o_k)}, we set LP := α(LP) and LN := α(LN), i.e., we apply the partial assignment dictated by the ground action to every atom. We then call the following recursive procedure:

  if LP ≠ ∅ then
    select l ∈ LP
    for all q ∈ s unifiable with l via partial assignment β do
      if recursive call on β(LP \ {l}) and β(LN) succeeds then
        succeed
      end if
    end for
    fail
  else /* LP = ∅ */
    if LN ∩ s = ∅ then succeed else fail end if
  end if

The algorithm iteratively processes the atoms in LP. When we reach LN, i.e., when all positive body literals have already been processed, all variables must have been instantiated because negative literals use bound variables only (cf. previous section). So the negative part of the condition is now a set of ground atoms and can be tested simply in terms of its intersection with the state s.

We use two simple heuristics to improve runtime. Within each rule condition, we order predicates with higher arity up front so that many variables will be instantiated quickly. Across rules, we dynamically adapt the order of evaluation. For each rule r we maintain its "success count", i.e., the number of times r fired (pruned out an action). Whenever r fires, we compare its success count with that of the preceding rule r′; if the count for r is higher, r and r′ get switched. This simple operation takes constant time but can be quite effective.
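To complement the pseudocode, the following self-contained Python sketch implements the backtracking match together with the two runtime heuristics. The encoding is an assumption made for the sketch only: atoms are tuples (predicate, arg, ...), variables are strings starting with '?', a state is a set of ground atoms, and a rule is a nested tuple (schema, head variables, positive body, negative body).

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def substitute(atom, subst):
    return tuple(subst.get(t, t) for t in atom)

def unify_atom(pattern, fact, subst):
    # Try to unify one (partially substituted) body atom with one state fact,
    # returning an extended substitution or None.
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    new = dict(subst)
    for p, f in zip(pattern[1:], fact[1:]):
        if is_var(p):
            if p in new and new[p] != f:
                return None
            new[p] = f
        elif p != f:
            return None
    return new

def body_matches(pos, neg, state, subst):
    # Backtracking over the positive atoms; the negative atoms are ground once the
    # positive part is processed (bound-variable restriction) and are checked last.
    if not pos:
        return all(substitute(a, subst) not in state for a in neg)
    first, rest = pos[0], pos[1:]
    for fact in state:
        new = unify_atom(substitute(first, subst), fact, subst)
        if new is not None and body_matches(rest, neg, state, new):
            return True
    return False

def rule_covers(rule, state, action):
    # action = (schema, objects); the head variables are fixed by the ground action.
    schema, head_vars, pos, neg = rule
    if schema != action[0] or len(head_vars) != len(action[1]):
        return False
    subst = dict(zip(head_vars, action[1]))
    pos = sorted(pos, key=len, reverse=True)       # heuristic 1: higher-arity atoms first
    return body_matches(pos, neg, state, subst)

def prune(rules, counts, state, action):
    # Check rules in order; on a hit, bump the rule's success count and possibly
    # swap it with its predecessor (heuristic 2: dynamic reordering).
    for i, rule in enumerate(rules):
        if rule_covers(rule, state, action):
            counts[rule] = counts.get(rule, 0) + 1
            if i > 0 and counts[rule] > counts.get(rules[i - 1], 0):
                rules[i - 1], rules[i] = rules[i], rules[i - 1]
            return True
    return False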
7 Experiments

We use the benchmark domains from the learning track of IPC'11. All experiments were run on a cluster of Intel E5-2660 machines running at 2.20 GHz. We limited runtime for training data generation to 15 minutes (per task), and for rule learning to 30 minutes (per domain, configuration, and action schema). To obtain the training data, we manually played with the generator parameters to find maximally large instances for which the learning process was feasible within these limits. We produced 8–20 training instances per domain and training data variant (i.e., conservative vs. greedy). Handling sufficiently large training instances turned out to be a challenge in Gripper, Rovers, Satellite and TPP. For example, in Gripper the biggest training instances contain 3 grippers, 3 rooms and 3 objects; for Rovers, our training instances either have a single rover, or only few waypoints/objectives. We ran all four condition language variants – P, M, P≠, and M≠ – on the same training data. We show data only for the language variants with inequality constraints, i.e., for P≠ and M≠, as these generally performed better.

                    all-operators                              preferred-operators
              Cons. P≠   Cons. M≠   Greedy P≠  Greedy M≠    Cons. P≠   Cons. M≠   Greedy P≠  Greedy M≠
              #    L     #    L     #    L     #    L       #    L     #    L     #    L     #    L
Barman        14   2.7   5    2.4   17   2.1   17   1.8     7    2.9   5    2.4   8    2.1   8    1.5
Blocksworld   29   4.4   0    —     61   3.8   23   2.7     28   4.3   0    —     46   3.7   21   2.7
Depots        2    4.5   1    4     16   3.3   10   2.8     4    4.8   2    4     12   3.4   9    3.1
Gripper       27   4.9   1    4     26   4.1   23   3.2     20   4.8   9    4     17   4.2   11   3.4
Parking       92   3.4   51   2.8   39   2.6   31   2.2     71   3.3   48   2.8   20   2.6   18   2.1
Rover         30   2.2   18   1.8   45   1.8   36   1.6     3    2     3    2     14   1.7   16   1.7
Satellite     27   3.2   26   3     25   2.6   22   2.2     12   3.4   12   3     9    3     9    2.6
Spanner       1    3     1    3     1    3     1    3       1    3     1    3     1    3     1    3
TPP           13   2.5   10   2.4   18   2.6   21   2.6     6    2.8   5    2.8   11   2.7   12   2.8

Table 1. Statistics regarding the rule sets learned. "#": number of rules; "L": average rule length (number of rule body literals).

Table 1 shows statistics about the learned rule sets. One clear observation is that fewer rules tend to be learned when using preferred-operators training data. This makes sense simply as that training data is smaller. A look at rule length shows that rules tend to be short except in a few cases. A notable extreme behavior occurs in Spanner, where we learn a single three-literal pruning rule, essentially instructing the planner to not leave the room without taking along all the spanners. As it turns out, this is enough to render the benchmark trivial for heuristic search planners. We get back to this below.

We implemented parsers for our pruning rules, and usage during search, in FF [10] and Fast Downward (FD) [9]. We report data only for FD; that for FF is qualitatively similar. To evaluate the effect of our rules when using/not using the native pruning, as "base planners" we run FD with hFF in single-queue lazy greedy best-first search (FD1), respectively in the same configuration but with a second open list for states resulting from preferred operators (FD2). To evaluate the effect of our rules on a representation of the state of the art in runtime, we run (the FD implementation of) the first search iteration of LAMA [15], which also is a dual-queue configuration where one open list does, and one does not, use the native pruning. As we noticed that, sometimes, FD's boosting (giving a higher preference to the preferred-operators queue) is detrimental to performance, we also experimented with configurations not using such boosting.

In both dual-queue configurations, we apply our learned pruning rules only to the preferred-operators queue, keeping the other "complete" queue intact. The preferred-operators training data is used in these cases.
For FD1, where we apply the rules to a single queue notusing preferred operators, we use the all-operators training data.\nFor the experiments on test instances, we used runtime (memory)\nlimits of 30 minutes (4 GB). We used the original test instances fromIPC’11 for all domains except Gripper and Depots, where LAMAwas unable to solve more than a single instance (with or without ourrules). We generated smaller test instances using the generators pro-vided, using about half as many crates than the IPC’11 test instancesin Depots, and cutting all size parameters by about half in Gripper.\nTable 2 gives a summary of the results. Considering the top parts\nof the tables (FD-default with boosting where applicable), for 4 outof 9 domains with FD1, for 4 domains with FD2, and for 4 do-mains with LAMA, the best coverage is obtained by one of ourrule-pruning configurations. Many of these improvements are dra-matic: 2 domains (FD1: Barman and Spanner), 3 domains (FD2:Barman, Blocksworld, and Parking), respectively 1 domain (LAMA:Barman). When switching the boosting off in FD2 and LAMA, a fur-ther dramatic improvement occurs in Satellite (note also that, overall,the baselines suffer a lot more from the lack of boosting than thoseconfigurations using our pruning rules). Altogether, our pruning ruleshelp in different ways for different base planners, and can yield dra-matic improvements in 5 out of the 9 IPC’11 domains.\nThe Achilles heel lies in the word “can” here: While there are\nmany great results, they are spread out across the different configura-tions. We did not find a single configuration that combines these ad-\nvantages. Furthermore, on the two domains where our pruning tech-\nniques are detrimental – Rovers and TPP – we lose dramatically, sothat, for the default (boosted) configurations of FD2 and LAMA, inoverall coverage we end up doing substantially worse.\nIn other words, our pruning techniques (a) have high variance\nand are sensitive to small configuration details, and (b) often arehighly complementary to standard heuristic search planning tech-niques. Canonical remedies for this are auto-tuning, learning a con-\nfiguration per-domain, and/or portfolios, employing combinations of\nconfigurations. Indeed, from that perspective, both (a) and (b) couldbe good news, especially as other satisficing heuristic search plan-ning techniques have a tendency to be strong in similar domains.\nA comprehensive investigation of auto-tuning and portfolios is be-\nyond the scope of this paper, but to give a first impression we reportpreliminary data in Table 2 (bottom right), based on the configuration\nspace{FD1, FD2, LAMA}×{ P,M,P\n/negationslash=,M/negationslash=}×{ boost, no-boost} .\nFor “AutoTune”, we created medium-size training data (in between\ntraining data and test data size) for each domain, and selected theconfiguration minimizing summed-up search time on that data. For“Portfolios”, we created sequential portfolios of four configurations,namely FD1 Cons P\n/negationslash=, FD2 base planner (boosted), LAMA Cons P/negationslash=\n(boosted), and LAMA Greedy M/negationslash=not boosted. For “Seq-Uniform”\neach of these gets 1/4 of the runtime (i.e., 450 seconds); for “Seq-Hand”, we played with the runtime assignments a bit, ending up with30, 490, 590, and 690 seconds respectively. 
Despite the compara-tively little effort invested, these auto-tuned and portfolio plannersperform vastly better than any of the components, including LAMA.\nRegarding rule content and its effect on search, the most striking,\nand easiest to analyze, example is Spanner. Failing to take a suffi-cient number of spanners to tighten all nuts is the major source of\nsearch with delete relaxation heuristics. Our single learned rule con-\ntains sufficient knowledge to get rid of that, enabling FD1 to solve\nevery instance in a few seconds. This does not work for FD2 and\nLAMA because their preferred operators prune actions taking span-ners (the relaxed plan makes do with a single one), so that the com-bined pruning (preferred operators andour rule) removes the plan.\nWe made an attempt to remedy this by pruning with our rules on onequeue and with preferred operators on the other, but this did not workeither (presumably because, making initial progress on the heuristicvalue, the preferred operators queue gets boosted). The simpler andmore successful option is to use a portfolio, cf. above.M.Krajˇnanský etal./Learning Pruning Rules forHeuristic SearchPlanning 487\nFD1 (hFF) FD2 (dual queue hFF+ preferred operators)\nbase pl. Cons P/negationslash=Cons M/negationslash=Greedy P/negationslash=Greedy M/negationslash=base planner Cons P/negationslash=Cons M/negationslash=Greedy P/negationslash=Greedy M/negationslash=\nCC¬SC¬SC ¬SC ¬S CT E CTE R T CTE R T CTE R T CTE R T\nBarman (30) 027 000 00 00 14 609.6 271972 13 12.9 28.9 63% 23 17.1 39.2 57% 27 1.0 1.4 47% 21 1.6 2.3 45%\nBlocksworld (30) 000 00 01 8 1 019 37.4 19916 18 0.6 1.0 54% 19 1.2 1.0 0% 1 0.0 0.0 85% 27 3.6 3.4 17%\nDepots (30) 1313 013 013 1213 11 18 48.2 111266 18 0.7 1.1 33% 18 0.8 1.0 20% 23 1.6 2.1 18% 21 3.2 3.5 18%\nGripper (30) 13 00 15 002 3 02 0 29 3.9 2956 19 0.0 0.1 95% 26 0.0 0.1 90% 19 0.0 0.3 96% 17 0.0 0.2 84%\nParking (30) 130 4 003 0 03 0 7 642.5 16961 8 0.5 0.5 6% 6 0.8 0.8 5% 2535.5 15.2 2% 14 15.3 11.8 1%\nRover (30) 002 9 03 01 00 30 41.9 22682 11 0.0 0.1 91% 12 0.0 0.1 91% 3 0.0 0.1 94% 13 0.0 0.1 83%\nSatellite (30) 000 00 00 01 3752.3 51741 0——— 0——— 2 0.5 0.7 54% 0———\nSpanner (30) 030 030 030 030 0 0— — 0——— 0——— 0——— 0———\nTPP (30) 000 00 00 00 29 232.5 13057 0——— 0——— 0——— 0———/summationtext(270) 2773 2962 3 43 84 44 62 149 87 104 100 113\nno FD preferred operators boosting\nSatellite (30) 2 1009,0 68253 0——— 12 1,1 11,1 92% 0——— 16 3,4 23,2 84% /summationtext(270) 53 50 65 80 72\nLAMA (first iteration) AutoTune Portfolios\nbase planner Cons P/negationslash=Cons M/negationslash=Greedy P/negationslash=Greedy M/negationslash=Seq-Uniform Seq-Hand\nCT E CTE R T CT ER T CT E R T CTE R T C C C\nBarman (30) 7 648.1 151749 3023.8 51.1 53% 305.0 9.7 44% 22 0.8 1.3 38% 21 0.8 1.3 36% 23 30 30\nBlocksworld (30) 27 63.5 13093 24 0.7 1.0 45% 27 1.3 1.0 0% 6 0.3 0.6 55% 3014.2 13.5 19% 27 27 28\nDepots (30) 23 43.2 37299 22 0.9 1.2 35% 25 0.9 1.0 25% 26 7.0 9.9 22% 25 15.3 17.3 22% 23 24 25\nGripper (30) 29 6.4 3122 9 0.0 0.0 85% 16 0.0 0.0 87% 21 0.0 0.4 93% 24 0.0 0.2 76% 29 28 29\nParking (30) 26 699.3 3669 10 0.4 0.2 7% 16 1.4 1.2 7% 2910.2 5.5 2% 28 11.3 6.1 2% 28 30 30\nRover (30) 29211.2 28899 9 0.1 0.2 78% 10 0.1 0.2 80% 0— —— 7 0.1 0.1 65% 30 29 29\nSatellite (30) 4986.7 34739 0——— 0— — — 0— —— 0——— 3 13 16\nSpanner (30) 0— — 0——— 0— — — 0— —— 0——— 30 30 30\nTPP (30) 20360.5 13262 0——— 0— — — 0— —— 0——— 29 18 18/summationtext(270) 165 104 124 104 135 222 229 235\nno FD preferred operators boosting\nSatellite (30) 3 
819,7 32301 0——— 22 4,1 26,4 85% 1 0,4 0,8 73% 23 4,2 14,0 78% /summationtext(270) 84 80 106 95 125\nTable 2. Performance overview. “C”: coverage; “¬S”: all solutions pruned out (search space exhausted); “T” search time and “E” number of expanded states\n(median for base planner, median ratio “base-planner/rules-planner” for planners using our rules); “RT”: median percentage of total time spent evaluating rules.\nFor each base planner, best coverage results are highlighted in boldface. By default, FD’s preferred operators queue in FD2 and LAMA is boosted; we show\npartial results switching that boosting off. For explanation of the “AutoTune” and “Portfolios” data, see text.\nRegarding conservative vs. greedy training data, consider FD1. As\nthat search does not employ a complete “back-up” search queue, if\nour pruning is too strict then no solution can be found. The “¬S”columns vividly illustrate the risk incurred. Note that, in Parking,while the greedy rules prune out all solutions on FD1 (the same hap-pens when training them on the preferred-operators training data),\nthey yield dramatic improvements for FD2, and significant improve-\nments for LAMA. It is not clear to us what causes this.\nRegarding the overhead for rule evaluation, the “RT” columns for\nLAMA show that this can be critical in Gripper, Rovers, and Satel-\nlite. Comparing this to Table 1 (right half), we do see that Grippertends to have long rules, which complies with our observation. Onthe other hand, for example, Parking has more and longer rules thanRovers, but its evaluation overhead is much smaller. Further researchis needed to better understand these phenomena.\nFor TPP, where none of the configurations using our rules can\nsolve anything and so Table 1 does not provide any indication whatthe problem is, observations on smaller examples suggest that so-lutions otherwise found quickly are pruned: the FD1 search spacebecame larger when switching on the rule usage.\n8 Conclusion\nWe realized a straightforward idea – using off-the-shelf ILP forlearning conjunctive pruning rules acting like preferred operators inheuristic search planning – that hadn’t been tried yet. The results arequite good, with substantial to dramatic improvements across sev-eral domains, yielding high potential for use in portfolios. Togetherwith the simplicity of the approach, this strongly suggests that fur-ther research on the matter may be worthwhile. The most immediateopen lines in our view are to (a) systematically explore the design ofcomplementary configurations and portfolios thereof, as well as (b)understanding the behavior of the technique in more detail.\nAcknowledgments. This work is partially supported by the EU FP7\nProgramme under grant agreement no. 295261 (MEALS).REFERENCES\n[1] F. Bacchus and F. Kabanza, ‘Using temporal logics to express search\ncontrol knowledge for planning’, AIJ, 116(1-2), 123–191, (2000).\n[2] A. Botea, M. Enzenberger, M. M ¨uller, and J. Schaeffer, ‘Macro-FF:\nImproving AI planning with automatically learned macro-operators’,\nJAIR, 24, 581–621, (2005).\n[3] A. Coles and A. Smith, ‘Marvin: A heuristic search planner with online\nmacro-action learning’, JAIR, 28, 119–156, (2007).\n[4] T. Rosa, S. Jim ´enez, R. Fuentetaja, and D. Borrajo, ‘Scaling up heuristic\nplanning with relational decision trees’, JAIR, 40, 767–813, (2011).\n[5] T. Rosa and S. McIlraith, ‘Learning domain control knowledge for\nTLPlan and beyond’, in Proc. P AL’11, (2011).\n[6] C. Domshlak, E. Karpas, and S. 
Markovitch, ‘Online speedup learning\nfor optimal planning’, JAIR, 44, 709–755, (2012).\n[7] A. Fern, S. Yoon, and R. Givan, ‘Approximate policy iteration with a\npolicy language bias: Solving relational Markov decision processes’,JAIR, 25, 75–118, (2006).\n[8] C. Gretton, ‘Gradient-based relational reinforcement-learning of tem-\nporally extended policies’, in Proc. ICAPS’07.\n[9] M. Helmert, ‘The Fast Downward planning system’, JAIR, 26, 191–\n246, (2006).\n[10] J. Hoffmann and B. Nebel, ‘The FF planning system: Fast plan genera-\ntion through heuristic search’, JAIR, 14, 253–302, (2001).\n[11] Y . Huang, B. Selman, and H. Kautz, ‘Learning declarative control rules\nfor constraint-based planning’, in Proc. ICML’00.\n[12] R. Khardon, ‘Learning action strategies for planning domains’, AIJ,\n113(1-2), 125–148, (1999).\n[13] J. Kvarnstr ¨om and M. Magnusson, ‘TALplanner in the 3rd IPC: Exten-\nsions and control rules’, JAIR, 20, 343–377, (2003).\n[14] S. Richter and M. Helmert, ‘Preferred operators and deferred evaluation\nin satisficing planning’, in Proc. ICAPS’09.\n[15] S. Richter and M. Westphal, ‘The LAMA planner: Guiding cost-based\nanytime planning with landmarks’, JAIR, 39, 127–177, (2010).\n[16] M. Roberts and A. Howe, ‘Learning from planner performance’, AIJ,\n173(5-6), 536–561, (2009).\n[17] N ´u˜nez S, D. Borrajo, and C. Linares, ‘Performance analysis of planning\nportfolios’, in Proc. SoCS’12.\n[18] A. Srinivasan. The Aleph manual, 1999.\n[19] Y . Xu, A. Fern, and S. Yoon, ‘Iterative learning of weighted rule sets\nfor greedy search’, in Proc. ICAPS’10.\n[20] S. Yoon, A. Fern, and R. Givan, ‘Learning control knowledge for for-\nward search planning’, Journal of ML Research, 9, 683–718, (2008).M. Krajˇ nanský et al. / Learning Pruning Rules for Heuristic Search Planning 488", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "tQBMDJFZve6M", "year": null, "venue": "ECAI2014", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-419-0-621", "forum_link": "https://openreview.net/forum?id=tQBMDJFZve6M", "arxiv_id": null, "doi": null }
{ "title": "Imprecise Probabilistic Horn Clause Logic.", "authors": [ "Steffen Michels", "Arjen Hommersom", "Peter J. F. Lucas", "Marina Velikova" ], "abstract": "Approaches for extending logic to deal with uncertainty immanent to many real-world problems are often on the one side purely qualitative, such as modal logics, or on the other side quantitative, such as probabilistic logics. Research on combinations of qualitative and quantitative extensions to logic which put qualitative constraints on probability distributions, has mainly remained theoretical until now. In this paper, we propose a practically useful logic, which supports qualitative as well as quantitative uncertainty and can be extended with modalities with varying level of quantitative precision. This language has a solid semantic foundation based on imprecise probability theory. While in general imprecise probabilistic inference is much harder than the precise case, this is the first expressive imprecise probabilistic formalism for which probabilistic inference is shown to be as hard as corresponding precise probabilistic problems. A second contribution of this paper is an inference algorithm for this language based on the translation to a weighted model counting (WMC) problem, an approach also taken by state-of-the-art probabilistic inference methods for precise problems.", "keywords": [], "raw_extracted_content": "Imprecise Probabilistic Horn Clause Logic1\nSteffen Michels and Arjen Hommersom and Peter J.F. Lucas and Marina Velikova2\nAbstract. Approaches for extending logic to deal with uncertainty\nimmanent to many real-world problems are often on the one side\npurely qualitative, such as modal logics, or on the other side quan-\ntitative, such as probabilistic logics. Research on combinations of\nqualitative and quantitative extensions to logic which put qualitativeconstraints on probability distributions, has mainly remained theo-retical until now. In this paper, we propose a practically useful logic,\nwhich supports qualitative as well as quantitative uncertainty and can\nbe extended with modalities with varying level of quantitative preci-\nsion. This language has a solid semantic foundation based on im-\nprecise probability theory . While in general imprecise probabilistic\ninference is much harder than the precise case, this is the first expres-sive imprecise probabilistic formalism for which probabilistic infer-ence is shown to be as hard as corresponding precise probabilisticproblems. A second contribution of this paper is an inference algo-rithm for this language based on the translation to a weighted model\ncounting (WMC) problem, an approach also taken by state-of-the-art\nprobabilistic inference methods for precise problems.\n1 INTRODUCTION\nThe use of knowledge representation and reasoning methods to copewith the uncertainty that comes with real-world problems has be-come a crucial element in the development of intelligent systems.The way in which this uncertainty presents itself varies from prob-lem to problem; in some cases precise probabilistic information isavailable, whereas in other cases the best that can be achieved is qual-itative uncertainty.\nProbabilistic logics, which use the structure of a logic theory to\ndefine a probability distributions, are the state-of-the-art for knowl-edge representation when precise probabilistic information is avail-\nable. 
Alethic modal logics , which extend logic with purely qualita-\ntive modalities, such as that something is possible, allow dealing\nwith uncertainty that is merely qualitative in nature. Alternatively,\none can handle lack of precise probabilistic information by puttingconstraints on a probability distribution, as in the probabilistic belief\nlogic of Bacchus [1].\nIn this paper we propose a new language that integrates these dif-\nferent approaches. It supports qualitative as well as quantitative un-certainty and can also be extended with modalities with varying levelof quantitative precision. This is achieved by the well-defined seman-tic basis of imprecise probability theory [18], specifically interval\nprobabilities. This semantics allows to exactly define the semanticsof qualitative modalities in terms of probability intervals and also\n1This publication was supported by the Dutch national program COMMIT.\nThe research work was carried out as part of the Metis project under the re-\nsponsibility of the TNO-Embedded Systems Innovation, with Thales Ned-\nerland B.V . as the carrying industrial partner.\n2Radboud University Nijmegen, The Netherlands\nemail: {s.michels,arjenh,peterl,marinav}@science.ru.nlprovides precise probabilistic logic as a special case. The language is\ndesigned with particular guarantees for computational tractability in\nmind.\nIn general, computing marginal probabilities of imprecise prob-\nability distributions is more complex than inference for their pre-\ncise counterparts. For instance, it is known that inference in credal\nnetworks [5] is much harder (NPPP-hard) than for the correspond-\ning precise case of Bayesian networks [15]. However, as we claim\nour language to be practically useful, we designed an expressive lan-guage in such a way that inference has the same complexity as corre-sponding precise probabilistic inference problems, making it the firstimprecise probabilistic language that offers such guarantees. This is\nthe best we can hope for, as the language supports precise probabili-\nties as a special case. While inference is still NP-hard in general, onecan often make use of the problem’s structure to perform more effi-cient inference. For example, it is well-known that inference is linearfor problems corresponding to singly-connected Bayesian networkswith a bounded indegree.\nWe further propose a concrete inference mechanism based on the\ntranslation to a weighted model counting (WMC) problem, an ap-\nproach also taken by state-of-the-art probabilistic inference methodsfor precise problems. The approach has successfully been used in the\ncontext of probabilistic logic programming [7] and can exploit local\nstructure, such as determinism, which is often present in logic the-\nories. We have already shown in previous work that WMC can beused to compute marginals for certain classes of credal sets, whichwe used to approximate continuous distributions [11].\nThe paper is structured as follows. We first provide some back-\nground in Section 2. Then the language is defined formally (Sec-\ntion 3) and the inference approach is discussed (Section 4). Finally,\nrelated work is discussed in Section 5 and Section 6 concludes the\npaper.\n2 BACKGROUND\nWe first give some background and focus on properties and limita-\ntions of existing languages.\n2.1 Logic programming\nAs the work described in this paper builds upon probabilistic logic\nprogramming, we will first introduce some basic logic programming(LP) concepts. 
The idea of LP is to use predicate logic as a programming language [10], with programs consisting of rules, in this paper Horn clauses. These are (implicitly universally quantified) expressions of the form h ← b1,...,bn, where h is called the head and b1,...,bn is called the body of the rule, representing a conjunction. The head h and the elements of the body bi, 1 ≤ i ≤ n, are atoms, i.e. expressions of the form p(t1,...,tm) with p a predicate and t1,...,tm terms.

In the remainder of this paper, we assume the traditional least model semantics of LP. This semantics implies the closed world assumption (CWA), which states that statements that do not follow from the rules are false.

Example 1. To illustrate the different formalisms and their properties, we make use of the following running example originating from the maritime safety and security domain, in which we already successfully applied precise probabilistic logics [12].

Suppose we want to model in which cases a vessel is an environmental hazard and may for instance not enter certain restricted areas. One reason for a vessel being an environmental hazard is that it has some chemical substances loaded. We could model this using LP as:

  env_hazard ← chemicals

The problem is that the model actually expresses that in case we know the vessel has chemicals loaded it certainly is an environmental hazard, and otherwise it is certainly not, using the CWA. Clearly, there are vessels with chemicals which are no environmental hazard, for instance because the amount is not significant, and ships without chemicals on board which still are an environmental hazard.

2.2 Modal logic

The idea of modal logics is to lift the restriction that propositional statements are certainly true or false, by including operators that express modalities. Several practical implementations of programming languages based on modal logic are available [14]. There are different ways to define and interpret those modalities, but we restrict ourselves to classical alethic modalities, which express that something is possibly (♦) or necessarily (□) true.

Example 2. Example 1 can be made more precise using modal operators. For example, we could model the fact that having chemicals loaded makes it possible that the vessel is an environmental hazard with:

  ♦env_hazard ← chemicals

However, using the CWA, this rule alone implies that if the vessel has no chemicals on board, then it is not an environmental hazard. While we could try to sum up all the reasons for a vessel being an environmental hazard, it is a reasonable assumption that in reality we can never observe or even know all of those reasons. A possible solution is adding the rule which states that it is always possibly true that a vessel is an environmental hazard:

  ♦env_hazard ← ⊤

where ⊤ denotes true. This is not useful in practice, since these rules imply that env_hazard is possible whether or not the vessel is carrying chemicals. The knowledge that chemicals are a risk factor, increasing the likelihood of environmental hazard, cannot be expressed in the language.

Generally, modal logics can only be used to represent and reason about qualitative uncertainty, whereas the available quantitative knowledge cannot be used.
2.3 Probabilistic logic programming

Probability theory offers an alternative widely used and well-founded basis for representing and reasoning with uncertainty. Combining logic with probability theory is a subject gaining increasing interest, and various approaches and their implementations have been developed. We focus on logic programming extended with probabilities associated to atoms or rules, for which efficient inference mechanisms are available (see e.g. [7]).

Example 3. To include degrees of uncertainty, we extend Example 2 with probabilities as follows:

  0.1: env_hazard ← ⊤
  0.4: env_hazard ← chemicals

This states that if a vessel has chemicals loaded, it will cause an environmental hazard with a probability of 0.4. The first rule represents other causes we do not model explicitly, with a low probability.

This basic language serves as a basis for the work described in this paper. The semantics of this language is a variant of Sato's distribution semantics [16], which we introduce next.

First, if R is a set of probability-rule pairs, then to each subset S ⊆ R a probability is assigned:

  P_S := ∏_{p:(h←b1,...,bn) ∈ S} p · ∏_{p:(h←b1,...,bn) ∈ R\S} (1−p)          (1)

In this paper, we will assume that there are a finite number of ground terms, which means that we can look upon each program as a propositional one by replacing all variables by all ground terms. As a result, there will be a finite number of subsets; however, the approach can easily be generalised to the full first-order case with an infinite number of constants, as shown in the original distribution semantics [16].

For each such subset we can determine whether a query q holds (S |= q) under the least model semantics of logic programming. Then, the probability of a query q given the rules of a program R is defined as the sum of the probabilities of all rule subsets for which the query can be derived:

  P_R(q) := Σ_{S ⊆ R, S |= q} P_S          (2)

Example 4. The query env_hazard can be derived for the following subsets of rules in Example 3, assuming chemicals is true:

  S1 = { 0.1: env_hazard ← ⊤ }
  S2 = { 0.4: env_hazard ← chemicals }
  S3 = { 0.1: env_hazard ← ⊤,  0.4: env_hazard ← chemicals }

We have the following probabilities: P_S1 = 0.1 · (1−0.4) = 0.06, P_S2 = 0.36 and P_S3 = 0.04. Therefore P_R(env_hazard) = 0.06 + 0.36 + 0.04 = 0.46.

A limitation of such probabilistic approaches is that they require the precise quantification of likelihoods. This is often infeasible, for instance in domains dealing with very rare events. For instance, estimations of the probability that a vessel without chemicals on board is an environmental hazard are unreliable, as there are few of such cases. The consequence is that predictions suggest more precision than can be provided by the knowledge available, which may lead to wrong decisions.

2.4 Imprecise probabilities

Imprecise probability theory is a generalisation of probability theory, which makes it possible to express different levels of ignorance regarding the likelihoods of events.
There are different approaches with varyingexpressiveness [18]. In this paper we follow the approach of usingsets of probability distributions, called credal sets. In the remainder,\nwe only deal with binary logic statements and convex credal sets,which makes it possible to denote and define credal sets in termsof probability intervals. For instance, this allows to express that theprobability that a ship is an environmental hazard is in the interval\n[0.3,0.6]. In this way, we can differentiate between cases in which\nthe best decision can be made based on the available knowledge and\ncases more knowledge is required to take the optimal decision.\n3 IMPRECISE PROBABILISTIC HORN\nCLAUSE LOGIC\nWe introduce the idea of the language and then formally develop its\nsemantics.\n3.1 Basic idea\nThe basic idea of the language is to extend logic programming withprobability interval annotations, such as probabilistic logic program-ming extends logic programming with point probabilities. Our lan-guage supports two possible interpretations of rules as p:h←\nb\n1,...,b nwith pa probability interval, which can be closed, open,\nor half-closed. We refer to those different interpretations as differ-ent kinds of imprecisions, rule-imprecisions and head-imprecisions,\nwhich come into play when there are multiple rules with thesame head. Formally, we say an imprecise probabilistic horn clauselogic (IPHL) program P=(R\nR,RH), where RRmodels rule-\nimprecisions and RHmodels head-imprecisions. The heads occur-\nring in RRandRHare disjoint.\nRules in RRare denoted by p:(h←b1,...,b n)with pa proba-\nbility interval. The interpretation of these rules is that the probabilitythatb\n1,...,b nleads tohis inp, which corresponds to the semantics\nof precise probabilistic programming as given in Section 2.3. Multi-ple rules with the same head are combined using a noisy-OR operator\nfor probability intervals.\nExample 5. Consider an imprecise version of Example 3:\n[0.05,0.15]: (env\nhazard←/latticetop)\n[0.4,0.6]: (env hazard←chemicals )\nThis means that carrying chemicals causes a vessel to be an envi-\nronmental hazard with probability between 0.4and0.6, while it is\nunlikely that other reasons cause a ship to be an environmental haz-ard.\nIn case a vessel has no chemicals on board the probability of it\nbeing an environmental hazard is between 0.05 and0.15. Otherwise,\nwe consider the probabilities one gets for all possible choices ofprobabilities from the intervals given the semantics of point prob-\nabilities (Section 2.3). These probabilities are within the interval\n[0.43,0.66] .\nRules in R\nHare denoted by (p:h)←b1,...,b n. As indicated\nby the brackets, in this case, the probability interval only appliesto the head. The rules are interpreted as follows: in case b\n1,...,b n\nholds the probability of his inp. This means that rules do not repre-\nsent independent causes for the head, but conditions under which theknowledge about the head’s probability becomes more precise. If norule applies, there is complete ignorance about the head’s probabil-ity, i.e. it is in [0.0,1.0]. While this interpretation makes no sense for\nthe precise case, it can conveniently express certain kinds of impre-cise knowledge. Note that this kind of rules can lead to inconsistentdefinitions, while the former kind cannot.\nExample 6. 
Suppose we have statistical knowledge about tankers,\nfor example because all tankers have to register their cargo due to\nsafety regulations, and can estimate the likelihood of tankers havingchemicals loaded as 0.3. F or a random vessel, we do not have that in-\nformation, for instance because we do not have data about all ships,\nand only express that it is possibly carrying chemicals, we interpret\nas the probability interval (0.0,1.0], i.e. the probability is greater\n0.0. We can model this with head-imprecisions:\n/parenleftbig\n(0.0,1.0]:chemicals/parenrightbig\n←/latticetop/parenleftbig\n[0.3,0.3]:chemicals/parenrightbig\n←tanker\nGiven the rules above, if we do not know the ship is a tanker , itsprobability of having chemicals on board is in (0.0,1.0]. In the case\nit is a tanker it is in (0.0,1.0]and in[0.3,0.3], which means it is in\n(0.0,1.0]∩[0.3,0.3] = [0.3 ,0.3].\n3.2 Qualitative interpretation\nGiven the basic language defined above, we can give various intervalsa qualitative interpretation. For example, special cases of intervalsinclude determinism ([0.0 ,0.0]and[1.0,1.0]), complete ignorance\nas in 3-valued logics ([0.0 ,1.0]) and precise probabilities ( [p,p] with\npsome probability).\nIPHL can also be seen as a basic language to define various modal-\nities in terms of probability intervals. For example, notice that the\ninterval[1.0,1.0]corresponds to the modality that something is nec-\nessarily true: /square. Furthermore, the modality ♦expressing that some\nstatement is possible, in its least strict interpretation, means that theprobability is greater than 0.0, i.e. it is in the interval (0.0,1.0].\nAnalogously one could introduce the modality ♦¬as the interval\n[0.0,1.0). One could however more carefully only consider state-\nments possible in case their probability is above a certain threshold\nand define for instance ♦\n0.05as(0.05,1.0]. One could also introduce\nlinguistic modalities, for instance unlikely as[0.05,0.15] . Note that\nin order to differentiate between complete ignorance ( [0.0,1.0]) and\npossibility ((0.0 ,1.0]) the distinction between open and closed in-\ntervals is necessary, which cannot be made in common impreciseformalisms as credal networks [5].\nSimilar to a qualitative specification of the IPHL program,\nmarginal probability intervals of arbitrary atoms can also be given\na qualitative interpretation, which implies that the language supports\nqualitative reasoning as a special case. For instance, a probability in-\nterval of [1.0,1.0]means a particular statement is necessarily true\nand a probability greater 0.0implies that the statement is possible.\nFurthermore, given the lowerbound of a probability interval, we may\nconclude whether or not the probability is possibly or necessarily\nlarger than a given threshold. Finally, one can also qualitatively com-pare the likelihood of two statements, for instance in case the prob-\nability intervals of two statements are disjoint, one can determine\nwhich one is more likely than the other.\n3.3 Semantics\nIn accordance with probabilistic logics programming, the semanticsof IPHL programs is defined in terms of the marginal probabilityS.Michelsetal./Imprecise Probabilistic Horn Clause Logic 623\nintervals of arbitrary query atoms. This semantics is defined incre-\nmentally, by first extending the semantics for precise logic programsgiven in Section 2.3 for rules with rule-imprecision. 
Then, we alsoshow how to deal with head-imprecisions.\n3.3.1 Rules with rule-imprecision\nWe first develop a semantics assuming the program only consistsof rules with rule-imprecision. The semantics of rules with rule-imprecision is defined in terms of a set of programs obtained by re-placing the intervals with point probabilities. Let Rbe the set of all\nprograms where each rule p:(h←b\n1,...,b n)from RRis replaced\nbyp:(h←b1,...,b n)such that p∈p. In other words, we con-\nsider all programs for all possible choices of probabilities for eachinterval.\nExample 7. The precise program of Example 3 is one example of an\nelement of Rgiven the imprecise program of Example 5.\nWe can then define the probability range of a query qgiven a pro-\ngram as:\nP(q)def∈{PR(q)|R∈R } (3)\nwherePR(q)is defined as in (2). This set of probabilities is convex\nand can therefore be expressed as interval.\nExample 8. F or Example 5 we get P(env hazard)∈[0.05,0.15] ,\nsince there is no rule with head chemicals its probability is 0.0and\nthe second rule never applies. In case we assume we know the vessel\nis carrying chemical and add [1.0,1.0]: (chemicals ←/latticetop)to the\nrules, the probability is in [0.43,0.66].\n3.3.2 Rules with head-imprecision\nNext, we extend the semantics with head-imprecisions. Probabilityintervals on heads have a different characteristic as probability inter-vals on rules. Head-imprecisions can be looked upon as constraintsthat exclude programs for which the probability distribution does notobey the specified bounds. Therefore, we first generate rules allow-ing for all possible probabilities and then enforce the constraints on\nthe set of programs.\nExample 9. As example we use the following program, which is a\ncombination of the rules of Examples 5 and 6:\nR\nR=/braceleftbig\n[0.05,0.15]: ( env hazard←/latticetop)\n[0.4,0.6] :( env hazard←chemicals )/bracerightbig\nRH=/braceleftbig/parenleftbig\n(0.0,1.0] : chemicals/parenrightbig\n←/latticetop/parenleftbig\n[0.3,0.3] : chemicals/parenrightbig\n←tanker/bracerightbig\nTo allow all possible probabilities for rules with head-imprecision,\nwe add for each head hoccurring in the rules RHa rule\n[0.0,1.0]: (h←/latticetop)toRRand call the resulting set of rules R/prime\nR.\nExample 10. F or the above example we get the following trans-\nformed set of rules:\nR/prime\nR=/braceleftbig\n[0.05,0.15]: (env hazard←/latticetop)\n(0.4,0.6] :( env hazard←chemicals )\n[0.0,1.0] :( chemicals ←/latticetop)/bracerightbigWe can now consider a set of programs Rwith point probabilities\ngenerated by R/prime\nR, under the semantics of rule-imprecision. To in-\ncorporate the constraints given by the probability intervals on heads,\nwe define a set of programs R/prime, by including only those programs\nobeying all constraints given by RH:\nR/primedef={R∈R |∀R∈RH:obeys(R,R)} (4)\nwhere obeys/parenleftbig\nR,(p:h)←b1,...,b n/parenrightbig\n=PR(h|b1,...,b n)∈p.\nExample 11. Consider the rules with head-imprecision of Exam-\nple 9. The restricted set of rules R/primegiven those constraints is:\nR/prime={R∈R |PR(chemicals |tanker)=0.3∧\n0<P R(chemicals )≤1}\nIn caseR/primebecomes the empty set, the entire program is called\ninconsistent. Otherwise, the probability of a query qis defined as in\nEq. 
(3) using R′ instead of R.

4 INFERENCE

In this section, an inference mechanism is introduced which translates the problem of computing a bound to a precise probabilistic inference problem without changing the complexity of the problem. We only deal with exact inference, as approximate inference could change the qualitative nature of the answer. For instance, there is a qualitative difference between a probability greater than zero and one greater than or equal to zero, a distinction which cannot be made by sampling algorithms. Similarly to the semantics, we introduce the inference approach incrementally, starting with point probabilities. Therefore, first, we briefly discuss probabilistic inference by WMC.

4.1 Probabilistic Inference by Weighted Model Counting

Various inference approaches have been proposed to exploit the structure of probabilistic inference problems. We focus on performing probabilistic inference by translation to a WMC problem, which has been shown to be an efficient inference method for probabilistic logic programming [7]. In contrast to other approaches, WMC not only exploits topological structure, but also local structure such as determinism and context-specific independence.

The problem of model counting is to find the number of models of a propositional knowledge base. WMC is a straightforward generalisation of the problem, where each model has a weight. Those weights are defined in terms of weights attached to the literals. The weight of a model is the product of the weights of all literals included. In the following we denote the weight of a literal l with W(l) and the weighted model count of a weighted knowledge base Δ with WMC(Δ).

4.2 Rules with point probabilities

WMC is defined for propositional knowledge bases without the CWA assumption. We therefore translate the rules to propositional logic and make the heads equivalent to the combination of all rules defining them to capture the CWA. This is known as Clark's completion [4], and a similar approach for probabilistic inference is taken by ProbLog [7].

Probabilities are added by introducing auxiliary atoms for each rule. So in the first step each rule R = p:(h ← b1,...,bn) is translated to h ← aux_R, b1,...,bn, before Clark's completion is performed. The weight p is assigned to aux_R and 1−p to its negation. All other literals get weight 1. We denote the resulting weighted knowledge base with Δ_R. The probability can then be computed as P_R(q) = WMC(Δ_R ∧ q).

Example 12. Consider the rules of Example 3 and the assumption the vessel is carrying chemicals (1.0: chemicals ← ⊤). The resulting weighted knowledge base Δ_R is:

  (env_hazard ↔ aux_1 ∨ (aux_2 ∧ chemicals)) ∧ chemicals

Note that for brevity we added chemicals as a fact instead of making it equivalent to an auxiliary literal with weight 1.0. The weights are: W(aux_1) = 0.1, W(¬aux_1) = 0.9, W(aux_2) = 0.4, W(¬aux_2) = 0.6, W(env_hazard) = W(¬env_hazard) = 1.0, W(chemicals) = 1.0 and W(¬chemicals) = 0.0.

To compute the probability of env_hazard, we consider the models of the knowledge base in which env_hazard holds, with corresponding weights:

  0.04:  env_hazard, aux_1, aux_2, chemicals
  0.06:  env_hazard, aux_1, ¬aux_2, chemicals
  0.36:  env_hazard, ¬aux_1, aux_2, chemicals

The sum of those weights is 0.46, which corresponds to the probability according to the semantic definition as illustrated in Example 4.
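To make the translation concrete, here is a small brute-force Python check of Example 12: it enumerates all truth assignments over the four propositions, keeps the models of the completed theory, and sums their weights. This is only a sanity check of the encoding; an actual system would use a dedicated WMC procedure (e.g. based on knowledge compilation) rather than enumeration.

from itertools import product

weights = {                       # literal weights from Example 12: (weight if true, weight if false)
    'aux1': (0.1, 0.9),
    'aux2': (0.4, 0.6),
    'chemicals': (1.0, 0.0),
    'env_hazard': (1.0, 1.0),
}

def completed_theory(m):
    # Clark's completion of the translated program, plus the fact `chemicals`.
    return m['env_hazard'] == (m['aux1'] or (m['aux2'] and m['chemicals'])) and m['chemicals']

def wmc(query):
    names, total = list(weights), 0.0
    for values in product([True, False], repeat=len(names)):
        m = dict(zip(names, values))
        if completed_theory(m) and query(m):
            w = 1.0
            for v in names:
                w *= weights[v][0] if m[v] else weights[v][1]
            total += w
    return total

print(wmc(lambda m: m['env_hazard']))    # approximately 0.46 (up to floating point), as in Examples 4 and 12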
4.3 Imprecise inference

Given the fact that we only make use of intervals, the set of probabilities of a query is always convex. We can therefore represent the semantically infinite set of programs R by its extreme points. We denote the lower and upper bounds of the resulting probability as P̲(q) and P̄(q), respectively. The basic idea of the inference algorithm is to translate the problem to a precise probabilistic inference problem, for both bounds. An algorithm for the lower bound is given in Algorithm 1.

  Algorithm 1: Imprecise inference (lower bound)
  Input: query q and IPHL program (R_R, R_H)
  Result: the lower probability bound of q
  1  R = ∅
  2  for (p:(h ← b1,...,bn)) ∈ R_R
  3      add lower(p):(h ← b1,...,bn) to R
  4  for all heads h in R_H
  5      for {b1,...,bn} ⊆ B, with B all body-atoms defining h
  6          add subset_lower_h(b1,...,bn):(h ← b1,...,bn) to R
  7  return WMC(Δ_R ∧ q)

The fact that we use Horn clauses, thus bodies consist of positive atoms only, makes it possible to locally determine the extreme points of each rule independently of the query. In fact, taking the minimum or maximum probability for all rules determines the minimal or maximal probability for all possible queries, respectively. This is the key insight which makes the inference problem tractable: without the restriction to Horn clauses, the combination of all rules' extreme points would have to be considered. This requires an exponential number of precise probabilistic programs, which heavily increases the complexity of the inference problem. Therefore, for the rules with rule-imprecision the translation is straightforward: for each rule we just use its lower bound probability (Lines 2, 3).

In order to represent open as well as closed intervals, we make use of a calculus for hyperreal numbers [8]. For instance, the interval (0.2, 0.7) can be represented by the extreme points 0.2+d and 0.7−d, where d represents an infinitesimal number. WMC can straightforwardly be extended with hyperreal weights, by making use of addition and multiplication as defined for the hyperreal calculus. In Algorithm 1, the function lower(p) is defined as min p in case the lower bound is closed and sup p + d otherwise.

Example 13. Consider the rules of Example 5. For each query, the lower bound of the probability is the probability given the following program:

  0.05: (env_hazard ← ⊤)
  0.4: (env_hazard ← chemicals)

Theorem 1. For any IPHL program P = (R_R, ∅), the computational complexity of computing any query is the same as for a probabilistic logic program consisting of R_R where all intervals are replaced by point probabilities.

The theorem obviously holds as replacing imprecisions with point probabilities is a simple linear transformation. In practice, inference could even be less expensive if the extremes of some intervals are 0.0 and 1.0, introducing additional determinism, which can be exploited by WMC algorithms.
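As a complement to Algorithm 1, the following Python sketch builds the lowered precise program. It is a simplification, not the authors' implementation: open and closed interval endpoints are treated alike (the hyperreal infinitesimal d is dropped), and the quantity subset_lower follows Eqs. (5) and (6) derived in the passage below; the resulting precise rules would still have to be handed to a noisy-OR/WMC-based engine such as the one of Section 4.2.

from itertools import combinations

def interval_lower(intervals):
    # Lower bound of the intersection of closed intervals; the infinitesimal
    # bookkeeping for open bounds (the hyperreal d) is omitted in this sketch.
    return max([lo for lo, _ in intervals], default=0.0)

def lower_program(rule_imprecision, head_imprecision):
    # Build the precise program whose query probabilities are the lower bounds (Algorithm 1).
    # rule_imprecision: list of ((lo, hi), head, body);
    # head_imprecision: dict head -> list of ((lo, hi), body); bodies are atom tuples.
    precise = [(lo, head, body) for (lo, _), head, body in rule_imprecision]   # lines 2-3

    for head, rules in head_imprecision.items():                               # lines 4-6
        atoms = sorted({a for _, body in rules for a in body})
        sub_lower = {}
        for k in range(len(atoms) + 1):
            for B in combinations(atoms, k):
                Bset = frozenset(B)
                # Eq. (5): intersect the intervals of every rule whose body holds under B;
                # with no applicable rule the head is completely ignorant, i.e. [0, 1].
                lo = interval_lower([iv for iv, body in rules if set(body) <= Bset] + [(0.0, 1.0)])
                # Eq. (6): add only the mass missing on top of the proper subsets of B.
                denom = 1.0
                for Bp, q in sub_lower.items():
                    if Bp < Bset:
                        denom *= 1.0 - q     # (no guard here for deterministic, i.e. q = 1, subsets)
                q = 1.0 - (1.0 - lo) / denom
                sub_lower[Bset] = q
                precise.append((q, head, tuple(sorted(Bset))))
    return precise

# For the running example (Examples 6/9), head `chemicals` with rules ((0.0, 1.0), ()) and
# ((0.3, 0.3), ('tanker',)) yields 0.0: chemicals <- T and 0.3: chemicals <- tanker
# (the d-terms of Example 14 collapse to 0 under the closed-interval simplification).
print(lower_program([], {'chemicals': [((0.0, 1.0), ()), ((0.3, 0.3), ('tanker',))]}))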
The translation of rules with head-imprecision is more involved (Lines 4–6). For each head h (Line 4) we have to consider all cases of truth assignments to the atoms in the bodies of the rules defining h (Line 5). Each of those cases corresponds to a subset B of those atoms. We can define the probability interval in which h must be in that case by the intersection of all intervals of rules which apply given the set of atoms:

  P_h(B)  =  ⋂ { p | (p:h) ← b1,...,bn ∈ R_h and {b1,...,bn} ⊆ B }          (5)

where R_h are all rules with head h. P_h(B) is never the empty set for consistent programs.

Naively translating to rules with all possible B as body and using the lower bounds of such intervals, i.e. lower(P_h(B)): (h ← B), results in incorrect probabilities, since all rules with a subset of B as body also contribute to the probability of h in case B holds. To solve that problem, we make use of the property that with increasing cardinality of B the number of satisfied bodies also increases. Therefore, with increasing cardinality of B the probability is restricted to a tighter interval, as it is restricted to the intersection of all such rules' intervals. This implies that the lower bound monotonically increases with the cardinality of B.

For the correct transformation of the probabilities we use, in Line 6 of the algorithm, the function subset_lower, which computes the correct probabilities for each subset B. The idea is to consider the probability already given by the rules corresponding to proper subsets of B and only add as much probability mass as is needed to obtain the desired lower bound of the probability P_h(B):

  subset_lower_h(B)  =  1 −  (1 − lower(P_h(B)))  /  ∏_{B′ ⊂ B} (1 − subset_lower_h(B′))          (6)

where the denominator is 1 for the empty set.

Theorem 2. For any IPHL P and any query q, Algorithm 1 computes the correct lower probability bound of q, as defined by (3) and (4).

The proof of this theorem is not provided due to lack of space.

Example 14. The rules for chemicals in Example 9 are translated to:

  (d: chemicals) ← ⊤
  (1 − 0.7/(1−d): chemicals) ← tanker

Note that in case tanker holds, the probability of chemicals according to (2) is (1 − 0.7/(1−d))·d + (0.7/(1−d))·d + (1 − 0.7/(1−d))·(1−d) = 0.3, which is equal to the lower bound for that case defined by R_H.

A similar transformation of rules with head-imprecision is incorrect for the upper bound, because it decreases with larger cardinality of B, and additional rules of which the body holds can only increase, but not decrease, the probability. This problem is solved by actually computing the lower bound of the query's negation and computing the upper bound of the original query as P̄(q) = 1 − P̲(¬q). To compute the lower bound of the negation we transform the knowledge base such that each atom actually represents its negation. This can be achieved by simply swapping ∧ and ∨ in Clark's completion and using 1 − upper(p) as the weight for the auxiliary atoms.

To characterise the computational complexity of a full IPHL program, it makes no sense to speak about a corresponding precise probabilistic logic program as in Theorem 1, since for rules with head-imprecision it makes no sense to replace all probabilities with point probabilities. However, we still get the following guarantee in terms of complexity compared to programs with a similar structure.

Theorem 3. Inference for an IPHL program has the same complexity, in terms of treewidth, as a corresponding precise probabilistic logic program in which each head is defined in terms of the same body atoms as in the IPHL program. That is, the complexity is O(n·2^w), where n is the number of variables and w is the treewidth of the program's CNF.

Proof.
A similar transformation of rules with head-imprecision is incorrect for the upper bound, because the upper bound decreases with larger cardinality of B, whereas additional rules whose bodies hold can only increase, not decrease, the probability. This problem is solved by computing the lower bound of the query's negation and obtaining the upper bound of the original query as $\overline{P}(q) = 1 - \underline{P}(\neg q)$. To compute the lower bound of the negation, we transform the knowledge base such that each atom represents its negation. This can be achieved by simply swapping ∧ and ∨ in Clark's completion and using 1 − upper(p) as the weight of the auxiliary atoms.

To characterize the computational complexity of a full IPHL program, it makes no sense to speak of a corresponding precise probabilistic logic program for IPHLs with head-imprecision as in Theorem 1, since the probabilities of such rules cannot simply be replaced by point probabilities. However, we still get the following guarantee in terms of complexity, compared to programs with a similar structure.

Theorem 3. Inference for IPHL programs has the same complexity in terms of the treewidth as a corresponding precise probabilistic logic program in which each head is defined in terms of the same body atoms as in the IPHL program. That is, the complexity is O(n · 2^w), where n is the number of variables and w is the treewidth of the program's CNF.

Proof. This follows from the complexity of WMC [3] and the fact that the translation in Algorithm 1 does not change the program's treewidth.

5 RELATED WORK

Several approaches have been proposed to combine probability theory and qualitative modalities. One example is the logic of Bacchus [1], which makes it possible to put constraints on probabilities such as 'the agent believes φ with probability greater than 0.5'. The connection to imprecise probability theory is not made in this work. There are also approaches allowing for higher-order probabilistic statements [9], such as 'the probability that the probability of φ is larger than 0.5 is 0.9'. This work is mainly theoretical in nature and efficient inference mechanisms are not provided.

The epistemic logic approaches of Milch and Koller [13] and Shirazi and Amir [17] deal with the beliefs of multiple agents, as well as higher-order beliefs about beliefs, and therefore have a different goal than our approach. They provide mechanisms for exact inference, based on Bayesian networks. However, this probabilistic inference is only a subroutine of the complete inference method, which is strictly more expensive than ordinary probabilistic inference.

There is some work on inference for imprecise formalisms such as locally defined credal networks (LDCNs) [5]. All exact approaches, such as [2, 6], suffer from the worst-case complexity of the problem. We do not discuss approximate methods here, since, as discussed before, they are not suited for qualitative inference.

6 CONCLUSIONS

We introduced an imprecise probabilistic Horn clause logic language, which makes it possible to express and unambiguously define qualitative statements with varying levels of precision, with complete ignorance, point probabilities and determinism as special cases. This is made possible by a solid semantic foundation based on imprecise probability theory. We have furthermore shown that it is possible to provide inference for an imprecise language that is as expensive as its precise counterpart, while in general imprecise inference problems are more complex. Finally, the approach shows that it is possible to employ state-of-the-art probabilistic inference methods for imprecise problems.

REFERENCES

[1] F. Bacchus, Representing and Reasoning with Probabilistic Knowledge: A Logical Approach to Probabilities, MIT Press, Cambridge, MA, 1990.
[2] Andrés Cano, José E. Cano, and Serafín Moral, 'Convex sets of probabilities propagation by simulated annealing', in Proceedings of the Fifth International Conference IPMU'94, (1994).
[3] Mark Chavira and Adnan Darwiche, 'On probabilistic inference by weighted model counting', Artificial Intelligence, 172(6-7), 772–799, (2008).
[4] K.L. Clark, 'Negation as failure', Logic and Data Bases, 293–322, (1978).
[5] Fabio G. Cozman, 'Credal networks', Artificial Intelligence, 120(2), 199–233, (2000).
[6] Enrico Fagiuoli and Marco Zaffalon, '2U: an exact interval propagation algorithm for polytrees with binary variables', Artificial Intelligence, 106(1), 77–107, (1998).
[7] Daan Fierens, Guy Van den Broeck, Joris Renkens, Dimitar Shterionov, Bernd Gutmann, Ingo Thon, Gerda Janssens, and Luc De Raedt, 'Inference and learning in probabilistic logic programs using weighted boolean formulas', arXiv preprint arXiv:1304.6810, (2013).
[8] Robert Goldblatt, Lectures on the Hyperreals: An Introduction to Nonstandard Analysis, volume 188, Springer, 1998.
[9] Aviad Heifetz and Philippe Mongin, 'The modal logic of probability', in Proceedings of the 7th Conference on Theoretical Aspects of Rationality and Knowledge, pp. 175–185, Morgan Kaufmann Publishers Inc., (1998).
[10] J.W. Lloyd, Foundations of Logic Programming, 2nd Edition, Springer, 1987.
[11] Steffen Michels, Arjen Hommersom, Peter J. F. Lucas, Marina Velikova, and Pieter W. M. Koopman, 'Inference for a new probabilistic constraint logic', in IJCAI, ed., Francesca Rossi, IJCAI/AAAI, (2013).
[12] Steffen Michels, Marina Velikova, Arjen Hommersom, and Peter J. F. Lucas, 'A decision support model for uncertainty reasoning in safety and security tasks', in Systems, Man, and Cybernetics (SMC), 2013 IEEE International Conference on, pp. 663–668, IEEE, (2013).
[13] Brian Milch and Daphne Koller, 'Probabilistic models for agents' beliefs and decisions', in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pp. 389–396, Morgan Kaufmann Publishers Inc., (2000).
[14] Mehmet A. Orgun and Wanli Ma, 'An overview of temporal and modal logic programming', in Temporal Logic, 445–479, Springer, (1994).
[15] Judea Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, 1988.
[16] Taisuke Sato, 'A statistical learning method for logic programs with distribution semantics', in ICLP, pp. 715–729, (1995).
[17] Afsaneh Shirazi and Eyal Amir, 'Probabilistic modal logic', in Proceedings of the National Conference on Artificial Intelligence, volume 22, p. 489, AAAI Press, (2007).
[18] Peter Walley, 'Towards a unified theory of imprecise probability', International Journal of Approximate Reasoning, 24(2), 125–148, (2000).
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "zkVGnkyh6Gm", "year": null, "venue": "EC 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=zkVGnkyh6Gm", "arxiv_id": null, "doi": null }
{ "title": "On the Price of Anarchy for flows over time", "authors": [ "José R. Correa", "Andrés Cristi", "Tim Oosterwijk" ], "abstract": "Dynamic network flows, or network flows over time, constitute an important model for real-world situations where steady states are unusual, such as urban traffic and the Internet. These applications immediately raise the issue of analyzing dynamic network flows from a game-theoretic perspective. In this paper we study dynamic equilibria in the deterministic fluid queuing model in single-source single-sink networks, arguably the most basic model for flows over time. In the last decade we have witnessed significant developments in the theoretical understanding of the model. However, several fundamental questions remain open. One of the most prominent ones concerns the Price of Anarchy, measured as the worst case ratio between the minimum time required to route a given amount of flow from the source to the sink, and the time a dynamic equilibrium takes to perform the same task. Our main result states that if we could reduce the inflow of the network in a dynamic equilibrium, then the Price of Anarchy is exactly $\\e/(\\e-1)\\approx 1.582$. This significantly extends a result by Bhaskar, Fleischer, and Anshelevich (SODA 2011). Furthermore, our methods allow to determine that the Price of Anarchy in parallel-link networks is exactly 4/3. Finally, we argue that if a certain very natural monotonicity conjecture holds, the Price of Anarchy in the general case is exactly $\\e/(\\e-1)$.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "0Seqe2mG5gm", "year": null, "venue": "E-Commerce Agents 2001", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=0Seqe2mG5gm", "arxiv_id": null, "doi": null }
{ "title": "Conversational Speech Biometrics", "authors": [ "Stéphane H. Maes", "Jirí Navrátil", "Upendra V. Chaudhari" ], "abstract": "This paper discusses a new modality for speaker recognition - conversational biometrics - as a high security voice-based authentication method for E-commerce applications. By combining diverse simultaneous conversational technologies, high accuracy transparent speaker recognition becomes possible even in channel or environment mismatches. For speaker identification over very large populations, we combine dialogs to reduce the set of confusable speakers and text-independent speaker identification to pin-point the actual speaker. Similarly, dialogs with personal random or predefined questions are used to perform simultaneously knowledge-based and acoustic-based verifications of the user. Adequate design of the dialog allows to tailor the ROC curves to the needs of most applications. We demonstrate the conceptual advantages using our telephony prototype. Users familiar with the system can log into the system with 0.8% or 1.3% false rejection and ca. 5 • 10−12% or 2 • 10−6% false acceptance rates in about 40 sec or 20 sec respectively which is an impressive result as compared to purely voice-print based authentication.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "zHwmeNfcP_", "year": null, "venue": "EASE 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=zHwmeNfcP_", "arxiv_id": null, "doi": null }
{ "title": "Attention Please: Consider Mockito when Evaluating Newly Proposed Automated Program Repair Techniques", "authors": [ "Shangwen Wang", "Ming Wen", "Xiaoguang Mao", "Deheng Yang" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ZQ-rHwPw0E", "year": null, "venue": "EASE 2021", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3463274.3463806", "forum_link": "https://openreview.net/forum?id=ZQ-rHwPw0E", "arxiv_id": null, "doi": null }
{ "title": "SLGPT: Using Transfer Learning to Directly Generate Simulink Model Files and Find Bugs in the Simulink Toolchain", "authors": [ "Sohil Lal Shrestha", "Christoph Csallner" ], "abstract": "Finding bugs in a commercial cyber-physical system (CPS) development tool such as Simulink is hard as its codebase contains millions of lines of code and complete formal language specifications are not available. While deep learning techniques promise to learn such language specifications from sample models, deep learning needs a large number of training data to work well. SLGPT addresses this problem by using transfer learning to leverage the powerful Generative Pre-trained Transformer 2 (GPT-2) model, which has been pre-trained on a large set of training data. SLGPT adapts GPT-2 to Simulink with both randomly generated models and models mined from open-source repositories. SLGPT produced Simulink models that are both more similar to open-source models than its closest competitor, DeepFuzzSL, and found a super-set of the Simulink development toolchain bugs found by DeepFuzzSL.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "JwFmfqEzKD", "year": null, "venue": "EASE 2020", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=JwFmfqEzKD", "arxiv_id": null, "doi": null }
{ "title": "Quality Assessment of Online Automated Privacy Policy Generators: An Empirical Study", "authors": [ "Ruoxi Sun", "Minhui Xue" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LJDNip890g", "year": null, "venue": "EASE 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=LJDNip890g", "arxiv_id": null, "doi": null }
{ "title": "Quality assessment of systematic reviews in software engineering: a tertiary study", "authors": [ "You Zhou", "He Zhang", "Xin Huang", "Song Yang", "Muhammad Ali Babar", "Hao Tang" ], "abstract": "Context: The quality of a Systematic Literature Review (SLR) is only as good as the quality of the reviewed papers. Hence, it is vital to rigorously assess the papers included in an SLR. There has been no tertiary study aimed at reporting the state of the practice of quality assessment used in SLRs in Software Engineering (SE). Objective: We aimed to study the practices of quality assessment of the papers included in SLRs in SE. Method: We conducted a tertiary study of the SLRs that have performed quality assessment of the reviewed papers. Results: We identified and analyzed different aspects of the quality assessment of the papers included in 127 SLRs. Conclusion: Researchers use a variety of strategies for quality assessment of the papers reviewed, but report little about the justification for the criteria used. The focus is on the credibility rather than the relevance of the papers. Appropriate guidelines are required for devising quality assessment strategies.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Jr9_3xHLGW", "year": null, "venue": "EASE 2018", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Jr9_3xHLGW", "arxiv_id": null, "doi": null }
{ "title": "An Inception Architecture-Based Model for Improving Code Readability Classification", "authors": [ "Qing Mi", "Jacky Keung", "Yan Xiao", "Solomon Mensah", "Xiupei Mei" ], "abstract": "The process of classifying a piece of source code into a Readable or Unreadable class is referred to as Code Readability Classification. To build accurate classification models, existing studies focus on handcrafting features from different aspects that intuitively seem to correlate with code readability, and then exploring various machine learning algorithms based on the newly proposed features. On the contrary, our work opens up a new way to tackle the problem by using the technique of deep learning. Specifically, we propose IncepCRM, a novel model based on the Inception architecture that can learn multi-scale features automatically from source code with little manual intervention. We apply the information of human annotators as the auxiliary input for training IncepCRM and empirically verify the performance of IncepCRM on three publicly available datasets. The results show that: 1) Annotator information is beneficial for model performance as confirmed by robust statistical tests (i.e., the Brunner-Munzel test and Cliff's delta); 2) IncepCRM can achieve an improved accuracy against previously reported models across all datasets. The findings of our study confirm the feasibility and effectiveness of deep learning for code readability classification.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Eh1Fx7iCDa", "year": null, "venue": "EASE 2018", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Eh1Fx7iCDa", "arxiv_id": null, "doi": null }
{ "title": "Bug Localization with Semantic and Structural Features using Convolutional Neural Network and Cascade Forest", "authors": [ "Yan Xiao", "Jacky Keung", "Qing Mi", "Kwabena Ebo Bennin" ], "abstract": "Background: Correctly localizing buggy files for bug reports together with their semantic and structural information is a crucial task, which would essentially improve the accuracy of bug localization techniques. Aims: To empirically evaluate and demonstrate the effects of both semantic and structural information in bug reports and source files on improving the performance of bug localization, we propose CNN_Forest involving convolutional neural network and ensemble of random forests that have excellent performance in the tasks of semantic parsing and structural information extraction. Method: We first employ convolutional neural network with multiple filters and an ensemble of random forests with multi-grained scanning to extract semantic and structural features from the word vectors derived from bug reports and source files. And a subsequent cascade forest (a cascade of ensembles of random forests) is used to further extract deeper features and observe the correlated relationships between bug reports and source files. CNNLForest is then empirically evaluated over 10,754 bug reports extracted from AspectJ, Eclipse UI, JDT, SWT, and Tomcat projects. Results: The experiments empirically demonstrate the significance of including semantic and structural information in bug localization, and further show that the proposed CNN_Forest achieves higher Mean Average Precision and Mean Reciprocal Rank measures than the best results of the four current state-of-the-art approaches (NPCNN, LR+WE, DNNLOC, and BugLocator). Conclusion: CNNLForest is capable of defining the correlated relationships between bug reports and source files, and we empirically show that semantic and structural information in bug reports and source files are crucial in improving bug localization.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "title": "Bug Localization with Semantic and Structural Features using Convolutional Neural Network and Cascade Forest", "authors": [ "Yan Xiao", "Jacky Keung", "Qing Mi", "Kwabena Ebo Bennin" ], "abstract": "Background: Correctly localizing buggy files for bug reports together with their semantic and structural information is a crucial task, which would essentially improve the accuracy of bug localization techniques. Aims: To empirically evaluate and demonstrate the effects of both semantic and structural information in bug reports and source files on improving the performance of bug localization, we propose CNN_Forest involving convolutional neural network and ensemble of random forests that have excellent performance in the tasks of semantic parsing and structural information extraction. Method: We first employ convolutional neural network with multiple filters and an ensemble of random forests with multi-grained scanning to extract semantic and structural features from the word vectors derived from bug reports and source files. A subsequent cascade forest (a cascade of ensembles of random forests) is used to further extract deeper features and observe the correlated relationships between bug reports and source files. CNN_Forest is then empirically evaluated over 10,754 bug reports extracted from AspectJ, Eclipse UI, JDT, SWT, and Tomcat projects. Results: The experiments empirically demonstrate the significance of including semantic and structural information in bug localization, and further show that the proposed CNN_Forest achieves higher Mean Average Precision and Mean Reciprocal Rank measures than the best results of the four current state-of-the-art approaches (NPCNN, LR+WE, DNNLOC, and BugLocator). Conclusion: CNN_Forest is capable of defining the correlated relationships between bug reports and source files, and we empirically show that semantic and structural information in bug reports and source files are crucial in improving bug localization.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
0
0
[]
[]
{ "id": "kZy0tIgCEHH", "year": null, "venue": "EASE 2020", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=kZy0tIgCEHH", "arxiv_id": null, "doi": null }
{ "title": "Towards continues code recommendation and implementation system: An Initial Framework", "authors": [ "Muhammad Azeem Akbar", "Zhiqiu Huang", "Yu Zhou", "Faisal Mehmood", "Yasir Hussain", "Muhammad Hamza" ], "abstract": "In the current era, automated and reliable recommendation systems play a significant role in human life. Code recommender systems are used in various source code databases to recommend the most suitable source code to the user. During code recommendation, analyzing the code with respect to 'code quality' and 'code implementation' is important in order to recommend the most reliable code that matches the objective of the user. The ultimate aim of this research work is to propose a code recommendation and implementation model using the characteristics of DevOps that assists in extracting, analyzing, implementing, and updating the recommender system continuously. The current study presents an initial framework of the proposed code recommender model. The design of the model is based on data collected through a literature review and by conducting an empirical study with experts. We believe that the proposed model will assist researchers and practitioners in recommending the most secure and suitable source code according to their requirements.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "GMVlwRLnIhP", "year": null, "venue": "EASE 2020", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=GMVlwRLnIhP", "arxiv_id": null, "doi": null }
{ "title": "SIOT-RIMM: Towards Secure IOT-Requirement Implementation Maturity Model", "authors": [ "Muhammad Hamza", "Haibo Hu", "Muhammad Azeem Akbar", "Faisal Mehmood", "Yasir Hussain", "Ali Mahmoud Baddour" ], "abstract": "It is crucial for an organization to capture requirements at an early stage when it intends to build a novel system such as the internet of things (IoT), particularly when it comes to capturing privacy and security requirements to gain public confidence. The proposed research focuses on developing a secure IoT-requirement implementation maturity model (SIOT-RIMM). The proposed model will assist software development organizations in improving and modifying their requirements engineering processes with respect to the security and privacy of IoT. The SIOT-RIMM model will be developed based on the existing IoT literature pertaining to security and privacy, an industrial empirical study, and an understanding of the challenges that could negatively influence the implementation of security and privacy in IoT. To develop the maturity levels of SIOT-RIMM, we will consider the concepts of existing maturity models from other software engineering domains. In this preliminary study, 19 challenges that might have a negative impact on the IoT requirements engineering process were identified using the SLR approach. The identified challenges will contribute to the development of the SIOT-RIMM maturity levels.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "cTYgYpe68-", "year": null, "venue": "EASE 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=cTYgYpe68-", "arxiv_id": null, "doi": null }
{ "title": "Performance Modeling of Hyperledger Fabric 2.0", "authors": [ "Ou Wu", "Shanshan Li", "Liwen Liu", "He Zhang", "Xin Zhou", "Qinghua Lu" ], "abstract": "Hyperledger Fabric has become one of the most widely used consortium blockchain frameworks with the ability to execute custom smart contracts. Performance modeling and network evaluation are necessary for performance estimation and optimization of the Fabric blockchain platform. The compatibility and effectiveness of existing performance modeling methods must be improved. For this reason, we proposed a compatible performance modeling method using queuing theory for Fabric considering the limited transaction pool. Taking the 2.0 version of Fabric as a case, we have established the model for the transaction process in the Fabric network. By analyzing the two-dimensional continuous-time Markov process of this model, we solved the system stationary equation and obtained the analytical expressions of performance indicators such as the system throughput, the system steady-state queue length, and the system’s average response time. We collected the required parameter values through the official test suite. An extensive analysis and simulation was performed to verify the accuracy and the effectiveness of the model and formula. We believe that this method can be extended to a wide range of scenarios in other blockchain systems.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "RHPxNsTmU75", "year": null, "venue": "EASE 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=RHPxNsTmU75", "arxiv_id": null, "doi": null }
{ "title": "Preliminary comparison of techniques for dealing with imbalance in software defect prediction", "authors": [ "Daniel Rodríguez", "Israel Herraiz", "Rachel Harrison", "José Javier Dolado", "José C. Riquelme" ], "abstract": "Imbalanced data is a common problem in data mining when dealing with classification problems, where samples of one class vastly outnumber those of the other classes. In this situation, many data mining algorithms generate poor models as they try to optimize the overall accuracy and perform badly in classes with very few samples. Software Engineering data in general, and defect prediction datasets in particular, are no exception, and in this paper we compare different approaches, namely sampling, cost-sensitive, ensemble and hybrid approaches, to the problem of defect prediction with differently preprocessed datasets. We have used the well-known NASA datasets curated by Shepperd et al. There are differences in the results depending on the characteristics of the dataset and the evaluation metrics, especially if duplicates and inconsistencies are removed as a preprocessing step. Further results and replication package: http://www.cc.uah.es/drg/ease14", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "o0jJUHmvmkN", "year": null, "venue": "EASE 2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=o0jJUHmvmkN", "arxiv_id": null, "doi": null }
{ "title": "Preliminary Study on Applying Semi-Supervised Learning to App Store Analysis", "authors": [ "Roger Deocadez", "Rachel Harrison", "Daniel Rodríguez" ], "abstract": "Semi-Supervised Learning (SSL) is a data mining technique which comes between supervised and unsupervised techniques, and is useful when a small number of instances in a dataset are labelled but a lot of unlabelled data is also available. This is the case with user reviews in application stores such as the Apple App Store or Google Play, where a vast amount of reviews are available but classifying them into categories such as bug related review or feature request is expensive or at least labor intensive. SSL techniques are well-suited to this problem as classifying reviews not only takes time and effort, but may also be unnecessary. In this work, we analyse SSL techniques to show their viability and their capabilities in a dataset of reviews collected from the App Store for both transductive (predicting existing instance labels during training) and inductive (predicting labels on unseen future data) performance.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "sr8IhpAYIkd", "year": null, "venue": "EASE 2011", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=sr8IhpAYIkd", "arxiv_id": null, "doi": null }
{ "title": "Using background colors to support program comprehension in software product lines", "authors": [ "Janet Feigenspan", "Michael Schulze", "Maria Papendieck", "Christian Kästner", "Raimund Dachselt", "Veit Köppen", "Mathias Frisch" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "qrEnhr8V6t-E", "year": null, "venue": "EASE 2016", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=qrEnhr8V6t-E", "arxiv_id": null, "doi": null }
{ "title": "Reporting usability defects: do reporters report what software developers need?", "authors": [ "Nor Shahida Mohamad Yusop", "John C. Grundy", "Rajesh Vasa" ], "abstract": "Reporting usability defects can be a challenging task, especially in convincing the software developers that the reported defect actually requires attention. Stronger evidence in the form of specific details is often needed. However, research to date in software defect reporting has not investigated the value of capturing different information based on defect type. We surveyed practitioners in both open source communities and industrial software organizations about their usability defect reporting practices to better understand information needs to address usability defect reporting issues. Our analysis of 147 responses shows that reporters often provide observed result, expected result and steps to reproduce when describing usability defects, similar to the way other types of defects are reported. However, reporters rarely provide usability-related information. In fact, reporters ranked cause of the problem as the most difficult information to provide, followed by usability principle, video recording, UI event trace and title. Conversely, software developers consider cause of the problem as the most helpful information for them to fix usability defects. Our statistical analysis reveals a substantial gap between what reporters provide and what software developers need when fixing usability defects. We propose some remedies to resolve this gap.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "oQwDZ3zPydO", "year": null, "venue": "EASE 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=oQwDZ3zPydO", "arxiv_id": null, "doi": null }
{ "title": "Towards high performance software teamwork", "authors": [ "Emily Weimar", "Ariadi Nugroho", "Joost Visser", "Aske Plaat" ], "abstract": "Context: Research indicates that software quality, to a large extent, depends on cooperation within software teams [1]. Since software development is a creative process that involves human interaction in the context of a team, it is important to understand the teamwork factors that influence performance. Objective: We present a study design in which we aim to examine the factors within software development teams that have significant influence on the performance of the team. We propose to consider factors such as communication, coordination of expertise, cohesion, trust, cooperation, and value diversity. The study investigates whether and to what extent these factors correlate with the performance of the team. In order to capture a variety of relevant teamwork factors, we created a new model extending the work of Hoegl and Gemuenden [2] and Liang et al. [3]. Method: The study is based on quantitative research by means of an online questionnaire. We invited more than 20 software development teams in the Netherlands to participate in our team performance assessment, evaluating the teamwork and performance of the team. Based on an average team size of five people, one would therefore expect at least 100 participants in total. Also, product stakeholders will be asked to give their independent assessments of the performance of the team. Expected result: By analyzing the correlation between teamwork factors and team performance, we expect to gain a deeper understanding of how teamwork factors influence team performance. We also expect to validate the implemented extensions of the teamwork model with respect to earlier work. Conclusion: Software teamwork factors are important to understand. In order to get a better understanding of the role of teamwork factors, this study should be conducted.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "BZobFzocBZd", "year": null, "venue": "EASE 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=BZobFzocBZd", "arxiv_id": null, "doi": null }
{ "title": "Detection and Elimination of Systematic Labeling Bias in Code Reviewer Recommendation Systems", "authors": [ "K. Ayberk Tecimer", "Eray Tüzün", "Hamdi Dibeklioglu", "Hakan Erdogmus" ], "abstract": "Reviewer selection in modern code review is crucial for effective code reviews. Several techniques exist for recommending reviewers appropriate for a given pull request (PR). Most code reviewer recommendation techniques in the literature build and evaluate their models based on datasets collected from real projects using open-source or industrial practices. The techniques invariably presume that these datasets reliably represent the “ground truth.” In the context of a classification problem, ground truth refers to the objectively correct labels of a class used to build models from a dataset or evaluate a model’s performance. In a project dataset used to build a code reviewer recommendation system, the recommended code reviewer picked for a PR is usually assumed to be the best code reviewer for that PR. However, in practice, the recommended code reviewer may not be the best possible code reviewer, or even a qualified one. Recent code reviewer recommendation studies suggest that the datasets used tend to suffer from systematic labeling bias, making the ground truth unreliable. Therefore, models and recommendation systems built on such datasets may perform poorly in real practice. In this study, we introduce a novel approach to automatically detect and eliminate systematic labeling bias in code reviewer recommendation systems. The bias that we remove results from selecting reviewers that do not ensure a permanently successful fix for a bug-related PR. To demonstrate the effectiveness of our approach, we evaluated it on two open-source project datasets —HIVE and QT Creator— and with five code reviewer recommendation techniques —Profile-Based, RSTrace, Naive Bayes, k-NN, and Decision Tree. Our debiasing approach appears promising since it improved the Mean Reciprocal Rank (MRR) of the evaluated techniques up to 26% in the datasets used.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "SVfAazoYodz", "year": null, "venue": "EANN 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=SVfAazoYodz", "arxiv_id": null, "doi": null }
{ "title": "Efficient Realistic Data Generation Framework Leveraging Deep Learning-Based Human Digitization", "authors": [ "Charalampos Symeonidis", "Paraskevi Nousi", "Pavlos Tosidis", "Konstantinos Tsampazis", "Nikolaos Passalis", "Anastasios Tefas", "Nikos Nikolaidis" ], "abstract": "The performance of supervised deep learning algorithms depends significantly on the scale, quality and diversity of the data used for their training. Collecting and manually annotating large amount of data can be both time-consuming and costly tasks to perform. In the case of tasks related to visual human-centric perception, the collection and distribution of such data may also face restrictions due to legislation regarding privacy. In addition, the design and testing of complex systems, e.g., robots, which often employ deep learning-based perception models, may face severe difficulties as even state-of-the-art methods trained on real and large-scale datasets cannot always perform adequately as they have not adapted to the visual differences between the virtual and the real world data. As an attempt to tackle and mitigate the effect of these issues, we present a method that automatically generates realistic synthetic data with annotations for a) person detection, b) face recognition, and c) human pose estimation. The proposed method takes as input real background images and populates them with human figures in various poses. Instead of using hand-made 3D human models, we propose the use of models generated through deep learning methods, further reducing the dataset creation costs, while maintaining a high level of realism. In addition, we provide open-source and easy to use tools that implement the proposed pipeline, allowing for generating highly-realistic synthetic datasets for a variety of tasks. A benchmarking and evaluation in the corresponding tasks shows that synthetic data can be effectively used as a supplement to real data.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]