Dataset Viewer
metadata (dict)
paper (dict)
review (dict)
citation_count (int64, values 0 to 0)
normalized_citation_count (int64, values 0 to 0)
cited_papers (sequence, lengths 0 to 0)
citing_papers (sequence, lengths 0 to 0)
{ "id": "C3p_Rj0TBq", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=C3p_Rj0TBq", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_P-ooh65kS", "year": null, "venue": "EC2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=_P-ooh65kS", "arxiv_id": null, "doi": null }
{ "title": "Prophet Inequalities for I.I.D. Random Variables from an Unknown Distribution.", "authors": [ "José R. Correa", "Paul Dütting", "Felix A. Fischer", "Kevin Schewior" ], "abstract": "A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables X1, ..., Xn drawn independently from a distribution F, the goal is to choose a stopping time τ so as to maximize α such that for all distributions F we have E [Xτ]≥α• E [maxt Xt]. What makes this problem challenging is that the decision whether τ", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "2fqHDUd9_B", "year": null, "venue": "ECAL2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=2fqHDUd9_B", "arxiv_id": null, "doi": null }
{ "title": "Shepherding with robots that do not compute.", "authors": [ "Anil Özdemir", "Melvin Gauci", "Roderich Gross" ], "abstract": "We examine the problem solving capabilities of swarms of computation- and memory-free agents. Each agent has a single line-of-sight sensor providing two bits of information. The agent maps this information directly onto constant motor commands. In previous work, we showed that such simplistic agents can solve tasks requiring them to organize spatially (multi-robot aggregation and circle formation) and manipulate passive objects (clustering). In the present work, we address the shepherding problem, where the computation- and memory-free agents—the shepherds—are tasked to gather and move a group of dynamic agents—the sheep—towards a pre-defined goal. The shepherds and sheep are modelled as e-puck robots using computer simulations. Our findings show that the shepherding problem does not fundamentally require arithmetic computation or memory to be solved. The obtained controller solution is robust with respect to sensory noise, and copes well with changes in the number of sheep.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "JPAUFf9u4Q", "year": null, "venue": "E2DC@e-Energy 2016", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=JPAUFf9u4Q", "arxiv_id": null, "doi": null }
{ "title": "Learning-based power prediction for data centre operations via deep neural networks", "authors": [ "Yuanlong Li", "Han Hu", "Yonggang Wen", "Jun Zhang" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lri_iAbpn_r", "year": null, "venue": "MIDL 2023 Oral", "pdf_link": "/pdf/ef254aeccd7177afc23382bcbd89f741046b9132.pdf", "forum_link": "https://openreview.net/forum?id=lri_iAbpn_r", "arxiv_id": null, "doi": null }
{ "title": "E(3) x SO(3) - Equivariant Networks for Spherical Deconvolution in Diffusion MRI", "authors": [ "Axel Elaldi", "Guido Gerig", "Neel Dey" ], "abstract": "We present Roto-Translation Equivariant Spherical Deconvolution (RT-ESD), an $E(3)\\times SO(3)$ equivariant framework for sparse deconvolution of volumes where each voxel contains a spherical signal. Such 6D data naturally arises in diffusion MRI (dMRI), a medical imaging modality widely used to measure microstructure and structural connectivity. As each dMRI voxel is typically a mixture of various overlapping structures, there is a need for blind deconvolution to recover crossing anatomical structures such as white matter tracts. Existing dMRI work takes either an iterative or deep learning approach to sparse spherical deconvolution, yet it typically does not account for relationships between neighboring measurements. This work constructs equivariant deep learning layers which respect to symmetries of spatial rotations, reflections, and translations, alongside the symmetries of voxelwise spherical rotations. As a result, RT-ESD improves on previous work across several tasks including fiber recovery on the DiSCo dataset, deconvolution-derived partial volume estimation on real-world in vivo human brain dMRI, and improved downstream reconstruction of fiber tractograms on the Tractometer dataset. Our implementation is available at \\url{https://github.com/AxelElaldi/e3so3_conv}.", "keywords": [ "Equivariance", "Diffusion", "MRI", "fODF", "Geometric Deep Learning", "Spherical Deep Learning" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "iFrMZGUVQfp", "year": null, "venue": "eBISS 2011", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=iFrMZGUVQfp", "arxiv_id": null, "doi": null }
{ "title": "The GoOLAP Fact Retrieval Framework", "authors": [ "Alexander Löser", "Sebastian Arnold", "Tillmann Fiehn" ], "abstract": "We discuss the novel problem of supporting analytical business intelligence queries over web-based textual content, e.g., BI-style reports based on 100.000’s of documents from an ad-hoc web search result. Neither conventional search engines nor conventional Business Intelligence and ETL tools address this problem, which lies at the intersection of their capabilities. Three recent developments have the potential to become key components of such an ad-hoc analysis platform: significant improvements in cloud computing query languages, advances in self-supervised keyword generation techniques and powerful fact extraction frameworks. We will give an informative and practical look at the underlying research challenges in supporting ”Web-Scale Business Analytics” applications that we met when building GoOLAP, a system that already enjoys a broad user base and over 6 million objects and facts.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "en2SEuECnpg", "year": null, "venue": "EAIA 1990", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=en2SEuECnpg", "arxiv_id": null, "doi": null }
{ "title": "Three Lectures on Situation Theoretic Grammar", "authors": [ "Robin Cooper" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "HiHWMiLP035", "year": null, "venue": "ICLR 2022 Submitted", "pdf_link": "/pdf/b83bf8fb27156c92d7d4fa61ebe900d6cb9cd5f7.pdf", "forum_link": "https://openreview.net/forum?id=HiHWMiLP035", "arxiv_id": null, "doi": null }
{ "title": "E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning", "authors": [ "Alperen Gormez", "Erdem Koyuncu" ], "abstract": "State-of-the-art neural networks with early exit mechanisms often need considerable amount of training and fine-tuning to achieve good performance with low computational cost. We propose a novel early exit technique, E$^2$CM, based on the class means of samples. Unlike most existing schemes, E$^2$CM does not require gradient-based training of internal classifiers. This makes it particularly useful for neural network training in low-power devices, as in wireless edge networks. In particular, given a fixed training time budget, E$^2$CM achieves higher accuracy as compared to existing early exit mechanisms. Moreover, if there are no limitations on the training time budget, E$^2$CM can be combined with an existing early exit scheme to boost the latter's performance, achieving a better trade-off between computational cost and network accuracy. We also show that E$^2$CM can be used to decrease the computational cost in unsupervised learning tasks.", "keywords": [ "class means", "early exit", "efficient neural networks" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "zJDHFzNWGCF", "year": null, "venue": "EAMT 2010", "pdf_link": "https://aclanthology.org/2010.eamt-1.17.pdf", "forum_link": "https://openreview.net/forum?id=zJDHFzNWGCF", "arxiv_id": null, "doi": null }
{ "title": "Integration of statistical collocation segmentations in a phrase-based statistical machine translation system", "authors": [ "Marta R. Costa-jussà", "Vidas Daudaravicius", "Rafael E. Banchs" ], "abstract": "Marta R. Costa-jussa, Vidas Daudaravicius, Rafael E. Banchs. Proceedings of the 14th Annual conference of the European Association for Machine Translation. 2010.", "keywords": [], "raw_extracted_content": "Integration of statistical collocation segmentations in a phrase-based\ns\ntatistical machine translationsystem\nMarta R. Costa-juss `a∗, VidasDaudaravicius†and Rafael E. Banchs∗\n∗Barcelona Media Research Center\nAv Diagonal, 177, 9th floor, 08018 Barcelona, Spain\n{marta.ruiz,rafael.banchs}@barcelonamedia.org\n†Faculty of Informatics, Vytautas Magnus University\nVileikos 8, Kaunas, Lithuania\[email protected]\nAbstract\nThis study evaluates the impact of inte-\ngrating two different collocation segmen-\ntationsmethodsinastandardphrase-based\nstatistical machine translation approach.\nThe collocation segmentation techniques\nare implemented simultaneously in the\nsource and target side. Each resulting col-\nlocation segmentation is used to extract\ntranslationunits. Experimentsarereported\nin the English-to-Spanish Bible task and\npromising results (an improvement over\n0.7 BLEU absolute) are achieved in trans-\nlation quality.\n1 Introduction\nMachine Translation (MT) investigates the use of\ncomputer software to translate text or speech from\nonelanguagetoanother. Statisticalmachinetrans-\nlation (SMT) has become one of the most popu-\nlar MT approaches given the combination of sev-\neral factors. Among them, it is relatively straight-\nforward to build an SMT system given the freely\navailable software and, additionally, the system\nconstruction does not require of any language ex-\nperts.\nNowadays, one of the most popular SMT ap-\nproaches is the phrase-based system (Koehn et al.,\n2003) which implements a maximum entropy ap-\nproach based on a combination of feature func-\ntions. The Moses system (Koehn et al., 2007)\nis an implementation of this phrase-based ma-\nchine translation approach. An input sentence\nis first split into sequences of words (so-called\nphrases),whicharethenmappedone-to-onetotar-\nget phrases using a large phrase translation table.\nc/circlecopyrt2010 European Association forMachine Translation.Introducing chunking in the standard phrase-\nbased SMT system is a relatively frequent\nstudy (Zhou et al., 2004; Wang et al., 2002; Ma\net al., 2007). Chunking may be used either to\nimprove reordering or to enhance the translation\ntable. For example, authors in (Zhang et al.,\n2007) present a shallow chunking based on syn-\ntactic information and they use the chunks to re-\norder phrases. Other studies report the impact on\nthe quality of word alignment and in translation\nafter using various types of multi-word expres-\nsions which can be regarded as a type of chunks,\nsee (Lambert and Banchs, 2006) or sub-sentential\nsequences (Macken et al., 2008; Groves and Way,\n2005). Chunking is usually performed on a syn-\ntacticorsemanticbasiswhichforcestohaveatool\nfor parsing or similar. We propose to introduce\nthe collocation segmentation developed by (Dau-\ndaravicius, 2009) which is language independent.\nThis collocation segmentation was applied in key-\nword assigment task and a high classification im-\nprovement was achieved (Daudaravicius, 2010).\nWe use this collocation segmentation technique\nto enrich the phrase translation table. 
The phrase translation table is composed of phrase units which generally are extracted from a word-aligned parallel corpus. Given this word alignment, an extraction of contiguous phrases is carried out (Zens et al., 2002); specifically, all extracted phrases fulfill the following restriction: all source (target) words within a phrase are aligned only to target (source) words within the same phrase.\nThis paper is organized as follows. First, we detail the different collocation segmentation techniques proposed. Secondly, we give a brief description of the phrase-based SMT system and of how we introduce the collocation segmentation to improve it. Then, we present experiments performed with a standard phrase-based system comparing the phrase extraction. Finally, we present the conclusions.\n2 Collocation segmentation\nThe Dice score is used to measure the association strength of two words. This score is used, for instance, in the collocation compiler XTract (Smadja, 1993) and in the lexicon extraction system Champollion (Smadja and Hatzivassiloglou, 1996). Dice is defined as follows:\nDice(x, y) = 2 f(x, y) / (f(x) + f(y))\nwhere f(x, y) is the frequency of co-occurrence of x and y, and f(x) and f(y) are the frequencies of occurrence of x and y anywhere in the text. If x and y tend to occur in conjunction, their Dice score will be high. The text is seen as a changing curve of word associativity values (see Figure 1 and Figure 2).\nCollocation segmentation is the process of detecting the boundaries of collocation segments within a text. A collocation segment is a piece of text between boundaries. The boundaries are set in two steps. First, we set a boundary between two words within a text where the Dice value is lower than a threshold. The threshold value is set manually and is kept at a Dice value of exp(-8) in our experiment CS-1 (i.e., Collocation Segmentation type 1), and at a Dice value of exp(-4) in our experiment CS-2 (i.e., Collocation Segmentation type 2). This decision was based on the shape of the curve found in (Daudaravicius and Marcinkeviciene, 2004). The threshold for CS-1 is kept very low, and many weak word associations are considered. The threshold for CS-2 is high, to keep together only strongly connected words. A higher threshold value makes shorter collocation segments. Shorter collocation segments are more confident collocations, and we may expect better translation results. Nevertheless, the results of our study show that longer collocation segments are preferable. Second, we introduce an average minimum law (AML). The average minimum law is applied to three adjacent Dice values (i.e., four words). The law is expressed as follows:\n( Dice(x_{i-2}, x_{i-1}) + Dice(x_i, x_{i+1}) ) / 2 > Dice(x_{i-1}, x_i)  =>  a boundary is set between x_{i-1} and x_i\nThe boundary of a segment is set at the point where the value of collocability is lower than the average of the preceding and following values of collocability. An example of setting the boundaries for an English sentence is presented in Figure 1, which shows a sentence and the Dice values between word pairs. Almost all values are higher than the arbitrarily chosen threshold level. Most of the boundaries in the example sentence are made by the use of the average minimum law. This law identifies segment or collocation boundaries by the change of the Dice value. This approach is new and different from other widely used statistical methods (Tjong Kim Sang and Buchholz, 2000). 
For instance, the general method used by Choueka (Choueka, 1988) is the following: for each length n (1 <= n <= 6), produce all the word sequences of length n and sort them by frequency; then impose a frequency threshold of 14. Xtract is designed to extract significant bigrams, and then expands 2-grams to n-grams (Smadja, 1993). Lin (Lin, 1998) extends the collocation extraction methods with syntactic dependency triples. Such collocation extraction methods are performed on a dictionary level; the result of the process is a dictionary of collocations. Our collocation segmentation is performed within a text, and the result of the process is a segmented text (see Figure 3).\nThe segmented text could later be used to create a dictionary of collocations. Such a dictionary accepts all collocation segments. The main difference from the Choueka and Smadja methods is that our proposed method accepts all collocations and no significance tests for collocations are performed. The main advantage of this segmentation is the ability to perform collocation segmentation using plain corpora only; no manually segmented corpora or other databases and language processing tools are required. Thus, this approach could be used successfully in many NLP tasks such as statistical machine translation, information extraction, information retrieval, etc.\nThe disadvantage of collocation segmentation is that the segments do not always conform to correct grammatical and lexical phrases. E.g., in Figure 1 an appropriate segmentation of the consecutive set of words 'on the seventh day' would give the segments 'on' and 'the seventh day', but the collocation segmentation yields 'on the' and 'seventh day'. This happens because we have no extra information about the structure of grammatical phrases.\n[Figure 1: The segment boundaries of the English sentence.]\n[Figure 2: The segment boundaries of the Spanish sentence.]\nOn the other hand, it is important to notice that the collocation segmentation of the same translated text is similar for different languages, even if the word or phrase order is different (Daudaravicius, 2010). Therefore, even if collocation segments are not grammatically well formed, the collocation segments are more or less symmetrical for different languages. The same sentence from the Bible corpus is segmented and the result is shown in Figures 1 and 2. As future work, it is necessary to make a thorough evaluation of the conformity of the proposed collocation segmentation method to phrase-based segmentation by using parsers.\n3 Phrase-based SMT system\nThe basic idea of phrase-based translation is to segment the given source sentence into units (hereafter called phrases), then translate each phrase and finally compose the target sentence from these phrase translations.\nBasically, a bilingual phrase is a pair of m source words and n target words. For extraction from a bilingual word-aligned training corpus, two additional constraints are considered: words are consecutive, and they are consistent with the word alignment matrix.\nGiven the collected phrase pairs, the phrase translation probability distribution is commonly estimated by relative frequency in both directions.\nThe translation model is combined with the following six additional feature functions: the target language model, the word and the phrase bonus, the source-to-target and target-to-source lexicon models, and the reordering model. 
These models are optimized in the decoder following the procedure described in http://www.statmt.org/jhuws/.\n4 Integration of the collocation segmentation in the phrase-based SMT system\nThe collocation segmentation provides a new segmentation of the data. One straightforward approach is to use the collocation segments as words and to build a new phrase-based SMT system from scratch, so that phrases are composed from collocation segments. However, we have tested that this approach does not yield better results. The reason for the worse results could be an insufficient amount of data to build a translation table with reliable statistics. The collocation segmentation increases the size of the dictionary more than 5 times (Daudaravicius, 2010), and we need a sufficiently large corpus to get better results than the baseline. But the size of parallel corpora is limited by the number of texts we are able to gather. Therefore, we propose to integrate collocation segments into the standard SMT system: instead of building a new SMT system from scratch, we enrich the base SMT system with collocation segments.\n[Figure 3: The collocation segmentation of the beginning of the Bible.]\nIn this work, we integrate the collocation segmentation as follows.\n1. First, we build a baseline phrase-based system, which is computed as reported in the section above.\n2. Second, we build a collocation-based system which uses collocation segments as words. The main difference of this system is that phrases are composed of collocations instead of words.\n3. Third, we convert the set of collocation-based phrases (computed in step 2) into a set of phrases composed of words. For example, given the collocation-based phrase 'inthesightof ||| delante', whose source side is a single collocation segment, it is converted into the word-level phrase 'in the sight of ||| delante'.\n4. Fourth, we consider the union of the baseline phrase-based extracted phrases (computed in step 1) and the collocation-based extracted phrases (computed in step 2 and modified in step 3). That is, the set of standard phrases is combined with the set of modified collocation phrases.\n5. Finally, the phrase translation table is computed over the concatenated set of extracted phrases. This phrase table contains the standard phrase-based models named in Section 3: relative frequencies, lexical probabilities and phrase bonus. Notice that some pairs of phrases can be generated by both extractions; these phrases will then have a higher score when computing the relative frequencies. The IBM probabilities are computed at the level of words.\nHereinafter, this approach will be referred to as the concatenate-based approach (CONCAT). Figure 4 shows an example of phrase extraction.\nThe goal of integrating the collocation segmentation into the base SMT system is to introduce new phrases into the translation table and to smooth the relative frequencies of the translation phrases which appear in both segmentations. Additionally, the concatenation of the two translation tables makes it possible to highlight those translation phrases that are recognized in both translation tables. Therefore, this allows one to 'vote' for the better translation phrases by adding a new feature function which is '1' if a phrase appears in both segmentations and '0' otherwise.\n5 Experimental framework\nThe phrase-based system used in this paper is based on the well-known MOSES toolkit, which is nowadays considered a state-of-the-art SMT system (Koehn et al., 2007). 
The training and weights tuning procedures are explained in detail in the above-mentioned publication, as well as on the MOSES web page: http://www.statmt.org/moses/.\n5.1 Corpus statistics\nExperiments were carried out on the English-to-Spanish Bible task, which has been proven to be a valid NLP resource (Chew et al., 2006). The main advantages of using this corpus are that it is the world's most translated book, with translations in over 2,100 languages (often, multiple translations per language) and easy availability, often in electronic form and in the public domain; it covers a variety of literary styles including narrative, poetry, and correspondence; great care is taken over the translations; it has a standard structure which allows parallel alignment on a verse-by-verse basis; and, perhaps surprisingly, its vocabulary appears to have a high rate of coverage (as much as 85%) of modern-day language. The Bible is small compared to many corpora currently used in computational linguistics research, but still falls within the range of acceptability based on the fact that other corpora of similar size are used (see the IWSLT International Evaluation Campaign, http://mastarpj.nict.go.jp/IWSLT2009/).\n[Figure 4: Example of the phrase extraction process in the CONCAT approach. New phrases added by the collocation-based system are marked with **.]\nTable 1 shows the main statistics of the data used, namely the number of sentences, words and vocabulary, for each language.\nTable 1: Bible corpus: training, development and test data sets (Spanish / English).\nTraining: Sentences 28,887 / 28,887; Tokens 781,113 / 848,776; Types 28,178 / 13,126\nDevelopment: Sentences 500 / 500; Tokens 13,312 / 14,562; Types 2,879 / 2,156\nTest: Sentences 500 / 500; Tokens 13,170 / 14,537; Types 2,862 / 2,095\n5.2 Collocation segment statistics\nHere we analyse the collocation segment statistics. Table 2 shows the number of tokens and types of collocation segments. We see that the number of types of collocation segments is around 6 times higher than the number of types of words. The increase is different for Spanish and English.\nTable 2: Tokens and types of collocation segments in the training set (Spanish / English).\nSentences 28,887 / 28,887\nTokens CS-1 407,505 / 456,608; Types CS-1 109,521 / 84,789\nTokens CS-2 524,916 / 549,585; Types CS-2 57,893 / 37,030\nThe CS-1 segmentation increased the number of types for the Spanish training set by 4 times, and for English by 6.5 times. Therefore, the dictionaries for Spanish and English become comparable in size. This allows us to expect better alignment, and that is indeed what we observe in our experiments. The CS-2 segmentation increased the number of types for the Spanish training set by 2 times, and for English by 2.8 times. These dictionaries are still of noticeably different size. In Section 5.5 we show that the CS-1 segmentation provides the best results. This result may indicate that the initial number of types before alignment is an important feature. The number of types should be comparable in order to achieve the best alignment, and the best translation results afterward. 
This may explain why the CS-1 segmentation contributes to higher quality translations than the CS-2 segmentation, as will be shown in Section 5.5.\n5.3 Experimental systems\nWe build four different systems: the phrase-based (PB) system, with two different phrase length limits, and the concatenate-based (CONCAT) SMT system, which has two versions, one for each type of segmentation presented above.\nPhrase length is understood as the maximum number of words either in the source or the target part. In our experiments, the CONCAT systems concatenated the baseline system, which used phrases of up to 10 words, together with the units coming from the collocation segmentation, which was also limited to 10. This collocation segmentation limit allowed for translation units of a maximum of 20 words. In order to make a fair comparison, we used two baseline systems, one with a maximum of 10 words (PB-10) and another with a maximum of 20 words (PB-20) per translation unit.\n5.4 Translation units analysis\nThis section analyses the translation units that were used in the test set (i.e., the highest scoring translation units found by the decoder).\nAdding more phrases (in the PB-20 system) without any selection leads to a phrase table of 7M translation units, whereas with our CONCAT-1 proposal the phrase table contains 4.6M translation units and with CONCAT-2 it contains 5.3M translation units. That means a 35% reduction of the total translation unit vocabulary.\nTable 3 shows the average and maximum length of the translation units used in the test set. The collocation segmentation influences the length of the translation phrases. Neither the CONCAT-1 nor the CONCAT-2 approach uses longer phrases on average; in fact, the segmentation reduces the average length of the translation unit. This result may be surprising, because a segmentation which uses chunks instead of words might be expected to increase the average length of the translation units. In the next section, we will see that using longer phrases does not improve the translation. Notice that the literature showed that using longer phrases does not provide better translation (Koehn et al., 2003).\n5.5 Automatic translation evaluation\nThe translation performance of the four experimental systems is evaluated and shown in Table 4. An indirect composition of phrases with the help of the segmentation gives better results than a straightforward composition of translation phrases from single words. However, adding phrases using the standard algorithm can lead to slightly worse translations (Koehn et al., 2003).\nThe best translation results were achieved by integrating collocation segmentation 1, which uses longer collocation segments, into the SMT system. This result shows that shorter collocations, i.e., more confident collocations, do not improve results. This could be due to the ability of the base SMT system to capture collocations in a similar way as collocation segmentation 2 does. Collocation segmentation 1 introduces longer collocations that the base SMT system is not able to capture. Thus, longer collocations improve the base SMT system more than shorter collocations.\nThe results show that a higher average length of the translation phrases does not necessarily lead to better translations (see Table 3). The improvement in translation quality (when using the collocation segmentation) may indicate that the short phrases coming from the collocation segmentation have a better association between words and lead to a better translation. 
It is difficult to draw a conclusion about the importance of the average phrase length in the translation table. The average phrase length alone is therefore not a reliable feature: it does not give important information and could mislead the conclusions. This is clearly seen in our results: the BLEU scores of PB-10 and CONCAT-2 are very close, but their average phrase lengths are quite different and lie on opposite sides of the CONCAT-1 value. Further studies could show which features could be used to describe the quality of the translation dictionary.\nTable 3: Translation unit length statistics used in the test set (PB-10 / PB-20 / CONCAT-1 / CONCAT-2).\nSource phrase average length: 2.51 / 2.56 / 2.36 / 2.27\nSource phrase maximum length: 10 / 20 / 10 / 16\nTarget phrase average length: 2.32 / 2.34 / 2.13 / 2.05\nTarget phrase maximum length: 10 / 20 / 10 / 10\nCollocation segmentation is capable of introducing new translation units that are useful in the final translation system and of smoothing the relative frequencies of those units which were already in the baseline translation table. The improvement is almost +0.6 BLEU points in the test set. Further experiments could be dedicated to investigating the separate improvements due to (1) the new translation units and (2) the smoothing (in case they give independent gains). From now on, the comparison is made between the best baseline system (PB-10) and the best CONCAT system (CONCAT-1), which obtained the best results in the automatic evaluation.\nWe found that a certain number of sentences produced the same output with different segmentations. When comparing the outputs of the best CONCAT and the best baseline (PB-10) systems, 165 sentences produced the same output (in most cases with different segmentation). The last row of Table 4 shows BLEU when evaluating only the sentences which were different (Subset-Test, 335 sentences). In this case, the BLEU improvement reaches +0.75.\n5.6 Translation analysis\nWe performed a manual analysis of the translations, comparing 100 output sentences from the baseline and the CONCAT system.\nNo significant advantages of the baseline system were found, whereas the collocation segmentation improves translation quality in the following ways (only sentence subsegments are shown):\n1. No removal of words.\nBas: llamó su nombre Noé:\n+CS: llamó su nombre Noé, diciendo:\nREF: llamó su nombre Noé, diciendo:\n2. Better choice of prepositions.\nBas: declarará por juramento\n+CS: declarará bajo juramento\nREF: declarará bajo juramento\n3. Better choice of translation units.\nBas: . ||| ;\n+CS: . ||| .\nREF: .\n4. Better preservation of idiomaticity.\nBas: podrás comer pan\n+CS: comerás pan\nREF: comerás pan\n5. Better selection of a phrase structure.\nBas: cuando él conoce\n+CS: cuando él llegue a saberlo\nREF: cuando él llegue a saberlo\n6 Conclusions and further research\nThis work explored the feasibility of improving a standard phrase-based statistical machine translation system by using a novel collocation segmentation method for translation unit extraction. Experiments were carried out on the English-to-Spanish Bible corpus task. 
A small but significant gain in translation BLEU was obtained when combining these units with the standard set of phrases.\nFuture research in this area is envisioned in the following main directions: to study how the collocations learned on the Bible corpus differ from those learned on more general corpora; to improve the collocation segmentation quality in order to obtain more human-like translation unit segmentations; to explore the use of a specific feature function for helping the translation system to select translation units from both categories (collocation segments and conventional phrases) according to their relative importance at each decoding step; and to evaluate the impact of new translation units vs. smoothing.\nTable 4: Translation results in terms of BLEU (PB-10 / PB-20 / CONCAT-1 / CONCAT-2).\nTest: 35.68 / 35.60 / 36.28 / 35.82\nSubset-Test: 33.65 / – / 34.40 / –\n7 Acknowledgements\nThis work has been partially funded by the Spanish Department of Education and Science through the Juan de la Cierva fellowship program and the BUCEADOR project (TEC2009-14094-C04-01). The authors also want to thank the Barcelona Media Innovation Centre for its support and permission to publish this research.\nReferences\nChew, P. A., S. J. Verzi, T. L. Bauer, and J. T. McClain. 2006. Evaluation of the Bible as a resource for cross-language information retrieval. In Proceedings of the Workshop on Multilingual Language Resources and Interoperability, pages 68–74.\nChoueka, Y. 1988. Looking for needles in a haystack, or locating interesting collocational expressions in large textual databases. In Proceedings of the RIAO Conference on User-Oriented Content-Based Text and Image Handling, pages 21–24, Cambridge, MA.\nDaudaravicius, V. and R. Marcinkeviciene. 2004. Gravity counts for the boundaries of collocations. International Journal of Corpus Linguistics, 9(2):321–348.\nDaudaravicius, V. 2009. Automatic identification of lexical units. An International Journal of Computing and Informatics, Special Issue on Computational Linguistics.\nDaudaravicius, V. 2010. The influence of collocation segmentation and top 10 items to keyword assignment performance. In 11th International Conference on Intelligent Text Processing and Computational Linguistics (LNCS), Springer Verlag, page 12, Iasi, Romania.\nGroves, D. and A. Way. 2005. Hybrid data-driven models of machine translation. Machine Translation, 19(3):301–323.\nKoehn, P., F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL, pages 48–54, Edmonton.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL, pages 177–180, Prague, Czech Republic, June.\nLambert, P. and R. Banchs. 2006. Grouping multi-word expressions according to part-of-speech in statistical machine translation. In Proceedings of the EACL, pages 9–16, Trento.\nLin, D. 1998. Extracting collocations from text corpora. In First Workshop on Computational Terminology, Montreal.\nMa, Y., N. Stroppa, and A. Way. 2007. Alignment-guided chunking. In Proc. of TMI 2007, pages 114–121, Skövde, Sweden.\nMacken, L., E. Lefever, and V. Hoste. 2008. Linguistically-based sub-sentential alignment for terminology extraction from a bilingual automotive corpus. In Proceedings of COLING, pages 529–536, Manchester.\nSmadja, F., K. R. McKeown, and V. Hatzivassiloglou. 1996. 
Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1):1–38.\nSmadja, F. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143–177.\nTjong Kim Sang, E. and S. Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. of CoNLL-2000 and LLL-2000, pages 127–132, Lisbon, Portugal.\nWang, W., J. Huang, M. Zhou, and C. Huang. 2002. Structure alignment using bilingual chunks. In Proc. of COLING 2002, Taipei.\nZens, R., F. J. Och, and H. Ney. 2002. Phrase-based statistical machine translation. In Jarke, M., J. Koehler, and G. Lakemeyer, editors, KI 2002: Advances in Artificial Intelligence, volume LNAI 2479, pages 18–32. Springer Verlag, September.\nZhang, Y., R. Zens, and H. Ney. 2007. Chunk-level reordering of source language sentences with automatically learned rules for statistical machine translation. In Proc. of the Human Language Technology Conf. (HLT-NAACL'06): Proc. of the Workshop on Syntax and Structure in Statistical Translation (SSST), pages 1–8, Rochester, April.\nZhou, Y., C. Zong, and X. Bo. 2004. Bilingual chunk alignment in statistical machine translation. In IEEE International Conference on Systems, Man and Cybernetics, volume 2, pages 1401–1406, The Hague.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
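A minimal sketch of the Dice-based collocation segmentation described in the extracted paper above (a threshold cut plus the average minimum law). This is an illustration only, not the authors' implementation: the function names, the whitespace tokenization, and the toy corpus standing in for the Bible training data are assumptions.

```python
import math
from collections import Counter

def dice(a, b, uni, bi):
    # Dice(x, y) = 2 f(x, y) / (f(x) + f(y)), with counts taken from a corpus.
    return 2 * bi[(a, b)] / (uni[a] + uni[b]) if uni[a] + uni[b] else 0.0

def collocation_segment(tokens, uni, bi, threshold=math.exp(-8)):
    """Segment one tokenized sentence with the paper's two boundary rules:
    (1) cut where Dice falls below the threshold (exp(-8) for CS-1, exp(-4) for CS-2);
    (2) cut where the average minimum law holds, i.e. the mean of the two
        neighbouring Dice values exceeds the local one."""
    d = [dice(a, b, uni, bi) for a, b in zip(tokens, tokens[1:])]
    cuts = set()
    for i, score in enumerate(d):
        if score < threshold:
            cuts.add(i)                                   # rule 1: weak association
        elif 0 < i < len(d) - 1 and (d[i - 1] + d[i + 1]) / 2 > score:
            cuts.add(i)                                   # rule 2: average minimum law
    segments, start = [], 0
    for i in range(len(tokens)):
        if i in cuts or i == len(tokens) - 1:             # cut i = boundary after token i
            segments.append(tokens[start:i + 1])
            start = i + 1
    return segments

# Toy corpus standing in for the Bible training set used in the paper.
corpus = [line.split() for line in [
    "in the beginning god created the heaven and the earth",
    "and god said let there be light and there was light",
]]
uni = Counter(w for sent in corpus for w in sent)
bi = Counter(p for sent in corpus for p in zip(sent, sent[1:]))
print(collocation_segment(corpus[0], uni, bi))
```

Passing threshold=math.exp(-4) instead would correspond to the stricter CS-2 setting discussed in the paper.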
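Steps 3 to 5 of the CONCAT integration (converting collocation-based phrase pairs back to word level, pooling them with the baseline extraction, and re-estimating relative frequencies over the concatenated set) could look roughly like the sketch below. The underscore joiner and the helper names are assumptions, not the paper's implementation, and only the source-to-target relative frequency is shown.

```python
from collections import Counter

def to_words(side, joiner="_"):
    # Assumed representation: a collocation segment is its words joined with '_'.
    return " ".join(tok.replace(joiner, " ") for tok in side.split())

def concat_phrase_table(baseline_pairs, collocation_pairs):
    """Pool word-level phrase pairs from both extractions and re-estimate the
    source-to-target relative frequencies over the concatenated multiset, so
    pairs found by both segmentations get a boosted score."""
    pooled = list(baseline_pairs)
    pooled += [(to_words(src), to_words(tgt)) for src, tgt in collocation_pairs]
    pair_counts = Counter(pooled)
    src_counts = Counter(src for src, _ in pooled)
    return {pair: c / src_counts[pair[0]] for pair, c in pair_counts.items()}

baseline = [("in the sight of", "delante"), ("in the sight of", "a la vista de")]
collocation = [("in_the_sight_of", "delante")]   # one source-side collocation segment
print(concat_phrase_table(baseline, collocation))
# the pair found by both extractions ends up with relative frequency 2/3 vs. 1/3
```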
{ "id": "5-G0TKKKDtG", "year": null, "venue": "ECAL2005", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=5-G0TKKKDtG", "arxiv_id": null, "doi": null }
{ "title": "Self-assembly on Demand in a Group of Physical Autonomous Mobile Robots Navigating Rough Terrain.", "authors": [ "Rehan O'Grady", "Roderich Groß", "Francesco Mondada", "Michael Bonani", "Marco Dorigo" ], "abstract": "Consider a group of autonomous, mobile robots with the ability to physically connect to one another (self-assemble). The group is said to exhibit functional self-assembly if the robots can choose to self-assemble in response to the demands of their task and environment [15]. We present the first robotic controller capable of functional self-assembly implemented on a real robotic platform. The task we consider requires a group of robots to navigate over an area of unknown terrain towards a target light source. If possible, the robots should navigate to the target independently. If, however, the terrain proves too difficult for a single robot, the robots should self-assemble into a larger group entity and collectively navigate to the target. We believe this to be one of the most complex tasks carried out to date by a team of physical autonomous robots. We present quantitative results confirming the efficacy of our controller. This puts our robotic system at the cutting edge of autonomous mobile multi-robot research.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "h6phZYPxeK6", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=h6phZYPxeK6", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "K2YM3d4StLF", "year": null, "venue": "EALS 2014", "pdf_link": "https://ieeexplore.ieee.org/iel7/7000192/7009492/07009514.pdf", "forum_link": "https://openreview.net/forum?id=K2YM3d4StLF", "arxiv_id": null, "doi": null }
{ "title": "A recurrent meta-cognitive-based Scaffolding classifier from data streams", "authors": [ "Mahardhika Pratama", "Jie Lu", "Sreenatha G. Anavatti", "José Antonio Iglesias" ], "abstract": "A novel incremental meta-cognitive-based Scaffolding algorithm is proposed in this paper crafted in a recurrent network based on fuzzy inference system termed recurrent classifier (rClass). rClass features a synergy between schema and scaffolding theories in the how-to-learn part, which constitute prominent learning theories of the cognitive psychology. In what-to-learn component, rClass amalgamates the new online active learning concept by virtue of the Bayesian conflict measure and dynamic sampling strategy, whereas the standard sample reserved strategy is incorporated in the when-to-learn constituent. The inference scheme of rClass is managed by the local recurrent network, sustained by the generalized fuzzy rule. Our thorough empirical study has ascertained the efficacy of rClass, which is capable of producing reliable classification accuracies, while retaining the amenable computational and memory burdens.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "HKbEFfD3v0Y", "year": null, "venue": "E2DC 2012", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=HKbEFfD3v0Y", "arxiv_id": null, "doi": null }
{ "title": "Modeling and Simulation of Data Center Energy-Efficiency in CoolEmAll", "authors": [ "Micha vor dem Berge", "Georges Da Costa", "Andreas Kopecki", "Ariel Oleksiak", "Jean-Marc Pierson", "Tomasz Piontek", "Eugen Volk", "Stefan Wesner" ], "abstract": "In this paper we present an overview of the CoolEmAll project which addresses the important problem of data center energy efficiency. To this end, CoolEmAll aims at delivering advanced simulation, visualization and decision support tools along with open models of data center building blocks to be used in simulations. Both building blocks and the toolkit will take into account aspects that have major impact on actual energy consumption such as cooling solutions, properties of applications, and workload and resource management policies. In the paper we describe the CoolEmAll approach, its expected results and an environment for their verification.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "3IbIwIUAxAY", "year": null, "venue": "EBCCSP 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9845455/9845499/09845664.pdf", "forum_link": "https://openreview.net/forum?id=3IbIwIUAxAY", "arxiv_id": null, "doi": null }
{ "title": "A toolbox for neuromorphic perception in robotics", "authors": [ "Julien Dupeyroux", "Stein Stroobants", "Guido C. H. E. de Croon" ], "abstract": "The third generation of artificial intelligence (AI) introduced by neuromorphic computing is revolutionizing the way robots and autonomous systems can sense the world, process the information, and interact with their environment. Research towards fulfilling the promises of high flexibility, energy efficiency, and robustness of neuromorphic systems is widely supported by software tools for simulating spiking neural networks, and hardware integration (neuromorphic processors). Yet, while efforts have been made on neuromorphic vision (event-based cameras), it is worth noting that most of the sensors available for robotics remain inherently incompatible with neuromorphic computing, where information is encoded into spikes. To facilitate the use of traditional sensors, we need to convert the output signals into streams of spikes, i.e., a series of events (+1,-1) along with their corresponding timestamps. In this paper, we propose a review of the coding algorithms from a robotics perspective and further supported by a benchmark to assess their performance. We also introduce a ROS (Robot Operating System) toolbox to encode and decode input signals coming from any type of sensor available on a robot. This initiative is meant to stimulate and facilitate robotic integration of neuromorphic AI, with the opportunity to adapt traditional off-the-shelf sensors to spiking neural nets within one of the most powerful robotic tools, ROS.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
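As a rough illustration of the signal-to-spike conversion described in the abstract above (a stream of (+1, -1) events with timestamps derived from a conventional sensor reading), here is a generic send-on-delta style encoder. It is a sketch of one classic scheme only; the class name, interface, and threshold are assumptions and not the API of the paper's ROS toolbox.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SendOnDeltaEncoder:
    """Emit a (+1, t) or (-1, t) event each time the sampled signal moves more
    than `threshold` away from the last encoded level; a decoder can recover an
    approximation of the signal by summing threshold-sized steps."""
    threshold: float
    level: float = 0.0

    def encode(self, value: float, timestamp: float) -> List[Tuple[int, float]]:
        events = []
        while value - self.level >= self.threshold:
            self.level += self.threshold
            events.append((+1, timestamp))    # upward crossing
        while self.level - value >= self.threshold:
            self.level -= self.threshold
            events.append((-1, timestamp))    # downward crossing
        return events

enc = SendOnDeltaEncoder(threshold=0.5)
for t, v in enumerate([0.0, 0.4, 1.2, 1.1, -0.3]):
    print(t, enc.encode(v, float(t)))
```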
{ "id": "UevSbA0H2_2", "year": null, "venue": "ECAL2003", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=UevSbA0H2_2", "arxiv_id": null, "doi": null }
{ "title": "A Multi-agent Based approach to Modelling and Rendering of 3D Tree Bark Textures.", "authors": [ "Ban Tao", "Changshui Zhang", "Shu Wei" ], "abstract": "Multi-Agent System (MAS) has been a wide used and effective method to solve distributed AI problems. In this paper, we simplify the biological mechanism in tree bark growth and build a MAS model to simulate the generation of tree barks. The epidermis of the bark serves as the environment of the MAS while splits and lenticels are modelled as split agents and lenticel agents. The environment records the geometrics formed by the interactions of the agents during the life cycles. Visualization of the geometrics can result in realistic 3D tree bark textures which can give much fidelity to computer graphics applications.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "RRK0bFrFCJya", "year": null, "venue": "ECAL2003", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=RRK0bFrFCJya", "arxiv_id": null, "doi": null }
{ "title": "Evolving Aggregation Behaviors in a Swarm of Robots.", "authors": [ "Vito Trianni", "Roderich Groß", "Thomas Halva Labella", "Erol Sahin", "Marco Dorigo" ], "abstract": "In this paper, we study aggregation in a swarm of simple robots, called s − bots, having the capability to self-organize and self-assemble to form a robotic system, called a swarm − bot. The aggregation process, observed in many biological systems, is of fundamental importance since it is the prerequisite for other forms of cooperation that involve self-organization and self-assembling. We consider the problem of defining the control system for the swarm − bot using artificial evolution. The results obtained in a simulated 3D environment are presented and analyzed. They show that artificial evolution, exploiting the complex interactions among s − bots and between s − bots and the environment, is able to produce simple but general solutions to the aggregation problem.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "GqmCqI6rbt2A", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122696.pdf", "forum_link": "https://openreview.net/forum?id=GqmCqI6rbt2A", "arxiv_id": null, "doi": null }
{ "title": "Variants of Recursive Consequent Parameters Learning in Evolving Neuro-Fuzzy Systems", "authors": [ "Edwin Lughofer" ], "abstract": "A wide variety of evolving (neuro-)fuzzy systems (E(N)FS) approaches have been proposed during the last 10 to 15 years in order to handle (fast and real-time) data stream mining and modeling processes by dynamically updating the rule structure and antecedents. The current denominator in the update of the consequent parameters is the usage of the recursive (fuzzily weighted) least squares estimator (R(FW)LS), as being applied in almost all E(N)FS approaches. In this paper, we propose and examine alternative variants for consequent parameter updates, namely multi-innovation RFWLS, recursive corr-entropy and especially recursive weighted total least squares. Multi-innovation RLS guarantees more stability in the update, whenever structural changes (i.e. changes in the antecedents) in the E(N)FS are performed, as the rule membership degrees on (a portion of) past samples are actualized before and properly integrated in each update step. Recursive corr-entropy addresses the problematic of outliers by down-weighing the influence of (atypically) higher errors in the parameter updates. Recursive weighted total least squares takes into account also a possible noise level in the input variables (and not solely in the target variable as in RFWLS). The approaches are compared with standard RFWLS i.) on three data stream regression problems from practical applications, affected by (more or less significant) noise levels and one embedding a known drift, and ii.) on a realworld time-series based forecasting problem, also affected by noise. The results based on accumulated prediction error trends over time indicate that RFWLS can be largely outperformed by the proposed alternative variants.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "H1V8G-0Iz", "year": null, "venue": null, "pdf_link": "/pdf/387eeb47ef2b850977cfac2d964057330b755ef8.pdf", "forum_link": "https://openreview.net/forum?id=H1V8G-0Iz", "arxiv_id": null, "doi": null }
{ "title": "eCommerceGAN: A Generative Adversarial Network for e-commerce", "authors": [ "Ashutosh Kumar", "Arijit Biswas", "Subhajit Sanyal" ], "abstract": "E-commerce companies such as Amazon, Alibaba, and Flipkart process billions of orders every year. However, these orders represent only a small fraction of all plausible orders. Exploring the space of all plausible orders could help us better understand the relationships between the various entities in an e-commerce ecosystem, namely the customers and the products they purchase. In this paper, we propose a Generative Adversarial Network (GAN) for e-commerce orders. Our contributions include: (a) creating a dense and low-dimensional representation of e-commerce orders, (b) train an ecommerceGAN (ecGAN) with real orders to show the feasibility of the proposed paradigm, and (c) train an ecommerce-conditional- GAN (ec2GAN) to generate the plausible orders involving a particular product. We evaluate ecGAN qualitatively to demonstrate its effectiveness. The ec2GAN is used for various kinds of characterization of possible orders involving cold-start products.", "keywords": [ "E-commerce", "Generative Adversarial Networks", "Deep Learning", "Order Embedding", "Product Recommendation" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "AGwwR_r9azn", "year": null, "venue": "E2EMON 2007", "pdf_link": "https://ieeexplore.ieee.org/iel5/4261330/4261331/04261338.pdf", "forum_link": "https://openreview.net/forum?id=AGwwR_r9azn", "arxiv_id": null, "doi": null }
{ "title": "Traffic Trace Artifacts due to Monitoring Via Port Mirroring", "authors": [ "Jian Zhang", "Andrew W. Moore" ], "abstract": "Port-mirroring techniques are supported by many of today's medium and high-end Ethernet switches. The ubiquity and low-cost of port mirroring has made it a popular method for collecting packet traces. Despite its wide-spread use little work has been reported on the impacts of this monitoring method upon the measured network traffic. In particular, we focus upon each of delay and jitter (tinting difference), packet-reordering, and packet-loss statistics. We compare the port-mirroring method with inserting a passive TAP (test access point), such as a fibre splitter, into a monitored link. Despite a passive TAP being transparent to monitored traffic, port-mirroring popularity arises from its limited set-up disruption, and (potentially) easier management This paper documents experimental comparison of traffic using the passive TAP and port-mirroring functionality, and shows that port-mirroring will introduce significant changes to the inter-packet timing, packet-reordering, and packet-loss - even at very low levels of utilisation.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "UasCXDalEPZ", "year": null, "venue": "E2EMON 2005", "pdf_link": "https://ieeexplore.ieee.org/iel5/10461/33206/01564465.pdf", "forum_link": "https://openreview.net/forum?id=UasCXDalEPZ", "arxiv_id": null, "doi": null }
{ "title": "Autonomous end to end QoS monitoring", "authors": [ "Constantine Elster", "Danny Raz", "Ran Wolff" ], "abstract": "Verifying that each flow in the network satisfies its QoS requirements is one of the biggest scalability challenges in the current DiffServ architecture. This task is usually performed by a centralized allocation entity that monitors the flows' QoS parameters. Efficient detection of problematic flows is even more challenging when considering aggregated information such as the end to end delay suffered by packets belonging to a specific flow. Known oblivious and reactive monitoring techniques do not scale well when the number of flows and the length of their paths increase, and when the network load increases. This is due both to load on the centralized bandwidth allocation entity and to the excessive number of monitoring and control messages needed. We propose a new monitoring paradigm termed autonomous monitoring, in which the network itself (i.e. the routers along the flow path) is responsible to discover when a violation of the SLA occurs (or is soon to occur). Only in such cases the centralized allocation entity is notified, and can take the required actions. We study the performance of this new distributed algorithm through theoretical analysis and extensive simulations. Our results indicate that in addition to dramatically reducing the load from the centralized allocation entity, the amount of network traffic needed is relatively small and thus the new monitoring scheme scales well.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "oBbSGJLAzC", "year": null, "venue": "E2EMON 2005", "pdf_link": "https://ieeexplore.ieee.org/iel5/10461/33206/01564467.pdf", "forum_link": "https://openreview.net/forum?id=oBbSGJLAzC", "arxiv_id": null, "doi": null }
{ "title": "InTraBase: integrated traffic analysis based on a database management system", "authors": [ "Matti Siekkinen", "Ernst W. Biersack", "Guillaume Urvoy-Keller", "Vera Goebel", "Thomas Plagemann" ], "abstract": "Internet traffic analysis as a research area has attracted lots of interest over the last decade. The traffic data collected for analysis are usually stored in plain files and the analysis tools consist of customized scripts each tailored for a specific task. As data are often collected over a longer period of time or from different vantage points, it is important to keep metadata that describe the data collected. The use of separate files to store the data, the metadata, and the analysis scripts provides an abstraction that is much too primitive. The information that \"glues\" these different files together is not made explicit but is solely in the heads of the people involved in the activity. As a consequence, manipulating the data is very cumbersome, does not scale, and severely limits the way these data can be analyzed. We propose to use a database management system (DBMS) that provides the infrastructure for the analysis and management of data from measurements, related metadata, and obtained results. We discuss the problems and limitations with today's approaches, describe our ideas, and demonstrate how our DBMS-based solution, called InTraBase, addresses these problems and limitations. We present the first version of our prototype and preliminary performance analysis results.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ZSjNviOrwKl", "year": null, "venue": "E2EMON 2006", "pdf_link": "https://ieeexplore.ieee.org/iel5/10987/34624/01651274.pdf", "forum_link": "https://openreview.net/forum?id=ZSjNviOrwKl", "arxiv_id": null, "doi": null }
{ "title": "Object-Relational DBMS for Packet-Level Traffic Analysis: Case Study on Performance Optimization", "authors": [ "Matti Siekkinen", "Ernst W. Biersack", "Vera Goebel" ], "abstract": "Analyzing Internet traffic at packet level involves generally large amounts of raw data, derived data, and results from various analysis tasks. In addition, the analysis often proceeds in an iterative manner and is done using ad-hoc methods and many specialized software tools. These facts together lead to severe management problems that we propose to address using a DBMS-based approach, called In TraBase. The challenge that we address in this paper is to have such a database system (DBS) that allows to perform analysis efficiently. Off-the-shelf DBMSs are often considered too heavy and slow for such usage because of their complex transaction management properties that are crucial for the usage that they were originally designed for. We describe in this paper the design choices for a generic DBS for packet-level traffic analysis that enable good performance and describe how we implement them in the case of the InTraBase. Furthermore, we demonstrate their importance through performance measurements on the InTraBase. These results provide valuable insights for researchers who intend to utilize a DBMS for packet-level traffic analysis.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Cy8maxG_MEI5", "year": null, "venue": "EASE 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Cy8maxG_MEI5", "arxiv_id": null, "doi": null }
{ "title": "Human Values Violations in Stack Overflow: An Exploratory Study", "authors": [ "Sara Krishtul", "Mojtaba Shahin", "Humphrey O. Obie", "Hourieh Khalajzadeh", "Fan Gai", "Ali Rezaei Nasab", "John C. Grundy" ], "abstract": "A growing number of software-intensive systems are being accused of violating or ignoring human values (e.g., privacy, inclusion, and social responsibility), and this poses great difficulties to individuals and society. Such violations often occur due to the solutions employed and decisions made by developers of such systems that are misaligned with user values. Stack Overflow is the most popular Q&A website among developers to share their issues, solutions (e.g., code snippets), and decisions during software development. We conducted an exploratory study to investigate the occurrence of human values violations in Stack Overflow posts. As comments under posts are often used to point out the possible issues and weaknesses of the posts, we analyzed 2,000 Stack Overflow comments and their corresponding posts (1,980 unique questions or answers) to identify the types of human values violations and the reactions of Stack Overflow users to such violations. Our study finds that 315 out of 2,000 comments contain concerns indicating their associated posts (313 unique posts) violate human values. Leveraging Schwartz’s theory of basic human values as the most widely used values model, we show that hedonism and benevolence are the most violated value categories. We also find the reaction of Stack Overflow commenters to perceived human values violations is very quick, yet the majority of posts (76.35%) accused of human values violation do not get downvoted at all. Finally, we find that the original posters rarely react to the concerns of potential human values violations by editing their posts. At the same time, they usually are receptive when responding to these comments in follow-up comments of their own.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "QASfXDoo7Jl", "year": null, "venue": "eBISS 2012", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=QASfXDoo7Jl", "arxiv_id": null, "doi": null }
{ "title": "Machine Learning Strategies for Time Series Forecasting", "authors": [ "Gianluca Bontempi", "Souhaib Ben Taieb", "Yann-Aël Le Borgne" ], "abstract": "The increasing availability of large amounts of historical data and the need of performing accurate forecasting of future behavior in several scientific and applied domains demands the definition of robust and efficient techniques able to infer from observations the stochastic dependency between past and future. The forecasting domain has been influenced, from the 1960s on, by linear statistical methods such as ARIMA models. More recently, machine learning models have drawn attention and have established themselves as serious contenders to classical statistical models in the forecasting community. This chapter presents an overview of machine learning techniques in time series forecasting by focusing on three aspects: the formalization of one-step forecasting problems as supervised learning tasks, the discussion of local learning techniques as an effective tool for dealing with temporal data and the role of the forecasting strategy when we move from one-step to multiple-step forecasting.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "F-iLp5bDOLU", "year": null, "venue": "eBISS 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=F-iLp5bDOLU", "arxiv_id": null, "doi": null }
{ "title": "Context-Aware Business Intelligence", "authors": [ "Rafael Berlanga Llavori", "Victoria Nebot" ], "abstract": "Modern business intelligence (BI) is currently shifting the focus from the corporate internal data to external fresh data, which can provide relevant contextual information for decision-making processes. Nowadays, most external data sources are available in the Web presented under different media such as blogs, news feeds, social networks, linked open data, data services, and so on. Selecting and transforming these data into actionable insights that can be integrated with corporate data warehouses are challenging issues that have concerned the BI community during the last decade. Big size, high dynamicity, high heterogeneity, text richness and low quality are some of the properties of these data that make their integration much harder than internal (mostly relational) data sources. In this lecture, we review the major opportunities, challenges, and enabling technologies to accomplish the integration of external and internal data. We also introduce some interesting use case to show how context-aware data can be integrated into corporate decision-making.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "bCDQFHex0EY", "year": null, "venue": "eBISS 2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=bCDQFHex0EY", "arxiv_id": null, "doi": null }
{ "title": "Three Big Data Tools for a Data Scientist's Toolbox", "authors": [ "Toon Calders" ], "abstract": "Sometimes data is generated unboundedly and at such a fast pace that it is no longer possible to store the complete data in a database. The development of techniques for handling and processing such streams of data is very challenging as the streaming context imposes severe constraints on the computation: we are often not able to store the whole data stream and making multiple passes over the data is no longer possible. As the stream is never finished we need to be able to continuously provide, upon request, up-to-date answers to analysis queries. Even problems that are highly trivial in an off-line context, such as: “How many different items are there in my database?” become very hard in a streaming context. Nevertheless, in the past decades several clever algorithms were developed to deal with streaming data. This paper covers several of these indispensable tools that should be present in every big data scientists’ toolbox, including approximate frequency counting of frequent items, cardinality estimation of very large sets, and fast nearest neighbor search in huge data collections.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "mTJxI0G-9tS", "year": null, "venue": "eBISS 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=mTJxI0G-9tS", "arxiv_id": null, "doi": null }
{ "title": "Introduction to Pattern Mining", "authors": [ "Toon Calders" ], "abstract": "We present an overview of data mining techniques for extracting knowledge from large databases with a special emphasis on the unsupervised technique pattern mining. Pattern mining is often defined as the automatic search for interesting patterns and regularities in large databases. In practise this definition most often comes down to listing all patterns that exceed a user-defined threshold for a fixed interestingness measure. The simplest such problem is that of listing all frequent itemsets: given a database of sets, called transactions, list all sets of items that are subset of at least a given number of the transactions. We revisit the two main strategies for mining all frequent itemsets: the breadth-first Apriori algorithm and the depth-first FPGrowth, after which we show what are the main issues when extending to more complex patterns such as listing all frequent subsequences or subgraphs. In the second part of the paper we then look into the pattern explosion problem. Due to redundancy among patterns, most often the list of all patterns satisfying the frequency thresholds is so large that post-processing is required to extract useful information from them. We give an overview of some recent techniques to reduce the redundancy in pattern collections using statistical methods to model the expectation of a user given background knowledge on the one hand, and the minimal description length principle on the other.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "luGBhGcld1p", "year": null, "venue": "EAIT 2017", "pdf_link": "https://link.springer.com/content/pdf/10.1007/s10639-016-9539-0.pdf", "forum_link": "https://openreview.net/forum?id=luGBhGcld1p", "arxiv_id": null, "doi": null }
{ "title": "A peer assessment approach to project based blended learning course in a Vietnamese higher education", "authors": [ "Viet Anh Nguyen" ], "abstract": "This article presents a model using peer assessment to evaluate students taking part in blended - learning courses (BL). In these courses, teaching activities are carried out in the form of traditional face-to-face (F2F) and learning activities are performed online via the learning management system Moodle. In the model, the topics of courses are built as a set of projects and case studies for the attending students divided into groups. The result of the implementation of projects is evaluated and ranked by all course participants and is one of the course evaluation criteria for lecturers. To assess learners more precisely, we propose a multi-phase assessment model in evaluating all groups and the group members. The result of each student in the group based on himself evaluation, evaluations of the team members, the tearcher and all students in the course. There are 107 students, who participated in the course entitled “web application development”, are divided into 20 groups conducting the course in the field of information technology is deployed in the form of blended learning through peer assessment. The results of student’s feedback suggested that the usage of various peer assessment created positive learning effectiveness and more interesting learning attitude for students. The survey was conducted with the students through the questionnaire, each question with scale 5-point Likert scale that ranged from 1 (very unsatisfied) to 5 (very statisfied) to investigate the factors: Collaboration, Assessment, Technology showed that students were satisfied with our approach.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "HJMRvsAcK7", "year": null, "venue": null, "pdf_link": "/pdf/6a4ff08a9813a460c46f2ceae551e213c432c95b.pdf", "forum_link": "https://openreview.net/forum?id=HJMRvsAcK7", "arxiv_id": null, "doi": null }
{ "title": "Dynamic Pricing on E-commerce Platform with Deep Reinforcement Learning", "authors": [ "Jiaxi Liu", "Yidong Zhang", "Xiaoqing Wang", "Yuming Deng", "Xingyu Wu", "Miaolan Xie" ], "abstract": "In this paper we develop an approach based on deep reinforcement learning (DRL) to address dynamic pricing problem on E-commerce platform. We models real-world E-commerce dynamic pricing problem as Markov Decision Process. Environment state are defined with four groups of different business data. We make several main improvements on the state-of-the-art DRL-based dynamic pricing approaches: 1. We first extend the application of dynamic pricing to a continuous pricing action space. 2. We solve the unknown demand function problem by designing different reward functions. 3. The cold-start problem is addressed by introducing pre-training and evaluation using the historical sales data. Field experiments are designed and conducted on real-world E-commerce platform, pricing thousands of SKUs of products lasting for months. The experiment results shows that, on E-commerce platform, the difference of the revenue conversion rates (DRCR) is a more suitable reward function than the revenue only, which is different from the conclusion from previous researches. Meanwhile, the proposed continuous action model performs better than the discrete one.", "keywords": [ "reinforcement learning", "dynamic pricing", "e-commerce", "revenue management", "field experiment" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "7LRBatZAo5", "year": null, "venue": "EC2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=7LRBatZAo5", "arxiv_id": null, "doi": null }
{ "title": "Simple versus Optimal Contracts.", "authors": [ "Paul Dütting", "Tim Roughgarden", "Inbal Talgam-Cohen" ], "abstract": "We consider the classic principal-agent model of contract theory, in which a principal designs an outcome-dependent compensation scheme to incentivize an agent to take a costly and unobservable action. When all of the model parameters---including the full distribution over principal rewards resulting from each agent action---are known to the designer, an optimal contract can in principle be computed by linear programming. In addition to their demanding informational requirements, however, such optimal contracts are often complex and unintuitive, and do not resemble contracts used in practice. This paper examines contract theory through the theoretical computer science lens, with the goal of developing novel theory to explain and justify the prevalence of relatively simple contracts, such as linear (pure commission) contracts. First, we consider the case where the principal knows only the first moment of each action's reward distribution, and we prove that linear contracts are guaranteed to be worst-case optimal, ranging over all reward distributions consistent with the given moments. Second, we study linear contracts from a worst-case approximation perspective, and prove several tight parameterized approximation bounds.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "G3gp8s6FObYR", "year": null, "venue": "EC2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=G3gp8s6FObYR", "arxiv_id": null, "doi": null }
{ "title": "Posted Pricing and Prophet Inequalities with Inaccurate Priors.", "authors": [ "Paul Dütting", "Thomas Kesselheim" ], "abstract": "In posted pricing, one defines prices for items (or other outcomes), buyers arrive in some order and take their most preferred bundle among the remaining items. Over the last years, our understanding of such mechanisms has improved considerably. The standard assumption is that the mechanism has exact knowledge of probability distribution the buyers' valuations are drawn from. The prices are then set based on this knowledge. We examine to what extent existing results and techniques are robust to inaccurate prior beliefs. That is, the prices are chosen with respect to similar but different probability distributions. We focus on the question of welfare maximization. We consider all standard distance measures on probability distributions, and derive tight bounds on the welfare guarantees that can be derived for all standard techniques in the various metrics.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "nZDnJsaSn0L", "year": null, "venue": "NeHuAI@ECAI 2020", "pdf_link": "http://ceur-ws.org/Vol-2659/beaudoin.pdf", "forum_link": "https://openreview.net/forum?id=nZDnJsaSn0L", "arxiv_id": null, "doi": null }
{ "title": "Identifying the \"right\" level of explanation in a given situation", "authors": [ "Valérie Beaudouin", "Isabelle Bloch", "David Bounie", "Stéphan Clémençon", "Florence d'Alché-Buc", "James Eagan", "Winston Maxwell", "Pavlo Mozharovskyi", "Jayneel Parekh" ], "abstract": null, "keywords": [], "raw_extracted_content": "IDENTIFYING THE “RIGHT” LEVEL OF\nEXPLANATION IN A GIVEN SITUATION\nVal´erie Beaudouin1and Isabelle Bloch2and David Bounie1and St´ephan Cl ´emenc ¸on2and\nFlorence d’Alch ´e-Buc2and James Eagan2and Winston Maxwell1and\nPavlo Mozharovskyi2and Jayneel Parekh2 1\nAbstract. We present a framework for defining the “right” level of\nexplainability based on technical, legal and economic considerations.\nOur approach involves three logical steps: First , define the main con-\ntextual factors, such as who is the audience of the explanation, the\noperational context, the level of harm that the system could cause,\nand the legal/regulatory framework. This step will help characterize\nthe operational and legal needs for explanation, and the correspond-\ning social benefits. Second , examine the technical tools available,\nincluding post-hoc approaches (input perturbation, saliency maps...)\nand hybrid AI approaches. Third , as function of the first two steps,\nchoose the right levels of global and local explanation outputs, taking\ninto the account the costs involved. We identify seven kinds of costs\nand emphasize that explanations are socially useful only when total\nsocial benefits exceed costs.\n1 INTRODUCTION\nThis paper summarizes the conclusions of a longer paper [1] on\ncontext-specific explanations using a multidisciplinary approach. Ex-\nplainability is both an operational and ethical requirement. The op-\nerational needs for explainability are driven by the need to increase\nrobustness, particularly for safety-critical applications, as well as en-\nhance acceptance by system users. The ethical needs for explainabil-\nity address harms to fundamental rights and other societal interests\nwhich may be insufficiently addressed by the purely operational re-\nquirements. Existing works on explainable AI focus on the computer\nscience angle [18], or on the legal and policy angle [20]. The origi-\nnality of this paper is to integrate technical, legal and economic ap-\nproaches into a single methodology for reaching the optimal level of\nexplainability. The technical dimension helps us understand what ex-\nplanations are possible and what the trade-offs are between explain-\nability and algorithmic performance. However explanations are nec-\nessarily context-dependent, and context depends on the regulatory\nenvironment and a cost-benefit analysis, which we discuss below.\nOur approach involves three logical steps: First , define the main\ncontextual factors, such as who is the audience of the explanation,\nthe operational context, the level of harm that the system could cause,\nand the legal/regulatory framework. This step will help characterize\nthe operational and legal needs for explanation, and the correspond-\ning social benefits. Second , examine the technical tools available,\n1Copyright c\r2020 for this paper by its authors. Use permitted under\nCreative Commons License Attribution 4.0 International (CC BY 4.0).\n1. I3, T ´el´ecom Paris, CNRS, Institut Polytechnique de Paris, France –\n2. LTCI, T ´el´ecom Paris, Institut Polytechnique de Paris, France – email:\[email protected] post-hoc approaches (input perturbation, saliency maps...)\nand hybrid AI approaches. 
Third , as function of the first two steps,\nchoose the right levels of global and local explanation outputs, taking\ninto the account the costs involved.\nThe use of hybrid solutions, combining machine learning and sym-\nbolic AI, is a promising field of research for safety-critical applica-\ntions, and applications such as medicine where important bodies of\ndomain knowledge must be associated with algorithmic decisions.\nAs technical solutions to explainability converge toward hybrid AI\napproaches, we can expect that the trade-off between explainability\nand performance will become less acute. Explainability will become\npart of performance. Also, as explainability becomes a requirement\nfor safety certification, we can expect an alignment between opera-\ntional/safety needs for explainability and ethical/human rights needs\nfor explainability. Some of the solutions for operational explainabil-\nity may serve both purposes.\n2 DEFINITIONS\nAlthough several different definitions exist in the literature [1], we\nhave treated explainability and interpretability as synonyms [16], fo-\ncusing instead on the key difference between “global” and “local”\nexplainability/interpretability. Global explainability means the abil-\nity to explain the functioning of the algorithm in its entirety, whereas\nlocal explainability means the ability to explain a particular algorith-\nmic decision [7]. Local explainability is also known as “post hoc”\nexplainability.\nTransparency is a broader concept than explainability [6], because\ntransparency includes the idea of providing access to raw informa-\ntion whether or not the information is understandable. By contrast,\nexplainability implies a transformation of raw information in order\nto make it understandable by humans. Thus explainability is a value-\nadded component of transparency. Transparency and explainability\ndo not exist for their own sake. Instead, they are enablers of other\nfunctions such as traceability and auditability, which are critical in-\nputs to accountability. In a sense, accountability is the nirvana of al-\ngorithmic governance [15] into which other concepts, including ex-\nplainability, feed.\n3 THREE FACTORS DETERMINING THE\n“RIGHT” LEVEL OF EXPLANATION\nOur approach identifies three considerations that will help lead to\nthe right level of explainability: the contextual factors (an input),\nthe available technical solutions (an input), and the explainability\nchoices regarding the form and detail of explanations (the outputs).\n\n3.1 Contextual factors\nWe have identified four kinds of contextual factors that will help\nidentify the various reasons why we need explanations and choose\nthe most appropriate form of explanation (output) as a function of\nthe technical possibilities and costs. The four contextual factors are:\n\u000fAudience factors: Who is receiving the explanation? What is their\nlevel of expertise? What are their time constraints? These will\nprofoundly impact the level of detail and timing of the explana-\ntion [5, 7].\n\u000fImpact factors: What harms could the algorithm cause and how\nmight explanations help? These will determine the level of social\nbenefits associated with the explanation. Generally speaking, the\nhigher the impact of the algorithm, the higher the benefits flowing\nfrom explanation [8].\n\u000fRegulatory factors: What is the regulatory environment for the ap-\nplication? What fundamental rights are affected? 
These factors are\nexamined in Section 5 and will help characterize the social bene-\nfits associated with an explanation in a given context.\n\u000fOperational factors: To what extent is explanation an operational\nimperative? For safety certification? For user trust? These factors\nmay help identify solutions that serve both operational and ethi-\ncal/legal purposes.\n3.2 Technical solutions\nAnother input factor relates to the technical solutions available\nfor explanations. Post-hoc approaches such as LIME [18], Kernal-\nSHAP [14] and saliency maps [21] generally strive to approximate\nthe functioning of a black-box model by using a separate explanation\nmodel. Hybrid approaches tend to incorporate the need for explana-\ntion into the model itself. These approaches include:\n\u000fModifying objective or predictor function;\n\u000fProducing fuzzy rules, close to natural language;\n\u000fOutput approaches [22];\n\u000fInput approaches, which pre-process the inputs to the machine\nlearning model, making the inputs more meaningful and/or bet-\nter structured [1];\n\u000fGenetic fuzzy logic.\nThe range of potential hybrid approaches, i.e. approaches that com-\nbine machine learning and symbolic or logic-based approaches, is\nalmost unlimited. The examples above represent only a small selec-\ntion. Most of the approaches, whether focused on inputs, outputs, or\nconstraints within the model, can contribute to explainability, albeit\nin different ways. Explainability by design mostly aims at incorpo-\nrating explainability in the predictor model.\n3.3 Explanation output choices\nThe output of explanation will be what is actually shown to the rel-\nevant explanation audience, whether through global explanation of\nthe algorithm’s operation, or through local explanation of a particu-\nlar decision.\nThe output choices for global explanations will include the fol-\nlowing:\n\u000fAdoption of a “user’s manual” approach to present the functioning\nof the algorithm as a whole [10];\n\u000fThe level of detail to include in the user’s manual;\u000fWhether to provide access to source code, taking into account\ntrade secret protection and the sometimes limited utility of source\ncode to the relevant explanation audience [10, 20];\n\u000fInformation on training data, including potentially providing a\ncopy of the training data [10, 13, 17];\n\u000fInformation on the learning algorithm, including its objective\nfunction;\n\u000fInformation on known biases and other inherent weaknesses of the\nalgorithm; identifying use restrictions and warnings.\nThe output choices for local explanations will include the follow-\ning:\n\u000fCounterfactual dashboards, with “what if” experimentation avail-\nable for end-users [20, 24];\n\u000fSaliency maps to show the main factors contributing to decision;\n\u000fDefining the level of detail, including how many factors and rele-\nvant weights to present to end-users;\n\u000fLayered explanation tools, permitting a user to access increasing\nlevels of complexity;\n\u000fAccess to individual decision logs [11, 26];\n\u000fWhat information should be stored in logs, and for how long?\n4 EXPLAINABILITY AS AN OPERATIONAL\nREQUIREMENT\nMuch of the work on explainability in the 1990s, as well as the\nnew industrial interest in explainability today, focus on explanations\nneeded to satisfy users’ operational requirements. 
For example, the\ncustomer may require explanations as part of the safety validation\nand certification process for an AI system, or may ask that the sys-\ntem provide additional information to help the end user (for example,\na radiologist) put the system’s decision into a clinical context.\nThese operational requirements for explainability may be required\nto obtain certifications for safety-critical applications, since the sys-\ntem could not go to market without those certifications. Customers\nmay also insist on explanations in order to make the system more\nuser-friendly and trusted by users. Knowing which factors cause cer-\ntain outcomes increases the system’s utility because the decisions\nare accompanied by actionable insights, which can be much more\nvaluable than simply having highly-accurate but unexplained pre-\ndictions [25]. Understanding causality can also enhance quality by\nmaking models more robust to shifting input domains. Customers\nincreasingly consider explainability as a quality feature for the AI\nsystem. These operational requirements are distinct from regulatory\ndemands for explainability, which we examine in Section 5, but may\nnevertheless lead to a convergence in the tools used to meet the vari-\nous requirements.\nExplainability has an important role in algorithmic quality con-\ntrol, both before the system goes to market and afterwards, because\nit helps bring to light weaknesses in the algorithm such as bias that\nwould otherwise go unnoticed [9]. Explainability contributes to “to-\ntal product lifecycle” [23] or “safety lifecycle” [12] approaches to\nalgorithmic quality and safety.\nThe quality of machine learning models is often judged by the\naverage accuracy rate when analyzing test data. This simple mea-\nsure of quality fails to reflect weaknesses affecting the algorithm’s\nquality, particularly bias and failure to generalize. Explainability so-\nlutions presented can assist in identifying areas of input data where\nthe performance of the algorithm is poor, and identify defects in the\nlearning data that lead to bad predictions. Traditional approaches to\nsoftware verification and validation (V&V) are ill-adapted to neu-\nral networks [3, 17, 23]. The challenges relate to neural networks’\nnon-determinism, which makes it hard to demonstrate the absence\nof unintended functionality, and to the adaptive nature of machine-\nlearning algorithms [3, 23]. Specifying a set of requirements that\ncomprehensively describe the behavior of a neural network is con-\nsidered the most difficult challenge with regard to traditional V&V\nand certification approaches [2, 3]. The absence of complete require-\nments poses a problem because one of the objectives of V&V is to\ncompare the behavior of the software to a document that describes\nprecisely and comprehensively the system’s intended behavior [17].\nFor neural networks, there may remain a degree of uncertainty about\njust what will be the output for a given input.\n5 EXPLAINABILITY AS A LEGAL\nREQUIREMENT\nThe legal approaches to explanation are different for government de-\ncisions and for private sector decisions. The obligation for govern-\nments to give explanations has constitutional underpinnings, for ex-\nample the right to due process under the United States Constitution,\nand the right to challenge administrative decisions under European\nhuman rights instruments. 
These rights require that individuals and\ncourts be able to understand the reasons for algorithmic decisions,\nreplicate the decisions to test for errors, and evaluate the proportion-\nality of systems in light of other affected human rights such as the\nright to privacy. In the United States, the Houston Teachers case2\nillustrates how explainability is linked to the constitutional guaran-\ntee of due process. In Europe, the Hague District Court decision on\nthe SyLI algorithm3shows how explainability is closely linked to\nthe European constitutional principle of proportionality. France has\nenacted a law on government-operated algorithms4, which includes\nparticularly stringent explainability requirements: disclosure of the\ndegree and manner in which the algorithmic processing contributed\nto the decision; the data used for the processing and their source; the\nparameters used and their weights in the individual processing; and\nthe operations effected by the processing.\nFor private entities, a duty of explanation generally arises when\nthe entity becomes subject to a heightened duty of fairness or loyalty,\nwhich can happen when the entity occupies a dominant position un-\nder antitrust law, or when it occupies functions that create a situation\nof trust or dependency vis`a vis users. A number of specific laws im-\npose algorithmic explanations in the private sector. One of the most\nrecent is Europe’s Platform to Business Regulation (EU) 2018/1150,\nwhich imposes a duty of explanation on online intermediaries and\nsearch engines with regard to ranking algorithms. The language in\nthe regulation shows the difficult balance between competing princi-\nples: providing complete information, protecting trade secrets, avoid-\ning giving information that would permit bad faith manipulation of\nranking algorithms by third parties, and making explanations eas-\nily understandable and useful for users. Among other things, online\nintermediaries and search engines must provide a “reasoned descrip-\ntion” of the “main parameters” affecting ranking on the platform,\nincluding the “general criteria, processes, specific signals incorpo-\nrated into algorithms or other adjustment or demotion mechanisms\n2Local 2415 v. Houston Independent School District , 251 F. Supp. 3d 1168\n(S.D. Tex. 2017).\n3NJCM v. the Netherlands , District Court of The Hague, Case n. C-09-\n550982-HA ZA 18-388, February 5, 2020.\n4French Code of Relations between the Public and the Administration, arti-\ncles L. 311-3-1 et seq.used in connection with the ranking.”5These requirements are more\ndetailed than those in Europe’s General Data Protection Regulation\nEU 2016/679 (GDPR), which requires only “meaningful informa-\ntion about the logic involved.”6In the United States, banks already\nhave an obligation to provide the principal reasons for any denial of a\nloan.7A proposed bill in the United States called the Algorithmic Ac-\ncountability Act would impose explainability obligations on certain\nhigh-impact algorithms, including an obligation to provide “detailed\ndescription of the automated decision system, its design, its training,\ndata, and its purpose.”8\n6 THE BENEFITS AND COSTS OF\nEXPLANATIONS\nLaws and regulations generally impose explanations when doing so\nis socially beneficial, that is, when the collective benefits associated\nwith providing explanations exceed the costs. 
When considering al-\ngorithmic explainability, where the law has not yet determined ex-\nactly what form of explainability is required and in which context,\nthe costs and benefits of explanations will help fill the gaps and define\nthe right level of explanation. The cost-benefit analysis will help de-\ntermine when and how explanations should be provided, permitting\nvarious trade-offs to be highlighted and managed. For explanations to\nbe socially useful, benefits should always exceed the costs. The ben-\nefits of explanations are closely linked to the level of impact of the\nalgorithm on individual and collective rights [5, 8]. For algorithms\nwith low impact, such as a music recommendation algorithms, the\nbenefits of explanation will be low. For a high-impact algorithm such\nas the image recognition algorithm of an autonomous vehicle, the\nbenefits of explanation, for example in finding the cause of a crash,\nwill be high.\nExplanations generate many kinds of costs, some of which are not\nobvious. We have identified seven categories of costs:\n\u000fDesign and integration costs, which may be high because explana-\ntion requirements will vary among different applications, contexts\nand geographies, meaning that a one-size-fits-all explanation so-\nlution will rarely be sufficient [9];\n\u000fSacrificing prediction accuracy for the sake of explainability\ncan result in lower performance, thereby generating opportunity\ncosts [5];\n\u000fThe creation and storage of decision logs create operational costs\nbut also tensions with data privacy principles which generally re-\nquire destruction of logs as soon as possible [11, 26];\n\u000fForced disclosure of source code or other algorithmic details may\ninterfere with constitutionally-protected trade secrets [4];\n\u000fDetailed explanations on the functioning of an algorithm can fa-\ncilitate gaming of the system and result in decreased security;\n\u000fExplanations create implicit rules and precedents, which the de-\ncision maker will have to take into account in the future, thereby\nlimiting her decisional flexibility in the future [19];\n\u000fMandating explainability can increase time to market, thereby\nslowing innovation [9].\nFor high-impact algorithmic decisions, these costs will often be\noutweighed by the benefits of explanations. But the costs should nev-\nertheless be considered in each case to ensure that the form and level\n5Regulation 2018/1150, recital 24.\n6Regulation 2016/679, article 13(2)(f).\n712 CFR Part 1002.9.\n8Proposed Algorithmic Accountability Act, H.R. 2231, introduced April 10,\n2019.\nof detail of mandated explanations is adapted to the situation. The net\nsocial benefit (total benefits less total costs) should remain positive.\n7 CONCLUSION: CONTEXT-SPECIFIC AI\nEXPLANATIONS BY DESIGN\nRegulation of AI explainability remains largely unexplored territory,\nthe most ambitious efforts to date being the French law on the ex-\nplainability of government algorithms and the EU regulation on Plat-\nform to Business relations. However, even in those instances, the\nlaw leaves many aspects of explainability open to interpretation. The\nform of explanation and the level of detail will be driven by the four\ncategories of contextual factors described in this paper: audience fac-\ntors, impact factors, regulatory factors, and operational factors. The\nlevel of detail of explanations – global or local – would follow a\nsliding scale depending on the context, and the costs and benefits at\nstake. 
One of the biggest costs of local explanations will relate to\nstorage of individual decision logs. The kind of information stored in\nthe logs, and the duration of storage, will be key questions to address\nwhen determining the right level of explainability. Hybrid solutions\nattempt to create explainability by design, mostly by incorporating\nexplainability in the predictor model. While generally addressing op-\nerational needs, these hybrid approaches may also serve ethical and\nlegal explainability needs. Our three-step method involving contex-\ntual factors, technical solutions, and explainability outputs will help\nlead to the “right” level of explanation in a given situation.\nFuture work aims at instantiating the proposed three steps to re-\nalistic and concrete problems, to give insight in the feasibility and\nvalue of the method to provide the right level of explanation.\nREFERENCES\n[1] Val ´erie Beaudoin, Isabelle Bloch, David Bounie, St ´ephan Cl ´emencon,\nFlorence d’Ach ´e Buc, James Eagan, Maxwell Winston, Pavlo\nMozharovskyi, and Jayneel Parekh, ‘Flexible and context-specific AI\nexplainability: a multidisciplinary approach’, Technical report, ArXiv,\n(2020).\n[2] Siddhartha Bhattacharyya, Darren Cofer, D Musliner, Joseph Mueller,\nand Eric Engstrom, ‘Certification considerations for adaptive systems’,\nin2015 IEEE International Conference on Unmanned Aircraft Systems\n(ICUAS) , pp. 270–279, (2015).\n[3] Markus Borg, Cristofer Englund, Krzysztof Wnuk, Boris Duran,\nChristoffer Levandowski, Shenjian Gao, Yanwen Tan, Henrik Kaijser,\nHenrik L ¨onn, and Jonas T ¨ornqvist, ‘Safely entering the deep: A review\nof verification and validation for machine learning and a challenge elic-\nitation in the automotive industry’, Journal of Automotive Software En-\ngineering ,1(1), 1–19, (2019).\n[4] Jenna Burrell, ‘How the machine ‘thinks’: Understanding opac-\nity in machine learning algorithms’, Big Data & Society ,3(1),\n2053951715622512, (2016).\n[5] Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam\nGershman, David O’Brien, Stuart Schieber, James Waldo, David Wein-\nberger, and Alexandra Wood, ‘Accountability of ai under the law: The\nrole of explanation’, arXiv preprint arXiv:1711.01134 , (2017).\n[6] European Commission, ‘Communication from the Commission to the\nEuropean Parliament, the Council, the European Economic and Social\nCommittee and the Committee of the Regions - Building trust in hu-\nman centric artificial intelligence (com(2019)168)’, Technical report,\n(2019).\n[7] Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini,\nFosca Giannotti, and Dino Pedreschi, ‘A survey of methods for explain-\ning black box models’, ACM Computing Surveys (CSUR) ,51(5), 93,\n(2018).\n[8] AI HLEG, ‘High-level expert group on artificial intelligence’, Ethics\nGuidelines for Trustworthy AI , (2019).\n[9] ICO, ‘Project ExplAIn interim report’, Technical report, Information\nCommissioner’s Office, (2019).[10] IEEE, ‘Ethically aligned design: A vision for prioritizing human well-\nbeing with autonomous and intelligent systems’, IEEE Global Initiative\non Ethics of Autonomous and Intelligent Systems , (2019).\n[11] Joshua A Kroll, Solon Barocas, Edward W Felten, Joel R Reidenberg,\nDavid G Robinson, and Harlan Yu, ‘Accountable algorithms’, U. Pa. 
L.\nRev.,165, 633, (2016).\n[12] Zeshan Kurd and Tim Kelly, ‘Safety lifecycle for developing safety crit-\nical artificial neural networks’, in Computer Safety, Reliability, and Se-\ncurity , eds., Stuart Anderson, Massimo Felici, and Bev Littlewood, pp.\n77–91, Berlin, Heidelberg, (2003). Springer Berlin Heidelberg.\n[13] David Lehr and Paul Ohm, ‘Playing with the data: what legal scholars\nshould learn about machine learning’, UCDL Rev. ,51, 653, (2017).\n[14] Scott M Lundberg and Su-In Lee, ‘A unified approach to interpreting\nmodel predictions’, in Advances in Neural Information Processing Sys-\ntems, pp. 4765–4774, (2017).\n[15] OECD, Artificial Intelligence in Society , 2019.\n[16] OECD, Recommendation of the Council on Artificial Intelligence ,\n2019.\n[17] Gerald E Peterson, ‘Foundation for neural network verification and val-\nidation’, in Science of Artificial Neural Networks II , volume 1966, pp.\n196–207. International Society for Optics and Photonics, (1993).\n[18] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, ‘Why should\nI trust you?: Explaining the predictions of any classifier’, in 22nd ACM\nSIGKDD International Conference on Knowledge Discovery and Data\nMining , pp. 1135–1144, (2016).\n[19] Frederick Schauer, ‘Giving reasons’, Stanford Law Review , 633–659,\n(1995).\n[20] Andrew Selbst and Solon Barocas, ‘The intuitive appeal of explainable\nmachines’, SSRN Electronic Journal ,87, (01 2018).\n[21] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman, ‘Deep in-\nside convolutional networks: Visualising image classification models\nand saliency maps’, arXiv preprint arXiv:1312.6034 , (2013).\n[22] Philip S. Thomas, Bruno Castro da Silva, Andrew G. Barto, Stephen\nGiguere, Yuriy Brun, and Emma Brunskill, ‘Preventing undesirable be-\nhavior of intelligent machines’, Science ,366(6468), 999–1004, (2019).\n[23] US Food and Drug Administration, ‘Proposed regulatory framework for\nmodifications to artificial intelligence/machine learning (AI/ML)-based\nsoftware as a medical device’, Technical report, (2019).\n[24] Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Counterfactual\nexplanations without opening the black box: Automated decisions and\nthe gpdr’, Harv. JL & Tech. ,31, 841, (2017).\n[25] Max Welling, ‘Are ML and statistics complementary?’, in IMS-ISBA\nMeeting on ‘Data Science in the Next 50 Years , (2015).\n[26] Alan FT Winfield and Marina Jirotka, ‘The case for an ethical black\nbox’, in Annual Conference Towards Autonomous Robotic Systems , pp.\n262–273. Springer, (2017).", "main_paper_content": null }
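The extracted text above lists post-hoc, perturbation-based explanation tools such as LIME and KernelSHAP among the available technical solutions. The sketch below is not LIME itself, only a bare-bones illustration of the perturbation idea it names: sample around an input, query the black box, and fit a local linear surrogate whose coefficients serve as feature contributions. The black-box function and input are invented for the example.

```python
import numpy as np

def perturbation_explanation(black_box, x, n_samples=500, scale=0.1, seed=0):
    """Bare-bones local explanation: sample points around x, query the black
    box, and fit a linear surrogate by least squares. Illustrative only."""
    rng = np.random.default_rng(seed)
    X = x + scale * rng.standard_normal((n_samples, x.size))
    y = black_box(X)
    A = np.hstack([X, np.ones((n_samples, 1))])          # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]                                      # per-feature weights

# Toy black box: a nonlinear scoring function of three features.
black_box = lambda X: np.tanh(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]
x = np.array([0.2, 1.0, -0.5])
print("local feature weights:", perturbation_explanation(black_box, x).round(2))
```

Such a local surrogate addresses only "local explainability" in the paper's terms; whether it is the right output, and at what level of detail, is exactly what the contextual and cost-benefit factors above are meant to decide.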
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "PsNILwmCQ-w", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "https://ceur-ws.org/Vol-1419/paper0116.pdf", "forum_link": "https://openreview.net/forum?id=PsNILwmCQ-w", "arxiv_id": null, "doi": null }
{ "title": "How Metalinguistic Negation Differs from Descriptive Negation: ERP Evidence", "authors": [ "Chungmin Lee" ], "abstract": null, "keywords": [], "raw_extracted_content": "How Metalinguisti c Negation Differs f rom Descriptive Negat ion: ERP Evidence \n \nChungmin Lee ([email protected] ) \nDepartment of Linguistics , Gwanak -ro 1, Gwanak -gu \nSeoul , 151-742, Korea \n \nAbstract \nThis talk explores degree adverbial modifiers licensed \nexclusive ly by metalinguistic negation (MN), and compare s \nthem with those licensed by descriptive negation ( DN) such \nas NPIs. It show s how MN-licensing is more marked than \nDN-licensing in prosody and then attempts to show how \nanomalies arising from misplacing MN -licensed adverbs in \nDN-requiring short form negation sentences elicit the \napproximate N400 but not the P600 in ERPs. This strongly \nsuggests that such anomalies are meaning -related and tends to \nsupport the pragmatic ambiguity position by Horn than the \ncontex tualist or relevance -theoretic approach. \nKeywords: metalinguistic negation; descriptive negation; \nmarkedness; prosody; ERPs; N400; pragmatic ambiguity; \ncontextualist \n1. Markedness of MN Adverbials \nSo far researchers have worked more on negative polarity \narguments and modifiers , which are licensed by descriptive \nnegation (DN) . The NPIs here simply reinforce the \nfalsification of the propositional contents. They are \ntherefore emphatic in general (Potts 2010, Israel 2004). \nCrosslinguistically and diachronical ly, NPIs have typically \ndeveloped from minimizers with ‘even ’ (Lee 1993, Y. Lee \nand Horn 1994, Lee 1999, Lee 2010 a.o.). \n(1) a. amwu -to o-ci anh –ass-ta (Korean = K) anyone -even \ncome -not-PAST -DEC \n‘Not anyone came.’ =b. ∼∃x (x: person ’ (x)) [came (x)] \nb. dare-mo ko-nakat -ta (Japanese = J) \nc. shwei -ye mei-you lai (Chinese = C) \n(2) a. theibul -i tomwuci wumciki -ci anh-nun-ta (K)table –\nNOM at all move -CI not -PRES -DEC ‘The table does not \nmove at all.’ =b. ∼∃x (x: way’ (x)) [move ’ (t)(in x)] \nb. teeburu wa mattaku ugoka -nai (J) \nc. zhuo -zi gen -be budong (C) \nMN, on the other hand, is used to reject, object to or rectify \na previous utterance ‘on any grounds whatever’ ((Horn \n1985 ), (Ducrot 1972) ). In (3), what is negated is not the \nproposition ‘I am happy ’ in its reference or truth but the \ndegree of happiness expressed by the adjective ‘HAPPY ’ in \nthe scale of happiness. The speaker objects to the way how \nit is put by the interpocutor. Typically, the expres sion \n‘HAPPY ’ occurs or is assumed to occur in a previous \nutterance. Because the first clause in (3) does not falsify its \npositive proposition but object to the degree of happiness, \nthe following clarification clause can assert a higher degree \nof happiness – ‘ECSTATIC ’ without creating a \ncontradiction, even though ecstatic entails happy in the \nHorn or entailment scale. \n(3) I’m not HAPPY; I’m ECSTATIC . (No contradiction arises ) \nIn this metalinguistic use o f negation, a negative polarity \nitem such as at al l, which co -occurs with DN, as in (2), cannot intervene. See * I’m not HAPPY at all ; I’m \nECSTATIC . A metalinguistic use of negation cannot be \nreplaced by a prefixal negation, either, as in * I’m unhappy ; \nI’m ECSTATIC . Therefore , we cannot include Geurts ’ \n(1998) ‘propositional ’ denial as one of the MN -like denials. 
\nIrony also has some sense of refutation , based on the \ngeneral or mutual assumption, expectation or hope for ‘a \npicnic day’ as a mental representation or thought , as in (4) \n(‘echoic use’ (Sperbe r and Wilson 1986 ; Carston 1996) . It is \nnegative, although expressed affirmatively . \n(4) It’s a lovely /fine/great day for a picnic! \nMN is an echoic rebuttal of whatever aspect of an \nexpression in a previous utterance to assert a rectifying \nexpression. There fore, the speaker ’s implicit inner \nalternative Q in C ontrastive Focus can be assumed to \nprecede it, as in (3 ’) and its initial reply equivalent to MN \ncan be assumed to be (5a), with the pair of expressions \nconnected by SN but (sino Spanish and sonder n Germ an), \nand its bi -clausal manifestation with no but is (5b), whose \nintonation is the L*(+H) L - H% of incredulity , distinct from \nthe Contrastive Topic intonation L+H* L - H% (Lee 2006, \nConstant 2012) . \n(3’) Are you HAPPY or ECSTATIC ? \n(5) a. I’m not HAPPY but ECSTATIC . \n b. I’m not HAPPY ; I’m ECSTATIC . \nThis paper explores degree modifiers licensed by MN, \nand compare s them with those licensed by DN and show s \nhow MN-licensing is more marked than DN-licensing in \nprosody first. The MN -licensed degree modifier A LITTLE \nin (6) forms a rising high peak of 254Hz after another peak \nof not (MN) in Fig. 1 . This is in sharp contrast with those \nNPI-like minimizers licensed by DN in (7), one of which \nforms the a bit/a little !H downstep with 211.7Hz, preceded \nby a high H* not. Because of the distinct and marked MN \nintonation for (6) and other cases, the rectification or \nclarification clause may not follow; the conveyed meanings \nwhich may be called conventional implicatures, not \ncancellable, seem to be more assertive than ‘implicatures. ’ \nAs a result, the purport of (6) is affirmative whereas that of \n(7) is negative, although their written form is one and the \nsame , creating ambiguity in English. \n(6) She is not A LITTLE upset . (She is VERY upset .) \n(7) She is NOT a little upset . [even a little ] (She is not upset \nat all, is quite composed.) Sentences for our phonetic \nexperiments are modified from Bolinger (1972) . \n \nFig 1 a-little-MN: a double of rising accent peaks \n698\nIn Korean, the marked intonation of the MN -licensed \nadverbial POTHONG ‘commonly, ’ with a high pitch of \n375Hz on the adverb, is sharply contrasted with the \nintonation of the adverb of the same form with the scalar \nmarker –to ‘even ’ [pothong uro–to] attached to function as \nan NPI for DN (as in ‘--- not do well even commonly ’), \nwhich generates a comparatively low pitch of 295Hz on the \nadverb. The MN adverbial is prosodically marked. \nNow turn to the syntactic aspects of Korean negation to \nsee how MN is syntactically marked as well . The MN -\nlicensed stressed degree modifiers POTHONG and YEKAN , \nboth ‘commonly, ’ require external negation , as in (11a), \nlong form negation , as in (11b), or copula negation , as in \n(9c), but they cannot oc cur in a p ositive declarative S, as in \n(9d). In contrast, short form negation is typically for DN in \nKorean. Therefore, if the MN -licensed stressed degree \nmodifiers POTHONG or YEKAN occurs in short form \nnegation sentence, the result is anomalous , as in ( 8). \n(8) a. Mia-ka POTHONG yeppu -n kes-i ani-i -ya. \nM -Nom commonly pretty -PreN COMP -Nom not-Cop-Dec \n[extern -neg] \n‘Mia is not COMMONLY pretty .’ ~> Mia is exceedingl y \npretty. ’ \nb. 
Mia-ka POTHONG /Yekan yeppu -ci anh-a \nM -NOM commonly pretty -CI not -DEC (= a) [long-f \nneg]1 \n c. Mia-ka POTHONG (-i) ani -ya. [cop-neg]2 \n M -NOM common (NOM) not -DEC \n ‘Mia is not COMMON /ORDINARY. ’ ~> Mia is \nextraordinary. \n d. *Mia-ka POTHONG /Yekan yeppu -e. (with no negation) \n M-NOM commonly/relatively pretty -DEC \n(9) * Mia-ka POTHONG an yeppu -e. [short form neg ] \n(K) \nM -NOM commonly not pretty –DEC \nCf. Mia-ka cenhye an yeppu -e [NPI] \n at all \n ‘Mia is not pretty at all. ’ \nIn C, if bu ‘not’ co-occurs with an immediately following \nmain predicate to negate, it is interpreted as DN, not \nallowi ng a rectifying clause, as in (10 ). If it is, however, \nfollowed by the Focus marker shi (from ‘be’) first and then \nthe main predicate, it forms a bi -clausal MN construction \nwith shi in the rectifying clause, as in (11 ). An overt (or \ncovert) modal may replace shi for MN -licensing. The \n \n1 The syntactic form of external negation may favor M N both in Korean \nand English but external negation is not a sufficient condition for MN. \nAn NPI in the complement clause is not happily licensed. \n(a) ??It is not the case that anyone came. (ExtN) \n(b) ?? amu-to o-n key ani -ya (ExtN) (K) \n2 This may be regarded as a variant of external negation, as property \nnegation. negation of (11 ) can be assumed to be external (or cleft) S \nnegation in the Conrastive Focus construction. The MN \nconstruction is crucially connected to the SN ‘but’ \ncoordination in C as in ( 12), anira in Korean, naku in J, ma \nin Vietnamese, etc. (Lee 2010). \n(10) a. Ta bu g ao. #Ta feichang gao. (C) (cf. Wible et al 2000) \n3sg NEG be tall 3sg be extremely tall \nb. Ta bu rang wo qu. #Ta bi wo qu. \n3sg NEG let 1sg go 3sg force 1sg go \n(11) a. Ta bu shi gao. Ta shi feichang gao. \n3sg NEG FOC tall 3sg FOC extremely tall \nb. Ta bu shi rang wo qu. Ta shi bi wo qu. \n3sg NEG FOC let 1sg go 3sg FOC force 1sg go \n c. Ta bu hui rang wo qu. Ta hui bi wo qu. \n3sg NEG able let 1sg go 3sg ab le force1sg go \n(12) a. Wo bu shi xihuan ta, er-shi ai ta. \nI not like her but love her \n ‘I don ’t LIKE CF her but LOVE CF her.’ \n b. Ta bu shi gao, ershi pang. [content also matters] \n 3sg NEG FOC tall SN fat \n‘(S)he is no t tall but fat.’ \nLikewise in Chinese, YIBAN de ‘commonly ’ is an MN -\nlicensed degree adverb and freely occurs in an MN \nsentence, as in (13a), conveying a higher degree expression. \nBut it cannot occur in a positive sentence, as in (13b), nor in \na DN sentence, as in (14). Similarly in Japanese, the degree \nmodifier fuTSUU is typically licensed by MN to convey a \nhigher degree, as in (15). \n(13) a. Ta bu shi yibande piaoliang . (C) \n she MN commonly beautiful \n ‘She i s not COMMONLY beautiful .’ ~> (S )he is very beautiful . \n b. *Ta yiban de piyaoliang.3 \n(14) *Ta bu yiban de piyaoliang . (C) \n (s)he NEG commonly beautiful \n(15) a. fuTSUU -no kawaisa ja -nai [--- ja naku honto -no kawaisa -\nda] (J) \n common –of prettiness not MN much -of prettiness \n ‘(She) is not COMMONLY pretty.’ ~> She is very pretty. \n b. fuTSUU janai [fuTSUU ja naku sugoi ] \n common (Adj) not MN extraordinary \n ‘Not COMMON.’ (EXTRAORDINARY) \nCross linguistically in general, if ds is the echoic standard \ndegree of the predicate, its metalinguistically negated \nutterance generates its positive proposition with a higher \ndegree d > ds of the same predicate. 
The epistemic agent is \nthe speaker in a simple s entence, but it can be the subject in \nan embedded reported speech or complex attitude sentence. \nYEKAN in Korean and YIBAN de in Chinese are fixed as \nMN-licensed modifiers whereas POTHONG (uro) in Korean \nand fuTSUU in Japanese may have their unstressed uses i n \npositive utterances; pothong as an adverb is used in a \n \n3 Sojung Im (pc) brought this to my attention. The string bu yiban de in \n(14) was not found in the Peking University corpus and the anomal y of (14) \nwas confirmed by several native speakers of Chinese. \n699\ndifferent quantificational meaning ‘usually ’ and as a \npredicative noun pothong in K and fuTSUU in J they have \ntheir positive degree meaning of ‘common standard. ’4 \nEnglish has no counterpart of the MN -licensed echoic \nstandard degree modifier ‘common, ’ except the stressed \nMN-licensed below the middle degree modifier ‘A \nLTTLE ’/’A BIT, ’ previously discussed. \nWith those marked prosodic features and /or syntactic \nenvironments, MN -licensed degree modifiers c an take place \ncross -linguistically, as opposed to DN -licensed ones. We \nwill turn now to the next step: ERP studies. \n \n2. ERPs for MN Adverbials \nWe conducted ERP experiments with MN adverbials data \ntwice. In the two experiments, we tried to see what hap pens \nwhen MN -requiring adverbial s are placed in a short form \nnegation (typically exclusively used for DN) in Korean , not \nproperly in an external negation or a long form negation . \nNaturally we presented well -formed MN sentences with MN \nadverbials and ill -formed short form negation sentences \nwith MN adverbials in contrast. In Experiment 1, written \nsentences were presented visually, whereas in Experiment 2, \nspoken sentences were presented auditorily. \nERP E xperiment 1 Data Set A : Well -formed Extern al \nNegation with STRESSED MN adverbial in red color vs . \nill-formed Short Form Negation with STRESSED MN \nadverbial all in red. 10 well -formed (with 5 POTHONG \nsentences and 5 YEKAN sentences) , 10 ill -formed \nsentences (with 5 POTHONG sentences and 5 YEKAN \nsentences) , with 80 fillers , counterbalanced and presented to \neach. \n요즘 │ 아이들은 │ 보통 │큰 게 │ 아니야 \nthese days children commonly tall-Comp not -Cop-Dec \n‘It is not that these days children are COMMONLY tall. ’ \nFig 2 well-formed : MN-licens ed 보통 is in external \nnegation \n저 영화 │ 어제 │ 보통 │안 │ 졸렸어 \nthat movie yesterday commonly not boring \n‘It is not that that movie yesterday was commonly boring. ’ \nFig 3 ill-formed : MN-licens ed 보통 is in short form \nnegation \nProcedure, EEG Measu rement and Analysis \na. Subjects were presented with written sentences visually by E-\nPrime 2.0 our s timulus presentation software . \nb. Ag/AgC1 electrodes and Brainamp were used ;. VEOG and \nHEOG were employed with online filtering at 0.1Hz -70Hz, \nsampling rat e at 500Hz, and the impedance of electrodes under \n10 kΩ. \n \n4 See the degree expressions with a copula in a positive utterance, all \nunstressed: \na. Pothong -i-ya (K) b. FuTSUU –desu (J) Comm\non-COPUL A-DEC Common -COPUL A-DEC ‘That’s commo\nn (ordinary) (in degree/standard) .’ c. To measure individual subjects’ brainwave responses to each \nstimulus, the waves by each stimulus were divided by the time \nunits at which each stimulus was presented. In Experiment 1 \nwith Set A, the averages of the divided waveforms from all the \nelectrodes were measured to get respective significant P -values . 
\nBy targeting the average of all subjects’ ERP responses, we \nproduced the final, grand average curve of ERP responses with \nthe N400 , as shown in Fig 12. \n \nDiscussion of Experiment 1 on Written Visual Data \nWhat do the results of Experiment 1 say? The N400 ERP \nresults on Cz in Fig 12 , the g rand average of four subjects’ \nbrain -wave curves , reveal that some meaning -related \nanomaly occurred from dat a Set A of the contrast between \nthe well -formed external MN sentences with the MN -\nlicensed degree adverbials and the ill -formed short form \nnegation sentences with the same MN -licensed degree \nadverbials. In the Set A experiment, when a subject ’s eyes \nin the external negation condition reach the MN -licensed \ndegree adverb marked in red, (s)he must expect an adjective \nor adverb to be modified by the MN adverb and the \ncomplement clause ending, followed by external negation. \nBut in the short form negation conditi on, when the subject ’s \neyes reach the same MN -licensed degree adverb marked in \nred, (s)he must expect exactly the same external negation \n(or a long form negation) that can license the MN degree \nadverb but in fact (s)he encounters the short form negati on \nin the fourth column, followed by an adjective or adverb to \nbe modified. (S)he would then be in a conflict between the \nMN adverb and the DN. An MN adverb cannot be licensed \nor interpreted by DN, which implies that MN and DN are \ndistinctly used at least in pragmatic meaning. \nThe adverb in red must have been charitably interpreted \nas a stressed MN adverb. Similarly, even without red for the \nadverb in the case of the intended ill -formed unstressed \nadverb condition in the external negation sentence in Set 1 , \nbecause of the forceful MN bias of the external negation, \nparticipants seem to have interpreted the adverb in black \ncharitably as (stressed) MN -licensed degree adverb and that \nseems to be why no results appeared. \n \nExperiment 2: ERP Analysis of MN Adverbi als in \nSpoken Sentences \n \nMethod \nSubjects \n15 undergraduate subjects (4 females and 11 males) with \na mean age of 23.53 years (range: from 20 to 34, \nundergraduate Seoul National University students) \n700\nparticipated for a cash payment of W25, 000 (about \n$25/h our). All were standard (Seoul -Gyeonggi) Korean \nspeakers, right -handed, not weak -sighted, with no history of \nneurological disorders. These conditions were announced \nbeforehand in the internet recruitment and were met in the \nsubjects ’ written experiment protocol in the lab. . \nStimuli \nIn Experiment 2, recorded auditory sentences, unlike the \nwritten sentences in Experiment 1, were presented. The \nmatch (well -formed) condition with the stressed MN -\nlicensed degree adverb in external negation sentence vs. the \nmismatch (ill -formed) condition here with the same stressed \nMN-licensed degree adverb in short form negation sentence \nis the same as in Experiment 1 (Set A). The only difference \nlies in that the MN adverb was in red in written sentences of \nexternal neg ation and short form negation in Experiment 1 \nbut the same MN adverb was heard or auditory in recorded \nsentences of external negation and short form negation in \nExperiment 2. 
In the match (well-formed) condition, 30 external negation sentences (15 with pothong 'commonly' and 15 with yekan 'ordinarily') were prepared, and in the mismatch (ill-formed) condition, 30 short form negation sentences (15 with pothong 'commonly' and 15 with yekan 'ordinarily'), for 60 experimental sentences in total, as well as 80 filler sentences, totaling 140 sentences. The MN-licensed degree adverbs were all stressed in the spoken sentences. Each subject heard all these types, with each sentence randomly assigned to one type.
The well-formed condition sentences and the ill-formed condition sentences were constructed in the same fashion as for Experiment 1.

Procedure, EEG Measurement and Analysis
In order to keep the participants attentive during the whole session, they were told to press M if the sentence just heard was natural and to press Z if it was not natural, at the end of each sentence. From this test, we could distinguish a group of seven participants who gave the wrong (opposite) responses 11 to 30 times from the rest, who made fewer than six wrong responses. We eliminated these seven ill-behaved subjects from the analysis. Because a last-minute E-Prime programming error (placing a pair of anomalous sentences in a row) was found, one further subject was also eliminated, and the total left for analysis was seven (7) subjects.
Significant differences were detected at the five electrode sites near the center (particularly C4), with the N400 effect, in Experiment 2. This is slightly different from Experiment 1, where the locus was exactly Cz (center) of the scalp. In order to decrease the noise effect, the ERP signals were downsampled to 30 Hz, and the epochs exceeding ±200 µV (30-40 out of 115-117) were eliminated.
By employing the t-value of the t-test as the test statistic in a permutation test, we obtained the following:
(16) a. At the five electrode sites (C4, CP2, CP5, P4, P7), significant differences between the mismatch (ill-formed; S10 in E-Prime) condition and the match (well-formed; S20 in E-Prime) condition were obtained; 5,000 repetitions; α = 0.05. [IMG1]
b. ANOVA: the following were examined: (i) subjects (random) x experimental manipulation (repeated measures); (ii) electrodes (random) x experimental manipulation (repeated measures).
An F1 repeated measures ANOVA with hemispheres (2) x ROIs (electrodes) x manipulation is desirable but will be addressed in a later refinement with the total raw data.
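The permutation procedure just described (t-value as test statistic, 5,000 relabelings, α = 0.05) can be sketched as follows for a single electrode. The amplitude arrays are placeholders for per-subject mean amplitudes; an independent-samples statistic is used here for simplicity, whereas a paired/sign-flip variant would match the within-subjects design more closely.

```python
import numpy as np
from scipy import stats

def permutation_t_test(match: np.ndarray, mismatch: np.ndarray,
                       n_perm: int = 5000, seed: int = 0) -> float:
    """Two-sided permutation p-value for the condition difference, using the
    t statistic as the test statistic, as in the analysis described above."""
    rng = np.random.default_rng(seed)
    observed = stats.ttest_ind(match, mismatch).statistic
    pooled = np.concatenate([match, mismatch])
    n = len(match)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # relabel the two conditions
        t = stats.ttest_ind(pooled[:n], pooled[n:]).statistic
        if abs(t) >= abs(observed):
            count += 1
    return count / n_perm

# Placeholder per-subject mean amplitudes (µV) at one electrode (e.g., C4).
match = np.array([-1.2, -0.8, -1.5, -0.9, -1.1, -1.3, -0.7])
mismatch = np.array([-3.1, -2.4, -2.9, -3.5, -2.2, -2.8, -3.0])
print(permutation_t_test(match, mismatch))       # significant if below alpha = 0.05
```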
Discussion of Experiment 2
As indicated, the N400 effect was elicited from the five electrode sites near the center on both hemispheres, including C4, in Experiment 2 with the spoken sentences, in which MN-licensed degree adverbs placed in the matching external negation (MN) sentences were contrasted with those placed in the mismatching short form negation (DN) sentences. A difference from the results of Experiment 1 with the written sentences lies in that the N400 effect there was elicited from channel Cz (center). The difference may be due to visual vs. auditory presentation. The same perspicuous negativity with the N400 effect in Experiment 2, however, should be caused by the same meaning-related anomalies.
The N400 is 'qualitatively distinct' from the P600, which is a reflection of syntactic anomalies such as number and gender agreement, phrase structure, verb subcategorization, verb tense, constituent movement, case, and (to be added in this work) subject-verb honorification agreement. See Osterhout et al. (1999) for the distinction: they state that the ERP brain response to semantic/pragmatic anomalies (selection restriction violations, etc.) is dominated by a large increase in the N400 component, whereas the response to a disparate set of syntactic anomalies is dominated by a large-amplitude positive shift. See Kutas et al. (2011) for a survey of the ERP N400 and meaning.

Fig 19: The N400 elicited at C4.

3. General Discussion of ERPs for MN Adverbials
The markedness hierarchy of the three different types of S must be:
(17) MN S > DN S > Affirmative S (see footnote 5) (DN = descriptive negation)
MN reveals phonetic and/or syntactic prominence in Contrastive Focus (CF), in contrast to DN, in English/Korean. Because the stressed POTHONG/YEKAN in Korean cannot appear in a positive sentence, as in (11d), researchers so far could not distinguish them from NPIs in Korean linguistics (Cho et al. 2002; Whitman et al. 2004). But crucially they cannot co-occur in a negative sentence. A long form negation in Korean can license either an NPI or an MN adverb, but only separately. See (1a) with an NPI and (11b) with an MN adverb, both licensed by long form negation. The same negation cannot, however, license both an NPI and an MN adverb at the same time (see footnote 6). Observe (18).
(18) *amwu yeca-to POTHONG/YEKAN yeppu-ci anha
any woman-even commonly pretty-Conn not(LF)
'Not any woman is commonly pretty.' (Intended)
Regarding the distinct functions of MN and DN, unlike scholars such as Russell (1905) and Karttunen & Peters (1979), who advocate the semantic ambiguity position, Horn (1985, 1989) takes the pragmatic ambiguity position. Horn's position is based on the unavailability of the implicated upper bound of weak scalar predicates (e.g., ---we don't like coffee, we love it), which he argues is pragmatic: it is a denial of the assertability or felicity of an utterance or statement rather than a negation of the truth of a proposition. His pragmatic ambiguity must be between the two uses, MN and DN, within his still monoguist position of a single semantic negation. Levinson's (2000) criticism that even a semantically negated statement doesn't have any implicatures is not tenable. Some more echoic, nonveridical contexts may license MN uses, often rhetorically. I argue that the prosodically frozen MN uses of A LITTLE, POTHONG (K), and fuTSUU (J), and the lexicalized MN uses of YEKAN (K) and YIBAN de (C), have their pragmatic meaning associated with MN. On the other hand, the context-driven or relevance-theoretic approach of Sperber & Wilson (1986), Carston (1988, 1998), Noveck et al. (2007), Breheny et al. (2006) and Noh et al. (2013), also monoguists, argues that there is no pragmatic 'ambiguity' or separate MN use/meaning and that scalar implicature arises through the pragmatic enrichment of the scalar term involved. So, for them, the literal form a or b excluding a and b is due to contextual enrichment from inclusive ('literal') to exclusive, not to a default. But consider 'not a or b' by DN becoming 'not a and b' = 'neither a nor b.' We need MN to get a and b from 'a or b.' To settle the debate, we need empirical, experimental evidence.
In the case of English and other intonation-based MN languages, a prosodic distinction elicits the MN vs. DN ambiguity (with the frozen MN ∼ MN adverb intonation), as in (6) vs. (7). Here semantically weak degree adverbs like 'a little' were involved. In Korean and Japanese, a stress (prosody) distinction (less so in J) elicits the same ambiguity, but on the standard degree adverb such as 'commonly.' Furthermore, some lexicalized MN-licensed degree adverbs developed in K and C, as in yekan 'ordinarily' and yiban de 'commonly.' The MN-licensed adverbs placed in a short form negation (DN) sentence, in contrast to those in an external negation (MN) sentence, elicited the N400.
Footnote 5: Giora (2006) takes the symmetry position between (descriptive) negation and affirmation.
Footnote 6: A similar phenomenon in English has been noted: an NPI cannot appear in MN, as in (a) (Karttunen et al. 1979: 46-47).
(a) *Chris didn't manage to solve any of the problems---he managed to solve all of them. (Horn 1989, 374)
Unlike the contradictory pairs with explicit or implicit negation used in past experiments, which often did not elicit any immediate N400 effect and needed proper preceding linguistic contexts to set up due expectations (Staab et al. 2008), the distinction between MN and DN is not necessarily context-dependent, because of MN's marked prosodic and/or lexical features that require MN and the necessarily conveyed implicature or following clarification clause.
I give independent support to my claim that pragmatic meaning anomalies elicit the N400. Sakai's (2013) ERP studies on Japanese honorific processing show that addressing a boy honorifically as "Kato-sama" is mismatched with the context and elicits the N400, in contrast with calling him "Kato."
Noh et al. (2013) report, in a rare and valuable psycholinguistic eye-tracking experiment on MN, that the subjects' processing times at the clarification clauses did not differ between MN and DN, claiming that their results support the contextualist or relevance theory. As indicated, this theory posits no separate use or pragmatic 'meaning' and therefore no ambiguity; MN is also truth-functional for them. But the Korean examples this study employed are dubious; the first "MN" example the authors provide is their (7a), with the short form negation an 'not':
(7) a. Yuna-nun ton-ul an pel-ess-e; ssule moa-ss-e.
Yuna-TC money-AC not make-PST-DC; rake in-PST-DC
"Yuna didn't make money; she raked in money."
As we already explained, the short form negation an 'not' is typically used as DN in Korean. Then, what can we expect from the bi-clausal construction in (7a)? Sheer contradiction, and so it is. Native Korean speakers who are not biased will all agree. The English bi-clausal MN construction is prosodically marked and cannot allow for the concessive But/but before the clarification clause.
Therefore, if the combined use condition is met, MN can involve even truth-conditional entailment cases, and that is why Horn's definition contains the expression 'on any grounds whatever.' The following utterance:
(19) I'm not HAPPY; (*but) I'm MISERABLE
is an MN case for Horn even though miserable entails ∼happy, so that no contradiction arises. The first clause of (19) objects to the expression HAPPY and asserts the salient alternative clarification clause (see footnote 7). Compare it with (3), where not leads to a contradiction if read descriptively. This is not an MN for contextualists. Of course, there are quite a few researchers who do not adopt this claim and narrow down the range of MN cases. Although this is still debatable, taking such "DN" examples occurring in external negation, which typically licenses MN, is not convincing; for Horn, they are simply further cases of MN. This is particularly true of pairs of expressives or emotion-charged expressions such as wangtaypak 'hit the jackpot' vs. phwungpipaksan 'break into fragments,' occurring in MN-licensing constructions in Korean: either one of the two expressives may be metalinguistically negated. The participants might have skipped 'non-sensible' MNs quickly, with a fast effect (in their sensicality test, the mean sensicality of MNs was significantly lower than that of DNs), and might have read sensible MNs more slowly than DN ones, with a slow effect, resulting in 'no difference' between conditions. As the reviewer supposed, this is rather in support of the 'meaning' approach than of their contextualist position. MN-licensing is most optimal in external negation and far less optimal in long form negation. Long form negation tends to lead to DN by default, although it can license MN. The intended MN alternatives in contrast may become non-sensible more easily in long form negation than in external negation, and they are doomed to be non-sensible in short form negation.
Footnote 7: In German, the SN 'but' is employed for this situation: Ich bin nicht gluecklich, sondern ungluecklich.

4. Concluding Remarks

We made the distinction between two types of modifiers: those licensed exclusively by MN and those licensed by DN. The former are certain MN-licensed degree adverbs, which are prosodically, lexically and syntactically conditioned, and the latter are NPIs, which, unlike the former, reinforce negation. The distinction suggests that MN and DN have distinct functions and uses, even if we assume that there is one single logical negation, departing from Russell (1905) and Karttunen et al. (1979). Horn's (1985, 1989) pragmatic ambiguity position contrasts with the context-driven or relevance-theoretic approach of Sperber et al. (1986) and Carston (1988, 1998), who deny that there is pragmatic 'ambiguity' and claim that scalar implicature arises through the pragmatic enrichment of the scalar term involved. How can we settle the debate?
We are curious about possible empirical, experimental evidence that may shed light on the debate. A hypothesis can be: if the stressed MN-licensed degree adverb POTHONG/YEKAN co-occurs with short form negation (DN) in a sentence, the adverb will not be licensed by MN, which is absent, and as a result the sentence will be anomalous. But would the anomaly be meaning-based or structure-based?
With this in mind, we conducted two types of ERP experiments on MN, for the first time as far as we know. In Experiment 1 (a pilot), pairs of written sentences (with the stressed adverb in red) were presented and, by targeting the average of all four subjects' ERP responses, we produced the final, grand average curve of ERP responses, showing the N400 over Cz, the central site. In Experiment 2, fifteen subjects participated. In the well-formed condition, 30 external negation sentences, with pothong 'commonly' and yekan 'ordinarily,' and in the ill-formed condition, 30 short form negation sentences, with stressed pothong and yekan, as well as 80 fillers, were all presented as recorded sound. The N400 effect, peaking near 400 ms from onset, was elicited from the five electrode sites near the center, including C4, in this experiment with the spoken sentences. Also, a significant negativity around 700 ms was detected. This is an interesting difference from the results of Experiment 1, where a rather typical N400 effect was observed. However, nothing like the P600 was detected.
We need more data and analyses, but we tentatively claim that the N400 effect was elicited by the contrast between the two conditions and that, if this turns out to be valid, it shows that the anomaly is meaning-related, though pragmatic. This tends to support the pragmatic ambiguity position rather than the contextualist non-ambiguity approach. This is just the first step in the direction of researching brain responses to anomalies involving MN-licensed degree modifiers.

References (Selected)
Breheny, R., Katsos, N., Williams, J. (2006). Are scalar implicatures generated by default? Cognition 100(3), 434-463.
Burton-Roberts, Noël (1989). On Horn's dilemma: presupposition and negation. Journal of Linguistics 25, 95-125.
Carston, Robyn (1996). Metalinguistic Negation and Echoic Use. Journal of Pragmatics 25, 309-330.
Cho, S. and Lee, H. (2002). Syntactic and Pragmatic Properties of NPI Yekan in Korean. In N. Akatsuka et al. (eds.) Japanese/Korean Linguistics 10. CSLI.
Ducrot, O. (1972). Dire et ne pas dire. Hermann.
Horn, L. (1985). Metalinguistic Negation and Pragmatic Ambiguity. Language 61, 121-174.
Israel, Michael (1996). Polarity Sensitivity as Lexical Semantics. Linguistics & Philosophy 19, 619-666.
Kuno, S. and J. Whitman (2004). Licensing of Multiple Negative Polarity Items. In Studies in Korean Syntax and Semantics. Seoul: Pagijong.
Kutas, Marta and Kara D. Federmeier (2011). Thirty Years and Counting: Finding Meaning in the N400 Component of the Event-Related Brain Potential (ERP). Annual Review of Psychology 62, 14.1-27.
Lee, Chungmin (1993). Frozen expressions and semantic representation. Language Research 29, 301-326.
Lee, Chungmin (2006). Contrastive Topic/Focus and Polarity in Discourse. In K. von Heusinger and K. Turner (eds.) Where Semantics Meets Pragmatics, CRiSPI 15, 381-420. Elsevier.
Lee, Young-Suk and Laurence Horn (1994). Any as indefinite plus even. MS, Yale University.
Levinson, S. (2000). Presumptive Meanings: The Theory of Generalized Conversational Implicature. MIT Press, Cambridge, MA.
Noh, Eun-Ju, Hyeree Choo, Sungryong Koh (2013). Processing metalinguistic negation: Evidence from eye-tracking experiments. Journal of Pragmatics 57, 1-18.
Noveck, Ira and Dan Sperber (2007).
The why and how of experimental pragmatics: The case of 'scalar inferences.' In Noel Burton-Roberts (ed.) Advances in Pragmatics. Palgrave.
Osterhout, Lee and Janet Nicol (1999). On the distinctiveness, independence, and time course of the brain responses to syntactic and semantic anomalies. Language and Cognitive Processes 14(3), 283-317.
Potts, Chris (2010). On the negativity of negation. In David Lutz and Nan Li (eds.) Proceedings of SALT 20.
Recanati, François (1993). Direct Reference: From Language to Thought. Blackwell.
Russell, B. (1905). On denoting. Mind 14.
Sakai, H. (2013). Computation for Syntactic Dependency at the Language-Culture Interface: A View from ERP Studies on Japanese Honorific Processing. Talk, Hiroshima University / Konkuk University.
Sperber, D. and D. Wilson (2004). Relevance Theory. In G. Ward and L. Horn (eds.) Handbook of Pragmatics, 607-632. Oxford: Blackwell.
Staab, Jenny, Thomas P. Urbach, and Marta Kutas (2008). Negation Processing in Context Is Not (Always) Delayed. In Jamie Alexandre (ed.), Center for Research in Language, UCSD.
Whitman, John and Susumu Kuno (2004). Licensing of Multiple Negative Polarity Items. In Susumu Kuno (ed.) Studies in Korean Syntax and Semantics, 207-228. Seoul: Pagijong.
Wible, David and Eva Chen (2000). Linguistic Limits on Metalinguistic Negation: Evidence from Mandarin and English. Language and Linguistics.

Acknowledgments
I thank Sung-Eun Lee and Sungryong Koh for their technical contributions to the ERP experiments, and Yoonjung Kang and Jeff Holliday for their contributions to the phonetic experiments. I am also grateful to Larry Horn and Michael Israel for their comments on one of the earliest versions and on the CIL19 presentation. This work was supported by the National Research Foundation under (Excellent Scholar) Grant No. 100-20090049 through the Korean Government.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "qPwsT0CDnA", "year": null, "venue": "EAPCogSci 2015", "pdf_link": "https://ceur-ws.org/Vol-1419/paper0026.pdf", "forum_link": "https://openreview.net/forum?id=qPwsT0CDnA", "arxiv_id": null, "doi": null }
{ "title": "On Mental Imagery in Lexical Processing: Computational Modeling of the Visual Load Associated to Concepts", "authors": [ "Daniele Paolo Radicioni", "Francesca Garbarini", "Fabrizio Calzavarini", "Monica Biggio", "Antonio Lieto", "Katiuscia Sacco", "Diego Marconi" ], "abstract": null, "keywords": [], "raw_extracted_content": "On Mental Imagery in Lexical Processing: Computational Modeling of the Visual Load Associated to Concepts
Daniele P. Radicioni (1), Francesca Garbarini (3), Fabrizio Calzavarini (2), Monica Biggio (3), Antonio Lieto (1), Katiuscia Sacco (3), Diego Marconi (2)
([email protected])
(1) Department of Computer Science, Turin University, Turin, Italy
(2) Department of Philosophy, Turin University, Turin, Italy
(3) Department of Psychology, Turin University, Turin, Italy

Abstract
This paper investigates the notion of visual load, an estimate of a lexical item's efficacy in activating mental images associated with the concept it refers to. We elaborate on the centrality of this notion, which is deeply and variously connected to lexical processing. A computational model of the visual load is introduced that builds on few low-level features and on the dependency structure of sentences. The system implementing the proposed model has been experimentally assessed and shown to reasonably approximate human response.
Keywords: Visual imagery; Computational modeling; Natural Language Processing.

Introduction
Ordinary experience suggests that lexical competence, i.e. the ability to use words, includes both the ability to relate words to the external world as accessed through perception (referential tasks) and the ability to relate words to other words in inferential tasks of several kinds (Marconi, 1997). There is evidence from both traditional neuropsychology and more recent neuroimaging research that the two aspects of lexical competence may be implemented by partly different brain processes. However, some very recent experiments appear to show that typically visual areas are also engaged by purely inferential tasks, not involving visual perception of objects or pictures (Marconi et al., 2013). The present work can be considered a preliminary investigation aimed at verifying this main hypothesis, by addressing the following issues: i) to what extent the visual load associated with concepts can be assessed, and what sort of agreement exists among humans about the visual load associated with concepts; ii) which features underlie the visual load associated with concepts; and iii) whether the notion of visual load can be grasped and encapsulated into a computational model.
As is widely acknowledged, one main visual correlate of language is imageability, that is, the property of a particular word or sentence to produce an experience of imagery: in the following, we focus on visual imagery (thus disregarding acoustic, olfactory and tactile imagery), which we denote as visual load. The visual load is related to the ease of producing visual imagery when an external linguistic stimulus is processed. Intuitively, words like 'dog' or 'apple' refer to concrete entities and are associated with a high visual load, implying that these terms immediately generate a mental image. Conversely, words like 'algebra' or 'idempotence' are hardly accompanied by the production of vivid images.
Although the construct of visual load is closely related to that of concreteness, concreteness and visual load can clearly dissociate, in that i) some words have been rated high in visual load but low in concreteness, just as some concrete nouns have been rated low in visual load (Paivio, Yuille, & Madigan, 1968); and, conversely, ii) abstract words such as 'bisection' are associated with a high visual load.
The notion of visual load is relevant to many disciplines, in that it contributes to shedding light on a wide variety of cognitive and linguistic tasks and helps explain a plethora of phenomena observed in both impaired and normal subjects. In the next Section we survey a multidisciplinary literature showing how mental imagery affects memory, learning and comprehension; we consider how imagery is characterized at the neural level; and we show how visual information is exploited in state-of-the-art Natural Language Processing research. In the subsequent Section we illustrate the proposed computational model for providing concepts with their visual load characterization. We then describe the experiments designed to assess the model through an implemented system, and report and discuss the obtained results. The Conclusion summarizes the work done and provides an outlook on future work.

Related Work
As regards linguistic competence, it is generally accepted that visual load facilitates cognitive performance (Bergen, Lindsay, Matlock, & Narayanan, 2007), leading to faster lexical decisions than for not-visually-loaded concepts (Cortese & Khanna, 2007). For example, nouns with high visual load ratings are remembered better than those with low visual load ratings in long-term memory tests (Paivio et al., 1968). Moreover, visually loaded terms are easier to recognize for subjects with deep dyslexia, and individuals respond
more quickly and accurately when making judgments about visually loaded sentences (Kiran & Tuchtenhagen, 2005). Neuropsychological research has shown that many aphasic patients perform better with linguistic items that more easily elicit visual imagery (Coltheart, 1980), although the opposite pattern has also been documented (Cipolotti & Warrington, 1995).
Visual imageability of concepts evoked by words and sentences is commonly known to affect brain activity. While visuosemantic processing regions, such as the left inferior temporal gyrus and fusiform gyrus, revealed greater involvement during the comprehension of highly imageable words and sentences (Bookheimer et al., 1998; Mellet, Tzourio, Denis, & Mazoyer, 1998), other semantic brain regions (i.e., superior and middle temporal cortex) are selectively activated by low-imageable sentences (Mellet et al., 1998; Just, Newman, Keller, McEleney, & Carpenter, 2004). Furthermore, a growing number of studies suggests that words encoding different visual properties (such as color, shape, motion, etc.) are processed in cortical areas that overlap with some of the areas activated during visual perception of those properties (Kemmerer, 2010).
Investigating the visual features associated with linguistic input can be useful for building semantic resources designed to deal with Natural Language Processing (NLP) problems, such as individuating verb subcategorization frames (Bergsma & Goebel, 2011), or enriching the traditional extraction of distributional semantics from text with a multimodal approach, integrating textual features with visual ones (Bruni, Tran, & Baroni, 2014). Finally, visual attributes are at the base of the development of annotated corpora and resources that can be used to extend text-based distributional semantics by grounding word meanings on visual features as well (Silberer, Ferrari, & Lapata, 2013).

Figure 1: The (simplified) dependency tree corresponding to the sentence 'The animal that eats bananas on a tree is the monkey.'

Model
Although much work has been invested in different areas for investigating imageability in general and visual imagery in particular, to the best of our knowledge no attempt has been made to formally characterize visual load, and no computational model has been devised to compute how visually loaded sentences and
We de\fne visual load as the\nconcept representing a direct indicator (a numeric value)\nof the e\u000ecacy for a lexical item to activate mental images\nassociated to the concept referred to by the lexical item.\nWe expect that visual load also represents an indirect\nmeasure of the probability of activation of brain areas\ndeputed to the visual processing.\nWe conjecture that the visual load is primarily as-\nsociated to concepts , although lexical phenomena like\nterms availability (implying that the most frequently\nused terms are easier to recognize than those seen less\noften (Tversky & Kahneman, 1973)) can also a\u000bect it.\nBased on the work by Kemmerer (2010) we explore the\nhypothesis that a limited number of primitive elements\ncan be used to characterize and evaluate the visual load\nassociated to concepts. Namely, Kemmerer's Simulation\nFramework allows to grasp information about a wide va-\nriety of concepts and properties used to denote objects,\nevents and spatial relations. Three main visual semantic\ncomponents have been individuated that, in our opin-\nion, are also suitable to be used as di\u000berent dimensions\nalong which to characterize the concept of visual load.\nThey are: color properties, shape properties, and mo-\ntion properties. The perception of these properties is\nexpected to occur in a immediate way, such that \\dur-\ning our ordinary observation of the world, these three\nattributes of objects are tightly bound together in uni-\n\fed conscious images\" (Kemmerer, 2010). We added a\nfurther perceptual component related to size. More pre-\ncisely, our assumption is that information about the size\nof a given concept can also contribute, as an adjoint fac-\ntor and not as a primitive one, to the computation of a\nvisual load value for the considered concept.\nIn this setting, we represent each concept/property as\naboolean -valued vector of four elements, each encoding\nthe following information: lemma , morphological infor-\nmation on POS (part of speech), and then whether the\nconsidered concept/property conveys information about\ncolor ,shape ,motion and size.1For example, this piece\nof information\ntable,Noun,1,1,0,1 (1)\ncan be used to indicate that the concept table (associated\nwith a Noun, and di\u000bering, e.g., from that associated\nwith a Verb) conveys information about color, shape and\nsize, but not about motion. 
In the following, these boolean values are referred to as the visual features φ_col, φ_sha, φ_mot, φ_siz associated with the given concept.
Footnote 1: We adopt here a simplification, since we are assuming that the pair <lemma, POS> is sufficient to identify a concept/property, and that in general we can access items by disregarding the word sense disambiguation problem, which is known as an open problem in the field of NLP.

Figure 2: The pipeline to compute the VL score according to the proposed computational model.
We have then built a dictionary by extracting it from a set of stimuli (illustrated hereafter) composed of simple sentences describing a concept; next, we have manually annotated the visual features associated with each concept. The automatic annotation of visual properties associated with concepts is deferred to future work: it can be addressed either through a classical Information Extraction approach building on statistics, or in a more semantically principled way.
Different weighting schemes w = {α, β, γ} have been tested in order to determine the features' contribution to the visual load associated with a concept c, which results from computing

VL(c, w) = Σ_i φ_i = α(φ_col + φ_sha) + β·φ_mot + γ·φ_siz    (2)

For the experimentation we set α to 1.35, β to 1.1 and γ to 0.9: these assignments reflect the fact that color and shape information is considered more important in the computation of VL.
To combine the contributions of the concepts in a sentence s into the overall VL score for s, we adopted the following additive schema: VL(s) = Σ_{c ∈ s} VL(c).
The computation of the VL score also accounts for the dependency structure of the input sentences. The syntactic structure of sentences is computed by the Turin University Parser (TUP) in the dependency format (Lesmo, 2007). Dependency formalisms represent syntactic relations by connecting a dominant word, the head (e.g., the verb 'fly' in the sentence The eagle flies), and a dominated word, the dependent (e.g., the noun 'eagle' in the same sentence). The connection between these two words is usually represented by using labeled directed edges (e.g., subject): the collection of all dependency relations of a sentence forms a tree, rooted in the main verb (see the parse tree illustrated in Figure 1).
The dependency structure is relevant in our approach, because we assume that a reinforcement effect may apply in cases where both a word and its dependent(s) (or governor(s)) are associated with visual features. For example, a phrase such as 'with black stripes' is expected to evoke mental images in a more vivid way than its elements taken in isolation (that is, 'black' and 'stripes'); moreover, its visual load is expected to further grow if we add a coordinated term, as in 'with yellow and black stripes'.
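A compact sketch of the scoring just defined (Eq. (2), the additive combination over a sentence, and, looking ahead, the reinforcement factor ξ of Eq. (3) introduced in the next paragraph) might look as follows. The feature dictionaries and the modifier/argument links below are illustrative assumptions, not the authors' implementation.

```python
ALPHA, BETA, GAMMA, XI = 1.35, 1.1, 0.9, 1.2   # weights reported in the paper

def vl_concept(f: dict) -> float:
    """Eq. (2): VL(c) = alpha*(phi_col + phi_sha) + beta*phi_mot + gamma*phi_siz."""
    return ALPHA * (f["col"] + f["sha"]) + BETA * f["mot"] + GAMMA * f["siz"]

def vl_sentence(concepts: dict, mod_arg_edges: set) -> float:
    """Additive VL score of a sentence. concepts maps each lemma to its visual
    features; mod_arg_edges lists the (head, dependent) lemma pairs connected by a
    modifier/argument dependency relation, to which the factor xi is applied."""
    total = 0.0
    for lemma, feats in concepts.items():
        score = vl_concept(feats)
        linked = any(lemma in edge for edge in mod_arg_edges)
        total += XI * score if linked else score
    return total

# 'with yellow and black stripes' (feature values are assumptions for the example)
concepts = {
    "stripe": {"col": 1, "sha": 1, "mot": 0, "siz": 0},
    "black":  {"col": 1, "sha": 0, "mot": 0, "siz": 0},
    "yellow": {"col": 1, "sha": 0, "mot": 0, "siz": 0},
}
print(vl_sentence(concepts, {("stripe", "black"), ("stripe", "yellow")}))
```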
Moreover, the VL would (recursively) grow if we added a governor term (as in 'fur with yellow and black stripes'). We then introduced a parameter ξ to control the contribution of the aforementioned features in case the corresponding terms are linked in the parse tree by a modifier/argument relation (denoted as mod and arg in Equation 3):

VL(c_i) = ξ·VL(c_i) if there exists c_j such that mod(c_i, c_j) or arg(c_i, c_j); VL(c_i) otherwise.    (3)

In the experimentation ξ was set to 1.2.
The stimuli in the dataset are pairs consisting of a definition d and a target T (st = <d, T>), such as 'The big carnivore with yellow and black stripes is the ... tiger', where 'The big carnivore with yellow and black stripes is the' is the definition d, 'tiger' is the target T, and the whole pair is the stimulus st. The visual load associated with the components of st, given the weighting scheme w, is then computed as follows:

VL(d, w) = Σ_{c ∈ d} VL(c)    (4)
VL(T, w) = VL(T)    (5)

The whole pipeline, from the parsing of the input to the computation of the VL for the considered stimulus, has been implemented as a computer program; its main steps include the parsing of the stimulus, the extraction of the (lexicalized) concepts by exploiting the output of the morphological analysis, and the tree traversal of the dependency structure resulting from the parsing step. The morphological analyzer was preliminarily fed with the whole set of stimuli, and its output was annotated with the visual features and stored into a dictionary. At run time, the dictionary is accessed based on morphological information, which is then used to retrieve the values of the features associated with the concepts in the stimulus. The output obtained by the proposed model has been compared with the results obtained in a behavioral experimentation, as described below.

Experimentation
Materials and Methods
Thirty healthy volunteers, native Italian speakers (16 females and 14 males), 19-52 years of age (mean ± sd = 25.7 ± 5.1), were recruited for the experiment. None of the subjects had a history of psychiatric or neurological disorders. All participants gave their written informed consent before participating in the experimental procedure, which was approved by the ethical committee of the University of Turin, in accordance with the Declaration of Helsinki (World Medical Association, 1991). Participants were all naïve to the experimental procedure and to the aims of the study.
Experimental design and procedure. Participants were asked to perform an inferential task, "Naming from definition". During the task a sentence was pronounced, and the subjects were instructed to listen to the stimulus given in the headphones and to overtly name, as accurately and as fast as possible, the target word corresponding to the definition, using a microphone connected to a response box. Auditory stimuli were presented through the E-Prime software, which was also used to record data on accuracy and reaction times. Furthermore, at the end of the experimental session, the subjects were administered a questionnaire: they had to rate on a 1-7 Likert scale the intensity of the visual load they perceived as related to each target and to each definition.
The factorial design of the study included two within-subjects factors, in which the visual load of both target and definition was manipulated.
The resulting four experimental conditions were as follows:
VV: Visual Target - Visual Definition (e.g., 'The bird of prey with great wings flying over the mountains is the ... eagle');
VNV: Visual Target - Non-Visual Definition (e.g., 'The hottest of the four elements of the ancients is ... fire');
NVV: Non-Visual Target - Visual Definition (e.g., 'The nose of Pinocchio stretched when he told a ... lie');
NVNV: Non-Visual Target - Non-Visual Definition (e.g., 'The quality of people that easily solve difficult problems is said ... intelligence').
For each condition there were 48 sentences, 192 sentences overall. Each trial lasted about 30 minutes. The number of words (nouns and adjectives), their balancing across stimuli, and the (syntactic dependency) structure of the considered sentences were uniform within conditions, so that the most relevant variables were controlled. The same set of stimuli used for the human experiment was given in input to the system implementing the computational model.

Data analysis
The participants' performance in the "Naming from definition" task was evaluated by recording, for each response, the reaction time RT, in milliseconds, and the accuracy AC, computed as the percentage of correct answers. Answers were considered correct if the target word was plausibly matched with the definition. Then, for each subject, both RT and AC were combined in the Inverse Efficiency Score (IES), using the formula IES = (RT / AC) · 100. IES is a metric commonly used to aggregate and summarize reaction time and accuracy (Townsend & Ashby, 1978). The mean IES value was used as the dependent variable and entered in a 2x2 repeated measures ANOVA with 'target' (two levels: 'visual' and 'non-visual') and 'definition' (two levels: 'visual' and 'non-visual') as within-subjects factors. Post hoc comparisons were performed using the Duncan test.
The scores obtained by the participants in the visual load questionnaire were analyzed using unpaired t-tests, two tailed. Two comparisons were performed: visual vs. non-visual targets, and visual vs. non-visual definitions. The computational model results were analyzed using unpaired t-tests, two tailed; again, two comparisons were performed, for visual vs. non-visual targets and for visual vs. non-visual definitions.
Correlations between IES, computational model and visual load questionnaire. We also explored the existence of correlations between IES, the visual load questionnaire and the computational model output by using linear regressions. For both the IES values and the questionnaire scores, we computed for each item the mean of the 30 subjects' responses. In a first model, we used the visual load questionnaire scores as the independent variable to predict the participants' performance (with IES as the dependent variable); in a second model, we used the computational data as the independent variable to predict the participants' visual load evaluation (with the questionnaire scores as the dependent variable).
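The quantities entering the analyses above (the IES aggregation and the two regression models) can be sketched as follows; the arrays are random placeholders standing in for the actual per-subject and per-item means.

```python
import numpy as np
from scipy import stats

def inverse_efficiency_score(rt_ms, accuracy_pct):
    """IES = (RT / AC) * 100, with RT in milliseconds and AC as percentage correct."""
    return rt_ms / accuracy_pct * 100.0

rng = np.random.default_rng(0)

# Per-subject means for one condition (placeholder values for 30 subjects).
rt_vv = rng.normal(900.0, 120.0, size=30)            # reaction times (ms)
acc_vv = rng.uniform(80.0, 100.0, size=30)           # accuracy (%)
ies_vv = inverse_efficiency_score(rt_vv, acc_vv)     # values entering the 2x2 ANOVA

# Per-item means for the two regression models (placeholders for 192 items).
questionnaire = rng.uniform(1.0, 7.0, size=192)                       # 1-7 ratings
ies_items = 1500.0 - 120.0 * questionnaire + rng.normal(0, 80, 192)   # mean IES per item
model_vl = rng.uniform(0.0, 8.0, size=192)                            # VL model output

m1 = stats.linregress(questionnaire, ies_items)   # questionnaire -> performance (IES)
m2 = stats.linregress(model_vl, questionnaire)    # model output  -> questionnaire
print(round(ies_vv.mean(), 1), round(m1.rvalue**2, 2), round(m2.rvalue**2, 2))
```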
Results

The ANOVA showed a significant effect of the within-subjects factor "target" (F(1,29) = 14.4, p < 0.001), suggesting that the IES values were significantly lower for visual than for non-visual targets, and of the factor "definition" (F(1,29) = 32.78, p < 0.001), suggesting that the IES values were significantly lower for visual than for non-visual definitions. This means that, for both the target and the definition, the participants' performance was significantly faster and more accurate in the visual than in the non-visual condition. We also found a significant "target × definition" interaction (F(1,29) = 7.54, p = 0.01). Based on the Duncan post hoc comparison, we verified that this interaction was explained by the effect of the visual definitions of the visual targets (VV condition), in which the participants' performance was significantly faster and more accurate than in all the other conditions (VNV, NVV, NVNV), as shown in Figure 3.

Figure 3: For each condition, the mean IES with standard error.

By comparing the questionnaire scores for visual (mean ± sd = 5.69 ± 0.55) and non-visual (mean ± sd = 4.73 ± 0.71) definitions we found a significant difference (p < 0.001; unpaired two-tailed t-test). By comparing the questionnaire scores for visual (mean ± sd = 6.32 ± 0.4) and non-visual (mean ± sd = 4.23 ± 0.9) targets we found a significant difference (p < 0.001). This suggests that our arbitrary categorization of the sentences into the four conditions was supported by the general agreement of the subjects. By comparing the computational model scores for visual (mean ± sd = 4.0 ± 2.4) and non-visual (mean ± sd = 2.9 ± 2.0) definitions we found a significant difference (p < 0.001; unpaired two-tailed t-test). By comparing the computational model scores for visual (mean ± sd = 2.53 ± 1.29) and non-visual (mean ± sd = 0.26 ± 0.64) targets we found a significant difference (p < 0.001). This suggests that we were able to computationally model the visual load of both targets and definitions, describing it as a linear combination of different low-level features: color, shape, motion, and dimension.

Results of the correlations. By using the visual load questionnaire scores as the independent variable we were able to significantly (R² = 0.4, p < 0.001) predict the participants' performance (that is, their IES values), as illustrated in Figure 4. This means that the higher the participants' visual score for a definition, the better the participants' performance in giving the correct response (or, alternatively, the lower the IES value).

Figure 4: Linear regression "Inverse Efficiency Score (IES) by Visual Load Questionnaire". The mean score in the Visual Load Questionnaire, reported on a 1–7 Likert scale, was used as an independent variable to predict the subjects' performance, as quantified by the IES.

By using the computational data as the independent variable we were able to significantly (R² = 0.44, p < 0.001) predict the participants' visual load evaluation (their questionnaire scores), as shown in Figure 5. This means that a correlation exists between the computational prediction of the visual load of the definitions and the participants' visual load evaluation: the higher the computational model result, the higher the participants' visual score in the questionnaire. We also found that these effects were still significant in the regression models where the number of words, their balancing across stimuli, and the syntactic dependency structure were controlled for.

Figure 5: Linear regression "Visual Load Questionnaire by Computational Model". The mean value obtained by the computational model was used as an independent variable to predict the subjects' scores on the Visual Load Questionnaire, reported on a 1–7 Likert scale.
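A rough sketch of how the item-level regressions, including the covariate-controlled check mentioned above, could be implemented is given below. The per-item table and its column names (ies, questionnaire, model_vl, n_words, balancing, structure) are hypothetical; this is an illustration rather than the authors' analysis script.

```python
# Illustrative item-level regressions; the column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf


def simple_regressions(items: pd.DataFrame):
    """The two simple models: questionnaire -> IES and model VL -> questionnaire."""
    m1 = smf.ols("ies ~ questionnaire", data=items).fit()
    m2 = smf.ols("questionnaire ~ model_vl", data=items).fit()
    return m1, m2


def covariate_controlled(items: pd.DataFrame):
    """Same predictor with the three covariates added: number of words,
    balancing across stimuli, and syntactic dependency structure
    (treated here as a categorical factor)."""
    return smf.ols(
        "questionnaire ~ model_vl + n_words + balancing + C(structure)",
        data=items,
    ).fit()


# Example of inspecting a fit (R-squared and coefficient p-values):
# m1, m2 = simple_regressions(items)
# print(m1.rsquared, m1.pvalues)
```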
Conclusions

In the near future we plan to extend the representation of conceptual information by grounding it in a hybrid representation composed of conceptual spaces and ontologies (Lieto, Minieri, Piana, & Radicioni, 2015; Lieto, Radicioni, & Rho, 2015). Additionally, we plan to integrate the current model into cognitive architectures.

Acknowledgments

This work has been partly supported by the project The Role of the Visual Imagery in Lexical Processing, grant TO-call03-2012-0046, funded by Università degli Studi di Torino and Compagnia di San Paolo.

References

Bergen, B. K., Lindsay, S., Matlock, T., & Narayanan, S. (2007). Spatial and linguistic aspects of visual imagery in sentence comprehension. Cognitive Science, 31(5), 733–764.

Bergsma, S., & Goebel, R. (2011). Using visual information to predict lexical preference. In Proceedings of RANLP (pp. 399–405).

Bookheimer, S., Zeffiro, T., Blaxton, T., Gaillard, W., Malow, B., & Theodore, W. (1998). Regional cerebral blood flow during auditory responsive naming: Evidence for cross-modality neural activation. Neuroreport, 9(10), 2409–2413.

Bruni, E., Tran, N.-K., & Baroni, M. (2014). Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49, 1–47.

Cipolotti, L., & Warrington, E. K. (1995). Semantic memory and reading abilities: A case report. Journal of the International Neuropsychological Society, 1(01), 104–110.

Coltheart, M. (1980). Deep dyslexia: A right hemisphere hypothesis. In Deep Dyslexia (pp. 326–380).

Cortese, M. J., & Khanna, M. M. (2007). Age of acquisition predicts naming and lexical-decision performance above and beyond 22 other predictor variables: An analysis of 2,342 words. Quarterly Journal of Experimental Psychology, 60(8), 1072–1082.

Just, M. A., Newman, S. D., Keller, T. A., McEleney, A., & Carpenter, P. A. (2004). Imagery in sentence comprehension: An fMRI study. NeuroImage, 21(1), 112–124.

Kemmerer, D. (2010). How words capture visual experience: The perspective from cognitive neuroscience. In B. Malt & P. Wolff (Eds.), Words and the Mind: How Words Capture Human Experience. Oxford Scholarship Online.

Kiran, S., & Tuchtenhagen, J. (2005). Imageability effects in normal Spanish–English bilingual adults and in aphasia: Evidence from naming to definition and semantic priming tasks. Aphasiology, 19(3-5), 315–327.

Lesmo, L. (2007, June). The rule-based parser of the NLP group of the University of Torino. Intelligenza Artificiale, 2(4), 46–47.

Lieto, A., Minieri, A., Piana, A., & Radicioni, D. P. (2015). A knowledge-based system for prototypical reasoning. Connection Science, 27(2), 137–152.
Lieto, A., Radicioni, D. P., & Rho, V. (2015, July). A common-sense conceptual categorization system integrating heterogeneous proxytypes and the dual process of reasoning. In Proceedings of IJCAI 2015. Buenos Aires, Argentina: AAAI Press.

Marconi, D. (1997). Lexical competence. MIT Press.

Marconi, D., Manenti, R., Catricala, E., Della Rosa, P. A., Siri, S., & Cappa, S. F. (2013). The neural substrates of inferential and referential semantic processing. Cortex, 49(8), 2055–2066.

Mellet, E., Tzourio, N., Denis, M., & Mazoyer, B. (1998). Cortical anatomy of mental imagery of concrete nouns based on their dictionary definition. Neuroreport, 9(5), 803–808.

Paivio, A., Yuille, J. C., & Madigan, S. A. (1968). Concreteness, imagery, and meaningfulness values for 925 nouns. Journal of Experimental Psychology, 76, 1.

Silberer, C., Ferrari, V., & Lapata, M. (2013). Models of semantic representation with visual attributes. In Proceedings of ACL 2013 (pp. 572–582).

Townsend, J. T., & Ashby, F. G. (1978). Methods of modeling capacity in simple processing systems. Cognitive Theory, 3, 200–239.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.

World Medical Association. (1991). Code of Ethics: Declaration of Helsinki. BMJ, 302, 1194.
", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]